US State Department Deploys AI Chatbot for Foreign Service Selection Amid Bias Concerns

The US State Department is using an AI chatbot, StateChat, to help select members of its Foreign Service Selection Boards, with final decisions left to humans. Questions remain about how the system will ensure diverse representation and prevent bias.

Categorized in: AI News, Human Resources
Published on: Jun 11, 2025

US State Department Uses AI Chatbot to Staff Foreign Service Panels

The US State Department has introduced an AI chatbot named StateChat to assist in selecting members for its Foreign Service Selection Boards. Developed in partnership with Palantir and Microsoft, StateChat generates candidate lists based on skills and grades. However, final evaluations and decisions remain under human control.
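The article does not describe StateChat's internals, but a candidate list built from skills and grades can be illustrated with a simple rule-based filter. Everything below is a hypothetical sketch: the Officer fields, the FS grade codes, and the skill labels are illustrative assumptions, not details of the actual system.

```python
from dataclasses import dataclass

@dataclass
class Officer:
    name: str
    grade: str        # hypothetical grade code, e.g. "FS-01" (senior)
    skills: set[str]  # hypothetical HR-recorded skill labels

def shortlist(pool: list[Officer], required_skills: set[str],
              eligible_grades: set[str]) -> list[Officer]:
    """Keep officers whose grade is eligible and who hold every required skill."""
    return [o for o in pool
            if o.grade in eligible_grades and required_skills <= o.skills]

# Illustrative run: staff a board that needs consular and management experience.
pool = [
    Officer("A. Rivera", "FS-01", {"consular", "management"}),
    Officer("B. Chen",   "FS-02", {"political"}),
    Officer("C. Okafor", "FS-01", {"consular", "management", "economic"}),
]
print([o.name for o in shortlist(pool, {"consular", "management"}, {"FS-01"})])
# -> ['A. Rivera', 'C. Okafor']
```

Even a filter this simple makes the stakes concrete: whatever signal the model uses to shortlist, that signal determines who is ever seen by the human decision-makers.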

This process must comply with the Foreign Service Act of 1980, which mandates diverse representation on Selection Boards. How StateChat will address diversity and inclusion remains unclear. The chatbot has been used internally since early 2024, but this marks its first public role in human resources. The American Foreign Service Association has requested further details, while Palantir and Microsoft have not yet commented.

Federal Agencies Embrace AI in HR Functions

The State Department’s use of AI to assign Selection Board members reflects a broader government trend of automating administrative HR tasks. A 2019 study found that 130,000 federal jobs across 80 occupations could be affected by AI adoption, especially those involving administrative duties.

Federal agencies requested $1.9 billion in fiscal year 2024 for AI research and development, signaling growing investment. Similar initiatives include the Department of Homeland Security’s AI chatbot for its 19,000 employees and the General Services Administration’s internal AI tools to aid employee tasks.

These developments indicate a shift in workforce management, with AI handling tasks that previously required manual review of qualifications and credentials.

Bias and Representation Concerns in AI-Driven Personnel Selection

Using AI for “unbiased selection” presents technical and policy challenges, especially given the Foreign Service Act’s diversity requirements. Research shows AI systems can unintentionally perpetuate biases present in their training data. A well-known example is the Word2vec embedding model, which was shown to encode gender stereotypes, completing analogies such as “man is to computer programmer as woman is to homemaker.”
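This finding can be reproduced directly. The sketch below uses the open-source gensim library and the pretrained Google News Word2vec vectors, which are standard tools for this probe but are not mentioned in the article; exact outputs vary with the model and library version.

```python
# Probe for the stereotyped analogies reported in Word2vec embeddings
# (Bolukbasi et al., 2016). Requires `pip install gensim`; the pretrained
# vectors (~1.6 GB) are downloaded on first use.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # pretrained Word2vec vectors

# Analogy arithmetic: vec(job) - vec("man") + vec("woman") ≈ ?
# Associations absorbed from the news training corpus surface in the answers.
for job in ["doctor", "programmer", "boss"]:
    hits = model.most_similar(positive=["woman", job], negative=["man"], topn=3)
    print(job, "->", [word for word, _ in hits])
```

The point is not any one output but the mechanism: the model faithfully compresses whatever associations its training corpus contains, including biased ones.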

To combat this, the Department of Labor published an AI & Inclusive Hiring Framework to address bias in automated hiring systems. It stresses the importance of human oversight and regular evaluation of AI impacts on candidates.
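The framework does not prescribe a single test, but one common screen for the kind of regular evaluation it calls for is the EEOC four-fifths rule: compare each group's selection rate to the highest group's rate, and flag any ratio below 0.8 for review. A minimal sketch, with hypothetical group labels and counts:

```python
from collections import Counter

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group_label, was_selected) pairs -> selection rate per group."""
    applied, selected = Counter(), Counter()
    for group, chosen in records:
        applied[group] += 1
        selected[group] += chosen  # True counts as 1
    return {g: selected[g] / applied[g] for g in applied}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate over the highest rate; values below 0.8 warrant review."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit of one selection cycle: 100 applicants per group.
records = ([("group_a", True)] * 40 + [("group_a", False)] * 60
           + [("group_b", True)] * 25 + [("group_b", False)] * 75)
rates = selection_rates(records)
print(rates)                # {'group_a': 0.4, 'group_b': 0.25}
print(impact_ratios(rates)) # group_b: 0.625 < 0.8 -> flag for review
```

Run on every selection cycle, a check like this turns the framework's call for regular evaluation into a concrete, auditable number.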

The tension between algorithmic neutrality and legal diversity requirements underscores a key challenge: efficiency gains cannot come at the expense of compliance. Technical fixes alone will not resolve it, so human involvement remains crucial in AI-assisted hiring.