Trump Signs Executive Order Targeting Bias in AI for Federal Contracts
President Donald Trump has signed an executive order to ensure that federal contracts favor companies whose AI models are free from ideological bias. This move is part of a broader “AI Action Plan” aimed at addressing concerns about what the administration calls “woke AI” — AI tools like chatbots and image generators perceived to have a liberal bias.
The order prohibits federal agencies from purchasing AI models that promote diversity, equity, and inclusion (DEI) initiatives. Trump emphasized that the government will now only engage with AI that upholds “truth, fairness, and strict impartiality.”
What Does ‘Woke AI’ Mean?
The term “woke AI” has no clear definition and remains controversial. Experts point out that eliminating ideological bias in AI is highly challenging. Rumman Chowdhury, a former head of machine learning ethics at Twitter, notes that while “free of ideological bias” sounds good, it is practically impossible to define or achieve.
The concern over liberal bias in AI grew in 2023 when social media highlighted instances where ChatGPT supported affirmative action and transgender rights or declined to write content praising Trump. Similar issues arose with Google’s Gemini, which generated ethnically diverse images in historically inaccurate contexts, such as depicting Vikings or Nazi-era soldiers as people of diverse ethnic backgrounds. Google later apologized and adjusted the model.
Research confirms that AI models may lean politically in various directions. For example, OpenAI’s GPT-4 often reflects views similar to those of the average Democrat on political questionnaires, but on some topics, it can produce responses aligning with Republican views. Additionally, AI image generators have been found to reinforce ethnic, religious, and gender stereotypes.
The Challenge of Bias in AI
Bias in AI is a byproduct of how these models operate. They learn from extensive data collected across the internet, which contains a mix of perspectives, some of them biased or extreme. This can result in inconsistent outputs depending on the prompt.
Efforts to reduce bias include curating or limiting training data, using human raters to assess neutrality, or adding explicit instructions to the model's system prompt. However, these strategies often compromise the usefulness or accuracy of AI responses.
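As a rough illustration of the last of those strategies, a developer can prepend an explicit instruction to every conversation so it steers the model's responses. The sketch below is hypothetical: the message format mirrors the system/user role convention used by common chat-completion APIs, and the instruction text is illustrative, not drawn from any real deployed model.

```python
# Hypothetical sketch: encoding an explicit neutrality instruction as a
# system prompt, one of the bias-mitigation strategies described above.

NEUTRALITY_INSTRUCTION = (
    "When asked about contested political topics, present the major "
    "viewpoints evenly and avoid endorsing any side."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the neutrality instruction so it applies to every request."""
    return [
        {"role": "system", "content": NEUTRALITY_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize the debate over affirmative action.")
```

The trade-off the article describes shows up directly here: the broader the instruction, the more it can dilute or hedge answers even to uncontroversial questions.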
Instances like Google Gemini’s unexpected outputs or Elon Musk’s xAI Grok chatbot, which prioritized “truth-seeking” but ended up generating offensive content, demonstrate how difficult it is to maintain political neutrality in AI. Chowdhury sums it up: political neutrality for AI “is simply not a thing.”
Moreover, perceptions of bias vary by user location and cultural context. An answer that seems liberal in Texas might feel conservative in New York or radical in countries with strict gun laws like Malaysia or France.
Implications of the Executive Order
How the administration will determine which AI tools qualify as neutral is a pressing question. The executive order explicitly excludes concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism from being incorporated into AI models.
Samir Jain of the Center for Democracy and Technology warns the order is not neutral itself, as it restricts some left-leaning viewpoints but not right-leaning ones. He also highlights potential First Amendment issues if the government tries to impose specific speech standards on private companies’ AI products.
While the government can set standards for products it purchases, these standards must relate directly to the intended use. The risk lies in turning the order into a tool for ideological censorship rather than a policy for fair AI procurement.
Some advocates see the order as less restrictive than feared. Neil Chilson from the Abundance Institute believes it is straightforward to comply with and not overly prescriptive. Mackenzie Arnold of the Institute for Law and AI appreciates that the order acknowledges the technical difficulties of creating neutral AI and allows companies to disclose their AI models’ instructions as a way to comply.
The key will be enforcement. If the administration focuses on clear disclosures rather than ideological pressure, the order may function effectively without creating harmful precedents.
For Federal Employees and Contractors
This executive order signals a shift in how federal agencies will evaluate AI tools moving forward. Companies seeking government contracts will need to demonstrate their models do not promote certain ideological views, particularly related to DEI topics. Understanding the nuances of AI bias and compliance will be essential.
For those involved in government procurement or technology policy, staying informed about AI ethics, bias mitigation strategies, and regulatory developments is crucial.