Why Government Censorship of AI Speech Threatens Free Expression and Democracy
AI speech is protected expression, yet government officials are pressuring AI developers to alter outputs, a form of censorship by threat. Protecting AI speech under the First Amendment is crucial.

Why We Shouldn’t Let the Government Hit Mute on AI Speech
AI speech is speech. The government should not rewrite or control it. Yet, across the country, officials are pressuring AI developers to align outputs with their political views. This threat is real, not hypothetical.
In July, Missouri’s former Attorney General Andrew Bailey sent OpenAI a letter threatening investigation. He accused ChatGPT of partisan bias after it ranked former President Donald Trump lowest among recent presidents on anti-Semitism. Bailey called the ranking “objectively” wrong, citing Trump’s embassy move to Jerusalem, the Abraham Accords, and his Jewish family ties as proof that ChatGPT’s answer ignored “objective facts.”
No lawsuit followed, but the threat alone likely pressured OpenAI to reconsider its outputs. This incident previews a future where government pressure on AI speech could become widespread—especially if courts decide AI speech lacks First Amendment protection. Lawsuits like Garcia v. Character Technologies, Inc. are already challenging whether AI outputs count as protected speech or something else.
If courts rule AI speech isn’t protected, government officials could mandate AI outputs rather than just applying pressure. That would open the door to direct censorship.
Why the First Amendment Must Protect AI Speech
The First Amendment protects expression regardless of medium, and AI is simply another tool for communication. The engineers building AI systems and the users interacting with them are engaged in expressive activity, much like writers or journalists.
When officials pressure AI developers to change or delete outputs, they are censoring speech. Using consumer protection laws to investigate politically sensitive AI responses, as Bailey did, twists these laws into censorship tools rather than protecting consumers from fraud or faulty products.
Bailey’s letter warned all AI developers that a single politically sensitive answer could trigger government scrutiny. Ironically, Bailey once argued in Murthy v. Missouri that the federal government’s efforts to coerce social media platforms into content moderation violated the First Amendment. Now, similar tactics threaten to silence AI speech.
Voters Want AI Political Speech Protected — Lawmakers Should Listen
Polling shows that while people are wary of AI, they are even more wary of government censorship of it. Lawmakers face a critical choice: protect free speech, or risk silencing political expression under the guise of regulation.
Government pressure is already reshaping AI. OpenAI’s policy warns users that ChatGPT conversations may be scanned, reviewed, and possibly reported to law enforcement. This creates a chilling effect where users must choose between privacy and access.
Without strong First Amendment safeguards, government censorship and surveillance will narrow the open inquiry that AI should support.
A Path Forward: Transparency and Constitutional Protections
The solution is clear: courts and government officials must apply the First Amendment fully to AI speech. Increased transparency is also essential to hold officials accountable when they attempt to influence AI content.
Proposed measures like the Social Media Administrative Reporting Transparency (“SMART”) Act would require federal officials to disclose communications with AI services about content moderation. This transparency helps ensure that government pressure happens in the open, not behind closed doors.
State-level reforms could follow to prevent covert government coercion of AI developers.
Free expression should not depend on which party holds power. AI engineers shouldn’t have to reshape their models every time political winds shift. Protecting AI speech under the First Amendment is the foundation for a marketplace of ideas that includes this new technology.