AI models rely on autism stereotypes when giving social advice, Virginia Tech study finds

AI models give more restrictive social advice to users who disclose autism, a Virginia Tech study found. One model recommended declining social invitations nearly 75% of the time when autism was disclosed, versus 15% when it was not.

Published on: Apr 17, 2026

Autistic users who disclose their diagnosis to AI systems receive advice that tracks closely with common stereotypes about autism, according to research presented at a major computing conference this month. The finding raises questions about whether commercial AI models are personalizing responses based on identity or amplifying bias.

Researchers at Virginia Tech tested six major large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, by generating 345,000 responses to social scenarios. When users disclosed autism, one model recommended declining social invitations nearly 75 percent of the time, compared with 15 percent when autism was not mentioned. In dating scenarios, another model suggested avoiding romance nearly 70 percent of the time after autism disclosure, versus roughly 50 percent without it.

The work was led by computer science doctoral student Caleb Wohn and presented in April at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI).

How the research was conducted

The team identified 12 well-documented stereotypes associated with autism, including assumptions about introversion, social awkwardness, and disinterest in romance, and created hundreds of decision-making scenarios around them. Researchers tested how advice shifted when users explicitly described themselves with stereotypical traits versus simply disclosing an autism diagnosis.

Eleven of the 12 stereotype cues significantly shifted model decisions across at least four of the six AI systems tested.
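
To make the protocol concrete, here is a minimal sketch of that kind of paired-prompt audit: the same scenario is sent to a model with and without a disclosure prefix, and the rate of "decline" advice is compared across the two conditions. The query_model stub, scenario wording, and keyword matching below are illustrative assumptions, not the study's actual prompts or analysis code.

```python
import random

# Hypothetical stand-in for a real LLM client; in the actual study each of
# the six commercial models would be queried here instead.
def query_model(prompt: str) -> str:
    # Toy stub that mimics the disparity the study reports (not a real model):
    # it advises "decline" far more often when autism is mentioned.
    p_decline = 0.75 if "autistic" in prompt else 0.15
    return "decline" if random.random() < p_decline else "accept"

# Illustrative scenario text, not taken from the study's materials.
SCENARIO = ("A coworker invited me to a party this weekend. "
            "Should I accept or decline? Answer with one word.")

def decline_rate(disclose: bool, n: int = 1000) -> float:
    """Fraction of n runs in which the model advises declining."""
    prefix = "I am autistic. " if disclose else ""
    answers = [query_model(prefix + SCENARIO) for _ in range(n)]
    return sum(a == "decline" for a in answers) / n

if __name__ == "__main__":
    random.seed(0)
    print(f"decline rate with disclosure:    {decline_rate(True):.2f}")
    print(f"decline rate without disclosure: {decline_rate(False):.2f}")
```

Run at scale, the same loop over hundreds of scenarios, 12 stereotype cues, and six models yields the hundreds of thousands of responses the team analyzed.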

What users said

The researchers interviewed 11 autistic AI users and showed them examples of how models responded with and without autism disclosure. Reactions varied sharply.

Some participants were shocked by the results. One exclaimed: "Are we writing an advice column for Spock here?", a reference to the Star Trek character known for prioritizing logic over emotion. Others described the responses as restrictive, patronizing, or infantilizing.

But some participants said the more cautious, disclosure-based advice felt validating and supportive. This contradiction points to what researchers call a "safety-opportunity paradox": advice that feels protective to one user may feel limiting to another.

Eugenia Rho, assistant professor of computer science and a senior researcher on the project, said: "One user's bias could be another user's personalization."

The transparency problem

Wohn noted that AI systems mask their biases effectively. "AI is very good at seeming reliable," he said. "Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that's when it starts to get a lot more concerning."

He compared the issue to AI-generated images: they appear polished on the surface, but details fall apart under scrutiny. As models improve, they become better at concealing these flaws.

The research builds on earlier work from Rho's lab showing that autistic users frequently turn to AI tools for emotional support, help with interpersonal communication, and social advice. This makes the bias findings more consequential: users are relying on these systems for decisions that affect their lives.

What comes next

The team hopes the research will push developers to build more transparent AI systems that give users control over how personal information shapes responses. One study participant expressed this directly: "I want to have control over how my identity is used."

The research team also included computer science Ph.D. students Buse Carik and Xiaohan Ding, Associate Professor Sang Won Lee, and Young-Ho Kim, a research scientist at NAVER Corporation in South Korea.


