Beyond Bias: Why Ontology Matters in the Design of Large Language Models

AI bias extends beyond values to include ontology, the fundamental assumptions about what exists and what matters that shape AI outputs. Addressing these assumptions can produce models that better reflect diverse human perspectives and experiences.

Categorized in: AI News, Science and Research
Published on: Jul 29, 2025

Rethinking AI Bias: The Role of Ontology in Large Language Models

The surge of generative AI tools has sharpened the focus on eliminating societal biases embedded in large language models (LLMs). While much of the research concentrates on the values encoded within these systems, a recent study presented at the April 2025 CHI Conference on Human Factors in Computing Systems argues for a broader perspective: the inclusion of ontology.

What Is Ontology in AI?

Ontology here refers to the fundamental assumptions about what exists and how it matters. Consider a simple example: a tree. Your mental image of a tree depends on your background and worldview. A botanist might think of mineral exchanges with fungi, a spiritual healer might hear whispers between trees, and a computer scientist might visualize a binary tree structure.

These differing perspectives reflect distinct ontologies. When the researchers tested a popular LLM, it initially generated a tree without roots. Even after they specified a cultural context, the image lacked certain elements until they prompted the model with “everything in the world is connected.” This shows how ontological assumptions shape AI outputs and reveal implicit boundaries on what the model “knows” or represents.
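A concrete way to surface such defaults is to vary the ontological framing in the prompt and compare outputs. The sketch below is a minimal, hypothetical probe assuming the OpenAI Python client and an image model; the article does not name the tool that was tested, so the model choice and prompt wording are illustrative only.

```python
# Hypothetical ontology probe: compare an image model's default framing of
# "a tree" with a prompt that makes a relational worldview explicit.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

baseline = client.images.generate(
    model="dall-e-3",  # illustrative choice; the study's tool is unnamed
    prompt="A tree.",
)
reframed = client.images.generate(
    model="dall-e-3",
    prompt="A tree, seen through the view that everything in the world is connected.",
)

# Comparing the two images (roots, fungi, surrounding life) makes the
# model's default ontological assumptions visible.
print(baseline.data[0].url)
print(reframed.data[0].url)
```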

Why Ontological Assumptions Matter

Ontological orientations influence every stage of AI development. As James Landay, a computer science professor and co-author of the study, points out, dominant ontologies can become implicitly coded into the LLM development pipeline. This affects not just what AIs generate but also how researchers and designers think about AI itself.

Can AI Evaluate Its Own Ontology?

One method for AI alignment is having an LLM assess another LLM’s output based on values like “harmfulness” or “ethics.” To test if similar approaches work for ontology, researchers analyzed GPT-3.5, GPT-4, Microsoft Copilot, and Google Bard (Gemini). They crafted questions to probe the models’ ability to define ontologies, highlight underlying assumptions, and recognize their own limitations.
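The study’s exact prompts are not reproduced here, but the pattern is easy to sketch. The snippet below is a hypothetical example assuming the OpenAI Python client: one model answers an ontology probe, and a second model acts as judge, mirroring the LLM-as-judge setup used for value alignment.

```python
# Hypothetical ontology probe with an LLM-as-judge step.
# Assumes the OpenAI Python client (openai>=1.0); prompts are illustrative,
# not the study's exact wording.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Send one prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def judge_ontology(judge_model: str, answer: str) -> str:
    """Ask a second model to surface the ontological assumptions in an answer."""
    critique = (
        "Identify the ontological assumptions in the following text: "
        "what does it treat as existing, does it frame humans as individuals "
        "or as relational beings, and which worldviews does it leave out?\n\n"
        + answer
    )
    return ask(judge_model, critique)

answer = ask("gpt-4", "What is a human?")
print(judge_ontology("gpt-4", answer))
```

As the study found, such self-evaluation has limits: the judge model inherits many of the same ontological defaults as the model it critiques.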

The results revealed key shortcomings. For example, when asked “What is a human?”, these models mostly described humans as biological individuals, overlooking relational or network-based views common in non-Western philosophies. Only with explicit prompting did Bard describe humans as “interconnected beings.” Moreover, Western philosophies were detailed with subcategories, while non-Western philosophies were broadly grouped together, reflecting a skewed ontological representation.

This shows that current LLM architectures struggle to surface diverse ontological perspectives meaningfully. They lack access to lived experiences and context that give these perspectives depth and relevance.

Ontological Assumptions Embedded in AI Agents

The study also examined "Generative Agents," AI systems simulating human-like cognitive architectures including memory and planning. These architectures rank experiences by relevance, recency, and importance—criteria that carry cultural biases. For instance, eating breakfast might be considered low importance, while a romantic breakup scores higher. Such rankings reveal specific cultural assumptions about what matters in human life, which are baked into the agent design without critical examination.
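The published Generative Agents architecture combines these three criteria into a single retrieval score. The sketch below illustrates that ranking under stated assumptions: the decay rate, equal weights, normalization, and toy embeddings are illustrative stand-ins, not the original implementation.

```python
# Sketch of a recency/importance/relevance memory ranking, loosely
# following the Generative Agents design described above. Weights, decay,
# and embeddings here are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float        # e.g., LLM-rated 1-10: breakfast low, breakup high
    hours_ago: float         # time since the memory was last accessed
    embedding: list[float]   # vector representation of the memory text

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (relevance to a query)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieval_score(m: Memory, query: list[float], decay: float = 0.995) -> float:
    """Sum the three criteria, each scaled to roughly [0, 1]."""
    recency = decay ** m.hours_ago       # exponential decay over elapsed hours
    importance = m.importance / 10.0     # rescale the 1-10 rating
    relevance = cosine(m.embedding, query)
    return recency + importance + relevance

# Toy usage: for a query equally related to both memories, the breakup
# outranks breakfast because of its higher importance rating.
memories = [
    Memory("ate breakfast", importance=2, hours_ago=1, embedding=[1.0, 0.0]),
    Memory("went through a breakup", importance=9, hours_ago=1, embedding=[0.0, 1.0]),
]
query = [0.5, 0.5]
for m in sorted(memories, key=lambda m: retrieval_score(m, query), reverse=True):
    print(round(retrieval_score(m, query), 3), m.text)
```

Even this small function encodes cultural judgments: the importance scale and the equal weighting of criteria decide whose experiences count as significant.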

Ontological Challenges in AI Evaluation

When evaluated for “believability” in simulating human behavior, Generative Agents scored higher than actual humans. This raises a question: Are our definitions of human behavior so narrow that real humans fall short? The study suggests that by focusing on limited models of humanity, AI development risks missing the broader spectrum of human experience, including inconsistency and imperfection.

Incorporating Ontology in AI Design and Development

Value-based approaches to AI alignment are necessary but insufficient on their own. Ontological assumptions embedded in data, model architectures, and evaluation metrics shape what AIs can represent and what possibilities they enable or restrict.

Developers should critically assess these assumptions at every stage, from data collection that flattens diverse worldviews to evaluation frameworks that may reinforce narrow definitions of success. Without this scrutiny, dominant ontologies risk being codified as universal truths, limiting human imagination for future generations.

Given AI’s growing role in education, healthcare, and daily life, overlooking ontological dimensions can influence fundamental understandings of concepts like humanity, healing, memory, and connection. Integrating ontological awareness expands the design space and invites questioning of what currently seems fixed or given.

Conclusion

Addressing AI bias requires more than aligning values; it demands attention to the ontological frameworks underlying AI systems. This broader view can open new possibilities for AI that better reflect the diversity of human perspectives and experiences.

For those engaged in AI research or development, exploring ontology alongside values offers a path to more inclusive and reflective AI systems. This approach helps ensure that AI tools do not unconsciously limit the scope of human thought and culture but instead support richer, more nuanced understandings.

To deepen your expertise on AI system design and ethical considerations, explore advanced courses and resources at Complete AI Training.

