Bowdoin Students Tackle AI's Promise and Peril through the Liberal Arts

Student researchers probe AI's promise and pitfalls, grounding projects in ethics and liberal arts. From tutoring to climate and bias audits, the focus stays on people.

Categorized in: AI News, Science and Research
Published on: Feb 21, 2026

Students Research AI's Promise and Peril

AI is moving fast. These student researchers are responding to the Hastings Initiative's call to critically examine it, put it to work wisely, and guide its direction with ethics and care.

Across computer science, data, education, humanities, and the environmental sciences, they're applying liberal arts thinking to hard problems. The throughline: keep human intelligence at the center, and use AI to extend what people can do.

AI Research, Grounded in the Liberal Arts

The Hastings Initiative for AI and Humanity, established with a $50 million gift from Reed Hastings '83, is funding student-led research and creative work. New grants of up to $1,000 support one-year projects: honors theses, independent studies, training, and collaborations with faculty, staff, or external partners. Funded projects may:

  • Study AI's impact on society and its fit within sociotechnical systems.
  • Apply AI to create, solve problems, and build models that serve the common good.
  • Advance the technology to reduce risks and improve outcomes.

"I like to say that the liberal arts don't need AI, AI needs the liberal arts," said Eric Chown, faculty director of the initiative. "The Hastings Initiative is helping to put Bowdoin at the forefront of the intersection of AI and the liberal arts at a time when the AI world desperately needs the kind of guidance that the liberal arts can provide."

Ana Lopes '28: Personalized AI Tutor

Motivated by how people learn, computer science and math major Ana Lopes is building an AI tutor that turns course materials into targeted review sessions, study guides, and recall games. The system uses retrieval-augmented generation (RAG) to anchor outputs in a student's own textbook chapters, readings, and notes.
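Lopes's implementation isn't described in detail; as a minimal sketch of the retrieval step that RAG systems like hers depend on, here is a toy version using bag-of-words vectors in place of real embeddings (all function names and the sample chunks are hypothetical):

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a word-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k course-material chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# The retrieved chunk would then be placed into the LLM prompt, so the
# tutor's review questions stay grounded in the student's own materials.
chunks = [
    "Photosynthesis converts light energy into chemical energy.",
    "Mitosis is the process of cell division.",
]
print(retrieve("how does photosynthesis work", chunks))
```

In a production system, `embed` would call a real embedding model and the chunks would come from the student's textbook chapters, readings, and notes, but the grounding principle is the same.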

After a literature review on memory and retention, she focused on strengthening recall over time and keeping engagement high. "Every time I learn something new, a new world opens up," she said. "How can these systems help us be better learners?" She plans to test the app with students after finalizing the core mechanics.

Advisor: Sarah Harmon, associate professor of computer science

Related reading: AI for Education

Louisa Linkas '26 and Shibali Mishra '26: Improving Satellite Images and Environmental Monitoring

In polar and alpine regions, low sun angles cast long shadows and flatten contrast, making it hard to distinguish snow, ice, water, and bare ground. Earth and oceanographic science major Louisa Linkas and math/computer science major Shibali Mishra are tackling this recurring limitation in remote sensing research.

They're developing an AI model to predict and correct illumination-related errors, clarifying what's hidden by dim light or deep shadow. The tool could help glaciologists and climate researchers working at high latitudes, and improve agricultural and land-use monitoring worldwide.
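Their model is still in development; for context, a classical non-AI baseline for this problem is cosine topographic correction, which brightens pixels whose local illumination geometry makes them darker than a flat, sunlit surface. A learned model like the one described would replace this fixed formula with a richer mapping. A sketch (the function name is illustrative):

```python
import math

def cosine_correction(radiance, sun_zenith_deg, incidence_deg):
    """Classical cosine topographic correction for remote-sensing imagery.

    radiance:        observed pixel value
    sun_zenith_deg:  solar zenith angle (90 minus sun elevation)
    incidence_deg:   angle between the sun and the local surface normal

    Pixels with a large incidence angle (e.g., slopes facing away from a
    low sun) receive less light, so they are scaled up proportionally.
    """
    return (radiance
            * math.cos(math.radians(sun_zenith_deg))
            / math.cos(math.radians(incidence_deg)))
```

At the low sun angles typical of polar regions, incidence angles get extreme and this simple ratio over-corrects, which is one reason a data-driven approach is attractive.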

Advisors: Sarah Harmon, associate professor of computer science; Vianney Gomezgil Yaspik, assistant professor of digital and computational studies

Related reading: AI for Science & Research

Theo Barton '26: Digital Humanities

Digital and computational studies (DCS) and math major Theo Barton worked with faculty to build a RAG system grounded in curated humanities texts, focusing on Paul Ricœur and Galileo. While general-purpose models are efficient, the team highlighted a key advantage of RAG for researchers: "RAG models provide a secure, localized system for data that needs to remain separate from the foundational large language models," Barton said.

This setup is practical for large volumes of unpublished or copyrighted work where provenance, privacy, and citation matter. Barton is aiming for a career in AI governance and policy, concerned about concentrated power and widening inequality if deployment outpaces oversight.

Advisors: Fernando Nascimento, assistant professor of digital and computational studies; Crystal Hall, associate professor of digital humanities

Madina Sotvoldieva '28: Cultural and Gender Bias in AI

After a DCS course on digital text analysis, computer science and math major Madina Sotvoldieva began probing bias in leading language models: Claude, ChatGPT, and Gemini. She generates hundreds of model completions for structured prompts (e.g., "I am a student from [country], and when I grow up I want to be ___") to surface patterns across gender, culture, and names that signal ethnicity.
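The article doesn't include her audit code; the core method, repeatedly sampling completions for templated prompts and tallying the answers per group, can be sketched like this (the country list, template wording, and `model_complete` callable are all hypothetical stand-ins for her actual setup and for real model API calls):

```python
from collections import Counter

COUNTRIES = ["Uzbekistan", "Brazil", "Norway"]  # illustrative sample
TEMPLATE = "I am a student from {country}, and when I grow up I want to be"

def audit(model_complete, n=100):
    """Tally n completions per country.

    model_complete is any callable prompt -> completion string; in a real
    audit it would wrap an API client for Claude, ChatGPT, or Gemini.
    """
    results = {}
    for country in COUNTRIES:
        prompt = TEMPLATE.format(country=country)
        results[country] = Counter(model_complete(prompt) for _ in range(n))
    return results

# A stub "model" for illustration only; it always answers the same way.
stub = lambda prompt: "an engineer"
print(audit(stub, n=3))
```

Comparing the resulting distributions across countries, genders, or names is what surfaces the kinds of patterns described below.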

Early results show nuance: prompts tied to Black names or African countries more often produced community-oriented goals, while male prompts skewed slightly toward science and engineering roles. A second phase targets lower-resource languages (Uzbek, Kazakh, Pashto, Georgian, and Turkmen) to test whether bias mitigation is weaker outside well-represented languages.

Advisor: Vianney Gomezgil Yaspik, assistant professor of digital and computational studies

Victoria Figueroa '26 and Mig Charoentra '27: AI Literacy

Working with Professor Vianney Gomezgil Yaspik, this team is mapping how students in the US and Mexico use and think about AI, from kindergarten through college. The project compares AI knowledge, attitudes, and norms-and looks closely at access, equity, and Spanish-language performance.

They combine surveys with in-class observation to see what AI changes in real learning: reading, writing, math proficiency, and classroom dynamics. The goal is to offer practical guidance for schools writing policies, supporting productive use without letting core skills atrophy.

Advisor: Vianney Gomezgil Yaspik, assistant professor of digital and computational studies

Why This Matters for Researchers

  • Method first: several projects pair LLMs with RAG to keep outputs grounded and privacy-conscious.
  • Measurement matters: bias audits, language coverage, and illumination error modeling show the value of tight experimental design.
  • People stay central: from tutoring to classroom studies, human learning and agency drive the objectives, not the other way around.


