Humanities meet AI at UChicago's Neubauer Collegium, setting a cross-disciplinary agenda for research and creativity

UChicago's Humanistic AI project links the humanities and computer science to see what actually helps research and creative work. Workshops and case studies point to practical steps.

Published on: Nov 21, 2025

Humanistic AI at UChicago: A pragmatic path for research across the humanities and computer science

How should humanistic scholars evaluate and use generative AI? And where can they directly influence its development? An interdisciplinary team at the University of Chicago's Neubauer Collegium is taking that on with the Humanistic AI project, led by professors Hoyt Long and Chris Kennedy.

The goal is clear: identify the opportunities and challenges that generative models pose across literature, linguistics, philosophy, sociology, computer science, and adjacent fields, then build a working strategy for collaboration that moves AI research forward. The project also looks beyond academia to the creative practices already shaped by AI tools.

Why this matters for researchers

  • Shared methods: bridge humanistic inquiry with computational modeling, without diluting either.
  • Near-term outputs: case studies that test how generative models inform research questions and creative processes.
  • Field-wide impact: guidance for pedagogy, evaluation standards, and ethical practice as AI diffuses through research and teaching.

"I could not be more excited about the ways in which our faculty in the arts and humanities are thinking about innovative ways to work at the nexus of AI and culture," said Deborah Nelson, Dean of the Arts & Humanities. She noted that humanistic expertise in context and interpretation can help catalyze the next generation of AI breakthroughs.

Inside the first workshop (Oct. 17-18, 2025)

Nearly two dozen scholars from 12 institutions met at the Neubauer Collegium. After a lightning round of short talks, mixed breakout groups formed pilot ideas that cut across disciplines. The discussion was grounded, fast, and collaborative.

  • Simulations with LLMs: Use models to run controlled scenarios that probe cultural behavior, meaning, and interaction.
  • AI for discovery and creation: Test how tools can assist literature reviews, exploration of archives, and creative workflows.
  • AI "slop" as an object of study: Analyze the causes, spread, and consequences of low-quality AI text in academic and public settings (emails, essays, papers).

Chris Kennedy, a formal linguist focused on meaning and its indeterminacy, put the core challenge on the table: "How will we know whether we are learning something interesting about humans, rather than learning something about language models?" The comparative lens (human writers vs. language models) offers a route to new insights on both.

What participants are seeing across fields

  • More papers, faster submission cycles, and mixed quality as AI tools spread.
  • New datasets from human-chatbot interactions: promising, but ethically complex.
  • Corporate funding pushing technical progress while raising questions about access and standards.
  • Classroom experiments: instructors testing AI-inflected pedagogy and assessment.

Ted Underwood, professor of information sciences and English at the University of Illinois, was blunt: "AI is really good at writing lots of kinds of texts. It summarizes things well. But it has not done a great job of putting novelists out of work." Those failure modes are productive: they point to where human creativity is still distinct, and where research can push models or set guardrails.

Practical takeaways for your lab or department

  • Use LLMs as instruments, not oracles: Pre-register simulation protocols, include human baselines, and report failure cases.
  • Make "slop" measurable: Define detection criteria (stylistic artifacts, citation patterns, factual drift) and test interventions in real coursework or editorial pipelines.
  • Document creative workflows: For writing, translation, or analysis, log prompts, versions, and review steps so results are reproducible.
  • Share cross-disciplinary artifacts: Release prompts, annotation schemes, and rubrics alongside datasets and code.
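The workflow-logging bullet above can be sketched in a few lines of Python. This is a minimal illustration, not project tooling: the JSONL log format, the field names, and the `log_step` helper are all assumptions made for the example.

```python
import datetime
import hashlib
import json

def log_step(log_path, prompt, model, output, reviewer_note=""):
    """Append one workflow step (prompt, model, output hash, review note)
    to a JSON Lines log so the run can be audited and reproduced later."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        # Hash the output rather than storing it, so the log stays small
        # but any later copy of the text can be verified against it.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "reviewer_note": reviewer_note,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage: record one translation step so the exact prompt, model label,
# and output fingerprint are preserved alongside the human review note.
rec = log_step(
    "workflow.jsonl",
    prompt="Translate stanza 3 into plain prose.",
    model="model-x",
    output="A plain-prose rendering of the stanza.",
    reviewer_note="Checked against the original meter; accepted.",
)
```

Appending one JSON object per line keeps the log trivially diffable and shareable alongside datasets and code, which is the point of the "cross-disciplinary artifacts" bullet.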

Timeline and next steps

  • Project runs through June 2027.
  • Team reconvenes in June 2026 to collaborate on works in progress.
  • Final session (spring 2027): present results and chart follow-on work.

"In studying the differences between two kinds of language production systems-humans and language models-we can learn something about both of them," Kennedy said. "The differences then become the basis for new insights."
