From AlphaFold to Archaeology: Arts and Science Faculty Lay Out AI's Promise and Pitfalls

AI is everywhere, helping research move faster while adding new risks. Faculty across fields share practical ways to use it, from hiring and labs to culture, policy, and classrooms.

Published on: Jan 30, 2026

Friend or foe? AI's benefits and concerns, through the lens of research

AI is now baked into phones, feeds, labs, and offices. That convenience brings both progress and pressure: speed, scale, and new risks.

Faculty across the College of Arts and Science are studying how AI affects minds, methods, and systems. Their work points to practical ways researchers can use AI without overlooking its blind spots.

How AI is influencing thought and culture

Jay Clayton, William R. Kenan, Jr. Chair and professor of English, is leading a study of 580 films from 1927 to 2025 that reference AI, from computers to robots, cyborgs, and networked systems. The team tracks how portrayals change over time, including depictions in medicine and environmental futures, with plans to expand into novels and television.

His core takeaway: media narratives shape what the public expects from AI, for better and worse. A balanced reading of those stories helps separate signal from hype. For background on the bioethics context informing this work, see the NIH's ELSI (Ethical, Legal and Social Implications) research program.

Jenny L. Davis, Gertrude Conaway Vanderbilt Chair and professor of sociology, studies how AI reflects and steers social life across individuals, organizations, and institutions. Her current projects span public debates around xAI data centers in Memphis, perceptions of job displacement and deskilling, and AI's role in hiring and military decision-making.

In one hiring study, participants aligned their ratings with AI-generated candidate scores, yet later believed the AI had little influence. That gap in self-awareness is a warning sign for anyone deploying decision aids.

  • Test for automation bias in any decision workflow using A/B experiments and delayed-disclosure designs (a minimal sketch follows this list).
  • Force independent judgments before exposure to model outputs; show provenance and uncertainty alongside scores.
  • Audit downstream effects, not just model metrics: who gets hired, funded, or flagged is what actually changes behavior.
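To make the first bullet concrete, here is a minimal sketch of a delayed-disclosure analysis: each participant rates a candidate before and after seeing the model's score, and the fraction of the gap they close toward that score gives a behavioral measure of influence that can then be compared with self-reports. The data, names, and measure below are illustrative, not from Davis's study.

```python
# Sketch: estimating automation bias with a delayed-disclosure design.
# All variable names and toy data are illustrative, not from the study.
from statistics import mean

# Each record: a participant's rating before seeing the AI score,
# the AI score they were then shown, and their rating afterward.
trials = [
    {"pre": 6.0, "ai": 8.0, "post": 7.5},
    {"pre": 7.0, "ai": 4.0, "post": 5.5},
    {"pre": 5.0, "ai": 5.0, "post": 5.0},
    {"pre": 8.0, "ai": 6.0, "post": 7.0},
]

def shift_toward_ai(t):
    """Fraction of the pre-AI gap closed after disclosure
    (0 = no influence, 1 = full adoption of the AI score)."""
    gap = t["ai"] - t["pre"]
    if gap == 0:
        return 0.0
    return (t["post"] - t["pre"]) / gap

influence = mean(shift_toward_ai(t) for t in trials)
print(f"Mean shift toward AI score: {influence:.2f}")
# Compare this behavioral measure against participants' self-reported
# influence; a large gap between the two is the warning sign.
```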

AI as a research instrument: fast, useful, and fallible

Allison Walker, assistant professor of chemistry and biological sciences, is building AI methods to surface new therapeutics from natural products. Her group uses large language models to assemble datasets from the literature and teaches students to predict protein structures with AlphaFold.

AlphaFold has reset expectations for structural biology by making many predictions feasible without immediate wet-lab work. It saves time and budget, but it isn't perfect. Cross-checks and domain expertise still matter. Explore the AlphaFold Protein Structure Database at alphafold.ebi.ac.uk.
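As a concrete entry point, the sketch below fetches a predicted structure by UniProt accession. It assumes the database's public prediction endpoint and its documented JSON fields (pdbUrl, entryId), plus the requests library; check the current API documentation before relying on either.

```python
# Sketch: fetching a predicted structure from the AlphaFold Protein
# Structure Database by UniProt accession. Endpoint and field names
# follow the public API documented at alphafold.ebi.ac.uk; treat them
# as assumptions and verify against the current docs.
import requests

accession = "P69905"  # human hemoglobin subunit alpha, as an example
resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30
)
resp.raise_for_status()
entry = resp.json()[0]  # the API returns a list of model entries

# Download the predicted coordinates (PDB format) for local inspection.
pdb = requests.get(entry["pdbUrl"], timeout=30)
pdb.raise_for_status()
with open(f"{accession}_alphafold.pdb", "wb") as f:
    f.write(pdb.content)
print("Saved", entry["entryId"])
```

The downloaded file stores AlphaFold's per-residue confidence (pLDDT) in the PDB B-factor column, a natural starting point for the cross-checks listed below.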

  • Treat model outputs as hypotheses. Predefine validation rules before viewing predictions.
  • Version datasets, prompts, and model checkpoints to keep results reproducible (a minimal manifest sketch follows this list).
  • Close the loop: prioritize wet-lab or field confirmation where errors are costly.
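One lightweight way to act on the versioning bullet above: hash every artifact that feeds a result into a manifest kept alongside the analysis, so a published figure can be traced back to exact inputs. The file names below are illustrative.

```python
# Sketch: pinning datasets, prompts, and model checkpoints to content
# hashes so a result can be reproduced later. File names are illustrative.
import hashlib
import json
import pathlib

def sha256_of(path):
    """Stream a file through SHA-256 so large checkpoints need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

artifacts = [
    "data/compounds.csv",
    "prompts/extraction_prompt.txt",
    "models/classifier.ckpt",
]
manifest = {p: sha256_of(p) for p in artifacts if pathlib.Path(p).exists()}
pathlib.Path("run_manifest.json").write_text(json.dumps(manifest, indent=2))
```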

Steven Wernke, professor of anthropology, is training vision models on satellite imagery to detect archaeological sites across the Andes. Supported by National Science Foundation and National Endowment for the Humanities grants, the project fills in the "in-between" spaces to understand settlement systems and engineered landscapes at scale.

AI flags likely sites; experts and stakeholders refine what matters. The goal is a richer map that complements fieldwork, not a replacement for it.

  • Build expert-and-stakeholder review into the labeling and triage process.
  • Publish confidence tiers and error modes so field teams know where to dig, literally (see the tiering sketch after this list).
  • Combine top-down detection with bottom-up ethnographic context to prevent misreads.
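A minimal sketch of the confidence-tier idea above: bucket detector scores into tiers that map to field actions. The thresholds, record format, and tier wording are illustrative, not from Wernke's project.

```python
# Sketch: turning raw detector scores into confidence tiers for triage.
# Thresholds and the record format are illustrative, not from the project.
detections = [
    {"lat": -15.84, "lon": -70.02, "score": 0.93},
    {"lat": -15.91, "lon": -70.11, "score": 0.71},
    {"lat": -16.02, "lon": -70.25, "score": 0.42},
]

def tier(score):
    """Map a detector score to an action-oriented confidence tier."""
    if score >= 0.9:
        return "A: high confidence, prioritize ground survey"
    if score >= 0.6:
        return "B: review imagery with regional experts first"
    return "C: hold for more data or ethnographic context"

for d in detections:
    print(f"({d['lat']}, {d['lon']}) score={d['score']:.2f} -> {tier(d['score'])}")
```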

Haein Kang, assistant professor of art, uses AI as one tool among many. After encountering a periodical cicada emergence in Nashville, she used generative imaging to recreate that fleeting scene of cicadas, exoskeletons, and light, treating AI like a new brush, not a substitute for authorship.

Global impact: upside, downside, and the control problem

Michael Bess, Chancellor's Chair and professor of history, studies AI's long-term effects on politics, economies, culture, and daily life. He sees dual-use potential: near-term gains in discovery and efficiency alongside meaningful risks, including job loss, misuse, and an international race that sidelines safety.

The hard question: can AI development stay aligned with human values under competitive pressure? Without guardrails, drift is likely.

  • Invest in safety research, alignment testing, evals, and incident reporting across labs.
  • Tie deployment to thresholds: red-teaming, interpretability checks, and domain-specific risk assessments.
  • Coordinate standards across companies and countries; racing to deploy should not set the rules.

Kristy Roschke, research associate professor of the communication of science and technology, argues AI literacy must be part of media literacy. Most people use AI without knowing how it works, what data trained it, or how inputs become future outputs. That gap magnifies the risks seen with social platforms.

For younger generations, AI is also a companion technology. We now need to teach social norms alongside AI skills, giving educators practical tools rather than leaving them to instinct.

  • Teach model basics: data sources, tokens, inference, and limits (a toy tokenization demo follows this list).
  • Emphasize provenance, citations, and consent: what's original versus synthetic.
  • Set norms for therapeutic or companionship use: boundaries, privacy, and escalation to humans.
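To ground the "tokens" item above for a classroom, the toy demo below shows how text becomes the pieces a model actually processes. It uses OpenAI's tiktoken library as one concrete tokenizer (assumed installed via pip install tiktoken); any tokenizer makes the same point.

```python
# Sketch: showing students how text becomes tokens before inference.
# Uses the tiktoken library as one real tokenizer among many;
# install with `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Cicadas emerged across Nashville this spring."
ids = enc.encode(text)
print(ids)                             # token IDs the model actually sees
print([enc.decode([i]) for i in ids])  # the text piece behind each ID
# Lesson: models operate on these pieces, not on words or meanings,
# which is one source of their limits.
```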

What researchers can do this year

  • Add friction: require human-first judgments before any AI suggestion is viewed.
  • Build eval pipelines that test real-world impacts, not just benchmark scores (a minimal harness sketch follows this list).
  • Keep humans, including domain experts, in the loop where errors carry high stakes.
  • Document data lineage, prompts, and versions for every published result.
  • Teach AI literacy in your lab meetings and methods courses.
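A minimal sketch of the impact-oriented eval item above: score a stand-in system against cases sampled from your own workflow, each labeled with the outcome a domain expert expects. Every name and case below is illustrative.

```python
# Sketch: a tiny impact-oriented eval harness. Replace `model` and the
# cases with your own system and real workflow samples; everything
# named here is illustrative.
def model(prompt: str) -> str:
    """Stand-in for the system under test."""
    return "flag" if "anomaly" in prompt else "clear"

# Cases drawn from the real decision workflow, each paired with the
# outcome a domain expert says should happen.
cases = [
    {"prompt": "routine batch, no anomaly", "expected": "clear"},
    {"prompt": "sensor anomaly in well 7", "expected": "flag"},
    {"prompt": "anomaly reported, later retracted", "expected": "clear"},
]

failures = [c for c in cases if model(c["prompt"]) != c["expected"]]
print(f"{len(cases) - len(failures)}/{len(cases)} passed")
for c in failures:
    # Each failure is a concrete downstream harm to review, not just a score.
    print("FAILED:", c["prompt"])
```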


