Professors share practical findings on using AI in research
Across disciplines, faculty are weaving AI into their research to reduce grunt work, spot patterns in messy datasets and open new lines of inquiry. The tools differ by field, but the aim is consistent: move faster without lowering standards.
Adoption is rising - but applied impact lags
Associate Professor of Psychology Hudson Golino is tracking how faculty across schools are adopting AI, and how the University compares to 13 peers. Usage is climbing, from AI-supported data analysis to tools that clarify difficult concepts. "It's just like introducing electricity in universities in the early 20th century. Everybody's using it," Golino said.
Against peers, the University ranks toward the bottom on the measured impact of AI-assisted research. One exception: machine learning research shows strong output, likely tied to early investment through the School of Data Science. Golino argues for similar commitment to applied AI - projects that answer social questions or change professional practice - and warns that piling on teaching loads during an AI "gold rush" risks underusing research talent.
What AI does well in research - and where it falls short
Economics Professor Anton Korinek, named to Time's list of the 100 most influential people in AI, teaches a graduate seminar (ECON 8991) on using large language models as research assistants. His rule of thumb: treat the model like a brilliant, tireless RA with zero context. You must supply the setup, constraints and checks.
In his evaluations, AI performs well at synthesizing and editing text, and at writing and debugging code. It remains unreliable for deriving equations or setting up mathematical models without tight human supervision. The takeaway: automate the repeatable; keep humans on judgment, modeling and validation.
- High-yield use cases: literature synthesis, outline generation, code scaffolding, unit tests, data cleaning notes and refactoring.
- Keep a human in the loop: identification strategy, causal inference, mathematical derivations, model specification, interpretation and method selection.
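To make "supply the setup, constraints and checks" concrete, here is a minimal Python sketch of how a request to a model might be assembled for a code-drafting task. The `ResearchTask` structure and the `call_llm` placeholder are hypothetical illustrations, not material from Korinek's seminar; swap in whatever model client your group is approved to use.

```python
# Minimal sketch: wrap every request to the model with explicit setup,
# constraints and checks, the way you would brief a new research assistant.
# `call_llm` is a hypothetical stand-in for your lab's actual model client.

from dataclasses import dataclass

@dataclass
class ResearchTask:
    setup: str        # project context the model cannot know on its own
    instruction: str  # the concrete task
    constraints: str  # scope limits, style, data restrictions
    checks: str       # how a human will verify the output

    def to_prompt(self) -> str:
        return (
            f"Context:\n{self.setup}\n\n"
            f"Task:\n{self.instruction}\n\n"
            f"Constraints:\n{self.constraints}\n\n"
            f"The output will be checked as follows, so flag anything "
            f"you are unsure about:\n{self.checks}\n"
        )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; replace with your approved model client."""
    raise NotImplementedError

task = ResearchTask(
    setup="Panel dataset of 2,400 firms, 2010-2022, already cleaned.",
    instruction="Draft pandas code to compute year-over-year revenue growth.",
    constraints="Use only pandas; do not impute missing values.",
    checks="A human will run the code against a hand-computed subsample.",
)
prompt = task.to_prompt()
# response = call_llm(prompt)  # human review of the response is still required
```

The point is the structure, not the wording: the model gets the context it otherwise lacks, and the verification step stays with a person.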
Peering inside models with methods from psychology
Golino is adapting tools he built to study human cognition to probe transformer models. The goal is to map how these systems represent and relate concepts - with the clear caveat that their internal mechanics differ from the human brain. "I'm using methods I developed to understand how human beings work, but now I'm applying and adapting these methods to understand how these transformer models work," he said.
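The article does not detail Golino's specific pipeline, but one simple way to see the flavor of this kind of analysis is to build a similarity network over concept embeddings, the sort of structure network-psychometric methods examine. The vectors below are toy values invented for illustration; in practice they would be extracted from a transformer's hidden states.

```python
# Toy sketch (not Golino's actual method): build a concept-similarity
# network from embedding vectors. The vectors are made up for illustration;
# real ones would come from a transformer's internal representations.

import numpy as np

concepts = ["anxiety", "fear", "joy", "happiness"]
embeddings = np.array([
    [0.9, 0.1, 0.0, 0.2],   # hypothetical vectors
    [0.8, 0.2, 0.1, 0.1],
    [0.1, 0.9, 0.8, 0.0],
    [0.0, 0.8, 0.9, 0.1],
])

# Cosine similarity between every pair of concept vectors
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
sim = (embeddings / norms) @ (embeddings / norms).T

# Keep only strong links to form a sparse "concept network"
threshold = 0.7
for i in range(len(concepts)):
    for j in range(i + 1, len(concepts)):
        if sim[i, j] >= threshold:
            print(f"{concepts[i]} -- {concepts[j]} ({sim[i, j]:.2f})")
```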
For teams interested in the technical foundations, see the original transformer paper, "Attention Is All You Need" (arXiv:1706.03762).
Responsible workflows: disclosure, context and equity
Assistant Professor Mona Sloane integrates tools like the Perplexity search engine into lab workflows to promote hands-on learning. Her non-negotiable: disclose AI use in coursework and papers. "It's never going to be a silver bullet. It's always going to come with risks [as an] epistemic technology that reconfigures knowledge production," Sloane said.
Assistant Professor of Practice Renee Cummings stresses ethics and guardrails, given that even system creators don't fully grasp model limits. She prompts teams to ask, "What's the diversity of perspectives? Is it just a Western perspective? Whose voices are being amplified by the AI and whose voices are being excluded?"
Cummings uses AI as a comparative tool to scan for gaps rather than to generate primary findings. She urges "curiosity and accountability," and calls for transparent documentation: "documenting the prompts that are used, the tools that were used and the decision-making steps … almost like lab protocols." For reference frameworks on risk and governance, see the NIST AI Risk Management Framework.
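As one possible shape for the "lab protocol" style of documentation Cummings describes, here is a hypothetical prompt-log entry in Python. The field names are illustrative, not a standard; adapt them to your team's review process and to whatever your governance framework (such as the NIST AI RMF) asks you to capture.

```python
# Hypothetical prompt-log entry in the spirit of "lab protocols":
# record what was asked, of which tool, and what a human decided afterward.
# Field names are illustrative; adapt to your team's review process.

import json
from datetime import datetime, timezone

log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tool": "example-llm",          # name of the model/tool used
    "tool_version": "2024-06",
    "prompt": "Summarize the three main critiques in the attached reviews.",
    "settings": {"temperature": 0.2},
    "output_location": "notes/review_summary_draft.md",
    "human_decision": "Kept two of three points; third was unsupported.",
    "reviewer": "initials-of-team-member",
}

with open("ai_usage_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(log_entry) + "\n")
```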
Actionable playbook for research teams
- Define AI-eligible tasks: literature mapping, code generation, figure captions, data dictionaries and reviewer-response drafts.
- Set a disclosure policy: note where AI contributed (what, when, which tool, version) in methods or acknowledgments.
- Log everything: prompts, settings, tool versions, datasets, acceptance criteria and human edits. Store with your repo.
- Institute model checks: adversarial questions, counterfactual prompts and unit tests for code (see the sketch after this list); keep a second-reader review.
- Protect validity: keep causal identification, modeling choices and statistical inference under human control.
- Audit for equity: sample outputs across demographics, geographies and time; flag omissions and biased framings.
- Guard data: prevent leakage of sensitive datasets; use local or approved environments for controlled work.
- Budget your time: automate boilerplate; reserve deep work for design, interpretation and writing the argument.
- Upskill the team: short, focused training on prompting, verification and reproducibility standards.
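For the "unit tests for code" item above, a minimal sketch of what a check on model-drafted code can look like. `yoy_growth` stands in for a hypothetical AI-generated helper; the expected values are computed by hand so the test fails if the generated logic drifts.

```python
# Minimal sketch: treat AI-drafted code like any other untrusted contribution
# and pin it down with hand-computed expectations. `yoy_growth` is a
# hypothetical helper the model drafted; the test values were worked out by hand.

def yoy_growth(values):
    """AI-drafted helper: year-over-year growth rates for a revenue series."""
    return [
        (curr - prev) / prev
        for prev, curr in zip(values, values[1:])
    ]

def test_yoy_growth_matches_hand_calculation():
    revenue = [100.0, 110.0, 99.0]
    expected = [0.10, -0.10]          # computed by hand: +10%, then -10%
    result = yoy_growth(revenue)
    assert len(result) == len(revenue) - 1
    assert all(abs(r - e) < 1e-9 for r, e in zip(result, expected))

if __name__ == "__main__":
    test_yoy_growth_matches_hand_calculation()
    print("yoy_growth matches the hand-computed check.")
```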
Bottom line
AI can speed the busywork and sharpen feedback loops. The researchers here show that the real gains come from clarity: decide where AI helps, where it doesn't, and how you'll prove the difference with documentation and review.