AI is changing academic research, and it's outpacing many PhDs. Here's how
AI has moved from novelty to infrastructure. Finance, labor, security, banking: every sector feels it. Now, core research workflows are being rebuilt in real time.
Alexander Kustov, an associate professor who studies public opinion on immigration, warned that AI systems can already match and often beat the average social science professor at key research tasks. As he put it on X, "AI can already do social science research better than most professors with PhDs… I have no idea what will happen in five years."
From blank page to publishable draft, fast
Large models can plan, draft, and revise manuscripts to top-journal standards with surprising consistency. Costs have crashed: think low hundreds of dollars and a few hours of prompt work for a full paper.
Examples keep piling up. Tibor Rutar produced a publishable-quality paper using prompts alone. Yascha Mounk noted that Anthropic's Claude can generate strong political theory drafts in hours. If you're still treating AI as a grammar tool, you're underestimating it.
What this means for you: idea generation, theory scaffolding, literature synthesis, argument structure, and revision loops can be offloaded-while you focus on the research question and empirical validity.
The end of the "artisanal" 30-page paper
The traditional, handcrafted manuscript is turning into what one scholar called "vestigial wrapping paper." AI already does literature reviews, framing, and argumentation at the level many reviewers expect.
Sean J. Westwood put it bluntly: "AI does lit reviews better. AI will do peer review. Users will skim AI summaries." The role of the researcher shifts from executor to designer-providing precise questions, pre-analysis plans, and clean data that models can work from.
Journals under pressure
Once the price of drafting collapses, submissions spike. That pushes desk-rejection rates higher and strains human peer review. Expect broader use of AI triage, AI-assisted reviewing, and post-publication review as standard.
The risk: a review bottleneck and overreliance on a few AI screening pipelines. The opportunity: faster, more consistent checks-if editors set clear policies, audit models, and keep humans in the loop for final judgment.
Roles and skills are resetting
Agentic AI can clean data, run regressions, generate code, and summarize outputs without supervision. The classic "apprenticeship" model where junior scholars learn by doing grunt work is fading.
New premium: original thinking, careful identification, and verification. If your value is unique questions, causal logic, design, and ruthless checking, you win. If your value is routine tooling, AI will do it cheaper and faster.
The upside: access, speed, and equity
There's a real silver lining. AI enables adaptive surveys at scale, rapid translation of findings into policy memos, and clearer communication across languages.
It also lowers the barrier for scholars outside elite institutions. Non-native English speakers can now produce polished prose on par with that of Cambridge or Stanford alumni. That's good for ideas, and good for science.
What to do now: a practical playbook for researchers
- Redefine your edge: Spend your best energy on questions, identification strategies, and theory that can be tested-not on boilerplate text.
- Pre-analysis first: Lock in hypotheses, measures, and codebooks before you ask AI to help. Treat the pre-analysis plan as your north star.
- Use AI where it's strong: Literature maps, outlines, counter-arguments, code review, robustness checklists, and editorial polish. Keep human control over claims.
- Audit everything: benchmark model outputs, verify citations, and run replication packages end-to-end. No blind trust.
- Disclose AI use: Document prompts, models, and versions. Add a short AI methods note to your appendix.
- Tighten data governance: protect sensitive data, use local models when needed, and track provenance of all generated text, code, and figures.
- Upgrade review workflows: combine AI screening with human judgment. Use structured rubrics so both AI and humans evaluate the same criteria.
- Train your team: teach junior scholars causal inference, survey/experiment design, evaluation, and scientific writing with AI as a co-editor, not the author.
- Plan for journals' shift: expect AI-heavy triage and stricter transparency. Prepare clean repositories, reproducible code, and succinct summaries.
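The "disclose AI use" item above can be made concrete with a small log. This is a minimal sketch, assuming a hypothetical schema of our own invention (the field names are illustrative, not any journal's standard): each AI interaction is recorded as a structured entry, then rendered as a short appendix note plus a machine-readable copy.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIUsageEntry:
    """One logged AI interaction for the paper's AI methods note (hypothetical schema)."""
    task: str            # e.g. "literature map", "code review"
    model: str           # model name as reported by the provider
    version: str         # model version or snapshot date
    prompt_summary: str  # one-line description of what was asked
    human_verified: bool # did an author check the output end-to-end?

def render_methods_note(entries):
    """Render logged entries as a short appendix disclosure note."""
    lines = ["AI use disclosure:"]
    for e in entries:
        status = "verified by authors" if e.human_verified else "NOT yet verified"
        lines.append(f"- {e.task}: {e.model} ({e.version}); {status}.")
    return "\n".join(lines)

log = [
    AIUsageEntry("literature map", "claude", "2025-05",
                 "Asked for a map of related work on survey adaptivity", True),
]
print(render_methods_note(log))
# Machine-readable copy for the replication repository:
print(json.dumps([asdict(e) for e in log], indent=2))
```

The point of the structured version is that reviewers and replicators get the same record: the appendix note for readers, the JSON for the repository.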
Models to watch and where to learn more
Claude is a strong option for literature synthesis, planning, and drafting. If you want hands-on workflows and prompts, explore resources on Claude and on broader research practices with AI.
Bottom line: AI won't make serious researchers obsolete. It will make undifferentiated work obsolete. If you own the question and the verification, you keep the value.