OpenAI's Mercury targets Wall Street's grunt work: entry-level roles evolve, not vanish

OpenAI's Mercury reportedly draws on more than 100 ex-bankers to teach its models spreadsheet cleanup, base-case models, and comps. Analysts won't vanish; junior work shifts toward review, analysis, and client-ready polish.

Categorized in: AI News, Finance
Published on: Oct 23, 2025

OpenAI's "Mercury" is targeting analyst grunt work, not analysts

Leaked documents suggest OpenAI is training models to automate the repetitive work junior finance teams grind through. The company has reportedly tapped more than 100 former investment bankers from firms like JPMorgan, Morgan Stanley, and Goldman Sachs to teach its systems how to build financial models under a project code-named "Mercury."

The takeaway isn't mass layoffs tomorrow. Economists expect a shift in what entry-level roles do, not a clean swap. As one expert put it, "I'm not convinced that we get rid of entry-level workers anytime soon," but the skill set will change.

What changes first

  • Cleaning and formatting spreadsheets
  • Building base-case financial models (LBOs, M&A, DCF scaffolding)
  • Refreshing comps, sensitivities, and scenario tables
  • Drafting pitch book sections and versioning slides

These are structured, repeatable, and template-driven. That makes them perfect candidates for automation in the near term. Some experts expect firms to automate 60%-70% of the time analysts spend on lower-level tasks within a year.

Headcount: read the data, not the headlines

AI adoption doesn't point in one direction. A recent McKinsey analysis found many organizations expect limited near-term workforce impact overall, but larger firms are more likely to report reductions tied to time saved. In strategy and corporate finance, responses split across no change, decreases, and increases in headcount over the next three years.

Other signals are colder. The World Economic Forum reports a meaningful share of employers expect reductions where tasks can be automated. Either way, expect a rebalancing of workloads: lighter on production, heavier on review, analysis, and customization.

Developing analysts: from builders to reviewers

As automation takes the first pass, juniors will be handed more sophisticated work sooner: complex model adjustments, deeper quantitative analysis, and tighter client-ready outputs. One industry leader summed it up: "AI will give every analyst superpowers… Analysts become reviewers and customizers rather than builders from scratch."

What to do this quarter

  • Own the review layer: Build checklists for model logic, assumptions, sensitivities, and output sanity checks. Treat AI output like work from an eager first-year: useful, but verify.
  • Instrument your time: Track the top 10 recurring tasks by hours. Target the ones with existing templates or clean data sources.
  • Upgrade your stack: Pair Excel with Python or SQL for data prep and validation. Use slide automation for comps, charts, and footnotes.
  • Document prompts and templates: Standardize inputs for recurring tasks (e.g., model skeletons, comps refresh, scenario packs). Version them.
  • Tighten controls: Set rules for data access, model risk review, and compliance logging. Keep an audit trail of assumptions and edits.
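The "own the review layer" advice above can be made concrete in code. Here is a minimal sketch of a programmatic checklist for AI-drafted model output; the specific checks, field names, and the 0-60% margin band are illustrative assumptions, not a standard.

```python
# Sketch of a review-layer checklist for AI-drafted model output.
# Check names, field names, and tolerances are illustrative assumptions.

def check_balance_ties(assets: float, liabilities: float, equity: float,
                       tol: float = 1e-6) -> bool:
    """The balance sheet must tie: assets = liabilities + equity."""
    return abs(assets - (liabilities + equity)) <= tol

def check_margin_sane(revenue: float, ebitda: float) -> bool:
    """EBITDA margin should fall in a plausible band (assumed 0-60%)."""
    if revenue <= 0:
        return False
    return 0.0 <= ebitda / revenue <= 0.60

def run_checklist(model: dict) -> list[str]:
    """Return the list of failed checks; an empty list means the draft passes."""
    failures = []
    if not check_balance_ties(model["assets"], model["liabilities"], model["equity"]):
        failures.append("balance sheet does not tie")
    if not check_margin_sane(model["revenue"], model["ebitda"]):
        failures.append("EBITDA margin outside sanity band")
    return failures

draft = {"assets": 500.0, "liabilities": 300.0, "equity": 200.0,
         "revenue": 120.0, "ebitda": 30.0}
print(run_checklist(draft))  # → []
```

Running failed checks back to the analyst, rather than silently fixing them, keeps the human in the review loop.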

A practical workflow you can roll out now

  • Intake: define scope, key drivers, and deliverables (model + 5-7 slides).
  • Data: pull from approved sources; run Python/SQL scripts to clean and tag.
  • Draft: use AI to build a model skeleton and generate base-case outputs.
  • Scenarios: prompt for 3-5 sensitivities; auto-generate tables and charts.
  • QA: run checklist (formula tracing, ties to sources, unit consistency, extremes).
  • Slides: push charts and tables to a pre-built deck template; add annotations.
  • Final review: senior pass on assumptions, messaging, and risks.
  • Archive: store prompts, files, and decision log for reuse and audit.
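The workflow above lends itself to a scripted pipeline with an audit trail. This is a sketch under assumptions: step bodies are stubs, and the names (`DealPackage`, `pull_and_clean`, and so on) are hypothetical, chosen to mirror the list.

```python
# Sketch of the intake-to-archive workflow as a pipeline of logged steps.
# Step bodies are placeholders; names mirror the list above and are assumptions.

from dataclasses import dataclass, field

@dataclass
class DealPackage:
    scope: str
    data: list = field(default_factory=list)
    outputs: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # audit trail of completed steps

def step(name):
    """Decorator that records each completed step in the audit log."""
    def wrap(fn):
        def inner(pkg: DealPackage) -> DealPackage:
            pkg = fn(pkg)
            pkg.log.append(name)
            return pkg
        return inner
    return wrap

@step("data")
def pull_and_clean(pkg):
    pkg.data = [{"source": "approved", "revenue": 120.0}]  # placeholder pull
    return pkg

@step("draft")
def build_skeleton(pkg):
    pkg.outputs["base_case"] = sum(row["revenue"] for row in pkg.data)
    return pkg

@step("qa")
def run_qa(pkg):
    assert pkg.outputs["base_case"] > 0, "base case failed sanity check"
    return pkg

pkg = DealPackage(scope="LBO model + slides")
for stage in (pull_and_clean, build_skeleton, run_qa):
    pkg = stage(pkg)
print(pkg.log)  # → ['data', 'draft', 'qa']
```

The logged step list doubles as the decision log the archive step calls for, so reuse and audit come for free.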

Skills that compound for finance teams

  • Advanced Excel (index/match/xlookup, array formulas, scenario manager)
  • Python or SQL for data prep, reconciliation, and sanity checks
  • Presentation automation (chart templates, dynamic tables, style guides)
  • Gen AI tooling for structured tasks (model stubs, write-ups, slide drafts)
  • Model risk and controls (assumption logs, peer review, versioning)
  • Valuation nuance: market context, edge cases, and exception handling
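To illustrate the "Python for reconciliation and sanity checks" skill, here is a minimal sketch that compares an AI-drafted set of line items against an approved source. The field names and the 1% relative tolerance are illustrative assumptions.

```python
# Sketch of a reconciliation check: compare AI-drafted line items against
# an approved source and flag anything missing or outside tolerance.
# Field names and the 1% relative tolerance are illustrative assumptions.

def reconcile(drafted: dict[str, float], source: dict[str, float],
              rel_tol: float = 0.01) -> list[str]:
    """Return line items that are missing or deviate by more than rel_tol."""
    flags = []
    for item, expected in source.items():
        got = drafted.get(item)
        if got is None:
            flags.append(f"{item}: missing from draft")
        elif abs(got - expected) > rel_tol * abs(expected):
            flags.append(f"{item}: draft {got} vs source {expected}")
    return flags

source = {"revenue": 120.0, "cogs": 70.0, "ebitda": 30.0}
draft = {"revenue": 120.0, "cogs": 74.0, "ebitda": 30.0}
print(reconcile(draft, source))  # flags the cogs mismatch
```

The same pattern extends to comps refreshes: any figure the model changed without a sourced reason gets surfaced for the analyst's second pass.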

If you want curated resources that focus on practical tools for deal work and reporting, see this roundup of AI tools for finance.

For team leads

  • Run a two-week time study; pick 3 pilot workflows with clear templates.
  • Measure: hours saved, error rates, revision cycles, and client satisfaction.
  • Create a "first pass by AI, second pass by analyst" policy with review gates.
  • Stand up a lightweight model risk process and a shared prompt library.
  • Update hiring rubrics: prioritize data skills, structured thinking, and QA rigor.

Bottom line

AI will clear the low-value work first. Analysts who learn to supervise, validate, and refine AI output will move faster and handle more deals. Teams that standardize workflows, controls, and skills will capture the gains without creating new risks.

