OpenAI's "Project Mercury" Is Aiming Straight at the Analyst Grind
OpenAI is quietly training AI to handle the routine work that eats most junior bankers' weeks, according to Bloomberg. The internal effort, called Project Mercury, is focused on financial models for restructurings, IPOs, mergers, and LBOs, exactly where those hours disappear.
More than 100 former investment bankers from firms like JPMorgan, Morgan Stanley, and Goldman Sachs are involved. The goal: make AI useful where finance actually spends its time (Excel, comps, and presentations) without lowering the bar on quality.
Inside the Project
Contractors are reportedly paid about $150 an hour to write prompts, build models, and fine-tune outputs. OpenAI says it works with a range of domain experts via third-party suppliers to improve and evaluate its models.
The work pulls in talent from Brookfield, Mubadala, Evercore, and KKR, along with MBAs from Harvard and MIT. The intent is simple: turn AI from a chat interface into a tool that removes the "pls fix" cycle and the 2 a.m. spreadsheet rework.
How It's Structured
Getting in means passing a hiring flow run almost entirely by AI: a 20-minute chatbot interview, a financial statement test, and a modeling exercise. Contractors deliver one model per week in standard industry formats with familiar Excel conventions (margin spacing, italicized percentages).
Each submission gets reviewed, feedback is applied, and the model is fed into training systems. It's a tight loop built to teach the AI how analysts actually work, not how textbooks say they should.
What This Means for Finance Teams
- First drafts move faster: baseline models, comps, and deck sections built in minutes, not hours.
- Analysts shift from keystrokes to judgment: review, tweak, and pressure-test instead of rebuilding the same tabs.
- Staffing and cost per deal change: fewer hours on low-leverage work, more on client prep and scenarios.
- Quality control becomes the moat: versioning, model standards, reviewer checklists, and audit trails matter more.
- Risk management is non-negotiable: data security, MNPI controls, reproducibility, and approvals must be explicit.
GPT-5 and the Trading/Investment Stack
OpenAI recently launched GPT-5, its most advanced model yet, with access extended to free users. Microsoft's Satya Nadella called it the most capable model from OpenAI so far, and researchers have pointed to gains in speed, accuracy, and reasoning across use cases.
If that level of reasoning pairs with Mercury's domain training, expect AI to take on more of the initial modeling, comps hygiene, and deck scaffolding-leaving teams to focus on insights, scenarios, and client narratives.
For background on OpenAI's broader approach, see the company's site: openai.com.
Practical Next Steps for Heads of IB/PE/Research
- Map your workflows: model templates, comps packs, CIM/10-K summaries, board and IC deck sections.
- Build a sandbox: test AI outputs against gold-standard models; measure error rates, review time saved, and edit counts (see the scoring sketch after this list).
- Set controls: enforce template standards, versioning, reviewer sign-off, and clear rules for MNPI and client data.
- Upskill the team: teach prompt patterns, model QA checklists, and "trust-but-verify" review habits.
- Define vendor strategy: where to use OpenAI directly, where to route through internal systems, and where to block.
- Track ROI: compare seat cost to hours reclaimed per deal and reduction in rework.
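A minimal sketch of that sandbox scoring in Python. The CSV layout, tolerance, and the hour and cost figures are illustrative assumptions, not a prescribed setup; the point is simply to diff AI-drafted line items against a gold-standard model and translate the result into an error rate, an edit count, and rough hours reclaimed.

```python
# sandbox_eval.py - score an AI-drafted model against a gold-standard version.
# Assumes both models are exported to CSV as (line_item, value) rows with no header;
# the tolerance, hour, and cost figures below are illustrative placeholders.
import csv

TOLERANCE = 0.01               # relative error allowed before a cell counts as a miss
HOURS_PER_MANUAL_DRAFT = 6.0   # assumed analyst hours to build the draft by hand
REVIEW_HOURS = 1.5             # assumed hours to review and correct the AI draft
LOADED_HOURLY_COST = 150       # assumed fully loaded cost per analyst hour (USD)

def load_values(path):
    """Read a two-column CSV of (line_item, value) into a dict."""
    with open(path, newline="") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f) if len(row) >= 2}

def score(ai_path, gold_path):
    """Compare an AI draft to the gold standard and summarize the gap."""
    ai, gold = load_values(ai_path), load_values(gold_path)
    misses = [
        item for item, expected in gold.items()
        if ai.get(item) is None
        or abs(ai[item] - expected) > TOLERANCE * max(abs(expected), 1e-9)
    ]
    hours_saved = max(HOURS_PER_MANUAL_DRAFT - REVIEW_HOURS, 0)
    return {
        "cells_checked": len(gold),
        "edit_count": len(misses),
        "error_rate": round(len(misses) / max(len(gold), 1), 3),
        "hours_saved": hours_saved,
        "value_reclaimed_usd": hours_saved * LOADED_HOURLY_COST,
    }

if __name__ == "__main__":
    print(score("ai_draft.csv", "gold_standard.csv"))
```

Run this over a handful of recent deals and the error rate and edit count feed directly into the ROI comparison in the last bullet.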
Analyst Workflow: Where AI Fits First
- Initial 3-statement stubs, LBO skeletons, and merger models built from standard assumptions.
- Comps refresh, spread updates, and sanity checks on outliers and stale data (a minimal check of this kind is sketched after this list).
- Drafting of footnotes, sensitivities, and "assumptions and sources" sections for decks.
- Summaries of filings, transcripts, and diligence notes for quick situational awareness.
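As a concrete illustration of the second item, here is a minimal sketch of an automated comps sanity check: flag stale price dates and multiples outside a Tukey-fence band. The field names ('ev_ebitda', 'price_date'), the five-day staleness window, and the 1.5x IQR fence are illustrative assumptions, not anything Project Mercury is known to use.

```python
# comps_check.py - flag stale prices and outlier multiples in a comps set.
# Field names, the staleness window, and the IQR fence are illustrative assumptions.
from datetime import date, timedelta
from statistics import quantiles

STALE_AFTER = timedelta(days=5)   # price dates older than this get flagged
IQR_FENCE = 1.5                   # standard Tukey fence for outlier multiples

def flag_comps(comps, as_of):
    """comps: list of dicts with 'name', 'ev_ebitda', and 'price_date' keys."""
    flags = []

    # Stale data: price dates outside the allowed window.
    for c in comps:
        if as_of - c["price_date"] > STALE_AFTER:
            flags.append((c["name"], "stale price date"))

    # Outliers: EV/EBITDA outside the Tukey fences of the peer set.
    multiples = [c["ev_ebitda"] for c in comps]
    if len(multiples) >= 4:
        q1, _, q3 = quantiles(multiples, n=4)
        low = q1 - IQR_FENCE * (q3 - q1)
        high = q3 + IQR_FENCE * (q3 - q1)
        for c in comps:
            if not low <= c["ev_ebitda"] <= high:
                flags.append((c["name"], f"EV/EBITDA {c['ev_ebitda']:.1f}x outside peer band"))
    return flags

if __name__ == "__main__":
    peers = [
        {"name": "PeerCo A", "ev_ebitda": 8.7,  "price_date": date(2025, 1, 10)},
        {"name": "PeerCo B", "ev_ebitda": 9.2,  "price_date": date(2025, 1, 10)},
        {"name": "PeerCo C", "ev_ebitda": 9.8,  "price_date": date(2025, 1, 10)},
        {"name": "PeerCo D", "ev_ebitda": 10.1, "price_date": date(2024, 12, 20)},
        {"name": "PeerCo E", "ev_ebitda": 10.6, "price_date": date(2025, 1, 10)},
        {"name": "PeerCo F", "ev_ebitda": 31.5, "price_date": date(2025, 1, 10)},
    ]
    for name, reason in flag_comps(peers, as_of=date(2025, 1, 12)):
        print(f"{name}: {reason}")
```

Deterministic guardrails like this are worth running on any AI-refreshed comps pack before a reviewer opens it.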
What to Watch
- Quality thresholds: Can outputs pass VP review without major rewrites?
- Native integration with Excel/PowerPoint and enterprise data sources.
- Auditability: traceable changes, locked assumptions, and reproducible runs.
- Legal and compliance guidance on AI-assisted analysis and disclosure.
- Talent shifts: demand for analysts who pair modeling skill with AI fluency.
The firms that standardize inputs, enforce QA, and train their teams will compress timelines without sacrificing rigor. The rest will feel the price pressure as clients start expecting the "same-day draft" by default.
Level Up Your Team's AI Fluency
- Explore practical AI tools for finance: AI tools for Finance
- Find role-specific learning paths: Courses by Job