Personal finance and AI: Should you trust ChatGPT's investment advice?
Retail investors keep asking ChatGPT, "Should I buy?" And yes - the model answers. The real issue isn't whether it can reply, but whether you should act on it.
Regulators say public chatbots aren't authorised to give investment advice. Yet adoption is rising: a recent eToro survey of 11,000 retail investors across 13 countries found nearly one in five already use AI tools to shape portfolios.
Where the line is drawn in the EU
There's a hard distinction: using AI as a research co-pilot vs. receiving personalised investment advice. The latter is a regulated activity under MiFID II. According to ESMA, no publicly available AI tool is authorised to provide regulated advice in the EU.
For reference, see the MiFID II/MiFIR rules published by the European Commission and ESMA's guidance on the use of AI in retail investment services.
Why investors still use it
Time and cost. AI can summarise filings, scan news, and stress-test ideas faster than a human analyst. In 2023, Finder launched a 38-stock fund built and run almost entirely by ChatGPT; two and a half years later it was up nearly 55%, beating the average of the UK's ten most popular funds by over 18 percentage points.
Promising, but risky to generalise. Markets are messy. Models that look sharp in one regime can falter when volatility spikes or narratives flip.
What AI is good at - and where it fails
A 2025 study in Nature highlights a strength: processing unstructured data, from reports to queries, to support planning and risk analysis. The catch is well known: if inputs are incomplete, biased, or stale, the model can produce confident nonsense - hallucinations.
BridgeWise, which covers 50,000+ assets with AI-driven research, warns that vague prompts invite errors. Ask, "Should I invest in X?" and you'll likely get a neat but misleading answer, especially on thinly covered names. Their stance is clear: AI should support decisions; it shouldn't make them.
A practical playbook for finance pros
- Use cases that work: idea generation, factor checklists, thesis frameworks, variant views, summarising transcripts and filings, drafting client notes faster.
- Use cases to avoid: suitability assessments, explicit buy/sell/hold calls, portfolio construction without human oversight, personalised recommendations.
- Prompt patterns that reduce risk:
  - "Outline a thesis framework for [ticker]: drivers, risks, KPIs, catalysts. Do not make a recommendation."
  - "Summarise the last 3 quarters' MD&A for [company]. Cite sections. Flag inconsistencies."
  - "List potential data sources and leading indicators for [sector] margin pressure."
- Controls to put in place:
  - Disclosure: state that AI was used in research drafts.
  - Human-in-the-loop: analyst review is mandatory before any client output.
  - Auditability: log prompts, versions, and sources; keep a review trail (see the sketch after this list).
  - Data hygiene: prefer retrieval-augmented workflows; ground answers in linked, dated sources.
  - Model governance: define allowed tasks, red lines, and escalation paths.
- Validation habits:
  - Cross-check with primary sources (filings, transcripts, regulatory notices).
  - Run scenario tables instead of point predictions.
  - Stress-test assumptions against historical shocks and alternative regimes.
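To make the auditability and human-in-the-loop controls concrete, here is a minimal sketch of what a review trail could look like in practice. It assumes a simple append-only JSON-lines log; the file name, field names, and example values are illustrative, not a regulatory standard or any vendor's API.

```python
# Minimal audit-trail sketch for AI-assisted research (illustrative only).
# Each query is appended as one JSON line so a reviewer can reconstruct
# what was asked, which model answered, which sources grounded the answer,
# and who signed off before anything reached a client.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("research_log.jsonl")  # hypothetical location for the review trail

def log_ai_query(prompt: str, model_version: str, sources: list[str],
                 response_summary: str, reviewer: str | None = None) -> dict:
    """Append one audit record; reviewer stays None until an analyst signs off."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model_version": model_version,
        "sources": sources,              # linked, dated documents used for grounding
        "response_summary": response_summary,
        "reviewed_by": reviewer,         # human-in-the-loop sign-off
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a research prompt logged before analyst review (all values are made up).
log_ai_query(
    prompt="Summarise the last 3 quarters' MD&A for [company]. Cite sections.",
    model_version="example-model-2025-01",
    sources=["[company] 10-Q, Q2 2025, filed 2025-08-01"],
    response_summary="Margins flat; management flags FX headwinds.",
)
```

Even a lightweight log like this supports the disclosure and governance points above: it records what the model was asked, what it relied on, and whether a human has reviewed the output.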
 
What ChatGPT says about itself
Asked directly, ChatGPT's stance is cautious: use it as a support tool, not a substitute for professional advice. It can produce plausible but incorrect answers, and it will do so with confidence if your question is too broad or the data is weak.
Compliance context you can't ignore
Under current EU rules, firms may use AI to assess knowledge and experience, financial situation (including risk tolerance), and objectives (including sustainability preferences) - but only within a supervised framework. ESMA underscores transparency, governance, auditability, and ongoing human oversight.
Agentic AI and robo-advisors: what's next
The next wave is agentic systems that plan tasks and use tools with more autonomy. That could compress workflows across research, monitoring, and rebalancing.
Robo-advisory is set to grow fast, with market value projected to exceed $471 billion by 2029, up from nearly $62 billion in 2024 - an implied growth rate of roughly 50% a year. Expect stronger demand for low-cost, automated portfolios - and stricter oversight to match.
Bottom line
Treat ChatGPT as an accelerant for research, not an oracle. Let it speed the grunt work. Keep investment judgment, suitability, and accountability squarely with qualified humans.
Disclaimer: This information does not constitute financial advice. Always do your own research to ensure investments are right for your specific circumstances. We are a journalistic website and aim to provide the best guidance from experts. If you rely on the information on this page, then you do so entirely at your own risk.