What AI's role in strategic foresight tells us about the future of thinking
Artificial intelligence is now part of how teams look ahead. Governments, companies and research groups use it to scan signals, generate scenarios and spot patterns faster than any human can.
That speed forces a practical question: if machines can predict, simulate and speculate at scale, where do humans create the most value? Recent research with 167 foresight experts in 55 countries points to a clear answer: AI extends reach; humans set direction.
From assistant to architect: three levels of AI maturity
- Level 1: Analysis augmentation - Most teams are here. AI accelerates the early research phase: horizon scans, clustering signals, summarizing sources. Think "digital research assistant." Useful for coverage and speed, not for judgement.
- Level 2: Creative sparring partner - Teams use AI to stress-test ideas, propose scenarios from uploaded data, compare signals to facts and tighten structures. It challenges assumptions and widens option sets without taking the wheel.
- Level 3: Integrated workflow - The few frontier teams embed AI across the process. Custom tools and agents continuously collect, classify and analyze streams, feeding living dashboards and scenario libraries.
What teams gain (and how much)
- Time efficiency (39%) - AI takes the repetitive scan-and-summarize work, freeing time for interpretation and decision prep.
- Data processing and analysis (17%) - Better pattern detection and signal clustering across large datasets.
- Idea generation and creativity (12%) - First drafts, alternative angles and "what if" prompts.
- Scenario development (10%) - Faster iteration across multiple futures and variants.
- Quality and scope (7%) - Broader coverage and more consistent structure.
- Accessibility (4%) - Lowers the barrier for non-specialists to contribute.
Usefulness differs by context: in civil society, 43% report high utility; in the private sector, 47% rate it moderate. Either way, the net effect is acceleration.
The trade-offs you can't ignore
- Quality and trust - Outputs can feel generic or derivative. Expect to validate sources and sharpen insights.
- Bias and blind spots - English-centric, Western-weighted data can skew results and bury weak signals from other regions.
- Ethics and governance gaps - Many teams lack policies for confidentiality, transparency and accountability.
- Deskilling risk - Over-automation can dull intuition and pattern recognition. Don't outsource judgement.
- Opacity - Limited traceability turns foresight into an audit exercise: if you can't explain how an answer was produced, you spend your time defending it rather than acting on it.
What stays human
- Sense-making - Framing questions, weighing trade-offs and deciding what matters.
- Values and ethics - Choosing among futures is a moral act, not an optimization problem.
- Narrative - Stories align leaders and teams. AI drafts; humans set the plot.
- Imagination - Machines model probabilities; people choose possibilities.
A practical playbook by role
- Executives and managers
  - Set scope: Where can AI cut research time by 50% without raising risk?
  - Fund small pilots at Levels 1-2; review outcomes monthly; scale what works.
  - Define red lines: sensitive data, decision rights, human sign-off points.
  - Track metrics: cycle time, coverage, bias checks, rework rates.
- IT and data leaders
  - Establish secure access, logging and data retention. Keep sensitive sources off public tools.
  - Curate diverse corpora to reduce regional and language bias.
  - Provide evaluation sets for hallucination and bias testing (see the evaluation sketch after this list).
- Developers
  - Start with a retrieval-augmented pipeline for horizon scanning and signal clustering (see the pipeline sketch after this list).
  - Add explainability: show sources, highlight reasoning steps, rank uncertainty.
  - Automate confidence checks and route low-confidence outputs to humans.
- Foresight practitioners and analysts
  - Use AI to broaden scans and generate contrasts; keep human workshops for sense-making.
  - Write prompts like briefs: purpose, time horizon, drivers, constraints, output format (see the brief template after this list).
  - Pair AI-generated scenarios with a "risk of bias" note and counter-scenarios.
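The evaluation sets above can start very small. A minimal sketch: `ask()` is a stub for your own model call and the eval case is a placeholder; the point is simply to measure how often answers are grounded in sources you already trust.

```python
# Tiny evaluation harness: flag answers whose citations miss the gold sources.
# The eval case and ask() are placeholders for your own data and model call.
EVAL_SET = [
    {"question": "Key drivers of port automation?",
     "gold_sources": {"https://example.org/ports-report"}},  # placeholder
]

def ask(question: str) -> dict:
    # Stub: replace with a real model call returning an answer plus citations.
    return {"answer": "", "citations": []}

def run_eval() -> float:
    hits = 0
    for case in EVAL_SET:
        result = ask(case["question"])
        # Grounded if at least one cited URL appears in the gold set.
        if set(result["citations"]) & case["gold_sources"]:
            hits += 1
    return hits / len(EVAL_SET)  # share of answers grounded in trusted sources
```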
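For developers, a minimal sketch of the clustering-plus-confidence-routing starting point, assuming the sentence-transformers and scikit-learn packages; the model name, cluster count and 0.55 threshold are illustrative choices, not recommendations from the source.

```python
# Signal clustering with a confidence check that routes weak matches to humans.
# Assumes: pip install sentence-transformers scikit-learn
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

CONFIDENCE_THRESHOLD = 0.55  # hypothetical cut-off; tune on your own data

def cluster_signals(signals: list[str], n_clusters: int = 5) -> list[dict]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder works
    embeddings = model.encode(signals, normalize_embeddings=True)
    km = KMeans(n_clusters=min(n_clusters, len(signals)), n_init=10, random_state=0)
    labels = km.fit_predict(embeddings)

    # Confidence proxy: cosine similarity between a signal and its cluster centroid.
    centroids = km.cluster_centers_
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    confidence = np.einsum("ij,ij->i", embeddings, centroids[labels])

    return [
        {
            "signal": text,
            "cluster": int(label),
            "confidence": float(conf),
            "needs_human": bool(conf < CONFIDENCE_THRESHOLD),  # -> review queue
        }
        for text, label, conf in zip(signals, labels, confidence)
    ]
```

Keeping the raw text, cluster and confidence in one record is explainability in miniature: an analyst can see why an item was grouped, and source links can hang off the same record.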
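And the prompt-as-brief advice translates directly into a reusable template; every field value below is a placeholder.

```python
# Brief-style prompt: the fields mirror a foresight brief.
PROMPT_BRIEF = """\
Purpose: {purpose}
Time horizon: {horizon}
Key drivers: {drivers}
Constraints: {constraints}
Output format: {output_format}
"""

prompt = PROMPT_BRIEF.format(
    purpose="Stress-test our supply-chain assumptions",
    horizon="to 2030",
    drivers="reshoring, energy prices, port automation",
    constraints="EU operations only; cite every source",
    output_format="three contrasting scenarios, each with leading indicators",
)
```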
Minimal stack to get started
- Scanning - RSS aggregators + LLM summaries + clustering (a sketch follows this list).
- Signal library - Tagged database with source metadata and regional markers.
- Scenario builder - Templates that force drivers, preconditions and leading indicators (see the template sketch after this list).
- Review loop - Weekly triage, monthly synthesis, quarterly playbook update.
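A sketch of the first two layers of the stack, assuming the feedparser package; the feed list is a placeholder and summarize() stands in for whatever LLM call you use. The Signal record carries the source metadata and regional markers the library needs.

```python
# Scanning layer plus the signal-library record it feeds.
# Assumes: pip install feedparser. Feed URLs below are placeholders.
from dataclasses import dataclass, field
import feedparser

@dataclass
class Signal:
    title: str
    url: str
    source: str
    region: str                            # regional marker for bias tracking
    language: str = "en"
    tags: list[str] = field(default_factory=list)

FEEDS = {
    "https://example.org/tech.rss": ("Example Tech", "global"),  # placeholder
}

def summarize(text: str) -> str:
    return text[:280]  # stub: replace with an LLM summary call

def scan() -> list[Signal]:
    signals = []
    for url, (source, region) in FEEDS.items():
        for entry in feedparser.parse(url).entries:
            signals.append(Signal(
                title=summarize(entry.get("title", "")),
                url=entry.get("link", ""),
                source=source,
                region=region,
            ))
    return signals
```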
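The scenario-builder layer can be as simple as a record type whose required fields force the discipline; a sketch:

```python
# Scenario template: required fields force drivers, preconditions and indicators.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    drivers: list[str]             # forces pushing toward this future
    preconditions: list[str]       # what must be true for it to unfold
    leading_indicators: list[str]  # early signs it is happening
    narrative: str = ""            # the human-written story comes last
```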
Guardrails that matter
- Adopt clear principles for responsible AI use. See the OECD AI Principles.
- Run risk assessments for high-impact use cases. The NIST AI Risk Management Framework is a solid starting point.
- Document sources, decision points and human sign-offs. Treat explainability as a feature, not a footnote.
How to measure progress
- Cycle time - Days from brief to first scenarios.
- Coverage - Number of domains, regions and languages scanned.
- Diversity index - Share of non-Western, non-English sources (computed in the sketch after this list).
- Scenario quality - Stakeholder ratings on plausibility, relevance and novelty.
- Rework and error rates - Hallucinations, missing citations, bias flags.
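Most of these metrics fall straight out of the signal library. A sketch of two of them, reusing the hypothetical Signal record from the stack section; the "Western" grouping below is illustrative and should follow your own taxonomy.

```python
# Coverage and diversity over a list of Signal records (see the stack sketch).
def coverage(signals) -> int:
    """Distinct (region, language) pairs scanned."""
    return len({(s.region, s.language) for s in signals})

def diversity_index(signals) -> float:
    """Share of signals from non-Western, non-English sources."""
    if not signals:
        return 0.0
    western = {"north-america", "western-europe"}  # illustrative grouping
    hits = sum(1 for s in signals if s.region not in western and s.language != "en")
    return hits / len(signals)
```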
The future is hybrid
AI will keep changing how we anticipate change, but the purpose of foresight stays the same: help institutions move through uncertainty with wisdom, ethics and imagination. The next edge isn't more automation. It's better collaboration between human judgement and machine-scale analysis.