How AI Learns to Make Better Decisions When Faced With Uncertainty

AI systems often struggle with uncertainty, presenting answers overconfidently even when the underlying data is limited. New methods help AI quantify its confidence, supporting better decisions in business, healthcare, and daily life.

Published on: Jul 24, 2025

Q&A with a Computer Science Professor: How AI Confronts the Human Challenge of Uncertainty

As artificial intelligence takes on more complex decision-making roles, understanding how machines handle uncertainty becomes critical. How do AI systems weigh competing values when outcomes are unclear? What does a reasonable choice look like when information is imperfect? These questions have moved from academic debate to practical concern as AI starts to influence real-world decisions.

A new framework developed by Willie Neiswanger, assistant professor of computer science at USC Viterbi School of Engineering and the USC School of Advanced Computing, addresses these issues. Working with students, Neiswanger integrates classical decision theory and utility theory to boost AI’s capacity for decision-making under uncertainty. His research was highlighted at the 2025 International Conference on Learning Representations and published on the arXiv preprint server. He shares insights on how AI tackles uncertainty and what this means for future applications.

What distinguishes artificial intelligence from human intelligence?

Neiswanger notes that human intelligence currently excels in judgment and nuanced understanding, while AI systems offer strengths in processing vast data quickly and simulating multiple future scenarios. Large language models (LLMs), for example, can analyze extensive text data and generate diverse possible outcomes at scale. The goal is to combine these complementary strengths—leveraging LLMs’ computational power alongside human judgment.

Why do current large language models struggle with uncertainty?

Uncertainty is a fundamental hurdle in decision-making. AI models often lack the ability to express confidence levels or acknowledge gaps in knowledge. Unlike experts who can communicate degrees of certainty, LLMs tend to produce responses with apparent confidence regardless of the underlying data’s reliability. This creates challenges when decisions require nuanced balance among evidence, likelihoods, and user preferences.

How does your research tackle the issue of uncertainty?

Neiswanger’s work focuses on sequential decision-making under uncertainty, especially where data collection is costly. Applications include black-box optimization, experimental design, and scientific decision tasks like materials or drug discovery. He explores how large foundation models can both benefit from and contribute to decision-making frameworks that quantify uncertainty while aligning with human values.

What approach did you develop to improve AI handling of uncertainty?

The team created an uncertainty quantification method enabling LLMs to measure and communicate confidence levels for predictions. The process starts by identifying key uncertain variables relevant to a decision. The language model assigns probability scores in natural language for various outcomes—such as crop yields, stock prices, or shipment volumes—based on context like reports and historical data. These linguistic probabilities are then converted into numerical values that guide decisions aligned with human preferences.
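As a rough illustration of that last step, the sketch below maps verbal confidence labels to numeric scores and normalizes them into a probability distribution over outcomes. It is a minimal sketch under stated assumptions: the label-to-number mapping, the outcome names, and the crop-yield example are illustrative, not values taken from the published method.

# Minimal sketch: turning linguistic probability labels from an LLM into
# numeric values for a decision. The mapping and outcomes are assumptions.

# Hypothetical mapping from verbal confidence to a probability score
LABEL_TO_PROB = {
    "very likely": 0.90,
    "likely": 0.70,
    "somewhat likely": 0.55,
    "somewhat unlikely": 0.40,
    "unlikely": 0.25,
    "very unlikely": 0.10,
}

def to_distribution(labeled_outcomes: dict[str, str]) -> dict[str, float]:
    """Map each outcome's verbal label to a score, then normalize to sum to 1."""
    raw = {o: LABEL_TO_PROB[label] for o, label in labeled_outcomes.items()}
    total = sum(raw.values())
    return {o: p / total for o, p in raw.items()}

# Example: LLM-labeled outcomes for next-season crop yield (illustrative)
labels = {"high yield": "somewhat likely", "average yield": "likely", "low yield": "unlikely"}
probs = to_distribution(labels)
print(probs)  # roughly {'high yield': 0.37, 'average yield': 0.47, 'low yield': 0.17}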

Where can this research be applied immediately?

  • Business: Enhances strategic planning by providing realistic estimates of market uncertainties and competition.
  • Healthcare: Supports diagnostic and treatment decisions by better accounting for uncertainty in symptoms and test results.
  • Personal decisions: Offers users more informed advice on everyday choices that involve risk and incomplete information.

The framework’s ability to incorporate stakeholder preferences ensures that recommendations are not just mathematically optimal but also acceptable in practice, respecting human values and constraints.
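To make that idea concrete, the sketch below ranks two hypothetical actions by expected utility, with the utility numbers standing in for stakeholder preferences (here, a stronger aversion to low-yield losses). The actions, utilities, and probabilities are assumed for illustration and are not drawn from the paper.

# Minimal sketch of choosing an action by expected utility once a probability
# distribution over outcomes is available. All numbers below are assumptions.

def expected_utility(action_utilities: dict[str, float], probs: dict[str, float]) -> float:
    """Sum over outcomes: P(outcome) * utility(action, outcome)."""
    return sum(probs[o] * u for o, u in action_utilities.items())

# Hypothetical utilities of two actions under each outcome, scaled to reflect
# a stakeholder's risk tolerance (e.g., penalizing low-yield losses heavily).
utilities = {
    "plant corn":  {"high yield": 1.0, "average yield": 0.6, "low yield": -0.8},
    "plant wheat": {"high yield": 0.7, "average yield": 0.5, "low yield": 0.1},
}

probs = {"high yield": 0.37, "average yield": 0.47, "low yield": 0.17}
best = max(utilities, key=lambda a: expected_utility(utilities[a], probs))
print(best, {a: round(expected_utility(u, probs), 2) for a, u in utilities.items()})

In a sketch like this, changing the utility numbers to reflect a different stakeholder's risk tolerance can change which action is recommended, even though the probabilities stay the same.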

What are the next steps for this research?

Future work aims to expand the framework’s use across broader decision-making scenarios such as operations research, logistics, and healthcare management. A key focus is enhancing human auditability by developing interfaces that clearly explain why an LLM makes specific decisions and why those choices are optimal.

For more details on this research, see DeLLMa: Decision Making Under Uncertainty with Large Language Models.

