Can AI Identify Crucial Health-Related Research Questions?
Artificial intelligence (AI) has proven effective at answering questions, but can it help generate the right questions to begin with? Recent discussions in the Journal of Global Health explore whether large language models (LLMs) can help pinpoint important and impactful health-related research priorities.
Prioritising Research Questions
Formulating a strong research question is challenging but essential. Consider a question in your field that, if answered, could significantly improve outcomes. Such a question would likely be a top priority for researchers, policymakers, implementers, and funders alike.
In health, prioritising research questions is particularly critical. From heart disease to mental health, identifying the most urgent and impactful questions requires diverse input. The process usually involves gathering views from researchers and practitioners across regions and distilling them into a ranked list of priorities. Funders often rely on these lists to decide which research themes deserve investment, giving their decisions a more objective basis.
Can AI Help?
With AI increasingly used in health—from diagnostics to administration—its potential role in setting research priorities is gaining attention. AI tools like ChatGPT and DeepSeek promise to save time and resources by scanning vast bodies of knowledge to identify key questions. For instance, an AI-driven exercise on pandemic preparedness produced priority questions closely matching those identified by human experts.
The Challenge of Opacity
One major hurdle is explainability. Human-led priority-setting exercises can be transparently documented, including who participated and how the data were analysed. AI-generated outputs, however, often emerge from opaque processes, sometimes called the "black box" problem. Without clarity on how AI tools generate and rank research questions, trust in their results will remain limited.
Addressing this requires that researchers using AI clearly document the data sources, algorithms, and criteria behind the AI's outputs; publishing a structured record of each exercise, as sketched below, is one way to do so. Transparency is essential to build confidence that AI-generated priorities are valid and meaningful.
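As one illustration of what such documentation might look like, the hypothetical record below captures the model, prompt, sources, and ranking criteria behind an exercise in a form that others can audit or rerun. The PrioritySettingRecord structure and its field names are assumptions for this sketch, not an established reporting standard.

```python
from dataclasses import dataclass

@dataclass
class PrioritySettingRecord:
    """Hypothetical provenance record for an AI-assisted
    priority-setting exercise, so others can audit or repeat it."""
    model: str                   # which AI tool and version was used
    prompt: str                  # the exact prompt text submitted
    data_sources: list[str]      # corpora or documents the exercise drew on
    ranking_criteria: list[str]  # how "priority" was defined and scored
    date_run: str                # when the exercise was performed

# All values below are illustrative
record = PrioritySettingRecord(
    model="gpt-4o (2024-08 release)",
    prompt="List the 10 highest-priority research questions on ...",
    data_sources=["WHO guidance documents", "PubMed abstracts, 2019-2024"],
    ranking_criteria=["burden of disease", "answerability", "equity impact"],
    date_run="2025-01-15",
)
print(record)
```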
Why Human Involvement Still Matters
Even with better explainability, there is a risk that AI-generated priority lists might alienate the human stakeholders crucial for implementing research outcomes. Human-led exercises foster engagement, giving researchers and practitioners a sense of inclusion and ownership. If priorities simply emerge from an AI tool without stakeholder involvement, they may face scepticism or resistance, especially in fields with little existing collaboration.
Conversely, in areas with strong networks, AI could complement human insights, enhancing the overall priority-setting process.
Ensuring Validity and Reliability
The more AI-generated priorities align with those from human-led exercises, the stronger the case for using them. It is also important to confirm that the results are stable: not overly sensitive to the choice of AI tool or to minor changes in prompt wording. Comparing the ranked lists produced by repeated runs, as sketched below, is one simple check.
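One simple way to quantify that stability is to compare the lists produced by repeated runs, paraphrased prompts, or different tools. The sketch below is illustrative: the overlap and rank-agreement measures are generic choices, and the example questions are hypothetical.

```python
from itertools import combinations

def jaccard_overlap(list_a, list_b):
    """Fraction of questions that appear on both priority lists."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

def rank_agreement(list_a, list_b):
    """Fraction of shared-question pairs ranked in the same order
    on both lists (a Kendall-style agreement score)."""
    shared = [q for q in list_a if q in list_b]
    if len(shared) < 2:
        return None  # not enough overlap to compare rankings
    concordant = sum(
        (list_a.index(x) < list_a.index(y)) == (list_b.index(x) < list_b.index(y))
        for x, y in combinations(shared, 2)
    )
    return concordant / (len(shared) * (len(shared) - 1) / 2)

# Hypothetical top-3 lists from two runs of the same prompt
run_1 = ["vaccine equity", "surveillance systems", "antimicrobial resistance"]
run_2 = ["surveillance systems", "vaccine equity", "pandemic financing"]

print(jaccard_overlap(run_1, run_2))  # 0.5 -- half the questions recur
print(rank_agreement(run_1, run_2))   # 0.0 -- the shared pair swaps order
```

High overlap and rank agreement across runs and tools would strengthen the case that the AI's priorities reflect something more than prompt-specific noise.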
Exploring AI’s Potential
While caution is necessary, AI holds promise as a research assistant that leverages extensive knowledge to guide future investigations. Questions remain:
- Can AI identify priority questions as effectively as humans, but more quickly and affordably?
- Could AI uncover novel, groundbreaking questions that humans might miss?
- Could AI provide a more democratic approach by drawing on a wider set of electronically available perspectives?
The answer is still uncertain, but the topic deserves close attention.
For professionals interested in AI's role in research and health, experimenting with tools like ChatGPT and learning the basics of prompt engineering is a practical way to see how these models can be guided toward meaningful outputs; a minimal example follows.
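As a starting point, the sketch below shows one way such a prompt might be structured so that the exercise is documented and repeatable. It assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY set in the environment; the model name, prompt wording, and temperature are illustrative choices rather than recommendations from the journal discussion.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A structured prompt: role, scope, task, and ranking basis are all
# stated explicitly, so the exercise can be documented and reproduced.
prompt = (
    "You are assisting a health research priority-setting exercise.\n"
    "Domain: maternal and child health in low-resource settings.\n"
    "Task: propose 5 research questions, ranked by expected impact.\n"
    "For each question, state in one sentence why it ranks where it does."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # lower temperature for more stable, repeatable lists
)
print(response.choices[0].message.content)
```

Stating the domain, task, and ranking basis explicitly makes the exercise easier to document, and a low temperature makes repeated runs more comparable, which matters for the stability checks discussed above.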