Spanish Research Fund Uses AI to Screen Out Grant Proposals, Sparking Controversy Among Evaluators
The La Caixa Foundation uses AI to screen biomedical research proposals, filtering out weaker ones before human review. Evaluators worry AI may miss innovative projects due to limited content understanding.

The La Caixa Foundation, a non-profit that allocates €145 million a year to research, has introduced generative artificial intelligence to filter out weaker proposals in one of its biomedical funding calls. Roughly one in six applications was screened out by an AI model trained specifically for the task, although each AI-driven rejection is then reviewed by two human evaluators as a safeguard against unfair exclusions.
This approach aims to reduce the workload of human reviewers by quickly identifying proposals unlikely to succeed. Even with that practical goal, the use of large language models (LLMs) here is controversial: the models lack genuine comprehension of research content and cannot explain their decisions. Instead, they judge statistically whether the language of a proposal matches patterns typically found in unsuccessful submissions.
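To make the pattern-matching idea concrete, here is a minimal sketch of how a language-based screen might work; it is not La Caixa's actual system, and the example texts and threshold are illustrative assumptions. It scores a proposal by how much its wording resembles previously funded versus previously rejected submissions, and flags low-scoring ones for human review.

```python
# Minimal sketch (NOT the foundation's actual model): a bag-of-words
# log-likelihood screen comparing a proposal's wording against past
# funded and rejected submissions. All texts below are invented.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def word_log_probs(texts, vocab, alpha=1.0):
    """Laplace-smoothed log P(word | class) over a shared vocabulary."""
    counts = Counter(w for t in texts for w in tokenize(t))
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: math.log((counts[w] + alpha) / total) for w in vocab}

def screen(proposal, funded, rejected, threshold=0.0):
    """Return (score, flagged). A positive score means the wording looks
    more like funded proposals; flagged proposals would go to the two
    human evaluators rather than being rejected outright."""
    vocab = {w for t in funded + rejected for w in tokenize(t)}
    lp_funded = word_log_probs(funded, vocab)
    lp_rejected = word_log_probs(rejected, vocab)
    score = sum(lp_funded[w] - lp_rejected[w]
                for w in tokenize(proposal) if w in vocab)
    return score, score < threshold

# Illustrative training texts (assumptions, not real proposals).
funded = ["novel mechanism of tumour suppression validated in vivo",
          "randomised trial with preregistered endpoints and power analysis"]
rejected = ["we will study many diseases with various methods",
            "broad exploration of several possible topics"]

score, flagged = screen("we will explore various possible diseases",
                        funded, rejected)
print(flagged)  # → True: the wording resembles past rejections
```

Note what the sketch cannot do: it never evaluates scientific merit, only word statistics, which is precisely why genuinely novel proposals written in unconventional language could score poorly.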
Why the Skepticism?
Evaluators are cautious because the models cannot assess the scientific merit behind a proposal, which limits their reliability in funding decisions. Since they rely on linguistic patterns rather than content understanding, there is a risk of overlooking innovative or unconventional research whose language does not fit established norms.
Although human reviewers verify the AI's recommendations, the arrangement still raises questions about accountability and transparency in funding decisions. The trade-off between efficiency gains and the risk of missing promising projects remains the central debate.
Implications for Research Funding
- AI can assist in managing large volumes of applications, easing pressure on evaluators.
- However, reliance on AI models that cannot explain their choices challenges the fairness and openness of the review process.
- Human oversight continues to be essential to maintain the integrity of funding decisions.
As AI tools become more common in research administration, funders and evaluators must carefully consider their roles and limitations.