Explainable AI Enhances Trust in Agricultural Decisions
Artificial intelligence is increasingly used in agriculture to analyze farm data and inform key decisions. However, farmers often face a challenge: they receive AI-generated recommendations but lack insight into the reasoning behind them. This opacity raises concerns about trust and reliability, especially when AI systems might produce inaccurate or "hallucinated" outputs.
Professor Sruti Das Choudhury from the University of Nebraska–Lincoln is addressing this gap by developing explainable AI systems tailored for agriculture. Explainable AI (XAI) reveals how decisions are made by highlighting which data points influenced the outcome and to what extent. This transparency allows farmers to validate AI recommendations against their own knowledge.
Projects Focused on Explainable AI in Agriculture
Das Choudhury is leading two notable projects within the School of Natural Resources:
- Explainable AI for Precision Agriculture: This project focuses on crop recommendation. Given roughly 50 data inputs, such as soil pH, rainfall, and temperature, the AI explains which factors most influenced its crop choice (a minimal code sketch of this idea follows the list).
- Explainable AI for Phenotype-Genotype Mapping: This project uses neural networks and time-series multimodal image data to connect observable plant traits (phenotypes) with genetic information (genotypes), while providing interpretable insight into the model's predictions.
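To make the crop recommendation idea concrete, here is a minimal sketch of how per-recommendation feature attributions can be surfaced with SHAP. It is not the project's code: the dataset, the three features, the crop labels, and the labeling rule are hypothetical stand-ins for the roughly 50 real inputs, and it assumes scikit-learn plus a recent version of the shap library.

```python
# Hypothetical sketch: explain a crop recommendation with SHAP attributions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["soil_pH", "rainfall_mm", "temperature_C"]  # 3 of ~50 real inputs
crops = ["maize", "rice", "wheat"]  # placeholder crop labels

# Synthetic records standing in for historical field data.
X = rng.uniform(low=[4.5, 200.0, 8.0], high=[8.5, 1600.0, 38.0], size=(600, 3))
# Crude synthetic labeling rule that yields three classes (0, 1, 2).
y = (X[:, 1] > 1000).astype(int) + (X[:, 2] < 15).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP decomposes a single prediction into additive per-feature contributions.
explainer = shap.TreeExplainer(model)
sample = X[:1]
explanation = explainer(sample)  # values shaped (1, n_features, n_classes)
pred = int(model.predict(sample)[0])

print(f"Recommended crop: {crops[pred]}")
for name, value, contrib in zip(feature_names, sample[0],
                                explanation.values[0, :, pred]):
    print(f"  {name} = {value:.1f} -> contribution {contrib:+.3f}")
```

A positive contribution pushes the model toward the recommended crop and a negative one pushes away from it; this per-decision breakdown is the kind of output a farmer could check against local knowledge.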
"We will have an answer, an explanation of the output of the model, and we can verify that explanation with the existing knowledge of the farmers," Das Choudhury explains. The goal is to create AI models that are transparent, interpretable, and trustworthy, aligning with ethical AI principles.
Research Team and Early Results
Working alongside Das Choudhury are two senior undergraduates from the Institute of Engineering and Management in Kolkata, India: Sanjan Baitalik and Rajashik Datta. They began this research in January 2025 and quickly produced results, submitting a paper by August of the same year.
Despite operating without formal funding, the team has made significant progress. Das Choudhury hopes that initial results will support future grant applications and enable proper compensation for the students.
Baitalik has applied explainability techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to large agricultural datasets. "Applying these methods in a practical context helped deepen my comprehension of their utility and limitations," he says. Datta has focused on machine learning models for crop classification, using clustering algorithms such as K-means and DBSCAN along with deep neural networks. She emphasizes that the work sharpened her skill in communicating AI behavior to non-technical users, a crucial aspect for adoption in agriculture.
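As a hedged illustration of the clustering side of this work (again, not the team's code), the sketch below contrasts the two algorithms Datta mentions: K-means requires the number of clusters up front, while DBSCAN infers clusters from point density and labels sparse outliers as noise. The two-feature crop data here is synthetic.

```python
# Hypothetical sketch: K-means vs. DBSCAN on synthetic two-feature crop data.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Samples of [soil_pH, rainfall_mm] around three made-up growing regimes.
X = np.vstack([
    rng.normal([5.5, 1100.0], [0.2, 60.0], size=(50, 2)),  # wetter, acidic fields
    rng.normal([6.8, 600.0], [0.2, 60.0], size=(50, 2)),   # moderate conditions
    rng.normal([7.8, 350.0], [0.2, 60.0], size=(50, 2)),   # drier, alkaline fields
])
X_scaled = StandardScaler().fit_transform(X)  # equalize feature scales first

# K-means: partitions the data into exactly n_clusters groups.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
# DBSCAN: discovers the cluster count from density; -1 marks noise points.
dbscan_labels = DBSCAN(eps=0.4, min_samples=5).fit_predict(X_scaled)

print("K-means clusters found:", np.unique(kmeans_labels))
print("DBSCAN clusters found (-1 = noise):", np.unique(dbscan_labels))
```

Standardizing before clustering matters here because rainfall values numerically dwarf pH values and would otherwise dominate the distance calculations.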
Looking Ahead: Building AI Teams and Education
Das Choudhury aims to expand her team of AI scientists to apply explainable AI across a range of agricultural challenges. She has initiated four projects, including the crop recommendation and phenotype-genotype prediction models described above. Ensuring these models produce interpretable outputs is central to her approach.
Phenotypes are visible characteristics like leaf shape and plant height, while genotypes represent the underlying genetic makeup that influences these traits. Environmental and nutritional factors also affect phenotypes, which adds complexity to the analysis.
Alongside research, Das Choudhury has proposed a semester-long course titled "Artificial Intelligence, Computer Vision and Data Analytics for Agriculture and Natural Resources." It will cover explainable AI and is planned to be offered through the School of Natural Resources and the Department of Biological Systems Engineering.
Successful implementation of explainable AI in agriculture promises to make AI decisions more transparent and ethical. Farmers would gain clarity on why certain predictions or recommendations are made, moving beyond blind acceptance to informed decision-making.
Further Learning
For IT professionals interested in AI applications and explainability techniques, exploring specialized training can be valuable. Resources like Complete AI Training's latest AI courses provide in-depth knowledge on AI interpretability methods and practical use cases.