Explainable AI for Agriculture: Building Trust in Model Decisions
Farmers use AI to guide high-stakes decisions: crop choice, input use, risk planning. The problem: they see the inputs and the output, but not the reasoning. That gap blocks trust and adoption.
Sruti Das Choudhury, research associate professor in the School of Natural Resources, is tackling that gap with explainable AI (XAI). Her projects aim to show the "why" behind model outputs so farmers can verify decisions against their own field knowledge.
Two projects, one target: transparent decisions
Project 1: "Explainable AI for Precision Agriculture: A Data-Driven Approach to Crop Recommendation." The system ranks which features drive a recommendation-pH, rainfall, temperature range, and dozens more-along with contribution strength.
Project 2: "Explainable Artificial Intelligence for Phenotype-Genotype Mapping Using Time-Series Data Analytics." The team is applying XAI to neural-network models that learn from realistic multimodal time-series image data. The goal is to make genotype predictions from phenotypes interpretable and verifiable.
"We will have an answer, an explanation of the output of the model, and we can verify that explanation with the existing knowledge of the farmers," Das Choudhury said.
Why this matters to IT and development teams
XAI turns black-box guidance into auditable decisions. For production systems in agriculture, where errors cost time, money, and yield, feature attribution and a clear rationale are a practical requirement, not a luxury.
The approach also speaks to AI ethics: transparency, interpretability, and trustworthiness. It reduces blind acceptance of outputs and enables human-in-the-loop validation.
Methods in play
- Local and global explanations: the team applies LIME and SHAP to large agricultural datasets to surface factor importance (a minimal SHAP sketch follows this list).
- Modeling mix: K-means, DBSCAN, and Gaussian Mixture Models for clustering and pattern discovery; deep neural networks for classification; TensorBoard for training and clustering visualization.
- Data modes: time-series and multimodal imagery for phenotype-genotype mapping, built to reflect real field conditions.
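To make the attribution step concrete, here is a minimal sketch in Python. It assumes a hypothetical crop_recommendation.csv with numeric feature columns (soil_ph, rainfall_mm, and so on) and a crop label, plus a scikit-learn random forest as the model; the team's actual data, models, and pipeline are not shown here.

```python
# Minimal sketch of SHAP-based feature attribution for a crop recommender.
# Assumptions: a hypothetical "crop_recommendation.csv" with numeric feature
# columns (e.g., soil_ph, rainfall_mm) and a "crop" label; not the project's data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("crop_recommendation.csv")
X, y = df.drop(columns=["crop"]), df["crop"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer gives exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
raw = explainer.shap_values(X_test)

# Older SHAP returns a list of arrays (one per class); newer versions return a
# single 3-D array. Stack, then average |SHAP| over every axis except features
# (this heuristic assumes the feature count differs from sample/class counts).
sv = np.array(raw)
feature_axis = [ax for ax, size in enumerate(sv.shape) if size == X.shape[1]][-1]
global_importance = np.abs(sv).mean(
    axis=tuple(ax for ax in range(sv.ndim) if ax != feature_axis))

# Global view: which factors drive recommendations overall.
for name, score in sorted(zip(X.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name:>15s}  {score:.4f}")
```

A per-row slice of the same SHAP values, or a LIME explanation of the same instance, supplies the local view described in the first bullet.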
How explanations are validated
Explanations are checked against domain expertise from farmers. If the model says "soil pH and rainfall drove the choice," farmers can match that logic to what they see in the field.
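One lightweight way to operationalize that check is sketched below, carrying over the hypothetical names from the attribution example (X, global_importance, and the feature names are assumptions, and the expert list is illustrative).

```python
# Compare the model's top-ranked factors against what a domain expert expects
# for this field. Feature names and the expert list are hypothetical.
expert_expected = {"soil_ph", "rainfall_mm"}
top_model_factors = {
    name for name, _ in
    sorted(zip(X.columns, global_importance), key=lambda t: -t[1])[:3]
}

agreement = expert_expected & top_model_factors
if agreement:
    print("Attribution matches field knowledge on:", sorted(agreement))
else:
    print("Flag for review: attribution disagrees with field knowledge.")
```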
"Deeper insight into the predictions" is the aim, Das Choudhury said-making the model more transparent, interpretable, and trustworthy while aligning with ethical use.
Team, progress, and momentum
Collaborators include Sanjan Baitalik and Rajashik Datta, senior undergraduate students at the Institute of Engineering and Management in Kolkata, India. The group began in January 2025 and moved fast, submitting a paper in early August.
The team is currently working as self-funded volunteers while seeking grants; early results help lay the groundwork for funding and broader deployment.
What the students are building
Baitalik has applied XAI methods such as LIME and SHAP to a real dataset, moving from coursework theory to production-like constraints. "Applying these methods in a practical context helped deepen my comprehension of their utility and limitations," he said.
Datta focuses on model development and evaluation for crop classification and pattern recognition. She uses clustering algorithms and deep nets, and relies on TensorBoard to visualize training dynamics and cluster behavior. Communicating model behavior to non-technical users is a key outcome of her work.
More XAI work in the pipeline
Beyond crop recommendations, Das Choudhury has started four XAI projects in agriculture. She has built a model to predict a crop's genotype from phenotypes and plans to use XAI to verify that the outputs match biological and environmental reality.
Phenotypes are visible traits like leaf shape and plant height. Genotype is the plant's full genetic makeup, which, along with environment and nutrition, influences those traits.
Education and skill-building
To grow talent, she has proposed a semester-long course, "Artificial Intelligence, Computer Vision and Data Analytics for Agriculture and Natural Resources," offered through the School of Natural Resources and the Department of Biological Systems Engineering. The course includes units on explainable AI.
Implementation ideas for your AI stack
- Ship explanations next to predictions: rank top features with contribution values; expose confidence and calibration.
- Use both local and global views: per-decision attributions and cohort-level patterns to spot spurious signals.
- Stabilize explanations: fix random seeds, use consistent background datasets for SHAP, and average across runs (see the seeded LIME-averaging sketch after this list).
- Test explanation quality: run deletion/insertion tests, gradient-based sanity checks, and counterfactual probes (a deletion-test sketch follows this list).
- Close the loop with domain experts: collect "explanation feedback" labels and retrain when attributions fail expert checks.
- Operationalize: log explanation artifacts, track experiments (e.g., TensorBoard), monitor drift, and publish model cards.
- UX for non-technical users: plain-language summaries, minimal jargon, and side-by-side evidence (maps, charts, time-series).
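For the stabilization bullet, one possible pattern is to pin seeds and average local explanations across several runs. The sketch below reuses the hypothetical model and data from the attribution example earlier; the helper name and run count are arbitrary choices, not the project's code.

```python
# Average LIME weights over several seeded runs to damp sampling noise.
# Reuses the hypothetical model, X, X_train, and X_test from the earlier sketch.
from collections import defaultdict
from lime.lime_tabular import LimeTabularExplainer

def averaged_lime_weights(x_row, runs=5, num_features=5):
    totals = defaultdict(float)
    for seed in range(runs):
        explainer = LimeTabularExplainer(
            X_train.values,
            feature_names=list(X.columns),
            class_names=[str(c) for c in model.classes_],
            mode="classification",
            random_state=seed,          # fixed seeds keep each run reproducible
        )
        exp = explainer.explain_instance(
            x_row, model.predict_proba, num_features=num_features, top_labels=1)
        label = exp.available_labels()[0]   # explanation for the predicted class
        for feature, weight in exp.as_list(label=label):
            totals[feature] += weight / runs
    return dict(totals)

print(averaged_lime_weights(X_test.values[0]))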
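For the explanation-quality bullet, a bare-bones deletion test: remove features in attribution order and watch the predicted probability fall. This again leans on the hypothetical model and data from the attribution example, and mean imputation is just one possible baseline.

```python
# Deletion test: impute features from most to least important and track the
# predicted probability. If the attributions are faithful, the probability for
# the originally predicted class should drop quickly.
import numpy as np

def deletion_curve(model, x_row, attributions, baseline):
    order = np.argsort(-np.abs(attributions))       # most important first
    top_class = model.predict_proba(x_row.reshape(1, -1)).argmax()
    x, curve = x_row.astype(float).copy(), []
    for j in order:
        x[j] = baseline[j]                          # "delete" one more feature
        curve.append(model.predict_proba(x.reshape(1, -1))[0, top_class])
    return curve

baseline = X_train.mean().values                    # mean-imputation baseline
row = X_test.values[0]
# global_importance (from the SHAP sketch) is used as a stand-in here; a
# per-row SHAP vector for the predicted class is the more common choice.
print(deletion_curve(model, row, global_importance, baseline))
```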
Why this approach will scale
It treats AI as decision support with accountability. That makes adoption easier for users who live with the outcomes: farmers today, and any operator-facing workflow tomorrow.
As Das Choudhury put it, XAI helps people understand why a system makes certain predictions rather than accepting them blindly.
Resources
- SHAP (GitHub) - model-agnostic and model-specific explainers.
- LIME (GitHub) - local surrogate explanations.
- Latest AI courses (Complete AI Training) - stay current on methods you can put into production.