Smarter AI Needs Less Data to Accurately Identify Wildlife in Trail Camera Photos
Oregon State researchers improved AI to identify wildlife in camera trap images using fewer, species-specific photos. This boosts accuracy across diverse locations while cutting costs and computing needs.

Scientists at Oregon State University have enhanced artificial intelligence (AI) models to better identify wildlife species captured by motion-activated trail cameras. Their study introduces a “less-is-more” strategy, training AI on more selective data to improve accuracy while reducing costs and computing resources.
Motion-activated cameras are widely used for wildlife monitoring, but manually sorting thousands of images is time-consuming. Existing AI models often struggle with accuracy, especially when applied to images from locations not represented in their training data.
Improving Accuracy Across Diverse Locations
“A major challenge in wildlife AI is limited accuracy when models classify images from locations they haven’t encountered before,” explains Christina Aiello, a research associate involved in the study. The team’s approach increased accuracy not only at familiar sites but also at novel locations, creating more consistent results across varied environments.
The study, led by undergraduate researcher Owen Okuley, focused on desert bighorn sheep as a test species. Their findings, published in Ecological Informatics, suggest that the training methods are broadly applicable across different wildlife monitoring projects.
Selective Training Data Yields Better Results
Instead of training AI on large, mixed datasets, the researchers limited the scope to images of a single species and included diverse environmental backgrounds specific to the project area. This targeted approach helped the model achieve nearly 90% accuracy in identifying bighorn sheep, even in images from locations not included in training.
“By narrowing training objectives and incorporating varied backgrounds, we used only about 10,000 images—far fewer than usual—to reach high accuracy,” Okuley notes. This reduction in data lowers the computing power and energy needed, which benefits both research budgets and conservation efforts.
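The study itself does not publish its data pipeline, but the curation idea described above can be sketched in a few lines: keep the task to one species and deliberately sample both target and non-target images from every camera site, so each background appears with and without the animal. The field names (`path`, `species`, `site`) and the per-site cap are illustrative assumptions, not the authors' actual schema.

```python
import random
from collections import defaultdict

def build_species_specific_set(metadata, target_species, per_site=50, seed=0):
    """Select a small, background-diverse training set for one species.

    metadata: list of dicts with hypothetical keys 'path', 'species',
    and 'site' (illustrative assumptions, not the study's schema).
    Positives are images of the target species; negatives (other species
    or empty frames) are drawn from the same sites, so the model sees
    each environmental background both with and without the animal.
    """
    # Group images by camera site and by positive/negative class.
    by_site = defaultdict(lambda: {"pos": [], "neg": []})
    for rec in metadata:
        bucket = "pos" if rec["species"] == target_species else "neg"
        by_site[rec["site"]][bucket].append(rec)

    # Cap each site-and-class pool so no single background dominates.
    rng = random.Random(seed)
    selected = []
    for site, groups in by_site.items():
        for bucket in ("pos", "neg"):
            pool = list(groups[bucket])
            rng.shuffle(pool)
            selected.extend(pool[:per_site])
    return selected
```

Capping images per site and per class is one way a dataset of roughly 10,000 images could be kept both small and environmentally diverse, which is the trade-off the quote describes.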
Hands-On Research Experience and Future Applications
Okuley, mentored through the Fisheries and Wildlife Undergraduate Mentoring Program, gained extensive experience managing camera trap data and genetic surveys before leading his AI project. He credits the opportunity to handle every research phase—from conceptualization to publication—with his growth as a scientist.
Looking ahead, Okuley will pursue a Ph.D. at the University of Texas at El Paso. His goal is to develop AI tools to classify waterfowl traits sequentially, enabling identification of both species and hybrids. The original bighorn study also involved collaborators from Johns Hopkins University, the California Department of Fish and Wildlife, and the National Park Service.
Study Highlights
- AI models trained on species-specific datasets with varied backgrounds outperform general models.
- Training with fewer, carefully selected images achieves accuracy comparable to models using much larger datasets.
- Reduced data needs translate to lower computational and energy costs.
- Approach improves AI performance on images from novel locations, a key challenge in wildlife monitoring.
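Measuring performance at novel locations, as the last highlight describes, requires holding out entire camera sites rather than random images; otherwise the model is validated on backgrounds it has already seen. A minimal sketch of such a site-level split follows, assuming a hypothetical `site` key on each image record (the study's actual evaluation protocol is not detailed here).

```python
import random

def split_by_site(records, holdout_fraction=0.25, seed=0):
    """Hold out whole camera sites, not random images, so validation
    measures accuracy on locations the model never saw in training.

    records: list of dicts with a hypothetical 'site' key.
    Returns (train_records, novel_site_records).
    """
    sites = sorted({r["site"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(sites)

    # Reserve a fraction of sites (at least one) as unseen locations.
    n_holdout = max(1, round(len(sites) * holdout_fraction))
    holdout_sites = set(sites[:n_holdout])

    train = [r for r in records if r["site"] not in holdout_sites]
    novel = [r for r in records if r["site"] in holdout_sites]
    return train, novel
```

Splitting by site rather than by image is the standard way to avoid overstating accuracy in camera trap studies, since images from one camera share nearly identical backgrounds.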
For researchers applying AI in ecology and wildlife monitoring, the study suggests that focused, species- and environment-specific training can deliver high accuracy while conserving time, money, and computing resources.
More details on the study: Improving AI performance in wildlife monitoring through species and environment-specific training, Ecological Informatics (2025).