Japan's Drone School Builds AI System to Spot and Track Bears
A drone school in northeastern Japan is building an AI system to find and track bears as sightings spike. D-Academy Tohoku, based in Gojome, Akita Prefecture, expects the system to go live in 2026, and local authorities are already lining up.
The goal is simple: detect bears early, share precise locations fast, and give responders more options than lethal control. For IT and dev teams, it's a clean example of sensor fusion, real-time inference, and multi-agent coordination under real-world constraints.
How the system works
The aircraft measures 98 cm long, 76 cm wide, and 48 cm tall. It carries a night-vision camera and an infrared sensor to capture heat signatures in low light or dense cover.
When a sighting comes in, a pilot launches the drone. Video streams back to a ground PC where AI analyzes the feed; if the AI detects a bear, the drone switches to autonomous mode and tracks it (a minimal sketch of this loop follows the list below).
- Flight time: ~1 hour per aircraft, with an automatic handoff to another drone before battery depletion
- Autonomous return-to-base triggers before critical battery levels
- GPS positions are shared in real time with government, police, and hunting clubs via a dedicated smartphone app
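The article describes this flow but no implementation details. Here is a minimal control-loop sketch under stated assumptions: the `drone`, `detector`, `fleet`, and `alert_app` interfaces and both battery thresholds are hypothetical, not D-Academy Tohoku's actual design.

```python
# Hypothetical ground-station loop: class and method names here are not from
# D-Academy Tohoku's system; they only illustrate the described flow.
from enum import Enum, auto

class Mode(Enum):
    PILOTED = auto()
    AUTONOMOUS_FOLLOW = auto()
    RETURN_TO_BASE = auto()

HANDOFF_BATTERY_PCT = 25  # assumed threshold: hand off well before depletion
RTB_BATTERY_PCT = 15      # assumed threshold: return before critical level

def control_step(drone, detector, frame, fleet, alert_app):
    """One tick of the ground-PC pipeline: detect, follow, hand off, RTB."""
    detection = detector.infer(frame)            # AI runs on the ground PC
    if detection:
        if drone.mode is Mode.PILOTED:
            drone.mode = Mode.AUTONOMOUS_FOLLOW  # switch to autonomous tracking
        # Share the target's GPS fix in real time with the responders' app.
        alert_app.publish(lat=detection.lat, lon=detection.lon)

    if drone.battery_pct <= RTB_BATTERY_PCT:
        drone.mode = Mode.RETURN_TO_BASE         # auto-RTB before critical battery
    elif (drone.battery_pct <= HANDOFF_BATTERY_PCT
          and drone.mode is Mode.AUTONOMOUS_FOLLOW):
        relief = fleet.next_available()          # timed handoff to a fresh aircraft
        if relief is not None:
            relief.dispatch(drone.last_target_fix())
            drone.mode = Mode.RETURN_TO_BASE
```

A real scheduler would also budget return-flight energy and wind, which a fixed percentage threshold ignores.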
Training data and occlusion handling
The team worked with a zoo in Kitaakita to collect images of black and brown bears. Training on that dataset improved detection even when most of the animal is hidden, the kind of partial view a human observer might miss.
For developers: this leans on robust data curation, aggressive augmentation, and models that stay confident under occlusion and in low light. Think thermal/RGB fusion, temporal cues, and careful thresholding to balance recall against false positives.
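The article doesn't share the team's tuning method; a minimal sketch of the thresholding step, assuming a labeled validation set of occluded and low-light frames (`y_true` and `scores` are placeholders) and scikit-learn:

```python
# Sketch: choose the detection confidence threshold that maximizes precision
# while keeping recall above a floor. Validation data is assumed, not from
# the article.
import numpy as np
from sklearn.metrics import precision_recall_curve

def pick_threshold(y_true, scores, min_recall=0.95):
    """Return the threshold with the best precision that still meets min_recall."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision[i] / recall[i] correspond to thresholds[i]; the final
    # (precision=1, recall=0) pair has no threshold, so drop it.
    ok = recall[:-1] >= min_recall
    if not ok.any():
        return thresholds[0]  # nothing meets the floor; stay permissive
    return thresholds[ok][np.argmax(precision[:-1][ok])]
```

For a bear-alert system you'd bias toward recall: a missed bear is costlier than a false alarm that multi-frame confirmation can later suppress.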
What this means for engineers
- Inference path: onboard vs. ground. They're running AI on a ground PC, which relaxes model-size and heat/power constraints but adds link-reliability and latency considerations.
- Sensor fusion: combine night-vision RGB with thermal for better recall in brush or at dusk. Calibration and sync matter more than model choice.
- Tracking: once detected, switch to autonomous follow. You'll want ID persistence, re-identification across partial views, and loss recovery (see the tracker sketch after this list).
- Multi-drone orchestration: timed handoffs, shared state, and geofencing. Build for battery-aware scheduling and conflict-free path planning.
- Ops and safety: strict failsafes, no-fly boundaries, and clear human-in-the-loop points. Audit logs for every detection and alert.
- Quality metrics: evaluate per lighting condition, canopy density, distance bands, and animal posture. Track precision/recall and time-to-first-detect.
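The article confirms autonomous follow but says nothing about the tracker itself. A minimal coast-and-recover sketch under that assumption, with all names and constants hypothetical:

```python
# Hypothetical single-target tracker: illustrates ID persistence and loss
# recovery only; names and constants are assumptions, not the team's code.
from dataclasses import dataclass

COAST_FRAMES = 30  # assumed: coast through ~1 s of occlusion at 30 fps

@dataclass
class Track:
    track_id: int
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    misses: int = 0

def update(track: Track, detection):
    """Advance one frame: refresh on a match, coast on a miss, or declare loss."""
    if detection is not None:
        dx, dy = detection[0] - track.x, detection[1] - track.y
        track.vx = 0.7 * track.vx + 0.3 * dx  # smooth velocity for stable follow
        track.vy = 0.7 * track.vy + 0.3 * dy
        track.x, track.y = detection
        track.misses = 0
        return "TRACKING"
    track.misses += 1
    if track.misses <= COAST_FRAMES:
        track.x += track.vx                   # coast on last velocity estimate
        track.y += track.vy
        return "COASTING"
    return "LOST"  # hand back to detection / start a search pattern
```

Coasting through short occlusions is what keeps the track ID stable when a bear passes behind brush; only a sustained loss should trigger a search pattern.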
Ecosystem momentum
Other groups are moving too. NTT Docomo Business has been flying drones that visually search for bears in Fukushima since October, and Fujitaka launched AI-enabled bear-detection drones in November.
The market signal is clear: authorities want fast, reliable detection and real-time coordination. A 2026 launch leaves room to harden the pipelines and field-test at scale.
Useful references for your build
- Object detection workflows (YOLO family) - baselines, export targets, and deployment notes
- Thermal imaging basics - what thermal sees, where it fails, and how to tune it
Field notes that matter
- False positives: dogs, boars, and people look similar in thermal. Collect hard negatives and use multi-frame confirmation to cut noise (sketched after this list).
- Latency target: keep end-to-end under a few hundred milliseconds for reliable follow.
- Offline resilience: cache maps, keep last-known positions, and degrade gracefully on link loss.
- Human UX: high-signal alerts, one-tap route sharing, and confidence bands that mean something in the field.
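The field notes name multi-frame confirmation but not its mechanics; a minimal N-of-M debounce sketch, where the window sizes `n=4` and `m=6` are assumptions, not the team's settings:

```python
# Sketch of N-of-M multi-frame confirmation: only alert when the detector
# fires in at least N of the last M frames. Window sizes are assumptions.
from collections import deque

class Debouncer:
    def __init__(self, n=4, m=6):
        self.n = n
        self.hits = deque(maxlen=m)  # rolling window of per-frame verdicts

    def step(self, detected: bool) -> bool:
        """Feed one frame's detector verdict; True once confirmed."""
        self.hits.append(detected)
        return sum(self.hits) >= self.n

deb = Debouncer()
for verdict in [True, False, True, True, True, False]:
    if deb.step(verdict):
        print("confirmed bear: raise alert")  # fires once 4 of the last 6 hit
```

In the field you'd key this off track IDs rather than raw frame verdicts, so a bear that briefly leaves the frame doesn't reset confirmation.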
What the team says
Kanako Ishii of D-Academy Tohoku said the system lets a small team cover wide areas. Early detection creates choices: drive bears away, warn residents, and avoid lethal outcomes. The aim is fewer bad encounters and clearer separation between people and bears.
Level up your AI deployment skills
If you're building similar pipelines for field robotics, wildlife monitoring, or public safety, sharpen your stack and ops playbook. You can explore focused learning paths here: AI courses by job.
Specs snapshot
- Dimensions: 98 × 76 × 48 cm
- Sensors: night-vision camera + infrared (thermal)
- AI: ground-based PC processing with autonomous follow-on detect
- Flight: ~1 hour, auto-handoff, auto-RTB
- Data: GPS positions, real-time sharing via dedicated app
- Timeline: targeting 2026