AI counts kids' bites to help prevent childhood obesity

Penn State's ByteTrack counts kids' bites from mealtime video, matching human raters on face identification (about 97%) and reaching roughly 70% of human performance on bite detection. Early results point to scalable obesity-prevention tools.

Published on: Oct 16, 2025

An interdisciplinary team at Penn State built an AI model that measures children's bite rate from mealtime videos, automating a task that usually demands hours of manual review. In a pilot study, the system matched humans on face identification at roughly 97% and detected bites at about 70% of human performance.

That early accuracy is good enough to show real promise for scaling research beyond small lab studies and into real-world settings. With further training, the team expects the system, called ByteTrack, to better separate true bites from lookalike actions and support interventions at home, in clinics, and in schools.

Why bite rate matters

"The faster you eat, the faster it goes through your stomach, and the body cannot release hormones in time to let you know you are full," said Kathleen Keller, professor and Helen A. Guthrie Chair of nutritional sciences at Penn State. Prior work from Keller's group links faster bite rate-and larger bites-to higher obesity risk in children. Larger bite size can also raise choking risk.

"Bite rate is often the target behavior for interventions aimed at slowing eating rate," added Alaina Pearce, research data management librarian at Penn State. The problem: manual counting is tedious and expensive, which limits sample size and generalizability.

What the team built

Led by doctoral candidate Yashaswini Bhat in collaboration with human development researcher Timothy Brick, the team developed an AI pipeline that locates a child's face among multiple people and detects bite events during eating. The model was trained and tested on videos from Keller's Food and Brain Study.

The dataset included 1,440 minutes of footage from 94 children (ages 7-9) across four separate meals with identical foods. Researchers hand-labeled bites in 242 videos to train the model, then evaluated it on 51 held-out videos.

"The system we developed was very successful at identifying the children's faces," Bhat said. It performed best when the face was clearly visible and the eating motion was unobstructed.

Where accuracy drops

  • Occlusion: partial views and off-angle faces reduced bite detection.
  • Lookalike actions: chewing on utensils, playing with food, or fidgeting often mimicked bites, especially late in meals.
  • Non-bite events: sipping beverages triggered false positives.

Even so, the pilot confirmed feasibility. "One day, we might be able to offer a smartphone app that warns children when they need to slow their eating so they can develop healthy habits that last a lifetime," Bhat said.

Why this matters to IT, dev, and research teams

Automating bite-rate measurement cuts annotation cost and speeds up study timelines, enabling larger, more diverse datasets and deployment outside controlled labs. For applied ML teams, this is a clear, bounded action-recognition problem with measurable outcomes (bites per minute, time-to-satiety proxies).
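
As a concrete illustration of the measurable-outcome point, bite rate is straightforward to derive once a detector emits per-bite timestamps. The sketch below is a minimal Python example under that assumption; the window length and example timestamps are invented for illustration, not taken from the study.

    # Minimal sketch: turn detected bite timestamps (seconds from meal start)
    # into a bites-per-minute series over fixed windows.
    def bites_per_minute(bite_times_s, meal_length_s, window_s=60):
        """Count bites in consecutive windows and convert to bites per minute."""
        n_windows = max(1, int(meal_length_s // window_s))
        counts = [0] * n_windows
        for t in bite_times_s:
            idx = min(int(t // window_s), n_windows - 1)
            counts[idx] += 1
        return [c * 60.0 / window_s for c in counts]

    if __name__ == "__main__":
        detected = [12.4, 31.0, 48.7, 66.2, 90.5, 118.9, 140.3]  # example detector output
        print(bites_per_minute(detected, meal_length_s=180))      # -> [3.0, 3.0, 1.0]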

Focus areas that can move accuracy forward without inflating complexity: stronger occlusion handling, better discriminators for utensil gestures vs. hand-to-mouth events, and clear separation of bites from sips. On-device inference and privacy-by-design will be crucial for any pediatric app.
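
On the privacy point, one plausible pattern (an assumption here, not a description of ByteTrack) is to run inference on the device and persist only derived numbers, never raw video. In the sketch below, run_bite_detector is a hypothetical placeholder for whatever local model would be used.

    # Hypothetical on-device loop: frames are processed in memory and discarded;
    # only a per-meal summary (bite count and rate) is written to disk.
    import json

    def run_bite_detector(frame):
        """Placeholder for a local bite detector; a real model would go here."""
        return False  # stand-in so the sketch runs end to end

    def record_meal(timed_frames, out_path="meal_summary.json"):
        """Consume (timestamp_s, frame) pairs and persist only derived numbers."""
        bite_count, last_t = 0, 0.0
        for t, frame in timed_frames:     # e.g., frames streamed from the camera
            last_t = t
            if run_bite_detector(frame):  # inference stays on the device
                bite_count += 1           # raw frames are never written out
        minutes = max(last_t / 60.0, 1.0 / 60.0)
        summary = {"bites": bite_count, "bites_per_minute": round(bite_count / minutes, 2)}
        with open(out_path, "w") as f:    # only the summary leaves the loop
            json.dump(summary, f)
        return summary

Storing only aggregates like this also simplifies the consent and data-retention questions covered in the next steps below.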

Practical next steps for teams exploring similar systems

  • Data: expand annotated clips across camera angles, lighting, utensils, and mealtime contexts; include end-of-meal behaviors.
  • Labels: add event subtypes (bite, sip, utensil chew, fidget) to reduce confusion during training.
  • Evaluation: report precision/recall for bite events in addition to overall agreement with human raters; see the sketch after this list.
  • UX and policy: build consent workflows for minors, default to local processing when possible, and publish clear data-retention rules.
  • Generalization: validate across ages, cultures, and food types to avoid brittle performance.
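
To make the label and evaluation bullets concrete, here is a minimal sketch, assuming predicted and ground-truth events have already been matched one-to-one in time. The label set mirrors the subtypes suggested above, and the toy annotations are invented for illustration.

    # Per-class precision/recall over event labels (bite, sip, utensil chew, fidget).
    LABELS = ("bite", "sip", "utensil_chew", "fidget")

    def precision_recall(y_true, y_pred, positive):
        """Precision and recall for one class over aligned event label lists."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    if __name__ == "__main__":
        # Toy example: human labels vs. model output for seven aligned events.
        y_true = ["bite", "bite", "sip", "bite", "fidget", "utensil_chew", "bite"]
        y_pred = ["bite", "sip", "sip", "bite", "bite", "utensil_chew", "bite"]
        for label in LABELS:
            p, r = precision_recall(y_true, y_pred, label)
            print(f"{label:>13}: precision={p:.2f} recall={r:.2f}")

In a real evaluation, detections would first be matched to annotations with a temporal tolerance; unmatched detections then count as false positives and unmatched annotations as false negatives.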

Funding and collaboration

This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences, the Penn State Institute for Computational and Data Sciences, and the Penn State Clinical and Translational Science Institute.
