From Dissertation Defense to Meta: Physics-Aware AI That Understands How People Move
Just days after defending his dissertation at the University of South Florida's Bellini College of Artificial Intelligence, Cybersecurity and Computing, Cole Hill loaded his car and headed to California. It wasn't a victory lap. It was day one. He had already accepted a full-time role with Meta - before walking at commencement.
The twist: his path was powered by walking itself - or more precisely, gait analysis, a field in computer vision focused on identifying people by how they move.
Making AI smarter about movement
Most systems identify people by faces or clothing. That's fragile. Faces get covered. Clothes change. Gait analysis looks instead at motion patterns, which are harder to fake and can work at a distance.
The hard part is generalization. Change the lighting, camera angle, or environment, and models often stumble. They're trained on narrow datasets and tend to memorize instead of reason. As Hill put it: "They work well on the datasets they were trained on, but what performs perfectly on one system might fail completely on another."
The research: teach AI the physics behind motion
Hill's dissertation - Dynamics-Consistent Representation Learning for Human Motion Analysis and Identification - tackles that gap. The idea: embed the physics of movement into the learning process so models understand what's happening, not just patterns in pixels.
"If we can help AI understand the mechanics behind what it sees," Hill said, "it becomes more efficient and far more reliable when the situation changes." This approach aligns with what many call physics-informed modeling in machine learning (overview).
Academic backing and validation
Hill's committee included USF Bellini College Launch Dean Sudeep Sarkar, Associate Professor Mauricio Pamplona Segundo, Professors Dmitry Goldgof and Kyle Reed, and Professor Kevin Bowyer of the University of Notre Dame. The defense was chaired by Scott McCloskey of Kitware, Inc.
"Cole's work helps solve one of AI's most pressing challenges: its dependence on large, narrowly focused datasets," Sarkar said. "By incorporating physics into training, his models begin with a foundation of knowledge about how the body moves. That makes them more reliable, adaptable and efficient, qualities that are vital for deploying AI in the real world."
Why this matters beyond the lab
Better motion reasoning helps anywhere video meets messy conditions. Public safety and defense can benefit from improved search in complex scenes. Physical rehabilitation tools can better track progress. Animation, robotics, and human-computer interaction can predict and respond to movement with fewer failures across new settings.
The path that wasn't planned
Hill didn't set out to earn a PhD. He finished undergraduate degrees in electrical engineering and mathematics at USF, shifted to computer science for his master's, then found computer vision through Sarkar's group. COVID hit, projects kept coming, and the work pulled him deeper.
With Sarkar and Pamplona Segundo, Hill advanced physics-informed modeling for gait recognition under a DoD IARPA/Kitware-funded effort. The team also built SynthGait, a large-scale synthetic 3D dataset of lifelike motion rendered under different clothing, viewpoints, and walking styles.
Internships with the U.S. Department of Defense and the National Security Agency reinforced a consistent theme: make systems that hold up when conditions shift.
Next stop: Meta
Hill is starting at Meta on the AI side. The specifics will come, but the direction is set. He credits USF for the mentorship and room to build real systems - and plans to return to Tampa for graduation in December.
Practical takeaways for educators, researchers, and builders
- Prioritize generalization. Test across lighting, viewpoints, frame rates, and environments. If performance drops, your model is memorizing.
- Bake in constraints. Physics, geometry, and kinematics can act as guardrails when data is scarce or out-of-distribution.
- Synthesize smartly. Use simulation to generate edge cases (clothing, speed, load, viewpoint) and stress-test your pipeline.
- Measure what matters. Track consistency across cameras and domains, not just single-dataset accuracy (a minimal sketch follows this list).
- Close the loop with users. For rehab, robotics, or safety, design feedback channels so models improve with real-world inputs.
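To make the measurement point concrete, here is a deliberately small sketch of a cross-domain report. It assumes a generic evaluate(model, data) helper and a dict of named test conditions, both placeholders for your own harness; the useful signal is not the mean accuracy but the spread between the best and worst domain, because a large spread is the signature of memorization.

```python
import statistics

def cross_domain_report(model, domains, evaluate):
    """domains:  dict mapping a condition name (e.g. 'outdoor_cam2') to its test set.
    evaluate: callable returning an accuracy in [0, 1] for (model, test set).
    """
    scores = {name: evaluate(model, data) for name, data in domains.items()}
    accs = list(scores.values())
    return {
        "per_domain": scores,                      # accuracy per capture condition
        "mean_accuracy": statistics.mean(accs),
        "worst_domain": min(scores, key=scores.get),
        "spread": max(accs) - min(accs),           # large spread suggests memorization
    }
```

Reporting the worst domain alongside the mean keeps a single friendly dataset from hiding a brittle model.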
Want to skill up in AI for real-world use?
Explore hands-on learning paths by skill at Complete AI Training. If you're aiming for roles at major labs and companies, compare programs on AI courses by leading companies.
Hill's story is simple: teach machines how motion works, and they stop guessing. That's how you build AI that holds up when the world changes - and how a new PhD drove west with a job waiting.