Yann LeCun plans to leave Meta to launch a startup built on "world models"
Yann LeCun, 65, the Turing Award-winning pioneer behind convolutional neural networks, has told colleagues he intends to leave Meta in the coming months to start an AI company, according to the Financial Times. If he follows through, it would be a rare move by a Big Tech chief scientist and a signal that a new technical path in AI is about to be tested outside a corporate lab.
LeCun joined Facebook in 2013 to build FAIR, Meta's long-term research group, while continuing as a Silver Professor at NYU, where he has taught since 2003. He is widely known for LeNet, the early CNN that cracked handwritten digit recognition and set the pace for computer vision. In 2019, he shared the ACM A.M. Turing Award with Geoffrey Hinton and Yoshua Bengio for breakthroughs that made deep neural networks a critical component of modern computing.
From tinkerer to global influence
Born July 8, 1960, in Soisy-sous-Montmorency, France, LeCun grew up fixing and building electronics with his engineer father. He earned an electrical engineering diploma from ESIEE Paris in 1983, then a PhD in computer science in 1987 from Université Pierre et Marie Curie, focusing on connectionist learning models and early forms of backpropagation.
After a postdoc with Geoffrey Hinton in Toronto, he joined AT&T Bell Labs in 1988. There he developed convolutional neural networks and deployed them at scale: NCR check readers built on his work handled an estimated 10%-20% of all checks in the U.S. in the mid-1990s. He also led DjVu, an image-compression technology adopted by digital libraries, before a stint at NEC Research and his long tenure at NYU.
Meta's pivot, and why LeCun is breaking away
Meta has reworked its AI strategy. In June, it invested $14.3 billion in Scale AI and placed Scale's CEO, Alexandr Wang, in charge of a new group called Meta Superintelligence Labs, moving LeCun's reporting line from chief product officer Chris Cox to Wang.
The company is pushing hard on large language models and product rollouts after Llama 4 underdelivered against rivals like OpenAI and Google. LeCun has been blunt: LLMs alone, he argues, won't reach human-level reasoning and planning.
The bet: "world models" over text-only training
LeCun's startup discussions reportedly center on "world models": systems that learn from video and spatial signals to build an internal model of their environment. Instead of predicting the next word, these systems learn dynamics, simulate cause and effect, and forecast outcomes.
He has estimated this approach could take roughly a decade to hit its stride. If he's right, the next wave of AI progress may come from perception, prediction, and planning grounded in the physical and digital environments we live in, not just from bigger text corpora.
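For readers who want the idea in concrete terms, here is a minimal latent world-model sketch in PyTorch. It is a toy under stated assumptions, not LeCun's actual architecture: an encoder compresses frames into latent states, a dynamics network predicts the next latent given an action, and the loss is computed in representation space rather than over pixels or tokens.

```python
# Toy latent world model: predict the next *state*, not the next token.
# Hypothetical sketch; shapes, sizes, and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyWorldModel(nn.Module):
    def __init__(self, latent_dim=64, action_dim=4):
        super().__init__()
        # Encoder: 64x64 RGB frame -> latent state vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim),
        )
        # Dynamics: (latent state, action) -> predicted next latent state.
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def loss(self, frame, action, next_frame):
        z = self.encoder(frame)
        z_next_pred = self.dynamics(torch.cat([z, action], dim=-1))
        with torch.no_grad():               # stop-gradient target
            z_next_target = self.encoder(next_frame)
        # Prediction error in latent space, not pixel or token space.
        return F.mse_loss(z_next_pred, z_next_target)
```

Real systems add machinery to keep the latent space from collapsing (EMA target encoders, variance regularization), but the shape of the objective is the point here.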
Inside Meta: a research shift with real costs
Former staff have said FAIR has been fading as Meta shifts focus to commercial teams. Many authors of the original Llama paper left within months of publication, and Meta cut about 600 roles in its AI group in October.
So, while LeCun's exit would be a headline move, it also reflects a deeper split: long-horizon research versus near-term product cycles.
Why this matters to engineers, researchers, educators, and IT leaders
- Technical direction: Expect more interest in self-supervised learning on video, 3D geometry, differentiable simulation, and model-based planning (a short self-supervised sketch follows this list).
- Data strategy: Products that capture safe, consented multimodal signals (video, spatial, interaction) will gain an edge for training and fine-tuning.
- Tooling: Competence in PyTorch, JAX, efficient video pipelines, and evaluation for prediction/control tasks will be increasingly valuable.
- Hiring pulse: Watch for early roles at LeCun's venture and at labs prioritizing perception, dynamics, and autonomous decision-making.
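As a starting point for the self-supervised-video bullet above, here is one common recipe, time-contrastive learning with an InfoNCE loss in PyTorch: frames sampled a few timesteps apart from the same clip count as positives, and everything else in the batch counts as negatives. The encoder and batch shapes are assumptions for illustration, not a specific library's API.

```python
# Time-contrastive self-supervised loss for video (illustrative sketch).
import torch
import torch.nn.functional as F

def info_nce(anchor_emb: torch.Tensor, positive_emb: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """anchor_emb, positive_emb: (batch, dim) embeddings of frame pairs
    drawn a few timesteps apart from the same clip."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)
    # Diagonal entries are the true same-clip pairs; the rest are negatives.
    return F.cross_entropy(logits, labels)

# Usage with any image/video encoder you already have:
#   loss = info_nce(encoder(frames_t), encoder(frames_t_plus_k))
```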
Practical next steps
- Skill up on self-supervised learning, video representation learning, 3D/SLAM basics, and model-based RL. If you're curating a curriculum or retraining a team, start there.
- Prototype agents that learn from short video clips and spatial traces, then test planning on constrained tasks (robotics sims, UI agents, or warehouse flows); a minimal planner sketch follows this list.
- Audit your data pipeline for legal/ethical collection of multimodal signals. Add synthetic data from simulators where real data is scarce or sensitive.
- For educators: build modules around perception + dynamics rather than text-only tasks. Students will need both.
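For the prototyping item flagged above, a first planner can be very simple. Here is a random-shooting sketch over a learned dynamics model; `dynamics` and `reward` are hypothetical callables standing in for whatever your world model and task provide.

```python
# Random-shooting planner over a learned dynamics model (hedged sketch).
import torch

def plan(state: torch.Tensor, dynamics, reward,
         horizon: int = 10, n_candidates: int = 256, action_dim: int = 4):
    """state: (1, state_dim) current latent state.
    dynamics(states, actions) -> next states; reward(states) -> (N,) scores.
    Both are assumed interfaces, not a specific library's API."""
    actions = torch.randn(n_candidates, horizon, action_dim)
    states = state.expand(n_candidates, -1)
    returns = torch.zeros(n_candidates)
    for t in range(horizon):
        states = dynamics(states, actions[:, t])   # imagined rollout step
        returns = returns + reward(states)         # accumulate predicted reward
    best = returns.argmax()
    return actions[best, 0]                        # execute only the first action
```

Upgrading to the cross-entropy method or gradient-based trajectory optimization is a natural next step once this loop works on a toy task.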
If you want structured paths to build these skills, browse AI courses by skill here: Complete AI Training: Courses by Skill.
The takeaway
LeCun helped kickstart the deep learning era with CNNs. His next act tests whether agents that learn from the world (through video, space, and interaction) can go further than text-driven systems.
Whether you write code, lead teams, or teach the next cohort, this is the moment to expand beyond text models and build competence in perception, prediction, and control.