Jaron Lanier: AI Is Human Collaboration, Not Alien Intelligence
Jaron Lanier told a standing-room-only audience at Brown University that the tech industry has fundamentally misunderstood what artificial intelligence is. The celebrated scientist, futurist and writer said most people - especially those building AI systems in Silicon Valley - treat AI as an autonomous entity rather than what it actually represents: large-scale human collaboration.
"Normally we talk about AI as a thing," Lanier said during his April 23 lecture at Brown's Engineering Research Center. "The AI did this; the AI did that. But it's a collaboration of humans."
The Erasure Problem
Language models train on vast amounts of text created by scientists, writers, entertainers and others. Lanier argues that erasing those human contributions from AI output distorts how people understand the technology. The perception of AI as "this new super alien angel who's going to come and save us or kill us" obscures its true nature: "a new, very high-level, very large-scale form of cooperation."
Lanier is one of the founding scientists of virtual reality who worked at Atari and currently serves as prime unifying scientist at Microsoft Research. He is also a vocal critic of Silicon Valley, arguing that people should be compensated for the data they contribute to platforms like Google and that social media manipulates its users.
Criticism Targets Context, Not Core Technology
Lanier's critique focuses on the ecosystem surrounding AI, not the technology itself. "This is a criticism of all the surrounding stuff - the cultural, psychological, spiritual, economic and political conundra around it sucks," he said. "The actual tool, the actual thing at the core, I'm kind of down with it."
He sees potential in how AI aggregates and processes human knowledge, but he argued that the scientific tradition of citations and references must remain intact. "We don't erase each other. We need that chain - not just because it's fair, not just because it keeps our enterprise going, not just because it's decent and ethical - but because without that chain of thought, we can't have core reality."
An Overlooked Scientific Question
Lanier highlighted a largely unexamined question: Why does manipulating language generate creative output in AI systems? "It's essentially doing statistics on word order and distance in a big amount of data, and then using that to create a new thing," he said. "Isn't it surprising that natural language can support this rather simple thing and get this result?"
This gap in understanding suggests deeper complexity in how language itself works. "Shouldn't we be thinking to ourselves, 'Wow, there's kind of more going on with natural language than we realize?'"
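To make the "statistics on word order" idea concrete, here is a toy sketch in Python: it counts which words follow which in a small human-written text and samples a new sequence from those counts. The corpus string and variable names are illustrative assumptions, not anything from the lecture, and real language models replace this counting with large neural networks trained on vastly more human writing.

```python
# Toy sketch of the mechanism Lanier describes: gather statistics on
# word order from human-written text, then sample "a new thing" from them.
import random
from collections import defaultdict

# Illustrative corpus (assumed for this example, not from the lecture).
corpus = (
    "the khaen inspired the organ and the organ inspired the player piano "
    "and the player piano inspired the computer"
).split()

# Bigram statistics: for each word, record every word observed after it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# Generate new text by repeatedly sampling a plausible next word.
random.seed(0)  # fixed seed so the run is reproducible
word = "the"
output = [word]
for _ in range(12):
    candidates = following.get(word)
    if not candidates:  # dead end: no word ever followed this one
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

Even at this toy scale, every word the sampler can emit was put there by a person; scaling up changes the statistics, not that underlying fact.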
For students tempted to use AI to write their assignments, Lanier's advice was blunt: "Don't do it. It's bad for you."
The Musical Interlude
Lanier's lecture concluded with a jazz performance alongside physics professor Stephon Alexander, percussionist Jesús Andujar and bassist Donnie Aikins. During the set, Lanier played a khaen, an ancient Southeast Asian mouth organ made of bamboo pipes.
He traced a chain of innovation from the khaen to modern instruments like the piano, then to the player piano, whose punched-roll automation he connected to Charles Babbage's concept of the digital computer. "So this was the origin and the thing to blame for it all," Lanier said with a laugh, holding up the khaen.
The lecture was the second annual Leon Cooper Lecture, sponsored by the Brown Center for Theoretical Physics and Innovation and the Office of the Provost. The series honors the late Nobel laureate and Brown physics professor by bringing in speakers whose perspectives cross academic boundaries.
For researchers working with AI systems, understanding attribution chains and the human labor embedded in training data remains central to maintaining scientific integrity.