Empire of AI: Two Paths Society Can Take
The University's Digital Technology for Democracy Lab hosted technology journalist Karen Hao to discuss her book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." The evening opened with a talk, moved into a conversation with Media Studies Prof. Seth C. Lewis and Mona Sloane, and ended with a book signing. The focus: what happens when an ambitious tech sector collides with the public interest - and what we can do about it.
Hao's thesis is blunt. The AI sector - centered on firms like OpenAI - looks less like a typical industry and more like an empire forming in real time. There is still time to steer where this goes, but the window is narrowing. The choice isn't technical; it's political, economic, and cultural.
Inside the "empire" logic
Hao described a Silicon Valley mindset convinced it is building systems for the greater good. Around AGI, she noted two quasi-religious camps: the "boomers," who foresee a positive transformation, and the "doomers," who foresee a catastrophic one. Neither forecast rests on hard evidence, and both extremes, she argued, pull attention away from present-day costs and the concentration of control.
She pointed to Sam Altman's stated ambition to build 250 gigawatts of data center capacity by 2033 - a roughly $10 trillion build-out that would consume about fifty times as much power as New York City. "People that are making the AI are using it as cover to ultimately take control of land, take control of our power lines, take control of our water lines, take control of all of our data," Hao said. The stakes aren't abstract; they're about infrastructure and who owns it.
Policy, campuses, and work: pressure points
Dean Christa Acampora underscored a core tension: AI can help or harm, and the liberal arts remain vital for context, ethics, and judgment. Education isn't a niche skill set; it's training in how to tell fact from fiction and how to live together wisely. Hao echoed the need for public rules - and public imagination - to decide what AI serves.
"Policymakers can implement strong data privacy and transparency rules and update intellectual property protections to return people's agency over their data and work," Hao said. She urged people to resist industry narratives that hide social and environmental costs behind a promise of inevitable progress.
Lewis captured the mood on campus: "It feels kind of like a runaway train." Referencing a recent Stanford study, Hao noted early labor-market signals: a 13 percent decline since late 2022 in employment among 22- to 25-year-olds in AI-exposed jobs. Lewis added that "supervising 50 AI agents" may soon replace supervising large human teams, reframing the value of human labor.
Students are stepping in. Fourth-years Celia Calhoun and Owen Kitzmann described the Student Technology Council Project, a student-led effort to co-create data and technology policy with administrators. "We need to put students at the forefront of conversations around data and technology at the University," Kitzmann said.
Second-year Rishi Chandra, after a course on democracy and AI, now hosts workshops with the club Societal AI. His takeaway: individuals still have agency. "We as people also have the power to reframe how we think about AI," he said.
Practical next steps
- For policymakers: Enact data privacy and transparency standards; modernize IP protections for training data; require disclosure of compute, energy, and water use for large models.
- For universities: Establish student-faculty councils for tech governance; codify data retention and vendor risk policies; publish clear guidelines on AI use in learning and research.
- For labs and companies: Audit datasets and provenance; track environmental costs; red-team for social risks; align incentives with measured public benefits, not just scale.
- For professionals: Build clear AI literacy across teams, secure internal data, and redesign workflows where tools augment - not replace - expert judgment.
The decision in front of us
Hao argued that Big Tech is selling an "everything machine" - a promise that AGI will fix cancer, climate, and more. That promise can justify massive extraction of data, land, water, and energy without adequate consent or oversight. "If someone comes up to you and is like, 'I have a solution for all of your problems, and the price is everything you have ever owned … and $10 trillion,' you realize what's actually happening," she said.
The fork is clear. Either we let concentrated interests set the terms, or we build guardrails that center human needs, rights, and evidence. The second path won't happen by default.