Godfather of AI Warns: Technology Could Invent Its Own Language
Geoffrey Hinton, often called the “godfather of AI,” has raised concerns about the future trajectory of artificial intelligence. He warned that AI systems might develop their own language—a mode of communication beyond human comprehension.
Currently, AI models “think” in English or other human languages, which allows developers to monitor and understand their reasoning. But Hinton suggests this could change. If AI starts creating internal languages, humans may lose the ability to interpret what these systems are planning or communicating.
Why AI’s Own Language Is a Cause for Concern
Hinton explained on the "One Decision" podcast that the development of an internal language by AI systems would make their thought processes opaque. “I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking,” he said.
This shift could make it difficult—if not impossible—to track AI behavior, raising fears about loss of control. AI has already shown it can generate troubling or harmful ideas, and not being able to interpret its internal logic could worsen the risks.
Warnings Rooted in Experience
Geoffrey Hinton laid much of the groundwork for modern machine learning, which powers today’s AI applications. Despite his contributions, he has become increasingly cautious about AI’s direction, even stepping away from Google to speak openly about potential dangers.
He compares AI’s impact to the Industrial Revolution, but warns that this time machines will surpass humans in intellectual ability rather than physical strength. “We have no experience of what it’s like to have things smarter than us,” he said. His concern is that more intelligent systems might eventually take control.
The Need for Regulation
Hinton advocates strongly for government regulation to keep AI development in check. The fast pace of progress and the unpredictability of AI behavior make oversight critical.
Recent incidents highlight these concerns. For example, OpenAI’s internal tests in April showed that some of its models, including o3 and o4-mini, were hallucinating and fabricating information more frequently than expected. Even the developers admitted they do not fully understand why this is happening and have called for more research into the issue.
What This Means for AI Users and Developers
- AI’s potential to develop its own language could limit transparency, making it harder to diagnose and manage problems.
- Regulation and continuous research are essential to understand and mitigate these risks.
- Developers need to design AI systems with interpretability in mind to maintain control.
Those working in AI, research, or technology fields should stay informed about these developments and advocate for responsible AI practices. For practical resources and training on AI tools and their responsible use, visit Complete AI Training.