OpenAI targets 2028 for fully autonomous AI researcher
OpenAI is redirecting its research efforts toward building an independent AI system capable of solving complex scientific and technical problems without human intervention. Chief scientist Jakub Pachocki said the project now serves as the company's primary long-term objective.
The system, called an "AI researcher," combines advances in reasoning models, autonomous agents, and interpretability tools. OpenAI plans to develop an "autonomous AI research intern" by September that can complete limited research tasks independently, with a more advanced multi-agent platform targeted for 2028.
What the system could do
The platform could tackle problems in mathematics, physics, biology, chemistry, business, and policy: essentially any challenge described through text, code, or diagrams. Pachocki described the potential end state as "a whole research lab in a data center."
Pachocki told MIT Technology Review that recent progress suggests AI models may soon work for extended periods with minimal guidance. "I think we are getting close to a point where we'll have models capable of working indefinitely in a coherent way just like people do," he said.
Building on existing tools
OpenAI's work builds on systems like Codex, an agent-based coding tool that already performs tasks automatically, analyzes documents, and generates reports. Future versions would handle longer tasks and manage multiple subtasks without supervision.
Across the industry, coding agents have demonstrated that AI can manage complex workflows. Researchers caution, however, that errors compound when tasks chain together, making long-term scientific research difficult to execute reliably.
Safety concerns drive parallel effort
OpenAI is studying safety risks tied to highly autonomous systems, including misuse, hacking, and unintended behavior. One approach monitors the reasoning steps of AI models in real time, allowing researchers to track decisions as they happen.
Pachocki said safeguards will become essential as systems grow more capable. "If you believe that AI is about to substantially accelerate research, that's a big change in the world, and it comes with serious unanswered questions," he said.
The shift reflects intensifying competition from Anthropic and Google DeepMind, putting pressure on OpenAI to define what comes after large language models. While Pachocki said systems matching human intelligence across all domains remain unlikely in the near term, even less-capable AI could produce significant economic impact.