How Thinking Machines Lab Is Making AI Responses Reproducible

Thinking Machines Lab tackles AI response randomness by controlling GPU kernels during inference for more consistent outputs. Their work aims to improve reliability and reinforcement learning.

Published on: Sep 11, 2025

Thinking Machines Lab’s Approach to AI Model Reproducibility

Thinking Machines Lab, backed by $2 billion in seed funding and an all-star team of former OpenAI researchers, is drawing attention for its focus on creating AI models that produce reproducible responses. Their recent blog post offers insight into one of their key projects tackling nondeterminism in AI model outputs.

Why AI Responses Vary

Anyone who has asked ChatGPT the same question multiple times knows the answers can differ quite a bit. Some of that variation is deliberate sampling, but outputs can vary even when sampling is turned down, which is why current AI models are generally treated as non-deterministic and the randomness has been accepted as normal. Thinking Machines Lab believes this problem can be fixed.

Their research identifies the way GPU kernels—the small programs running on Nvidia's chips—are combined during inference as the main source of randomness. Inference is the process of generating a response after you submit a prompt to an AI model. By controlling how these GPU kernels are orchestrated, the lab argues, AI outputs can become more consistent and predictable.
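To see why kernel orchestration matters at all, consider that floating-point addition is not associative: summing the same numbers in a different order can produce different results. Parallel GPU kernels may accumulate values in a different order from one run to the next, letting tiny numerical differences creep into otherwise identical computations. This minimal Python sketch illustrates the underlying arithmetic effect; it is a simplified analogy, not the lab's actual technique.

```python
# Floating-point addition is not associative: the same inputs summed in
# a different order can yield different totals. Reordered parallel
# reductions on a GPU exhibit the same effect at scale.
values = [0.1, 1e16, -1e16, 0.2]

left_to_right = sum(values)      # 0.1 is absorbed by 1e16, leaving 0.2
reordered = sum(sorted(values))  # both small terms are absorbed, leaving 0.0

print(left_to_right)  # 0.2
print(reordered)      # 0.0
print(left_to_right == reordered)  # False: same inputs, different sums
```

In a large model, billions of such accumulations happen per token, so even bit-level differences can snowball into a different next-token choice and ultimately a different response.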

Benefits of Reproducible AI Models

  • Reliable Responses: Enterprises and researchers can depend more on AI outputs if they are consistent.
  • Improved Reinforcement Learning (RL): RL trains AI by rewarding correct answers, but inconsistent outputs create noise. More reproducible responses could smooth out training and improve results.

Thinking Machines Lab intends to customize AI models for businesses using RL, as previously reported. Their first product, expected soon, aims to help researchers and startups develop custom models. It’s not yet clear if it will incorporate these reproducibility techniques.

Commitment to Open Research

The lab plans to frequently share blog posts, code, and research updates to benefit the public and improve their internal research culture. Their new blog series, “Connectionism,” marks the start of this transparency effort. This contrasts with some larger AI companies that have become more closed off over time.

What This Means for AI Development

This peek into one of Silicon Valley's most secretive AI startups shows Thinking Machines Lab addressing one of the core challenges in AI: unpredictability in model responses. The real test lies in whether it can solve this problem and build practical products that justify the reported $12 billion valuation attached to its seed round.

For developers and IT professionals interested in AI model consistency and advanced training techniques, keeping an eye on Thinking Machines Lab’s progress could provide valuable insights. To explore more AI courses and resources that cover topics like model training and reinforcement learning, check out Complete AI Training’s latest courses.