U.S. AI Edge Slips as Closed Labs Stall and China Opens Up

U.S. AI leadership is slipping as top labs go quiet, slowing idea flow and weakening network effects. China's open releases are catching up fast, prompting calls to share more.

Categorized in: AI News, Science and Research
Published on: Nov 16, 2025

Declining openness puts U.S. AI leadership at risk

The U.S. is losing ground in AI because its top labs have gone quiet. That's the warning from Andy Konwinski, co-founder of Databricks, who called the shift an existential threat to both innovation and democratic norms.

His core argument is simple: progress in AI has always come from open exchange. Shut the doors, and you slow the rate of new ideas, the quality of peer feedback, and the speed at which research compounds.

The openness gap

Major U.S. labs (OpenAI, Meta, Anthropic) are still shipping impressive work, but much of it is now proprietary. The talent market reinforces the trend: top PhD candidates are pulled into private labs with compensation universities can't match.

Konwinski put it bluntly: "If you talk to PhD students at Berkeley and Stanford in AI right now, they'll tell you that they've read twice as many interesting AI ideas in the last year that were from Chinese companies than American companies."

Meanwhile, China's ecosystem is actively promoting open research. Labs such as DeepSeek and Alibaba's Qwen team are encouraged to release work in ways others can build on, accelerating iteration and adoption.

Perception vs. reality

Many Americans assume U.S. AI leadership is unassailable. Yet the Belfer Center argues China is now a full-spectrum peer in economic and national security applications of AI, and is succeeding at scaling its capabilities.

The Belfer Center for Science and International Affairs has repeatedly highlighted trends that contradict the belief that China is merely a "near-peer."

Output is high, diffusion is low

The U.S. led model output in 2024 with 40 notable models, followed by China (15) and Europe (3). But the performance gap is closing, and Chinese models are being tested globally as affordable alternatives to U.S. offerings.

Here's the problem for researchers: when leading labs restrict access, downstream experimentation, reproducibility, and tooling improvements stall. The network effects that once made U.S. AI dominant weaken.

What this means for scientists and research leads

  • Idea flow beats secrecy for compounding progress. Preprints, ablations, and negative results still move the field.
  • Even partial releases matter. If you can't ship weights, ship eval harnesses, training logs, data cards, and reproducibility checklists.
  • Cross-pollination is leverage. Seminars, open colloquia, and short-term exchanges revive the "scientists talking to scientists" loop.
  • Procurement and funding can reward openness. Tie grants and contracts to documented contributions to shared benchmarks or tools.

Konwinski's counter-strategy

To push back on the closed-door trend, Konwinski is backing the Laude Institute, which offers grants and runs a venture fund alongside NEA veteran Pete Sonsini and Antimatter CEO Andrew Krioukov. The focus: fund academic AI innovation and help it ship.

It's a practical approach: support researchers directly, and create pathways that don't force talent to choose between publication and payroll.

Why China is catching up

China produces a high volume of AI publications and patents and is fielding models that compete with U.S. systems. Smaller firms are shipping fast, and global buyers are testing Chinese LLMs for cost-performance trade-offs.

The difference isn't just raw output. It's the compounding effect of open releases that let others test, fine-tune, and deploy at speed.

Actions labs and funders can take now

  • Adopt a default-to-share policy: preprint first, then staged releases of code, evals, and weights where safe.
  • Fund compute fellowships for academia; require open artifacts as deliverables.
  • Stand up open, hard-to-game benchmarks with prize tracks and transparent leaderboards.
  • Support cross-lab residencies and sabbaticals that include explicit knowledge transfer goals.
  • Standardize release packs: model cards, data statements, safety notes, and reproducibility scripts.
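To make the "standardize release packs" point concrete, here is a minimal sketch of an automated check a lab could run before publishing. The file names and layout are hypothetical, chosen only to illustrate the artifacts listed above; they are not a published standard.

```python
from pathlib import Path

# Hypothetical release-pack layout; the file names below are illustrative,
# not an established convention.
REQUIRED_ARTIFACTS = [
    "MODEL_CARD.md",      # intended use, limitations, eval results
    "DATA_STATEMENT.md",  # data sources, licensing, known gaps
    "SAFETY_NOTES.md",    # red-team findings, usage caveats
    "reproduce.sh",       # end-to-end reproducibility script
]

def check_release_pack(root: str) -> list[str]:
    """Return the required artifacts missing from a release directory."""
    base = Path(root)
    return [name for name in REQUIRED_ARTIFACTS if not (base / name).exists()]

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        # Stub out one artifact; the check reports the rest as missing.
        (Path(d) / "MODEL_CARD.md").write_text("# Model card stub\n")
        print(check_release_pack(d))
```

A check like this is cheap to wire into CI, so an incomplete release pack blocks publication the same way a failing test would.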

What to watch next

  • Rate of high-quality open-source releases vs. closed reports from top U.S. labs.
  • Academic-to-industry brain flow and the health of university labs.
  • Adoption of Chinese models in enterprise workloads and research baselines.
  • Funding mechanisms that reward openness, not just leaderboard wins.

If your team needs a structured way to stay current and build hands-on skill, browse curated AI courses by job role here: Complete AI Training - Courses by Job.

