Blueprint Over Data? Biologically Inspired AI Shows Brain-Like Activity Before Training
New research from Johns Hopkins University reports that certain biologically inspired AI architectures can mimic human brain activity before any training. The work, published in Nature Machine Intelligence, suggests that the blueprint you choose may matter more than months of training, huge energy bills, and massive datasets, at least for early visual processing.
The short version: Design choices can set powerful priors. For visual systems, the right structure can place a model closer to how cortex processes images from the start.
What the team tested
The researchers compared three model families commonly used in modern AI: transformers, fully connected networks, and convolutional neural networks (CNNs). They systematically modified each architecture to create dozens of untrained models and then presented them with images of objects, people, and animals.
They compared those model responses to brain activity patterns in humans and nonhuman primates shown the same images. The goal: identify which architectures produce brain-like activity de novo, without learning.
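The article does not spell out the alignment metric the team used, but representational similarity analysis (RSA) is one common way to compare model responses with neural recordings. The following is a minimal sketch under that assumption, using hypothetical `model_acts` and `brain_resps` arrays (stimuli by units and stimuli by recording sites); it is an illustration of the general technique, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix: correlation distance
    between every pair of stimulus response patterns (rows = stimuli)."""
    return pdist(responses, metric="correlation")

def brain_alignment(model_acts: np.ndarray, brain_resps: np.ndarray) -> float:
    """Spearman correlation between model and brain RDMs; higher values
    mean the model's representational geometry is more brain-like."""
    rho, _ = spearmanr(rdm(model_acts), rdm(brain_resps))
    return float(rho)

# Hypothetical data: 200 images, 512 model units, 300 recorded sites.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 512))   # replace with real model activations
brain_resps = rng.standard_normal((200, 300))  # replace with real neural responses
print(f"RSA brain alignment: {brain_alignment(model_acts, brain_resps):.3f}")
```

With random inputs the score hovers near zero; the interesting comparison is how much higher an untrained model scores against real neural data.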
Key results
- Scaling transformers and fully connected networks (adding many more artificial neurons) produced little change in brain alignment.
- By contrast, similar architectural tweaks to CNNs produced activity patterns that more closely matched the human visual cortex.
- Untrained CNNs rivaled conventionally trained models in brain-alignment metrics, indicating that architecture can outperform sheer data and compute in setting strong initial priors.
The implication is direct: if massive training were the only path to brain-like representations, architecture-only changes shouldn't get you there. Yet they did, at least for vision.
Why this matters for science and research teams
- Cost and energy: Better blueprints can cut the need for billion-image training runs just to reach sensible early representations.
- Sample efficiency: Starting closer to cortex may accelerate downstream learning with less data.
- Model selection: For visual tasks, prioritize architectures with inductive biases that align with biological vision.
- Evaluation: Add brain-alignment checks to your pretraining diagnostics. If an untrained model already aligns well, you likely picked a strong starting point.
Practical guidance
- If your pipeline relies on visual perception, benchmark untrained CNN variants for brain alignment before large-scale training (see the sketch after this list).
- Use architectural search focused on biologically plausible constraints (e.g., locality, receptive fields) alongside standard performance goals.
- Revisit compute budgets: a stronger prior can reduce epochs, dataset size, or both, especially for early-stage representation learning.
- Track how alignment shifts with light-touch learning rules; the authors plan to explore simple, biology-inspired algorithms next.
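As a concrete starting point for the first bullet above, the sketch below instantiates untrained CNN variants with torchvision (passing weights=None keeps the random initialization) and captures activations from an intermediate layer with a forward hook. The layer choices and the random stimulus batch are illustrative placeholders; in practice you would pass your own stimuli and feed the extracted features into an alignment metric such as the RSA sketch earlier.

```python
import torch
import torchvision.models as tvm

def untrained_features(model: torch.nn.Module, layer: torch.nn.Module,
                       images: torch.Tensor) -> torch.Tensor:
    """Run images through a model and capture one layer's activations
    via a forward hook, flattened to (n_images, n_features)."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["acts"] = output.detach()

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        model(images)
    handle.remove()
    return captured["acts"].flatten(start_dim=1)

# Untrained CNN variants: weights=None leaves parameters at random init.
variants = {
    "resnet18": tvm.resnet18(weights=None),
    "vgg11": tvm.vgg11(weights=None),
}

# Hypothetical stimulus batch: 16 RGB images at 224x224 resolution.
images = torch.randn(16, 3, 224, 224)

for name, net in variants.items():
    # Layer choice is illustrative; a real benchmark would sweep layers.
    layer = net.layer3 if name == "resnet18" else net.features
    feats = untrained_features(net, layer, images)
    print(name, tuple(feats.shape))  # pass feats into a brain-alignment metric
```

Comparing several untrained variants this way is cheap relative to full training and gives an early read on which architectural priors are worth scaling up.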
Open questions
- Does this effect extend beyond vision to audition, language, or multimodal processing?
- How do architecture-derived priors interact with different training curricula and objectives?
- Can lightweight, biologically motivated updates preserve alignment while improving task generalization?
Citation
Nature Machine Intelligence: "Convolutional architectures are cortex-aligned de novo" (13 November 2025). DOI: 10.1038/s42256-025-01142-3
Next steps
The team is now developing simple, biology-inspired learning algorithms that could inform a new deep learning framework. If successful, this line of work could redefine how we kick off training: less brute force, more thoughtful design.
Want to upskill your team on architecture-first AI? Explore curated options by role at Complete AI Training.