Do we have seven senses? A seven-dimensional model of memory and what it means for AI and robotics
Skoltech researchers propose a mathematical model in which memory works best in a seven-dimensional conceptual space. In this framework, each concept is encoded by features, akin to senses, that define its position in a mental space. The model shows that engrams, the basic units of memory, evolve toward a steady state, and that memory capacity peaks when concepts are described by seven features. The takeaway: broader, well-chosen sensory inputs can increase how many distinct concepts a system can store without confusion.
What the model actually says
An engram is a sparse set of neurons that fire together for a given concept. In the model, each concept is a point in a conceptual space whose dimensions correspond to features (for humans, senses; for machines, sensor or feature channels). Over time, the distribution of engrams settles into a stable pattern.
When you maximize the number of distinct engrams that can be stored with minimal interference, the optimum occurs at seven dimensions. This result appears robust to specifics of the stimuli or feature distributions. Similar engrams that cluster around a center are counted as one concept, which reflects how we treat closely related memories in practice.
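The merging of similar engrams into a single concept can be illustrated with a toy sketch. This is not the paper's model; it simply treats engrams as points in a d-dimensional conceptual space and greedily merges any point that falls within a similarity threshold of an existing concept center:

```python
import numpy as np

def count_distinct_concepts(engrams: np.ndarray, threshold: float) -> int:
    """Greedily merge engrams closer than `threshold` and count the clusters.

    Toy illustration (not the paper's model): each row of `engrams` is a
    point in a d-dimensional conceptual space.
    """
    centers = []
    for e in engrams:
        # An engram within `threshold` of an existing center is treated
        # as the same concept; otherwise it founds a new one.
        if not any(np.linalg.norm(e - c) < threshold for c in centers):
            centers.append(e)
    return len(centers)

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 7))   # 200 engrams in a 7-D conceptual space
n = count_distinct_concepts(points, threshold=2.0)
print(n)  # distinct concepts after merging near-duplicates
```

Tightening the threshold merges more engrams and lowers the count, which is the knob the experimental plan below varies.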
Why seven matters for system design
For cognitive function, biological or artificial, feature diversity matters. Too few features bottleneck capacity; too many add redundancy and interference. The model suggests aiming for seven independent feature families to maximize the number of distinct, stably retrievable concepts.
Practical choices for robotics and AI
- Define seven independent feature families: e.g., appearance (color/texture), 3D geometry/depth, sound, tactile/pressure, temperature/thermal, chemical/air quality, and proprioception/force-torque. Treat each family as one "sense" with internally related channels.
- Engineer decorrelation: Use PCA/ICA or learned projections to reduce redundancy across families. Penalize cross-family correlations during training.
- Stabilize memory: Use attractor-like memory (e.g., modern Hopfield layers) or vector databases with consolidation routines. Periodically rehearse or distill to maintain a steady engram distribution.
- Manage interference: Apply sparsity, orthogonal regularization, and contrastive objectives to keep concepts distinct in the conceptual space.
- Fuse late, decide jointly: Maintain separate encoders per family, fuse representations near the decision stage to preserve independence while still enabling joint inference.
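The decorrelation step above can be sketched with plain NumPy. PCA whitening and a mean-absolute-correlation penalty are illustrative stand-ins for whatever learned projections and training losses a real pipeline would use:

```python
import numpy as np

def whiten(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """PCA-whiten a (samples, dims) matrix so its components are decorrelated."""
    x = features - features.mean(axis=0)
    cov = (x.T @ x) / (len(x) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return x @ eigvecs / np.sqrt(eigvals + eps)

def cross_family_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute correlation between two feature families.

    Could serve as a penalty term to discourage cross-family redundancy.
    """
    c = np.corrcoef(a.T, b.T)[: a.shape[1], a.shape[1]:]
    return float(np.abs(c).mean())

rng = np.random.default_rng(1)
vision = rng.normal(size=(500, 8))
depth = 0.9 * vision[:, :4] + 0.1 * rng.normal(size=(500, 4))  # redundant with vision
print(cross_family_correlation(vision, depth))  # high relative to an independent family
```

A highly correlated pair of families contributes less effective dimensionality than two independent ones, which is exactly the failure mode the caveats section warns about.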
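The attractor-style memory can likewise be sketched with a classic Hopfield network. Modern Hopfield layers generalize this considerably, but the toy version already shows the store-then-settle behavior: a corrupted probe relaxes toward the nearest stored engram.

```python
import numpy as np

def store(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weight matrix for +/-1 patterns (one pattern per row)."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W: np.ndarray, probe: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iterate synchronous updates toward the nearest stored attractor."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

rng = np.random.default_rng(2)
patterns = rng.choice([-1.0, 1.0], size=(5, 100))   # 5 engrams over 100 neurons
W = store(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                                    # corrupt 10% of the neurons
recovered = recall(W, noisy)
print((recovered == patterns[0]).mean())            # fraction of neurons restored
```

At this low load (5 patterns over 100 neurons, well under the classic ~0.14N capacity), retrieval is essentially exact; pushing the load higher is one way to observe the interference the model is concerned with.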
Experimental plan for researchers
- Simulate concept learning with controllable feature dimensionality d ∈ {3,…,12}. Measure capacity (distinct concepts retrievable at fixed error), interference, and stability over time.
- Test different encoder designs: independent encoders vs. shared backbone, with/without decorrelation losses.
- Stress-test with noise, missing modalities, and distribution shifts to see if the seven-feature setup maintains an advantage.
- Vary the "similarity threshold" that merges nearby engrams and observe how optimal d shifts under different application tolerances.
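A minimal harness for the first experiment might look like the following. The nearest-prototype retrieval model is an assumption standing in for the paper's engram dynamics, so this measures capacity trends under noise rather than reproducing the seven-dimension optimum:

```python
import numpy as np

def retrieval_error(d: int, n_concepts: int, noise: float,
                    trials: int = 200, rng=None) -> float:
    """Fraction of noisy probes assigned to the wrong concept prototype.

    Toy scaffold: concepts are random unit vectors in a d-dimensional
    conceptual space; retrieval is nearest-prototype matching by dot product.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    protos = rng.normal(size=(n_concepts, d))
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    errors = 0
    for _ in range(trials):
        target = rng.integers(n_concepts)
        probe = protos[target] + noise * rng.normal(size=d)
        guess = np.argmax(protos @ probe)
        errors += guess != target
    return errors / trials

for d in range(3, 13):   # sweep dimensionality as in the plan above
    print(d, retrieval_error(d, n_concepts=50, noise=0.3))
```

Extending this scaffold with interference between stored concepts and a consolidation step over time would bring it closer to the quantities the study actually optimizes.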
Why this could aid robotics
Consider bin-picking or mobile manipulation. A seven-family sensor suite (vision, depth/geometry, audio, tactile, thermal, chemical, and proprioceptive force) can reduce confusion between look-alike objects or states. You get richer differentiation without drowning the system in redundant inputs that collide in memory.
For autonomous platforms, this setup also supports graceful degradation. If one family fails (e.g., vision in low light), the remaining six still span the conceptual space well enough to maintain performance.
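A sketch of that fallback, assuming per-family embeddings of equal dimension. The encoder outputs here are random stand-ins; the point is that late fusion re-normalizes over whichever families remain online instead of feeding zeros downstream:

```python
import numpy as np

def fuse(embeddings: dict, available: set) -> np.ndarray:
    """Late fusion that averages only the feature families currently online."""
    live = [embeddings[k] for k in embeddings if k in available]
    if not live:
        raise ValueError("no sensor families available")
    return np.mean(live, axis=0)

families = ["vision", "depth", "audio", "tactile", "thermal", "chemical", "proprio"]
rng = np.random.default_rng(3)
emb = {f: rng.normal(size=16) for f in families}   # stand-in encoder outputs

full = fuse(emb, set(families))
degraded = fuse(emb, set(families) - {"vision"})   # vision offline in low light
print(np.linalg.norm(full - degraded))             # a shift, not a hard failure
```

Because each family was encoded independently, dropping one leaves the remaining six spanning most of the conceptual space, which is the graceful-degradation claim in practical form.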
Caveats and open questions
- The model is abstract. Seven refers to dimensionality of concept features, not necessarily seven physical sensors. Multiple physical sensors can map to one feature family.
- Independence matters. If families are strongly correlated (e.g., RGB and HSV without decorrelation), effective dimensionality drops.
- Task constraints can shift the sweet spot. If you must merge very similar items, the capacity metric changes and the optimal dimensionality may shift slightly.
Further reading
- Scientific Reports (journal) - Publication venue referenced in the study.
- Engram (neuropsychology) - Background on memory engrams.