MemoraX AI Closes Seed Round to Build Memory Systems for Large Language Models
MemoraX AI, a Shenzhen-based startup, has closed a seed round worth tens of millions of dollars, led by L2F Light Source Entrepreneurs Fund and Zhongding Capital. The company is building what it calls "endogenous memory": memory capabilities built into language models rather than bolted on as external retrieval systems.
The funding will support development of the company's core technology, Agentic RL (Agent Reinforcement Learning), and the launch of standardized memory products for enterprise knowledge management and personal AI assistants within the next 12 months.
The Problem: Language Models Forget
Large language models struggle with long-term memory. Current solutions treat memory as external retrieval, essentially giving the AI a notebook it can search through but never truly internalize. This creates fragmented recall, static information storage, and difficulty applying learned patterns across different scenarios.
MemoraX AI's approach differs. Instead of attaching memory to a model, the company internalizes memory as a native capability. The model learns, refines, and updates memory dynamically through interaction, similar to how humans remember.
The Team: Academia Meets Industrial Scale
Founder Haojianye is a professor at Tianjin University and ranks in the top 2% of scientists globally by citations. He previously led algorithm research at Huawei's large-model laboratory and served as technology president of Huawei's medical division, overseeing projects that generated tens of billions of yuan in economic value.
The core team combines researchers from top AI conferences with engineers from Huawei, Alibaba, and Tencent who have built hundred-billion-parameter models and deployed AI agents in production. Team members have shipped reinforcement learning technology in chip design automation, autonomous vehicles, game AI, and advertising systems.
What MemoraX Built
The company's technology achieves three capabilities:
- Continuous evolution: Memory updates and reorganizes through interaction rather than remaining static.
- Accurate recall: On the LoCoMo-Refined benchmark, MemoraX's approach outperformed the second-place solution by 30%, and training efficiency improved 400-fold.
- Cross-scenario generalization: Memory learned in one context transfers to enterprise knowledge management, digital companions, coding assistants, and other applications.
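The "continuous evolution" capability above can be sketched as a store that consolidates rather than appends. This is a hypothetical illustration under assumed names, not the company's Agentic RL system:

```python
# Hypothetical sketch of evolving memory: each interaction merges new
# information into a consolidated profile instead of appending raw text.

class EvolvingMemory:
    def __init__(self):
        self.profile = {}                 # topic -> consolidated facts

    def interact(self, topic, fact):
        facts = self.profile.setdefault(topic, [])
        if fact not in facts:             # update, don't duplicate
            facts.append(fact)
        # Reorganize: the most reinforced topics surface first.
        self.profile = dict(
            sorted(self.profile.items(), key=lambda kv: -len(kv[1])))

    def recall(self, topic):
        return self.profile.get(topic, [])

m = EvolvingMemory()
m.interact("preferences", "concise answers")
m.interact("preferences", "concise answers")  # reinforced, not duplicated
m.interact("work", "corporate finance")
print(m.recall("preferences"))  # ['concise answers']
```

Unlike the static-notebook pattern, repeated interactions here change the shape of what is stored, which is the behavior the article attributes to memory learned "through interaction."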
Agentic RL is the technical foundation. The team has already deployed this approach in chip design (ranked first on EPFL's logic synthesis benchmark for two consecutive years), industrial optimization (won first place on a global solver benchmark), autonomous driving (deployed in hundreds of thousands of vehicles), and game AI.
The Product Roadmap
MemoraX plans two product tracks. For enterprises, standardized memory modules will reduce repeated inquiries in customer service, knowledge management, and specialized fields like finance and law. For consumers, the company is building personalized AI assistants that learn individual habits, preferences, and work patterns rather than offering generic responses.
The vision shifts the human-AI relationship from tool to partner. An AI with real memory can understand users over time rather than treating each conversation as isolated.
For product development teams, this represents a shift in how to think about AI-powered features. Rather than building external knowledge bases or retrieval systems, products can now embed learning directly into models. For teams working with generative AI and LLMs, the memory problem is foundational: solving it changes what's possible in production systems.