Engineering
Written by Memori Team
Memori Labs Hits Another GitHub Milestone, Leads in Industry Performance Benchmarks ⭐️⭐️
Memori Labs has reached 13,000 stars on GitHub. While we try not to get too caught up in metrics, this one feels different. It is more than just a number on a dashboard. It is a signal that the tools we are building truly resonate with the people who matter most: the developer community shaping the future of AI.
From day one, our goal has been to bridge the gap between complex machine learning systems and human-centric memory. Seeing that vision come to life through a global community of contributors is incredibly humbling. Every fork, pull request, and bug report has helped lay the foundation for what Memori is today.
“Reaching 13,000 stars is a testament to the power of community-driven innovation,” said Adam B. Struck, CEO and Co-Founder of Memori Labs. “By combining our SQL-native ease of use with benchmark-breaking performance, we are proving that developers do not have to choose between simplicity and power. We are very grateful to every developer who has pushed us to reach this milestone.”
In parallel with its open-source growth, Memori continues to demonstrate industry-leading performance on the LoCoMo benchmark, the most widely cited evaluation for long-context memory systems. Memori achieved 81.95% overall accuracy, outperforming competing systems including Zep (~79%), LangMem (~78%), and Mem0 (~62%).
Critically, Memori achieved this while using only ~1,294 tokens per query, approximately 4.98% of the cost of full-context prompting (a more than 20× reduction in context cost). Memori also used roughly 67% fewer tokens than Zep, demonstrating that higher accuracy does not require larger context windows.
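The cost figures above imply a few derived quantities worth making explicit. The short sketch below works through that arithmetic; only the 1,294-token, 4.98%, and 67% figures come from the post, while the derived per-query numbers for full-context prompting and Zep are our back-of-the-envelope inferences, not measured values.

```python
# Back-of-the-envelope arithmetic on the reported context-cost figures.
# Only memori_tokens, cost_fraction, and the 67% reduction are from the post;
# everything derived from them is an inference.
memori_tokens = 1_294   # reported tokens per query for Memori
cost_fraction = 0.0498  # reported share of full-context prompting cost

# Implied full-context token usage per query (derived, not stated directly):
full_context_tokens = memori_tokens / cost_fraction  # ~26,000 tokens/query

# 4.98% of the cost is equivalently a ~20x reduction:
savings_factor = 1 / cost_fraction  # ~20.1x lower context cost

# A 67% reduction vs Zep implies Zep's approximate per-query usage:
zep_tokens = memori_tokens / (1 - 0.67)  # ~3,900 tokens/query

print(f"full context ≈ {full_context_tokens:,.0f} tokens/query")
print(f"savings ≈ {savings_factor:.1f}x")
print(f"Zep ≈ {zep_tokens:,.0f} tokens/query")
```

Note that the "4.98% of cost" and "20× lower" claims are the same fact stated two ways, which is why they fall out of a single division here.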
These results point to a fundamental shift in how memory systems should be built. Scaling context isn't the answer: it's inefficient and brittle. Memori uses intelligent recall to transform interactions into a continuously evolving knowledge graph, where memories follow variable decay curves and are retrieved based on relevance, not recency alone.
The result: the right memory, at the right time, without bloating context or driving up inference costs.
Up Next: Structured memory from agent execution
Looking ahead, Memori Labs is preparing to release a major product update that expands its capabilities beyond conversational memory. The upcoming release introduces structured memory derived not only from agent interactions, but also from agent traces and execution, capturing the sequence of states, decisions, actions, and observations an agent produces to create a more complete and durable representation of state. This advancement is expected to unlock a new generation of agent-native applications, where memory is built from what agents do, not just what they say.
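One way to picture "structured memory from agent execution" is as a table of trace steps rather than a transcript of messages. The sketch below is a hypothetical schema of our own devising (the table name, columns, and function are assumptions, not the announced product's design), persisting each state/decision/action/observation step into SQLite in keeping with the SQL-native framing above.

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class TraceStep:
    """One step of an agent's execution trace (illustrative schema, not Memori's)."""
    step: int
    state: str        # snapshot of agent state before acting
    decision: str     # the reasoning or plan the agent chose
    action: str       # the tool call or operation performed
    observation: str  # the result returned to the agent


def persist_trace(conn: sqlite3.Connection, run_id: str, steps: list[TraceStep]) -> None:
    """Store an execution trace as structured rows, queryable later as memory."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS agent_trace ("
        " run_id TEXT, step INTEGER, state TEXT,"
        " decision TEXT, action TEXT, observation TEXT)"
    )
    conn.executemany(
        "INSERT INTO agent_trace VALUES (?, ?, ?, ?, ?, ?)",
        [(run_id, s.step, s.state, s.decision, s.action, s.observation) for s in steps],
    )
    conn.commit()
```

Because each step is a row rather than free text, later recall can filter on what the agent actually did (`WHERE action LIKE ...`) instead of re-reading a conversational log, which is the distinction between memory built from what agents do versus what they say.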
To our community:
You are the heartbeat of this project. Whether you have been with us since the alpha days or joined the community just yesterday, your input has been our north star, and we have watched you implement Memori in ways we never imagined. From the bottom of our hearts, thank you for being part of our journey. Let us keep building something memorable together, and here's to the next milestone 🚀