Memori

Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

What is Memori?

Memori is an open-source memory layer that gives your AI agents human-like memory. It remembers what matters, promotes what's essential, and intelligently injects structured context into LLM conversations.

Why Memori?

Memori uses multiple agents working together to intelligently promote essential long-term memories to short-term storage for faster context injection.
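As a rough illustration of the idea (not Memori's actual internals; the names and scoring here are hypothetical), promotion can be thought of as ranking stored long-term memories and copying the most valuable ones into a small short-term buffer:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float  # 0.0-1.0, e.g. assigned by an analysis agent
    access_count: int  # how often the memory has been retrieved

def promote(long_term: list[Memory], capacity: int = 3) -> list[Memory]:
    """Copy the most valuable long-term memories into short-term storage."""
    ranked = sorted(long_term, key=lambda m: (m.importance, m.access_count), reverse=True)
    return ranked[:capacity]

long_term = [
    Memory("User prefers Flask", 0.9, 5),
    Memory("User mentioned the weather", 0.1, 1),
    Memory("Project uses pytest", 0.8, 3),
    Memory("User's deploy target is Heroku", 0.6, 2),
]
short_term = promote(long_term)
print([m.text for m in short_term])
```

Because the short-term buffer is small, context injection stays fast: only the promoted memories need to be considered at conversation time.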

Give your AI agents structured, persistent memory with professional-grade architecture:

# Before: Repeating context every time
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a Python expert..."},
        {"role": "user", "content": "Remember, I use Flask and pytest..."},
        {"role": "user", "content": "Help me with authentication"}
    ]
)

# After: Automatic context injection
from memori import Memori

memori = Memori(openai_api_key="your-key")
memori.enable()  # Auto-records ALL LLM conversations

# Context automatically injected from memory
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me with authentication"}]
)
# Memori automatically knows about your Flask + pytest Python project!

Key Features

  • Universal Integration: Works with ANY LLM library (LiteLLM, OpenAI, Anthropic)
  • Intelligent Processing: Pydantic-based memory with entity extraction
  • Auto-Context Injection: Relevant memories automatically added to conversations
  • Multiple Memory Types: Short-term, long-term, rules, and entity relationships
  • Advanced Search: Full-text search with semantic ranking
  • Production-Ready: Comprehensive error handling, logging, and configuration
  • Database Support: SQLite, PostgreSQL, MySQL
  • Type Safety: Full Pydantic validation and type checking
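To give a feel for the "Advanced Search" feature, here is a deliberately naive full-text search sketch using keyword overlap as the ranking signal. This is an illustration only, not Memori's actual search implementation, which combines full-text search with semantic ranking:

```python
def search(memories: list[str], query: str, top_k: int = 2) -> list[str]:
    """Rank stored memory snippets by keyword overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(m.lower().split())), m) for m in memories]
    scored = [(score, m) for score, m in scored if score > 0]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [m for _, m in scored[:top_k]]

memories = [
    "User prefers Flask for web projects",
    "Tests are written with pytest",
    "Authentication uses JWT tokens",
]
print(search(memories, "help with authentication tokens"))
```

A real system would also handle stopwords and synonyms (note how "with" matches the pytest snippet above), which is where semantic ranking earns its keep.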

Memory Types

Type        Purpose                         Retention   Use Case
----------  ------------------------------  ----------  -------------------------------
Short-term  Recent conversations            7-30 days   Context for current session
Long-term   Important insights              Permanent   User preferences, key facts
Rules       User preferences/constraints    Permanent   "I prefer Python", "Use pytest"
Entities    People, projects, technologies  Tracked     Relationship mapping
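The retention policies above can be sketched as a simple expiry rule: only short-term memories age out, while long-term memories, rules, and entities persist. This is a minimal model of the table, with hypothetical names, not Memori's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MemoryRecord:
    text: str
    kind: str  # "short_term", "long_term", "rule", or "entity"
    created: datetime

def is_expired(record: MemoryRecord, now: datetime, short_term_days: int = 30) -> bool:
    """Only short-term memories expire; other types persist indefinitely."""
    if record.kind != "short_term":
        return False
    return now - record.created > timedelta(days=short_term_days)

now = datetime(2025, 6, 1)
old_chat = MemoryRecord("Discussed login bug", "short_term", now - timedelta(days=45))
rule = MemoryRecord("I prefer Python", "rule", now - timedelta(days=400))
print(is_expired(old_chat, now), is_expired(rule, now))  # True False
```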

Quick Start

Get started with Memori in minutes! Follow our easy quick start guide:

Quick Start Guide

Learn how to install Memori, set up your first memory-enabled agent, and see the magic of automatic context injection in action.

Universal Integration

Works with ANY LLM library:

See all supported LLMs

memori.enable()  # Enable universal recording

# OpenAI (recommended)
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(...)

# LiteLLM
from litellm import completion
completion(model="gpt-4", messages=[...])

# Anthropic
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!

Multiple Database Support

Supports multiple relational databases for production-ready memory storage:

Database Configuration Guide
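As a configuration sketch, the listed databases are typically addressed via connection strings. The parameter name below is an assumption for illustration; see the Database Configuration Guide for the actual API:

```python
from memori import Memori

# SQLite (local file, good for development)
memori = Memori(database_connect="sqlite:///memori.db")

# PostgreSQL (production) -- hypothetical credentials
memori = Memori(database_connect="postgresql://user:pass@localhost:5432/memori")

# MySQL -- hypothetical credentials
memori = Memori(database_connect="mysql://user:pass@localhost:3306/memori")
```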

Use with serverless databases

Get a free serverless database instance on the GibsonAI platform. You can create and deploy a new database simply by prompting.

GibsonAI Integration Guide

Framework Integrations

Seamlessly integrates with popular AI agent frameworks and tools:

View All Integrations

Multi-Agent Architecture

Learn about Memori's intelligent multi-agent system that powers memory processing:

Understanding Memori Agents

Configuration

Learn more about advanced configuration options:

Configuration Settings Guide

Made for developers who want their AI agents to remember and learn