Use Cases

Memori is designed for any application where AI agents need to remember context across conversations. Here are the most common use cases — all running with your own database.

Want a zero-setup option? Try Memori Cloud at app.memorilabs.ai.

Customer Support Chatbots

Build support bots that remember customer history, preferences, and previous issues. No more "Can you repeat your account number?" — Memori recalls everything automatically.

Benefits:

  • Remember customer preferences and history
  • Recall previous support tickets and resolutions
  • Personalize responses based on past interactions
  • Track issues across multiple sessions
  • All data stays in your database for compliance

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from memori import Memori
from openai import OpenAI

engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

client = OpenAI()
mem = Memori(conn=SessionLocal).llm.register(client)

# Each customer gets their own memory space
mem.attribution(
    entity_id="customer_456",
    process_id="support_bot"
)

# Memori automatically recalls relevant context
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "I'm having that issue again"
    }]
)
# Memori injects: "Customer previously reported
# login timeout issues on 2024-01-15"
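
The automatic recall shown above can be pictured with a small conceptual sketch. This is plain Python, not Memori's internal API — `FactStore` and `inject_context` are illustrative names — but it shows the core idea: facts recalled for a customer are prepended to the outgoing message list, so the model sees prior history without the user repeating it.

```python
from dataclasses import dataclass, field

@dataclass
class FactStore:
    # Hypothetical in-memory stand-in for the memory database
    facts: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, entity_id: str, fact: str) -> None:
        self.facts.setdefault(entity_id, []).append(fact)

    def recall(self, entity_id: str) -> list[str]:
        return self.facts.get(entity_id, [])

def inject_context(store: FactStore, entity_id: str,
                   messages: list[dict]) -> list[dict]:
    # Prepend recalled facts as a system message
    recalled = store.recall(entity_id)
    if not recalled:
        return messages
    context = "Known customer history:\n" + "\n".join(
        f"- {f}" for f in recalled
    )
    return [{"role": "system", "content": context}] + messages

store = FactStore()
store.remember("customer_456",
               "Reported login timeout issues on 2024-01-15")
msgs = inject_context(
    store, "customer_456",
    [{"role": "user", "content": "I'm having that issue again"}],
)
```

In the sketch, "I'm having that issue again" reaches the model alongside the recalled timeout report, which is why the bot never has to ask the customer to repeat themselves.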

Personalized AI Assistants

Create AI assistants that learn and adapt to each user over time. Memori builds a profile of preferences, skills, and context that makes every interaction more relevant.

Benefits:

  • Learn coding preferences and tech stack
  • Remember project context across sessions
  • Adapt communication style to user preferences
  • Build long-term user profiles automatically

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from memori import Memori
from anthropic import Anthropic

engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

client = Anthropic()
mem = Memori(conn=SessionLocal).llm.register(client)

mem.attribution(
    entity_id="developer_789",
    process_id="code_assistant"
)

# Over time, Memori learns:
# - "Uses Python 3.12 with FastAPI"
# - "Prefers type hints and dataclasses"
# - "Works on e-commerce platform"
response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "How should I structure this endpoint?"
    }]
)
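
How a long-term profile accumulates across sessions can be sketched in plain Python. The function and keys below are illustrative, not Memori's API: each session contributes observed facts, and later observations refine or overwrite earlier ones so the profile tracks current preferences.

```python
def update_profile(profile: dict[str, str],
                   observations: dict[str, str]) -> dict[str, str]:
    # Later observations overwrite earlier ones for the same key,
    # so the profile always reflects the user's current setup
    merged = dict(profile)
    merged.update(observations)
    return merged

profile: dict[str, str] = {}

# Session 1: the assistant learns the user's stack
profile = update_profile(profile, {
    "language": "Python 3.12",
    "framework": "FastAPI",
})

# Session 2: a style preference is observed, the stack is refined
profile = update_profile(profile, {
    "style": "type hints and dataclasses",
    "framework": "FastAPI with SQLAlchemy",
})
```

After two sessions the profile holds the latest view of the developer's stack and style, which is the kind of context that makes "How should I structure this endpoint?" answerable without preamble.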

Multi-Agent Workflows

Coordinate multiple AI agents that share context through Memori. Each agent contributes to a shared memory space while maintaining its own process identity.

Benefits:

  • Share context between specialized agents
  • Track which agent contributed what information
  • Maintain conversation continuity across handoffs
  • Build collective knowledge graphs

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from memori import Memori
from openai import OpenAI

engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

client = OpenAI()
mem = Memori(conn=SessionLocal).llm.register(client)

# Research agent gathers information
mem.attribution(
    entity_id="project_alpha",
    process_id="research_agent"
)
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Research competitor pricing"
    }]
)

# Analysis agent recalls the research findings
mem.attribution(
    entity_id="project_alpha",
    process_id="analysis_agent"
)
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize the competitor pricing research"
    }]
)
# Because both agents share entity_id="project_alpha",
# Memori makes the research agent's findings
# available to the analysis agent
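
The shared-memory-with-attribution pattern can be sketched with stdlib sqlite3. The schema here is illustrative, not Memori's actual tables: facts are keyed by entity_id so any agent on the same project sees them, while process_id records which agent contributed each one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        entity_id  TEXT,
        process_id TEXT,
        fact       TEXT
    )
""")

def write_fact(entity_id: str, process_id: str, fact: str) -> None:
    # Store the fact under the shared entity, attributed to its agent
    conn.execute("INSERT INTO memory VALUES (?, ?, ?)",
                 (entity_id, process_id, fact))

def recall(entity_id: str) -> list[tuple[str, str]]:
    # Every agent working on the same entity sees all facts,
    # along with which agent contributed each one
    return conn.execute(
        "SELECT process_id, fact FROM memory WHERE entity_id = ?",
        (entity_id,),
    ).fetchall()

# Research agent contributes; analysis agent later recalls
write_fact("project_alpha", "research_agent",
           "Competitor X charges $49/mo")
shared = recall("project_alpha")
```

This is the essence of the handoff above: the analysis agent never talks to the research agent directly, it simply reads the same entity's memory.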