Agno

Memori integrates with Agno at the model layer. Register your Agno model with llm.register(...) and Memori captures run(), arun(), and streamed responses automatically.

Want a zero-setup option? Try Memori Cloud at app.memorilabs.ai.

Quick Start

Agno Integration
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from memori import Memori
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

model = OpenAIChat(id="gpt-4o-mini")

mem = Memori(conn=SessionLocal).llm.register(openai_chat=model)
mem.config.storage.build()
mem.attribution(entity_id="user_123", process_id="agno_agent")

agent = Agent(
    model=model,
    instructions=["Be helpful and concise"],
    markdown=True,
)

response = agent.run("Hello!", session_id="support-session")
print(response.content)

Different Providers

Agno supports multiple model families. Use the matching registration keyword in Memori.

| Package | Model Class | Registration Keyword |
| --- | --- | --- |
| agno.models.openai | OpenAIChat | openai_chat=model |
| agno.models.anthropic | Claude | claude=model |
| agno.models.google | Gemini | gemini=model |
| agno.models.xai | xAI | xai=model |
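The package-to-keyword mapping above can be sketched as a small helper that picks the right llm.register(...) keyword from a model's package. This helper is illustrative only, not part of Memori's API; the keyword names come from the table.

```python
# Map Agno model packages to the Memori registration keyword.
# Illustrative helper (not part of Memori's API); names mirror the table above.
REGISTRATION_KEYWORDS = {
    "agno.models.openai": "openai_chat",
    "agno.models.anthropic": "claude",
    "agno.models.google": "gemini",
    "agno.models.xai": "xai",
}

def registration_kwargs(model):
    """Return the llm.register(...) kwargs for an Agno model instance."""
    package = type(model).__module__
    # Model classes live in submodules (e.g. agno.models.openai.chat),
    # so match on the package prefix.
    for prefix, keyword in REGISTRATION_KEYWORDS.items():
        if package.startswith(prefix):
            return {keyword: model}
    raise ValueError(f"No Memori registration keyword known for {package}")
```

With this in place, registration becomes provider-agnostic: mem = Memori(conn=SessionLocal).llm.register(**registration_kwargs(model)).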
Agno Providers
from agno.models.openai import OpenAIChat

model = OpenAIChat(id="gpt-4o-mini")
mem = Memori(conn=SessionLocal).llm.register(openai_chat=model)

Supported Modes

| Mode | Method |
| --- | --- |
| Sync | agent.run() |
| Async | await agent.arun() |
| Streamed | agent.run(stream=True) |
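The three modes map to three call shapes. A minimal sketch using a stand-in agent (FakeAgent below is hypothetical, used only to show the shapes; the real Agent comes from agno.agent, and in Agno streamed chunks are response events rather than plain strings):

```python
import asyncio

class FakeAgent:
    """Stand-in for agno.agent.Agent, illustrating the three call shapes."""

    def run(self, prompt, stream=False):
        if stream:
            # Streamed: yields response chunks as they arrive.
            return iter(["Hel", "lo!"])
        return "Hello!"  # Sync: returns the full response.

    async def arun(self, prompt):
        return "Hello!"  # Async: awaitable variant of run().

agent = FakeAgent()
sync_out = agent.run("Hi")                           # Sync
async_out = asyncio.run(agent.arun("Hi"))            # Async
stream_out = "".join(agent.run("Hi", stream=True))   # Streamed
```

Because Memori hooks in at the model layer, all three shapes are captured without any per-call changes on your side.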