# Agno
Memori integrates with Agno at the model layer. Register your Agno model with `llm.register(...)`, and Memori automatically captures `run()`, `arun()`, and streamed responses.
Want a zero-setup option? Try Memori Cloud at app.memorilabs.ai.
## Quick Start

### Agno Integration
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from memori import Memori
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# SQLite-backed storage for Memori
engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

# Register the Agno model so Memori can capture its calls
model = OpenAIChat(id="gpt-4o-mini")
mem = Memori(conn=SessionLocal).llm.register(openai_chat=model)
mem.config.storage.build()

# Attribute captured memory to a user and process
mem.attribution(entity_id="user_123", process_id="agno_agent")

agent = Agent(
    model=model,
    instructions=["Be helpful and concise"],
    markdown=True,
)

response = agent.run("Hello!", session_id="support-session")
print(response.content)
```
## Different Providers

Agno supports multiple model families. Pass the matching registration keyword to `llm.register(...)`.
| Package | Model Class | Registration Keyword |
|---|---|---|
| `agno.models.openai` | `OpenAIChat` | `openai_chat=model` |
| `agno.models.anthropic` | `Claude` | `claude=model` |
| `agno.models.google` | `Gemini` | `gemini=model` |
| `agno.models.xai` | `xAI` | `xai=model` |
### Agno Providers
```python
from agno.models.openai import OpenAIChat

model = OpenAIChat(id="gpt-4o-mini")
# SessionLocal is the sessionmaker from the Quick Start above
mem = Memori(conn=SessionLocal).llm.register(openai_chat=model)
```
## Supported Modes

| Mode | Method |
|---|---|
| Sync | `agent.run()` |
| Async | `await agent.arun()` |
| Streamed | `agent.run(stream=True)` |