Integration Overview

Memori Cloud works with all major LLM providers and frameworks. Register any supported client and Memori handles memory capture, augmentation, and recall automatically. All you need are your Memori API key and provider credentials; no database setup is required.

Supported Providers

| Provider | Integration | Install |
| --- | --- | --- |
| OpenAI | Direct SDK wrapper | pip install memori openai |
| Anthropic | Direct SDK wrapper | pip install memori anthropic |
| Google Gemini | Direct SDK wrapper | pip install memori google-genai |
| xAI Grok | OpenAI-compatible | pip install memori openai |
| Nebius AI Studio | OpenAI-compatible | pip install memori openai |
| DeepSeek | OpenAI-compatible | pip install memori openai |
| AWS Bedrock | LangChain adapter | pip install memori langchain-aws |
| LangChain | Framework support | pip install memori langchain-openai |
| Agno | Framework support | pip install memori agno |
| Pydantic AI | Framework support | pip install memori pydantic-ai |

All providers support sync, async, streamed, and unstreamed modes.

Pydantic AI

Register the Agent instance directly — Memori wraps run_sync and run automatically.

from memori import Memori
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")

mem = Memori().llm.register(agent)
mem.attribution(entity_id="user_123", process_id="pydantic_agent")

result = agent.run_sync("Hello!")
print(result.output)

OpenAI-Compatible Providers

Any provider that exposes an OpenAI-compatible API works by pointing the standard OpenAI client at a custom base_url. Dedicated guides: xAI Grok, Nebius AI Studio, DeepSeek. The same pattern works for Azure OpenAI, NVIDIA NIM, and others.

import os
from memori import Memori
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",
    api_key=os.getenv("NEBIUS_API_KEY"),
)
mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

OpenAI Responses API

The same registered client also captures calls made through the Responses API.

from memori import Memori
from openai import OpenAI

client = OpenAI()

mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

response = client.responses.create(
    model="gpt-4o-mini",
    input="Hello!",
    instructions="You are a helpful assistant."
)
print(response.output_text)