Google Gemini
Memori integrates with Google Gemini via the `google-generativeai` SDK. Register the `GenerativeModel` instance with Memori, and every `generate_content()` call is captured automatically.
Want a zero-setup option? Try Memori Cloud at app.memorilabs.ai.
Quick Start
Gemini Integration
```python
import os

import google.generativeai as genai
from memori import Memori
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Storage for captured conversations (SQLite here; any SQLAlchemy engine works)
engine = create_engine("sqlite:///memori.db")
SessionLocal = sessionmaker(bind=engine)

# Configure the Gemini SDK and create the model to register
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
client = genai.GenerativeModel("gemini-2.0-flash-exp")

# Register the model with Memori and build the storage schema
mem = Memori(conn=SessionLocal).llm.register(client)
mem.config.storage.build()

# Attribute captured conversations to a user and process
mem.attribution(entity_id="user_123", process_id="gemini_assistant")

response = client.generate_content("Hello!")
print(response.text)
```
Supported Modes
| Mode | Method |
|---|---|
| Sync | `client.generate_content()` |
| Async | `await client.generate_content_async()` |
| Streaming | `client.generate_content(..., stream=True)` |
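With `stream=True`, the SDK returns an iterator of chunks rather than a single response; a common pattern is to print each chunk as it arrives while keeping the full text. A minimal sketch, assuming `client` is the registered model from the Quick Start (the `consume_stream` helper is illustrative, not part of Memori):

```python
def consume_stream(chunks):
    """Print each streamed chunk as it arrives and return the full text."""
    parts = []
    for chunk in chunks:
        print(chunk.text, end="", flush=True)
        parts.append(chunk.text)
    return "".join(parts)

# With a live client (see Quick Start):
# full_text = consume_stream(client.generate_content("Hello!", stream=True))

# Async variant (also captured once the model is registered):
# response = await client.generate_content_async("Hello!")
```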
Multi-Turn Conversations
Use start_chat() for multi-turn interactions. Memori tracks the full conversation automatically.
```python
chat = client.start_chat()

response = chat.send_message("My name is Alice.")
print(response.text)

response = chat.send_message("What's my name?")
print(response.text)
```
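To see what the SDK itself has recorded for the session, `google-generativeai` exposes the turns on `chat.history`, a list of content entries each carrying a `role` and `parts`. A short sketch, assuming the `chat` object from above (the `format_history` helper is illustrative, not part of either library):

```python
def format_history(history):
    """Render chat history entries as 'role: text' lines."""
    return [f"{entry.role}: {entry.parts[0].text}" for entry in history]

# With the live chat from above:
# for line in format_history(chat.history):
#     print(line)
```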