Quick Start
Get Memori running in less than a minute.
1. Install
```bash
pip install memorisdk openai
```
2. Set API Key
```bash
export OPENAI_API_KEY="sk-your-openai-key-here"
```
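Before running the demo, you can sanity-check that the key is visible to Python. This is an optional check, not part of Memori itself; `OPENAI_API_KEY` is the environment variable the OpenAI client reads by default:

```python
import os

# The OpenAI client picks this up automatically when present
key = os.getenv("OPENAI_API_KEY")
status = "set" if key else "missing"
print(f"OPENAI_API_KEY is {status}")
```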
3. Basic Usage
Create `demo.py`:
```python
from memori import Memori
from openai import OpenAI

# Initialize OpenAI client
openai_client = OpenAI()

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

# First conversation - establish context
response1 = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "I'm working on a Python FastAPI project"
    }]
)
print("Assistant:", response1.choices[0].message.content)

# Second conversation - memory provides context
response2 = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
```
4. Run
```bash
python demo.py
```
5. See Results
- First response: General FastAPI help
- Second response: Contextual authentication help (knows about your FastAPI project!)
- Database created: `memori.db` with your conversation memories
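If you want to peek at what was stored, the database is plain SQLite, so Python's standard library can open it. A minimal sketch (the table names are Memori internals and may vary between versions, so this just lists whatever exists):

```python
import sqlite3

# memori.db is created in the working directory by default
conn = sqlite3.connect("memori.db")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
)]
conn.close()
print("Tables:", tables)
```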
What Happened?
- Universal Recording: `memori.enable()` automatically captures ALL LLM conversations
- Intelligent Processing: Extracts entities (Python, FastAPI, projects) and categorizes memories
- Context Injection: Second conversation automatically includes relevant memories
- Persistent Storage: All memories are stored in a SQLite database for future sessions
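The context-injection step above can be pictured as follows. This is an illustrative sketch, not Memori's actual implementation: a hypothetical `inject_context` helper prepends retrieved memories as a system message before the request reaches the model:

```python
def inject_context(memories, messages):
    """Prepend relevant stored memories as a system message (illustrative only)."""
    if not memories:
        return messages
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": context}] + messages

# Toy memory store standing in for what was recorded earlier
memories = ["User is working on a Python FastAPI project"]
messages = [{"role": "user", "content": "Help me add user authentication"}]

augmented = inject_context(memories, messages)
print(augmented[0]["role"])  # the memories arrive first, as a system message
```

With the project context injected this way, the second request can be answered in terms of FastAPI rather than generically.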