Advanced Augmentation

Advanced Augmentation is the AI engine inside Memori Cloud that turns raw conversations into structured, searchable memories. It runs asynchronously in the background to minimize impact on your response path.

What It Does

When your application has a conversation through a Memori-wrapped LLM client, the augmentation engine:

  1. Reads the full conversation (user messages and AI responses)
  2. Identifies facts, preferences, skills, and attributes
  3. Extracts semantic triples (subject-predicate-object relationships)
  4. Generates vector embeddings for semantic search
  5. Stores everything in your managed memory space

No extra code required — just initialize Memori and set attribution.
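To make the output concrete, here is a hypothetical shape for one extracted memory record. The field names below are assumptions for illustration only, not Memori's actual schema:

```python
# Hypothetical memory record produced by the augmentation engine.
# All field names here are illustrative assumptions, not Memori's real schema.
memory = {
    "kind": "preference",                        # fact / preference / skill / attribute
    "text": "User loves hiking in the mountains.",
    "triples": [("user", "loves", "hiking")],    # subject-predicate-object
    "embedding": [0.12, -0.08, 0.33],            # truncated vector for semantic search
    "entity_id": "user_123",
}
print(memory["kind"])
```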

How It Works

The augmentation flow is fully asynchronous and designed to avoid blocking your main request path.

  1. Your app makes an LLM call through the wrapped client
  2. Memori returns the response immediately
  3. In the background, the conversation is queued for processing
  4. The augmentation engine extracts structured memories
  5. Memories are stored in Memori Cloud for future recall
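The flow above can be sketched as a fire-and-forget queue: the caller gets its response immediately while a background worker turns conversations into memories. Memori's real engine runs server-side; this toy version only illustrates the pattern, and every name in it is made up:

```python
import queue
import threading

# Toy sketch of the background-augmentation flow (illustrative only).
conversations: queue.Queue = queue.Queue()
memories: list[str] = []

def augmentation_worker() -> None:
    while True:
        convo = conversations.get()
        if convo is None:            # sentinel: shut down the worker
            break
        memories.append(f"extracted memory from: {convo}")
        conversations.task_done()

worker = threading.Thread(target=augmentation_worker, daemon=True)
worker.start()

def chat(user_message: str) -> str:
    response = f"echo: {user_message}"   # stand-in for the real LLM call
    conversations.put(user_message)      # queued for background processing
    return response                      # returned without waiting

print(chat("I love hiking."))
conversations.join()                     # analogous to mem.augmentation.wait()
print(memories)
```

The `conversations.join()` call plays the same role as `mem.augmentation.wait()` in a short-lived script: it blocks until every queued conversation has been processed.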

In short-lived scripts, call mem.augmentation.wait() to ensure processing completes before exit.

```python
from memori import Memori
from openai import OpenAI

client = OpenAI()
mem = Memori().llm.register(client)
mem.attribution(entity_id="user_123", process_id="my_agent")

# This returns immediately — no augmentation delay
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "I love hiking in the mountains."}
    ]
)
print(response.choices[0].message.content)

# Only needed in short-lived scripts
mem.augmentation.wait()
```

Extraction Types

| Type | What it captures | Scope |
| --- | --- | --- |
| Facts | Objective information with vector embeddings | Per entity — shared across processes |
| Preferences | User choices, opinions, and tastes | Per entity |
| Skills & Knowledge | Abilities and expertise levels | Per entity |
| Attributes | Process-level information about what your agent handles | Per process |
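A toy illustration of the two scopes (not Memori's storage model): entity-scoped memories are keyed by `entity_id` and visible to every process, while attributes are keyed by `process_id`. All data below is made up:

```python
# Entity-scoped: shared across every process that serves this entity.
entity_memories = {
    "user_123": {
        "facts": ["Works at Acme Corp"],
        "preferences": ["Prefers dark mode"],
        "skills": ["Intermediate Python"],
    }
}

# Process-scoped: describes what a particular agent handles.
process_attributes = {
    "my_agent": ["Handles billing questions"],
}

print(entity_memories["user_123"]["preferences"])
```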

Semantic Triples

Advanced Augmentation uses named-entity recognition to extract semantic triples (subject, predicate, object). These form the building blocks of the Knowledge Graph.

Example — from "My favorite database is PostgreSQL and I use it with FastAPI":

| Subject | Predicate | Object |
| --- | --- | --- |
| user | favorite_database | PostgreSQL |
| user | uses | FastAPI |
| user | uses_with | PostgreSQL + FastAPI |
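A toy rule-based extractor shows the idea of pulling (subject, predicate, object) triples out of a sentence. Memori's real engine uses named-entity recognition; the regexes here are purely illustrative and only handle this example:

```python
import re

# Toy triple extractor (illustrative only — not Memori's NER pipeline).
def extract_triples(text: str) -> list[tuple[str, str, str]]:
    triples = []
    # "My favorite X is Y" -> (user, favorite_X, Y)
    m = re.search(r"[Mm]y favorite (\w+) is (\w+)", text)
    if m:
        triples.append(("user", f"favorite_{m.group(1)}", m.group(2)))
    # "I use (it with) Y" -> (user, uses, Y)
    for m in re.finditer(r"I use (?:it with )?(\w+)", text):
        triples.append(("user", "uses", m.group(1)))
    return triples

print(extract_triples("My favorite database is PostgreSQL and I use it with FastAPI"))
```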

Memori automatically deduplicates triples — if the same fact is mentioned multiple times, it increments the mention count and updates the timestamp.
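The deduplication idea can be sketched as an upsert keyed on the full triple: a repeat mention bumps the count and refreshes the timestamp instead of creating a duplicate row. This is an illustrative sketch, not Memori's internal code:

```python
from datetime import datetime, timezone

# Triple store keyed by (subject, predicate, object) — illustrative only.
store: dict[tuple[str, str, str], dict] = {}

def upsert_triple(subject: str, predicate: str, obj: str) -> None:
    key = (subject, predicate, obj)
    now = datetime.now(timezone.utc)
    if key in store:
        # Same fact mentioned again: count it, refresh the timestamp.
        store[key]["mention_count"] += 1
        store[key]["last_mentioned"] = now
    else:
        store[key] = {"mention_count": 1, "last_mentioned": now}

upsert_triple("user", "uses", "FastAPI")
upsert_triple("user", "uses", "FastAPI")
print(store[("user", "uses", "FastAPI")]["mention_count"])
```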

Context Recall

When a query is sent to an LLM through a wrapped client, Memori automatically:

  1. Intercepts the outbound LLM call
  2. Uses semantic search to find entity facts matching the query
  3. Ranks facts by vector similarity
  4. Injects the most relevant facts into the system prompt
  5. Forwards the enriched request to the LLM provider
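The recall pipeline above can be sketched as ranking stored facts by cosine similarity to the query embedding and prepending the best matches to the system prompt. The hand-made 3-dimensional vectors and helper names below are assumptions for illustration:

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny in-memory "fact store" with made-up 3-d embeddings (illustrative only).
facts = [
    ("User loves hiking in the mountains.", [0.9, 0.1, 0.0]),
    ("User works at Acme Corp.",            [0.0, 0.2, 0.9]),
]

def build_messages(query: str, query_embedding, top_k: int = 1):
    # Rank facts by similarity to the query, keep the top_k best matches.
    ranked = sorted(facts, key=lambda f: cosine(f[1], query_embedding), reverse=True)
    context = "\n".join(text for text, _ in ranked[:top_k])
    # Inject the recalled facts into the system prompt, then forward the query.
    return [
        {"role": "system", "content": f"Relevant memories:\n{context}"},
        {"role": "user", "content": query},
    ]

messages = build_messages("Suggest a weekend trip", [0.8, 0.2, 0.1])
print(messages[0]["content"])
```

The enriched `messages` list is what would be forwarded to the LLM provider in step 5.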