Written by Memori Team

Memori Now Speaks TypeScript: Production-Grade AI Memory for GitHub's Most-Used Language

TypeScript is now the most-used language on GitHub, growing 66% year-over-year. Much of that growth is driven by AI development: over 1.1 million public repositories already depend on LLM SDKs from OpenAI, Anthropic, and Google.

Today, we're launching @memorilabs/memori, our official TypeScript SDK, so those teams can add persistent memory to their AI applications without leaving the ecosystem they already build in.

Why TypeScript for AI memory

TypeScript's type system is a natural fit for AI tooling. Structured tool definitions, function schemas, and API contracts all benefit from strong typing. Memory is no different: when your SDK can enforce the shape of facts, preferences, and relationships at compile time, you catch integration issues before they reach production.
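To make the compile-time point concrete, here is a minimal sketch of what "enforcing the shape of facts and preferences" can look like. The type and function names below are illustrative only, not part of the @memorilabs/memori API:

```typescript
// Hypothetical illustration: modeling memory records so malformed writes
// fail at compile time. These names are NOT from the Memori SDK.
type MemoryKind = "fact" | "preference" | "relationship";

interface MemoryRecord {
  kind: MemoryKind;
  subject: string;
  content: string;
  createdAt: Date;
}

// A helper that only accepts well-formed records.
function storeMemory(record: MemoryRecord): MemoryRecord {
  return { ...record };
}

const ok = storeMemory({
  kind: "preference",
  subject: "user_123",
  content: "Prefers concise answers",
  createdAt: new Date(),
});

// storeMemory({ kind: "guess", subject: "user_123" });
// ^ would not compile: "guess" is not a MemoryKind, and content is missing.
```

The commented-out call is the kind of integration mistake that a dynamically typed client would only surface at runtime.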

The TypeScript SDK brings the same capabilities as our Python SDK to the JavaScript ecosystem, with an API that feels native to TypeScript developers.

Three lines to persistent memory

The SDK wraps your existing LLM client (OpenAI, Anthropic, or Google Gemini) with middleware that handles memory automatically:

import OpenAI from 'openai';
import { Memori } from '@memorilabs/memori';

const client = new OpenAI();
// Wrap the existing client so conversations are captured automatically
const mem = new Memori().llm.register(client);
// Attribute captured memories to a specific user and agent
mem.attribution('user_123', 'test-ai-agent');

From that point, every conversation is captured and enriched in the background. No new database to provision, no embedding pipeline to build, no changes to your existing application logic.
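The "wrap your existing client" idea can be sketched with a JavaScript `Proxy`: method calls pass through unchanged while a callback observes them. This is an illustration of the middleware pattern under stated assumptions, not Memori's actual implementation:

```typescript
// Sketch of client-wrapping middleware: calls pass through untouched
// while an observer records them. Illustrative only; this is not how
// the Memori SDK is implemented internally.
type OnCall = (method: string, args: unknown[]) => void;

function withCapture<T extends object>(client: T, onCall: OnCall): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value === "function") {
        return (...args: unknown[]) => {
          onCall(String(prop), args); // observe the call, then pass through
          return value.apply(target, args);
        };
      }
      return value;
    },
  });
}

// Usage with a stand-in client (a real integration would wrap an LLM client):
const calls: string[] = [];
const fakeClient = { complete: (prompt: string) => `echo: ${prompt}` };
const wrapped = withCapture(fakeClient, (method) => calls.push(method));
const reply = wrapped.complete("hello");
```

Because the wrapper forwards every call unmodified, the application code that uses the client does not change at all, which is what makes the three-line setup above possible.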

What happens under the hood

Memori splits memory work between the request path and background processing:

  • Synchronous capture stores conversations as they happen, adding zero latency to the user experience
  • Advanced Augmentation asynchronously extracts facts, preferences, skills, and relationships after the response is sent
  • Intelligent Recall ranks and injects the most relevant memories into future prompts
  • Knowledge graph construction turns extracted triples into connected, queryable memory
  • Smart decay deprioritizes stale context so only relevant memories surface

This is the same architecture behind our Python SDK and Memori Cloud, now available to TypeScript teams.

Three-tier attribution

The SDK supports attribution at three levels:

  • Entity - tie memories to a specific user, customer, or agent identity
  • Process - group memories by workflow, feature, or use case
  • Session - scope memories to a single conversation or interaction

This gives you fine-grained control over what gets remembered, for whom, and in what context.
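One way to picture three-tier attribution is as a composite key: entity, process, and session together decide where a memory lands and who can recall it. The sketch below uses invented names to show the idea, not the SDK's storage model:

```typescript
// Sketch of three-tier attribution as a composite scoping key.
// Names and storage are illustrative, not the Memori SDK's internals.
interface Attribution {
  entity: string;   // who the memory belongs to (user, customer, agent)
  process?: string; // which workflow or feature produced it
  session?: string; // which conversation it came from
}

const memories = new Map<string, string[]>();

function keyFor(a: Attribution): string {
  // "*" stands in for an unscoped tier
  return [a.entity, a.process ?? "*", a.session ?? "*"].join("::");
}

function remember(a: Attribution, fact: string): void {
  const key = keyFor(a);
  memories.set(key, [...(memories.get(key) ?? []), fact]);
}

remember(
  { entity: "user_123", process: "onboarding", session: "sess_1" },
  "Signed up via referral",
);

const scoped = memories.get("user_123::onboarding::sess_1");
```

Narrowing or widening recall then becomes a matter of which tiers you include in the lookup.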

Full observability with Cloud Dashboard

Every memory created through the TypeScript SDK is visible in the Memori Cloud Dashboard:

  • Memories - inspect stored facts, subjects, retrieval counts, and graph relationships
  • Analytics - track memory volume, cache hit rates, sessions, users, and quota usage
  • Playground - test memory creation and recall interactively

No guessing about what your AI remembers or why it recalled something.

Get started

Install the SDK and sign up for a free API key:

npm install @memorilabs/memori