The Jean Memory Python SDK provides a simple, headless interface to our powerful Context API. It’s designed to be integrated directly into your backend services, AI agents, or data processing pipelines.

Installation

pip install jeanmemory

Usage: Adding Context to an Agent

The primary use case for the Python SDK is to retrieve context that you can then inject into a prompt for your chosen Large Language Model. The example below shows a typical workflow where we get context from Jean Memory before calling the OpenAI API.
import os
from openai import OpenAI
from jeanmemory import JeanMemoryClient

# 1. Initialize the clients
jean = JeanMemoryClient(api_key=os.environ.get("JEAN_API_KEY"))
openai = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# 2. Get the user token from your frontend (or use auto test user)
# Production: Token from OAuth flow via @jeanmemory/react
# Development: Leave empty for automatic test user
user_token = get_user_token_from_request()  # Or None for test user 

# 3. Get context from Jean Memory
user_message = "What were the key takeaways from our last meeting about Project Phoenix?"
context_response = jean.get_context(
    user_token=user_token,
    message=user_message,
    # All defaults: tool="jean_memory", speed="balanced", format="enhanced"
)

# 4. Engineer your final prompt
final_prompt = f"""
Using the following context, please answer the user's question.
The context is a summary of the user's memories related to their question.

Context:
---
{context_response.text}
---

User Question: {user_message}
"""

# 5. Call your LLM
completion = openai.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": final_prompt},
    ],
)

print(completion.choices[0].message.content)

This code block demonstrates the complete “golden path” for using the headless Python SDK. Here’s a step-by-step breakdown:
  1. Initialization: It creates instances of the JeanMemoryClient and a large language model client (in this case, OpenAI).
  2. Authentication: It retrieves a user_token that your frontend would have acquired through the OAuth sign-in flow. This token is crucial as it identifies the user whose memory you want to access.
  3. Context Retrieval: It calls jean.get_context(), sending the user’s token and their latest message. This is the core of the integration, where Jean Memory performs its context engineering.
  4. Prompt Engineering: It constructs a final prompt for the LLM, strategically placing the retrieved context before the user’s actual question. This gives the LLM the necessary background information to provide a relevant, personalized response.
  5. LLM Call: It sends the final, context-rich prompt to the LLM to get the answer.
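The prompt-engineering step (step 4) is plain string work and is easy to factor into a reusable helper. This sketch needs no SDK; the template mirrors the example above:

```python
def build_prompt(context_text: str, user_message: str) -> str:
    """Assemble the final LLM prompt, placing retrieved context before the question."""
    return (
        "Using the following context, please answer the user's question.\n"
        "The context is a summary of the user's memories related to their question.\n\n"
        "Context:\n"
        "---\n"
        f"{context_text}\n"
        "---\n\n"
        f"User Question: {user_message}"
    )

# Example with stand-in context text:
prompt = build_prompt(
    "Project Phoenix launch slipped to Q3.",
    "When does Project Phoenix launch?",
)
```

Keeping the template in one place makes it easy to iterate on prompt wording without touching the retrieval or LLM-call code.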

A Note on Authentication

The user_token is the critical piece that connects a request to a specific user’s memory. In a production application, your frontend should use our React SDK’s <SignInWithJean /> component (or a manual OAuth 2.1 PKCE flow) to authenticate the user and receive this token. Your frontend then passes this token to your backend, which uses it to make authenticated requests with the Python SDK.

Headless Authentication (Backend-Only)

For headless applications without a frontend, you have several options:
# Option 1: Test mode (development)
jean = JeanMemoryClient(api_key="jean_sk_your_key")
context = jean.get_context(
    user_token=None,  # Explicitly None for test user
    message="Hello"
)

# Option 2: Manual OAuth flow (production)
jean = JeanMemoryClient(api_key="jean_sk_your_key")

# Generate OAuth URL for manual authentication
auth_url = jean.get_auth_url(callback_url="http://localhost:8000/callback")
print(f"Visit: {auth_url}")

# After the user visits the URL, exchange the returned code for a token
# (auth_code comes from your callback handler):
user_token = jean.exchange_code_for_token(auth_code)

# Option 3: Service account (enterprise)
jean = JeanMemoryClient(
    api_key="jean_sk_your_key",
    service_account_key="your_service_account_key"
)
For information on implementing a secure server-to-server OAuth flow for backend services, see the Authentication guide.
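If your frontend passes the token to your backend as a standard `Authorization: Bearer …` HTTP header (an assumption; your transport may differ), the backend-side extraction is a few lines of plain Python:

```python
from typing import Optional

def extract_user_token(authorization_header: Optional[str]) -> Optional[str]:
    """Return the bearer token from an Authorization header, or None.

    Returning None lets the SDK fall back to the automatic test user
    in development, as described above.
    """
    if not authorization_header:
        return None
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None
    return token

# In a web framework you might call it like:
# user_token = extract_user_token(request.headers.get("Authorization"))
```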

Configuration Options (Optional)

The defaults work well for most use cases, but you can tune behavior when you need to:
# Speed-optimized (faster, less comprehensive)
context = jean.get_context(
    user_token=user_token,
    message=user_message,
    speed="fast"  # vs "balanced" (default) or "comprehensive"
)

# Different tools for specific needs
context = jean.get_context(
    user_token=user_token,
    message=user_message,
    tool="search_memory"  # vs "jean_memory" (default)
)

# Simple text response instead of full metadata
context = jean.get_context(
    user_token=user_token,
    message=user_message,
    format="simple"  # vs "enhanced" (default)
)

Advanced: Direct Tool Access

For advanced use cases, the JeanMemoryClient also provides a tools namespace for direct, deterministic access to the core memory functions.
# The intelligent, orchestrated way (recommended):
context = jean.get_context(user_token=..., message="...")

# The deterministic, tool-based way:
jean.tools.add_memory(user_token=..., content="My favorite color is blue.")
search_results = jean.tools.search_memory(user_token=..., query="preferences")

# Advanced tools for complex operations:
deep_results = jean.tools.deep_memory_query(user_token=..., query="complex relationship query")
doc_result = jean.tools.store_document(user_token=..., title="Meeting Notes", content="...", document_type="markdown")
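To exercise the deterministic call shapes without network access, you can test your own orchestration code against an in-memory stub. The stub below is ours, not part of the SDK; only the method names and keyword arguments mirror the calls shown above, and its substring matching is a crude stand-in for the real semantic search:

```python
class InMemoryToolsStub:
    """Tiny stand-in mirroring the jean.tools call shapes shown above."""

    def __init__(self):
        self._memories = []

    def add_memory(self, user_token=None, content=""):
        # The real tool persists to the user's memory store.
        self._memories.append(content)

    def search_memory(self, user_token=None, query=""):
        # Naive substring match; the real service does semantic search.
        return [m for m in self._memories if query.lower() in m.lower()]

tools = InMemoryToolsStub()
tools.add_memory(user_token=None, content="My favorite color is blue.")
results = tools.search_memory(user_token=None, query="favorite")
```

Swapping the stub for `jean.tools` in tests keeps your pipeline logic decoupled from live API calls.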