For developers who want to power their existing AI agents with our headless SDK.
```python
# 1. Install the Python SDK:
#   pip install jeanmemory openai

# 2. Get context before calling your LLM
import os

from jeanmemory import JeanMemoryClient
from openai import OpenAI

jean = JeanMemoryClient(api_key=os.environ["JEAN_API_KEY"])
openai = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

context = jean.get_context(
    user_token="USER_TOKEN_FROM_FRONTEND",
    message="What was our last conversation about?",
).text

prompt = f"Context: {context}\n\nUser question: What was our last conversation about?"

# 3. Use the context in your LLM call
completion = openai.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": prompt}],
)
```
Full-Stack Integration: a user signs in with the React SDK, and the same user token then works across all SDKs. The frontend handles auth; the backend fetches context.
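The backend half of that flow can be sketched as a single function that accepts the token the frontend hands over. This is a minimal sketch assuming the `get_context(user_token=..., message=...).text` API shown above; the `Stub*` classes stand in for `JeanMemoryClient` and the OpenAI client so the token-and-context handoff is runnable on its own, and `answer_with_context` is an illustrative name, not part of the SDK.

```python
# Sketch of the backend half of the full-stack flow. The stubs below stand
# in for JeanMemoryClient and the OpenAI client; in real code you would pass
# the actual clients instead.
from types import SimpleNamespace


def answer_with_context(jean, openai_client, user_token: str, question: str) -> str:
    """Fetch this user's context from Jean Memory, then call the LLM."""
    # The token minted when the user signed in on the frontend scopes the
    # context lookup to that user.
    context = jean.get_context(user_token=user_token, message=question).text
    prompt = f"Context: {context}\n\nUser question: {question}"
    completion = openai_client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


class StubJean:
    """Stands in for JeanMemoryClient; returns canned context."""
    def get_context(self, user_token, message):
        return SimpleNamespace(text="We discussed onboarding flows.")


class StubOpenAI:
    """Stands in for the OpenAI client; echoes the prompt back."""
    class chat:
        class completions:
            @staticmethod
            def create(model, messages):
                reply = SimpleNamespace(content=messages[0]["content"])
                return SimpleNamespace(choices=[SimpleNamespace(message=reply)])


answer = answer_with_context(
    StubJean(), StubOpenAI(),
    user_token="USER_TOKEN_FROM_FRONTEND",
    question="What was our last conversation about?",
)
```

The design point is that the backend never handles sign-in itself: it only receives the already-issued user token and uses it to scope the context lookup.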
Don't like reading docs?
Just copy and paste our full documentation into your AI agent (Cursor, Claude, etc.) and tell it what you want to build.
Want to test different depth levels interactively? Check out our Memory Playground with working code examples and live performance comparisons.