Installation
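This section's body appears to have been lost in extraction. The SDK is typically installed from PyPI; the package name below is an assumption based on the product name, so check the official docs for the exact name.

```shell
# Assumed package name -- verify against the official Jean Memory docs
pip install jeanmemory
```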
Usage: Adding Context to an Agent
The primary use case for the Python SDK is to retrieve context that you can then inject into a prompt for your chosen Large Language Model. The example below shows a typical workflow where we get context from Jean Memory before calling the OpenAI API.

- Initialization: It creates instances of the `JeanMemoryClient` and a large language model client (in this case, `OpenAI`).
- Authentication: It retrieves a `user_token` that your frontend would have acquired through the OAuth sign-in flow. This token is crucial as it identifies the user whose memory you want to access.
- Context Retrieval: It calls `jean.get_context()`, sending the user's token and their latest message. This is the core of the integration, where Jean Memory performs its context engineering.
- Prompt Engineering: It constructs a final prompt for the LLM, strategically placing the retrieved context before the user's actual question. This gives the LLM the necessary background information to provide a relevant, personalized response.
- LLM Call: It sends the final, context-rich prompt to the LLM to get the answer.
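The steps above can be sketched as follows. Note that the exact `JeanMemoryClient` constructor, the shape of the `get_context()` return value, and the model name are illustrative assumptions, not a verbatim API reference; the clients themselves are created by the caller (step 1) and passed in.

```python
def build_prompt(context: str, question: str) -> str:
    # Step 4 (Prompt Engineering): place the retrieved context before the
    # user's actual question so the LLM has the background it needs.
    return (
        "Answer using the following context about the user.\n\n"
        f"--- Context ---\n{context}\n\n"
        f"--- Question ---\n{question}"
    )

def answer_with_memory(jean, llm, user_token: str, question: str) -> str:
    """jean: a JeanMemoryClient; llm: an OpenAI client.

    The get_context() signature and the string conversion of its result
    are assumptions based on this guide.
    """
    # Step 3 (Context Retrieval): the user_token identifies whose memory to read.
    context = jean.get_context(user_token=user_token, message=question)
    prompt = build_prompt(str(context), question)
    # Step 5 (LLM Call): send the final, context-rich prompt.
    response = llm.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever model you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```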
A Note on Authentication
The `user_token` is the critical piece that connects a request to a specific user's memory. In a production application, your frontend should use our React SDK's `<SignInWithJean />` component (or a manual OAuth 2.1 PKCE flow) to authenticate the user and receive this token. Your frontend then passes this token to your backend, which uses it to make authenticated requests with the Python SDK.
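On the backend, the token typically arrives in a standard `Authorization: Bearer <token>` header. A minimal helper for pulling it out might look like this (the helper name is ours for illustration; it is not part of the SDK):

```python
def extract_user_token(authorization_header: str) -> str:
    """Return the bearer token from an 'Authorization: Bearer <token>' value."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("Expected 'Authorization: Bearer <token>'")
    return token
```

Your web framework's request object gives you the raw header value; the extracted token is what you pass to the Python SDK's calls.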
Headless Authentication (Backend-Only)
For headless applications without a frontend, you have several options:
Configuration Options (Optional)
For 99% of use cases, the defaults work perfectly. But when you need control:

Advanced: Direct Tool Access
For advanced use cases, the `JeanMemoryClient` also provides a `tools` namespace for direct, deterministic access to the core memory functions.
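A sketch of what direct tool access might look like. The method names below (`add_memory`, `search_memory`) and their parameters are illustrative guesses at the `tools` namespace, not the confirmed API; consult the SDK reference for the real signatures.

```python
def store_and_search(jean, user_token: str):
    """Write a memory, then search it back deterministically.

    Assumes hypothetical jean.tools.add_memory / jean.tools.search_memory
    methods keyed by the same user_token used for context retrieval.
    """
    jean.tools.add_memory(user_token=user_token, content="Prefers dark mode")
    return jean.tools.search_memory(user_token=user_token, query="UI preferences")
```

Unlike `get_context()`, which lets Jean Memory decide what context is relevant, direct tool calls give you explicit control over exactly what is stored and retrieved.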