# Quickstart

Get a working AI agent with memory in under 5 minutes.

## 1. Setup
```bash
git clone https://github.com/zacxyonly/SimpleContext.git
cp -r SimpleContext/simplecontext .
cp -r SimpleContext/agents .
```
Create `config.yaml`:

```yaml
storage:
  backend: sqlite
  path: ./data.db

agents:
  folder: ./agents
  default: general

plugins:
  enabled: true
  folder: ./plugins
```
## 2. Write your first bot
```python
# main.py
from simplecontext import SimpleContext

sc = SimpleContext("config.yaml")

def your_llm(messages):
    # Replace with your actual LLM call.
    # Works with Gemini, OpenAI, Anthropic, Ollama, or any other provider.
    import litellm
    return litellm.completion(
        model="gemini/gemini-2.0-flash",
        messages=messages,
    ).choices[0].message.content

user_id = "user_123"

while True:
    message = input("You: ")
    if message.lower() == "exit":
        break

    # v4 API: one-liner
    ctx = sc.chat(user_id, message)
    reply = your_llm(ctx.messages)
    reply = ctx.save(reply)
    print(f"Bot: {reply}")
```
## 3. Run it

```bash
python main.py
```

```text
You: Hi, I'm Alice and I work with Python
Bot: Hi Alice! Python is a great choice. What are you working on?
You: I'm building a FastAPI service
Bot: Nice! FastAPI is excellent for building APIs. Are you using async endpoints?
You: What's my name again?
Bot: Your name is Alice, and you mentioned you're building a FastAPI service with Python.
```
SimpleContext automatically extracted the facts ("user name is Alice", "user uses Python", "user project FastAPI") and used them to answer the last question, with no extra code on your part.
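The extraction step can be pictured with a few regex rules. The patterns and fact templates below are hypothetical stand-ins, only illustrating the shape of the output; SimpleContext's real extractor is more sophisticated:

```python
import re

# Hypothetical patterns; SimpleContext's actual extractor is not rule-based like this.
FACT_PATTERNS = [
    (re.compile(r"\bI'?m ([A-Z]\w*)\b"), "user name is {}"),
    (re.compile(r"\bI work with (\w+)\b"), "user uses {}"),
    (re.compile(r"\bI'?m building an? (\w+)"), "user project {}"),
]

def extract_facts(message: str) -> list[str]:
    """Return simple subject-predicate facts found in a user message."""
    facts = []
    for pattern, template in FACT_PATTERNS:
        for match in pattern.finditer(message):
            facts.append(template.format(match.group(1)))
    return facts
```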
## LLM Provider Examples
```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_KEY")

def your_llm(messages):
    # The Anthropic API takes the system prompt separately from the history.
    # Default to "" so a missing system message doesn't raise StopIteration.
    sys_msg = next((m["content"] for m in messages if m["role"] == "system"), "")
    history = [m for m in messages if m["role"] != "system"]
    return client.messages.create(
        model="claude-3-5-sonnet-20241022",
        system=sys_msg,
        messages=history,
        max_tokens=1024,
    ).content[0].text
```
## What Happened?

When you called `sc.chat(user_id, message)`, SimpleContext:
- Routed the message to the best matching agent
- Planned retrieval based on detected intent
- Retrieved relevant memories from all tiers
- Scored and selected nodes within the context budget
- Built a structured prompt with memory context
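The scoring and budget-selection steps above can be pictured as a greedy pass over scored memory nodes. The node fields and scoring shown here are assumptions for illustration, not SimpleContext's actual internals:

```python
def select_nodes(nodes: list[dict], token_budget: int) -> list[dict]:
    """Greedily keep the highest-scoring nodes that fit within the token budget.

    Each node is assumed to carry 'text', 'score' (e.g. relevance x importance),
    and 'tokens' (its size once rendered into the prompt).
    """
    selected = []
    remaining = token_budget
    for node in sorted(nodes, key=lambda n: n["score"], reverse=True):
        if node["tokens"] <= remaining:
            selected.append(node)
            remaining -= node["tokens"]
    return selected
```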
When you called `ctx.save(reply)`, SimpleContext:
- Stored the user message and assistant reply to working memory
- Extracted any facts mentioned (names, tools, projects)
- Deduplicated against existing facts using Jaccard similarity
- Updated importance scores of used nodes
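The deduplication step in the list above is easy to sketch: two facts are compared as word sets, and a new fact is dropped if it is near-identical to one already stored. The 0.8 threshold here is an assumption, not SimpleContext's configured value:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two facts, compared as lowercase word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def is_duplicate(new_fact: str, existing: list[str], threshold: float = 0.8) -> bool:
    """True if new_fact is near-identical to any already-stored fact."""
    return any(jaccard(new_fact, old) >= threshold for old in existing)
```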
## Next Steps
- Memory Tiers — understand working/episodic/semantic
- Agent System — configure routing and personalities
- API Reference — full API documentation