Open Source Projects
Tools and libraries we've built and released to the community. All MIT licensed, all actively maintained.
soul.py
Featured
Persistent identity and memory for any LLM agent
Your AI forgets everything when the conversation ends. soul.py fixes that — from simple markdown injection to full RAG + RLM hybrid retrieval. v0.1 uses pure markdown files. v2.0 indexes those memories and automatically routes queries to semantic search (RAG) or exhaustive reasoning (RLM) based on what the question needs. Human-readable files you can edit and git-version, with intelligent retrieval under the hood.
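As an illustrative sketch only (soul.py's real router is internal and may use an LLM classifier rather than keywords), the RAG-vs-RLM decision can be pictured as a heuristic that sends broad, aggregate questions to exhaustive reasoning and pointed factual lookups to semantic search:

```python
# Hypothetical routing heuristic, for illustration only. It is not
# soul.py's actual implementation.
def route_query(query: str) -> str:
    """Return 'rlm' for exhaustive-reasoning queries, 'rag' for lookups."""
    exhaustive_markers = ("every", "all of", "how many", "summarize", "over time")
    q = query.lower()
    return "rlm" if any(m in q for m in exhaustive_markers) else "rag"

print(route_query("How many times did we change the deadline?"))  # rlm
print(route_query("What did I say my favorite editor was?"))      # rag
```

The point of the split: semantic search is cheap and precise for single facts, while questions about the whole memory corpus need to walk everything.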
pip install soul-agent
🔥 Community Response
soul.py hit #1 on r/ollama with 50,000+ views in under 48 hours.
soul.py — Persistent memory for any LLM in 10 lines (works with Ollama, no database)
by u/the_ai_scientist in r/ollama
📖 The Book — Now on Amazon!
Soul: Building AI Agents That Remember Who They Are — the complete guide to persistent AI memory. Covers identity vs memory architecture, RAG + RLM hybrid retrieval, multi-agent coordination, and Darwinian evolution of agent identity. Working code in every chapter.
→ Get on Amazon | → Gumroad Bundle (PDF + setup wizard + cheatsheets)
🤖 Meet Darwin — the AI companion built with soul.py that helps you explore the book. A living demonstration of everything it teaches.
🗺️ Roadmap
Planned features and improvements. PRs welcome!
Vector Database Support
- ✅ Qdrant (current)
- ✅ ChromaDB (local, zero-config) v0.1.2
- 🔜 RuVector — self-learning vector DB (GNN improves search over time, tamper-proof audit chain, graph queries, MIT/free forever) — github.com/ruvnet/ruvector
- 🔲 pgvector (PostgreSQL)
- 🔲 FAISS (local, fast)
- 🔲 Pinecone (cloud)
- 🔲 Weaviate
Embedding Providers
- ✅ Azure OpenAI (current)
- ✅ OpenAI direct v0.1.2
- 🔲 Cohere
- 🔲 Local embeddings (sentence-transformers)
- 🔲 Ollama embeddings
CLI & Developer Experience
- ✅ soul init wizard
- ✅ soul chat interactive CLI v0.1.2
- ✅ soul status memory stats v0.1.2
- ✅ Graceful Ollama/local handling in CLI v0.1.3
- 🔲 config.yaml file support
- 🔲 VSCode extension
Memory Features
- ✅ Timestamped conversation logging
- ✅ RAG + RLM hybrid routing
- 🔲 Automatic memory summarization
- 🔲 Memory importance scoring
- 🔲 Tiered memory (hot/warm/cold)
- 🔲 Archive-before-prune — index to vector DB before deleting old files
- 🔲 Frozen storage — S3/GCS backup for disaster recovery
- 🔲 Memory export/import
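For anyone picking up the tiered-memory item, the simplest starting point is an age-based rule; the thresholds below are hypothetical, not a committed design:

```python
from datetime import datetime, timedelta

def memory_tier(last_accessed: datetime, now: datetime) -> str:
    """Hypothetical tiering rule: hot under 7 days, warm under 90, else cold."""
    age = now - last_accessed
    if age < timedelta(days=7):
        return "hot"
    if age < timedelta(days=90):
        return "warm"
    return "cold"

now = datetime(2025, 6, 1)
print(memory_tier(datetime(2025, 5, 30), now))  # hot
print(memory_tier(datetime(2025, 4, 1), now))   # warm
print(memory_tier(datetime(2024, 1, 1), now))   # cold
```

A real implementation would likely combine age with importance scoring, but an age-only rule is enough to drive archive-before-prune.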
Retrieval Enhancements
- 🔲 LLM Reranking — score/filter RAG results before generation
- 🔲 Hybrid search — combine BM25 + semantic scores
- 🔲 Query expansion — LLM rewrites query for better recall
- 🔲 Dynamic snippet extraction — context windows around matches
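One common way to implement the hybrid-search item is reciprocal rank fusion (RRF), which merges BM25 and semantic rankings without needing to normalize their incompatible score scales. This is a generic sketch, not soul.py code:

```python
def rrf_fuse(bm25_ranking, semantic_ranking, k=60):
    """Fuse two ranked lists of doc ids with reciprocal rank fusion."""
    scores = {}
    for ranking in (bm25_ranking, semantic_ranking):
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1/(k + rank); documents ranked well
            # in both lists accumulate the highest fused score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks well in both lists, so it wins the fused ranking.
print(rrf_fuse(["a", "b", "c"], ["b", "c", "a"]))  # ['b', 'a', 'c']
```

The constant k=60 is the value commonly used in the RRF literature; it dampens the advantage of the very top ranks.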
Integrations
- ✅ Anthropic Claude
- ✅ OpenAI
- ✅ Ollama / OpenAI-compatible
- ✅ Google Gemini v0.1.6
- ✅ LangChain memory backend langchain-soul v0.1.1
- ✅ LlamaIndex integration llamaindex-soul v0.1.1
- 🔲 n8n node (official)
soul-legacy
Your digital estate vault — encrypted, AI-queryable, with a dead man's switch
When someone dies, their family spends months hunting for documents. soul-legacy fixes that. Store your assets, insurance, wills, debts, beneficiaries, and final wishes in one encrypted vault. Upload documents and ask questions in plain English. Configure a dead man's switch to automatically grant scoped access to your designated inheritors when the time comes.
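At its core, a dead man's switch is a check-in timer. A minimal sketch assuming a 90-day grace period; soul-legacy's actual policy, storage, and API may differ:

```python
from datetime import datetime, timedelta

def switch_triggered(last_checkin: datetime, now: datetime,
                     grace: timedelta = timedelta(days=90)) -> bool:
    """Grant inheritor access once the owner misses check-ins past the grace period."""
    return now - last_checkin > grace

print(switch_triggered(datetime(2025, 1, 1), datetime(2025, 2, 1)))  # False
print(switch_triggered(datetime(2025, 1, 1), datetime(2025, 6, 1)))  # True
```

In practice the trigger would also send reminder notifications before firing and release only the scopes each inheritor was granted.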
pip install soul-legacy
soul-schema
Auto-document your data warehouse in 3 minutes
You inherit 100 tables. Zero docs. Columns named cust_ltv, flg_b2b, reg_cd. The person who knew what they meant left in 2019. soul-schema connects to any database, reads the schema, and uses an LLM to generate human-readable descriptions for every table and column. Corrections are "locked" — future runs won't overwrite your edits. The semantic layer learns over time.
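The locked-corrections behavior reduces to a merge in which human edits always beat freshly regenerated LLM output. A sketch with assumed dict shapes, not soul-schema's actual internals:

```python
def merge_descriptions(generated: dict, locked: dict) -> dict:
    """Human-locked column descriptions always override regenerated LLM output."""
    merged = dict(generated)
    merged.update(locked)  # locked entries win every conflict
    return merged

generated = {"cust_ltv": "Customer lifetime value?", "flg_b2b": "B2B flag"}
locked = {"cust_ltv": "Customer lifetime value in EUR, refreshed nightly"}
print(merge_descriptions(generated, locked)["cust_ltv"])
# Customer lifetime value in EUR, refreshed nightly
```

Re-running generation can then improve unlocked descriptions without ever clobbering a correction a human has made.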
pip install soul-schema
crewai-soul
The soul ecosystem for CrewAI agents
CrewAI's built-in memory is a black box. crewai-soul stores memories in human-readable markdown files you can edit and git-version. Same drop-in API, full RAG+RLM hybrid retrieval under the hood via soul-agent. Choose local (file-based) or managed (SoulMate API) — same great memory either way.
pip install crewai-soul
✨ What's Included
- soul-agent: RAG + RLM hybrid memory (required dep)
- soul-schema: Database semantic layers (required dep)
- SoulMateMemory: Drop-in managed cloud backend
- SchemaMemory: Database context for Text-to-SQL agents
langchain-soul
The soul ecosystem for LangChain
Drop-in persistent memory for LangChain. Same soul-agent RAG+RLM, same SoulMate cloud option, same SchemaMemory for database intelligence. Works with ConversationChain, RunnableWithMessageHistory, and any LangChain component that uses memory.
pip install langchain-soul
llamaindex-soul
The soul ecosystem for LlamaIndex
Drop-in chat storage for LlamaIndex. Uses soul-agent's hybrid RAG+RLM retrieval under the hood. Works with ChatMemoryBuffer, FunctionAgent, and any LlamaIndex component that uses chat stores. Same file-based or SoulMate cloud options as the rest of the ecosystem.
pip install llamaindex-soul
🐳 soul-stack
New
One Docker command to give n8n persistent memory
n8n is stateless by design — every workflow execution starts fresh. soul-stack fixes that. A single Docker container running n8n + soul.py + Jupyter Lab. Your workflows can now remember previous interactions, build context over time, and make intelligent decisions based on history. Works with Anthropic, OpenAI, or 100% local with Ollama.
docker run -d -p 8000:8000 -p 8888:8888 -p 5678:5678 -e ANTHROPIC_API_KEY=sk-ant-... pgmenon/soul-stack:latest
✨ Features (v0.1.3)
- Multi-provider: Anthropic, OpenAI, or Ollama (100% local)
- Backend selection: BM25 (default), ChromaDB, or Qdrant via SOUL_BACKEND
- OpenAI embeddings: direct support, not just Azure
- CLI tools: soul chat and soul status with graceful Ollama handling
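Backend selection via an environment variable typically looks like the following; the accepted values mirror the list above, but treat this as a sketch rather than soul-stack's exact code:

```python
import os

def pick_backend() -> str:
    """Read SOUL_BACKEND, defaulting to BM25; reject unknown backends early."""
    backend = os.environ.get("SOUL_BACKEND", "bm25").lower()
    if backend not in {"bm25", "chromadb", "qdrant"}:
        raise ValueError(f"unsupported SOUL_BACKEND: {backend}")
    return backend

os.environ["SOUL_BACKEND"] = "chromadb"
print(pick_backend())  # chromadb
```

Failing fast on a typo here is kinder than silently falling back to BM25 and leaving users wondering why their vector DB is empty.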
Looking for Enterprise?
SoulMate brings soul.py to production at scale — HIPAA-compliant healthcare, telecom support for millions of customers, financial services personalization. The commercial embodiment of persistent AI memory.
Licensing: The SoulMate API backend (soulmate-api) is source-available under BSL 1.1. Self-host freely, deploy on your own cloud — you just can't resell it as a competing hosted service. Automatically converts to MIT on March 4, 2030. Also available on Docker Hub.
Learn About SoulMate →
Want to Contribute?
All projects welcome PRs. Check the GitHub issues for good first contributions, or open a discussion if you have ideas.