# litecrew: Multi-Agent Orchestration in ~150 Lines of Python
**TL;DR:** litecrew is multi-agent orchestration in ~150 lines of Python. Sequential handoffs, parallel execution, tool calling, token tracking — no YAML, no config files, no framework to learn. Install:

```bash
pip install litecrew
```
Most multi-agent libraries want you to learn their abstractions, their decorators, their 47 integration patterns. You just want two LLMs to pass data to each other.
20% of the features. 1% of the code. Zero learning curve.
## Why Does This Need to Exist?
| Framework | To run 2 agents in sequence, you need… |
|---|---|
| CrewAI | Crew, Task, Agent classes, YAML config, decorators |
| LangGraph | StateGraph, nodes, edges, conditional routing |
| AutoGen | ConversableAgent, GroupChat, GroupChatManager |
| litecrew | `sequential(agent1, agent2)` |
We’re not better. We’re smaller. If you need complex orchestration, use the big frameworks. If you need something working in 5 minutes, we’re here.
## What Does a Complete Workflow Look Like?
```python
from litecrew import Agent, crew

researcher = Agent(
    name="researcher",
    model="gpt-4o-mini",
    system="You are a research assistant. Find key facts."
)

writer = Agent(
    name="writer",
    model="claude-3-5-sonnet-20241022",
    system="You are a writer. Create engaging content."
)

@crew(researcher, writer)
def write_article(topic: str) -> str:
    research = researcher(f"Research {topic}, return 5 key facts")
    return writer(f"Write an article using: {research}")

article = write_article("quantum computing")
```
That’s a complete multi-agent system in 15 lines.
## How Do I Install litecrew?
```bash
pip install litecrew             # Core
pip install litecrew[openai]     # With OpenAI
pip install litecrew[anthropic]  # With Anthropic
pip install litecrew[all]        # Everything + memory
```
## What Features Does litecrew Include?
**Sequential handoffs:**

```python
from litecrew import Agent, sequential

pipeline = sequential(researcher, writer, editor)
result = pipeline("Write about AI safety")
```
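If you're curious what such a combinator takes, here is a minimal sketch of how a `sequential()` helper could be written in plain Python. This is a hypothetical illustration, not litecrew's actual source; the `researcher` and `writer` lambdas are stubs standing in for real LLM-backed agents.

```python
from typing import Callable

def sequential(*agents: Callable[[str], str]) -> Callable[[str], str]:
    """Chain agents so each one's output becomes the next one's input."""
    def pipeline(prompt: str) -> str:
        result = prompt
        for agent in agents:
            result = agent(result)  # hand off to the next agent
        return result
    return pipeline

# Stub agents stand in for real LLM calls
researcher = lambda p: f"facts({p})"
writer = lambda p: f"article({p})"

pipeline = sequential(researcher, writer)
print(pipeline("AI safety"))  # article(facts(AI safety))
```

The whole trick is function composition: an agent is just a callable from string to string, so a pipeline is a fold over the agent list.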
**Parallel fan-out:**

```python
from litecrew import Agent, parallel

security = Agent("security", system="Review for security issues.")
performance = Agent("performance", system="Review for performance.")
style = Agent("style", system="Review for code style.")

review_all = parallel(security, performance, style)
results = review_all(code_to_review)
# Returns: ["Security: SQL injection...", "Performance: ...", "Style: ..."]
```
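A fan-out combinator like this is also small. Below is a hypothetical sketch (again, not litecrew's actual source) using a thread pool, since LLM calls are I/O-bound; the stub lambdas stand in for real agents.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def parallel(*agents: Callable[[str], str]) -> Callable[[str], List[str]]:
    """Send the same prompt to every agent concurrently."""
    def fan_out(prompt: str) -> List[str]:
        with ThreadPoolExecutor(max_workers=len(agents)) as pool:
            # pool.map preserves agent order regardless of completion order
            return list(pool.map(lambda agent: agent(prompt), agents))
    return fan_out

security = lambda code: f"Security review of {code}"
style = lambda code: f"Style review of {code}"

review_all = parallel(security, style)
print(review_all("app.py"))
```

Note that `pool.map` returns results in the order the agents were given, not the order they finish, which keeps the output list predictable.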
**Tool calling:**

```python
from litecrew import Agent, tool

@tool(schema={
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"]
})
def search(query: str) -> str:
    return f"Results for: {query}"

agent = Agent("assistant", tools=[search])
response = agent("Search for the latest AI news")
```
**Token tracking:**

```python
response = agent("What is 2+2?")
print(agent.tokens)  # {"in": 15, "out": 8}
```
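Keeping running totals like this is a thin wrapper around the underlying call. A hypothetical sketch follows; real providers return exact token counts in their API responses, so the whitespace split here is only a stand-in for a tokenizer.

```python
class TrackedAgent:
    """Wrap an agent callable and accumulate token usage across calls."""
    def __init__(self, respond):
        self.respond = respond              # underlying agent callable
        self.tokens = {"in": 0, "out": 0}   # running totals

    def __call__(self, prompt: str) -> str:
        reply = self.respond(prompt)
        self.tokens["in"] += len(prompt.split())
        self.tokens["out"] += len(reply.split())
        return reply

agent = TrackedAgent(lambda p: "2 + 2 = 4")
agent("What is 2+2?")
print(agent.tokens)  # {'in': 3, 'out': 5}
```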
**Persistent memory (optional):**

```python
from litecrew import Agent, with_memory

agent = with_memory(Agent("assistant"), namespace="my-bot")
agent("My name is Alice")
# ... restart ...
agent("What's my name?")  # "Alice"
```
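Persistence of this kind can be as simple as writing the conversation to disk keyed by namespace. The sketch below is hypothetical (its `with_memory` takes an explicit `root` directory, unlike the signature above, purely to keep the example self-contained); the lambdas stand in for real agents.

```python
import json
import tempfile
from pathlib import Path

def with_memory(respond, namespace: str, root: Path):
    """Wrap an agent so its history survives restarts, one JSON file per namespace."""
    path = root / f"{namespace}.json"

    def remembering_agent(prompt: str) -> str:
        history = json.loads(path.read_text()) if path.exists() else []
        history.append(prompt)
        path.write_text(json.dumps(history))
        # hand the full accumulated history to the underlying agent
        return respond("\n".join(history))

    return remembering_agent

root = Path(tempfile.mkdtemp())
agent = with_memory(lambda ctx: f"context: {ctx}", "my-bot", root)
agent("My name is Alice")

# A freshly constructed wrapper over the same namespace still sees prior turns,
# which is what survives a process restart.
agent2 = with_memory(lambda ctx: f"context: {ctx}", "my-bot", root)
print(agent2("What's my name?"))
```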
## Does litecrew Work With Local Models?
Yes. Anything with an OpenAI-compatible API works:
```python
import openai
from litecrew import Agent

openai.base_url = "http://localhost:11434/v1"  # Ollama
openai.api_key = "ollama"

agent = Agent("local", model="llama3.2")
response = agent("Hello!")
```
Mix cloud and local in the same workflow:
```python
researcher = Agent("researcher", model="llama3.2")  # Free via Ollama
writer = Agent("writer", model="gpt-4o")            # Quality via OpenAI
```
Works with Ollama, LM Studio, vLLM, LocalAI, and any other OpenAI-compatible server.
## How Does litecrew Compare to Alternatives?
| Feature | litecrew | CrewAI | LangGraph | AutoGen |
|---|---|---|---|---|
| Lines of Code | ~150 | ~15,000 | ~50,000 | ~30,000 |
| Learning Curve | Minutes | Hours | Days | Days |
| Sequential Handoffs | ✅ | ✅ | ✅ | ✅ |
| Parallel Execution | ✅ | ✅ | ✅ | ✅ |
| Hierarchical Agents | ❌ | ✅ | ✅ | ✅ |
| State Machines | ❌ | ⚠️ | ✅ | ✅ |
| Human-in-Loop | ❌ | ✅ | ✅ | ✅ |
| YAML Config | ❌ | ✅ | ❌ | ❌ |
| Streaming | ❌ | ✅ | ✅ | ✅ |
**The deal:** We do 20% of what CrewAI does in 1% of the code. That's a tradeoff. If you need the other 80%, you've outgrown us — and that's fine.
## When Should I Use litecrew?
Use litecrew when:
- You have 2-5 agents passing data
- You’re prototyping quickly
- You want readable, debuggable code
- You’re learning multi-agent patterns
Don’t use litecrew when:
- You need hierarchical management
- You need complex state machines
- You need human approval workflows
- You need 47 integrations
If you need more, fork it (it's ~150 lines) or graduate to CrewAI with `crewai-soul`.
## Is litecrew Really BYOK?
Yes. litecrew never touches your API keys.
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```
The official `openai` and `anthropic` libraries read these environment variables automatically; litecrew just calls those libraries.
- ✅ No litecrew account required
- ✅ No API proxy
- ✅ No telemetry
- ✅ No key storage
- ✅ Works with local models (Ollama via OpenAI-compatible API)
## The Soul Ecosystem
litecrew is part of a family of simple, composable AI tools:
| Package | Purpose | When to Use |
|---|---|---|
| litecrew | Minimal orchestration | Prototypes, simple workflows |
| soul-agent | Persistent memory | Add memory to any agent |
| crewai-soul | CrewAI + memory | Full framework power |
| langchain-soul | LangChain + memory | Complex chains |
| llamaindex-soul | LlamaIndex + memory | RAG pipelines |
## Philosophy

> "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint-Exupéry
Most frameworks race to add features. We race to keep them out.
**The SQLite strategy:** SQLite doesn't try to be PostgreSQL. It does one thing well and says "if you need more, use something else." That's litecrew.
## Frequently Asked Questions
### What is litecrew?
A minimal Python library for multi-agent orchestration. Sequential, parallel, tools, tokens — nothing else.
### How do I install litecrew?

```bash
pip install litecrew        # Core
pip install litecrew[all]   # With OpenAI, Anthropic, memory
```
### Does litecrew work with Ollama?

Yes. Set `openai.base_url = "http://localhost:11434/v1"` and use any local model.
### Will you add feature X?
Probably not. The value is staying small. Fork it if you need more.
### Is this production-ready?
For simple workflows with 2-5 agents, yes. For complex enterprise needs, use CrewAI.
### Do you store my API keys?
No. litecrew never sees your keys. The official SDKs read them from environment variables.
### Can I use different models for different agents?
Yes. Each agent has its own model parameter — mix OpenAI, Anthropic, and local models freely.
## Community

> litecrew – Multi-agent orchestration in ~100 lines (no frameworks, no magic)
> — by u/the_ai_scientist in r/Python