Gitclaw: The AI Agent Framework Where Your Agent IS a Git Repo

By Prahlad Menon · 4 min read

What if your AI agent wasn’t a black box running on someone else’s infrastructure, but a git repository you could fork, branch, diff, and merge? That’s the premise behind Gitclaw (formerly GitAgent Protocol), an open-source framework by open-gitagent that makes git the native substrate for AI agents.

Your Agent Is a Repo

In Gitclaw, every aspect of an agent’s existence lives as version-controlled files:

  • agent.yaml — The manifest. Think docker-compose.yml but for an AI agent. Declares the model, tools, permissions, and configuration.
  • SOUL.md — The agent’s identity. Who it is, how it thinks, what personality it carries.
  • RULES.md — Constraints and boundaries. What the agent can and cannot do.
  • memory/ — Persistent memory as files. Git log becomes the agent’s episodic history.
  • tools/ and skills/ — Capabilities the agent can invoke, defined declaratively.
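The article doesn't spell out the manifest schema, but a minimal agent.yaml in this spirit might look like the sketch below. Every field name here is an illustrative assumption, not Gitclaw's actual schema:

```yaml
# Illustrative manifest only -- field names are assumptions, not Gitclaw's real schema.
name: docs-helper
model: claude-sonnet          # hypothetical model identifier
tools:
  - read_file
  - write_file
permissions:
  network: false              # deny outbound network access
memory: memory/               # where persistent memory files live
```

The appeal of the declarative form is exactly the Docker Compose appeal: the whole agent definition is diffable, reviewable text.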

This isn’t just an organizational pattern — it’s a paradigm shift. You can git diff two versions of an agent’s rules to see exactly what changed. You can git log memory/ to trace how its knowledge evolved. You can fork an agent, tweak its SOUL.md, and have an entirely different personality running from the same codebase.
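You can try the diff-and-log workflow in a throwaway repo. This sketch assumes only the file convention above (RULES.md under version control) and standard git; the tag names and rule text are made up:

```shell
# Simulate two versions of an agent's rules in a scratch repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email agent@example.com
git config user.name agent

echo "Never delete files." > RULES.md
git add RULES.md && git commit -qm "initial rules"
git tag v1

echo "Never delete files. Ask before writing outside memory/." > RULES.md
git add RULES.md && git commit -qm "tighten rules"
git tag v2

# Exactly what changed between two versions of the agent's rules:
git diff v1..v2 -- RULES.md

# How the rules evolved over time:
git log --oneline -- RULES.md
```

The same two commands work unchanged on memory/ or SOUL.md, which is the point: the agent's history is just git history.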

Getting Started in 30 Seconds

Gitclaw is written in TypeScript and installs with a single command:

npm install -g gitclaw

That’s it. It supports 383 models out of the box — OpenAI, Anthropic, local models, you name it. The SDK mirrors the Claude Agent SDK pattern but runs entirely in-process. No subprocesses, no container orchestration, no infrastructure headaches.

Local Repo Mode

One of Gitclaw’s most compelling features is local repo mode. Clone any GitHub repository, point Gitclaw at it, and the agent works directly on the codebase. Every change gets auto-committed to a session branch, giving you a complete audit trail of what the agent did and why. Roll back a bad decision with git revert. Compare approaches across branches. Code review your agent’s work the same way you review a colleague’s PR.
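The session-branch pattern can be sketched with plain git commands in a throwaway repo. The branch name agent/session-001 and the edit itself are illustrative, not something Gitclaw prescribes:

```shell
# Simulate an agent session: work on a branch, then roll back a bad change.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email agent@example.com
git config user.name agent

echo "console.log('hi')" > app.js
git add . && git commit -qm "baseline"

# Agent work lands on its own session branch, one commit per change:
git checkout -q -b agent/session-001
echo "console.log('hello world')" > app.js
git commit -qam "agent: reword greeting"

# Roll back the agent's decision without losing the audit trail:
git revert --no-edit HEAD
```

After the revert, app.js is back to its baseline contents, and both the change and the rollback remain in history for review.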

Safety Without Sacrificing Speed

AI agents that can execute tools need guardrails. Gitclaw builds these in through lifecycle hooks: preToolUse and postToolUse callbacks that let you gate dangerous operations before they execute. Want to require human approval before any file deletion? Add a hook. Need to log every API call for compliance? There’s a hook for that.
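The gating idea can be sketched generically in TypeScript. The types and the blockDeletes hook below are illustrative assumptions, not Gitclaw's actual hook API:

```typescript
// Sketch of a preToolUse gate, not the real Gitclaw API: each hook can
// veto a tool call before it runs.
type ToolCall = { tool: string; args: Record<string, unknown> };
type PreToolUseHook = (call: ToolCall) => { allow: boolean; reason?: string };

// Hypothetical guardrail: file deletions are never auto-approved.
const blockDeletes: PreToolUseHook = (call) =>
  call.tool === "delete_file"
    ? { allow: false, reason: "file deletion requires human approval" }
    : { allow: true };

function runTool(call: ToolCall, hooks: PreToolUseHook[]): string {
  for (const hook of hooks) {
    const verdict = hook(call);
    if (!verdict.allow) return `blocked: ${verdict.reason}`;
  }
  // In a real framework the tool would execute here.
  return `executed: ${call.tool}`;
}

console.log(runTool({ tool: "read_file", args: { path: "SOUL.md" } }, [blockDeletes]));
console.log(runTool({ tool: "delete_file", args: { path: "RULES.md" } }, [blockDeletes]));
```

The design choice worth noting: because hooks run in-process before the tool does, a veto costs nothing, and composing guardrails is just composing an array of functions.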

The framework supports configurable risk levels and human-in-the-loop workflows. Every tool invocation gets logged to .gitagent/audit.jsonl, creating an append-only audit trail. For regulated industries or security-conscious teams, this isn’t a nice-to-have — it’s table stakes.
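An append-only JSONL audit trail like the one described takes only a few lines of Node.js TypeScript. The record shape here is an assumption, not Gitclaw's actual schema:

```typescript
// Sketch of a .gitagent/audit.jsonl-style log; record fields are illustrative.
import { appendFileSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const auditDir = mkdtempSync(join(tmpdir(), "gitagent-"));
const auditPath = join(auditDir, "audit.jsonl");

function logToolUse(tool: string, args: Record<string, unknown>, outcome: string): void {
  const entry = { ts: new Date().toISOString(), tool, args, outcome };
  // One JSON object per line: trivial to tail, grep, and replay.
  appendFileSync(auditPath, JSON.stringify(entry) + "\n");
}

logToolUse("read_file", { path: "SOUL.md" }, "ok");
logToolUse("delete_file", { path: "RULES.md" }, "blocked");

const lines = readFileSync(auditPath, "utf8").trim().split("\n");
console.log(lines.length); // prints 2
```

JSONL earns its keep here: appends are atomic enough for a single process, and each line stays independently parseable even if a later write is interrupted.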

The Docker Compose Analogy

The best way to understand Gitclaw’s philosophy is the “Docker Compose for AI agents” analogy. Docker Compose gave us declarative YAML to define multi-container applications — write once, run anywhere, version control your infrastructure. Gitclaw does the same for agents. Your agent.yaml is portable. Check it into a repo, share it with your team, deploy it on any machine with Node.js installed.

Part of a Bigger Ecosystem

Gitclaw doesn’t exist in isolation. It connects to the emerging agent ecosystem:

  • AGENTS.md (the OpenAI-popularized convention for agent instructions)
  • Google ADK (Agent Development Kit)
  • A2A protocol (Agent-to-Agent communication)

There’s even a Voice UI at localhost:3333 for conversational interaction with your agents during development.

Why This Matters

We’re in the early innings of AI agent development, and most frameworks treat agents as ephemeral processes — spin up, do work, disappear. Gitclaw bets that agents should be durable, auditable, and composable — the same properties that made git indispensable for code.

When your agent is a repo, collaboration looks like pull requests. Governance looks like branch protection rules. Evolution looks like commit history. The tools developers already know become the tools for managing AI.

That’s not just elegant engineering. It’s a bet that the future of AI agents looks a lot like the future of software — open, version-controlled, and built to last.

GitHub: github.com/open-gitagent/gitagent