Build AI agents that actually do things.
Combine local tools and MCP servers in a single, elegant runtime.
Write agents in 5 lines of code. Run them anywhere.
Instead of spending days wiring together LLMs, tools, and execution environments, Agentic Framework gives you a production-ready setup instantly.
- Write Less, Do More: Create a fully functional agent with just 5 lines of Python using the zero-config `@AgentRegistry.register` decorator.
- Context is King (MCP): Native integration with Model Context Protocol (MCP) servers to give your agents live data (web search, APIs, internal databases).
- Hardcore Local Tools: Built-in blazing-fast tools (`ripgrep`, `fd`, AST parsing) so your agents can explore and understand local codebases out of the box.
- Stateful & Resilient: Powered by LangGraph to support memory, cyclic reasoning, and human-in-the-loop workflows.
- Docker-First Isolation: Every agent runs in an isolated container, so there's no more "it works on my machine" when sharing with your team.
With a single command, the framework orchestrates 3 distinct AI sub-agents working together to plan a trip, built entirely in just 126 lines of Python.
- 🧰 Available Out of the Box
- 🚀 Quick Start (Zero to Agent in 60s)
- 🛠️ Build Your Own Agent
- 🏗️ Architecture
- 💻 CLI Reference
- 🧑‍💻 Local Development
- 🎬 See it in Action
- 🤝 Contributing
| Agent | Purpose | MCP Servers | Local Tools |
|---|---|---|---|
| `developer` | Code Master: Read, search & edit code. | `webfetch` | All codebase tools below |
| `travel-coordinator` | Trip Planner: Orchestrates agents. | `kiwi-com-flight-search`, `webfetch` | Uses 3 sub-agents |
| `chef` | Chef: Recipes from your fridge. | `webfetch` | - |
| `news` | News Anchor: Aggregates top stories. | `webfetch` | - |
| `travel` | Flight Booker: Finds the best routes. | `kiwi-com-flight-search` | - |
| `simple` | Chat Buddy: Vanilla conversational agent. | - | - |
| `github-pr-reviewer` | PR Reviewer: Reviews diffs, posts inline comments & summaries. | - | `get_pr_diff`, `get_pr_comments`, `post_review_comment`, `post_general_comment`, `reply_to_review_comment`, `get_pr_metadata` |
| Tool | Capability | Example |
|---|---|---|
| `find_files` | Fast search via `fd` | `*.py` finds Python files |
| `discover_structure` | Directory tree mapping | Understands project layout |
| `get_file_outline` | AST signature parsing (Python, TS, Go, Rust, Java, C++, PHP) | Extracts classes/functions |
| `read_file_fragment` | Precise file reading | `file.py:10:50` |
| `code_search` | Fast search via `ripgrep` | Global regex search |
| `edit_file` | Safe file editing | Inserts/replaces lines |
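The `read_file_fragment` example above uses a `path:start:end` spec. A tiny helper to parse that form could look like this (illustrative sketch, not the framework's actual parser):

```python
def parse_fragment_spec(spec: str) -> tuple[str, int, int]:
    """Split a 'path:start:end' spec like 'file.py:10:50' into parts.

    rsplit from the right keeps any colons inside the path intact.
    """
    path, start, end = spec.rsplit(":", 2)
    return path, int(start), int(end)


print(parse_fragment_spec("file.py:10:50"))  # -> ('file.py', 10, 50)
```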
📝 Advanced: `edit_file` Formats

RECOMMENDED: `search_replace` (no line numbers needed)

```json
{"op": "search_replace", "path": "file.py", "old": "exact text", "new": "replacement text"}
```

Line-based operations: `replace:path:start:end:content` | `insert:path:after_line:content` | `delete:path:start:end`
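Concretely, the two styles could be emitted like this (hypothetical file names and line numbers; only the formats documented above are assumed):

```python
import json

# search_replace op as a JSON payload (the recommended form)
op = {
    "op": "search_replace",
    "path": "file.py",
    "old": "exact text",
    "new": "replacement text",
}
payload = json.dumps(op)

# Line-based ops follow the colon-delimited forms documented above.
replace_op = "replace:file.py:10:12:new content"
insert_op = "insert:file.py:42:added line"
delete_op = "delete:file.py:10:12"
```

`search_replace` is recommended because it stays valid even when earlier edits shift line numbers.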
| Server | Purpose | API Key Needed? |
|---|---|---|
| `kiwi-com-flight-search` | Search real-time flights | 🟢 No |
| `webfetch` | Extract clean text from URLs & web search | 🟢 No |
The framework supports 10+ LLM providers out of the box, covering the major cloud and local options:
| Provider | Type | Use Case |
|---|---|---|
| Anthropic | Cloud | State-of-the-art reasoning (Claude) |
| OpenAI | Cloud | GPT-4, GPT-4.1, o1 series |
| Azure OpenAI | Cloud | Enterprise OpenAI deployments |
| Google GenAI | Cloud | Gemini models via API |
| Google Vertex AI | Cloud | Gemini models via GCP |
| Groq | Cloud | Ultra-fast inference |
| Mistral AI | Cloud | European privacy-focused models |
| Cohere | Cloud | Enterprise RAG and Command models |
| AWS Bedrock | Cloud | Anthropic, Titan, Meta via AWS |
| Ollama | Local | Run LLMs locally (zero API cost) |
| Hugging Face | Cloud | Open models from Hugging Face Hub |
Provider Priority: Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (fallback)
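The priority order above amounts to a first-match scan over environment variables. A simplified sketch (the framework's real detection logic may differ in detail):

```python
# Priority-ordered (provider, credential env var) pairs, mirroring the
# order stated above. Variable names match the .env template below.
PRIORITY = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("google-vertex", "GOOGLE_VERTEX_PROJECT_ID"),
    ("google-genai", "GOOGLE_API_KEY"),
    ("azure", "AZURE_OPENAI_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("mistral", "MISTRAL_API_KEY"),
    ("cohere", "COHERE_API_KEY"),
    ("bedrock", "AWS_PROFILE"),
    ("huggingface", "HUGGINGFACEHUB_API_TOKEN"),
    ("ollama", "OLLAMA_BASE_URL"),
]


def detect_provider(env: dict[str, str]) -> str:
    # The first provider whose credential is present wins;
    # OpenAI is the default fallback.
    for name, var in PRIORITY:
        if env.get(var):
            return name
    return "openai"


print(detect_provider({"GROQ_API_KEY": "gsk-..."}))  # -> 'groq'
```

In practice this means you can set a single key and the framework picks that provider automatically.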
You need an LLM API key to breathe life into your agents. The framework supports 10+ LLM providers via LangChain!
```bash
# Copy the template
cp .env.example .env

# Edit .env and paste your API key
# Choose one of the following providers:
# OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_API_KEY=your-google-key
# GROQ_API_KEY=gsk-your-key-here
# MISTRAL_API_KEY=your-mistral-key-here
# COHERE_API_KEY=your-cohere-key-here

# For Ollama (local), no API key needed:
# OLLAMA_BASE_URL=http://localhost:11434

# For Azure OpenAI:
# AZURE_OPENAI_API_KEY=your-azure-key
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# For Google Vertex AI:
# GOOGLE_VERTEX_PROJECT_ID=your-project-id

# For AWS Bedrock:
# AWS_PROFILE=your-profile

# For Hugging Face:
# HUGGINGFACEHUB_API_TOKEN=your-hf-token
```
⚠️ Note: Set your preferred provider's API key. Priority: Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (default fallback).
No pip, no virtualenv, no "it works on my machine" excuses.
```bash
# Clone the repository
git clone https://github.com/jeancsil/agentic-framework.git
cd agentic-framework

# Build the Docker image
make docker-build

# Unleash your first agent!
bin/agent.sh developer -i "Explain this codebase"

# Or try the chef agent
bin/agent.sh chef -i "I have chicken, rice, and soy sauce. What can I make?"
```

🔑 Required Environment Variables
| Provider | Variable | Required? | Default Model |
|---|---|---|---|
| Anthropic | `ANTHROPIC_API_KEY` | 🟢 Yes* | claude-haiku-4-5-20251001 |
| OpenAI | `OPENAI_API_KEY` | 🟢 Yes* | gpt-4o-mini |
| Azure OpenAI | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` | ⚪ No | gpt-4o-mini |
| Google GenAI | `GOOGLE_API_KEY` | ⚪ No | gemini-2.0-flash-exp |
| Google Vertex AI | `GOOGLE_VERTEX_PROJECT_ID` | ⚪ No | gemini-2.0-flash-exp |
| Groq | `GROQ_API_KEY` | ⚪ No | llama-3.3-70b-versatile |
| Mistral AI | `MISTRAL_API_KEY` | ⚪ No | mistral-large-latest |
| Cohere | `COHERE_API_KEY` | ⚪ No | command-r-plus |
| AWS Bedrock | `AWS_PROFILE` or `AWS_ACCESS_KEY_ID` | ⚪ No | anthropic.claude-3-5-sonnet-20241022-v2:0 |
| Ollama | `OLLAMA_BASE_URL` | ⚪ No | llama3.2 |
| Hugging Face | `HUGGINGFACEHUB_API_TOKEN` | ⚪ No | meta-llama/Llama-3.2-3B-Instruct |
Model Override Variables (optional): `ANTHROPIC_MODEL_NAME`, `OPENAI_MODEL_NAME`, `AZURE_OPENAI_MODEL_NAME`, `GOOGLE_GENAI_MODEL_NAME`, `GROQ_MODEL_NAME`, etc.
⚠️ Note: Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
```python
from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
from agentic_framework.registry import AgentRegistry


@AgentRegistry.register("my-agent", mcp_servers=["webfetch"])
class MyAgent(LangGraphMCPAgent):
    @property
    def system_prompt(self) -> str:
        return "You are my custom agent with the power to fetch websites."
```

Boom. Run it instantly:

```bash
bin/agent.sh my-agent -i "Summarize https://example.com"
```

Want to add your own Python logic? Easy.
```python
from langchain_core.tools import StructuredTool

from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
from agentic_framework.registry import AgentRegistry


@AgentRegistry.register("data-processor")
class DataProcessorAgent(LangGraphMCPAgent):
    @property
    def system_prompt(self) -> str:
        return "You process data files like a boss."

    def local_tools(self) -> list:
        return [
            StructuredTool.from_function(
                func=self.process_csv,
                name="process_csv",
                description="Process a CSV file path",
            )
        ]

    def process_csv(self, filepath: str) -> str:
        # Magic happens here ✨
        return f"Successfully processed {filepath}!"
```

Under the hood, we seamlessly bridge the gap between user intent and execution:
```mermaid
flowchart TB
    subgraph User [👤 User Space]
        Input[User Input]
    end
    subgraph CLI [💻 CLI - agentic-run]
        Typer[Typer Interface]
    end
    subgraph Registry [📚 Registry]
        AR[AgentRegistry]
        AD[Auto-discovery]
    end
    subgraph Agents [🤖 Agents]
        Chef[chef agent]
        Dev[developer agent]
        Travel[travel agent]
    end
    subgraph Core [🧠 Core Engine]
        LGA[LangGraphMCPAgent]
        LG[LangGraph Runtime]
        CP[(Checkpointing)]
    end
    subgraph Tools [🧰 Tools & Skills]
        LT[Local Tools]
        MCP[MCP Tools]
    end
    subgraph External [🌐 External World]
        LLM[LLM API]
        MCPS[MCP Servers]
    end

    Input --> Typer
    Typer --> AR
    AR --> AD
    AR -->|Routes to| Chef & Dev & Travel
    Chef & Dev & Travel -->|Inherits from| LGA
    LGA --> LG
    LG <--> CP
    LGA -->|Uses| LT
    LGA -->|Uses| MCP
    LT -->|Reasoning| LLM
    MCP -->|Queries| MCPS
    MCPS -->|Provides Data| LLM
    LLM --> Output[Final Response]
```
Command your agents directly from the terminal.
```bash
# 📋 List all registered agents
bin/agent.sh list

# 🕵️ Get detailed info about what an agent can do
bin/agent.sh info developer

# 🚀 Run an agent with input
bin/agent.sh developer -i "Analyze the architecture of this project"

# ⏱️ Run with an execution timeout (seconds)
bin/agent.sh developer -i "Refactor this module" -t 120

# 🐛 Run with debug-level verbosity
bin/agent.sh developer -i "Hello" -v

# 📜 Access logs (same location as local)
tail -f agentic-framework/logs/agent.log
```

Prefer running without Docker? We got you.
System Requirements & Setup

Requirements:
- Python 3.12+
- `ripgrep`, `fd`, `fzf`

```bash
# Install dependencies (blazingly fast with uv ⚡)
make install

# Run the test suite
make test

# Run agents directly in your environment
uv --directory agentic-framework run agentic-run developer -i "Hello"
```

Useful `make` Commands
```bash
make install   # Install dependencies with uv
make test      # Run pytest with coverage
make format    # Auto-format codebase with ruff
make check     # Strict linting (mypy + ruff)
```

We love contributions! Check out our AGENTS.md for development guidelines.
The Golden Rules:
- `make check` should pass without complaints.
- `make test` should stay green.
- Don't drop test coverage (we like our 80% mark!).
This project is licensed under the MIT License. See LICENSE for details.
Stand on the shoulders of giants:
If you find this useful, please consider giving it a β or buying me a coffee!
