Merged
70 changes: 63 additions & 7 deletions README.md
@@ -105,20 +105,63 @@ Instead of spending days wiring together LLMs, tools, and execution environments

---

### 🧠 Supported LLM Providers

The framework supports **10+ LLM providers** out of the box, covering the major cloud and local options:

| Provider | Type | Use Case |
|----------|-------|----------|
| **Anthropic** | Cloud | State-of-the-art reasoning (Claude) |
| **OpenAI** | Cloud | GPT-4, GPT-4.1, o1 series |
| **Azure OpenAI** | Cloud | Enterprise OpenAI deployments |
| **Google GenAI** | Cloud | Gemini models via API |
| **Google Vertex AI** | Cloud | Gemini models via GCP |
| **Groq** | Cloud | Ultra-fast inference |
| **Mistral AI** | Cloud | European privacy-focused models |
| **Cohere** | Cloud | Enterprise RAG and Command models |
| **AWS Bedrock** | Cloud | Anthropic, Titan, Meta via AWS |
| **Ollama** | Local | Run LLMs locally (zero API cost) |
| **Hugging Face** | Cloud | Open models from Hugging Face Hub |

**Provider Priority:** Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (fallback)
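
That priority chain can be sketched in a few lines (a simplified model of the framework's `detect_provider`; the `pick_provider` name and `env` parameter are illustrative, not part of the API):

```python
import os

# Priority order from the table above; a provider is selected as soon
# as its credential variable is found in the environment.
PRIORITY = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("google_vertexai", "GOOGLE_VERTEX_PROJECT_ID"),
    ("google_genai", "GOOGLE_API_KEY"),
    ("azure_openai", "AZURE_OPENAI_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("mistralai", "MISTRAL_API_KEY"),
    ("cohere", "COHERE_API_KEY"),
    ("bedrock", "AWS_PROFILE"),
    ("huggingface", "HUGGINGFACEHUB_API_TOKEN"),
    ("ollama", "OLLAMA_BASE_URL"),
]

def pick_provider(env=os.environ) -> str:
    for provider, key in PRIORITY:
        if env.get(key):
            return provider
    return "openai"  # final fallback
```

Because the chain short-circuits, setting a high-priority key (e.g. `ANTHROPIC_API_KEY`) wins even if several other keys are also present.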

---

## 🚀 Quick Start (Zero to Agent in 60s)

### 1. Add your Brain (API Key)
You need an **LLM API key** (OpenAI or Anthropic) to breathe life into your agents. The framework uses Langchain under the hood, so standard environment functions perfectly!
You need an **LLM API key** to breathe life into your agents. The framework supports 10+ LLM providers via LangChain!

```bash
# Copy the template
cp .env.example .env

# Edit .env and paste your API key
# Choose one of the following providers:
# OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_API_KEY=your-google-key
# GROQ_API_KEY=gsk-your-key-here
# MISTRAL_API_KEY=your-mistral-key-here
# COHERE_API_KEY=your-cohere-key-here

# For Ollama (local), no API key needed:
# OLLAMA_BASE_URL=http://localhost:11434

# For Azure OpenAI:
# AZURE_OPENAI_API_KEY=your-azure-key
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# For Google Vertex AI:
# GOOGLE_VERTEX_PROJECT_ID=your-project-id

# For AWS Bedrock:
# AWS_PROFILE=your-profile

# For Hugging Face:
# HUGGINGFACEHUB_API_TOKEN=your-hf-token
```
> ⚠️ **Note:** At minimum, set your preferred provider's API key. Without it, your agents will sleep forever! 💤
> ⚠️ **Note:** Set your preferred provider's API key. Priority: Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (default fallback).
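
To verify a key was picked up before launching anything, a stdlib-only check along these lines works (a sketch; `has_credentials` is not a framework function):

```python
import os

# Credential variables from the list above; any one of them is enough.
SUPPORTED_KEYS = (
    "ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY",
    "AZURE_OPENAI_API_KEY", "GROQ_API_KEY", "MISTRAL_API_KEY",
    "COHERE_API_KEY", "HUGGINGFACEHUB_API_TOKEN", "OLLAMA_BASE_URL",
)

def has_credentials(env=os.environ) -> bool:
    # True if at least one supported provider credential is set.
    return any(env.get(k) for k in SUPPORTED_KEYS)
```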

### 2. Build & Run
No `pip`, no `virtualenv`, no *"it works on my machine"* excuses.
@@ -141,11 +184,24 @@ bin/agent.sh chef -i "I have chicken, rice, and soy sauce. What can I make?"
<details>
<summary><strong>🔑 Required Environment Variables</strong></summary>

| Variable | Required? | Description |
|----------|-----------|-------------|
| `OPENAI_API_KEY` | 🟢 **Yes*** | OpenAI API key (*if using OpenAI) |
| `ANTHROPIC_API_KEY`| 🟢 **Yes*** | Anthropic API key (*if using Anthropic) |
| `OPENAI_MODEL_NAME` | ⚪ No | Model to use (default: `gpt-4o`/`gpt-4`) |
| Provider | Variable | Required? | Default Model |
|----------|-----------|-------------|---------------|
| **Anthropic** | `ANTHROPIC_API_KEY` | 🟢 **Yes*** | `claude-haiku-4-5-20251001` |
| **OpenAI** | `OPENAI_API_KEY` | 🟢 **Yes*** | `gpt-4o-mini` |
| **Azure OpenAI** | `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` | ⚪ No | `gpt-4o-mini` |
| **Google GenAI** | `GOOGLE_API_KEY` | ⚪ No | `gemini-2.0-flash-exp` |
| **Google Vertex AI** | `GOOGLE_VERTEX_PROJECT_ID` | ⚪ No | `gemini-2.0-flash-exp` |
| **Groq** | `GROQ_API_KEY` | ⚪ No | `llama-3.3-70b-versatile` |
| **Mistral AI** | `MISTRAL_API_KEY` | ⚪ No | `mistral-large-latest` |
| **Cohere** | `COHERE_API_KEY` | ⚪ No | `command-r-plus` |
| **AWS Bedrock** | `AWS_PROFILE` or `AWS_ACCESS_KEY_ID` | ⚪ No | `anthropic.claude-3-5-sonnet-20241022-v2:0` |
| **Ollama** | `OLLAMA_BASE_URL` | ⚪ No | `llama3.2` |
| **Hugging Face** | `HUGGINGFACEHUB_API_TOKEN` | ⚪ No | `meta-llama/Llama-3.2-3B-Instruct` |

**Model Override Variables** (optional):
- `ANTHROPIC_MODEL_NAME`, `OPENAI_MODEL_NAME`, `AZURE_OPENAI_MODEL_NAME`, `GOOGLE_GENAI_MODEL_NAME`, `GROQ_MODEL_NAME`, etc.

> ⚠️ **Note:** Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
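
Override resolution is simple: a `<PROVIDER>_MODEL_NAME` variable wins over the built-in default (a sketch mirroring `get_default_model`; the two-entry `DEFAULTS` dict is truncated for brevity):

```python
import os

# Defaults and override variables for two providers (see table above).
DEFAULTS = {"anthropic": "claude-haiku-4-5-20251001", "openai": "gpt-4o-mini"}
OVERRIDES = {"anthropic": "ANTHROPIC_MODEL_NAME", "openai": "OPENAI_MODEL_NAME"}

def resolve_model(provider: str, env=os.environ) -> str:
    # An explicit override beats the built-in default.
    return env.get(OVERRIDES[provider]) or DEFAULTS[provider]
```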

</details>

5 changes: 5 additions & 0 deletions agentic-framework/pyproject.toml
@@ -27,6 +27,11 @@ dependencies = [
"tree-sitter==0.21.3",
"tree-sitter-languages>=1.10.2",
"requests>=2.32.5",
"langchain-groq>=1.1.2",
"langchain-mistralai>=1.1.1",
"langchain-cohere>=0.5.0",
"langchain-aws>=1.3.1",
"langchain-huggingface>=1.2.0",
]

[project.scripts]
190 changes: 177 additions & 13 deletions agentic-framework/src/agentic_framework/constants.py
@@ -1,6 +1,6 @@
import os
from pathlib import Path
from typing import Literal
from typing import Any, Literal

from dotenv import load_dotenv

@@ -9,39 +9,203 @@
BASE_DIR = Path(__file__).resolve().parent.parent.parent
LOGS_DIR = BASE_DIR / "logs"

Provider = Literal["openai", "anthropic"]
Provider = Literal[
"anthropic",
"openai",
"ollama",
"azure_openai",
"google_vertexai",
"google_genai",
"groq",
"mistralai",
"cohere",
"bedrock",
"huggingface",
]

# Default models for each provider
DEFAULT_MODELS: dict[Provider, str] = {
"anthropic": "claude-haiku-4-5-20251001",
"openai": "gpt-4o-mini",
"ollama": "llama3.2",
"azure_openai": "gpt-4o-mini",
"google_vertexai": "gemini-2.0-flash-exp",
"google_genai": "gemini-2.0-flash-exp",
"groq": "llama-3.3-70b-versatile",
"mistralai": "mistral-large-latest",
"cohere": "command-r-plus",
"bedrock": "anthropic.claude-3-5-sonnet-20241022-v2:0",
"huggingface": "meta-llama/Llama-3.2-3B-Instruct",
}


def detect_provider() -> Provider:
"""Detect which LLM provider to use based on available API keys.

Returns:
"anthropic" if ANTHROPIC_API_KEY is set, "openai" otherwise.
The detected provider name. Priority order:
1. anthropic (ANTHROPIC_API_KEY)
2. google_vertexai (GOOGLE_VERTEX_PROJECT_ID or GOOGLE_VERTEX_CREDENTIALS)
3. google_genai (GOOGLE_API_KEY)
4. azure_openai (AZURE_OPENAI_API_KEY)
5. groq (GROQ_API_KEY)
6. mistralai (MISTRAL_API_KEY)
7. cohere (COHERE_API_KEY)
8. bedrock (AWS_PROFILE or AWS_ACCESS_KEY_ID)
9. huggingface (HUGGINGFACEHUB_API_TOKEN)
10. ollama (OLLAMA_BASE_URL or OLLAMA_ENABLED)
11. openai (OPENAI_API_KEY, and the final fallback)

Note:
OpenAI is the final fallback when no other provider's credentials are found.
Ollama is special: it runs locally without an API key and is detected via
the OLLAMA_BASE_URL environment variable.
"""
# Check in order of priority
if os.getenv("ANTHROPIC_API_KEY"):
return "anthropic"
if os.getenv("GOOGLE_VERTEX_PROJECT_ID") or os.getenv("GOOGLE_VERTEX_CREDENTIALS"):
return "google_vertexai"
if os.getenv("GOOGLE_API_KEY"):
return "google_genai"
if os.getenv("AZURE_OPENAI_API_KEY"):
return "azure_openai"
if os.getenv("GROQ_API_KEY"):
return "groq"
if os.getenv("MISTRAL_API_KEY"):
return "mistralai"
if os.getenv("COHERE_API_KEY"):
return "cohere"
if os.getenv("AWS_PROFILE") or os.getenv("AWS_ACCESS_KEY_ID"):
return "bedrock"
if os.getenv("HUGGINGFACEHUB_API_TOKEN"):
return "huggingface"
if os.getenv("OLLAMA_BASE_URL") or os.getenv("OLLAMA_ENABLED"):
return "ollama"
# Final fallback: OpenAI (returned even if OPENAI_API_KEY is unset)
return "openai"


def get_default_model() -> str:
"""Get the default model name based on available provider.

Returns:
Default model name for the detected provider. Can be overridden
with environment variables like ANTHROPIC_MODEL_NAME, OPENAI_MODEL_NAME, etc.

Examples:
- Anthropic: "claude-haiku-4-5-20251001"
- OpenAI: "gpt-4o-mini"
"""
provider = detect_provider()

# Allow override via environment variables
env_model_names = {
"anthropic": os.getenv("ANTHROPIC_MODEL_NAME"),
"openai": os.getenv("OPENAI_MODEL_NAME"),
"ollama": os.getenv("OLLAMA_MODEL_NAME"),
"azure_openai": os.getenv("AZURE_OPENAI_MODEL_NAME"),
"google_vertexai": os.getenv("GOOGLE_VERTEX_MODEL_NAME"),
"google_genai": os.getenv("GOOGLE_GENAI_MODEL_NAME"),
"groq": os.getenv("GROQ_MODEL_NAME"),
"mistralai": os.getenv("MISTRAL_MODEL_NAME"),
"cohere": os.getenv("COHERE_MODEL_NAME"),
"bedrock": os.getenv("BEDROCK_MODEL_NAME"),
"huggingface": os.getenv("HUGGINGFACE_MODEL_NAME"),
}

if env_model_name := env_model_names.get(provider):
return env_model_name

return DEFAULT_MODELS.get(provider, "gpt-4o-mini")


# Legacy constant for backward compatibility
DEFAULT_MODEL = get_default_model()


def _create_model(model_name: str, temperature: float) -> Any:
"""Create the appropriate LLM model instance based on detected provider.

Args:
model_name: Name of the model to use.
temperature: Temperature setting for the model.

Returns:
The appropriate Chat model instance for the detected provider.
"""
provider = detect_provider()

if provider == "anthropic":
from langchain_anthropic import ChatAnthropic

return ChatAnthropic(model=model_name, temperature=temperature) # type: ignore[call-arg]

if provider == "ollama":
from langchain_community.chat_models import ChatOllama

return ChatOllama(
model=model_name,
temperature=temperature,
base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
)

if provider == "azure_openai":
from langchain_openai import AzureChatOpenAI
from pydantic.types import SecretStr

api_key = os.getenv("AZURE_OPENAI_API_KEY")
return AzureChatOpenAI(
model=model_name,
temperature=temperature,
api_key=SecretStr(api_key) if api_key else None,
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_version=os.getenv("AZURE_OPENAI_API_VERSION", "2024-08-01-preview"),
azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
)

if provider == "google_vertexai":
from langchain_google_vertexai import ChatVertexAI

return ChatVertexAI(model=model_name, temperature=temperature)

if provider == "google_genai":
from langchain_google_genai import ChatGoogleGenerativeAI

return ChatGoogleGenerativeAI(model=model_name, temperature=temperature)

if provider == "groq":
from langchain_groq import ChatGroq

return ChatGroq(model=model_name, temperature=temperature)

if provider == "mistralai":
from langchain_mistralai import ChatMistralAI

return ChatMistralAI(model_name=model_name, temperature=temperature)

if provider == "cohere":
from langchain_cohere import ChatCohere

return ChatCohere(model=model_name, temperature=temperature)

if provider == "bedrock":
from langchain_aws import ChatBedrock

# Set AWS region via environment variable if specified
if bedrock_region := os.getenv("BEDROCK_REGION"):
os.environ["AWS_DEFAULT_REGION"] = bedrock_region

return ChatBedrock(model=model_name, temperature=temperature)

if provider == "huggingface":
from langchain_huggingface import ChatHuggingFace

# HuggingFace ChatModel may not support temperature in all cases
try:
return ChatHuggingFace(model_id=model_name, temperature=temperature)
except Exception:
return ChatHuggingFace(model_id=model_name)

# Default fallback to OpenAI
from langchain_openai import ChatOpenAI

return ChatOpenAI(model=model_name, temperature=temperature)
20 changes: 1 addition & 19 deletions agentic-framework/src/agentic_framework/core/langgraph_agent.py
@@ -2,32 +2,14 @@
from typing import Any, Dict, List, Sequence, Union

from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import InMemorySaver

from agentic_framework.constants import detect_provider, get_default_model
from agentic_framework.constants import _create_model, get_default_model
from agentic_framework.interfaces.base import Agent
from agentic_framework.mcp import MCPProvider


def _create_model(model_name: str, temperature: float): # type: ignore[no-any-return]
"""Create the appropriate LLM model instance based on detected provider.

Args:
model_name: Name of the model to use.
temperature: Temperature setting for the model.

Returns:
Either ChatAnthropic or ChatOpenAI instance.
"""
provider = detect_provider()
if provider == "anthropic":
return ChatAnthropic(model=model_name, temperature=temperature) # type: ignore[call-arg]
return ChatOpenAI(model=model_name, temperature=temperature)


class LangGraphMCPAgent(Agent):
"""Reusable base class for LangGraph agents with optional MCP tools."""

20 changes: 1 addition & 19 deletions agentic-framework/src/agentic_framework/core/simple_agent.py
@@ -1,31 +1,13 @@
from typing import Any, Dict, List, Union

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import BaseMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from agentic_framework.constants import detect_provider, get_default_model
from agentic_framework.constants import _create_model, get_default_model
from agentic_framework.interfaces.base import Agent
from agentic_framework.registry import AgentRegistry


def _create_model(model_name: str, temperature: float): # type: ignore[no-any-return]
"""Create the appropriate LLM model instance based on detected provider.

Args:
model_name: Name of the model to use.
temperature: Temperature setting for the model.

Returns:
Either ChatAnthropic or ChatOpenAI instance.
"""
provider = detect_provider()
if provider == "anthropic":
return ChatAnthropic(model=model_name, temperature=temperature) # type: ignore[call-arg]
return ChatOpenAI(model=model_name, temperature=temperature)


@AgentRegistry.register("simple", mcp_servers=None)
class SimpleAgent(Agent):
"""