
Kayba Logo

Agentic Context Engine (ACE)

GitHub stars Discord Twitter Follow PyPI version Python 3.12 License: MIT

AI agents that get smarter with every task

⭐ Star this repo if you find it useful!


What is ACE?

ACE enables AI agents to learn from their execution feedback (what works, what doesn't) and continuously improve. No fine-tuning, no training data, just automatic in-context learning.

The framework maintains a Skillbook: a living document of strategies that evolves with each task. When your agent succeeds, ACE extracts patterns. When it fails, ACE learns what to avoid. All learning happens transparently in context.
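The curation loop behind a Skillbook can be pictured with a minimal, hypothetical model (this is an illustration, not the ace-framework API): strategies are added when extracted from successes and retired when they repeatedly fail.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """One learned strategy with usefulness counters."""
    text: str
    helpful: int = 0
    harmful: int = 0

@dataclass
class Skillbook:
    """A living document of strategies, curated over tasks."""
    skills: dict[str, Skill] = field(default_factory=dict)

    def add(self, name: str, text: str) -> None:
        self.skills[name] = Skill(text)

    def record(self, name: str, worked: bool) -> None:
        skill = self.skills[name]
        if worked:
            skill.helpful += 1
        else:
            skill.harmful += 1
        # Retire strategies that consistently fail.
        if skill.harmful >= 3 and skill.harmful > skill.helpful:
            del self.skills[name]

book = Skillbook()
book.add("retry", "Retry transient network errors once before giving up.")
book.record("retry", worked=True)
```

The real Skillbook evolves via LLM analysis rather than counters, but the shape is the same: a compact, named store of strategies that grows and prunes itself in context.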

  • Self-Improving: Agents autonomously get smarter with each task
  • 20-35% Better Performance: Proven improvements on complex tasks
  • 49% Token Reduction: Demonstrated in browser automation benchmarks
  • No Context Collapse: Preserves valuable knowledge over time

LLM Quickstart

  1. Point your favorite coding agent (Cursor, Claude Code, Codex, etc.) at the Quick Start Guide
  2. Prompt away!

Quick Start

1. Install

```shell
pip install ace-framework
```

2. Set API Key

```shell
export OPENAI_API_KEY="your-api-key"
```

3. Run

```python
from ace import ACELiteLLM

agent = ACELiteLLM(model="gpt-4o-mini")

answer = agent.ask("What does Kayba's ACE framework do?")
print(answer)  # "ACE allows AI agents to remember and learn from experience!"
```

Done! Your agent learns automatically from each interaction.

→ Quick Start Guide | → Setup Guide


Use Cases

Claude Code with Learning → Quick Start

Run coding tasks with Claude Code while ACE learns patterns from each execution, building expertise over time for your specific codebase and workflows.

Automated System Prompting

The Skillbook acts as an evolving system prompt that automatically improves based on execution feedback; no manual prompt engineering required.
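Conceptually, this amounts to rendering the current Skillbook into the system prompt before each call. A hand-rolled sketch of that idea (not the library's actual internals):

```python
def build_system_prompt(base: str, strategies: list[str]) -> str:
    """Prepend learned strategies to a base system prompt."""
    if not strategies:
        return base
    bullet_list = "\n".join(f"- {s}" for s in strategies)
    return f"{base}\n\nLearned strategies from past tasks:\n{bullet_list}"

prompt = build_system_prompt(
    "You are a helpful shopping agent.",
    ["Use the site search bar instead of browsing categories.",
     "Confirm cart contents before checkout."],
)
print(prompt)
```

Because the strategies live in plain text, every improvement is inspectable: you can read exactly what the agent has learned.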

Enhance Existing Agents

Wrap your existing agent (browser-use, LangChain, custom) with ACE learning. Your agent executes tasks normally while ACE analyzes results and builds a skillbook of effective strategies.

Build Self-Improving Agents

Create new agents with built-in learning for customer support, data extraction, code generation, research, content creation, and task automation.


Demos

The Seahorse Emoji Challenge

A challenge where LLMs often hallucinate that a seahorse emoji exists (it doesn't).

Seahorse Emoji ACE Demo

In this example:

  1. The agent incorrectly outputs a horse emoji
  2. ACE reflects on the mistake without external feedback
  3. On the second attempt, the agent correctly realizes there is no seahorse emoji

→ Try it yourself

Tau2 Benchmark

Evaluated on the airline domain of τ²-bench (Sierra Research), a benchmark for multi-step agentic tasks requiring tool use and policy adherence. Agent: Claude Haiku 4.5. Strategies were learned on the train split with no reward signals; all results are reported on the held-out test split.

pass^k = probability that all k independent attempts succeed. Higher k is a stricter test of agent consistency.
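For k independent attempts with per-attempt success rate p, pass^k = p^k, so modest per-attempt gains compound sharply as k grows. A quick illustration with made-up rates (not the benchmark's actual numbers):

```python
def pass_k(p: float, k: int) -> float:
    """pass^k for independent attempts: all k attempts must succeed."""
    return p ** k

baseline, with_ace = 0.70, 0.85  # illustrative per-attempt success rates
for k in (1, 2, 4):
    print(f"pass^{k}: baseline={pass_k(baseline, k):.2f}, "
          f"ACE={pass_k(with_ace, k):.2f}")
```

With these illustrative rates, a 15-point per-attempt gain becomes more than a 2x gap at pass^4 (0.52 vs 0.24), which is why consistency gains compound at stricter k.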

Tau2 Benchmark Results - Haiku 4.5

ACE doubles agent consistency at pass^4 using only 15 learned strategies; the gains compound as the bar gets higher.

Browser Automation

Online Shopping Demo: ACE vs baseline agent shopping for 5 grocery items.

Online Shopping Demo Results

In this example:

  • ACE learns to navigate the website over 10 attempts
  • Performance stabilizes and step count decreases by 29.8%
  • Token costs drop 49.0% for the base agent, and 42.6% including ACE overhead

→ Try it yourself & see all demos

Claude Code Loop

In this example, Claude Code is enhanced with ACE and self-reflects after each execution while translating the ACE library from Python to TypeScript.

Python → TypeScript Translation:

| Metric | Result |
| --- | --- |
| Duration | ~4 hours |
| Commits | 119 |
| Lines written | ~14k |
| Outcome | Zero build errors, all tests passing |
| API cost | ~$1.50 (Sonnet for learning) |

→ Claude Code Loop


Integrations

ACE integrates with popular agent frameworks:

| Integration | ACE Class | Use Case |
| --- | --- | --- |
| LiteLLM | `ACELiteLLM` | Simple self-improving agent |
| LangChain | `ACELangChain` | Wrap LangChain chains/agents |
| browser-use | `ACEAgent` | Browser automation |
| Claude Code | `ACEClaudeCode` | Claude Code CLI |
| ace-learn CLI | `ACEClaudeCode` | Learn from Claude Code sessions |
| Opik | `OpikIntegration` | Production monitoring and cost tracking |

→ Integration Guide | → Examples


How Does ACE Work?

Inspired by the ACE research framework from Stanford & SambaNova.

ACE enables agents to learn from execution feedback and continuously improve through automatic in-context learning, with no fine-tuning and no training data. Three specialized roles work together:

  1. Agent: your agent, enhanced with strategies from the Skillbook
  2. Reflector: analyzes execution traces to extract learnings. In recursive mode, the Reflector writes and runs Python code in a sandboxed REPL to programmatically query traces, surfacing patterns, errors, and insights that single-pass analysis misses
  3. SkillManager: curates the Skillbook by adding new strategies, refining existing ones, and removing outdated patterns based on the Reflector's analysis

The key innovation is the Recursive Reflector: instead of summarizing traces in a single pass, it writes and executes Python code in a sandboxed environment to programmatically explore agent execution traces. It can search for patterns, isolate errors, query sub-agents for deeper analysis, and iterate until it finds actionable insights. These insights flow into the Skillbook, a living collection of strategies that evolves with every task.

```mermaid
flowchart LR
    Skillbook[(Skillbook<br>Learned Strategies)]
    Start([Query]) --> Agent[Agent<br>Enhanced with Skillbook]
    Agent <--> Environment[Task Environment<br>Evaluates & provides feedback]
    Environment -- Feedback --> Reflector[Reflector<br>Analyzes traces via<br>sandboxed code execution]
    Reflector --> SkillManager[SkillManager<br>Curates strategies]
    SkillManager -- Updates --> Skillbook
    Skillbook -. Injects context .-> Agent
```

Documentation


Contributing

We love contributions! Check out our Contributing Guide to get started.


Acknowledgment

Inspired by the ACE paper and Dynamic Cheatsheet.


⭐ Star this repo if you find it useful!

Built with ❤️ by Kayba and the open-source community.