Agent Artifacts is an open-source memory correctness + procedural skill + auditability layer for AI agents. It makes agents more reliable by turning "memory" into structured artifacts that can be validated, versioned, and replayed.
Most agent systems can generate fluent responses. Agent Artifacts helps them remember safely, reuse skills, and explain decisions.
- Transactional memory so hallucinations don't become permanent facts.
- Skill artifacts (prompt + workflow) that are versioned, tagged, and reusable.
- Decision traces that make agent behavior auditable and debuggable.
- Bounded prompt overhead with global injection caps.
- MCP + adapter integrations so teams don't need to switch stacks.
Agent Artifacts is a plug-in layer, not a full agent framework. Use only what you need: memory, skills, or traces can be adopted independently.
Install and import a prompt skill:
```bash
pip install agent-artifacts
agent-artifacts import examples/prompt_skills/api-tester.md --name api_tester --version 0.1.0
agent-artifacts list
agent-artifacts run api_tester@0.1.0 --inputs "{\"base_url\": \"https://api.example.com\", \"endpoints\": [\"/health\"]}"
```

More: Quickstart guide and examples index.
1) Transactional Memory
- Stage -> validate -> commit (or rollback) memory writes.
- Prevents "false facts" from sticking in production.
- Details: validation policy and memory types.
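To make the flow concrete, here is a conceptual sketch in Python. It is not the library's API; the class and method names are placeholders, and the real staging/validation interface is described in the validation policy and memory types docs.

```python
# Conceptual sketch of stage -> validate -> commit/rollback.
# Not the agent_artifacts API: class and method names are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class MemoryTransaction:
    staged: list = field(default_factory=list)
    committed: list = field(default_factory=list)

    def stage(self, fact: str, confidence: float) -> None:
        # Writes land in a staging area first; nothing is persisted yet.
        self.staged.append({"fact": fact, "confidence": confidence})

    def validate(self, min_confidence: float = 0.8) -> bool:
        # A validation policy decides whether the staged writes are trustworthy.
        return all(item["confidence"] >= min_confidence for item in self.staged)

    def commit(self) -> None:
        # Only validated writes become durable facts.
        self.committed.extend(self.staged)
        self.staged.clear()

    def rollback(self) -> None:
        # Suspect writes are discarded instead of polluting long-term memory.
        self.staged.clear()

tx = MemoryTransaction()
tx.stage("deploy of service X succeeded at 14:02 UTC", confidence=0.95)
tx.stage("the user's API key is 1234", confidence=0.3)  # likely hallucinated
if tx.validate():
    tx.commit()
else:
    tx.rollback()
```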
2) Skill Artifacts (Procedural Memory)
- Store workflows + role prompts as versioned artifacts (`name@version`, `@stable` tags).
- Enforce typed inputs with JSON Schema for safer execution.
- Prompt skills can be plain `.md` files with optional YAML front-matter (see the example after this list).
- Details: prompt skills examples, workflow skills examples, and tool adapters.
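For orientation, a prompt skill file might look roughly like this. The `inputs` block mirrors the JSON Schema metadata shown in the MCP section below; the other front-matter keys (`name`, `version`, `tags`) are assumptions for illustration, so check the prompt skills examples for the supported fields.

```markdown
---
name: summarizer
version: 0.1.0
tags: [stable]
inputs:
  text:
    type: string
    description: Text to summarize.
---
You are a concise technical summarizer. Summarize the provided text in three bullet points.
```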
3) Decision Traces
- Structured logs for "why did the agent do that?"
- Supports replay and regression debugging.
- Details: memory redaction (privacy) and trace CLI examples below.
4) Bounded Context Overhead
- Injection is capped by default (`AdapterPipeline(max_injected_tokens=1000)`).
- Keeps prompts predictable rather than dumping full histories (see the budgeting sketch after this list).
- Details: context budgeting.
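To make the cap concrete, here is a conceptual sketch of budget-bounded injection. It illustrates the idea behind `max_injected_tokens`, not the pipeline's actual selection logic; see the context budgeting docs for the real behavior.

```python
# Conceptual sketch only: greedy selection under a token budget,
# not the AdapterPipeline's actual implementation.
def select_for_injection(candidates, max_injected_tokens=1000):
    """Pick the highest-priority memory snippets until the token budget is spent."""
    chosen, budget = [], max_injected_tokens
    for snippet in sorted(candidates, key=lambda s: s["priority"], reverse=True):
        if snippet["tokens"] <= budget:
            chosen.append(snippet["text"])
            budget -= snippet["tokens"]
    return "\n".join(chosen)

candidates = [
    {"text": "deploy checklist v3", "tokens": 120, "priority": 0.9},
    {"text": "full conversation history", "tokens": 5000, "priority": 0.4},  # never fits
    {"text": "last deploy failed on step 4", "tokens": 60, "priority": 0.8},
]
print(select_for_injection(candidates))  # bounded, highest-value context only
```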
Agent Artifacts is a good fit when:
- You maintain a library of agent prompts and want versioning + metadata.
- You run repeatable workflows (deploys, QA checks, data extraction).
- You need auditability for production agent behavior.
- You want bounded context overhead instead of raw history dumps.
See benefits and use-cases for persona-based examples (solo devs, vibe coders, production teams) and modular adoption guidance.
Integrations:
- LangGraph adapter (stable): LangGraph adapter guide
- LangChain adapter (stable): LangChain adapter guide
- MCP server (stdio + HTTP/SSE): MCP server docs
- MCP client guides: MCP client setup
Documentation index: Docs (start here)
CLI examples:

```bash
agent-artifacts import examples/prompt_skills/api-tester.md --name api_tester --version 0.1.0
agent-artifacts list
agent-artifacts run api_tester@0.1.0 --inputs "{\"base_url\": \"https://api.example.com\", \"endpoints\": [\"/health\"]}"
```

Full CLI + injection examples: CLI reference.
Expose skills as tool/function definitions and execute tool calls:
```python
from agent_artifacts.skills import (
    SkillToolConfig,
    SkillToolRegistry,
    SkillQueryConfig,
    execute_tool_call,
)

registry = SkillToolRegistry.from_storage(
    storage,
    query=SkillQueryConfig(tags=["stable"]),
    config=SkillToolConfig(name_strategy="name_version"),
)
tool_defs = registry.definitions()  # pass to your LLM runtime as tool/function specs

# ...when the model calls a tool:
result = execute_tool_call(storage, tool_name, tool_arguments, registry=registry)
print(result.to_dict())
```

Provider-specific tool adapters and SDK call examples live in tool adapters.
Tools quickstart:
- Build tool definitions with `SkillToolRegistry.from_storage(...)`
- Convert them for your provider via `to_openai_tools` / `to_anthropic_tools` / `to_gemini_tools`
- Execute tool calls with `execute_tool_call(...)`
- Auto-run from model responses with `auto_execute_with_model(...)` (see tool adapters and the sketch below)
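Putting those steps together, a hedged sketch follows. The registry and `execute_tool_call` usage mirrors the snippet above; the provider-conversion and auto-execute calls are left as comments because their exact signatures are assumptions here, so check the tool adapters docs before relying on them.

```python
# Sketch only: the commented-out conversion/auto-execute call shapes are assumptions.
from agent_artifacts.skills import (
    SkillQueryConfig,
    SkillToolRegistry,
    execute_tool_call,
)

def expose_and_execute(storage, tool_name, tool_arguments):
    # 1) Build tool definitions from stored skills tagged "stable".
    registry = SkillToolRegistry.from_storage(storage, query=SkillQueryConfig(tags=["stable"]))
    tool_defs = registry.definitions()

    # 2) Convert tool_defs for your provider (assumed call shapes; see tool adapters):
    #    openai_tools = to_openai_tools(tool_defs)
    #    anthropic_tools = to_anthropic_tools(tool_defs)

    # 3) When the model returns a tool call, execute it against skill storage,
    #    or hand the whole model response to auto_execute_with_model(...).
    result = execute_tool_call(storage, tool_name, tool_arguments, registry=registry)
    return result.to_dict()
```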
Runnable demo (no external SDKs required):
```bash
python examples/skills/tool_adapters_demo.py
```

Expose skills + memory + traces via Model Context Protocol:

```bash
agent-artifacts-mcp --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db
```

HTTP/SSE transport (optional):

```bash
agent-artifacts-mcp-http --host 127.0.0.1 --port 8001 --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db
```

Copy/paste MCP client config (Cursor, Claude Desktop, etc.):
```json
{
  "mcpServers": {
    "agent-artifacts": {
      "command": "agent-artifacts-mcp",
      "args": ["--backend", "sqlite", "--db", "/path/to/agent-artifacts.db"]
    }
  }
}
```

60-second smoke test (HTTP): see MCP HTTP demo.
See MCP server docs for tool inventory and request/response examples. Client setup: MCP clients. Cursor quickstart config: Cursor config template and Cursor guide. Windsurf and Claude guides: Windsurf guide and Claude guide. Compatibility matrix and example app: MCP compatibility and MCP examples.
Prompt skills surfaced via MCP include argument metadata. If your skill inputs use JSON Schema
fields like description (or title), MCP clients can render richer prompt UIs:
```yaml
inputs:
  text:
    type: string
    description: Text to summarize.
```

```bash
# Decision traces + audit journal
agent-artifacts trace log --decision execute_skill --skill-ref deploy_fastapi@1.0.0 --reason "deploy requested" --confidence 0.9 --result success --tx-id <tx_id>
agent-artifacts trace query --decision execute_skill --limit 50
agent-artifacts trace query --skill-ref deploy_fastapi@1.0.0 --result success --created-after 2026-01-01T00:00:00Z --correlation-id corr-123
agent-artifacts journal query --tx-id <tx_id> --limit 50 --show-payload
agent-artifacts journal query --tx-id <tx_id> --limit 10 --format json
agent-artifacts replay --tx-id <tx_id> --limit 50 --show-payload
```

CLI run retries/timeouts:
```bash
agent-artifacts run deploy_fastapi@1.0.0 --inputs "{\"repo_path\": \".\", \"retries\": 1}" --max-attempts 3 --retry-on timeout,failure --backoff-ms 250,500 --total-timeout-s 60 --step-timeout-s 10 --idempotency-key deploy-2026-01-27-001 --trace-inputs-preview --trace-output-preview --trace-preview-max-chars 120
```

Configuration can be stored in `~/.agent-artifacts/agent-artifacts.yaml` (or `AGENT_ARTIFACTS_CONFIG`) with precedence: CLI args > env vars > config file > defaults.
Starter template: config template.
```yaml
storage:
  backend: sqlite
  db: ~/.agent-artifacts/agent-artifacts.db
  backend_config: {}
```

Inspect the resolved configuration and sources:

```bash
agent-artifacts config show --format json
```

Storage service, Postgres backend setup, and programmatic API examples live in: storage service docs and Python API.
- LangGraph starter app: starter app README
- LangGraph migration guide: migration guide
- LangGraph demos: demo, parallel demo, reference demo
- LangGraph ergonomics RFC: ergonomics RFC
- Benchmark harness: benchmarks
- Docs index: Docs (start here)
- Examples index: examples/README.md
Contributions are welcome. See contributing guide.
Open items we would love help with:
- LeTTA / memU / ReMe adapters
- Adapter compatibility notes + deprecation policy
- Adapter conformance tests in CI
- Memory pollution benchmark + trace replay regression tests
Is this another agent framework? No. Agent Artifacts focuses on:
- transactional memory correctness
- procedural skills as artifacts
- decision trace auditability
Why the name "Agent Artifacts"? Because it describes the core idea: reusable agent behavior stored as versioned artifacts.
License: MIT. See LICENSE.