Agent Artifacts: Transactional Memory and Skill Artifacts for AI Agents

Agent Artifacts is an open-source layer for AI agents covering memory correctness, procedural skills, and auditability. It makes agents more reliable by turning "memory" into structured artifacts that can be validated, versioned, and replayed.

Most agent systems can generate fluent responses. Agent Artifacts helps them remember safely, reuse skills, and explain decisions.


What You Get

  • Transactional memory so hallucinations don't become permanent facts.
  • Skill artifacts (prompt + workflow) that are versioned, tagged, and reusable.
  • Decision traces that make agent behavior auditable and debuggable.
  • Bounded prompt overhead with global injection caps.
  • MCP + adapter integrations so teams don't need to switch stacks.

Agent Artifacts is a plug-in layer, not a full agent framework. Use only what you need: memory, skills, or traces can be adopted independently.


Quickstart (2 Minutes)

Install the package, then import, list, and run a prompt skill:

pip install agent-artifacts
agent-artifacts import examples/prompt_skills/api-tester.md --name api_tester --version 0.1.0
agent-artifacts list
agent-artifacts run api_tester@0.1.0 --inputs "{\"base_url\": \"https://api.example.com\", \"endpoints\": [\"/health\"]}"

More: Quickstart guide and examples index.


Core Capabilities (Why It Matters)

1) Transactional Memory

  • Stage -> validate -> commit (or rollback) memory writes.
  • Prevents "false facts" from sticking in production.
  • Details: validation policy and memory types.
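
To make the flow concrete, here is a minimal self-contained sketch; the MemoryTransaction class and its method names are illustrative assumptions, not Agent Artifacts' actual API:

from dataclasses import dataclass, field

@dataclass
class MemoryTransaction:
    staged: list[str] = field(default_factory=list)
    committed: list[str] = field(default_factory=list)

    def stage(self, fact: str) -> None:
        self.staged.append(fact)            # staged writes are not yet visible

    def validate(self, policy) -> bool:
        return all(policy(f) for f in self.staged)

    def commit(self) -> None:
        self.committed.extend(self.staged)  # only validated facts persist
        self.staged.clear()

    def rollback(self) -> None:
        self.staged.clear()                 # rejected writes vanish entirely

tx = MemoryTransaction()
tx.stage("deploy target is us-east-1")
if tx.validate(lambda fact: bool(fact.strip())):  # stand-in for a real policy
    tx.commit()
else:
    tx.rollback()  # a failed validation never becomes a permanent fact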

2) Skill Artifacts (Procedural Memory)

  • Prompt + workflow skills stored as versioned, tagged, reusable artifacts.
  • Run by reference (name@version) from the CLI, or expose them as LLM-callable tools.
  • Details: CLI Quickstart and skill tool integration below.

3) Decision Traces

  • Structured logs for "why did the agent do that?"
  • Supports replay and regression debugging.
  • Details: memory redaction (privacy) and trace CLI examples below.
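
As a rough illustration of what a trace record carries, here is a Python shape whose fields mirror the trace CLI flags shown later (--decision, --skill-ref, --reason, --confidence, --result, --tx-id); this is not the library's actual schema:

from dataclasses import dataclass

@dataclass
class DecisionTrace:
    decision: str      # e.g., "execute_skill"
    skill_ref: str     # e.g., "deploy_fastapi@1.0.0"
    reason: str        # why the agent chose this action
    confidence: float  # 0.0 - 1.0
    result: str        # e.g., "success" or "failure"
    tx_id: str         # links the trace to a memory transaction

trace = DecisionTrace(
    decision="execute_skill",
    skill_ref="deploy_fastapi@1.0.0",
    reason="deploy requested",
    confidence=0.9,
    result="success",
    tx_id="tx-123",
)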

4) Bounded Context Overhead

  • Injection is capped by default (AdapterPipeline(max_injected_tokens=1000)).
  • Keeps prompt size predictable instead of dumping full histories.
  • Details: context budgeting.
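
The budgeting idea, as a rough illustration (this is not the AdapterPipeline implementation): inject memory snippets in priority order until the token budget is spent, rather than appending the whole history.

def inject_with_budget(snippets: list[str], max_injected_tokens: int = 1000) -> list[str]:
    injected, used = [], 0
    for snippet in snippets:
        cost = len(snippet.split())  # crude token estimate for the sketch
        if used + cost > max_injected_tokens:
            break                    # cap reached: stop injecting
        injected.append(snippet)
        used += cost
    return injected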

When Agent Artifacts Shines

  • You maintain a library of agent prompts and want versioning + metadata.
  • You run repeatable workflows (deploys, QA checks, data extraction).
  • You need auditability for production agent behavior.
  • You want bounded context overhead instead of raw history dumps.

Benefits & Use-Cases

See benefits and use-cases for persona-based examples (solo devs, vibe coders, production teams) and modular adoption guidance.


Integrations

MCP server (stdio and HTTP/SSE), provider tool adapters (OpenAI, Anthropic, Gemini), and framework adapters; see the sections below.


Docs (Start Here)

Documentation index: Docs (start here)


CLI Quickstart

The basic import/list/run commands appear in the Quickstart above. Full CLI + injection examples: CLI reference.

Skill tool integration (callable by LLMs)

Expose skills as tool/function definitions and execute tool calls:

from agent_artifacts.skills import (
    SkillToolConfig,
    SkillToolRegistry,
    SkillQueryConfig,
    execute_tool_call,
)

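# "storage" below is an already-configured storage backend instance
# (see "Storage + Python API" for setup).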
registry = SkillToolRegistry.from_storage(
    storage,
    query=SkillQueryConfig(tags=["stable"]),
    config=SkillToolConfig(name_strategy="name_version"),
)

tool_defs = registry.definitions()  # pass to your LLM runtime as tool/function specs

# ...when the model calls a tool:
result = execute_tool_call(storage, tool_name, tool_arguments, registry=registry)
print(result.to_dict())

Provider-specific tool adapters and SDK call examples live in tool adapters.

Tools quickstart:

  • Build tool definitions with SkillToolRegistry.from_storage(...)
  • Convert them for your provider via to_openai_tools / to_anthropic_tools / to_gemini_tools
  • Execute tool calls with execute_tool_call(...)
  • Auto-run from model responses with auto_execute_with_model(...) (see tool adapters)
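
For a sense of the converted output, here is roughly what an OpenAI-style tool spec could look like for the api_tester skill; the exact fields the converter emits may differ:

# Illustrative OpenAI-style tool/function spec (not actual converter output):
openai_tool = {
    "type": "function",
    "function": {
        "name": "api_tester_0_1_0",  # one plausible name under name_strategy="name_version"
        "description": "Run the api_tester prompt skill.",
        "parameters": {              # JSON Schema derived from the skill's inputs
            "type": "object",
            "properties": {
                "base_url": {"type": "string"},
                "endpoints": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["base_url"],  # assumed; depends on the skill definition
        },
    },
}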

Runnable demo (no external SDKs required):

python examples/skills/tool_adapters_demo.py

MCP server (stdio)

Expose skills + memory + traces via Model Context Protocol:

agent-artifacts-mcp --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db

HTTP/SSE transport (optional):

agent-artifacts-mcp-http --host 127.0.0.1 --port 8001 --backend sqlite --db ~/.agent-artifacts/agent-artifacts.db

MCP quickstart (60 seconds)

Copy/paste MCP client config (Cursor, Claude Desktop, etc.):

{
  "mcpServers": {
    "agent-artifacts": {
      "command": "agent-artifacts-mcp",
      "args": ["--backend", "sqlite", "--db", "/path/to/agent-artifacts.db"]
    }
  }
}

60-second smoke test (HTTP): see MCP HTTP demo.

  • Tool inventory and request/response examples: MCP server docs.
  • Client setup: MCP clients.
  • Cursor quickstart: Cursor config template and Cursor guide.
  • Windsurf and Claude setup: Windsurf guide and Claude guide.
  • Compatibility matrix and example app: MCP compatibility and MCP examples.

Prompt skills surfaced via MCP include argument metadata. If your skill inputs use JSON Schema fields like description (or title), MCP clients can render richer prompt UIs:

inputs:
  text:
    type: string
    description: Text to summarize.

Decision traces + audit journal:

agent-artifacts trace log --decision execute_skill --skill-ref deploy_fastapi@1.0.0 --reason "deploy requested" --confidence 0.9 --result success --tx-id <tx_id>
agent-artifacts trace query --decision execute_skill --limit 50
agent-artifacts trace query --skill-ref deploy_fastapi@1.0.0 --result success --created-after 2026-01-01T00:00:00Z --correlation-id corr-123
agent-artifacts journal query --tx-id <tx_id> --limit 50 --show-payload
agent-artifacts journal query --tx-id <tx_id> --limit 10 --format json
agent-artifacts replay --tx-id <tx_id> --limit 50 --show-payload

CLI run retries/timeouts:

agent-artifacts run deploy_fastapi@1.0.0 \
  --inputs "{\"repo_path\": \".\", \"retries\": 1}" \
  --max-attempts 3 --retry-on timeout,failure --backoff-ms 250,500 \
  --total-timeout-s 60 --step-timeout-s 10 \
  --idempotency-key deploy-2026-01-27-001 \
  --trace-inputs-preview --trace-output-preview --trace-preview-max-chars 120

Config file (optional)

Configuration can be stored in ~/.agent-artifacts/agent-artifacts.yaml (or AGENT_ARTIFACTS_CONFIG) with precedence: CLI args > env vars > config file > defaults.

Starter template: config template.

storage:
  backend: sqlite
  db: ~/.agent-artifacts/agent-artifacts.db
  backend_config: {}
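
For illustration, a sketch of how that precedence could resolve one key; this is not the library's loader, and the AGENT_ARTIFACTS_* env-var naming (beyond AGENT_ARTIFACTS_CONFIG) is an assumption:

import os

DEFAULTS = {"backend": "sqlite"}

def resolve(key: str, cli_args: dict, file_config: dict):
    if key in cli_args:                         # 1. CLI args win
        return cli_args[key]
    env_key = f"AGENT_ARTIFACTS_{key.upper()}"  # assumed naming convention
    if env_key in os.environ:                   # 2. then environment variables
        return os.environ[env_key]
    if key in file_config:                      # 3. then the config file
        return file_config[key]
    return DEFAULTS.get(key)                    # 4. finally built-in defaults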

Inspect the resolved configuration and sources:

agent-artifacts config show --format json

Storage + Python API

Storage service, Postgres backend setup, and programmatic API examples live in: storage service docs and Python API.


Examples

Runnable examples live under examples/: prompt skills, the tool adapters demo, and MCP examples.


Contributing

Contributions are welcome. See contributing guide.

Open items we would love help with:

  • LeTTA / memU / ReMe adapters
  • Adapter compatibility notes + deprecation policy
  • Adapter conformance tests in CI
  • Memory pollution benchmark + trace replay regression tests

FAQ

Q: Is this just another RAG memory system?

No. Agent Artifacts focuses on:

  • transactional memory correctness
  • procedural skills as artifacts
  • decision trace auditability

Q: Why "Agent Artifacts"?

Because it describes the core idea: reusable agent behavior stored as versioned artifacts.


License

MIT. See LICENSE.

