From a9b886a0b8f2adaade5275de80327a9389a77e3c Mon Sep 17 00:00:00 2001 From: cemde Date: Fri, 2 Jan 2026 00:25:50 +0100 Subject: [PATCH 1/5] initial table --- README.md | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/README.md b/README.md index e473085..20c4cea 100644 --- a/README.md +++ b/README.md @@ -19,6 +19,46 @@ MASEval is an evaluation library that provides a unified interface for benchmark Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuses exclusively on evaluation infrastructure. It does not implement agents, define multi-agent communication protocols, or turn LLMs into agents. Instead, it wraps existing agent systems via simple adapters, orchestrates the evaluation lifecycle (setup, execution, measurement, teardown), and provides lifecycle hooks for tracing, logging, and metrics collection. This separation allows researchers to compare different agent architectures apples-to-apples across frameworks, while maintaining full control over their agent implementations. +## Why MASEval? + +| Library | Multi-Agent | System | Agnostic | Benchmarks | Multi-turn | No Lock-In | BYO | Trace-First | Error Attr | Light | Tooling | Sandbox | +| --------------- | :---------: | :----: | :------: | :--------: | :--------: | :--------: | :-: | :---------: | :--------: | :---: | :-----: | :-----: | +| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | ✅ | ✅ | 🟢 | +| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ✅ | +| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ✅ | +| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ❌ | +| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | ✅ | ✅ | ❌ | +| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ? | 🟡 | 🟡 | 🟡 | +| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | ❌ | 🟡 | ✅ | +| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ? | ✅ | 🟡 | ❌ | +| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | +| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ | +| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | ✅ | ✅ | ❌ | + +**Legend:** + +| Rating | Grading Criteria | +| ------ | ------------------------------------------------------------------------------------- | +| **✅** | Native Docker/K8s/VM integration with domain/network controls. Built-in sandbox. | +| **🟢** | Architecture supports BYO sandboxing through abstract `Environment` but not built-in. | +| **🟡** | Docker optional or partial integration. | +| **❌** | No sandboxing; runs in local Python environment. | + +| Column | Feature | One-Liner | +| --------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------- | +| **Multi-Agent** | Multi-Agent Native | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. | +| **System** | System-Level Comparison | Compare different framework implementations on the same benchmark (not just swapping LLMs). | +| **Agnostic** | Agent Framework Agnostic | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. | +| **Benchmarks** | Pre-Implemented Benchmarks | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). | +| **Multi-turn** | User-Agent Multi-turn | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. 
| +| **No Lock-In** | No Vendor Lock-In | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. | +| **BYO** | BYO Philosophy | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. | +| **Trace-First** | Trace-First Evaluation | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. | +| **Error Attr** | Structured Error Attribution | Distinguish agent faults from infrastructure/user errors for fair scoring (`AgentError` vs `EnvironmentError`). | +| **Light** | Lightweight | Minimal dependencies, small codebase (~20k LOC), quick time to first evaluation (~5-15 min). | +| **Tooling** | Professional Tooling | Published on PyPI, CI/CD, good test coverage, structured logging, active maintenance, excellent docs. | +| **Sandbox** | Sandboxed Execution | Built-in Docker/K8s/VM isolation for safe code execution (or BYO sandbox via abstract Environment). | + ## Core Principles: - **Evaluation, Not Implementation:** MASEval provides the evaluation infrastructure—you bring your agent implementation. Whether you've built agents with AutoGen, LangChain, custom code, or direct LLM calls, MASEval wraps them via simple adapters and runs them through standardized benchmarks. From ed27386077a639ba3632590a4d1b67aa061db8c1 Mon Sep 17 00:00:00 2001 From: cemde Date: Fri, 2 Jan 2026 00:35:59 +0100 Subject: [PATCH 2/5] cleaned table --- README.md | 52 +++++++++++++++------------------------------------- 1 file changed, 15 insertions(+), 37 deletions(-) diff --git a/README.md b/README.md index 20c4cea..4ba3762 100644 --- a/README.md +++ b/README.md @@ -21,43 +21,21 @@ Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuse ## Why MASEval? -| Library | Multi-Agent | System | Agnostic | Benchmarks | Multi-turn | No Lock-In | BYO | Trace-First | Error Attr | Light | Tooling | Sandbox | -| --------------- | :---------: | :----: | :------: | :--------: | :--------: | :--------: | :-: | :---------: | :--------: | :---: | :-----: | :-----: | -| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | ✅ | ✅ | 🟢 | -| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ✅ | -| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ✅ | -| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ❌ | -| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | ✅ | ✅ | ❌ | -| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ? | 🟡 | 🟡 | 🟡 | -| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | ❌ | 🟡 | ✅ | -| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ? | ✅ | 🟡 | ❌ | -| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | -| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ | -| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | ✅ | ✅ | ❌ | - -**Legend:** - -| Rating | Grading Criteria | -| ------ | ------------------------------------------------------------------------------------- | -| **✅** | Native Docker/K8s/VM integration with domain/network controls. Built-in sandbox. | -| **🟢** | Architecture supports BYO sandboxing through abstract `Environment` but not built-in. | -| **🟡** | Docker optional or partial integration. | -| **❌** | No sandboxing; runs in local Python environment. 
| - -| Column | Feature | One-Liner | -| --------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------- | -| **Multi-Agent** | Multi-Agent Native | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. | -| **System** | System-Level Comparison | Compare different framework implementations on the same benchmark (not just swapping LLMs). | -| **Agnostic** | Agent Framework Agnostic | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. | -| **Benchmarks** | Pre-Implemented Benchmarks | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). | -| **Multi-turn** | User-Agent Multi-turn | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. | -| **No Lock-In** | No Vendor Lock-In | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. | -| **BYO** | BYO Philosophy | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. | -| **Trace-First** | Trace-First Evaluation | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. | -| **Error Attr** | Structured Error Attribution | Distinguish agent faults from infrastructure/user errors for fair scoring (`AgentError` vs `EnvironmentError`). | -| **Light** | Lightweight | Minimal dependencies, small codebase (~20k LOC), quick time to first evaluation (~5-15 min). | -| **Tooling** | Professional Tooling | Published on PyPI, CI/CD, good test coverage, structured logging, active maintenance, excellent docs. | -| **Sandbox** | Sandboxed Execution | Built-in Docker/K8s/VM isolation for safe code execution (or BYO sandbox via abstract Environment). | +| Library | Multi-Agent Native | Cross-Framework Eval | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Trace-First Eval | Error Attribution | +| --------------- | :----------------: | :------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :--------------: | :---------------: | +| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | +| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ❌ | +| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | +| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | + +Compare multi-agent evaluation frameworks across key capabilities. 
**✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible ## Core Principles: From c29b4980dbf3a493726cbcb57cc91d363444ab0d Mon Sep 17 00:00:00 2001 From: cemde Date: Fri, 2 Jan 2026 00:51:27 +0100 Subject: [PATCH 3/5] added explanation for columns --- README.md | 49 ++++++++++++++++++++++++++++++++++--------------- 1 file changed, 34 insertions(+), 15 deletions(-) diff --git a/README.md b/README.md index 4ba3762..9676aa7 100644 --- a/README.md +++ b/README.md @@ -21,21 +21,40 @@ Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuse ## Why MASEval? -| Library | Multi-Agent Native | Cross-Framework Eval | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Trace-First Eval | Error Attribution | -| --------------- | :----------------: | :------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :--------------: | :---------------: | -| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | -| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ❌ | -| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | -| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | - -Compare multi-agent evaluation frameworks across key capabilities. **✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible +Compare multi-agent evaluation frameworks across key capabilities. + +| Library | Multi-Agent Native | System-Level Comparison | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Trace-First Eval | Error Attribution | +| --------------- | :----------------: | :---------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :--------------: | :---------------: | +| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | +| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ❌ | +| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | +| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | + +**✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible + +
+Expand for Column Explanation + +| Feature | Explanation | +| --------------------------- | --------------------------------------------------------------------------------------------------------------- | +| **Multi-Agent Native** | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. | +| **System-Level Comparison** | Compare different framework implementations on the same benchmark (not just swapping LLMs). | +| **Framework Agnostic** | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. | +| **Ready Benchmarks** | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). | +| **Multi-turn Users** | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. | +| **Open Source** | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. | +| **Flexible (BYO)** | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. | +| **Trace-First Eval** | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. | +| **Error Attribution** | Distinguish agent faults from infrastructure/user errors for fair scoring (`AgentError` vs `EnvironmentError`). | + +
## Core Principles: From 6c06ab203864db90a8253e216c7ea099203beb89 Mon Sep 17 00:00:00 2001 From: cemde Date: Fri, 2 Jan 2026 11:29:46 +0100 Subject: [PATCH 4/5] small fix to table --- README.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/README.md b/README.md index 9676aa7..dd64e79 100644 --- a/README.md +++ b/README.md @@ -23,19 +23,19 @@ Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuse Compare multi-agent evaluation frameworks across key capabilities. -| Library | Multi-Agent Native | System-Level Comparison | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Trace-First Eval | Error Attribution | -| --------------- | :----------------: | :---------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :--------------: | :---------------: | -| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | -| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | ❌ | -| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | -| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | +| Library | Multi-Agent Native | System-Level Comparison | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Action / State Eval | Error Attribution | +| --------------- | :----------------: | :---------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :-----------------: | :---------------: | +| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | +| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | +| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | 🟡 | +| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | +| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | 🟡 | +| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | +| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | +| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | **✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible @@ -51,7 +51,7 @@ Compare multi-agent evaluation frameworks across key capabilities. | **Multi-turn Users** | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. | | **Open Source** | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. | | **Flexible (BYO)** | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. | -| **Trace-First Eval** | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. | +| **Action / State Eval** | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. | | **Error Attribution** | Distinguish agent faults from infrastructure/user errors for fair scoring (`AgentError` vs `EnvironmentError`). 
| From 6908d1dbb2bcf48a82eb0de07c4b8a39547f7ca5 Mon Sep 17 00:00:00 2001 From: cemde Date: Fri, 2 Jan 2026 18:28:40 +0100 Subject: [PATCH 5/5] updated table added table to docs --- README.md | 55 +++++++++++++++++++++++++++++---------------------- docs/index.md | 55 ++++++++++++++++++++++++++++++++++++++------------- 2 files changed, 72 insertions(+), 38 deletions(-) diff --git a/README.md b/README.md index dd64e79..aa71efb 100644 --- a/README.md +++ b/README.md @@ -23,36 +23,43 @@ Analogous to pytest for testing or MLflow for ML experimentation, MASEval focuse Compare multi-agent evaluation frameworks across key capabilities. -| Library | Multi-Agent Native | System-Level Comparison | Framework Agnostic | Ready Benchmarks | Multi-turn Users | Open Source | Flexible (BYO) | Action / State Eval | Error Attribution | -| --------------- | :----------------: | :---------------------: | :----------------: | :--------------: | :--------------: | :---------: | :------------: | :-----------------: | :---------------: | -| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | -| **Inspect-AI** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **AnyAgent** | 🟡 | ✅ | ✅ | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | ❌ | -| **DeepEval** | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | 🟡 | 🟡 | -| **AgentGym** | 🟡 | ❌ | ❌ | ✅ | 🟡 | ✅ | ❌ | 🟡 | ❌ | -| **AgentBeats** | ✅ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟢 | 🟡 | 🟡 | -| **MCPEval** | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ❌ | -| **Phoenix** | 🟡 | ❌ | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | -| **LangSmith** | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | +| Library | Multi-Agent | System Evaluation | Agent-Agnostic | Benchmarks | Multi-turn User | No Lock-In | BYO | State-Action Eval | Error Attr | Lightweight | Project Maturity | Sandboxed Environment | +| ----------------- | :---------: | :---------------: | :------------: | :--------: | :-------------: | :--------: | :-: | :---------------: | :--------: | :---------: | :--------------: | :-------------------: | +| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | ✅ | ✅ | 🟢 | +| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | ✅ | 🟡 | ✅ | +| **AnyAgent** | 🟡 | ✅ | ✅ | ❌ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ❌ | +| **Inspect-AI** | 🟡 | ✅ | 🟡 | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ✅ | +| **MLflow GenAI** | 🟡 | 🟡 | 🟢 | ❌ | 🟡 | ✅ | 🟢 | ✅ | ❌ | 🟡 | ✅ | 🟡 | +| **LangSmith** | 🟡 | 🟡 | 🟡 | ❌ | ✅ | ❌ | 🟡 | ✅ | ❌ | ✅ | ✅ | ❌ | +| **OpenCompass** | ❌ | 🟡 | ❌ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | ❌ | ✅ | 🟡 | +| **AgentGym** | ❌ | ❌ | ❌ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ❌ | 🟡 | 🟡 | +| **Arize Phoenix** | 🟡 | ❌ | 🟡 | ❌ | ❌ | 🟡 | 🟢 | ✅ | ❌ | 🟡 | ✅ | ❌ | +| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | 🟡 | ? | 🟡 | 🟡 | 🟡 | +| **TruLens** | 🟡 | ❌ | 🟡 | ❌ | ❌ | ✅ | 🟡 | 🟢 | ❌ | 🟡 | ✅ | ❌ | +| **AgentBeats** | 🟡 | ❌ | 🟡 | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ? | ✅ | 🟡 | 🟡 | +| **DeepEval** | 🟡 | ❌ | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ | +| **MCPEval** | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ | +| **Galileo** | 🟡 | ❌ | 🟡 | ❌ | ❌ | ❌ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ | **✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible
 Expand for Column Explanation

-| Feature | Explanation |
-| --------------------------- | ----------------------------------------------------------------------------------------------------------------- |
-| **Multi-Agent Native** | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. |
-| **System-Level Comparison** | Compare different framework implementations on the same benchmark (not just swapping LLMs). |
-| **Framework Agnostic** | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. |
-| **Ready Benchmarks** | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). |
-| **Multi-turn Users** | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. |
-| **Open Source** | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. |
-| **Flexible (BYO)** | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. |
-| **Action / State Eval** | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. |
-| **Error Attribution** | Distinguish agent faults from infrastructure/user errors for fair scoring (`AgentError` vs `EnvironmentError`). |
+| Column | Feature | One-Liner |
+| ------------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
+| **Multi-Agent** | Multi-Agent Native | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. |
+| **System Evaluation** | System-Level Comparison | Compare different framework implementations on the same benchmark (not just swapping LLMs). |
+| **Agent-Agnostic** | Agent Framework Agnostic | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. |
+| **Benchmarks** | Pre-Implemented Benchmarks | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). |
+| **Multi-turn User** | User-Agent Multi-turn | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. |
+| **No Lock-In** | No Vendor Lock-In | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. |
+| **BYO** | BYO Philosophy | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. |
+| **State-Action Eval** | Trace-First Evaluation | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. |
+| **Error Attr** | Structured Error Attribution | Structured exceptions distinguish between different failure sources for fair scoring (`AgentError` vs `EnvironmentError`). |
+| **Lightweight** | Lightweight | Minimal dependencies, small codebase (~20k LOC), quick time to first evaluation (~5-15 min). |
+| **Project Maturity** | Professional Tooling | Published on PyPI, CI/CD, good test coverage, structured logging, active maintenance, excellent docs. |
+| **Sandboxed Environment** | Sandboxed Execution | Built-in Docker/K8s/VM isolation for safe code execution (or BYO sandbox via abstract Environment). |
diff --git a/docs/index.md b/docs/index.md
index 38d2449..e9db642 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -16,6 +16,47 @@ pip install maseval

 More details in the [Quickstart](getting-started/quickstart.md)

+## Why MASEval?
+
+Compare multi-agent evaluation frameworks across key capabilities.
+
+| Library | Multi-Agent | System Evaluation | Agent-Agnostic | Benchmarks | Multi-turn User | No Lock-In | BYO | State-Action Eval | Error Attr | Lightweight | Project Maturity | Sandboxed Environment |
+| ----------------- | :---------: | :---------------: | :------------: | :--------: | :-------------: | :--------: | :-: | :---------------: | :--------: | :---------: | :--------------: | :-------------------: |
+| **MASEval** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟢 | ✅ | ✅ | ✅ | ✅ | 🟢 |
+| **HAL Harness** | 🟡 | ✅ | ✅ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | ✅ | 🟡 | ✅ |
+| **AnyAgent** | 🟡 | ✅ | ✅ | ❌ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ✅ | ✅ | ❌ |
+| **Inspect-AI** | 🟡 | ✅ | 🟡 | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ✅ |
+| **MLflow GenAI** | 🟡 | 🟡 | 🟢 | ❌ | 🟡 | ✅ | 🟢 | ✅ | ❌ | 🟡 | ✅ | 🟡 |
+| **LangSmith** | 🟡 | 🟡 | 🟡 | ❌ | ✅ | ❌ | 🟡 | ✅ | ❌ | ✅ | ✅ | ❌ |
+| **OpenCompass** | ❌ | 🟡 | ❌ | ✅ | 🟡 | ✅ | 🟡 | 🟡 | ❌ | ❌ | ✅ | 🟡 |
+| **AgentGym** | ❌ | ❌ | ❌ | ✅ | 🟡 | ✅ | 🟢 | 🟡 | ❌ | ❌ | 🟡 | 🟡 |
+| **Arize Phoenix** | 🟡 | ❌ | 🟡 | ❌ | ❌ | 🟡 | 🟢 | ✅ | ❌ | 🟡 | ✅ | ❌ |
+| **MARBLE** | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | 🟡 | ? | 🟡 | 🟡 | 🟡 |
+| **TruLens** | 🟡 | ❌ | 🟡 | ❌ | ❌ | ✅ | 🟡 | 🟢 | ❌ | 🟡 | ✅ | ❌ |
+| **AgentBeats** | 🟡 | ❌ | 🟡 | ❌ | ❌ | 🟡 | 🟡 | 🟡 | ? | ✅ | 🟡 | 🟡 |
+| **DeepEval** | 🟡 | ❌ | 🟡 | ❌ | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ |
+| **MCPEval** | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | 🟡 | 🟡 | ❌ | 🟡 | 🟡 | ❌ |
+| **Galileo** | 🟡 | ❌ | 🟡 | ❌ | ❌ | ❌ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ❌ |
+
+**✅** Full/Native · **🟢** Flexible for BYO · **🟡** Partial/Limited · **❌** Not possible
+
+??? info "Column Explanation"
+
+    | Column | Feature | One-Liner |
+    | --------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
+    | **Multi-Agent** | Multi-Agent Native | Native orchestration with per-agent tracing, independent message histories, and explicit coordination patterns. |
+    | **System Evaluation** | System-Level Comparison | Compare different framework implementations on the same benchmark (not just swapping LLMs). |
+    | **Agent-Agnostic** | Agent Framework Agnostic | Evaluate agents from any framework via thin adapters without requiring protocol adoption or code recreation. |
+    | **Benchmarks** | Pre-Implemented Benchmarks | Ships complete, ready-to-run benchmarks with environments, tools, and evaluators (not just templates). |
+    | **Multi-turn User** | User-Agent Multi-turn | First-class user simulation with personas, stop tokens, and tool access for realistic multi-turn conversations. |
+    | **No Lock-In** | No Vendor Lock-In | Fully open-source, works offline, permissive license (MIT/Apache), no mandatory cloud services or telemetry. |
+    | **BYO** | BYO Philosophy | Bring your own logging, agents, environments, and tools — flexibility over opinionated defaults. |
+    | **State-Action Eval** | Trace-First Evaluation | Evaluate intermediate steps and tool usage patterns via trace filtering, not just final output scoring. |
+    | **Error Attr** | Structured Error Attribution | Structured exceptions distinguish between different failure sources for fair scoring (`AgentError` vs `EnvironmentError`).
|
+    | **Lightweight** | Lightweight | Minimal dependencies, small codebase (~20k LOC), quick time to first evaluation (~5-15 min). |
+    | **Project Maturity** | Professional Tooling | Published on PyPI, CI/CD, good test coverage, structured logging, active maintenance, excellent docs. |
+    | **Sandboxed Environment** | Sandboxed Execution | Built-in Docker/K8s/VM isolation for safe code execution (or BYO sandbox via abstract Environment). |
+
 ## Core Principles

 - **Evaluation, Not Implementation:** MASEval provides the evaluation infrastructure—you bring your agent implementation. Whether you've built agents with AutoGen, LangChain, custom code, or direct LLM calls, MASEval wraps them via simple adapters and runs them through standardized benchmarks.
@@ -34,20 +75,6 @@ More details in the [Quickstart](getting-started/quickstart.md)

 - **Abstract Base Classes:** The library provides abstract base classes for core components (Task, Benchmark, Environment, Evaluator) with optional default implementations, giving users flexibility to customize while maintaining interface consistency.

-## Quickstart
-
-Install the package from PyPI:
-
-```bash
-pip install maseval
-```
-
-Run the example script shipped with the repository:
-
-```bash
-python examples/smolagents_research.py
-```
-
 ## API

 See the automatic API reference under `Reference`.