diff --git a/intro/faq.mdx b/intro/faq.mdx
index 322cfb1..cc7e462 100644
--- a/intro/faq.mdx
+++ b/intro/faq.mdx
@@ -78,6 +78,7 @@ import { Accordion, AccordionGroup, Card } from "@mintlify/components";
 - Iterate on prompts, tools, and configurations with quantitative feedback
 - Multi-turn simulations test conversational improvements
+ See our [Multi-turn Simulation](/features/multi-turn-simulation) and [A/B Comparison](/features/a-b-comparison) docs for specific improvement workflows.
diff --git a/intro/overview.mdx b/intro/overview.mdx
index 90cea4b..175dde2 100644
--- a/intro/overview.mdx
+++ b/intro/overview.mdx
@@ -1,10 +1,10 @@
 ---
 title: "Overview"
-description: "Build trusted AI agents with systematic testing and evaluation"
+description: "The simulation platform for building frontier AI agents"
 mode: "center"
 ---
 
-Scorecard helps teams build reliable AI products through systematic testing and evaluation. Test your AI agents before they impact users, catch regressions early, and deploy with confidence.
+Scorecard is the simulation platform for AI agent self-improvement. Run your agents through thousands of realistic scenarios in minutes and ship frontier capabilities with confidence.
 
 ## Platform Demo
@@ -20,7 +20,7 @@ Watch CEO Darius demonstrate the complete Scorecard workflow, from creating test
-    Learn how Scorecard works and why teams use it
+    Learn how simulation drives agent self-improvement
 
     Start sending traces in minutes
@@ -38,4 +38,4 @@ Watch CEO Darius demonstrate the complete Scorecard workflow, from creating test
 
 ## Get help
 
-Email support@scorecard.io for assistance with your AI evaluation setup.
\ No newline at end of file
+Email support@scorecard.io for assistance with your AI agent setup.
diff --git a/intro/what-is-scorecard.mdx b/intro/what-is-scorecard.mdx
index 2d68f42..a26c6c1 100644
--- a/intro/what-is-scorecard.mdx
+++ b/intro/what-is-scorecard.mdx
@@ -1,63 +1,81 @@
 ---
 title: "What Is Scorecard?"
-description: "Build trusted AI agents with systematic testing and evaluation"
+description: "The simulation platform for building frontier AI agents. Run thousands of realistic scenarios in minutes and ship new capabilities with confidence."
 ---
 
 ![Scorecard Workflow](/images/what-is-scorecard/scorecard-workflow.gif)
 
-Scorecard is an AI evaluation platform that helps teams build reliable AI products through systematic AI evals. Test your AI agents before they impact users, catch regressions early, and deploy with confidence.
+Scorecard is the simulation platform for AI agent self-improvement. Run your agents through thousands of realistic scenarios in minutes, encode expert judgment into scalable reward models, and ship frontier capabilities in days.
 
-## Why you need AI evals
+## The bottleneck is feedback, not building
 
-Building production AI agents without proper AI evals is risky. Teams often discover issues in production because they lack visibility into AI behavior across different scenarios. Manual evals don't scale, and without systematic evaluation, it's impossible to know if changes improve or degrade performance.
+Teams are building increasingly complex agents — multi-step workflows, consequential actions, real-world integrations. But the way most teams validate these agents hasn't kept up. The current approach is manual: review a handful of production cases, wait weeks for expert feedback, and hope nothing slips through.
 
-Scorecard provides the infrastructure to run AI evals systematically, validate improvements, and prevent regressions.
+This limits you to scenarios you've already seen. Edge cases stay hidden until they hit production. Expert time doesn't scale — every new capability means more review cycles, longer iteration loops, and slower releases.
 
-## Who uses Scorecard
+The bottleneck has shifted from building to feedback.
+Scorecard flips this by turning expert judgment into automated reward models and replacing manual review with large-scale simulation.
 
-**AI Engineers** run evals systematically instead of manually checking outputs.
+
+**Traditional approach:** Review tens of production cases over weeks.
-**Agent Developers** test multi-turn conversations and complex workflows.
+**Scorecard approach:** Simulate tens of thousands of scenarios in 30 minutes.
+
-**Product Teams** validate that AI behavior matches user expectations.
+## How Scorecard works
-**QA Teams** build comprehensive test suites for AI agents.
+
+
+ Define reward criteria in natural language. Scorecard turns them into automated judges that score every scenario consistently and at scale.
-**Leadership** gets visibility into AI reliability and performance.
+ [Learn about metrics →](/features/metrics)
+
+
+ Run your agent through thousands of realistic scenarios using AI-powered personas. Generate diverse test scenarios automatically — no manual case writing required.
-## What Scorecard provides
+ [Multi-turn simulation →](/features/multi-turn-simulation) · [Synthetic data generation →](/features/synthetic-data-generation)
+
+
+ Quantitative A/B comparison across every metric. Iterate visually in the Playground with real-time feedback to find the best prompt, model, or architecture.
-**[Tracing](/features/tracing)** — Capture and inspect every step of your AI agent's execution. Understand how your agent processes requests, identify bottlenecks, and debug failures with full visibility into each trace.
+ [A/B comparison →](/features/a-b-comparison) · [Playground →](/features/playground)
+
+
+ Integrate simulation into CI/CD so every pull request is validated automatically. Monitor production with tracing and feed real traffic back into your simulation suite.
-**[Domain-specific metrics](/features/metrics)** — Choose from pre-validated metrics for your industry or create custom evaluators, available for legal, financial services, healthcare, customer support, and general quality evaluation.
+ [GitHub Actions →](/features/github-actions) · [Tracing →](/features/tracing)
+
+
-**[Testset management](/features/testsets)** — Convert real production scenarios into reusable test cases. When your AI fails in production, capture that case and add it to your regression suite.
+## Works with your agent stack
-**[Playground evaluation](/features/playground)** — Test prompts and models side-by-side without writing code. Compare different approaches across providers (OpenAI, Anthropic, Google Gemini) to find what works best.
-
-**[Automated workflows](/features/github-actions)** — Integrate AI evals into your CI/CD pipeline. Get alerts when performance drops and prevent regressions before they reach users.
-## How it works
+
+
+ Zero-code tracing. Set three environment variables and get full visibility into agent decisions, tool use, and costs.
+
+
+ Trace LangChain agents and chains with OpenTelemetry.
+
+
+ Works with OpenAI, Anthropic, Google, and any OpenTelemetry-compatible provider.
+
+
-1. **Instrument your agent** with Scorecard's tracing SDK to capture every step of execution.
-2. **Define metrics** that evaluate quality, accuracy, and safety of your agent's outputs.
-3. **Analyze traces** to identify failures, bottlenecks, and areas for improvement.
-4. **Deploy with confidence**, knowing your AI agent meets quality standards.
+## Get started
+Built by engineers from Waymo, Uber, and SpaceX who used large-scale simulation to ship autonomous vehicles, global logistics, and rockets — now applied to AI agents.
-
-
- Set up your first evaluation
+
+
+ Set up Scorecard and run a simulation in minutes.
+
+
+ Start testing without writing code.
-
- Start testing without code
+
+ Book a demo and see Scorecard in action.
-## Next steps
-
-Ready to integrate Scorecard into your workflow? We provide SDK support for Python and TypeScript, full REST API access, and GitHub Actions integration.
-
-Email support@scorecard.io for help getting started.
\ No newline at end of file
+Email support@scorecard.io for help getting started.
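
The what-is-scorecard page added in this patch says zero-code tracing needs only three environment variables, and describes the integrations as OpenTelemetry-compatible. As a hedged sketch of what that setup could look like, the standard OTLP exporter variables from the OpenTelemetry specification fit that description; the endpoint URL and header value below are hypothetical placeholders, not Scorecard's documented configuration:

```shell
# Hypothetical sketch: standard OpenTelemetry OTLP exporter variables.
# The endpoint and auth header are placeholders -- substitute the values
# from your Scorecard account.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://tracing.example.com"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer ${SCORECARD_API_KEY:-your-key}"
export OTEL_SERVICE_NAME="my-agent"

# Any OpenTelemetry-instrumented process started from this shell now
# exports its traces to the configured endpoint, with no code changes.
echo "tracing service: ${OTEL_SERVICE_NAME}"
```

The `OTEL_*` names are part of the OpenTelemetry SDK specification; only the values would be Scorecard-specific and would come from the quickstart.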