features/tracing.mdx

- **Observability / AI spans / request logs**: We capture standard OpenTelemetry traces and spans for LLM calls and related operations.
- **Agent runs / tools / function calls**: These appear as nested spans in the trace tree, with inputs/outputs when available.
- **Prompt/Completion pairs**: Extracted from common keys (`openinference.*`, `ai.prompt` / `ai.response`, `gen_ai.*`) so they can be turned into testcases and scored.

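As a rough illustration of how a pair might be pulled from span attributes under these key conventions, here is a minimal stdlib-only sketch. The exact key names below (beyond the three families named above) are illustrative assumptions, not Scorecard's actual implementation:

```python
# Hypothetical sketch: pull a prompt/completion pair out of a flat
# span-attribute dict by checking the common key conventions in order.
PROMPT_KEYS = ("openinference.input.value", "ai.prompt", "gen_ai.prompt")
RESPONSE_KEYS = ("openinference.output.value", "ai.response", "gen_ai.completion")

def extract_pair(attributes):
    """Return a (prompt, response) tuple if both sides are present, else None."""
    prompt = next((attributes[k] for k in PROMPT_KEYS if k in attributes), None)
    response = next((attributes[k] for k in RESPONSE_KEYS if k in attributes), None)
    if prompt is None or response is None:
        return None
    return (str(prompt), str(response))

# A span carrying ai.* style keys yields a usable pair:
print(extract_pair({"ai.prompt": "Hi", "ai.response": "Hello!"}))
```

A span missing either side simply yields `None`, so partial spans are skipped rather than producing half-formed testcases.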
---


- **Time ranges**: 30m, 24h, 3d, 7d, 30d, All.
- **Project scope**: toggle between Current project and All projects.
- **SearchText**: full‑text across span name, span kind, span ID, and all span attributes (including nested values in prompt/response fields).
- **Match previews**: quick context snippets with deep links to traces.
- **Cursor pagination**: efficient browsing with shareable URLs.
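Cursor pagination can be sketched in a few lines: the last-seen span ID is encoded as an opaque token that the next request passes back. The encoding and field names here are assumptions for illustration, not Scorecard's actual wire format:

```python
import base64

def page(spans, cursor=None, limit=2):
    """Return (items, next_cursor). `spans` is assumed sorted newest-first."""
    start = 0
    if cursor is not None:
        # Decode the opaque cursor back to the last span ID we already served.
        last_id = base64.urlsafe_b64decode(cursor).decode()
        start = next(i + 1 for i, s in enumerate(spans) if s["id"] == last_id)
    items = spans[start:start + limit]
    next_cursor = None
    if items and start + limit < len(spans):
        next_cursor = base64.urlsafe_b64encode(items[-1]["id"].encode()).decode()
    return items, next_cursor

spans = [{"id": f"s{i}"} for i in range(5)]
first, cur = page(spans)           # s0, s1 plus a cursor
second, cur2 = page(spans, cur)    # s2, s3
```

Because the cursor is just a string, it survives in a shareable URL, which is what makes paginated views deep-linkable.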

### Conversation view

The **Conversation** tab displays AI interactions in a familiar chat-like format, making it easy to follow the flow of prompts and responses. This view:

- Extracts messages from `gen_ai` spans and renders them as user/assistant bubbles
- Shows system prompts in a distinct format
- Displays tool calls and their results inline
- Supports Claude Code traces with agent identification and rich tool rendering
- Automatically deduplicates messages across spans

Use the Conversation view to quickly understand what happened in a trace without navigating the span tree.
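The deduplication step above can be sketched as keeping the first occurrence of each (role, content) pair in order. This is an illustrative simplification, not the view's actual logic:

```python
def dedupe_messages(messages):
    """Keep the first occurrence of each (role, content) pair, preserving order."""
    seen = set()
    out = []
    for m in messages:
        key = (m["role"], m["content"])
        if key not in seen:
            seen.add(key)
            out.append(m)
    return out

# Two spans in one trace often re-report the same system prompt;
# after deduplication it renders only once:
span_a = [{"role": "system", "content": "Be terse."},
          {"role": "user", "content": "Hi"}]
span_b = [{"role": "system", "content": "Be terse."},
          {"role": "assistant", "content": "Hello!"}]
merged = dedupe_messages(span_a + span_b)
print([m["role"] for m in merged])  # ['system', 'user', 'assistant']
```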

### User annotations

Add human feedback directly to traces and individual spans using the **Annotations** feature. Annotations help you:

- **Rate interactions**: Give thumbs-up or thumbs-down ratings to mark good or problematic responses

- **Add comments**: Document observations, issues, or suggestions for improvement
- **Pinpoint feedback**: Annotate entire traces for overall feedback, or specific spans to pinpoint exactly where issues occurred.


To add an annotation:
1. Open a trace and select a span (or view the trace overview)
2. Expand the **Annotations** section
3. Click **Add Annotation**
4. Optionally select a thumbs-up or thumbs-down rating
5. Add a comment describing your feedback
6. Click **Submit**

Annotations are visible to your team and persist with the trace. Spans with annotations show a feedback indicator in the span tree for easy identification.
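Conceptually, an annotation record might look like the dataclass below. This shape is a hypothetical sketch for illustration; it is not Scorecard's API or storage schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    trace_id: str
    span_id: Optional[str] = None   # None means feedback applies to the whole trace
    rating: Optional[str] = None    # "thumbs_up", "thumbs_down", or no rating
    comment: str = ""

# Trace-level feedback vs. span-level feedback:
trace_level = Annotation(trace_id="t1", comment="Overall flow looks right.")
span_level = Annotation(trace_id="t1", span_id="s42", rating="thumbs_down",
                        comment="Tool call returned stale data.")
print(trace_level.span_id is None)  # True: trace-wide feedback
```

Keeping `span_id` optional is what lets one record type cover both the trace-overview case and the pinpointed span case.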

---

## Trace Grouping

---

## Turn traces into testcases

Live traffic exposes edge cases that synthetic datasets miss. From any span that contains prompt/response attributes, click **Create Testcase** and Scorecard will:

1. Extract `openinference.*`, `ai.prompt` / `ai.response`, or `gen_ai.*` fields.
2. Save the pair into a chosen **Testset**.

3. Make it immediately available for offline evaluation runs.

Read more in [Trace to Testcase](/features/trace-to-testcase).
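The extract-and-save steps above can be sketched as a small function that appends to a testset only when a span carries both sides of the pair. The flat `gen_ai.*` key names and the testcase field names are assumptions for illustration:

```python
# Assumed flat key names for illustration; real gen_ai attributes may be nested.
GEN_AI_PROMPT = "gen_ai.prompt"
GEN_AI_COMPLETION = "gen_ai.completion"

def span_to_testcase(attributes, testset):
    """Append a testcase dict to `testset` if the span has both fields.

    Returns True on success, False if the span cannot become a testcase.
    """
    if GEN_AI_PROMPT not in attributes or GEN_AI_COMPLETION not in attributes:
        return False
    testset.append({
        "input": attributes[GEN_AI_PROMPT],
        "expected": attributes[GEN_AI_COMPLETION],
    })
    return True

testset = []
span_to_testcase({"gen_ai.prompt": "Summarize X", "gen_ai.completion": "X is..."}, testset)
print(len(testset))  # 1
```

Once saved, each entry in the testset is a plain input/expected pair, which is exactly what an offline evaluation run consumes.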
- **CrewAI** – Multi-agent collaboration
- **Haystack** – Search and question-answering pipelines
- **LangChain** – Chains, agents, and tool calls
- **Langflow** – Visual workflow builder

- **LangGraph** – Multi-step workflows and state machines
- **LiteLLM** – Unified interface for 100+ LLMs

- **LlamaIndex** – RAG pipelines and document retrieval
- **[OpenAI Agents SDK](https://github.com/openai/openai-agents-python?tab=readme-ov-file#tracing)** – Assistants API and function calling
- **[Vercel AI SDK](https://ai-sdk.dev/providers/observability/scorecard)** – Full-stack AI applications
- Cohere
- Google Gemini
- Google Vertex AI
- Groq

- HuggingFace
- IBM Watsonx AI

- Mistral AI
- Ollama

- OpenAI
- Replicate
- Together AI
### Vector Databases
- Chroma
- LanceDB
- Marqo

- Milvus

- Pinecone

- Qdrant

- Weaviate


<Info>
For the complete list of supported integrations, see the [OpenLLMetry repository](https://github.com/traceloop/openllmetry). All integrations are built on OpenTelemetry standards and maintained by the community.
</Info>
- **Debugging slow/failed requests with full span context**
- **Auditing prompts/completions for compliance**
- **Attributing token cost and latency to services/cohorts**
- **Building evaluation datasets from real traffic (Trace to Testcase)**


---
