
fix: SDK cost tracking — add claude-sonnet-4 pricing + model in span data #255

Merged
bmdhodl merged 2 commits into main from fix/sdk-cost-tracking
Mar 15, 2026

Conversation

bmdhodl (Owner) commented Mar 15, 2026

Summary

Autoresearch agent traces show $0 cost and an "unknown" model in the dashboard. Two root causes:

  1. Missing model pricing: claude-sonnet-4-20250514 and claude-opus-4-20250515 were not in the cost.py pricing table, so estimate_cost() returned $0.00 with an UnknownModelWarning.

  2. Empty span data: patch_anthropic/patch_openai created trace spans with data={}. The dashboard queries e.data->>'model' for cost-by-model aggregation, which returned "unknown". The fix passes data={"model": ..., "provider": ...} in all 4 patch variants (sync/async x openai/anthropic).

Evidence

Before fix: autoresearch traces show "unknown" model and $0.00 cost
After fix: estimate_cost("claude-sonnet-4-20250514", 1000, 500, "anthropic") returns $0.0105
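The $0.0105 figure is consistent with Anthropic's published Sonnet pricing ($3 per million input tokens, $15 per million output tokens: 1000 × $3/M + 500 × $15/M = $0.003 + $0.0075). A minimal sketch of how the pricing table and estimate_cost() could fit together — names mirror the PR, but the table values and function body here are assumptions, not the actual cost.py:

```python
import warnings

# Per-million-token prices in USD. Values assume Anthropic's published
# Sonnet 4 / Opus 4 pricing; the real table lives in cost.py.
PRICING = {
    "claude-sonnet-4-20250514": {"input": 3.00, "output": 15.00},
    "claude-opus-4-20250515": {"input": 15.00, "output": 75.00},
}

class UnknownModelWarning(UserWarning):
    """Emitted when a model is missing from the pricing table."""

def estimate_cost(model, input_tokens, output_tokens, provider="anthropic"):
    """Return estimated USD cost; 0.0 with a warning for unknown models."""
    prices = PRICING.get(model)
    if prices is None:
        warnings.warn(f"no pricing for {model}", UnknownModelWarning)
        return 0.0
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# estimate_cost("claude-sonnet-4-20250514", 1000, 500) -> 0.0105
```

Any model absent from the table falls through to the warning path, which is exactly the $0.00 symptom the PR describes.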

Test plan

  • 383/384 SDK tests pass (1 pre-existing Windows path failure)
  • Cost estimation verified for new models
  • Publish new SDK version, restart autoresearch agent, verify cost appears in dashboard

🤖 Generated with Claude Code

bmdhodl and others added 2 commits March 15, 2026 18:03
Two bugs causing $0 cost and "unknown" model in dashboard:

1. claude-sonnet-4-20250514 and claude-opus-4-20250515 were missing
   from the pricing table. All calls returned $0.00 with
   UnknownModelWarning. Added both models.

2. patch_anthropic/patch_openai created spans with empty data={}.
   Dashboard queries e.data->>'model' for cost-by-model aggregation,
   which returned 'unknown'. Now passes data={"model": ...,
   "provider": ...} when creating the trace span.

Both fixes apply to all 4 patch variants (sync/async x openai/anthropic).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@bmdhodl bmdhodl merged commit 3c5d81a into main Mar 15, 2026
10 checks passed

chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8229a68f86


"""Shared traced wrapper for sync OpenAI create calls."""
model = kwargs.get("model", "unknown")
with tracer.trace(f"llm.openai.{model}") as ctx:
with tracer.trace(f"llm.openai.{model}", data={"model": model, "provider": "openai"}) as ctx:


P2: Serialize model before attaching it to span data

This change stores model in span data without sanitization, so a non-JSON-serializable model value (for example, an object or enum passed through wrapper code) will make sink emission fail when JsonlFileSink/StdoutSink calls json.dumps(event) in TraceContext.__enter__, aborting the LLM call before original_create runs. Previously this path only interpolated model into the span name (which stringifies it), so this commit introduces a new hard failure mode for malformed but commonly seen dynamic inputs.

