Description
When using the traceloop SDK with the Vercel AI SDK, the `gen_ai.input.messages` attribute in telemetry spans does not include tool_call parts from assistant messages. Only the text content is preserved, making it impossible to see tool call arguments in the conversation history.
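The stripping behavior can be modeled in isolation. The sketch below is a hypothetical standalone reconstruction (the type names and `extractTextOnly` helper are illustrative, not the SDK's actual code) of a text-only filter applied to an assistant message containing a tool call:

```typescript
// Illustrative content-part types, loosely modeled on AI SDK message content.
type ContentItem =
  | { type: "text"; text: string }
  | { type: "tool-call"; toolCallId: string; toolName: string; args: unknown };

// A text-only filter: anything that is not a "text" part is silently discarded.
const extractTextOnly = (content: ContentItem[]): string =>
  content
    .filter((item): item is { type: "text"; text: string } =>
      item.type === "text" && Boolean(item.text))
    .map((item) => item.text)
    .join(" ");

const assistantContent: ContentItem[] = [
  { type: "text", text: "Let me check the weather for you." },
  {
    type: "tool-call",
    toolCallId: "call_123",
    toolName: "get_weather",
    args: { location: "Paris" },
  },
];

// The tool call (id, name, arguments) is gone; only the text survives.
console.log(extractTextOnly(assistantContent));
// → "Let me check the weather for you."
```

The tool call's id, name, and arguments never reach the telemetry span, which is the behavior this issue reports.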
Expected Behavior
When an LLM response contains tool calls and those are sent back as part of the conversation history in subsequent calls, the `gen_ai.input.messages` attribute should include assistant messages with tool_call parts:
```json
[
  {"role": "user", "parts": [{"type": "text", "content": "What's the weather?"}]},
  {"role": "assistant", "parts": [
    {"type": "text", "content": "Let me check the weather for you."},
    {"type": "tool_call", "id": "call_123", "name": "get_weather", "arguments": {"location": "Paris"}}
  ]},
  {"role": "tool", "parts": [{"type": "tool_result", "toolCallId": "call_123", "output": {"temp": 72}}]}
]
```

Actual Behavior
The tool_call parts are stripped, and only text content remains:
```json
[
  {"role": "user", "parts": [{"type": "text", "content": "What's the weather?"}]},
  {"role": "assistant", "parts": [{"type": "text", "content": "Let me check the weather for you."}]},
  {"role": "tool", "parts": [{"type": "text", "content": "[{\"type\":\"tool-result\",...}]"}]}
]
```

Root Cause
The issue is in `src/lib/tracing/ai-sdk-transformations.ts`:
1. `processMessageContent` only extracts text parts:

```typescript
const processMessageContent = (content: any): string => {
  if (Array.isArray(content)) {
    const textItems = content.filter(
      (item: any) =>
        item &&
        typeof item === "object" &&
        item.type === TYPE_TEXT && // ← Only matches "text" type
        item.text,
    );
    // ...
  }
};
```

2. `transformPrompts` creates messages with only text parts:
```typescript
inputMessages.push({
  role: msg.role,
  parts: [
    {
      type: TYPE_TEXT, // ← Always "text"
      content: processedContent,
    },
  ],
});
```

Suggested Fix
Modify `transformPrompts` to preserve the original message structure, including tool-call parts. The function already handles tool calls correctly for output messages in `transformResponseToolCalls`; similar logic should be applied to input messages.
Example fix approach:
```typescript
const transformPrompts = (attributes: Record<string, any>): void => {
  // ... existing parsing code ...
  messages.forEach((msg: { role: string; content: any }, index: number) => {
    const parts: any[] = [];
    if (Array.isArray(msg.content)) {
      for (const item of msg.content) {
        if (item.type === TYPE_TEXT && item.text) {
          parts.push({ type: TYPE_TEXT, content: item.text });
        } else if (item.type === "tool-call" || item.type === "tool_call") {
          parts.push({
            type: TYPE_TOOL_CALL,
            tool_call: {
              id: item.toolCallId,
              name: item.toolName,
              arguments: item.args ?? item.input,
            },
          });
        } else if (item.type === "tool-result") {
          parts.push({
            type: "tool_result",
            tool_call_id: item.toolCallId,
            content: item.result ?? item.output,
          });
        }
      }
    } else {
      parts.push({ type: TYPE_TEXT, content: processMessageContent(msg.content) });
    }
    inputMessages.push({ role: msg.role, parts });
  });
};
```

Environment
- `@traceloop/node-server-sdk`: 0.22.6
- `ai` (Vercel AI SDK): 4.x / 5.x
- Node.js: 20.x
Reproduction
- Use the AI SDK with `experimental_telemetry: { isEnabled: true }`
- Make a request that triggers tool calls
- Continue the conversation (multi-turn with tool results)
- Inspect the `gen_ai.input.messages` attribute on subsequent `ai.streamText.doStream` spans
- Notice that assistant messages only contain text parts, not tool_call parts
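For reference, the suggested transform can be exercised standalone against a multi-turn history like the one in these steps. This is a self-contained sketch (the `toParts` helper and part key names are illustrative; the real change belongs in `src/lib/tracing/ai-sdk-transformations.ts`) showing that tool_call and tool_result parts survive the conversion:

```typescript
type Part = Record<string, unknown>;

// Convert AI SDK message content into gen_ai-style parts, preserving
// tool calls and tool results instead of keeping only text.
const toParts = (content: any): Part[] => {
  if (!Array.isArray(content)) {
    return [{ type: "text", content: String(content) }];
  }
  const parts: Part[] = [];
  for (const item of content) {
    if (item.type === "text" && item.text) {
      parts.push({ type: "text", content: item.text });
    } else if (item.type === "tool-call" || item.type === "tool_call") {
      parts.push({
        type: "tool_call",
        id: item.toolCallId,
        name: item.toolName,
        arguments: item.args ?? item.input,
      });
    } else if (item.type === "tool-result") {
      parts.push({
        type: "tool_result",
        toolCallId: item.toolCallId,
        output: item.result ?? item.output,
      });
    }
  }
  return parts;
};

// A multi-turn history like the one produced by the reproduction steps.
const history = [
  { role: "user", content: "What's the weather?" },
  {
    role: "assistant",
    content: [
      { type: "text", text: "Let me check the weather for you." },
      { type: "tool-call", toolCallId: "call_123", toolName: "get_weather", args: { location: "Paris" } },
    ],
  },
  { role: "tool", content: [{ type: "tool-result", toolCallId: "call_123", result: { temp: 72 } }] },
];

const inputMessages = history.map((msg) => ({ role: msg.role, parts: toParts(msg.content) }));

// The assistant message now carries both a text part and a tool_call part.
console.log(JSON.stringify(inputMessages[1].parts.map((p) => p.type)));
// → ["text","tool_call"]
```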