
Fix: SSE data: [DONE] sentinel for non-streaming requests (complements PR #285) #286

Open
rothnic wants to merge 1 commit into decolua:master from rothnic:fix/response-format-support

Conversation

rothnic (Contributor) commented Mar 12, 2026

Summary

This PR fixes a critical SSE issue that was blocking AI SDK structured output compatibility. It complements the response_format fix from PR #285 (already merged).

Relation to PR #285

PR #285 (merged): Added response_format support for Claude by translating OpenAI's response_format parameter to Claude-compatible system prompts in openai-to-claude.js.

This PR (#286): Fixes the SSE data: [DONE] sentinel issue that was still causing AI SDK to fail even with the response_format fix in place.

Both fixes are required for full AI SDK generateObject() compatibility.

The Problem

9router emitted data: [DONE]\n\n even for non-streaming requests, causing AI SDK and other clients to fail JSON parsing with "Invalid JSON response" errors.

Root Cause Discovery

AI SDK's generateObject() sends stream: undefined (not stream: false). The original check stream !== false evaluated to true when stream was undefined, incorrectly triggering the SSE sentinel emission.

Solution

Changed the check from stream !== false to stream === true in:

  • open-sse/utils/stream.js (PASSTHROUGH mode + flush function)
  • open-sse/executors/github.js (TransformStream)

Now data: [DONE] is ONLY emitted when explicitly streaming (stream: true).
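The shape of the fix in a TransformStream's flush step looks roughly like this (function and variable names are illustrative, not the repo's actual code):

```javascript
// Sketch of the corrected sentinel guard. The real changes live in
// open-sse/utils/stream.js and open-sse/executors/github.js.
function makeSSETransform(requestBody) {
  const encoder = new TextEncoder();
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk); // pass chunks through unchanged
    },
    flush(controller) {
      // Before: `requestBody.stream !== false` — also fired when stream
      // was undefined. After: emit the sentinel only on explicit opt-in.
      if (requestBody.stream === true) {
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      }
    },
  });
}
```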

Testing

Verified working with:

  • AI SDK v6 generateObject() with gh/gpt-4o
  • AI SDK v6 generateObject() with gh/claude-sonnet-4.5
  • Resume tailoring workflow (job-tracker-opencode)
  • curl requests with stream: false
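A regression check for the non-streaming cases above could look like this (a hypothetical helper, not part of the project's test suite): the response body must parse as JSON and must not carry the SSE sentinel.

```javascript
// Hypothetical check for a non-streaming response body: it must be valid
// JSON and must not contain the leaked SSE sentinel.
function assertCleanJsonBody(bodyText) {
  if (bodyText.includes("data: [DONE]")) {
    throw new Error("SSE sentinel leaked into a non-streaming response");
  }
  return JSON.parse(bodyText); // throws on the "Invalid JSON response" cases
}
```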

Files Changed

  • open-sse/utils/stream.js - Fixed SSE sentinel logic (2 locations)
  • open-sse/executors/github.js - Fixed SSE sentinel in TransformStream

Impact

Fixes compatibility with:

  • AI SDK structured output (generateObject, generateText with schema)
  • OpenAI official clients
  • LangChain and other frameworks
  • Any client that doesn't explicitly set stream: true

Note on Merge Conflicts

This PR was rebased on top of PR #285 (response_format support). The conflicts were due to overlapping changes - PR #285 added the response_format support in openai-to-claude.js, while this PR focuses specifically on the SSE data: [DONE] issue. This PR now only includes the SSE fixes to avoid duplication.

rothnic marked this pull request as draft March 12, 2026 15:19
rothnic (Contributor, Author) commented Mar 12, 2026

I'm still coming across some other issues with structured outputs; I'll update this PR and mark it ready once I have them all addressed.

rothnic added a commit to rothnic/9router that referenced this pull request Mar 12, 2026
…lity

This commit fixes critical issues affecting structured JSON output:

1. **SSE data: [DONE] sentinel for non-streaming requests**
   Problem: 9router emitted 'data: [DONE]' even for non-streaming requests,
   causing AI SDK and other clients to fail JSON parsing.

   Root cause: AI SDK's generateObject() sends stream: undefined (not false),
   so stream !== false evaluates to true.

   Solution: Changed check from stream !== false to stream === true in:
   - open-sse/utils/stream.js (2 locations)
   - open-sse/executors/github.js (1 location)

   Now only emits data: [DONE] when explicitly streaming (stream: true).

2. **response_format support for Claude via GitHub Copilot**
   Problem: GitHub's internal translation doesn't respect OpenAI's
   response_format parameter. Claude returns plain text instead of JSON.

   Solution: Modified sanitizeMessagesForChatCompletions() in github.js to
   inject JSON instructions into system prompt + user message prefix.

3. **response_format support for direct Claude API**
   Problem: Claude doesn't understand OpenAI's response_format parameter.

   Solution: Modified openai-to-claude.js to translate response_format
   to system prompts with JSON schema instructions.

Files changed:
- open-sse/utils/stream.js: Fixed SSE sentinel logic
- open-sse/executors/github.js: Fixed SSE + added response_format for Claude
- open-sse/translator/request/openai-to-claude.js: Added response_format translation

Fixes AI SDK generateObject() and all structured output clients.
Closes decolua#286
CRITICAL FIX for AI SDK compatibility:

**Problem:** 9router emitted 'data: [DONE]\n\n' even for non-streaming
requests, causing AI SDK and other clients to fail JSON parsing.

**Root Cause:** AI SDK's generateObject() sends stream: undefined (not false).
The check stream !== false evaluated to true when stream was undefined,
causing the SSE sentinel to be emitted incorrectly.

**Solution:** Changed check from stream !== false to stream === true in:
- open-sse/utils/stream.js (PASSTHROUGH mode + flush function)
- open-sse/executors/github.js (TransformStream)

Now data: [DONE] is ONLY emitted when explicitly streaming (stream: true).

**Related:** This PR builds on decolua#285 (already merged) which added
response_format support for Claude. Together these fixes enable full
AI SDK structured output compatibility.

Fixes AI SDK generateObject() and all structured output clients.
rothnic force-pushed the fix/response-format-support branch from 00826bb to 8d24d47 March 12, 2026 15:50
rothnic changed the title from "Fix: response_format support for Claude + SSE handling for non-streaming requests" to "Fix: SSE data: [DONE] sentinel for non-streaming requests (complements PR #285)" Mar 12, 2026
rothnic marked this pull request as ready for review March 12, 2026 15:53
rothnic (Contributor, Author) commented Mar 12, 2026

@decolua this one is ready to review now. It builds on the other PR to fully resolve the structured output issues I was seeing with the AI SDK. In my case, I had been using the AI SDK within some specialized CLI tools that were part of a larger workflow managed by OpenCode.

So the agent would call a CLI tool, which uses the AI SDK to target a 9router model combo I defined that preferred a few sources: Kimi K2.5 first, with a fallback to GitHub Copilot gpt-5-mini. Most of the requests were failing, and it turned out there were multiple causes: the Claude response_format issue affecting Kimi (already fixed in PR #285), GitHub Copilot ignoring the requested output format, and the issue this PR addresses, where stream being undefined (as set by the AI SDK) caused the sentinel condition to be handled incorrectly.

decolua (Owner) commented Mar 13, 2026

Hi @rothnic, thanks for this fix! 🙏

We've cherry-picked the relevant parts from this PR into our fork:

What we merged:

  • Guard data: [DONE] in github.js TransformStream with stream === true — fixes the case where AI SDK sends stream: undefined
  • Inject response_format as system prompt for Claude models via GitHub executor, since GitHub's internal translation ignores response_format

What we skipped:

  • The stream.js guards — after tracing the code, createSSEStream in our fork is only called for true streaming paths (non-streaming goes through handleNonStreamingResponse or handleForcedSSEToJson directly), so the guard is not needed there. Also, applying body?.stream === true would incorrectly block [DONE] for Antigravity/Gemini formats that don't explicitly set stream: true.

Great investigation on the stream: undefined root cause — that's a subtle but critical difference. Appreciate the detailed write-up! 🎉

decolua pushed a commit that referenced this pull request Mar 13, 2026
- Guard data: [DONE] in github.js TransformStream with stream === true
- Inject response_format as system prompt for Claude models via GitHub executor

Note: stream.js guards skipped, createSSEStream is only called for true streaming paths.

Cherry-picked and adapted from PR #286 by @rothnic

Made-with: Cursor
