diff --git a/ASYNC_MESSAGE_FUTURE.md b/ASYNC_MESSAGE_FUTURE.md
new file mode 100644
index 0000000..2aed782
--- /dev/null
+++ b/ASYNC_MESSAGE_FUTURE.md
@@ -0,0 +1,124 @@
+# Future: Async Message Visibility in Debug-Agent
+
+## Current Limitation
+
+The debug-agent (and any pi session) operates in a **synchronous turn-based model**:
+- When processing a user prompt, the LLM generates a response
+- During this generation, incoming `send_to_session` messages are queued
+- These queued messages are **invisible** until the current turn completes
+- Only then does pi process the queue and present them as new user prompts
+
+This creates confusion in debug/observability scenarios where admins expect real-time interaction.
+
+## Short-term Mitigation (this PR)
+
+- Document the limitation clearly
+- Add a `/ready` command to signal completion
+- Advise keeping responses short
+- Provide socket-checking workarounds
+
+## Long-term Solutions
+
+### Option 1: Pi Core Enhancement - Message Queue API
+
+Extend pi's `ExtensionAPI` to expose the pending message count:
+
+```typescript
+interface ExtensionAPI {
+  // ... existing methods
+
+  /**
+   * Get the count of pending send_to_session messages in the queue.
+   * Returns 0 when no messages are pending.
+   */
+  getPendingMessageCount(): number;
+
+  /**
+   * Get previews of pending messages (if available).
+   * Useful for dashboard indicators.
+   */
+  getPendingMessagePreviews(): Array<{
+    from: string;      // sender session ID or name
+    preview: string;   // first 50 chars
+    timestamp: Date;
+  }>;
+}
+```
+
+Then the dashboard could show:
+```
+┌─────────────────────────────────────────────────────┐
+│ 📬 3 messages queued (complete turn to process)     │
+│ • control-agent: "can you check..."                 │
+│ • user: "status update?"                            │
+└─────────────────────────────────────────────────────┘
+```
+
+**Implementation**: Requires changes to pi's session control layer (WebSocket/message routing)
+
+### Option 2: Streaming Response with Message Interruption
+
+More ambitious - allow interrupting the model mid-generation:
+
+```typescript
+pi.on("message_received_during_turn", async (event, ctx) => {
+  // Opportunity to abort the current generation,
+  // present the queued message immediately,
+  // or queue it for "after this sentence"
+});
+```
+
+**Challenges**:
+- Requires streaming/incremental generation support
+- Complex state management (partial responses)
+- May confuse conversation context
+
+### Option 3: Parallel Session Mode
+
+Create a "monitor" mode for the debug-agent that spawns a parallel session:
+
+```typescript
+// Main session: normal turn-based interaction
+// Monitor session: async-only, polls for messages every 2s
+
+pi.registerExtension("async-monitor", {
+  async init(ctx) {
+    if (ctx.sessionRole === "debug-observer") {
+      setInterval(() => {
+        // Poll session control for pending messages
+        // Display in dashboard without blocking the main turn
+      }, 2000);
+    }
+  }
+});
+```
+
+**Pros**: No pi core changes needed
+**Cons**: Complex dual-session architecture, higher resource usage
+
+### Option 4: Event-Driven Dashboard Widget
+
+The dashboard widget already runs outside the LLM turn. Enhance it to:
+1. Listen on the session socket directly (bypassing pi's queue)
+2. Display pending messages in the widget itself (not as user prompts)
+3. Use an `/accept-message` command to pull a message from the queue into the conversation
+
+**Pros**: Clean separation of monitoring vs. conversation
+**Cons**: Two different interaction modes (in-chat vs. dashboard commands)
+
+## Recommendation
+
+**Phase 1** (this PR): Documentation + workarounds
+**Phase 2**: Implement Option 1 (message queue API in pi core)
+**Phase 3**: Consider Option 4 if Option 1 proves insufficient
+
+Option 1 is the cleanest long-term solution because it:
+- Preserves pi's turn-based model
+- Adds visibility without changing core behavior
+- Keeps the new API surface minimal
+- Is useful for all session types, not just the debug-agent
+
+## Related Pi Issues
+
+- (TODO: check whether @mariozechner/pi-coding-agent has an issue tracker)
+- Consider proposing this enhancement upstream
diff --git a/pi/skills/debug-agent/SKILL.md b/pi/skills/debug-agent/SKILL.md
index c62fc25..24cbc8b 100644
--- a/pi/skills/debug-agent/SKILL.md
+++ b/pi/skills/debug-agent/SKILL.md
@@ -32,6 +32,16 @@ The activity feed tails the control-agent's session JSONL file — it updates au
 - **Run diagnostics**: check bridge health, socket state, process trees
 - **Make code changes**: edit extensions, skills, configs — same tools as any agent
 
+## Known limitations
+
+**Async message visibility**: Messages sent to this session via `send_to_session` are queued by pi but won't be visible until you complete your current response. This is a fundamental limitation of pi's turn-based conversation model.
+
+**Workaround**: If you suspect messages are queued:
+1. Keep responses short to release the turn quickly
+2. Say "Ready for next message" or use `/ready` to signal completion
+3. The queued messages will then appear as new user prompts
+4. Check your session socket in `/session-control/` to see if connections are pending
+
 ## What you should NOT do
 
 - Don't send disruptive messages to the control-agent while it's mid-task (check activity feed first)
@@ -53,3 +63,4 @@ The activity feed tails the control-agent's session JSONL file — it updates au
 ## Commands
 
 - `/dashboard` — force-refresh the health metrics
+- `/ready` — signal completion and allow queued messages to be processed
diff --git a/pi/skills/debug-agent/debug-dashboard.ts b/pi/skills/debug-agent/debug-dashboard.ts
index 19242cb..d76c862 100644
--- a/pi/skills/debug-agent/debug-dashboard.ts
+++ b/pi/skills/debug-agent/debug-dashboard.ts
@@ -807,6 +807,14 @@ export default function dashboardExtension(pi: ExtensionAPI): void {
     },
   });
 
+  // /ready command — signal completion for queued messages
+  pi.registerCommand("ready", {
+    description: "Signal completion to allow queued send_to_session messages to be processed",
+    handler: async (_args, ctx) => {
+      ctx.ui.notify("Ready for next message. Complete your response to process the queue.", "info");
+    },
+  });
+
   pi.on("session_start", async (_event, ctx) => {
     savedCtx = ctx;
     await refresh();
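---

For reviewers unfamiliar with the queueing behavior ASYNC_MESSAGE_FUTURE.md describes, here is a minimal, self-contained sketch of the turn-based model. Every name here (`TurnQueue`, `QueuedMessage`, the method names) is illustrative only — none of this is pi's actual API; it just models why messages sent mid-turn stay invisible until the turn ends.

```typescript
// Toy model of turn-based message queueing (illustrative names, not pi's API).
interface QueuedMessage {
  from: string;
  text: string;
}

class TurnQueue {
  private pending: QueuedMessage[] = [];
  private inTurn = false;

  beginTurn(): void {
    this.inTurn = true;
  }

  // A send_to_session arriving mid-turn is queued, invisible to the session.
  receive(msg: QueuedMessage): void {
    this.pending.push(msg);
  }

  // What a getPendingMessageCount()-style API would expose to a dashboard.
  pendingCount(): number {
    return this.pending.length;
  }

  // Completing the turn drains the queue; only now would these surface
  // as new user prompts.
  endTurn(): QueuedMessage[] {
    this.inTurn = false;
    const drained = this.pending;
    this.pending = [];
    return drained;
  }
}

const q = new TurnQueue();
q.beginTurn();
q.receive({ from: "control-agent", text: "can you check..." });
q.receive({ from: "user", text: "status update?" });
console.log(q.pendingCount());   // 2 — queued but invisible while the turn runs
console.log(q.endTurn().length); // 2 — surfaced only once the turn completes
console.log(q.pendingCount());   // 0 — queue drained
```

The `/ready` workaround in SKILL.md amounts to reaching `endTurn()` sooner; the Option 1 proposal amounts to exposing `pendingCount()` without changing when the drain happens.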