
feat: add JSON format to agd log and rich PR summary #11

Open
BlueHotDog wants to merge 6 commits into main from feat/agd-pr-summary

Conversation

BlueHotDog (Collaborator) commented Feb 19, 2026

Summary

  • Add --format json output option to agd log command for programmatic consumption
  • Rewrite the AGD Trace GitHub Action to parse JSON and render a rich PR summary with stats table and collapsible session traces
  • Add 4 snapshot tests covering JSON output in various modes

What it looks like

The PR comment will render:

  1. Stats table — session count, action count, message counts, tags, agents
  2. Collapsible sessions — each session in a <details> block showing session ID, action count, and participating agents
  3. Full conversation traces — messages rendered inside each session block
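The rendering described above can be sketched as follows. This is a minimal illustration only: the JSON field names (`stats`, `sessions`, `id`, `agents`, `messages`) are assumptions for the example, not the actual `agd log --format json` schema.

```python
# Sketch of the PR-summary renderer: stats table plus one collapsible
# <details> block per session. Field names are illustrative assumptions.

def render_summary(data: dict) -> str:
    stats = data["stats"]
    lines = [
        "## AGD Summary",
        "",
        "| Metric | Value |",
        "| --- | --- |",
        f"| Sessions | {stats['sessions']} |",
        f"| Actions | {stats['actions']} |",
        "",
    ]
    for session in data["sessions"]:
        agents = ", ".join(session["agents"])
        # Each session renders inside a collapsible <details> block.
        lines.append(
            f"<details><summary>Session: {session['id']} "
            f"({len(session['actions'])} actions) — agents: {agents}</summary>"
        )
        for msg in session["messages"]:
            lines.append(f"\n**{msg['role']}**\n\n{msg['content']}")
        lines.append("</details>")
    return "\n".join(lines)
```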

Note on CI testing

The .agd store is gitignored (by design — see workflow.md). The CI workflow will build agd and attempt to read traces, but since the store isn't in the repo, the trace step will produce empty output and skip posting a comment. The rendering pipeline has been verified locally.
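The "skip posting on empty output" step can be sketched as a small guard (assuming a top-level `stats.sessions` count in the JSON; the real schema may differ):

```python
import json

def should_post(json_text: str) -> bool:
    """Return False when the store is absent or empty, so CI skips the comment.

    Assumes a top-level `stats.sessions` count; the actual agd schema may differ.
    """
    if not json_text.strip():
        return False  # agd produced no output at all
    stats = json.loads(json_text).get("stats", {})
    return stats.get("sessions", 0) > 0
```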



Add JSON output format to the log command for programmatic consumption.
The GitHub Action workflow now parses JSON and renders a rich PR comment
with stats table, collapsible sessions, and full conversation traces.
The .agd store is local (gitignored by design), so the CI workflow could
never read traces. Replace with a Makefile target that runs locally:
agd log --format json | render-summary.py | gh api.

- Remove .github/workflows/agd-trace.yml
- Add scripts/render-summary.py (JSON-to-markdown renderer)
- Add 'make pr-summary' target (detects branch, finds PR, posts comment)
BlueHotDog (Collaborator, Author) commented Feb 19, 2026

AGD Summary

Branch: feat/agd-pr-summary

| Metric | Value |
| --- | --- |
| Sessions | 2 |
| Actions | 4 |
| Messages | 5 (7 raw) observed / 7 (11 raw) produced |
| Tags | llm.stream |
| Agents | plan, build, coder |
Session: ses-pr-summary (2 actions) — agents: plan, build

user

Add JSON output format to agd log command for PR summaries

assistant

I'll add a --format json option to the agd log command. This involves: 1) Adding json to the format enum, 2) Implementing JSON rendering functions, 3) Updating the GitHub Action workflow to consume JSON and render a rich summary.

assistant

Done. Added --format json to agd log. The JSON output includes stats (session count, action count, message counts, unique tags/agents) and full action/message data. Updated the GitHub Action to parse JSON and render a summary with collapsible sessions.
Session: ses-rich (2 actions) — agents: coder

user

Add a JSON output format to the agd log command

assistant

I'll modify src/main.zig to add JSON support.

tool call: write_file (call_id: call_001)

{"path": "src/main.zig", "content": "fn main() void { ... }"}

tool

tool result (call_id: call_001)

File written successfully to src/main.zig

assistant

Done. I've added --format json support to agd log. The JSON includes stats and full action/message data.

Files changed

| Status | File | Source |
| --- | --- | --- |
| modified | src/main.zig | agd |
| added | src/objects.zig | git |

The per-action observed/produced view showed the same messages repeatedly.
Now we collect all unique messages across a session (deduped by hash) in
chronological order and render as a single conversation thread.
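The dedupe step can be sketched like this (in Python rather than the project's Zig; the `observed`/`produced` field names and the hash scheme are illustrative):

```python
import hashlib

def dedupe_messages(actions: list[dict]) -> list[dict]:
    """Collect unique messages across a session's actions, in order.

    Messages are deduplicated by a content hash; the first (earliest)
    occurrence wins, yielding one chronological conversation thread.
    Field names are illustrative assumptions.
    """
    seen: set[str] = set()
    thread: list[dict] = []
    for action in actions:  # actions assumed to be in chronological order
        for msg in action["observed"] + action["produced"]:
            h = hashlib.sha256(
                f"{msg['role']}\x00{msg['content']}".encode()
            ).hexdigest()
            if h not in seen:
                seen.add(h)
                thread.append(msg)
    return thread
```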

- Add observed/produced workspace trees to JSON action output
- Renderer now computes file diffs across actions (added/modified/removed)
- Tool calls and tool results render with name, call_id, and content
- Created richer test session with tool calls + workspace state
- Fix jsonWriteEscaped missing 0x08 (backspace) and 0x0C (form feed)
  control characters, which produced invalid JSON per RFC 8259
- Fix total_sessions overcounting in branch mode when agent/tag filters
  exclude entire sessions
- Add unique_observed_messages and unique_produced_messages stats for
  deduplicated message counts across actions
- Update render-summary.py to show unique counts as primary, with raw
  counts shown only when they differ
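For reference, the full set of RFC 8259 short escapes (including the two the fix adds, `\b` for 0x08 and `\f` for 0x0C) can be sketched in Python; this mirrors the idea, not the project's Zig implementation:

```python
# RFC 8259 string escaping sketch. The bug fixed above was that 0x08
# (backspace) and 0x0C (form feed) were left unescaped, putting raw
# control characters into the output, which RFC 8259 forbids.

_SHORT = {'"': '\\"', "\\": "\\\\", "\b": "\\b", "\f": "\\f",
          "\n": "\\n", "\r": "\\r", "\t": "\\t"}

def json_escape(s: str) -> str:
    out = []
    for ch in s:
        if ch in _SHORT:
            out.append(_SHORT[ch])
        elif ord(ch) < 0x20:
            out.append(f"\\u{ord(ch):04x}")  # remaining control chars
        else:
            out.append(ch)
    return "".join(out)
```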

devin-ai-integration[bot] left a comment


Devin Review found 2 new potential issues.

View 8 additional findings in Devin Review.


Comment on lines +1589 to +1596
```zig
tag_set.put(action.tag, {}) catch {};
agent_set.put(action.agent_id, {}) catch {};
session_set.put(action.session_id, {}) catch {};

entries.append(allocator, .{ .hash = h, .raw = raw }) catch {
    allocator.free(raw);
    continue;
};
```


🔴 Use-after-free: StringHashMap keys dangle when entries.append fails in printLogJson

In printLogJson, string slices from a deserialized action (pointing into raw) are stored as keys in tag_set, agent_set, and session_set at lines 1589-1591 before raw is appended to entries. If the subsequent entries.append at line 1593 fails (OOM), the catch block frees raw at line 1594 — but the hashmap keys are now dangling pointers into freed memory.

Root Cause and Impact

The insertion order is: (1) put keys into hashmaps, (2) append raw to entries. If step 2 fails:

```zig
tag_set.put(action.tag, {}) catch {};        // stores slice into raw
agent_set.put(action.agent_id, {}) catch {}; // stores slice into raw
session_set.put(action.session_id, {}) catch {}; // stores slice into raw

entries.append(allocator, .{ .hash = h, .raw = raw }) catch {
    allocator.free(raw); // ← frees backing memory for hashmap keys above
    continue;
};
```

Zig's StringHashMap stores the []const u8 key slice directly (no copy). After allocator.free(raw), the key pointers in the hashmaps are dangling. These are later dereferenced when iterating the hashmaps at src/main.zig:1608-1621 (jsonWriteString(key.*)) to render unique_tags and unique_agents, and also during subsequent put calls which compare against existing keys.

Impact: Undefined behavior (potential crash or memory corruption) in the OOM error-handling path.

Prompt for agents
In src/main.zig, function printLogJson, lines 1589-1596: Move the entries.append call BEFORE the tag_set.put / agent_set.put / session_set.put calls. This ensures that if entries.append fails and raw is freed, no dangling pointers have been stored in the hashmaps yet. The reordered block should be:

```zig
entries.append(allocator, .{ .hash = h, .raw = raw }) catch {
    allocator.free(raw);
    continue;
};

total_actions += 1;
total_observed += action.observed_messages.len;
// ... (existing stats counting code) ...
tag_set.put(action.tag, {}) catch {};
agent_set.put(action.agent_id, {}) catch {};
session_set.put(action.session_id, {}) catch {};
```

This way, raw is guaranteed to be owned by entries before any slices into it are stored elsewhere.

Comment on lines +1627 to +1633
```zig
for (entries.items, 0..) |entry, idx| {
    const action = agd.objects.Action.deserialize(entry.raw, allocator) catch continue;
    defer action.deinit(allocator);

    if (idx > 0) jsonWriteAll(",");
    printActionJson(store, entry.hash, &action, allocator);
}
```


🟡 Invalid JSON output: comma placement uses loop index instead of write counter in printLogJson render pass

In the render loop of printLogJson, the comma separator between JSON array elements is gated on idx > 0 (the loop index over entries.items). If any entry's deserialization fails via catch continue at line 1628, subsequent entries will still use their array index for the comma check, producing malformed JSON.

Detailed Explanation

At src/main.zig:1627-1633:

```zig
for (entries.items, 0..) |entry, idx| {
    const action = agd.objects.Action.deserialize(entry.raw, allocator) catch continue;
    defer action.deinit(allocator);

    if (idx > 0) jsonWriteAll(",");
    printActionJson(store, entry.hash, &action, allocator);
}
```

If entry 0 fails deserialization (e.g., OOM during array allocation in Action.deserialize) and entry 1 succeeds, idx is 1 so if (idx > 0) writes a leading comma, producing [,{...}] — invalid JSON.

Note: entries were already deserialized successfully in the stats pass, so this only triggers under OOM conditions during the second deserialization. Nevertheless, the code explicitly handles the catch continue path, making this a real error-handling bug.

Impact: Malformed JSON output that would break downstream consumers like render-summary.py.

Suggested change
```diff
-for (entries.items, 0..) |entry, idx| {
-    const action = agd.objects.Action.deserialize(entry.raw, allocator) catch continue;
-    defer action.deinit(allocator);
-    if (idx > 0) jsonWriteAll(",");
-    printActionJson(store, entry.hash, &action, allocator);
-}
+var first_action = true;
+for (entries.items) |entry| {
+    const action = agd.objects.Action.deserialize(entry.raw, allocator) catch continue;
+    defer action.deinit(allocator);
+    if (!first_action) jsonWriteAll(",");
+    first_action = false;
+    printActionJson(store, entry.hash, &action, allocator);
+}
```
