Replies: 1 comment
I'll start by giving my perspective, which is from the jsdom project: another web-specs-in-JS project. I've been working with Claude Code. No real plugins or skills, just a per-project AGENT.md (and CLAUDE.md symlink) with some things I discovered while working on the codebase. At the end of the giant PR I did ask GitHub Copilot (web) and ChatGPT Codex (on my machine) to do code review, and they found some good issues. Notably, I'm still in "collaborative mode" where I carefully review and iterate with Claude on changes (see e.g. the giant stream of commits in jsdom/jsdom#4033 and its predecessor jsdom/jsdom#4023). As opposite to "let it rip" mode that some projects seem to favor, where the code is rarely looked at by the human. Undici seems to be similarly careful with the code, helped along by having human reviewers. I have two parallel worktrees in case I want to parallelize, but I haven't gone further. Right now I'm just managing them manually, although I saw some tooling recently (but can't find it right now) that seemed like maybe an improvement. I tried to use Codex Web version because I liked the idea of multiple separate feature streams and it all being sandboxed on someone else's machine. But it was extremely lazy and needed a lot more handholding than Claude Code so I gave up quickly. |
My curiosity was sparked by @KhafraDev's comments in #4801 (comment), @mcollina's commit ec537c7, and I think some of @mcollina's X posts.
You all seem to have a good amount of experience with AI development and tooling in an area I'm quite interested in: web specs and JS code. I'd love to hear more about what workflows and tooling you've found most valuable.