feat: Add Agent Zero compatible skills directory (46 skills) #7
Conversation
- Convert 46 skills to Agent Zero format
- Add agent-zero/skills/ directory with SKILL.md files
- Normalize YAML frontmatter (slug names, inline tags)
- Add comprehensive README with installation instructions
- Compatible with cldcde.cc downloads
Deploying with:

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ❌ Deployment failed (View logs) | cldcde | 66a512b | Mar 01 2026, 12:49 PM |
📝 Walkthrough

This PR introduces a comprehensive expansion of the Agent Zero Skills repository, adding 40+ new AI agent skills with documentation, configuration files, and supporting scripts. New skills span domains including visual testing, mutation analysis, MCP server development, 3D animation, database optimization, workflow orchestration, code review automation, and more.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (31)
agent-zero/skills/flow-nexus-platform/SKILL.md-1038-1045 (1)
1038-1045: ⚠️ Potential issue | 🟠 Major
Remove or verify all broken documentation URLs.
Four of the five referenced documentation URLs are inaccessible and return connection errors: https://docs.flow-nexus.ruv.io, https://api.flow-nexus.ruv.io/docs, https://status.flow-nexus.ruv.io, and https://community.flow-nexus.ruv.io. Only the GitHub Issues link (https://github.com/ruvnet/flow-nexus/issues) is accessible. Either replace these broken links with valid alternatives or remove them from the documentation to avoid directing users to non-existent resources.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/flow-nexus-platform/SKILL.md` around lines 1038-1045: the listed external links under the Documentation section (the entries labeled "Documentation", "API Reference", "Status Page", "Community Forum", "GitHub Issues", and "Discord"/"Email Support") contain several broken URLs; verify each URL (particularly https://docs.flow-nexus.ruv.io, https://api.flow-nexus.ruv.io/docs, https://status.flow-nexus.ruv.io, https://community.flow-nexus.ruv.io) and either replace them with working alternatives or remove the corresponding bullet items, keeping only valid links (e.g., the working "GitHub Issues" URL) and updating the "Email Support" note to clearly indicate availability if it's Pro/Enterprise only; ensure the SKILL.md list is consistent and all remaining links are reachable before committing.

agent-zero/skills/agentic-jujutsu/SKILL.md-630-634 (1)
630-634: ⚠️ Potential issue | 🟠 Major
Remove or fix broken relative documentation references.
The relative documentation paths reference files that do not exist in the repository:

- docs/VALIDATION_FIXES_v2.3.1.md
- docs/AGENTDB_GUIDE.md

The docs/ directory does not exist in agent-zero/skills/agentic-jujutsu/. Either create these documentation files, update the references to point to actual files, or remove the references entirely if they are no longer applicable after the conversion to Agent Zero format. The external NPM and GitHub links are functional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/agentic-jujutsu/SKILL.md` around lines 630-634: the SKILL.md file contains broken relative doc links to docs/VALIDATION_FIXES_v2.3.1.md and docs/AGENTDB_GUIDE.md; update SKILL.md to either remove these two references or replace them with valid targets (create the missing docs in agent-zero/skills/agentic-jujutsu/docs/ with appropriate content, or point the links to their correct locations in the repository or external URLs). Edit the SKILL.md entries (the lines listing Validation Guide and AgentDB Guide) so they reference the newly created files or valid paths/URLs, or delete those two bullet lines if the guides are no longer applicable.

agent-zero/skills/agentdb-optimization/SKILL.md-232-257 (1)
232-257: ⚠️ Potential issue | 🟠 Major
Batch insert example still executes individual operations sequentially.
Lines 254-256 use the same sequential loop pattern as the "slow" example above it (lines 232-235). Both execute await adapter.insertPattern() one at a time, which means the promised 500x performance gain cannot be achieved by this approach.

To achieve true batch performance, either:

- Use a dedicated batch API method if available (e.g., adapter.batchInsert(patterns))
- Use Promise.all() to parallelize inserts, as the "Batch Retrieval" example correctly does on line 266
- Wrap operations in a transaction to reduce overhead
The current code only pre-maps data before insertion but still incurs per-operation overhead. Updating this example to show actual parallel execution or a batch method would reflect the claimed performance benefit.
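For illustration, the Promise.all() variant suggested above can be sketched as follows; the adapter here is a stand-in, since adapter.insertPattern's real signature is assumed from the skill doc:

```javascript
// Sketch of the Promise.all() approach: fire all inserts concurrently
// instead of awaiting each one in a loop.
async function batchInsert(adapter, patterns) {
  return Promise.all(patterns.map(p => adapter.insertPattern(p)));
}

// Minimal fake adapter, only to illustrate the call pattern.
const fakeAdapter = {
  store: [],
  async insertPattern(p) { this.store.push(p); return p.id; },
};
```

Note that Promise.all() overlaps the per-operation latency but still issues one call per pattern; a dedicated batch API or a transaction would be needed to reduce per-operation overhead itself.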
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/agentdb-optimization/SKILL.md` around lines 232-257: the "Batch insert" example maps documents into patterns but still inserts sequentially using await adapter.insertPattern in a for loop (patterns / adapter.insertPattern), so it doesn't realize the claimed speedup; update the example to either call a true batch API (e.g., adapter.batchInsert(patterns)) if available, or execute parallel inserts with Promise.all(patterns.map(p => adapter.insertPattern(p))), or wrap inserts in a transaction helper, and adjust the surrounding text to reflect which approach you chose (batch API vs Promise.all vs transaction) so the example actually performs concurrent/batched inserts.

agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh-153-159 (1)
153-159: ⚠️ Potential issue | 🟠 Major
Preserve CLI/interactive inputs instead of always overriding with config.
With the current logic, config values win whenever the file exists, so -p and interactive answers get discarded.

🔧 Proposed precedence fix

```diff
 if [ -f "$CONFIG_FILE" ]; then
-    PROBLEM_DESC=$(get_config_value "problem_description" "$PROBLEM_DESC")
-    SCOPE=$(get_config_value "scope" "${scope:-}")
-    SEVERITY=$(get_config_value "severity" "medium")
-    BUSINESS_IMPACT=$(get_config_value "business_impact" "${impact:-}")
-    TIMELINE=$(get_config_value "timeline" "${timeline:-}")
-    DATA_SOURCES=$(get_config_value "data_sources" '["logs", "metrics", "traces"]')
+    if [ -z "$PROBLEM_DESC" ]; then
+        PROBLEM_DESC=$(get_config_value "problem_description" "System performance degradation requiring analysis")
+    fi
+    SCOPE="${scope:-$(get_config_value "scope" "Unknown systems")}"
+    SEVERITY=$(get_config_value "severity" "medium")
+    BUSINESS_IMPACT="${impact:-$(get_config_value "business_impact" "Under investigation")}"
+    TIMELINE="${timeline:-$(get_config_value "timeline" "Unknown")}"
+    DATA_SOURCES=$(get_config_value "data_sources" '["logs", "metrics", "traces"]')
 else
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh` around lines 153-159: the current block always prefers values from the config file and overwrites any CLI flags or interactive answers; change the assignments to only call get_config_value when the corresponding shell variable is empty so CLI/interactive input is preserved (use parameter expansion of the existing uppercase variables as the default). Update each target variable (PROBLEM_DESC, SCOPE, SEVERITY, BUSINESS_IMPACT, TIMELINE, DATA_SOURCES) to pass its current value as the default to get_config_value (e.g., use "${PROBLEM_DESC:-}" for PROBLEM_DESC and ensure SCOPE uses the SCOPE variable rather than lower-case scope) so config values only fill in missing values.

agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh-396-399 (1)
396-399: ⚠️ Potential issue | 🟠 Major
Report command references the wrong collector script and unsupported flags.
The generated report tells users to run ./scripts/collect-evidence.sh --hypothesis ..., but this script generates $OUTPUT_DIR/collect-evidence-$TIMESTAMP.sh and that generated script does not accept those flags.

🧭 Proposed correction

````diff
 **Step 1: Evidence Collection**
 ```bash
 # Automated evidence collection script
-./scripts/collect-evidence.sh --hypothesis H1.1 --data sources/
+"$OUTPUT_DIR/collect-evidence-$TIMESTAMP.sh" "$OUTPUT_DIR/evidence-$TIMESTAMP"
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh` around lines 396-399: the report references the wrong invocation for the evidence collector: instead of calling ./scripts/collect-evidence.sh with unsupported flags, update the suggested command to run the generated script collect-evidence-$TIMESTAMP.sh located in $OUTPUT_DIR and pass the generated evidence directory as the positional argument (use "$OUTPUT_DIR/collect-evidence-$TIMESTAMP.sh" and supply "$OUTPUT_DIR/evidence-$TIMESTAMP" as the argument); change the line that currently shows ./scripts/collect-evidence.sh --hypothesis ... --data ... to invoke the generated script with the evidence directory argument so it matches how collect-evidence.sh actually produces and expects the generated script and params.

agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh-714-716 (1)

714-716: ⚠️ Potential issue | 🟠 Major
Avoid dumping raw environment variables into evidence output.

Line 715 may leak secrets (tokens, passwords, API keys) into artifacts.

🛡️ Proposed redaction

```diff
-env > "$EVIDENCE_DIR/config/environment-variables.log" 2>&1 || true
+env \
+  | sed -E 's/((PASS|PASSWORD|TOKEN|SECRET|KEY)[^=]*)=.*/\1=[REDACTED]/I' \
+  > "$EVIDENCE_DIR/config/environment-variables.log" 2>&1 || true
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh` around lines 714-716: the current env dump to "$EVIDENCE_DIR/config/environment-variables.log" can leak secrets; replace it with a redacted or whitelisted dump. Modify the script so that instead of running env > ..., you either (a) iterate through environment variables and only write a safe whitelist of variable names, or (b) filter out/redact variables whose names match secret patterns (e.g., /TOKEN|SECRET|PASSWORD|KEY|AWS|GCP|PRIVATE/) before writing to "$EVIDENCE_DIR/config/environment-variables.log"; reference the existing EVIDENCE_DIR and the environment-variables.log target and ensure the implementation leaves values masked (e.g., replace value with "<REDACTED>") or omits non-whitelisted keys.

agent-zero/skills/fpef-analyzer/SKILL.md-97-205 (1)
97-205: ⚠️ Potential issue | 🟠 Major
15 of 16 referenced scripts are missing; documentation references non-existent executables.
SKILL.md references 16 shell scripts but only fpef-analyze.sh exists in the scripts/ directory. The following 15 scripts are missing:

- auto-rca.sh, cross-system.sh, expand-scope.sh
- fpef-evidence.sh, fpef-find.sh, fpef-fix.sh, fpef-prove.sh
- hypothesis-prioritization.sh, integrate-devops.sh, integrate-incident.sh, integrate-monitoring.sh
- realtime-evidence.sh, synthesize-evidence.sh, validate-fix.sh, validate-proof.sh

Users following this guide will encounter broken command references at lines 97-205, 217-340, and 374-379. Either implement the missing scripts or remove their references from the documentation.
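A mismatch like this is easy to catch mechanically by diffing the script names a doc mentions against the ones that exist. A small Node sketch (helper names are illustrative, not from the PR):

```javascript
// Sketch: pull shell-script names out of markdown text and report which
// ones are absent from the set of scripts that actually exist. The real
// check would read SKILL.md and list the scripts/ directory.
function referencedScripts(markdown) {
  const matches = markdown.match(/[A-Za-z0-9_-]+\.sh/g) || [];
  return [...new Set(matches)]; // dedupe, preserve first-seen order
}

function missingScripts(markdown, existing) {
  const have = new Set(existing);
  return referencedScripts(markdown).filter(name => !have.has(name));
}
```

Wiring such a check into CI would prevent docs and scripts/ from drifting apart again.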
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/fpef-analyzer/SKILL.md` around lines 97-205: the SKILL.md references 16 CLI scripts but only fpef-analyze.sh exists, causing broken command references (e.g., fpef-find.sh, fpef-prove.sh, fpef-evidence.sh, validate-proof.sh, validate-fix.sh, fpef-fix.sh, auto-rca.sh, cross-system.sh, expand-scope.sh, hypothesis-prioritization.sh, integrate-devops.sh, integrate-incident.sh, integrate-monitoring.sh, realtime-evidence.sh, synthesize-evidence.sh); fix by either (A) implementing lightweight placeholder scripts with the listed names under scripts/ that output help text and wire them into existing workflows, or (B) removing or replacing all invocations of those missing scripts in SKILL.md with the single existing fpef-analyze.sh and accurate commands, updating the documentation sections (Step 1.3, Step 2.1-2.3, Step 3.2-3.3, Step 4.2-4.3) so no missing-script names remain and examples reflect actual executable names.

agent-zero/skills/blender-3d-studio/demo-conversion-result.md-98-99 (1)
98-99: ⚠️ Potential issue | 🟠 Major
Fix Python boolean literals in the API example.
Lines 98–99 use JSON syntax (true, false) which will raise NameError in Python. The json parameter expects a Python dict with Python booleans (True, False).

Patch for the Python snippet

```diff
-    "ease_in": true,
-    "ease_out": true
+    "ease_in": True,
+    "ease_out": True
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/demo-conversion-result.md` around lines 98-99: the Python example in demo-conversion-result.md uses JSON literals true/false for the ease_in and ease_out keys, which will raise NameError; update the Python dict passed (the json parameter) to use Python boolean literals True and False for "ease_in" and "ease_out" (and any other true/false occurrences) so the snippet uses valid Python booleans.

agent-zero/skills/demo-video/scripts/create-demo.sh-23-28 (1)
23-28: ⚠️ Potential issue | 🟠 Major
Container cleanup should run on all exit paths.
If any command fails before Line 121, the container can be left running. Add an EXIT trap-based cleanup.
💡 Proposed patch
```diff
-CONTAINER_ID=$(docker run -d \
-  --name clawreform-demo \
+CONTAINER_NAME="clawreform-demo-$$"
+cleanup() { docker rm -f "$CONTAINER_NAME" >/dev/null 2>&1 || true; }
+trap cleanup EXIT
+
+CONTAINER_ID=$(docker run -d \
+  --name "$CONTAINER_NAME" \
@@
-docker rm -f $CONTAINER_ID
+# Cleanup handled by trap
```

Also applies to: 119-121
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/demo-video/scripts/create-demo.sh` around lines 23-28: the script creates a detached container and assigns its ID to CONTAINER_ID but does not guarantee the container is removed if the script exits early; add an EXIT trap that calls a cleanup function to stop and remove the container using the CONTAINER_ID variable (define a cleanup function near the top of create-demo.sh that checks if CONTAINER_ID is set and non-empty, stops and removes the container, and then register it via trap cleanup EXIT) so the container is always cleaned up on any exit path.

agent-zero/skills/avant-garde-frontend-architect/library/components/cards/holographic-glass-card.md-359-367 (1)
359-367: ⚠️ Potential issue | 🟠 Major
ARIA sample currently wires relationships incorrectly.
Line 366 sets aria-describedby on the <p> and references an ID that is never defined; additionally, fixed IDs (card-title) can collide across repeated cards. This can break screen-reader relationships in copied implementations.

💡 Proposed patch

```diff
 <article
   className="relative group"
   role="article"
-  aria-label={title}
+  aria-labelledby="card-title"
+  aria-describedby="card-description"
 >
   <div className="relative z-10">
     <h3 id="card-title">{title}</h3>
-    <p aria-describedby="card-description">{description}</p>
+    <p id="card-description">{description}</p>
   </div>
 </article>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/avant-garde-frontend-architect/library/components/cards/holographic-glass-card.md` around lines 359-367: the ARIA relationships are wired incorrectly: the <h3 id="card-title"> uses a fixed id and <p aria-describedby="card-description"> references a non-existent id, which will collide across repeated cards; fix by generating unique IDs per card (e.g., derive a uniqueId from a passed prop or runtime generator) and assign them to the title and description elements (e.g., h3 gets uniqueTitleId instead of "card-title", p gets uniqueDescId), then wire the container or controls to use aria-labelledby=uniqueTitleId and/or aria-describedby=uniqueDescId as appropriate so IDs exist and are unique across instances.

agent-zero/skills/demo-video/scripts/create-demo.sh-78-80 (1)
78-80: ⚠️ Potential issue | 🟠 Major
Recording duration argument is ignored.
Line 79 hardcodes -t 180, so DURATION from Line 7 never affects recording length.

💡 Proposed patch

```diff
-docker exec $CONTAINER_ID bash -c '
+docker exec -e DEMO_DURATION="$DURATION" "$CONTAINER_ID" bash -c '
@@
-  ffmpeg -f x11grab -video_size 1920x1080 -i :99 \
-    -c:v libx264 -preset ultrafast -t 180 \
+  ffmpeg -f x11grab -video_size 1920x1080 -i :99 \
+    -c:v libx264 -preset ultrafast -t "${DEMO_DURATION}" \
     -y /tmp/demo-output.mp4 &
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/demo-video/scripts/create-demo.sh` around lines 78-80: the ffmpeg invocation hardcodes "-t 180" so the DURATION variable declared earlier is ignored; update the ffmpeg command in create-demo.sh (the line that runs ffmpeg -f x11grab ... -t 180 ...) to use the DURATION variable (e.g., replace the literal 180 with the shell variable DURATION) and ensure the variable is exported or in scope when the command runs so the recording length respects the DURATION value.

agent-zero/skills/agentdb-learning/SKILL.md-375-375 (1)
375-375: ⚠️ Potential issue | 🟠 Major
Documentation examples use invalid TypeScript/JavaScript function-call syntax.
Lines 375 and 431 contain named-argument style syntax (arg: value within function parentheses), which is not valid in standard TS/JS. These examples will fail with syntax errors if copied:

- Line 375: sampleRandomBatch(replayBuffer, batchSize: 32) → use sampleRandomBatch(replayBuffer, 32)
- Line 431: collectBatch(size: 1000) → use collectBatch(1000)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/agentdb-learning/SKILL.md` at line 375: the docs contain invalid named-argument call syntax; update the examples to use normal positional arguments: replace calls like sampleRandomBatch(replayBuffer, batchSize: 32) with sampleRandomBatch(replayBuffer, 32) and collectBatch(size: 1000) with collectBatch(1000) so the sample calls in SKILL.md use valid JavaScript/TypeScript syntax; locate occurrences of sampleRandomBatch and collectBatch in the file and change the argument lists accordingly.

agent-zero/skills/bd-management/resources/schemas/issue.schema.json-4-95 (1)
4-95: ⚠️ Potential issue | 🟠 Major
Schema allows malformed dependency references and unexpected keys.

deps item values should be validated as issue IDs, and both root/deps should reject unknown properties to preserve data integrity.

💡 Proposed patch

```diff
   "title": "Beads Issue",
   "type": "object",
+  "additionalProperties": false,
+  "definitions": {
+    "issueId": {
+      "type": "string",
+      "pattern": "^[a-zA-Z0-9-]+$"
+    }
+  },
   "properties": {
@@
     "deps": {
       "type": "object",
+      "additionalProperties": false,
       "properties": {
         "blocks": {
           "type": "array",
-          "items": {
-            "type": "string"
-          },
+          "items": { "$ref": "#/definitions/issueId" },
@@
         "related": {
           "type": "array",
-          "items": {
-            "type": "string"
-          },
+          "items": { "$ref": "#/definitions/issueId" },
@@
         "parent-child": {
           "type": "array",
-          "items": {
-            "type": "string"
-          },
+          "items": { "$ref": "#/definitions/issueId" },
@@
         "discovered-from": {
           "type": "array",
-          "items": {
-            "type": "string"
-          },
+          "items": { "$ref": "#/definitions/issueId" },
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/bd-management/resources/schemas/issue.schema.json` around lines 4-95: the schema currently allows unknown properties and doesn't validate dependency IDs; update the issue schema so the root object and the "deps" object both set "additionalProperties": false, and enforce issue-ID format for all deps arrays by adding the same pattern ("^[a-zA-Z0-9-]+$") to the "items" of "blocks", "related", "parent-child", and "discovered-from" (matching the "id" pattern); keep the existing "required" keys but ensure no unknown keys are allowed at root or inside "deps".

agent-zero/skills/github-code-review/SKILL.md-398-419 (1)
398-419: ⚠️ Potential issue | 🟠 Major
Command injection vulnerability in webhook handler example.
The webhook handler directly interpolates user-controlled event.comment.body into a shell command via execSync (Line 412). An attacker could craft a malicious PR comment like /swarm; rm -rf / to execute arbitrary commands.

Additionally, the body variable used on Line 404 is never defined in the code snippet.

🔒 Proposed fix with input validation and sanitization

```diff
 createServer((req, res) => {
   if (req.url === '/github-webhook') {
-    const event = JSON.parse(body);
+    let body = '';
+    req.on('data', chunk => body += chunk);
+    req.on('end', () => {
+      const event = JSON.parse(body);

-    if (event.action === 'opened' && event.pull_request) {
-      execSync(`npx ruv-swarm github pr-init ${event.pull_request.number}`);
-    }
+      if (event.action === 'opened' && event.pull_request) {
+        const prNumber = parseInt(event.pull_request.number, 10);
+        if (!Number.isInteger(prNumber)) return;
+        execSync(`npx ruv-swarm github pr-init ${prNumber}`);
+      }

-    if (event.comment && event.comment.body.startsWith('/swarm')) {
-      const command = event.comment.body;
-      execSync(`npx ruv-swarm github handle-comment --pr ${event.issue.number} --command "${command}"`);
-    }
+      if (event.comment && event.comment.body.startsWith('/swarm')) {
+        const prNumber = parseInt(event.issue.number, 10);
+        // Validate command against allowlist instead of passing raw input
+        const allowedCommands = ['init', 'spawn', 'status', 'review'];
+        const parts = event.comment.body.split(/\s+/);
+        const subCommand = parts[1];
+        if (!Number.isInteger(prNumber) || !allowedCommands.includes(subCommand)) return;
+        // Use spawn with args array to prevent shell injection
+        const { spawnSync } = require('child_process');
+        spawnSync('npx', ['ruv-swarm', 'github', 'handle-comment', '--pr', String(prNumber), '--command', subCommand]);
+      }

-    res.writeHead(200);
-    res.end('OK');
+      res.writeHead(200);
+      res.end('OK');
+    });
   }
 }).listen(3000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/github-code-review/SKILL.md` around lines 398-419: the webhook handler has two issues: the request body is never read (undefined body) and user-controlled input (event.comment.body / event.pull_request.number) is interpolated into execSync, causing command injection; fix by reading and parsing the request body in createServer (collect data chunks and JSON.parse once), then stop passing raw strings to execSync: use child_process.execFile/spawn with argument arrays, or validate and strictly whitelist allowed commands/patterns (e.g., only digits for PR numbers, and only allow "/swarm" with known subcommands) before invoking; update the calls that currently use execSync to instead call a safe API (execFile/spawn with separate args) or validate/sanitize event.comment.body and event.pull_request.number in the webhook-handler.js createServer callback.

agent-zero/skills/bd-management/scripts/optimize.py-39-57 (1)
39-57: ⚠️ Potential issue | 🟠 Major
Dependency checking logic is non-functional (placeholder).
The dependency checking at Lines 42-46 contains placeholder code that always evaluates to True: `can_add = can_add and True  # Placeholder for dependency checking`

This means all ready_issues will be placed into a single parallel group in the first iteration of the while loop, defeating the purpose of creating multiple parallel execution groups. The variables issue_id and other_id are assigned but never used for actual dependency resolution.

Consider either implementing the dependency check or documenting this as a known limitation.
💡 Sketch of actual dependency checking
```diff
     for issue in remaining_issues:
         issue_id = issue.get('id')
-        # Check if this issue depends on any issue in current group
         can_add = True
+        issue_deps = issue.get('deps', {})
+        blocked_by = issue_deps.get('blocks', [])
         for other_issue in current_group:
             other_id = other_issue.get('id')
-            # Check dependencies (simplified - in real implementation would check full dependency map)
-            can_add = can_add and True  # Placeholder for dependency checking
+            # Cannot add if this issue blocks another issue in current group
+            if other_id in blocked_by:
+                can_add = False
+                break
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/bd-management/scripts/optimize.py` around lines 39-57: the placeholder dependency check always yields True; replace it with a real check that inspects each issue's dependency list (e.g., issue.get('depends_on') or similar) and skips adding an issue to current_group when any other_id equals an entry in the issue's dependencies or when any dependency remains in remaining_issues; specifically, update the loop that computes can_add (inside the while over remaining_issues, iterating other_issue in current_group) to compare issue_id against other_issue.get('id') via the issue's dependency field so that only issues with all dependencies already assigned to previous groups are added to current_group, leaving unresolved issues in remaining_issues for later iterations that populate parallel_groups.

agent-zero/skills/blender-3d-studio/create-animation.py-136-147 (1)
136-147: ⚠️ Potential issue | 🟠 Major
Failure paths should propagate non-zero process exit codes.
At Line 139 and Line 147, errors only return; at Lines 161-162, the script exits 0 regardless of those failures.

💡 Proposed fix

```diff
 import bpy
 import os
 import math
+import sys
@@
 if __name__ == "__main__":
-    main()
+    sys.exit(0 if main() else 1)
```

Also applies to: 161-162
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/create-animation.py` around lines 136-147: the script currently returns from failure branches in the animate_infinity() and setup_camera_animation() checks, which leaves the process exit code as 0 later; update the failure handling so that instead of a plain return you call sys.exit(1) (or another non-zero code) and ensure sys is imported, and also make the final successful exit explicit (sys.exit(0)) so the process exit reflects success only when all checks pass (reference animate_infinity(), setup_camera_animation() and the final exit logic).

agent-zero/skills/avant-garde-frontend-architect/scripts/add-component.sh-46-47 (1)
46-47: ⚠️ Potential issue | 🟠 Major
--category should be validated before path join.

Line 46 trusts raw user input; values like ../../tmp can write outside the library directory.

💡 Proposed fix

```diff
 CATEGORY_DIR="$LIBRARY_DIR/$CATEGORY"
+if [[ "$CATEGORY" == /* || "$CATEGORY" == *".."* ]]; then
+  echo "Invalid category path: $CATEGORY"
+  exit 1
+fi
 mkdir -p "$CATEGORY_DIR"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/avant-garde-frontend-architect/scripts/add-component.sh` around lines 46-47: the script trusts the user-supplied CATEGORY when building CATEGORY_DIR (CATEGORY_DIR="$LIBRARY_DIR/$CATEGORY"), allowing directory traversal like "../../tmp"; validate and sanitize CATEGORY before joining: reject or normalize inputs that contain "..", "/" (absolute or nested paths), null bytes, or shell metacharacters, or restrict to a safe whitelist (e.g., /^[A-Za-z0-9_-]+$/); only after validation set CATEGORY_DIR and call mkdir -p. Ensure the validation logic is applied in add-component.sh before using CATEGORY and that failures exit with a clear error message.

agent-zero/skills/blender-3d-studio/create-animation.py-120-125 (1)
120-125: ⚠️ Potential issue | 🟠 Major
Hard-coded absolute render output path is not portable.
Line 120 and Line 123 bind this script to one local machine path; this will fail in other environments.
💡 Proposed fix
```diff
-    scene.render.filepath = "/home/ae/AE/.claude/skills/blender-3d-studio/animation_frames/"
-
-    # Ensure output directory exists
-    output_dir = "/home/ae/AE/.claude/skills/blender-3d-studio/animation_frames/"
-    if not os.path.exists(output_dir):
-        os.makedirs(output_dir)
+    base_dir = os.path.dirname(os.path.abspath(__file__))
+    output_dir = os.path.join(base_dir, "animation_frames")
+    os.makedirs(output_dir, exist_ok=True)
+    scene.render.filepath = os.path.join(output_dir, "")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/create-animation.py` around lines 120-125: the render output path and output_dir are hard-coded to a local absolute path; make them configurable and portable by deriving the path from a relative project path or environment variable (e.g., read an env var like BLENDER_OUTPUT_DIR or use a local "animation_frames" folder), then update scene.render.filepath to that computed path and replace the os.path.exists/makedirs block with a robust creation using pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True); ensure you reference and update the output_dir variable and the scene.render.filepath assignment so the script works across environments.

agent-zero/skills/bd-management/docs/API_REFERENCE.md-9-21 (1)
9-21: ⚠️ Potential issue | 🟠 Major
API documentation documents non-existent class-based interfaces.
The documented classes (WorkflowAnalyzer, WorkflowOptimizer, etc.) in API_REFERENCE.md do not exist in the codebase. The actual implementation in agent-zero/skills/bd-management/scripts/ is function-based (see sync.py, analyze.py, etc.) with no class definitions or CLI argument parsing support. Update the API reference to match the actual function-based implementation or implement the documented class interfaces.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/bd-management/docs/API_REFERENCE.md` around lines 9-21: the API docs list class-based interfaces like WorkflowAnalyzer, WorkflowOptimizer, etc., but the codebase uses function-based implementations (e.g., analyze.py, sync.py in agent-zero/skills/bd-management/scripts) with no class definitions or CLI parsing; either update API_REFERENCE.md to document the real functions (for example, describe analyze_dependencies(), find_parallel_groups(), detect_circular_dependencies() as functions in analyze.py and explain their signatures and the CLI flags present in sync.py) or implement the missing classes (create WorkflowAnalyzer, WorkflowOptimizer classes matching the documented methods and wire them to the existing functions/CLI in sync.py/analyze.py), ensuring documented symbols match actual code (use the filenames and function names from scripts to locate where to change).

agent-zero/skills/avant-garde-frontend-architect/scripts/add-component.sh-50-50 (1)
50-50: ⚠️ Potential issue | 🟠 Major
Generated React snippet uses invalid JavaScript identifiers.
Lines 87 and 95 use ${NAME// /-} to generate function names, which creates invalid identifiers when NAME contains spaces. For example, --name "Fancy Button" produces function Fancy-Button(), which is a syntax error since hyphens are not allowed in JavaScript identifiers.

Update the script to generate valid identifiers (e.g., PascalCase via COMPONENT_NAME), and apply it consistently in the React template.

Additionally, Line 46 concatenates the unsanitized CATEGORY variable directly into the file path, so a value like --category ../../../etc escapes the intended directory. Validate or sanitize the category input to prevent path traversal.
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/avant-garde-frontend-architect/scripts/add-component.sh` at line 50, The script currently builds identifiers directly from NAME (SLUG and `${NAME// /-}`) and injects CATEGORY raw into paths; change it to derive a sanitized PascalCase COMPONENT_NAME from NAME (e.g., strip/replace non-alphanumerics, split on spaces/hyphens and capitalize each segment) and use COMPONENT_NAME wherever the React function/component name is generated instead of `${NAME// /-}`; keep SLUG for kebab-case filenames but ensure it strips unsafe chars; also sanitize CATEGORY by rejecting or normalizing path-traversal characters (remove any "../" or leading "/" and allow only a whitelist of chars) before concatenating into file paths so --category cannot escape the target directory, and update all template usages to reference the new sanitized variables (COMPONENT_NAME, SLUG, sanitized CATEGORY).agent-zero/skills/bd-management/scripts/sync.py-35-50 (1)
35-50: ⚠️ Potential issue | 🟠 Major

Fix git binary resolution and prevent commit failure on unchanged files.

Lines 35, 49–50 invoke `git` via PATH (S607 vulnerability), and Line 50 commits unconditionally—failing with CalledProcessError when there are no staged changes, incorrectly reporting sync failure. Additionally, Lines 57 and 71 use broad `Exception` catches, reducing error diagnosability.

Resolve the `git` binary location using `shutil.which()`, skip the commit when no changes are staged, and replace broad Exception catches with specific ones.

Proposed fix
```diff
 import json
 import sys
 import os
 import subprocess
+import shutil
 from pathlib import Path
 from typing import Dict, List

 def sync_with_git(issues: List[Dict]):
     """Synchronize Beads with git"""
     print("🔄 Synchronizing Beads with git...")
     try:
+        git_bin = shutil.which("git")
+        if not git_bin:
+            print("❌ Git binary not found in PATH")
+            sys.exit(1)
-        result = subprocess.run(['git', 'status', '--porcelain'], capture_output=True, text=True)
+        result = subprocess.run([git_bin, 'status', '--porcelain'], capture_output=True, text=True, check=True)
         if result.returncode != 0:
             print("❌ Git not available or repository not found")
             sys.exit(1)

         # Save current Beads state
         beads_dir = Path('.beads')
         backup_file = beads_dir / 'issues_backup.json'
         with open(backup_file, 'w') as f:
             json.dump(issues, f, indent=2)
         print(f"💾 Backup created: {backup_file}")

         # Commit Beads changes
-        subprocess.run(['git', 'add', '.beads/'], check=True)
-        subprocess.run(['git', 'commit', '-m', 'Beads workflow sync'], check=True)
+        subprocess.run([git_bin, 'add', '.beads/'], check=True)
+        if subprocess.run([git_bin, 'diff', '--cached', '--quiet']).returncode == 0:
+            print("ℹ️ No Beads changes to commit")
+            return
+        subprocess.run([git_bin, 'commit', '-m', 'Beads workflow sync'], check=True)
     except subprocess.CalledProcessError as e:
         print(f"❌ Sync failed: {e}")
         sys.exit(1)
-    except Exception as e:
+    except (IOError, OSError) as e:
         print(f"❌ Unexpected error: {e}")
         sys.exit(1)

 def main():
     """Main sync function"""
     print("🔄 Synchronizing Beads workflow with git...")
     try:
         issues = load_beads_data()
         sync_with_git(issues)
         print("✅ Beads sync completed successfully")
-    except Exception as e:
+    except (FileNotFoundError, ValueError, json.JSONDecodeError) as e:
         print(f"❌ Sync failed: {e}")
         sys.exit(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/bd-management/scripts/sync.py` around lines 35 - 50: the script uses subprocess.run(['git', ...]) directly and commits unconditionally which can fail; update git invocation to resolve the git binary with shutil.which() and use that path for subprocess calls (locate and replace the subprocess.run calls that run ['git', 'status', '--porcelain'], ['git', 'add', '.beads/'], and ['git', 'commit', '-m', 'Beads workflow sync']), then check the output of the status call to determine if there are staged changes and skip the commit if none to avoid CalledProcessError; also replace the broad Exception handlers around the sync flow (the generic except blocks referenced on Lines ~57 and ~71) with specific exceptions (e.g., FileNotFoundError, subprocess.CalledProcessError, OSError, json.JSONDecodeError) so errors are more actionable.

agent-zero/skills/blender-3d-studio/test-conversion.sh-129-129 (1)
129-129: ⚠️ Potential issue | 🟠 Major

Light color assignment uses 4 channels where Blender expects 3.

At Line 129, assigning `(0.9, 0.9, 1.0, 1.0)` to `fill_light.data.color` fails because Blender's `Light.color` property accepts only RGB (3 components), not RGBA (4 components).

Proposed fix

```diff
-    fill_light.data.color = (0.9, 0.9, 1.0, 1.0)
+    fill_light.data.color = (0.9, 0.9, 1.0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/test-conversion.sh` at line 129: the assignment to fill_light.data.color uses an RGBA 4-tuple but Blender's Light.color expects an RGB 3-tuple; update the assignment to supply only three components (e.g., change (0.9, 0.9, 1.0, 1.0) to (0.9, 0.9, 1.0)) or otherwise slice the tuple before assigning to fill_light.data.color, and if you intended to set intensity/alpha, set fill_light.data.energy (or the appropriate property) separately; locate the assignment to fill_light.data.color and replace the 4-value tuple with a 3-value RGB tuple.

agent-zero/skills/blender-3d-studio/test-conversion.sh-83-86 (1)
83-86: ⚠️ Potential issue | 🟠 Major

Edit Mode is required before calling `bpy.ops.uv.smart_project()`.

Line 85 calls `bpy.ops.uv.smart_project()` without switching to Edit Mode. The Blender manual specifies this operator requires Edit Mode and selected mesh faces to function. Without these prerequisites, the operator will fail with a "poll() failed, context is incorrect" error.

💡 Proposed fix

```diff
     # UV unwrap
     plane.select_set(True)
     bpy.context.view_layer.objects.active = plane
+    bpy.ops.object.mode_set(mode='EDIT')
+    bpy.ops.mesh.select_all(action='SELECT')
     bpy.ops.uv.smart_project()
+    bpy.ops.object.mode_set(mode='OBJECT')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/test-conversion.sh` around lines 83 - 86: the call to bpy.ops.uv.smart_project() requires Edit Mode and an active mesh selection; before calling smart_project() (after plane.select_set(True) and setting bpy.context.view_layer.objects.active = plane) switch the object into Edit Mode (using bpy.ops.object.mode_set with mode='EDIT') and ensure the mesh has faces selected, run bpy.ops.uv.smart_project(), then restore the previous mode (e.g., bpy.ops.object.mode_set(mode='OBJECT')) so the script leaves the context unchanged; update the code around plane.select_set, bpy.context.view_layer.objects.active, and bpy.ops.uv.smart_project() accordingly.

agent-zero/skills/blender-3d-studio/scripts/2d-to-3d.sh-285-287 (1)
285-287: ⚠️ Potential issue | 🟠 Major

External dependencies `numpy` and `PIL` may not be available.

The embedded Python script imports `numpy` and `PIL` (Pillow), which are not part of Blender's bundled Python. These need to be installed in Blender's Python environment or the script will fail.

💡 Consider adding a dependency check or fallback

```python
try:
    import numpy as np
    from PIL import Image
except ImportError as e:
    print(f"Missing dependency: {e}")
    print("Install with: blender --background --python-expr \"import subprocess; subprocess.check_call(['pip', 'install', 'numpy', 'pillow'])\"")
    sys.exit(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/scripts/2d-to-3d.sh` around lines 285 - 287: the script's import block (numpy as np, PIL.Image as Image) can raise ImportError in Blender's bundled Python; wrap those imports in a try/except around the existing import lines to catch ImportError, log a clear message including the missing package name and a suggested blender pip install command (e.g., using "blender --background --python-expr ..." to call subprocess.check_call for 'pip install numpy pillow'), and exit with a non-zero status so the calling shell detects failure; update any code paths that assume these modules are present to not run if imports failed.

agent-zero/skills/blender-3d-studio/convert-infinity.py-78-80 (1)
78-80: ⚠️ Potential issue | 🟠 Major

Verify Blender API compatibility for the `Specular` input.

The `'Specular'` input was renamed to `'Specular IOR Level'` in Blender 4.0's Principled BSDF, which will cause a `KeyError` on Blender 4.0 and later versions when accessing `bsdf.inputs['Specular']`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/convert-infinity.py` around lines 78 - 80: the code sets Principled BSDF socket values using bsdf.inputs['Specular'], which will KeyError on Blender 4.0+ where that socket is renamed to 'Specular IOR Level'; update the assignment in convert-infinity.py to handle both names by checking bsdf.inputs for 'Specular' first and falling back to 'Specular IOR Level' (or use a try/except KeyError) and then set the default_value on the found socket (referencing the bsdf variable and the existing Metallic/Roughness assignments for placement).

agent-zero/skills/blender-3d-studio/simple-convert.py-84-88 (1)
84-88: ⚠️ Potential issue | 🟠 Major

Update Principled BSDF socket names for Blender 4.0+ compatibility.

Lines 87-88 use socket names that changed in Blender 4.0:

- `'Specular'` → `'Specular IOR Level'`
- `'Subsurface Scattering'` → `'Subsurface Weight'`

This code will raise `KeyError` on Blender 4.0+ without updating these socket references.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/simple-convert.py` around lines 84 - 88: update the Principled BSDF socket names used on the bsdf node. Replace uses of bsdf.inputs['Specular'] with bsdf.inputs['Specular IOR Level'] and replace bsdf.inputs['Subsurface Scattering'] with bsdf.inputs['Subsurface Weight'] (or add a fallback check so the code first tries the new key and if missing falls back to the old key) to avoid KeyError on Blender 4.0+; locate these references on the bsdf node in simple-convert.py and update the socket names or add a conditional lookup before setting default_value.

agent-zero/skills/blender-3d-studio/scripts/queue-render.sh-15-16 (1)
15-16: ⚠️ Potential issue | 🟠 Major

Quality presets are effectively bypassed by default.

`SAMPLES` is initialized to `"64"` (Line 15), so the preset branch at Line 211 never runs unless `--samples` is empty. This makes `--quality` mostly non-functional.

🔧 Suggested fix

```diff
-SAMPLES="64"
+SAMPLES=""
```

This lets the existing preset block compute both sample count and resolution scale when `--samples` is not explicitly provided.

Also applies to: 211-232
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/scripts/queue-render.sh` around lines 15 - 16: currently SAMPLES is prefilled with "64", which prevents the quality-presets branch from running; change the initialization so SAMPLES is empty (e.g., SAMPLES="") and leave RESOLUTION as needed, then update the argument-parsing logic so that the preset/quality branch (the quality case that computes sample count and resolution scale around the SAMPLES and RESOLUTION variables) only runs when --samples was not explicitly provided (i.e., SAMPLES is unset/empty). Ensure the preset block sets both SAMPLES and RESOLUTION (or resolution scale) when it executes so --quality takes effect; refer to the SAMPLES and RESOLUTION variables and the quality preset branch (the case handling --quality) when making this change.

agent-zero/skills/bd-management/scripts/analyze.py-67-98 (1)
67-98: ⚠️ Potential issue | 🟠 Major

Parallel group construction can schedule dependent tasks together.

Line 82 checks only one direction inside `current_group`, so dependency ordering can be violated (e.g., a blocker and the issue it blocks end up in the same batch). This can produce invalid execution plans for downstream scripts.

🔧 Suggested fix (topological batching)
```diff
 def find_parallel_groups(issues: List[Dict], dependency_map: Dict) -> List[List[Dict]]:
-    """Find groups of issues that can be executed in parallel"""
-    parallel_groups = []
-    remaining_issues = issues.copy()
-
-    while remaining_issues:
-        current_group = []
-        issues_to_remove = []
-
-        for issue in remaining_issues:
-            issue_id = issue.get('id')
-            # Check if this issue depends on any issue in current group
-            can_add = True
-            for other_issue in current_group:
-                other_id = other_issue.get('id')
-                if issue_id in dependency_map.get(other_id, []):
-                    can_add = False
-                    break
-
-            if can_add:
-                current_group.append(issue)
-                issues_to_remove.append(issue)
-
-        if current_group:
-            parallel_groups.append(current_group)
-            for issue in issues_to_remove:
-                remaining_issues.remove(issue)
-        else:
-            # No more parallel groups possible
-            break
-
-    return parallel_groups
+    """Find executable parallel batches using dependency layers."""
+    issue_by_id = {i.get("id"): i for i in issues if i.get("id") is not None}
+    blockers = {issue_id: set() for issue_id in issue_by_id}
+
+    # dependency_map is blocker -> [blocked...]
+    for blocker_id, blocked_ids in dependency_map.items():
+        for blocked_id in blocked_ids:
+            if blocked_id in blockers:
+                blockers[blocked_id].add(blocker_id)
+
+    remaining_ids = set(issue_by_id.keys())
+    parallel_groups: List[List[Dict]] = []
+
+    while remaining_ids:
+        ready_ids = [i for i in remaining_ids if len(blockers[i] & remaining_ids) == 0]
+        if not ready_ids:
+            break  # cycle or malformed dependency graph
+
+        parallel_groups.append([issue_by_id[i] for i in ready_ids])
+        remaining_ids -= set(ready_ids)
+
+    return parallel_groups
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/bd-management/scripts/analyze.py` around lines 67 - 98: the current find_parallel_groups function can place dependent tasks in the same batch because it only checks if the candidate issue depends on items already in current_group; instead implement topological batching: for the remaining_issues compute incoming dependencies restricted to the remaining set using dependency_map, then select a batch of issues with zero incoming edges (no remaining dependencies), append that batch to parallel_groups, remove them from remaining_issues and repeat until none remain; update references in find_parallel_groups and use issue.get('id') and dependency_map to build the restricted incoming-edge counts so dependent/blocked tasks never appear in the same group as their blockers.

agent-zero/skills/blender-3d-studio/scripts/queue-render.sh-187-195 (1)
187-195: ⚠️ Potential issue | 🟠 Major

Config `render_engine` is loaded but never applied.

Line 187 assigns `RENDER_ENGINE`, but the job metadata and render script use `ENGINE`. As written, config engine overrides are ignored.

✅ Fix

```diff
-    RENDER_ENGINE=$(python3 -c "
+    ENGINE=$(python3 -c "
 import json
 try:
     with open('$CONFIG_FILE', 'r') as f:
         config = json.load(f)
     print(config.get('render_engine', 'cycles'))
 except:
     print('cycles')
 " 2>/dev/null || echo "cycles")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/scripts/queue-render.sh` around lines 187 - 195: the config-parsed variable RENDER_ENGINE is set but never used because the script and job metadata reference ENGINE instead; update the script to use a single consistent variable by assigning ENGINE="$RENDER_ENGINE" after RENDER_ENGINE is computed (or replace uses of ENGINE with RENDER_ENGINE) and ensure downstream references in the render invocation and job metadata (e.g., the render call and any metadata emission) use that unified symbol so config overrides take effect.

agent-zero/skills/blender-3d-studio/scripts/start-mcp-server.sh-114-114 (1)
114-114: ⚠️ Potential issue | 🟠 Major

Change `-h|--host)` to `-H|--host)` to fix the unreachable help flag.

Line 114 maps `-h` to the host option. Since the case statement evaluates patterns in order, `-h|--help)` at line 167 never matches—users cannot access help with `-h`, as the flag is intercepted by the host handler. Change the host option to use `-H` instead.

🔧 Suggested fix

```diff
-        -h|--host)
+        -H|--host)
             HOST="$2"
             shift 2
             ;;
```

Also applies to: 167-169

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/scripts/start-mcp-server.sh` at line 114: the case pattern for the host flag incorrectly uses `-h|--host)` which intercepts the help flag; update the case entry that handles the host option (the case pattern currently named `-h|--host)`) to use `-H|--host)` instead so `-h|--help)` (the help case) can match; search the script's option-parsing case statement to replace the host pattern accordingly and ensure no other option patterns conflict with `-h`/`--help`.

agent-zero/skills/blender-3d-studio/scripts/start-mcp-server.sh-717-718 (1)
717-718: ⚠️ Potential issue | 🟠 Major

Path resolution and subprocess handling are fundamentally broken in 2D→3D conversion.

The Python code is generated into a cache file (`./cache/mcp_server.py`), making the relative path `os.path.dirname(__file__)/../scripts/2d_to-3d.py` resolve to `./scripts/2d_to-3d.py` (the wrong location). Additionally, line 743 has a critical subprocess bug: `process.wait()` returns an integer (the return code), not stdout, so stdout gets assigned a number instead of the captured output. stderr is hardcoded as `""`, discarding all error output and never reading from the PIPE—this can deadlock if the subprocess produces large stderr output.

Fixes required:

- Export the actual script directory from the bash wrapper to the Python environment (e.g., `export MCP_SKILL_SCRIPTS_DIR="$SCRIPT_DIR"`) and use it instead of `__file__`-based paths.
- Replace `process.wait(), ""` with `process.communicate()` to properly capture both stdout and stderr without deadlock risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent-zero/skills/blender-3d-studio/scripts/start-mcp-server.sh` around lines 717 - 718, The 2D→3D conversion uses __file__-relative paths and mis-handles subprocess output: update the bash wrapper that launches the generated cache file to export the real scripts directory as MCP_SKILL_SCRIPTS_DIR (e.g., export MCP_SKILL_SCRIPTS_DIR="$SCRIPT_DIR") and change the Python path construction in the conversion call (where os.path.join(os.path.dirname(__file__), '..', 'scripts', '2d_to-3d.py') is used) to read from os.environ['MCP_SKILL_SCRIPTS_DIR'] instead of __file__; also replace the subprocess usage that assigns process.wait(), "" to stdout/stderr with process.communicate() so you capture both stdout and stderr safely (reference the code handling process.wait() and the variables used to hold stdout/stderr).
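As a minimal, standalone illustration of the second bullet (independent of the MCP server code; the child command here is illustrative, not taken from the skill), `communicate()` drains both pipes and returns the captured text, where `wait()` only returns the exit code:

```python
import subprocess

# communicate() reads stdout and stderr concurrently, then reaps the child;
# wait() alone would return only the integer exit code and could deadlock
# once either pipe buffer fills up.
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print('out'); print('err', file=sys.stderr)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
stdout, stderr = proc.communicate()
print(repr(stdout.strip()))  # 'out'
print(repr(stderr.strip()))  # 'err'
print(proc.returncode)       # 0
```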
```shell
FOUND=0

find "$LIBRARY_DIR" -name "*.md" -type f | while read -r file; do
    filename=$(basename "$file" .md)

    if [[ "$filename" == *"$NAME"* ]]; then
        FOUND=1
        relative_path="${file#$LIBRARY_DIR/}"

        echo "📦 Component: $filename"
        echo "📁 Location: $relative_path"
        echo "=================================================================="
        echo ""
        cat "$file"
        echo ""
        echo "=================================================================="
        echo "✅ End of component"
        exit 0
    fi
done

if [ $FOUND -eq 0 ]; then
    echo "❌ Component not found: $NAME"
    echo ""
    echo "To search for components:"
    echo " ./scripts/search-library.sh $NAME"
    echo ""
    echo "To list all components:"
    echo " ./scripts/list-components.sh --all"
    exit 1
```
Subshell variable scope bug: `FOUND` is never updated in the parent shell.

The `find | while read` pattern runs the while loop in a subshell. Changes to `FOUND` (Line 26) and the `exit 0` (Line 37) only affect the subshell, not the parent script. After the pipeline completes, the parent shell's `FOUND` remains 0, causing the "Component not found" message to always display—even when a component was successfully found and printed.
🐛 Proposed fix using process substitution
```diff
 # Search for the component
-FOUND=0
-
-find "$LIBRARY_DIR" -name "*.md" -type f | while read -r file; do
+while read -r file; do
     filename=$(basename "$file" .md)
     if [[ "$filename" == *"$NAME"* ]]; then
-        FOUND=1
         relative_path="${file#$LIBRARY_DIR/}"
         echo "📦 Component: $filename"
         echo "📁 Location: $relative_path"
         echo "=================================================================="
         echo ""
         cat "$file"
         echo ""
         echo "=================================================================="
         echo "✅ End of component"
         exit 0
     fi
-done
+done < <(find "$LIBRARY_DIR" -name "*.md" -type f)

-if [ $FOUND -eq 0 ]; then
-    echo "❌ Component not found: $NAME"
-    echo ""
-    echo "To search for components:"
-    echo " ./scripts/search-library.sh $NAME"
-    echo ""
-    echo "To list all components:"
-    echo " ./scripts/list-components.sh --all"
-    exit 1
-fi
+# If we reach here, no component was found
+echo "❌ Component not found: $NAME"
+echo ""
+echo "To search for components:"
+echo " ./scripts/search-library.sh $NAME"
+echo ""
+echo "To list all components:"
+echo " ./scripts/list-components.sh --all"
+exit 1
```
+exit 1This uses process substitution (< <(...)) so the while loop runs in the current shell, allowing exit 0 to properly terminate the script when a match is found.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
while read -r file; do
    filename=$(basename "$file" .md)
    if [[ "$filename" == *"$NAME"* ]]; then
        relative_path="${file#$LIBRARY_DIR/}"
        echo "📦 Component: $filename"
        echo "📁 Location: $relative_path"
        echo "=================================================================="
        echo ""
        cat "$file"
        echo ""
        echo "=================================================================="
        echo "✅ End of component"
        exit 0
    fi
done < <(find "$LIBRARY_DIR" -name "*.md" -type f)

# If we reach here, no component was found
echo "❌ Component not found: $NAME"
echo ""
echo "To search for components:"
echo " ./scripts/search-library.sh $NAME"
echo ""
echo "To list all components:"
echo " ./scripts/list-components.sh --all"
exit 1
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/avant-garde-frontend-architect/scripts/get-component.sh`
around lines 20 - 49, The FOUND variable never gets updated in the parent shell
because the pipeline "find ... | while read -r file; do ... done" runs the while
loop in a subshell; change the loop to run in the current shell by switching to
process substitution: use "while read -r file; do ... done < <(find
"$LIBRARY_DIR" -name \"*.md\" -type f)". Keep the same logic inside the loop
(checking "$filename" against "$NAME", setting FOUND=1, printing the file and
calling exit 0) so exit terminates the script correctly, and leave the final if
[ $FOUND -eq 0 ] block unchanged to handle the not-found case.
```shell
#!/bin/bash
# Search component library by keyword

set -e
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check if the file exists and examine its content
if [ -f "agent-zero/skills/avant-garde-frontend-architect/scripts/search-library.sh" ]; then
  echo "=== File found, reading content ==="
  cat -n "agent-zero/skills/avant-garde-frontend-architect/scripts/search-library.sh"
else
  echo "File not found at expected path, searching for it..."
  find . -name "search-library.sh" -type f 2>/dev/null | head -5
fi
```

Repository: aegntic/cldcde
Length of output: 2254
Counter tracking is broken and can terminate early.
The while loop at line 27 runs in a pipeline subshell, so `found` increments at lines 36 and 58 do not persist to the parent scope. Additionally, `((found++))` returns exit status 1 when `found=0`, which terminates the subshell immediately with `set -e` on the first match. The check at line 62 will always see `found=0`, causing the script to fail even when matches exist.
Replace the pipeline with process substitution and use safe arithmetic:
Proposed fix
```diff
-find "$LIBRARY_DIR" -name "*.md" -type f | while read -r file; do
+while IFS= read -r -d '' file; do
     # Search in filename
     filename=$(basename "$file" .md)
@@ -33,7 +33,7 @@
         relative_path="${file#$LIBRARY_DIR/}"
         title=$(head -n 1 "$file" | sed 's/^# //')
         echo "📦 $title"
         echo "  📁 $relative_path"
         echo ""
-        ((found++))
+        found=$((found + 1))
     fi
     # Search in content
@@ -55,7 +55,7 @@
         echo "  🎯 Matches:"
         echo "$matches" | sed 's/^/    /'
     fi
     echo ""
-    ((found++))
+    found=$((found + 1))
 fi
-done
+done < <(find "$LIBRARY_DIR" -type f -name "*.md" -print0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/avant-garde-frontend-architect/scripts/search-library.sh`
at line 4, The counter tracking bug stems from using a pipeline subshell for the
while loop (so updates to the found variable don't persist) and using
((found++)) which can return non-zero when found==0 under set -e; fix by
replacing the pipeline with process substitution (e.g., change "cmd | while read
..." to "while read ...; do ...; done < <(cmd)") so the loop runs in the main
shell, and replace all occurrences of "((found++))" with a safe arithmetic
update like "found=$((found+1))" or "((found+=1))" so the increment doesn't
produce a failing exit status. Ensure the variable name found is used
consistently and the final check reads the updated found value in the parent
shell.
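The arithmetic-status pitfall can be reproduced in isolation (nothing below comes from search-library.sh itself; the inner commands are a minimal sketch):

```shell
#!/bin/bash
# `((found++))` evaluates to 0 on the first increment (post-increment returns
# the old value), so its exit status is 1 and a `set -e` shell dies there.
bash -c 'set -e; found=0; ((found++)); echo "never reached"'
echo "exit status: $?"

# The assignment form always exits 0, so it is safe under `set -e`.
bash -c 'set -e; found=0; found=$((found + 1)); echo "found=$found"'
echo "exit status: $?"
```

The first inner shell dies before its `echo` and reports status 1; the second prints `found=1` and exits 0, which is why the proposed fix swaps `((found++))` for `found=$((found + 1))`.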
```python
bpy.ops.object.convert(target='MESH')
infinity_obj = bpy.context.active_object
infinity_obj.name = "Infinity_3D"

# Enter Edit Mode to modify the curve points
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.curve.select_all(action='SELECT')

# Scale and position the curve
bpy.ops.transform.resize(value=(4, 2, 1))
bpy.ops.transform.translate(value=(0, 0, 0))
```
Critical: Curve operation called on mesh object.
After converting the curve to mesh at line 24, calling `bpy.ops.curve.select_all()` at line 30 will fail because the object is no longer a curve. Mesh selection operations should be used instead.
🐛 Proposed fix
```diff
     # Convert to mesh for extrusion
     bpy.ops.object.convert(target='MESH')
     infinity_obj = bpy.context.active_object
     infinity_obj.name = "Infinity_3D"

     # Enter Edit Mode to modify the curve points
     bpy.ops.object.mode_set(mode='EDIT')
-    bpy.ops.curve.select_all(action='SELECT')
+    bpy.ops.mesh.select_all(action='SELECT')

     # Scale and position the curve
     bpy.ops.transform.resize(value=(4, 2, 1))
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
bpy.ops.object.convert(target='MESH')
infinity_obj = bpy.context.active_object
infinity_obj.name = "Infinity_3D"

# Enter Edit Mode to modify the curve points
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Scale and position the curve
bpy.ops.transform.resize(value=(4, 2, 1))
bpy.ops.transform.translate(value=(0, 0, 0))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/blender-3d-studio/convert-infinity.py` around lines 24 -
34, You converted the curve to a mesh (bpy.ops.object.convert) but then call
bpy.ops.curve.select_all(), which fails because the object is no longer a curve;
update the selection call to the mesh equivalent (use
bpy.ops.mesh.select_all(action='SELECT')) and ensure the object is in EDIT mode
before selecting (check/keep bpy.ops.object.mode_set(mode='EDIT') around the
selection), replacing references to bpy.ops.curve.select_all with
bpy.ops.mesh.select_all and keeping subsequent transform calls compatible with
mesh edit mode on the infinity_obj.
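Several of the Blender findings above (the operator mismatch here, the Principled BSDF socket renames earlier) come from API drift between releases. A defensive lookup helper along these lines can keep material code working across 3.x and 4.x; this is a sketch, `first_input` is a hypothetical name, and the `Socket` class below merely stands in for `bsdf.inputs`, which behaves like a string-keyed mapping supporting `in` and indexing:

```python
def first_input(inputs, *names):
    """Return the first socket present in `inputs` among `names`."""
    for name in names:
        if name in inputs:  # bpy node input collections support `in` with keys
            return inputs[name]
    raise KeyError(f"none of {names} found")

# Stand-in for bsdf.inputs on Blender 4.x, where 'Specular' was renamed:
class Socket:
    def __init__(self):
        self.default_value = 0.5

inputs_4x = {"Specular IOR Level": Socket(), "Roughness": Socket()}

# Try the old name first, fall back to the new one (or vice versa):
sock = first_input(inputs_4x, "Specular", "Specular IOR Level")
sock.default_value = 0.2
print(sock.default_value)  # 0.2
```

The same call works unchanged against a 3.x-style collection that still exposes `"Specular"`, so one code path covers both versions.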
```shell
    local base_name=$(basename "$INPUT_FILE")
    local name_no_ext="${base_name%.*}"
    OUTPUT_FILE="${name_no_ext}_3d.${OUTPUT_FORMAT}"
fi
```
Critical: `local` keyword used outside of a function.

Same issue as in generate-animation.sh - the `local` keyword is only valid inside Bash functions. This affects lines 153-154, 224, 228, 232-233, and 272-274.
🐛 Proposed fix - remove `local` keywords at script level
```diff
 if [ -z "$OUTPUT_FILE" ]; then
     # Generate default output filename
-    local base_name=$(basename "$INPUT_FILE")
-    local name_no_ext="${base_name%.*}"
+    base_name=$(basename "$INPUT_FILE")
+    name_no_ext="${base_name%.*}"
     OUTPUT_FILE="${name_no_ext}_3d.${OUTPUT_FORMAT}"
 fi
```

Apply similar changes to all other `local` usages at script level (lines 224, 228, 232, 233, 272, 273, 274).
🧰 Tools
🪛 Shellcheck (0.11.0)
[error] 153-153: 'local' is only valid in functions.
(SC2168)
[warning] 153-153: Declare and assign separately to avoid masking return values.
(SC2155)
[error] 154-154: 'local' is only valid in functions.
(SC2168)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/blender-3d-studio/scripts/2d-to-3d.sh` around lines 153 -
156, The script incorrectly uses the Bash-only `local` keyword at top-level
(e.g., lines creating base_name and name_no_ext and setting OUTPUT_FILE in
2d-to-3d.sh); remove the `local` keyword from those top-level variable
declarations so they are regular shell variables, and apply the same change for
the other top-level occurrences mentioned (the other `local` usages around the
blocks that set/modify variables at lines noted in the review). Locate and edit
the statements that begin with `local base_name`, `local name_no_ext` (and the
similar `local` declarations at the other reported sites) and simply remove
`local` so the assignments remain valid outside a function.
```shell
    local base_name=$(basename "$INPUT_FILE")
    local name_no_ext="${base_name%.*}"
    OUTPUT_FILE="${name_no_ext}_${ANIMATION_TYPE}.blend"
fi
```
Critical: local keyword used outside of a function.
The local keyword is only valid inside Bash functions. Using it at the top level of the script will cause a syntax error. This affects lines 161-162, 202, 206, 224-225, and 553.
🐛 Proposed fix - remove `local` keywords at script level
if [ -z "$OUTPUT_FILE" ]; then
# Generate default output filename
- local base_name=$(basename "$INPUT_FILE")
- local name_no_ext="${base_name%.*}"
+ base_name=$(basename "$INPUT_FILE")
+ name_no_ext="${base_name%.*}"
OUTPUT_FILE="${name_no_ext}_${ANIMATION_TYPE}.blend"
fi

Apply similar fixes at lines 202, 206, 224, 225, and 553:
# Create output directory
-local output_dir=$(dirname "$OUTPUT_FILE")
+output_dir=$(dirname "$OUTPUT_FILE")
mkdir -p "$output_dir"
# Calculate total frames
-local total_frames=$(echo "$DURATION * $FRAME_RATE" | bc)
+total_frames=$(echo "$DURATION * $FRAME_RATE" | bc)

 # Create temporary working directory
-local temp_dir=$(mktemp -d)
-local temp_script="$temp_dir/animation_script.py"
+temp_dir=$(mktemp -d)
+temp_script="$temp_dir/animation_script.py"

 # Get Blender path
-local blender_path="blender"
+blender_path="blender"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| local base_name=$(basename "$INPUT_FILE") | |
| local name_no_ext="${base_name%.*}" | |
| OUTPUT_FILE="${name_no_ext}_${ANIMATION_TYPE}.blend" | |
| fi | |
| base_name=$(basename "$INPUT_FILE") | |
| name_no_ext="${base_name%.*}" | |
| OUTPUT_FILE="${name_no_ext}_${ANIMATION_TYPE}.blend" | |
| fi |
🧰 Tools
🪛 Shellcheck (0.11.0)
[error] 161-161: 'local' is only valid in functions.
(SC2168)
[warning] 161-161: Declare and assign separately to avoid masking return values.
(SC2155)
[error] 162-162: 'local' is only valid in functions.
(SC2168)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/blender-3d-studio/scripts/generate-animation.sh` around
lines 161 - 164, The script uses the Bash-only "local" keyword at top-level
(e.g., the variables base_name, name_no_ext and OUTPUT_FILE in the
generate-animation.sh snippet), which causes a syntax error; fix by removing the
"local" keyword for any variables declared outside of functions (or
alternatively move those declarations into a function if you need function
scope), i.e., replace "local var=..." with a plain assignment "var=..." (or use
"declare" inside scripts where appropriate) for all occurrences of local at
top-level (including the other occurrences noted).
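Both comments hinge on the same Bash rule: `local` is only valid inside a function body (SC2168), and SC2155 separately recommends splitting declaration from assignment so the command substitution's exit status is not masked. A minimal sketch of the function-scoped form (the file names are illustrative):

```shell
#!/usr/bin/env bash
# `local` is legal here because we are inside a function body.
make_output_name() {
    # Declare and assign separately so `local` does not mask the
    # exit status of the command substitution (SC2155).
    local base_name
    base_name=$(basename "$1")
    local name_no_ext="${base_name%.*}"
    printf '%s_3d.blend\n' "$name_no_ext"
}

make_output_name "/tmp/render/scene.png"  # → scene_3d.blend
```

At script level, the same assignments simply drop the `local` keyword, which is what the proposed fixes do.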
```shell
#!/bin/bash

# Simplified test script for 2D to 3D conversion
set -e
```
🧩 Analysis chain
🏁 Script executed:
git ls-files | grep -E "blender.*test-conversion"Repository: aegntic/cldcde
Length of output: 164
🏁 Script executed:
cat -n agent-zero/skills/blender-3d-studio/test-conversion.shRepository: aegntic/cldcde
Length of output: 8551
🌐 Web query:
Blender bpy.ops.uv.smart_project edit mode context requirement
💡 Result:
bpy.ops.uv.smart_project() is an operator, so its poll() requires the same kind of UI context as when you run UV → Smart UV Project manually. Being in Edit Mode alone is not always enough; you typically need all of:
- An active mesh object
- Edit Mode and some geometry selected
- A valid UI area/region context (commonly a `VIEW_3D` area) when calling it from a script, otherwise you'll get `poll() failed, context is incorrect` [1], [3]
Minimal pattern (context override + edit-mode selection):
import bpy
obj = bpy.context.object
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
areas = [a for a in bpy.context.window.screen.areas if a.type == 'VIEW_3D']
area = areas[0]
region = next(r for r in area.regions if r.type == 'WINDOW')
with bpy.context.temp_override(
window=bpy.context.window,
screen=bpy.context.window.screen,
area=area,
region=region,
):
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

This works because overriding context is the intended way to run operators "as if" they were invoked from a specific editor/area [2].
Sources: [1] (Smart Project context override example), [2] (Blender bpy.ops docs: overriding context), [3] (Blender “Gotchas”: operators depend on context/poll).
🌐 Web query:
Blender bpy light data.color RGB RGBA 4-tuple assignment
💡 Result:
In Blender’s Python API, bpy.types.Light.color is RGB only: it’s a mathutils.Color with 3 items (R, G, B). Assigning a 4-tuple (RGBA) will fail because the property does not have an alpha channel. [1]
light = bpy.data.lights["Light"]
# OK (RGB)
light.color = (1.0, 0.2, 0.3)
# If you have RGBA, drop alpha
rgba = (1.0, 0.2, 0.3, 0.5)
light.color = rgba[:3]

If you need "alpha/opacity"-like control for a light's effect, you generally do it via strength/energy (or via shader nodes for emissive materials), not Light.color. [1]
Sources: [1] Blender API docs: bpy.types.Light.color (3-item mathutils.Color).
Shell control flow bypasses cleanup, and Blender script has critical API errors.
At Line 215, the unconditional blender command with set -e will exit immediately on failure, skipping both the error handling at lines 217-239 and cleanup at line 242.
Additionally, the embedded Python script will fail at runtime:
- Line 85: `bpy.ops.uv.smart_project()` requires Edit Mode and a proper context override. The current code only sets the active object and selection without entering Edit Mode, causing `poll() failed, context is incorrect`.
- Line 129: `fill_light.data.color = (0.9, 0.9, 1.0, 1.0)` assigns a 4-tuple (RGBA) to a 3-item RGB property; this will fail. Use `(0.9, 0.9, 1.0)` instead.
💡 Proposed fix
set -e
+
+cleanup() {
+ rm -f convert_2d_to_3d.py
+}
+trap cleanup EXIT
@@
-echo "🎬 Starting 2D to 3D conversion..."
-blender --background --python convert_2d_to_3d.py 2>&1
+echo "🎬 Starting 2D to 3D conversion..."
+if ! blender --background --python convert_2d_to_3d.py 2>&1; then
+ echo "❌ Conversion failed"
+ echo "Check the error messages above"
+ exit 1
+fi
@@
-# Cleanup
-rm -f convert_2d_to_3d.py

For the Python script, fix lines 85 and 129:
- bpy.ops.uv.smart_project()
+ bpy.ops.object.mode_set(mode='EDIT')
+ bpy.ops.mesh.select_all(action='SELECT')
+ areas = [a for a in bpy.context.window.screen.areas if a.type == 'VIEW_3D']
+ if areas:
+ area = areas[0]
+ region = next((r for r in area.regions if r.type == 'WINDOW'), None)
+ if region:
+ with bpy.context.temp_override(area=area, region=region):
+ bpy.ops.uv.smart_project()
+ bpy.ops.object.mode_set(mode='OBJECT')
- fill_light.data.color = (0.9, 0.9, 1.0, 1.0)
+ fill_light.data.color = (0.9, 0.9, 1.0)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/blender-3d-studio/test-conversion.sh` at line 4, The script
uses set -e which causes the unconditional `blender` invocation to abort the
script and skip later error handling/cleanup; change control flow to capture
blender's exit status (remove set -e or add a trap and run blender, then check
its exit code and call the existing error handler/cleanup when non-zero) so
lines 217-239 and the cleanup at line 242 always run on failure. In the embedded
Python, ensure UV unwrap runs in Edit Mode for the active mesh: before calling
bpy.ops.uv.smart_project() switch the object into Edit Mode (e.g.,
bpy.ops.object.mode_set(mode='EDIT') or build a proper context override) so
poll() succeeds, and fix the light color assignment by setting
fill_light.data.color to a 3-tuple (0.9, 0.9, 1.0) instead of a 4-tuple.
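The recommended shape — an EXIT trap for cleanup plus an explicit status check so `set -e` cannot skip the diagnostics — can be exercised without Blender; here `false` stands in for the failing `blender` invocation and the temp file for the generated Python script:

```shell
#!/usr/bin/env bash
set -e

demo() {
    local tmpscript
    tmpscript=$(mktemp)
    # The EXIT trap fires whether demo succeeds or fails.
    trap 'rm -f "$tmpscript"; echo "cleanup ran"' EXIT

    # `if !` keeps set -e from aborting before we can report the error.
    if ! false; then
        echo "conversion failed"
        exit 1
    fi
}

out=$( (demo) || true )   # run in a subshell so its EXIT trap fires here
echo "$out"
```

Both the diagnostic and the cleanup message are printed, even though the stand-in command failed under `set -e`.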
```python
fill_light = bpy.context.active_object
fill_light.name = "Fill_Light"
fill_light.data.energy = 400
fill_light.data.color = (0.9, 0.9, 1.0, 1.0)
```
🧩 Analysis chain
🌐 Web query:
Blender Python API bpy.types.Light.color RGB components specification
💡 Result:
bpy.types.Light.color is a 3-component RGB color stored as a mathutils.Color (no alpha):
- Type / size: `mathutils.Color` with 3 floats (R, G, B). [1]
- Numeric range: each component is allowed in [0, +inf] (so values > 1.0 are valid / "HDR"). [1]
- Color space meaning: like most Blender API color values, it is in the scene-linear (working) color space (per the active OCIO configuration), not display/sRGB. [2]
- Blender’s “reference linear” is typically linear with Rec.709 primaries and D65 (unless your OCIO/working space differs). [3]
- If you have sRGB UI values (0–1) and want to set the light correctly, convert sRGB → scene-linear first (Blender provides conversion helpers on `mathutils.Color`). [2][4]
Example (sRGB to scene-linear before assignment):
import bpy
from mathutils import Color
c = Color((1.0, 0.5, 0.0)) # sRGB values in 0..1
c.from_srgb_to_scene_linear()
bpy.data.lights["Light"].color = c

Sources:
[1] Blender Python API: bpy.types.Light.color ([0, inf], mathutils.Color) (docs.blender.org)
[2] Blender dev docs (Python API notes): most API colors are scene-linear; conversions exist (developer.blender.org)
[3] Blender Manual (Color Management): reference linear / OCIO context (docs.blender.org)
[4] mathutils.Color API: sRGB↔scene-linear conversion methods (upbge.org)
Citations:
- 1: https://docs.blender.org/api/current/bpy.types.Light.html?utm_source=openai
- 2: https://developer.blender.org/docs/release_notes/3.2/python_api/?utm_source=openai
- 3: https://docs.blender.org/manual/en/latest/render/color_management.html?utm_source=openai
- 4: https://upbge.org/docs/latest/api/mathutils.html
Light.color expects RGB (3 components), not RGBA (4 components).
Line 149 assigns a 4-tuple to fill_light.data.color, but Blender's API expects a mathutils.Color with only 3 float values (R, G, B). The alpha component will cause a runtime error.
Fix
- fill_light.data.color = (0.9, 0.9, 1.0, 1.0)
+ fill_light.data.color = (0.9, 0.9, 1.0)

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
fill_light.data.color = (0.9, 0.9, 1.0)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/blender-3d-studio/working-convert.py` at line 149, The
assignment to fill_light.data.color uses an RGBA 4-tuple but Blender's
Light.color expects an RGB 3-tuple (or mathutils.Color); change the value to
three components (e.g., (0.9, 0.9, 1.0)) or construct a mathutils.Color and
assign that to fill_light.data.color to remove the alpha component and avoid the
runtime error in the fill_light setup.
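Outside of Blender, the RGBA-to-RGB rule is easy to encode as a small guard; this is plain Python with no `bpy` dependency, and the helper name is made up for the sketch:

```python
def to_rgb(color):
    """Return the 3-component RGB tuple that Light.color expects.

    Accepts RGB or RGBA input; any alpha component is dropped.
    """
    if len(color) not in (3, 4):
        raise ValueError(f"expected 3 or 4 components, got {len(color)}")
    return tuple(color[:3])

# RGBA input: alpha is discarded before assignment
print(to_rgb((0.9, 0.9, 1.0, 1.0)))  # → (0.9, 0.9, 1.0)
# RGB input passes through unchanged
print(to_rgb((0.9, 0.9, 1.0)))       # → (0.9, 0.9, 1.0)
```

In a real script the call site would then be `fill_light.data.color = to_rgb(color)`.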
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit(*)|Write(*)",
        "command": "python3 /a0/usr/plugins/debt-sentinel/debt-sentinel.py --check"
      }
    ]
  }
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the referenced implementation files exist in the debt-sentinel skill directory
echo "Checking for debt-sentinel implementation files..."
# Find the debt-sentinel directory
fd -t d "debt-sentinel" agent-zero/skills/ 2>/dev/null || echo "No debt-sentinel directory found"
# List all files in the debt-sentinel directory
echo -e "\n--- Files in debt-sentinel directory ---"
fd -t f . agent-zero/skills/debt-sentinel/ 2>/dev/null || echo "Directory not found or empty"
# Check for specific referenced files
echo -e "\n--- Checking for specific files ---"
echo "Looking for debt-sentinel.py:"
fd -t f "debt-sentinel.py" agent-zero/skills/debt-sentinel/ 2>/dev/null || echo "Not found"
echo "Looking for ANTI_PATTERNS.md:"
fd -t f "ANTI_PATTERNS.md" agent-zero/skills/debt-sentinel/ 2>/dev/null || echo "Not found"
echo "Looking for DEBT.md:"
fd -t f "DEBT.md" agent-zero/skills/debt-sentinel/ 2>/dev/null || echo "Not found"
# Also check for any Python files in the directory
echo -e "\n--- Checking for any Python files in debt-sentinel ---"
fd -t f "\.py$" agent-zero/skills/debt-sentinel/ 2>/dev/null || echo "No Python files found"
# Check if /a0/usr/plugins exists (may not in sandbox)
echo -e "\n--- Checking if /a0/usr/plugins exists ---"
ls -la /a0/usr/plugins/ 2>/dev/null || echo "Path does not exist"
# Check the actual structure
echo -e "\n--- Structure of agent-zero/skills/debt-sentinel ---"
tree agent-zero/skills/debt-sentinel/ 2>/dev/null || find agent-zero/skills/debt-sentinel/ -type f -o -type dRepository: aegntic/cldcde
Length of output: 583
Add missing implementation files and fix path references.
The skill documentation references files that don't exist in this PR:
- `debt-sentinel.py` (Python implementation script)
- `ANTI_PATTERNS.md` (pattern configuration)
- `DEBT.md` (debt ledger)

Additionally, all path references use `/a0/usr/plugins/debt-sentinel/` (lines 74, 84, 101, 114, 143), but the skill directory is located at `agent-zero/skills/debt-sentinel/`. The `/a0/usr/plugins/` path does not exist. Either:

- Update all path references to match the actual deployment location (`agent-zero/skills/debt-sentinel/`), or
- Document the correct runtime mount structure if `/a0/usr/plugins/` is intended as a future deployment target
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/debt-sentinel/SKILL.md` around lines 68 - 79, The SKILL.md
references missing implementation and config files (debt-sentinel.py,
ANTI_PATTERNS.md, DEBT.md) and uses incorrect absolute paths
(/a0/usr/plugins/debt-sentinel/) instead of the repo skill path; either add the
missing files to the PR (create debt-sentinel.py with the runtime entrypoint,
plus ANTI_PATTERNS.md and DEBT.md) under agent-zero/skills/debt-sentinel/ and
update all hook/command examples in SKILL.md to use
agent-zero/skills/debt-sentinel/, or remove the PreToolUse hook and any example
commands that reference debt-sentinel.py and the other missing files; ensure all
references in SKILL.md (the JSON hooks block and any inline examples) point to
the chosen location and that the entrypoint filename debt-sentinel.py matches
the actual script name.
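Before wiring up a hook like the one above, it is worth sanity-checking the config shape; a minimal validator sketch (the field names come from the snippet, the validator itself is illustrative):

```python
import json

# The hook entry from the skill docs, verbatim.
config_text = """
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit(*)|Write(*)",
        "command": "python3 /a0/usr/plugins/debt-sentinel/debt-sentinel.py --check"
      }
    ]
  }
}
"""

def validate_hooks(raw: str) -> list:
    """Return a list of problems found in a hooks config; empty means OK."""
    problems = []
    config = json.loads(raw)
    for event, entries in config.get("hooks", {}).items():
        for entry in entries:
            if "matcher" not in entry or "command" not in entry:
                problems.append(f"{event}: entry missing matcher/command")
    return problems

print(validate_hooks(config_text))  # → []
```

A path-existence check on the script named in `command` would additionally have caught the missing `debt-sentinel.py` flagged above.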
```shell
# ClawReform Demo Video Generator
# Launches VM, installs ClawReform, records demo

set -e
```
🧩 Analysis chain
🏁 Script executed:
cat -n agent-zero/skills/demo-video/scripts/create-demo.shRepository: aegntic/cldcde
Length of output: 3892
Enable pipefail to prevent masking cargo build failures in pipelines.
The script has set -e on line 5, but line 62 pipes the cargo build output to tail -5. Without pipefail, a failed cargo build will not cause the script to exit because tail -5 (the rightmost command) succeeds regardless of the build status. Change line 5 to set -euo pipefail to propagate all pipeline failures.
Proposed patch
-set -e
+set -euo pipefail

Optionally, also preserve the full build log while still showing only the last 5 lines:
- cargo build --release -p openfang-cli 2>&1 | tail -5
+ cargo build --release -p openfang-cli 2>&1 | tee /tmp/cargo-build.log | tail -5

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
set -euo pipefail
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/demo-video/scripts/create-demo.sh` at line 5, The script
currently uses "set -e" which doesn't propagate failures through pipelines (see
the existing "set -e" at the top and the "cargo build | tail -5" usage later),
so change the shell options to enable pipefail and stricter checks (replace the
top "set -e" with "set -euo pipefail") to ensure a failed cargo build causes the
script to exit; optionally, to preserve the full build log while still showing
only the last 5 lines, capture the cargo output to a temp file and then tail -5
that file before failing if the cargo command exits nonzero.
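The masking behavior is easy to reproduce with a trivial pipeline, using `false | cat` as a stand-in for `cargo build | tail -5`:

```shell
#!/usr/bin/env bash
# Without pipefail, a pipeline's status is that of its LAST command,
# so `cat` hides the failure of `false`.
false | cat
echo "without pipefail: $?"   # prints 0

# With pipefail, any failing stage fails the whole pipeline.
set -o pipefail
false | cat
echo "with pipefail: $?"      # prints 1
```

Under `set -e`, only the second form would abort the script when the build fails.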
```shell
    python3 -c "
import json
import sys

try:
    with open('$CONFIG_FILE', 'r') as f:
        config = json.load(f)

    keys = '$key'.split('.')
    value = config

    for k in keys:
        if isinstance(value, dict) and k in value:
            value = value[k]
        else:
            value = '$default'
            break

    if isinstance(value, str):
        print(value)
    else:
        print('$default')
except:
    print('$default')
"
```
🧩 Analysis chain
🏁 Script executed:
find . -name "fpef-analyze.sh" -type fRepository: aegntic/cldcde
Length of output: 170
🏁 Script executed:
sed -n '120,150p' ./agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh | cat -nRepository: aegntic/cldcde
Length of output: 806
🏁 Script executed:
grep -n "get_config_value" ./agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh | head -20Repository: aegntic/cldcde
Length of output: 501
🏁 Script executed:
grep -n "CONFIG_FILE=" ./agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh | head -10Repository: aegntic/cldcde
Length of output: 138
🏁 Script executed:
sed -n '1,60p' ./agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh | cat -nRepository: aegntic/cldcde
Length of output: 2363
🏁 Script executed:
cat -n ./agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh | sed -n '117,160p'Repository: aegntic/cldcde
Length of output: 1448
🏁 Script executed:
# Test if we can demonstrate the vulnerability with a problematic CONFIG_FILE value
# Let's also check if there's a sample config file
find . -name "fpef-config.json" -type fRepository: aegntic/cldcde
Length of output: 176
🏁 Script executed:
cat ./agent-zero/skills/fpef-analyzer/resources/fpef-config.jsonRepository: aegntic/cldcde
Length of output: 640
🏁 Script executed:
# Create a test to demonstrate the vulnerability
# If we have a key with a quote in the default value, it breaks Python syntax
cat > /tmp/test_shell_interp.sh << 'EOF'
#!/bin/bash
CONFIG_FILE="./agent-zero/skills/fpef-analyzer/resources/fpef-config.json"
get_config_value() {
local key="$1"
local default="$2"
if [ -f "$CONFIG_FILE" ]; then
python3 -c "
import json
import sys
try:
with open('$CONFIG_FILE', 'r') as f:
config = json.load(f)
keys = '$key'.split('.')
value = config
for k in keys:
if isinstance(value, dict) and k in value:
value = value[k]
else:
value = '$default'
break
if isinstance(value, str):
print(value)
else:
print('$default')
except:
print('$default')
"
else
echo "$default"
fi
}
# This should work fine with hardcoded strings
echo "Test 1: Normal case"
get_config_value "severity" "medium"
# Test with a problematic default containing quotes
echo "Test 2: Default with quotes"
get_config_value "nonexistent_key" "value with 'quotes'"
EOF
bash /tmp/test_shell_interp.shRepository: aegntic/cldcde
Length of output: 250
🏁 Script executed:
# Now test the proposed fix
cat > /tmp/test_heredoc_fix.sh << 'EOF'
#!/bin/bash
CONFIG_FILE="./agent-zero/skills/fpef-analyzer/resources/fpef-config.json"
get_config_value() {
local key="$1"
local default="$2"
if [ -f "$CONFIG_FILE" ]; then
python3 - "$CONFIG_FILE" "$key" "$default" <<'PY'
import json
import sys
config_file, key, default = sys.argv[1], sys.argv[2], sys.argv[3]
try:
with open(config_file, "r", encoding="utf-8") as f:
config = json.load(f)
value = config
for k in key.split("."):
if isinstance(value, dict) and k in value:
value = value[k]
else:
value = default
break
if isinstance(value, (dict, list)):
print(json.dumps(value))
else:
print(value if value not in (None, "") else default)
except Exception:
print(default)
PY
else
echo "$default"
fi
}
# Test with normal case
echo "Test 1: Normal case"
get_config_value "severity" "medium"
# Test with problematic default containing quotes
echo "Test 2: Default with quotes"
get_config_value "nonexistent_key" "value with 'quotes'"
# Test with data_sources (list type) - should return JSON
echo "Test 3: List value from config"
get_config_value "data_sources" '["default"]'
EOF
bash /tmp/test_heredoc_fix.shRepository: aegntic/cldcde
Length of output: 201
🏁 Script executed:
# Test with problematic key containing quotes
cat > /tmp/test_key_quotes.sh << 'EOF'
#!/bin/bash
CONFIG_FILE="./agent-zero/skills/fpef-analyzer/resources/fpef-config.json"
# Test with the ORIGINAL vulnerable code
echo "=== ORIGINAL CODE ==="
echo "Test with key containing quotes:"
key="problem's_analysis"
default="fallback"
python3 -c "
import json
import sys
try:
with open('$CONFIG_FILE', 'r') as f:
config = json.load(f)
keys = '$key'.split('.')
value = config
for k in keys:
if isinstance(value, dict) and k in value:
value = value[k]
else:
value = '$default'
break
if isinstance(value, str):
print(value)
else:
print('$default')
except:
print('$default')
" 2>&1 || echo "ERROR: Python syntax error"
# Test with the FIXED code
echo ""
echo "=== FIXED CODE ==="
echo "Test with key containing quotes:"
python3 - "$CONFIG_FILE" "$key" "$default" <<'PY'
import json
import sys
config_file, key, default = sys.argv[1], sys.argv[2], sys.argv[3]
try:
with open(config_file, "r", encoding="utf-8") as f:
config = json.load(f)
value = config
for k in key.split("."):
if isinstance(value, dict) and k in value:
value = value[k]
else:
value = default
break
if isinstance(value, (dict, list)):
print(json.dumps(value))
else:
print(value if value not in (None, "") else default)
except Exception:
print(default)
PY
EOF
bash /tmp/test_key_quotes.shRepository: aegntic/cldcde
Length of output: 374
Fix unsafe shell interpolation in embedded Python config parsing.
Directly embedding `$CONFIG_FILE`, `$key`, and `$default` into the Python source causes syntax errors when these variables contain quotes or special characters (e.g., a value like `can't`). This breaks the script when users pass config file paths with special characters via the -c flag or when JSON values contain quotes. Additionally, the code incorrectly falls back to the default for non-string types instead of handling them properly.
The proposed fix using a heredoc with sys.argv eliminates the shell interpolation vulnerability, correctly handles dict and list values by outputting JSON, and improves exception handling.
Proposed fix
get_config_value() {
local key="$1"
local default="$2"
if [ -f "$CONFIG_FILE" ]; then
- python3 -c "
-import json
-import sys
-
-try:
- with open('$CONFIG_FILE', 'r') as f:
- config = json.load(f)
-
- keys = '$key'.split('.')
- value = config
-
- for k in keys:
- if isinstance(value, dict) and k in value:
- value = value[k]
- else:
- value = '$default'
- break
-
- if isinstance(value, str):
- print(value)
- else:
- print('$default')
-except:
- print('$default')
-"
+ python3 - "$CONFIG_FILE" "$key" "$default" <<'PY'
+import json
+import sys
+
+config_file, key, default = sys.argv[1], sys.argv[2], sys.argv[3]
+try:
+ with open(config_file, "r", encoding="utf-8") as f:
+ config = json.load(f)
+
+ value = config
+ for k in key.split("."):
+ if isinstance(value, dict) and k in value:
+ value = value[k]
+ else:
+ value = default
+ break
+
+ if isinstance(value, (dict, list)):
+ print(json.dumps(value))
+ else:
+ print(value if value not in (None, "") else default)
+except Exception:
+ print(default)
+PY
else
echo "$default"
fi
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
        python3 - "$CONFIG_FILE" "$key" "$default" <<'PY'
import json
import sys

config_file, key, default = sys.argv[1], sys.argv[2], sys.argv[3]
try:
    with open(config_file, "r", encoding="utf-8") as f:
        config = json.load(f)

    value = config
    for k in key.split("."):
        if isinstance(value, dict) and k in value:
            value = value[k]
        else:
            value = default
            break

    if isinstance(value, (dict, list)):
        print(json.dumps(value))
    else:
        print(value if value not in (None, "") else default)
except Exception:
    print(default)
PY
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agent-zero/skills/fpef-analyzer/scripts/fpef-analyze.sh` around lines 122 -
146, The embedded Python in fpef-analyze.sh unsafe-ly interpolates $CONFIG_FILE,
$key and $default causing syntax errors and shell-injection risks; replace the
inline python3 -c block with a heredoc that passes CONFIG_FILE, key and default
via sys.argv to Python (avoid direct shell interpolation), then in the Python
script open sys.argv[1] for the config path, split sys.argv[2] for keys,
traverse config (handling dicts and lists), and when the resolved value is a str
print it, otherwise json.dump the value so dict/list/numeric types are output
correctly; also improve exception handling to catch specific exceptions
(FileNotFoundError, json.JSONDecodeError, KeyError) and on error print the
provided default (JSON-encoded if non-string) to match expected output.
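The argv-plus-heredoc pattern can be exercised end to end; this standalone sketch writes a tiny config to a temp file and reads a dotted key (file contents and keys are made up for the demo):

```shell
#!/usr/bin/env bash
# Safe config lookup: values reach Python via sys.argv, never by
# interpolating shell variables into the program text.
CONFIG_FILE=$(mktemp)
printf '{"analysis": {"severity": "high"}}' > "$CONFIG_FILE"

get_config_value() {
    python3 - "$CONFIG_FILE" "$1" "$2" <<'PY'
import json, sys

config_file, key, default = sys.argv[1], sys.argv[2], sys.argv[3]
try:
    with open(config_file, encoding="utf-8") as f:
        value = json.load(f)
    for part in key.split("."):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            value = default
            break
    print(value)
except Exception:
    print(default)
PY
}

get_config_value "analysis.severity" "medium"     # → high
get_config_value "missing.key" "it's the default" # quotes are now harmless
```

Because the heredoc delimiter is quoted (`<<'PY'`), the shell never expands anything inside the Python program, so quotes in paths, keys, or defaults cannot break its syntax.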
📦 Agent Zero Skills Directory
This PR adds a complete directory of Agent Zero compatible skills, converted from the Claude Code format.
📊 Summary
🔧 Conversions Performed
📁 New Directory Structure
✅ Skills Categories
🌐 Downloadable via cldcde.cc
Once merged, these skills will be available for download from cldcde.cc for Agent Zero users.
Automated conversion and PR creation by Agent Zero
Summary by CodeRabbit
Release Notes