Remove mem0 MCP integration entirely from the project:
- Delete Curator agent (curator.md) and all mem0 tool references
- Clean all 8 agent templates, 7 command files, docs, skills, tests
- Remove ACE learning system references from plugin metadata
- Delete obsolete files (tier3_integration_report.json, apply_tier3_deltas.py, playbook.json.backup, requirements-semantic.txt)
- Update __init__.py to remove curator generator and mem0 allowlist

Simplify map-review to always use thorough mode:
- Remove BIG/SMALL mode selection menu
- Always present top 4 issues per section

Fix hook errors causing noise during agent work:
- block-secrets.py: add SAFE_PATH_PREFIXES to stop blocking .claude/ framework files
- safety-guardrails.py: narrow "token"/"password" patterns to file extensions only, add .claude/ dirs to safe prefixes, fix missing execute permission
- settings.local.json: remove duplicate workflow-context-injector.py (matcher "*", timeout 1s) and duplicate end-of-turn.sh Stop hook

All templates synced via make sync-templates. 635 passed, 1 skipped, 0 failures.
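The SAFE_PATH_PREFIXES change in block-secrets.py amounts to a prefix allowlist consulted before any secret-pattern scan. A minimal sketch of the idea — the names and prefixes below are illustrative assumptions, not the actual hook code:

```python
# Hypothetical sketch of the allowlist idea; SAFE_PATH_PREFIXES and
# should_scan are illustrative names, not the real hook internals.
SAFE_PATH_PREFIXES = (".claude/", ".claude-plugin/")

def should_scan(path: str) -> bool:
    """Skip framework files under safe prefixes; scan everything else."""
    p = path[2:] if path.startswith("./") else path
    return not p.startswith(SAFE_PATH_PREFIXES)

assert should_scan(".claude/hooks/block-secrets.py") is False  # framework file: skipped
assert should_scan("src/app/config.py") is True                # regular file: scanned
```

Because the check runs before the pattern match, framework files never trigger false-positive blocks, which is the noise the fix targets.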
📝 Walkthrough

This pull request eliminates the Curator agent and mem0-based memory system from the MAP Framework, reducing the specialized agent count from 12 to 11. Changes reflect this architectural simplification across plugin metadata, agent templates, command definitions, hooks, documentation, tests, and CLI references, with Reflector now handling pattern extraction without Curator mediation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (19)
.claude/skills/map-workflows-guide/resources/map-feature-deep-dive.md (1)
124-124: ⚠️ Potential issue | 🟡 Minor
Stale token-cost estimate — Curator removal should lower the figure.
The `~18K tokens` figure on line 124 was computed with Curator running per subtask. Now that Curator is removed (lines 71/74 correctly reflect `N × Reflector`), this estimate is higher than the new reality. Even for a PLANNED workflow, the two cost references should be consistent.

📝 Suggested update
```diff
-**Token usage:** ~18K tokens (full pipeline, 5 subtasks)
+**Token usage:** ~15K tokens (full pipeline, 5 subtasks)
```

(Adjust the figure to whatever the revised estimate is — the key point is it should be lower than the old Curator + Reflector cost.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/map-workflows-guide/resources/map-feature-deep-dive.md at line 124, The token-usage figure "~18K tokens" is stale because it included Curator per subtask; recalculate the pipeline token estimate with Curator removed (only N × Reflector per subtask) and replace the "~18K tokens" value and any PLANNED-workflow references so they match the new, lower estimate; ensure the text that mentions "Curator" and "Reflector" (and the token-usage callout) are consistent across the document.

.claude/references/step-state-schema.md (1)
56-72: ⚠️ Potential issue | 🟡 Minor
Same issue as the template counterpart: step 2.1 (CONTEXT_SEARCH) is absent and item numbering skips 8.
This file is the authoritative source for `.claude/` consumers. Both `map_orchestrator.py` (`"2.1": "CONTEXT_SEARCH"` in `STEP_PHASES`/`STEP_ORDER`) and `workflow-context-injector.py` (reminder for `step_id == "2.1"`) still operate on step 2.1, so it must appear here.

📝 Proposed fix (identical to template counterpart)
```diff
-7. `2.2` RESEARCH (conditional)
-9. `2.3` ACTOR
+7. `2.1` CONTEXT_SEARCH (conditional)
+8. `2.2` RESEARCH (conditional)
+9. `2.3` ACTOR
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/references/step-state-schema.md around lines 56 - 72, The step list is missing step "2.1 CONTEXT_SEARCH" and skips item numbering (no 8); update the linear step set in this schema to include "2.1 CONTEXT_SEARCH" between "2.0 XML_PACKET" and "2.2 RESEARCH" and renumber the subsequent items so the sequence is continuous; ensure the added step id matches the consumers that reference STEP_PHASES/STEP_ORDER and workflow-context-injector's step_id == "2.1".

src/mapify_cli/templates/references/step-state-schema.md (1)
56-72: ⚠️ Potential issue | 🟡 Minor
Step 2.1 (CONTEXT_SEARCH) is missing from the Step IDs list, and the item numbering has a gap.
Two issues with the "Step IDs (map-efficient)" section after removing MEM0_SEARCH:
1. **Missing step:** `map_orchestrator.py` still includes `"2.1": "CONTEXT_SEARCH"` in both `STEP_PHASES` and `STEP_ORDER`, and `workflow-context-injector.py` still emits a reminder for `step_id == "2.1"`. The schema omits this step entirely, making it inaccurate.

2. **Numbering gap:** The list jumps from item 7 to item 9 — item 8 is missing. This is a residual from the old numbering (2.1 was item 7, 2.2 was item 8).
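Drift like this is easy to catch mechanically: a few lines of script can verify that a numbered markdown list is contiguous. A hypothetical sketch (not a helper that exists in the repo):

```python
import re

def check_contiguous(lines):
    """Return the first (expected, found) mismatch in a numbered list, else None."""
    nums = [int(m.group(1)) for line in lines
            if (m := re.match(r"\s*(\d+)\.\s", line))]
    for expected, found in enumerate(nums, start=nums[0] if nums else 1):
        if found != expected:
            return expected, found
    return None

steps = [
    "6. `2.0` XML_PACKET",
    "7. `2.2` RESEARCH (conditional)",
    "9. `2.3` ACTOR",  # gap: item 8 is missing
]
print(check_contiguous(steps))  # reports the 8 -> 9 jump
```

Run over the "Step IDs" section, this would have flagged the missing item 8 the moment MEM0_SEARCH was deleted.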
📝 Proposed fix
```diff
-7. `2.2` RESEARCH (conditional)
-9. `2.3` ACTOR
+7. `2.1` CONTEXT_SEARCH (conditional)
+8. `2.2` RESEARCH (conditional)
+9. `2.3` ACTOR
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/references/step-state-schema.md` around lines 56 - 72, The Step IDs documentation omits step "2.1" (CONTEXT_SEARCH) and leaves a numbering gap; add an entry for `2.1 CONTEXT_SEARCH` into the Step IDs list between `2.0 XML_PACKET` and `2.2 RESEARCH` and renumber the subsequent list items so there is no gap (shift items after the insertion by +1 in the visible numbering), and make sure the text around conditional notes remains correct; also ensure the runtime constants match this doc by keeping `"2.1": "CONTEXT_SEARCH"` in `STEP_PHASES` and `STEP_ORDER` and preserving the `step_id == "2.1"` reminder in `workflow-context-injector.py` so code and schema are consistent.

.claude-plugin/marketplace.json (1)
13-13: ⚠️ Potential issue | 🟡 Minor
Stale agent count in the plugin description.
The `description` field still reads "12 specialized agents" while the `features` array on Line 34 was correctly updated to "11". With Curator removed, both should say 11.

🛠️ Proposed fix
```diff
-  "description": "Modular Agentic Planner - Cognitive architecture for AI agents inspired by prefrontal cortex functions. Orchestrates 12 specialized agents for development with automatic quality validation.",
+  "description": "Modular Agentic Planner - Cognitive architecture for AI agents inspired by prefrontal cortex functions. Orchestrates 11 specialized agents for development with automatic quality validation.",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude-plugin/marketplace.json at line 13, Update the plugin manifest to make the agent count consistent: change the "description" field string that currently says "12 specialized agents" to "11 specialized agents" to match the updated "features" array and removal of the Curator; ensure the phrase appears only once (in the description value) and preserve the rest of the description text and punctuation.

src/mapify_cli/templates/skills/map-workflows-guide/resources/agent-architecture.md (1)
57-92: ⚠️ Potential issue | 🟡 Minor
Agent numbering skips from `6` to `8`, leaving a ghost slot where Curator was.

The file now correctly describes 11 agents, but the numbering still has `**6. Reflector**` → `**8. Documentation-Reviewer**` → … → `**12. Final-Verifier**`. Renumber 8–12 to 7–11 to eliminate the gap and have the list end at 11, matching the "11 specialized agents" header.

♻️ Proposed renumbering
```diff
-**8. Documentation-Reviewer**
+**7. Documentation-Reviewer**
...
-**9. Debate-Arbiter**
+**8. Debate-Arbiter**
...
-**10. Synthesizer**
+**9. Synthesizer**
...
-**11. Research-Agent**
+**10. Research-Agent**
...
-**12. Final-Verifier**
+**11. Final-Verifier**
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/skills/map-workflows-guide/resources/agent-architecture.md` around lines 57 - 92, The numbered agent list has a gap: headings move from "**6. Reflector**" to "**8. Documentation-Reviewer**" through "**12. Final-Verifier**"; renumber the blocks labeled "Documentation-Reviewer", "Debate-Arbiter", "Synthesizer", "Research-Agent", and "Final-Verifier" to shift their prefixes down by one so they become "**7. Documentation-Reviewer**", "**8. Debate-Arbiter**", "**9. Synthesizer**", "**10. Research-Agent**", and "**11. Final-Verifier**" respectively, ensuring the list ends at 11 and matches the "11 specialized agents" header.

.claude/agents/evaluator.md (1)
1439-1448: ⚠️ Potential issue | 🟡 Minor
Missing checklist item 3 — numbering jumps from 2 to 4
A mem0-specific item was removed from the final checklist but the list was not renumbered, leaving a gap:
✏️ Proposed fix
```diff
 1. ✅ Did I use sequential thinking for quality analysis?
 2. ✅ Did I check review history for consistency with past scores?
-4. ✅ Are all scores (0-10) justified with specific evidence?
-5. ✅ Is overall_score calculated correctly using weighted formula?
-6. ✅ Is recommendation based on decision tree logic?
-7. ✅ Is distance_to_goal estimated realistically?
-8. ✅ Are strengths and weaknesses specific (not vague)?
-9. ✅ Are next_steps concrete and actionable (if not "proceed")?
-10. ✅ Is output valid JSON (no markdown, no extra text)?
-11. ✅ Did I list which MCP tools I used?
+3. ✅ Are all scores (0-10) justified with specific evidence?
+4. ✅ Is overall_score calculated correctly using weighted formula?
+5. ✅ Is recommendation based on decision tree logic?
+6. ✅ Is distance_to_goal estimated realistically?
+7. ✅ Are strengths and weaknesses specific (not vague)?
+8. ✅ Are next_steps concrete and actionable (if not "proceed")?
+9. ✅ Is output valid JSON (no markdown, no extra text)?
+10. ✅ Did I list which MCP tools I used?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/agents/evaluator.md around lines 1439 - 1448, The checklist numbering skips item 3 (items show "1.", "2.", "4.", etc.); update the enumerated list in evaluator.md so the sequence is consecutive by renumbering "4." through "11." to "3." through "10." (or remove the gap by reinserting the mem0-specific item if intended); locate the checklist block containing the lines starting with "1. ✅ Did I use sequential thinking..." and adjust the numeric prefixes for the subsequent lines to restore continuous numbering and consistency with any references to overall_score or checklist indices elsewhere.

docs/USAGE.md (1)
1367-1367: ⚠️ Potential issue | 🟡 Minor
Stale "12 agents" reference — should be 11 after Curator removal
This PR removes the Curator agent, reducing the agent count from 12 to 11. The `agent-architecture.md` resource description still says "12 agents".

✏️ Proposed fix
```diff
-- `agent-architecture.md` - How 12 agents orchestrate
+- `agent-architecture.md` - How 11 agents orchestrate
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/USAGE.md` at line 1367, The docs still refer to "12 agents" in the agent-architecture overview; update the description for agent-architecture.md to say "11 agents" (or remove the explicit count) to reflect Curator removal, and search for any other occurrences of "12 agents" in the docs to change them to "11 agents" or a non-count phrase; update the single instance in USAGE.md that currently reads "`agent-architecture.md` - How 12 agents orchestrate" to "`agent-architecture.md` - How 11 agents orchestrate" (or "How the agents orchestrate") so the text matches the codebase.

src/mapify_cli/templates/commands/map-learn.md (1)
97-134: ⚠️ Potential issue | 🟡 Minor
Schema mismatch: Reflector won't produce `patterns_used`, `patterns_discovered`, or `workflow_efficiency`

The Reflector prompt in Step 2 requests three fields not defined in `reflector.md`'s strict output schema, causing the Step 3 Summary Report to be unpopulated:
| Requested field | In `reflector.md` schema? | Impact in Summary Report |
| --- | --- | --- |
| `patterns_used` | ❌ No | Line 122 count always blank |
| `patterns_discovered` | ❌ No | Lines 123, 125–126 always blank |
| `workflow_efficiency` | ❌ No | Lines 131–134 block always blank |

Additionally, `suggested_new_bullets` in line 102 asks for a `rationale` field, but `reflector.md`'s schema uses `related_to` (an array of linked bullet IDs — semantically different from a rationale string).

The summary template should map to the fields the Reflector actually produces: `bullet_updates` (where `tag: 'helpful'`) as a proxy for applied patterns, `suggested_new_bullets` for discovered patterns, and removing the workflow efficiency block (no equivalent field exists).

🔧 Suggested alignment for the Reflector prompt output spec (lines 97–103) and Summary Report
```diff
 **Output JSON with:**
 - key_insight: string (one sentence takeaway for entire workflow)
-- patterns_used: array of strings (existing patterns applied successfully)
-- patterns_discovered: array of strings (new patterns worth preserving)
 - bullet_updates: array of {bullet_id, tag: 'helpful'|'harmful', reason}
-- suggested_new_bullets: array of {section, content, code_example, rationale}
-- workflow_efficiency: {total_iterations, avg_per_subtask, bottlenecks: array of strings}"
+- suggested_new_bullets: array of {section, content, code_example, related_to}"
```

And update the Summary Report to match:
```diff
 ### Reflector Insights
 - **Key Insight:** [key_insight from Reflector]
-- **Patterns Used:** [count] existing patterns applied successfully
-- **Patterns Discovered:** [count] new patterns identified
+- **Patterns Reinforced:** [count of bullet_updates with tag='helpful'] existing patterns applied successfully
+- **New Patterns Suggested:** [count of suggested_new_bullets] new patterns identified

-### Discovered Patterns
-[List each pattern from patterns_discovered with description]
+### Suggested New Patterns
+[List each entry from suggested_new_bullets with section and content]

 ### Suggested Improvements
-[List each suggested_new_bullet with section and rationale]
+[List each suggested_new_bullet with section and code_example]

-### Workflow Efficiency
-- **Total Iterations:** [total_iterations]
-- **Average per Subtask:** [avg_per_subtask]
-- **Bottlenecks:** [list bottlenecks]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/commands/map-learn.md` around lines 97 - 134, The Reflector output schema and the Step 2 prompt are misaligned with the Step 3 summary: update either reflector.md or src/mapify_cli/templates/commands/map-learn.md so the fields match. Option A: extend reflector.md's strict schema to emit patterns_used (array), patterns_discovered (array), and workflow_efficiency (object) plus change suggested_new_bullets to include rationale (string) instead of only related_to; Option B (preferred): change the Step 2 prompt in map-learn.md to request the actual fields reflector.md produces — treat bullet_updates with tag:'helpful' as patterns_used, use suggested_new_bullets (with its existing related_to semantics) for discovered patterns, and remove the workflow_efficiency block from the summary template. Adjust all references to patterns_used, patterns_discovered, workflow_efficiency, bullet_updates, suggested_new_bullets, and related_to so the schema and the Summary Report match.

.claude/agents/monitor.md (2)
2375-2381: ⚠️ Potential issue | 🟡 Minor
Example uses non-canonical tool names.

`mcp_tools_used` must use the enum values (`glob`, `read`, etc.), but the example shows `["Glob", "Read"]`.

✏️ Suggested fix
```diff
-  "mcp_tools_used": ["Glob", "Read"]
+  "mcp_tools_used": ["glob", "read"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/agents/monitor.md around lines 2375 - 2381, The example JSON uses non-canonical tool names in the mcp_tools_used array ("Glob", "Read"); update the example so mcp_tools_used contains the enum lowercase values (e.g., "glob", "read", etc.) to match the expected enum names; locate the example JSON near the mcp_tools_used field in .claude/agents/monitor.md and replace each capitalized entry with its corresponding lowercase enum string.
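A schema-side guard makes this class of mistake hard to miss. A hypothetical sketch — the enum below is an abbreviated assumption, not the full canonical list from the schema:

```python
# Hypothetical validator; CANONICAL_TOOLS is an assumed subset of the real enum.
CANONICAL_TOOLS = {"glob", "read", "grep", "request_review"}

def validate_tools_used(tools):
    """Reject any entry that is not an exact lowercase enum value."""
    invalid = [t for t in tools if t not in CANONICAL_TOOLS]
    if invalid:
        raise ValueError(f"non-canonical tool names: {invalid}")
    return tools

validate_tools_used(["glob", "read"])     # passes
# validate_tools_used(["Glob", "Read"])   # would raise ValueError
```

Exact-match membership (no case folding) is deliberate: silently normalizing `"Glob"` would hide the documentation bug rather than surface it.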
2429-2435: ⚠️ Potential issue | 🟡 Minor
Example mixes tool failures into `mcp_tools_used`.
`"request_review (timeout)"` is not a valid enum value. Timeouts should be recorded in `mcp_tools_failed`, with `mcp_tools_used` listing only successful tools.

✏️ Suggested fix
```diff
-  "mcp_tools_used": ["request_review (timeout)"]
+  "mcp_tools_used": [],
+  "mcp_tools_failed": ["request_review"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/agents/monitor.md around lines 2429 - 2435, The JSON output incorrectly places a timed-out tool entry ("request_review (timeout)") in mcp_tools_used; update the object so mcp_tools_used contains only successful tool enum values and move the timed-out entry into mcp_tools_failed (or replace it with the proper enum key in mcp_tools_failed), and add validation to ensure mcp_tools_used entries conform to the allowed enum (e.g., "request_review") while failures like timeouts are recorded in mcp_tools_failed; locate and change the keys "mcp_tools_used" and "mcp_tools_failed" in the affected JSON-producing logic to enforce this separation and enum validation.

docs/INSTALL.md (1)
218-221: ⚠️ Potential issue | 🟡 Minor
Update agent count to 11.
The manual structure still says “12 specialized agents,” but Curator was removed in this PR, so the count should be 11.
✏️ Suggested edit
```diff
-│ ├── agents/ # 12 specialized agents
+│ ├── agents/ # 11 specialized agents
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/INSTALL.md` around lines 218 - 221, Update the agents count text: locate the string "12 specialized agents" in docs/INSTALL.md (near the agents/ directory listing, e.g., the line showing "├── agents/ # 12 specialized agents") and change the numeric count to "11" to reflect removal of Curator so the header now reads "11 specialized agents".

docs/ARCHITECTURE.md (2)
548-569: ⚠️ Potential issue | 🟡 Minor
Update /map-review text to reflect fixed depth (no BIG/SMALL).

The section still references BIG/SMALL selection, but the workflow is now always thorough with 4 issues per section.

✏️ Proposed doc tweak
```diff
-4. **Review Section Protocol** — each section presents top N issues (BIG=4, SMALL=1) with options and tradeoffs, user picks resolution via AskUserQuestion
-5. **BIG/SMALL mode** — user selects review depth at start
+4. **Review Section Protocol** — each section presents the top 4 issues with options and tradeoffs, user picks resolution via AskUserQuestion
+5. **Fixed review depth** — no BIG/SMALL selection; always thorough mode
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ARCHITECTURE.md` around lines 548 - 569, Update the /map-review documentation in the ARCHITECTURE.md to remove references to the deprecated "BIG/SMALL mode" and clarify that the review depth is fixed to always present 4 issues per section; specifically edit the "Review-Specific Features" and "Review Section Protocol" blocks (which mention "BIG=4, SMALL=1" and "user selects review depth at start") to state the workflow is always thorough with 4 issues per section and remove any UI/flag language implying a depth choice, and ensure /map-review command text reflects the fixed-depth behavior and CI/auto mode remains unchanged.
999-1039: ⚠️ Potential issue | 🟡 Minor
Correct Synthesizer usage context (Self‑MoA, not /map-debate).

This section says Synthesizer is used in /map-debate, but earlier in this doc the DebateArbiter performs synthesis there.

✏️ Proposed fix
```diff
-**Usage Context:** Only invoked in `/map-debate` workflow after DebateArbiter completes cross-evaluation
+**Usage Context:** Invoked in `/map-efficient` Self‑MoA (3×Actor/3×Monitor → Synthesizer → final Monitor). `/map-debate` uses DebateArbiter for synthesis.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ARCHITECTURE.md` around lines 999 - 1039, The "Synthesizer" usage context is incorrect: change its Usage Context from "/map-debate" to Self‑MoA workflows and clarify it runs after DebateArbiter completes synthesis; update the paragraph that currently reads "Only invoked in /map-debate workflow after DebateArbiter completes cross-evaluation" to state the Self‑MoA context instead, remove the conflicting reference to /map-debate, and ensure the section for Synthesizer, DebateArbiter, and the /map-debate workflow consistently reflect that DebateArbiter handles synthesis in /map-debate while Synthesizer is used in Self‑MoA.

.claude/skills/map-workflows-guide/resources/agent-architecture.md (1)
59-92: ⚠️ Potential issue | 🟡 Minor
Numbered agent list skips `#7` and tops out at `#12` while the header claims 11 agents.

After removing Curator (formerly `#7`), the list jumps from `**6. Reflector**` directly to `**8. Documentation-Reviewer**`, then continues to `**12. Final-Verifier**`. The header on line 3 correctly states "11 specialized agents", but the highest ordinal is 12, which is confusing.

🔧 Suggested renumbering
```diff
-**8. Documentation-Reviewer**
+**7. Documentation-Reviewer**
-**9. Debate-Arbiter**
+**8. Debate-Arbiter**
-**10. Synthesizer**
+**9. Synthesizer**
-**11. Research-Agent**
+**10. Research-Agent**
-**12. Final-Verifier**
+**11. Final-Verifier**
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/map-workflows-guide/resources/agent-architecture.md around lines 59 - 92, The numbered agent list is inconsistent: after "**6. Reflector**" the next item is "**8. Documentation-Reviewer**" and the sequence ends at "**12. Final-Verifier**" while the header claims "11 specialized agents"; update the enumeration so ordinals are contiguous and match the header — either renumber the agent headings (Documentation-Reviewer, Debate-Arbiter, Synthesizer, Research-Agent, Final-Verifier) down by one so the list ends at 11, or change the header to reflect the actual count; ensure the unique agent headings are adjusted consistently throughout the section so numbering and header count agree.

.claude/commands/map-review.md (1)
66-83: ⚠️ Potential issue | 🟡 Minor
"Top 4 issues" (line 66) conflicts with "up to 4" (line 83).
The Review Section Protocol (line 66) instructs agents to "Present top 4 issues", implying exactly 4 are required. Line 83 says "up to 4 issues per section". In sparse sections (e.g., only 1 real performance finding), agents following the protocol literally will pad with low-signal issues.
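The "up to 4" semantics fall out of a plain slice: a short sketch (the issue structure is an assumption) showing that sparse sections yield fewer items rather than padded ones:

```python
def top_issues(issues, limit=4):
    """Take at most `limit` issues, highest severity first; never pad."""
    ranked = sorted(issues, key=lambda i: i["severity"], reverse=True)
    return ranked[:limit]

perf_section = [{"id": "PERF-1", "severity": 7}]  # only one real finding
print(len(top_issues(perf_section)))  # 1, not 4
```

Phrasing the protocol around this "at most N" behavior removes the incentive to invent low-signal issues just to hit a quota.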
🔧 Suggested fix
```diff
-1. **Present top 4 issues** from the primary source agent for this section, using the section prefix (e.g., ARCH-1, QUALITY-2, TESTS-1, PERF-3)
+1. **Present up to 4 issues** from the primary source agent for this section, using the section prefix (e.g., ARCH-1, QUALITY-2, TESTS-1, PERF-3)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/commands/map-review.md around lines 66 - 83, Update the contradictory wording so the review protocol consistently allows fewer than four issues: change the "Present top 4 issues" phrase to "Present up to 4 issues" (or similar) and ensure the document's behavior for sparse sections is explicit (i.e., do not pad with low-signal items and only list real findings). Edit the lines that contain the exact strings "Present top 4 issues" and "up to 4 issues per section" and add a short clarifying sentence under the Review Section Protocol (near "Step 0: Detect CI Mode") stating that when fewer than four high-signal issues exist, list only those issues and skip padding; keep CI detection logic unchanged.

.claude/agents/reflector.md (1)
66-74: ⚠️ Potential issue | 🟡 Minor
Minor inconsistency in PARTIAL score range after changing line 71.

Line 71 (changed) now defines PARTIAL as `(6-8)`, but two unchanged references still use the old `6-7.5` boundary:
- Line 66 (fast-path condition): `"Clear pass/fail (not partial 6-7.5)"` — a score of 7.6 would now be PARTIAL per the CLASSIFY step but would still qualify as "clear pass/fail" per this condition.
- Line 333 (Edge Case E3): `"IF all evaluator_scores between 6.0 and 7.5"` — the 7.5–8.0 range is now PARTIAL per CLASSIFY but falls outside E3's mediocre-score check entirely.
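All three references should agree on one boundary, and a single classification helper is the easiest way to keep them consistent. A hypothetical sketch using the new (6–8) boundary; the exact cutoffs and inclusivity are assumptions drawn from the review above:

```python
# Assumed boundary from the updated CLASSIFY step; 8.0 is treated as PASS here.
PARTIAL_LOW, PARTIAL_HIGH = 6.0, 8.0

def classify(score: float) -> str:
    """Classify an evaluator score as FAIL, PARTIAL, or PASS."""
    if score < PARTIAL_LOW:
        return "FAIL"
    if score < PARTIAL_HIGH:
        return "PARTIAL"
    return "PASS"

print(classify(7.6))  # PARTIAL under the new boundary, not a clear pass
```

With one function as the source of truth, the fast-path condition and E3 cannot silently drift apart the way the prose comments did.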
🔧 Suggested fix

```diff
-  - Clear pass/fail (not partial 6-7.5)
+  - Clear pass/fail (not partial 6–8)
```

And in E3 (line 333):
```diff
-IF all evaluator_scores between 6.0 and 7.5:
+IF all evaluator_scores between 6.0 and 8.0:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/agents/reflector.md around lines 66 - 74, The PARTIAL score range was changed to (6-8) in the CLASSIFY block but two other places still reference 6-7.5 causing inconsistency; update the fast-path comment that reads "Clear pass/fail (not partial 6-7.5)" to use the new boundary (6-8) and modify the E3 check that currently says "IF all evaluator_scores between 6.0 and 7.5" to match the CLASSIFY PARTIAL range (e.g., "between 6.0 and 8.0" or ">=6.0 and <8.0") so CLASSIFY, PARTIAL, and the E3 evaluator_scores check are consistent.

.claude/skills/map-workflows-guide/SKILL.md (1)
217-233: ⚠️ Potential issue | 🟡 Minor
Agent count appears outdated after Curator removal.

This section still states 12 agents; the current composition is 11. Please update the count to avoid confusion.

📝 Suggested fix
```diff
-MAP workflows orchestrate **12 specialized agents**, each with specific responsibilities:
+MAP workflows orchestrate **11 specialized agents**, each with specific responsibilities:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/map-workflows-guide/SKILL.md around lines 217 - 233, Update the agent count in the "Understanding MAP Agents" header and any nearby references from "12" to "11" to reflect removal of the Curator agent; e.g., edit the section title/first line that currently reads "MAP workflows orchestrate **12 specialized agents**" so it reads "11", and scan for any other occurrences in this block (including lines describing Execution & Validation Agents or mentions near "TaskDecomposer" and "Actor") to keep the documentation consistent.

src/mapify_cli/templates/agents/monitor.md (2)
2385-2435: ⚠️ Potential issue | 🟡 Minor
Timeout example should use schema-valid tool fields.

`"request_review (timeout)"` isn't a valid enum value. Prefer listing it under `mcp_tools_failed` and keep `mcp_tools_used` schema-compliant.

📝 Suggested fix
```diff
-  "mcp_tools_used": ["request_review (timeout)"]
+  "mcp_tools_used": [],
+  "mcp_tools_failed": ["request_review"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/agents/monitor.md` around lines 2385 - 2435, The JSON metadata in the monitor template uses an invalid enum value "request_review (timeout)" in mcp_tools_used; update the JSON so mcp_tools_used contains only schema-valid tool names and move the timeout entry into a new or existing mcp_tools_failed array (e.g., "request_review" in mcp_tools_used and "request_review" with status "timeout" or just "request_review (timeout)" in mcp_tools_failed) inside src/mapify_cli/templates/agents/monitor.md; locate and edit the JSON block around the Example 4 snippet (symbols: mcp_tools_used, mcp_tools_failed) to ensure it conforms to the schema and preserves the feedback text.
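The separation rule behind both monitor findings can be enforced in one place: successes and failures must be disjoint and every entry must be a canonical enum value. A hypothetical sketch (field names match the review; the enum is an assumed subset):

```python
CANONICAL_TOOLS = {"glob", "read", "grep", "request_review"}  # assumed subset

def check_tool_fields(output: dict) -> list[str]:
    """Return a list of schema problems in mcp_tools_used / mcp_tools_failed."""
    used = output.get("mcp_tools_used", [])
    failed = output.get("mcp_tools_failed", [])
    problems = [f"invalid name: {t}" for t in used + failed
                if t not in CANONICAL_TOOLS]
    overlap = set(used) & set(failed)
    if overlap:
        problems.append(f"listed as both used and failed: {sorted(overlap)}")
    return problems

bad = {"mcp_tools_used": ["request_review (timeout)"]}
good = {"mcp_tools_used": [], "mcp_tools_failed": ["request_review"]}
print(check_tool_fields(bad))   # flags the annotated name
print(check_tool_fields(good))  # no problems
```

Running a check like this over the documentation examples would have caught both the capitalized names and the `(timeout)` annotation before they shipped.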
2353-2380: ⚠️ Potential issue | 🟡 Minor
Example uses non-canonical tool names.

The schema enumerates lowercase tool names; the example should match to avoid invalid outputs.

📝 Suggested fix
```diff
-  "mcp_tools_used": ["Glob", "Read"]
+  "mcp_tools_used": ["glob", "read"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/mapify_cli/templates/agents/monitor.md` around lines 2353 - 2380, the example JSON lists capitalized tool names ("Glob", "Read") in mcp_tools_used while the schema enumerates lowercase values; update the example so mcp_tools_used uses the canonical lowercase enum names (e.g., "glob", "read") and scan the surrounding examples for any other capitalized entries so the documented outputs are schema-valid.
🧹 Nitpick comments (10)
.claude/commands/map-learn.md (1)
97-103: `bullet_updates` is an orphaned output field — wastes tokens with no consumer.

With Curator removed, nothing reads or renders `bullet_updates` (array of `{bullet_id, tag: 'helpful'|'harmful', reason}`). Every other field in this JSON spec (`key_insight`, `patterns_used`, `patterns_discovered`, `suggested_new_bullets`, `workflow_efficiency`) is surfaced in Step 3's Summary Report, but `bullet_updates` is silently dropped. This causes the Reflector to spend tokens generating a structured list that is immediately discarded.

♻️ Proposed fix — remove orphaned field
```diff
 **Output JSON with:**
 - key_insight: string (one sentence takeaway for entire workflow)
 - patterns_used: array of strings (existing patterns applied successfully)
 - patterns_discovered: array of strings (new patterns worth preserving)
-- bullet_updates: array of {bullet_id, tag: 'helpful'|'harmful', reason}
 - suggested_new_bullets: array of {section, content, code_example, rationale}
 - workflow_efficiency: {total_iterations, avg_per_subtask, bottlenecks: array of strings}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/commands/map-learn.md around lines 97 - 103, The JSON output spec includes an orphaned field "bullet_updates" (array of {bullet_id, tag, reason}) that is no longer consumed after Curator was removed; remove "bullet_updates" from the output specification in .claude/commands/map-learn.md and from any producer code that populates it (e.g., the Reflector that currently emits it), and ensure Step 3's Summary Report continues to reference only key_insight, patterns_used, patterns_discovered, suggested_new_bullets, and workflow_efficiency so tokens are not wasted generating an unused structure.

src/mapify_cli/templates/skills/map-cli-reference/scripts/check-command.sh (1)
39-59: Consider bash arrays for command lists.
`REMOVED_COMMANDS` and `VALID_COMMANDS` are plain strings iterated by unquoted word-splitting. This works for current values but is fragile against future entries that might include spaces or accidental glob characters. Bash arrays are the idiomatic fix.

♻️ Proposed refactor to use bash arrays
```diff
-# Removed subcommands
-REMOVED_COMMANDS="playbook"
+# Removed subcommands
+REMOVED_COMMANDS=("playbook")

 # Known valid commands
-VALID_COMMANDS="init check upgrade validate"
+VALID_COMMANDS=("init" "check" "upgrade" "validate")

 # Check removed commands first
-for dep in $REMOVED_COMMANDS; do
+for dep in "${REMOVED_COMMANDS[@]}"; do
   if [ "$SUBCOMMAND" = "$dep" ]; then
     echo "ERROR: '$SUBCOMMAND' was removed in v4.0+"
     exit 2
   fi
 done

 # Check valid commands
 FOUND=0
-for cmd in $VALID_COMMANDS; do
+for cmd in "${VALID_COMMANDS[@]}"; do
   if [ "$SUBCOMMAND" = "$cmd" ]; then
     FOUND=1
     break
   fi
 done
```

Also update the "Available commands" echo to use the array:
```diff
-  echo "Available commands: $VALID_COMMANDS"
+  echo "Available commands: ${VALID_COMMANDS[*]}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/skills/map-cli-reference/scripts/check-command.sh` around lines 39 - 59, The loops currently iterate over REMOVED_COMMANDS and VALID_COMMANDS as unquoted space-separated strings, which is fragile; change those variables to bash arrays (e.g., REMOVED_COMMANDS=( "playbook" ) and VALID_COMMANDS=( "init" "check" "upgrade" "validate" )), update the for-loops to iterate over the arrays (for dep in "${REMOVED_COMMANDS[@]}"; do ... done and for cmd in "${VALID_COMMANDS[@]}"; do ... done), and update any uses that print the available commands (the echo that lists commands) to join or expand VALID_COMMANDS via "${VALID_COMMANDS[*]}" so SUBCOMMAND checks and the FOUND logic keep working unchanged.

src/mapify_cli/templates/skills/map-workflows-guide/resources/map-feature-deep-dive.md (1)
55-62: Consider documenting the new pattern-transmission mechanism.
The old flow was explicit: Reflector wrote to mem0, and Actor queried mem0. With mem0 removed, the diagram now simply says "Actor uses learned pattern" without specifying how. Even a brief annotation (e.g., `↓ pattern passed in context window`) would preserve the same level of clarity for implementers when this workflow is eventually built.

📝 Suggested clarification
```diff
 Reflector: "JWT secret storage pattern"
-  ↓ pattern extracted
+  ↓ pattern extracted (retained in context)

 Subtask 2: Implement JWT validation
   ↓ starts
-Actor uses learned pattern
+Actor uses learned pattern (from context)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/skills/map-workflows-guide/resources/map-feature-deep-dive.md` around lines 55 - 62, The diagram lost the explicit handoff between Reflector and Actor after removing mem0; update the workflow around the "Reflector: \"JWT secret storage pattern\"" and "Actor uses learned pattern" nodes to explicitly show how the pattern is transmitted (e.g., annotate the arrow with "↓ pattern passed in context window" or "↓ pattern injected into task context"), and mention the replacement mechanism (context window, explicit payload field, or ephemeral store) so implementers can see how Subtask 1 → Subtask 2 pattern transmission occurs.

.claude/skills/map-workflows-guide/resources/map-feature-deep-dive.md (1)
223-225: Troubleshooting solution lacks an actionable target after Curator removal.
Previously the fix was to prune mem0 entries. Now that Curator/mem0 is gone, "review and deduplicate patterns more aggressively" has no clear target. The reader doesn't know where Reflector writes patterns or how to deduplicate them under the new architecture.
Consider pointing to the concrete artifact (e.g., Reflector output file, context document, or the new storage mechanism) so the guidance remains actionable.
📝 Example update
```diff
 **Issue:** Too many patterns created
 **Cause:** Reflector suggesting redundant patterns
-**Solution:** Review and deduplicate patterns more aggressively
+**Solution:** Review Reflector output and remove redundant patterns before committing
```

(Adjust to match where Reflector actually writes patterns in the new architecture.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/map-workflows-guide/resources/map-feature-deep-dive.md around lines 223 - 225, Update the troubleshooting section to name the concrete artifact and steps to deduplicate now that Curator/mem0 is removed: explicitly state where Reflector writes patterns in the new architecture (e.g., "Reflector output file", "context document", or the specific storage mechanism used by Reflector), and instruct the reader to open that artifact, identify duplicate pattern entries, and run the deduplication step (for example, delete duplicates, merge contexts, or run the Reflector-provided dedupe utility against the named file/document). Reference the components by name (Reflector, Curator, mem0, patterns) so readers can locate the output and apply the deduplication action against the correct artifact.

.claude/hooks/safety-guardrails.py (1)
29-30: Narrowed patterns miss some non-standard secret file extensions.
`r"passwords?\.(json|ya?ml|toml|txt)$"` and `r"tokens?\.(json|ya?ml|toml|txt)$"` leave gaps for formats like `api_password.sh`, `token.csv`, or `token.db`. Note that `api_token.json` is still caught because `re.search` finds the `token.json` substring — only files where the token/password word is not directly followed by the listed extension are missed.

This is an intentional trade-off (avoiding false positives on source files), but worth documenting in the pattern comment so future contributors understand the deliberate scope limitation.
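As a quick sanity check, here is a minimal sketch of how the narrowed patterns behave under `re.search` (pattern strings copied from the comment above; the `flagged` helper is illustrative, not the hook's actual code):

```python
import re

# Narrowed patterns as discussed above.
PATTERNS = [
    re.compile(r"passwords?\.(json|ya?ml|toml|txt)$", re.IGNORECASE),
    re.compile(r"tokens?\.(json|ya?ml|toml|txt)$", re.IGNORECASE),
]

def flagged(path: str) -> bool:
    """True if any pattern matches somewhere in the path (re.search semantics)."""
    return any(p.search(path) for p in PATTERNS)

# Substring match: "token.json" appears inside "api_token.json", so it is caught.
assert flagged("config/api_token.json")
# Word not followed by a listed extension: these slip through the narrowed patterns.
assert not flagged("scripts/api_password.sh")
assert not flagged("data/token.csv")
# Source files named after the concept are no longer false positives.
assert not flagged("src/token_manager.py")
```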
💬 Suggested comment improvement
```diff
-    r"passwords?\.(json|ya?ml|toml|txt)$",  # password files, not any file with "password" in path
-    r"tokens?\.(json|ya?ml|toml|txt)$",  # token files, not any file with "token" in path
+    r"passwords?\.(json|ya?ml|toml|txt)$",  # password data files (.sh/.py/.csv intentionally excluded to avoid false positives)
+    r"tokens?\.(json|ya?ml|toml|txt)$",  # token data files (.sh/.py/.csv intentionally excluded to avoid false positives)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/hooks/safety-guardrails.py around lines 29 - 30, Update the comment above the secret-file regexes to document the deliberate scope limitation: explain that the patterns r"passwords?\.(json|ya?ml|toml|txt)$" and r"tokens?\.(json|ya?ml|toml|txt)$" intentionally only match common extensions to reduce false positives and therefore will not catch unconventional filenames like api_password.sh, token.csv, token.db, etc.; explicitly note that re.search may still match substrings like api_token.json but that filenames where the secret word is not directly followed by one of the listed extensions are intentionally excluded.

src/mapify_cli/templates/hooks/safety-guardrails.py (1)
49-53: Consider narrowing or documenting why `"scripts/"` is unconditionally safe.
Every other added prefix targets a specific framework directory under `.claude/`, but `"scripts/"` is a generic, project-level directory. A file at `scripts/tokens.json` or `scripts/deploy.env` would bypass all `DANGEROUS_FILE_PATTERNS` checks, potentially conflicting with the guideline to avoid reading/writing credential files.

Either document the assumption (e.g., "MAP framework scripts only — no secrets stored here") or replace the broad prefix with the specific subdirectory that actually needs exemption (e.g., `.map/scripts/`).

♻️ Proposed narrowing
```diff
 SAFE_PATH_PREFIXES = [
     "src/", "lib/", "test/", "tests/", "docs/", "pkg/", "cmd/", "internal/",
     ".claude/agents/", ".claude/commands/", ".claude/hooks/", ".claude/references/",
-    ".claude/skills/", "scripts/",
+    ".claude/skills/", ".map/scripts/",
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mapify_cli/templates/hooks/safety-guardrails.py` around lines 49 - 53, The SAFE_PATH_PREFIXES array currently whitelists the generic "scripts/" prefix which is broad and may exempt sensitive files; update SAFE_PATH_PREFIXES to either replace "scripts/" with a more specific safe subpath (e.g., ".map/scripts/" or the exact directory your framework uses) or add an inline comment/documentation next to SAFE_PATH_PREFIXES explaining why "scripts/" is considered safe and what conventions ensure no secrets are stored there, referencing the SAFE_PATH_PREFIXES symbol so reviewers can find and verify the change.

.claude/agents/evaluator.md (1)
1195-1195: Inconsistent tool abbreviation strategy in `mcp_tools_used` examples
The `mcp_tools_used` examples use different abbreviation levels: `"sequentialthinking"` and `"get-library-docs"` use the function name, while `"deepwiki"` uses the server name. Adopting a consistent strategy (e.g., always use server name, or always use function name) reduces ambiguity for agents learning the output format from examples.

Also applies to: 1248-1248, 1304-1304, 1361-1361, 1426-1426
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/agents/evaluator.md at line 1195, Update the inconsistent mcp_tools_used examples to use a single abbreviation strategy (either always use the function name or always use the server name); locate the occurrences where mcp_tools_used lists "sequentialthinking", "get-library-docs", and "deepwiki" and replace them so they all follow the chosen convention (e.g., change "deepwiki" to "deep-wiki" or change "get-library-docs" to the server name), and apply the same change to the other occurrences noted (near the other examples) so every mcp_tools_used entry is consistently named across the file.

src/mapify_cli/templates/agents/reflector.md (1)
25-34: Fenced code block is missing a language specifier (MD040)
The decision-tree pseudocode block has no language tag. Adding `text` satisfies the markdownlint rule and prevents rendering ambiguity.

✏️ Proposed fix
````diff
-```
+```text
 1. Complex failure with multiple causes? → sequential-thinking for root cause analysis
 ...
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @src/mapify_cli/templates/agents/reflector.md around lines 25 - 34, The fenced code block containing the decision-tree pseudocode (the block that begins with "1. Complex failure with multiple causes? → sequential-thinking for root cause analysis") is missing a language tag; add a language specifier (e.g., text) to the opening triple-backtick so the block becomes ```text and conforms to markdownlint MD040, preserving the existing block contents and formatting in reflector.md.

docs/USAGE.md (1)

1839-1843: MD028: blank line inside blockquote may split it into two separate blockquotes
Some Markdown parsers treat a blank line inside a `>` block as ending the blockquote, which breaks the visual grouping. Replace the bare blank line with a `> ` continuation marker.
✏️ Proposed fix

```diff
 > **Note:** Claude Code executes all matching hooks in parallel. Each hook's `additionalContext` output is concatenated and added to the prompt. The order is not guaranteed, but both enhancements are applied.
-
+>
 > **Implementation detail:** Prompt improvement, pattern injection, and workflow suggestions are handled within the `improve-prompt.py` hook (`.claude/hooks/improve-prompt.py`).
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @docs/USAGE.md around lines 1839 - 1843, The blockquote contains a bare blank line that may terminate the quote; update the blockquote so the blank line is prefixed with a continuation marker (add "> " before the empty line) so the two sentences remain a single blockquote, i.e., ensure the line between "The order is not guaranteed, but both enhancements are applied." and "Implementation detail: Prompt improvement..." starts with "> " to keep them together.

docs/WORKFLOW_FLOW.md (1)

133-136: Remove the duplicate box border line.
There are two consecutive top border lines before the "Turn 5" block, which breaks the diagram formatting.

✏️ Suggested fix

```diff
-┌─────────────────────────────────────────────────────┐
-┌─────────────────────────────────────────────────────┐
+┌─────────────────────────────────────────────────────┐
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @docs/WORKFLOW_FLOW.md around lines 133 - 136, Remove the duplicate top border line that appears immediately before the "Turn 5: get_next_step → step_id=2.3, ACTOR" block: locate the two consecutive lines starting with "┌─────────────────────────────────────────────────────┐" immediately above the "Turn 5" box and delete the extra one so only a single top border precedes the block, ensuring the ASCII diagram renders correctly.

---

ℹ️ Review info

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 656dca55bced34c1983158d5b7bef2c2605d2eff and f9a6f3f043c0816c5f924ea52eba1fee1eb0a95d.
📒 Files selected for processing (86)

* `.claude-plugin/PLUGIN.md`
* `.claude-plugin/marketplace.json`
* `.claude-plugin/plugin.json`
* `.claude/agents/actor.md`
* `.claude/agents/documentation-reviewer.md`
* `.claude/agents/evaluator.md`
* `.claude/agents/monitor.md`
* `.claude/agents/predictor.md`
* `.claude/agents/reflector.md`
* `.claude/agents/research-agent.md`
* `.claude/agents/task-decomposer.md`
* `.claude/commands/map-debate.md`
* `.claude/commands/map-debug.md`
* `.claude/commands/map-efficient.md`
* `.claude/commands/map-fast.md`
* `.claude/commands/map-learn.md`
* `.claude/commands/map-release.md`
* `.claude/commands/map-review.md`
* `.claude/hooks/block-secrets.py`
* `.claude/hooks/safety-guardrails.py`
* `.claude/hooks/workflow-context-injector.py`
* `.claude/playbook.json.backup.20251028_160602`
* `.claude/references/mcp-usage-examples.md`
* `.claude/references/step-state-schema.md`
* `.claude/references/workflow-state-schema.md`
* `.claude/skills/map-cli-reference/SKILL.md`
* `.claude/skills/map-cli-reference/scripts/check-command.sh`
* `.claude/skills/map-workflows-guide/SKILL.md`
* `.claude/skills/map-workflows-guide/resources/agent-architecture.md`
* `.claude/skills/map-workflows-guide/resources/map-debug-deep-dive.md`
* `.claude/skills/map-workflows-guide/resources/map-efficient-deep-dive.md`
* `.claude/skills/map-workflows-guide/resources/map-fast-deep-dive.md`
* `.claude/skills/map-workflows-guide/resources/map-feature-deep-dive.md`
* `.claude/skills/map-workflows-guide/resources/map-refactor-deep-dive.md`
* `.claude/skills/skill-rules.json`
* `.gitignore`
* `.map/scripts/map_orchestrator.py`
* `CLAUDE.md`
* `IMPLEMENTATION_SUMMARY.md`
* `docs/ARCHITECTURE.md`
* `docs/CLI_COMMAND_REFERENCE.md`
* `docs/CLI_REFERENCE.json`
* `docs/COMPLETE_WORKFLOW.md`
* `docs/INSTALL.md`
* `docs/USAGE.md`
* `docs/WORKFLOW_FLOW.md`
* `requirements-semantic.txt`
* `src/mapify_cli/__init__.py`
* `src/mapify_cli/templates/CLAUDE.md`
* `src/mapify_cli/templates/agents/actor.md`
* `src/mapify_cli/templates/agents/documentation-reviewer.md`
* `src/mapify_cli/templates/agents/evaluator.md`
* `src/mapify_cli/templates/agents/monitor.md`
* `src/mapify_cli/templates/agents/predictor.md`
* `src/mapify_cli/templates/agents/reflector.md`
* `src/mapify_cli/templates/agents/research-agent.md`
* `src/mapify_cli/templates/agents/task-decomposer.md`
* `src/mapify_cli/templates/commands/map-debate.md`
* `src/mapify_cli/templates/commands/map-debug.md`
* `src/mapify_cli/templates/commands/map-efficient.md`
* `src/mapify_cli/templates/commands/map-fast.md`
* `src/mapify_cli/templates/commands/map-learn.md`
* `src/mapify_cli/templates/commands/map-release.md`
* `src/mapify_cli/templates/commands/map-review.md`
* `src/mapify_cli/templates/hooks/block-secrets.py`
* `src/mapify_cli/templates/hooks/safety-guardrails.py`
* `src/mapify_cli/templates/hooks/workflow-context-injector.py`
* `src/mapify_cli/templates/map/scripts/map_orchestrator.py`
* `src/mapify_cli/templates/references/mcp-usage-examples.md`
* `src/mapify_cli/templates/references/step-state-schema.md`
* `src/mapify_cli/templates/references/workflow-state-schema.md`
* `src/mapify_cli/templates/skills/map-cli-reference/SKILL.md`
* `src/mapify_cli/templates/skills/map-cli-reference/scripts/check-command.sh`
* `src/mapify_cli/templates/skills/map-workflows-guide/SKILL.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/agent-architecture.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/map-debug-deep-dive.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/map-efficient-deep-dive.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/map-fast-deep-dive.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/map-feature-deep-dive.md`
* `src/mapify_cli/templates/skills/map-workflows-guide/resources/map-refactor-deep-dive.md`
* `src/mapify_cli/templates/skills/skill-rules.json`
* `tests/test_agent_cli_correctness.py`
* `tests/test_command_templates.py`
* `tests/test_mapify_cli.py`
* `tests/test_template_sync.py`
* `tests/test_workflow_gate.py`

💤 Files with no reviewable changes (11)

* src/mapify_cli/templates/skills/map-workflows-guide/resources/map-refactor-deep-dive.md
* .claude/skills/map-workflows-guide/resources/map-refactor-deep-dive.md
* .claude/agents/research-agent.md
* src/mapify_cli/templates/agents/research-agent.md
* .claude/agents/documentation-reviewer.md
* src/mapify_cli/templates/CLAUDE.md
* CLAUDE.md
* .gitignore
* tests/test_mapify_cli.py
* src/mapify_cli/templates/agents/documentation-reviewer.md
* requirements-semantic.txt
```python
SAFE_PATH_PREFIXES = [
    ".claude/hooks/",
    ".claude/agents/",
    ".claude/commands/",
    ".claude/references/",
    ".claude/skills/",
    "src/",
    "tests/",
    "docs/",
    "scripts/",
]
```
src/, tests/, docs/, and scripts/ are overly broad safe-path prefixes.
The .claude/ subdirectory entries are precise and appropriate for the stated goal of preventing false-positive blocks on framework files. However, src/, tests/, docs/, and scripts/ are generic project directories where real secrets can legitimately live (e.g., src/.env, scripts/aws_credentials, tests/fixtures/private_key.pem). Any such files would silently bypass the hook.
The coding guideline for src/mapify_cli/templates/** already explicitly mandates avoiding .env* and credential/key files — accepting everything under src/ in the hook undermines that protection.
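For illustration, a stripped-down stand-in for the early-return logic makes the bypass concrete (names and patterns are simplified placeholders, not the hook's actual code, which matches regexes):

```python
# Illustrative stand-in for the hook's early-return logic; the real hook
# matches regex patterns, but the control flow is the point here.
SAFE_PATH_PREFIXES = [".claude/hooks/", "src/", "tests/", "docs/", "scripts/"]
SENSITIVE_SUFFIXES = (".env", "credentials.json", "id_rsa")

def is_blocked(rel_path: str) -> bool:
    if any(rel_path.startswith(p) for p in SAFE_PATH_PREFIXES):
        return False  # early return: sensitive-pattern checks never run
    return any(rel_path.endswith(s) for s in SENSITIVE_SUFFIXES)

assert is_blocked("config/.env")                          # caught outside safe prefixes
assert not is_blocked("src/.env")                         # silently bypassed via "src/"
assert not is_blocked("scripts/id_rsa")                   # likewise
assert not is_blocked(".claude/hooks/improve-prompt.py")  # the intended exemption
```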
Consider constraining the safe prefixes to the actual framework paths that were causing false positives (the .claude/ hierarchy is sufficient), and remove the generic directory entries:
🔒 Proposed fix
```diff
 SAFE_PATH_PREFIXES = [
     ".claude/hooks/",
     ".claude/agents/",
     ".claude/commands/",
     ".claude/references/",
     ".claude/skills/",
-    "src/",
-    "tests/",
-    "docs/",
-    "scripts/",
 ]
```

If specific non-`.claude/` paths were triggering false positives, prefer adding those exact paths (e.g., `"src/mapify_cli/templates/hooks/"`) rather than an entire top-level directory. As per coding guidelines, files under `src/mapify_cli/templates/**` must avoid exposing `.env*` and credential/key files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/hooks/block-secrets.py around lines 63 - 73, SAFE_PATH_PREFIXES
currently includes overly broad entries ("src/", "tests/", "docs/", "scripts/")
that will bypass secret checks for many unrelated project files; update the
SAFE_PATH_PREFIXES definition in .claude/hooks/block-secrets.py to remove those
generic top-level prefixes and only keep the precise framework paths (the
existing ".claude/..." entries), or if needed add narrowly scoped allowed paths
(e.g., "src/mapify_cli/templates/hooks/") instead of whole directories; modify
the SAFE_PATH_PREFIXES constant to reflect these narrowed prefixes so legitimate
files don't silently bypass the hook.
**Full pipeline (theoretical baseline):**

```diff
-Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
-Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
-Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
+Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector
+Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector
+Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector
```
Add a language tag to fenced code block.
markdownlint flags this block for missing language (MD040).
✏️ Suggested fix
````diff
-```
+```text
 Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector
 Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector
 Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector
````
📝 Committable suggestion

> ‼️ IMPORTANT
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

````suggestion
**Full pipeline (theoretical baseline):**
```text
Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector
Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector
Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector
```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 40-40: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/map-workflows-guide/resources/map-efficient-deep-dive.md
around lines 39 - 44, The fenced code block showing the pipeline lines "Subtask
1 → Actor → Monitor → Predictor → Evaluator → Reflector" (and the two following
lines) is missing a language tag and triggers markdownlint MD040; update the
opening fence from ``` to a labeled fence such as ```text (or ```console) so the
block becomes a labeled code block, keeping the same content and closing fence
unchanged to satisfy the linter.
```diff
 ## Example 2: Modification to Existing Code

 **Subtask**: "Add rate limiting to existing API endpoint"

-**Approach**: Token bucket rate limiting using existing Redis connection. 100 req/min per IP. mcp__mem0__map_tiered_search found impl-0089 (Redis patterns).
+**Approach**: Token bucket rate limiting using existing Redis connection. 100 req/min per IP. Using standard Redis rate limiting patterns.
```
Use hyphenation for the compound adjective “rate‑limiting.”
Minor language polish to match the compound-adjective rule.
✏️ Suggested fix
```diff
-**Approach**: Token bucket rate limiting using existing Redis connection. 100 req/min per IP. Using standard Redis rate limiting patterns.
+**Approach**: Token bucket rate-limiting using existing Redis connection. 100 req/min per IP. Using standard Redis rate-limiting patterns.
```
+**Approach**: Token bucket rate-limiting using existing Redis connection. 100 req/min per IP. Using standard Redis rate-limiting patterns.🧰 Tools
🪛 LanguageTool
[uncategorized] ~824-~824: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...00 req/min per IP. Using standard Redis rate limiting patterns. Code Changes: ```python...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mapify_cli/templates/agents/actor.md` around lines 820 - 825, Replace
unhyphenated uses of the compound adjective "rate limiting" with
"rate-limiting": change "Add rate limiting to existing API endpoint" to "Add
rate-limiting to existing API endpoint", change "Token bucket rate limiting
using existing Redis connection" to "Token-bucket rate-limiting using existing
Redis connection" (or at minimum "Token-bucket rate-limiting"), and change "rate
limiting patterns" to "rate-limiting patterns" in the Example 2 block (look for
the strings "Add rate limiting to existing API endpoint", "Token bucket rate
limiting", and "rate limiting patterns").
**Category A: Data Completeness** (pick highest applicable)

```diff
-+0.20: mem0 returned comprehensive patterns for this change type
-+0.10: mem0 returned partial/similar patterns
-+0.00: No query made (default for Tier 1)
--0.15: mem0 queried but no relevant data found
++0.20: Comprehensive data found for this change type
++0.10: Partial/similar patterns found
++0.00: No additional context available (default for Tier 1)
+-0.15: Queried but no relevant data found
```

**Category B: Tool Agreement** (pick one)

```diff
-+0.15: mem0 + grep results match (same usages found)
++0.15: Multiple verification methods match (same usages found)
 +0.05: Only one tool used, results clear
--0.10: mem0 and grep conflict (investigate before proceeding)
+-0.10: Tools conflict (investigate before proceeding)
```
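The two categories read as adjustments applied on top of a base confidence; a small sketch of that arithmetic (the 0.50 base is an assumed placeholder for illustration, not a value taken from predictor.md):

```python
# Hypothetical additive scoring over the two categories shown above.
CATEGORY_A = {  # data completeness (pick highest applicable)
    "comprehensive": 0.20,
    "partial": 0.10,
    "none": 0.00,
    "queried_no_data": -0.15,
}
CATEGORY_B = {  # tool agreement (pick one)
    "tools_agree": 0.15,
    "single_tool_clear": 0.05,
    "tools_conflict": -0.10,
}

def confidence(base: float, a: str, b: str) -> float:
    """Sum the base confidence with one adjustment from each category."""
    return round(base + CATEGORY_A[a] + CATEGORY_B[b], 2)

assert confidence(0.50, "comprehensive", "tools_agree") == 0.85
assert confidence(0.50, "queried_no_data", "tools_conflict") == 0.25
```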
Add language identifiers to fenced blocks (MD040).
The Category A/B fences are missing a language spec, which triggers markdownlint.
✏️ Proposed fix
````diff
-```
+```text
 +0.20: Comprehensive data found for this change type
 +0.10: Partial/similar patterns found
 +0.00: No additional context available (default for Tier 1)
 -0.15: Queried but no relevant data found
 ```

-```
+```text
 +0.15: Multiple verification methods match (same usages found)
 +0.05: Only one tool used, results clear
 -0.10: Tools conflict (investigate before proceeding)
 ```
````

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
````suggestion
**Category A: Data Completeness** (pick highest applicable)
```text
+0.20: Comprehensive data found for this change type
+0.10: Partial/similar patterns found
+0.00: No additional context available (default for Tier 1)
-0.15: Queried but no relevant data found
```

**Category B: Tool Agreement** (pick one)
```text
+0.15: Multiple verification methods match (same usages found)
+0.05: Only one tool used, results clear
-0.10: Tools conflict (investigate before proceeding)
```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 1771-1771: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 1779-1779: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mapify_cli/templates/agents/predictor.md` around lines 1770 - 1783, Add
explicit language identifiers to the two fenced code blocks under the "Category
A: Data Completeness" and "Category B: Tool Agreement" sections to satisfy
MD040; specifically, change the triple-backtick fences that currently have none
to use ```text for both blocks so the blocks immediately following the "Category
A: Data Completeness" header and the "Category B: Tool Agreement" header are
fenced as ```text ... ``` instead of just ``` ... ```.
```python
SAFE_PATH_PREFIXES = [
    ".claude/hooks/",
    ".claude/agents/",
    ".claude/commands/",
    ".claude/references/",
    ".claude/skills/",
    "src/",
    "tests/",
    "docs/",
    "scripts/",
]


def is_sensitive_file(file_path: str) -> bool:
    """Check if file path matches any sensitive file pattern.

    Checks ALL path components (not just filename) to catch patterns
    in directory names or parent paths. Skips files in known safe
    directories (hooks, agents, source code, tests, etc.)
    """
    path_obj = Path(file_path)

    # Normalize to relative path for prefix matching
    try:
        rel = str(path_obj.relative_to(Path.cwd()))
    except ValueError:
        rel = str(path_obj)

    # Allow known safe directories
    for prefix in SAFE_PATH_PREFIXES:
        if rel.startswith(prefix):
            return False
```
"src/" safe-prefix bypass silently allows src/.env*, src/*.pem, and other exact-match secret files.
When this template is deployed to a user's project, any file under their src/ (e.g., src/.env.local, src/credentials.json, src/id_rsa) will hit the SAFE_PATH_PREFIXES early-return and skip all pattern checks — including the exact-filename patterns that have essentially zero false-positive risk inside src/ (.env*, .pem, _rsa/_dsa/_ecdsa SSH key suffixes).
The false-positive motivation is valid (e.g., src/credentials_manager.py being incorrectly blocked by r".*credentials.*"), but the current fix is too coarse: it exempts the broad word-match patterns and the exact secret-file patterns together. Only the former need the exemption.
🛡️ Targeted fix – always enforce exact-file patterns, skip only word-match patterns for safe paths
```diff
+# Patterns that are exact enough to almost never produce false positives in source code.
+# These are checked even for files in SAFE_PATH_PREFIXES.
+ALWAYS_CHECK_PATTERNS = [
+    re.compile(r"\.env.*", re.IGNORECASE),
+    re.compile(r".*\.pem$", re.IGNORECASE),
+    re.compile(r".*_(private|secret|rsa|dsa|ecdsa)\.key$", re.IGNORECASE),
+    re.compile(r".*_rsa$", re.IGNORECASE),
+    re.compile(r".*_dsa$", re.IGNORECASE),
+    re.compile(r".*_ecdsa$", re.IGNORECASE),
+    re.compile(r".*_ed25519$", re.IGNORECASE),
+]
+
 SAFE_PATH_PREFIXES = [
     ".claude/hooks/",
     ...
 ]

 def is_sensitive_file(file_path: str) -> bool:
     path_obj = Path(file_path)
     try:
         rel = str(path_obj.relative_to(Path.cwd()))
     except ValueError:
         rel = str(path_obj)

+    # Always enforce exact-file patterns regardless of safe-path status
+    filename = path_obj.name
+    for pattern in ALWAYS_CHECK_PATTERNS:
+        if pattern.match(filename):
+            return True
+
     # Allow known safe directories (skips broad word-match patterns only)
     for prefix in SAFE_PATH_PREFIXES:
         if rel.startswith(prefix):
             return False

     # Check each path component against all patterns
     for part in path_obj.parts:
         for pattern in SENSITIVE_PATTERNS:
             if pattern.match(part):
                 return True
     return False
```
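A quick check of the `ALWAYS_CHECK_PATTERNS` idea from the diff above (a trimmed stand-in using only three of the proposed patterns, with the helper reduced to the filename check):

```python
import re
from pathlib import Path

# Three of the proposed exact-file patterns, copied from the diff above.
ALWAYS_CHECK_PATTERNS = [
    re.compile(r"\.env.*", re.IGNORECASE),
    re.compile(r".*\.pem$", re.IGNORECASE),
    re.compile(r".*_rsa$", re.IGNORECASE),
]

def exact_secret(path: str) -> bool:
    """Filename-only check that runs even for files under safe prefixes."""
    name = Path(path).name
    return any(p.match(name) for p in ALWAYS_CHECK_PATTERNS)

assert exact_secret("src/.env.local")          # caught even under a safe prefix
assert exact_secret("tests/fixtures/key.pem")
assert exact_secret("scripts/id_rsa")
assert not exact_secret("src/credentials_manager.py")  # word-match false positive avoided
```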
Verify each finding against the current code and only fix it if needed.
In `@src/mapify_cli/templates/hooks/block-secrets.py` around lines 63 - 94, The
SAFE_PATH_PREFIXES early-return in is_sensitive_file currently skips all checks
for files under prefixes like "src/", which lets exact-secret filenames (e.g.,
.env*, *.pem, *_rsa/*_dsa/*_ecdsa) bypass detection; change is_sensitive_file so
that before returning False for a safe prefix it still enforces exact-file
checks: compute the file basename (Path(file_path).name) and test it against the
exact-filename/suffix patterns (e.g., patterns for ".env", ".env.*", "*.pem",
filenames ending with "_rsa"/"_dsa"/"_ecdsa" or exact names like "id_rsa") and
return True if any exact pattern matches; only if none of those exact-secret
patterns match may you return False for the SAFE_PATH_PREFIXES (word-match
patterns can be skipped for safe prefixes).
```diff
-Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
-Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
-Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector → Curator
+Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector
+Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector
+Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector
```
Add a language tag to silence the MD040 lint warning.
The fenced block at line 40 (plain-text flow diagram) triggers MD040 fenced-code-language. Adding text satisfies the rule without implying code syntax highlighting.
📝 Proposed fix
````diff
-```
+```text
 Subtask 1 → Actor → Monitor → Predictor → Evaluator → Reflector
 Subtask 2 → Actor → Monitor → Predictor → Evaluator → Reflector
 Subtask 3 → Actor → Monitor → Predictor → Evaluator → Reflector
````
🧰 Tools

🪛 markdownlint-cli2 (0.21.0)

[warning] 40-40: Fenced code blocks should have a language specified
(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
@src/mapify_cli/templates/skills/map-workflows-guide/resources/map-efficient-deep-dive.md
around lines 40 - 44, The fenced block of the flow diagram triggers MD040;
update the triple-backtick fence in map-efficient-deep-dive.md that contains the
three "Subtask → Actor → Monitor → Predictor → Evaluator → Reflector" lines to
include a language tag (e.g., add "text" after the opening ```), so the block becomes ```text and silences the lint warning MD040 while preserving the diagram
content.
Summary
- Remove `mcp__mem0__*` tool references, ACE learning system, and tiered pattern storage from 86 files across agents, commands, docs, skills, tests, and plugin metadata
- Fix `block-secrets.py` and `safety-guardrails.py`, remove duplicate hook registrations in `settings.local.json`
mem0 removal (86 files, -5100 lines)
curator.mdagent (both canonical and template)__init__.py— remove curator generator, mem0 allowlist, ACE referencestier3_integration_report.json,apply_tier3_deltas.py,playbook.json.backup,requirements-semantic.txtHook fixes
block-secrets.py: addSAFE_PATH_PREFIXESso.claude/hooks/,.claude/agents/,src/,tests/etc. are not blockedsafety-guardrails.py: narrowr"token"→r"tokens?\.(json|ya?ml|toml|txt)$", same for password; add.claude/to safe prefixes; fix missing execute permissionsettings.local.json: remove duplicateworkflow-context-injector.py(was matcher*+ timeout 1s, already insettings.json); remove duplicateend-of-turn.shStop hookmap-review simplification
Test plan
pytest— 635 passed, 1 skipped, 0 failuresmake sync-templates— all.claude/synced tosrc/mapify_cli/templates/mem0|mcp__mem0|map_tiered_search|Curator— only CHANGELOG.md (historical entries)block-secrets.pyallows.claude/hooks/files, blocks.envsafety-guardrails.pyhas execute permission (755)Summary by CodeRabbit
Architecture & Features
Documentation
Breaking Changes