
Feedback on your developing-with-docker skill #2

@RichardHightower

Description

I took a look at your developing-with-docker skill and wanted to share some thoughts.

The TL;DR

You're at 98/100, solid A territory. This is based on Anthropic's best practices for skill design. Your strongest area is spec compliance (15/15 - perfect score), and your weakest is writing style (8/10). The skill fills real gaps in Docker debugging, which is its biggest strength.

What's Working Well

  • Spec compliance is flawless — Your frontmatter is clean, naming conventions are correct, and you've nailed the description with strong trigger phrases like "debug Docker," "troubleshoot containers," and "Docker Compose issues."
  • Progressive Disclosure is solid — 104 lines in SKILL.md with 6 well-organized reference files. You've got the layering right - users get the quick debugging workflow upfront, then dive into foundation concepts, networking, CLI debugging, etc. as needed.
  • Practical workflow design — The 5-step debugging workflow with a validation checklist and findings template is exactly what developers need. The platform-specific guidance (Linux vs. Mac vs. Windows) shows you understand real pain points.
  • Great bonus points — Copy-paste checklists, grep-friendly structure, explicit scope boundaries, and quality pre-commit checklist earned you +8 modifiers. That's the kind of polish that makes skills actually useful.
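The 5-step workflow praised above might look something like the following. This is a hypothetical sketch using standard docker CLI commands; the skill's actual steps and ordering may differ, and `<container>` is a placeholder for a real container name or ID:

```shell
# Hypothetical 5-step container-debugging pass (standard docker CLI)
docker ps -a                         # 1. enumerate containers and their states
docker logs --tail 50 <container>    # 2. read recent logs
docker inspect <container>           # 3. inspect config, mounts, and networks
docker exec -it <container> sh       # 4. poke around inside (if a shell exists)
docker events --since 10m            # 5. watch daemon events for restarts/kills
```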

The Big One: Writing Style in References

Your references are verbose. Sections like the intro to guide-foundations.md read more like a textbook ("designed for the professional software developer who has moved beyond the 'Hello World' phase...") than a snappy agent reference.

Why it matters: Agents scan faster with bullet points and tables. Every prose paragraph is context the LLM has to parse.

The fix: Convert narrative prose to structured callouts. In guide-foundations.md, replace the opening paragraphs with:

## Key Concepts

| Term | What It Is | Why It Matters |
|------|-----------|----------------|
| Daemon | Docker background service | Running/stopped state affects all operations |
| Image vs Container | Template vs running instance | Knowing the difference fixes most misconceptions |

This alone could bump you to 9/10 on writing style (+1 point).
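The table's image-vs-container row can also be shown with a runnable check. A hedged sketch that assumes the docker CLI is on `PATH` (and degrades gracefully if it isn't):

```shell
#!/bin/sh
# Sketch of the image-vs-container distinction from the table above:
# an image is the build-time template; a container is a runtime instance of it.
if command -v docker >/dev/null 2>&1; then
    echo "Templates (images):"
    docker images --format '{{.Repository}}:{{.Tag}}'
    echo "Instances (containers, running and stopped):"
    docker ps -a --format '{{.Names}} ({{.Status}})'
    summary="listed images and containers"
else
    summary="docker CLI not found; nothing to list"
fi
echo "$summary"
```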

Other Things Worth Fixing

  1. Add a TOC to SKILL.md — Even though it's only 104 lines, a quick ## Contents section at the top linking to Overview, Quick Start, Debugging Workflow, Validation Checklist helps navigation. Should be +1 point.

  2. Add verification commands inline — You say "Check docker context ls and current context" but don't show the exact output format. Add: "Verify context: docker context ls (asterisk marks active context)" so agents know exactly what to expect.

  3. Expand feedback loops — You've got a findings template, but some complex scenarios (like debugging distroless containers or permission cascades) could use input/output examples showing before/after states. Real debugging traces help agents learn.
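The inline-verification suggestion in item 2 can be sketched as a guarded shell check. `docker context ls` marks the active context with an asterisk, and `docker context show` prints just the active context's name; the guard is an assumption for machines without Docker installed:

```shell
#!/bin/sh
# Verify the active Docker context before debugging (assumes docker CLI on PATH).
if command -v docker >/dev/null 2>&1; then
    docker context ls                              # asterisk marks active context
    check="Active context: $(docker context show)" # prints only the active name
else
    check="docker CLI not found; skipping context check"
fi
echo "$check"
```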

Quick Wins

  • Add TOC to SKILL.md (+1)
  • Tighten prose in foundations reference, use more tables (+1)
  • Add inline verification commands to workflow steps (+1)
  • That's another 3 points waiting for you, pushing you to 101/100 (with modifiers)
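For the TOC quick win, a minimal `## Contents` block might look like this (section names taken from the suggestion above; anchors assume GitHub's default heading slugs):

```markdown
## Contents
- [Overview](#overview)
- [Quick Start](#quick-start)
- [Debugging Workflow](#debugging-workflow)
- [Validation Checklist](#validation-checklist)
```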

The skill is genuinely useful — Docker debugging is hard, and you've captured the right problems. These tweaks are polish, not rework.


Check out your skill here: SkillzWave.ai | SpillWave. We have an agentic skill installer that installs skills on 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.
