
Feedback on your mastering-postgresql skill #7

@RichardHightower

Description


I took a look at your mastering-postgresql skill and wanted to share some thoughts.

The TL;DR

You're at 100/100—a solid A grade, scored against Anthropic's best practices for skill architecture. Your strongest area is Ease of Use (24/25): the metadata, triggers, and workflow clarity are excellent. Your weakest area is still quite good: Writing Style (9/10), mostly around voice consistency in comments.

What's Working Well

  • Progressive Disclosure Architecture is tight. You've got SKILL.md as a concise hub (~320 lines) with 10 reference files exactly one level deep. This is the pattern that maximizes token efficiency without forcing users to dig through nested content.

  • Decision trees and trigger phrases are chef's kiss. Your explicit "When NOT to Use" section prevents misactivation, and the triggers (pgvector, BM25 postgres, JSONB index, asyncpg, etc.) are specific enough that agents will actually activate this when needed.

  • Verification steps everywhere. You've built in concrete feedback loops—SELECT extversion, EXPLAIN ANALYZE, docker-compose ps. This turns theory into "did it actually work?"—which developers love.

  • Problem-solving power is real. You're covering actual gaps: pgvector setup, hybrid search patterns, cloud deployment across AWS/GCP/Azure, BM25 full-text search. This isn't theoretical—it solves problems people hit.
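
To make the verification bullet concrete, the feedback loop might look like this in practice (pg_extension and extversion are real catalog names; the documents table and embedding column are assumptions for illustration):

```sql
-- Verification: pgvector extension is installed and at the expected version
SELECT extversion FROM pg_extension WHERE extname = 'vector';

-- Verification: the similarity query actually uses the index
-- (look for an Index Scan node in the plan output)
EXPLAIN ANALYZE
SELECT id
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'::vector
LIMIT 10;
```

This is the "did it actually work?" pattern: every setup step pairs with a query whose output confirms or refutes it.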

The Big One

Scripts directory is referenced but not included. Your SKILL.md mentions pip install -r scripts/requirements.txt and implies there are template scripts available, but these files aren't in the skill package. This hurts utility—you're telling users to copy-paste from files that don't exist.

Fix: Either (a) create and include the scripts/ directory with actual template files for common setups, or (b) remove the Script Usage section entirely and fold the examples into the reference docs. Option A gets you the last utility point (+1). The templates can be minimal—one or two working examples covering hybrid search setup and vector similarity queries.
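
For scale, a hybrid search template could be as small as a single query that blends full-text rank with vector distance. This is a sketch, not the skill's own script—the table, columns, and the naive score blend are all hypothetical:

```sql
-- Hypothetical template: hybrid search over a table
-- documents(id, body text, body_tsv tsvector, embedding vector(3)).
WITH lexical AS (
    SELECT id,
           ts_rank(body_tsv, plainto_tsquery('english', 'postgres indexing')) AS rank
    FROM documents
    WHERE body_tsv @@ plainto_tsquery('english', 'postgres indexing')
),
semantic AS (
    SELECT id,
           embedding <-> '[0.1, 0.2, 0.3]'::vector AS distance
    FROM documents
    ORDER BY distance
    LIMIT 50
)
SELECT s.id
FROM semantic s
LEFT JOIN lexical l USING (id)
ORDER BY s.distance - COALESCE(l.rank, 0)  -- naive blend; weights need tuning per workload
LIMIT 10;
```

Even a template this small would let users verify their schema end-to-end before adapting it.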

Other Things Worth Fixing

  1. Second-person voice in verification comments. Comments like # Verify container is running: address the reader directly; a declarative form such as # Verification: container is running keeps the voice consistent. The mixed voice costs you a point on writing style—a quick regex pass through your references fixes it.

  2. Optional fields gap. You're using allowed-tools but not version or tags in the frontmatter. These help discoverability. Adding them gets you the remaining spec point.

  3. Copy-paste checklists. You've got decision trees but no pre-made checklist templates (like "PostgreSQL Docker quick start checklist"). Adding a few markdown checklists in the references would help users hit the ground running.
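
The frontmatter fix in point 2 is a two-line addition. The field values below are placeholders for illustration, not values taken from the skill:

```yaml
---
name: mastering-postgresql
description: ...
allowed-tools: ...
version: 1.0.0                                  # placeholder
tags: [postgresql, pgvector, hybrid-search]     # placeholders
---
```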

Quick Wins

  • Add scripts/ directory with template files for hybrid search + vector similarity (highest impact)
  • Normalize voice in verification comments (1-point fix)
  • Add frontmatter version and tags fields
  • Create 2-3 markdown checklist templates for common workflows
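
On the last point, a checklist template could be as short as this sketch (steps are illustrative, and the verification lines use the normalized declarative voice):

```markdown
## PostgreSQL Docker quick start checklist

- [ ] Start the stack: docker-compose up -d
- [ ] Verification: container is running (docker-compose ps)
- [ ] Verification: pgvector is installed
      (SELECT extversion FROM pg_extension WHERE extname = 'vector';)
- [ ] Create tables and indexes
- [ ] Verification: query plan uses the index (EXPLAIN ANALYZE)
```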

Check out your skill here: [SkillzWave.ai](https://skillzwave.ai) | [SpillWave](https://spillwave.com). We have an agentic skill installer that installs skills on 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.
