ship the /ask skill #532

Merged
acunniffe merged 19 commits into main from feat/aidan-ask-skill
Feb 17, 2026

Conversation

@acunniffe
Collaborator

@acunniffe acunniffe commented Feb 16, 2026

Extending on @jwiegley's search + continue work, this PR introduces a skill that lets any agent read previous prompts for any LoC it's looking at.

It's useful when trying to figure out why AI code is the way it is, and I've found it especially valuable during planning. I thought there'd be some crazy work required to make agents call it, but this seems to do the trick:

Claude.md|AGENTS.md

- In plan mode, always use the /ask skill so you can read the code and the original prompts that generated it. Intent will help you write a better plan

Also included
[ ] updates to VSCode plugin empty prompt state
[ ] reading prompts for logged in teams from CaS Cache, then CaS, then Local DB, then notes
[ ] CaS read cache
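The CaS Cache → CaS → Local DB → notes read order described above can be sketched as an ordered chain of sources, tried in priority until one has the prompt. This is an illustrative sketch only; `PromptSource`, `readPrompt`, and the `Prompt` shape are hypothetical names, not the plugin's actual API:

```typescript
// Hypothetical sketch of a prioritized fallback read.
// All names here are illustrative stand-ins, not the real git-ai API.
type Prompt = { sha: string; messages: string[] };

interface PromptSource {
  // Resolve to the prompt if this source has it, or null to fall through.
  read(sha: string): Promise<Prompt | null>;
}

// Try each source in priority order (e.g. CaS cache, CaS, local DB, notes)
// and return the first hit, or null if no source has the prompt.
async function readPrompt(
  sources: PromptSource[],
  sha: string
): Promise<Prompt | null> {
  for (const source of sources) {
    const prompt = await source.read(sha);
    if (prompt !== null) return prompt;
  }
  return null;
}
```

A caller would assemble the chain once, cheapest source first, so a cache hit avoids any network round trip.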

Note: This should ship with a VSCode update, but they are technically independent and won't break if out of sync

Separate PR with big doc updates + readme updates in the works



@git-ai-cloud-dev

git-ai-cloud-dev bot commented Feb 16, 2026

Stats powered by Git AI

🧠 you    ██░░░░░░░░░░░░░░░░░░  10%
🤖 ai     ░░██████████████████  90%
More stats
  • 1.3 lines generated for every 1 accepted
  • 0 seconds waiting for AI
  • Top model: claude::claude-opus-4-6 (727 accepted lines, 950 generated lines)

AI code tracked with git-ai

devin-ai-integration[bot]

This comment was marked as resolved.

devin-ai-integration[bot]

This comment was marked as resolved.

acunniffe and others added 11 commits February 17, 2026 13:05
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
@acunniffe acunniffe force-pushed the feat/aidan-ask-skill branch from eab1d3c to 1810816 on February 17, 2026 at 18:06
acunniffe and others added 6 commits February 17, 2026 13:08
devin-ai-integration[bot]

This comment was marked as resolved.

acunniffe and others added 2 commits February 17, 2026 13:39
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Contributor

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 new potential issue.

View 20 additional findings in Devin Review.

Open in Devin Review

}

// Cap concurrent fetches at 3
const toFetch = promptsToFetch.slice(0, 3);

🟡 CAS fetch concurrency cap silently drops all prompts beyond the first 3

triggerCASFetches is called once when a blame result arrives. It slices to the first 3 prompts (promptsToFetch.slice(0, 3)) and starts fetching them. When each fetch completes, the .then() callback at blame-lens-manager.ts:1552-1571 only calls updateStatusBar — it never re-invokes triggerCASFetches to start fetching the next batch.

Root Cause and Impact

The comment says "Cap concurrent fetches at 3" (line 1546), implying a concurrency limit. But because triggerCASFetches is only called once per blame result (at blame-lens-manager.ts:523 and blame-lens-manager.ts:601), and the .then() completion handler does not call it again, the slice(0, 3) acts as a hard total limit, not a concurrency limit.

If a file has, say, 8 distinct AI prompts with messages_url but no messages, only the first 3 will ever be fetched. The remaining 5 prompts will permanently show "Loading prompt from cloud..." in the hover tooltip (as coded at blame-lens-manager.ts:1216-1218) until the user triggers a full blame re-fetch (e.g., by saving the file).

The fix is to call this.triggerCASFetches(blameResult, ...) inside the .then() callback after a successful fetch, so the next batch of up to 3 is started once a slot frees up.
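The fix described above can be sketched as a scheduler that re-invokes itself whenever a fetch settles, so `slice` bounds concurrency rather than the total. This is a minimal illustrative sketch, not the actual `blame-lens-manager.ts` code; `CasFetcher`, `fetchPrompt`, and `PendingPrompt` are assumed names:

```typescript
// Illustrative sketch of the suggested fix: once a fetch completes (or fails),
// re-run the scheduler so the next batch starts when a slot frees up.
const MAX_CONCURRENT = 3;

interface PendingPrompt {
  messagesUrl: string;
  messages?: string[];
}

class CasFetcher {
  private inProgress = new Set<string>();

  constructor(private fetchPrompt: (url: string) => Promise<string[]>) {}

  trigger(prompts: PendingPrompt[]): void {
    // Skip prompts already fetched or currently in flight, then fill
    // only the free slots -- this is a concurrency cap, not a total cap.
    const toFetch = prompts
      .filter((p) => !p.messages && !this.inProgress.has(p.messagesUrl))
      .slice(0, MAX_CONCURRENT - this.inProgress.size);

    for (const prompt of toFetch) {
      this.inProgress.add(prompt.messagesUrl);
      this.fetchPrompt(prompt.messagesUrl)
        .then((messages) => {
          prompt.messages = messages;
        })
        .catch(() => {
          // Leave the prompt unfetched; a later trigger may retry it.
        })
        .finally(() => {
          this.inProgress.delete(prompt.messagesUrl);
          // Key fix: re-invoke the scheduler so remaining prompts
          // beyond the first batch are eventually fetched.
          this.trigger(prompts);
        });
    }
  }
}
```

With this shape, 8 pending prompts drain in batches of at most 3, instead of the last 5 being stranded in a permanent loading state.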

Prompt for agents
In agent-support/vscode/src/blame-lens-manager.ts, inside the triggerCASFetches method (around line 1531), the .then() callback at line 1552 should re-invoke triggerCASFetches after a successful fetch completes, so that the remaining prompts beyond the first 3 are eventually fetched. Specifically, after the updateStatusBar call at line 1559, add a call like: this.triggerCASFetches(blameResult, documentUri); You will need to capture documentUri in the closure scope. The triggerCASFetches method already filters out prompts that are in casFetchInProgress or already have messages, so re-invoking it is safe and will only start new fetches for the remaining prompts.
Open in Devin Review


@acunniffe acunniffe merged commit 3cefdd3 into main Feb 17, 2026
16 checks passed
@acunniffe acunniffe deleted the feat/aidan-ask-skill branch February 17, 2026 19:30
