Replies: 1 comment
Adding @SergeyMenshykh
Native Agent Skill support made it into Agent Framework. This is great!
I have been working on skill support myself, and although my implementation is quite similar, I made a few different choices and want to discuss the differences.
The implementation in Agent Framework adds the skill content to the context window (chat history) of the LLM. This has some benefits, but also some issues:
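For contrast, a minimal sketch of the chat-history approach described above. All names here (`ChatMessage`, `ChatHistory`, `load_skill_into_history`) are hypothetical, not the actual Agent Framework API:

```python
from dataclasses import dataclass, field


@dataclass
class ChatMessage:
    role: str
    content: str


@dataclass
class ChatHistory:
    messages: list[ChatMessage] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append(ChatMessage(role, content))


def load_skill_into_history(history: ChatHistory, skill_name: str, skill_content: str) -> None:
    # The skill body becomes a regular message, so it persists for the
    # rest of the conversation and keeps occupying the context window.
    history.add("tool", f"Skill '{skill_name}':\n{skill_content}")
```

The key property: once loaded, the skill text stays in the history for every subsequent run, which is convenient but grows the context.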
I took a slightly different approach. Instead of loading the skill content into the chat history, I added it to the system instructions. This has different benefits:
At first, this might not seem like a good idea, because on each run the skill is removed from the instructions. But in practice it works really well: the LLM understands that it needs to reload the skill on each run, and it even knows which skill it loaded before, because the tool call remains in the chat history. The performance penalty of the extra tool call per run is offset by the smaller context size (chat history).
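A sketch of the per-run behavior described above, under assumed names: the instructions are rebuilt from a base prompt at the start of every run, so skills loaded in a previous run vanish until the model calls the tool again (the earlier tool call remains visible in history as a hint).

```python
# Hypothetical base prompt; the real prompt is not part of this sketch.
BASE_INSTRUCTIONS = "You are a helpful agent. Call load_skill(name) before using a skill."


class AgentRun:
    def __init__(self) -> None:
        # Instructions reset to the base prompt on every run; skills
        # loaded in previous runs are gone.
        self.instructions = BASE_INSTRUCTIONS

    def load_skill(self, name: str, content: str) -> None:
        # Only skills loaded during *this* run occupy the instructions,
        # keeping the context small.
        self.instructions += f"\n\n## Skill: {name}\n{content}"


run1 = AgentRun()
run1.load_skill("pdf", "How to fill PDF forms ...")

run2 = AgentRun()  # fresh run: the skill is gone until reloaded
```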
The main benefit for me is being able to progressively load and unload tools. Since my use case involves a large number of tools, limiting the available tools is important.
The snippet below shows my `load_skill` tool implementation. It uses the current user identity to filter tools based on user authorization. The last line demonstrates how the instructions are progressively expanded each time a skill is loaded; loading multiple skills in a single agent run appends each of them to the system instructions.
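The original snippet is not reproduced here, so the following is only a hedged sketch of how such a tool could work. Every name in it (`Skill`, `SKILLS`, `User.can_use`, `AgentContext`) is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    content: str
    required_role: str


# Hypothetical skill registry.
SKILLS = {
    "invoicing": Skill("invoicing", "Steps for creating invoices ...", "finance"),
    "reporting": Skill("reporting", "How to build reports ...", "analyst"),
}


@dataclass
class User:
    name: str
    roles: set[str]

    def can_use(self, skill: Skill) -> bool:
        return skill.required_role in self.roles


class AgentContext:
    """Holds the mutable system instructions for the current run."""

    def __init__(self, base_instructions: str, user: User) -> None:
        self.instructions = base_instructions
        self.user = user

    def load_skill(self, skill_name: str) -> str:
        """Tool: load a skill the current user is authorized to use."""
        skill = SKILLS.get(skill_name)
        if skill is None or not self.user.can_use(skill):
            return f"Skill '{skill_name}' is not available."
        # Progressively expand the instructions; loading several skills
        # in one run appends each of them in turn.
        self.instructions += f"\n\n## Skill: {skill.name}\n{skill.content}"
        return f"Skill '{skill.name}' loaded."
```

Unauthorized skills simply report as unavailable, so the model never sees their content.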
The system instructions used to select the skill are shown in the snippet below.
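The actual prompt did not survive into this thread; purely as an assumption, an illustrative shape such skill-selection instructions might take:

```python
# Hypothetical base system prompt teaching the model how to select and
# load skills; the skill names and wording are illustrative only.
SKILL_SELECTION_INSTRUCTIONS = """\
You have access to skills. A skill is a set of instructions for a task.

Available skills (name: description):
- invoicing: create and send invoices
- reporting: build management reports

Before performing a task covered by a skill, call load_skill(name) to
load its full instructions. Skills are unloaded at the start of every
run, so call load_skill again even if you loaded the skill earlier in
the conversation.
"""
```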