fix(767): Support setting generate options and passing them to the model for tracing #775
base: main
Conversation
- Add a generateOptions field to ReActAgent for configuring generation parameters (temperature, topP, max tokens, etc.)
- Update the buildGenerateOptions method to merge modelExecutionConfig with generateOptions
- Include the generate options in the pre-reasoning event notification to enrich the event information
- Add a generateOptions setter to the Builder with support for chained calls (see the sketch after this list)
- Add unit tests verifying that the generate options are passed to the model and take effect during calls
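The Builder-side change referenced above might look roughly like the following. This is a minimal sketch for orientation only, not code from this PR; everything apart from ReActAgent, ReActAgent.Builder, GenerateOptions, and the generateOptions name is an assumption.

// Minimal sketch only; field layout and constructor shape are assumptions,
// not code copied from this PR.
public class ReActAgent {
    private final GenerateOptions generateOptions; // carries temperature, topP, maxTokens, ...

    private ReActAgent(Builder builder) {
        this.generateOptions = builder.generateOptions;
    }

    public static class Builder {
        private GenerateOptions generateOptions;

        // New chainable setter so generation parameters can be configured at agent level.
        public Builder generateOptions(GenerateOptions generateOptions) {
            this.generateOptions = generateOptions;
            return this;
        }

        public ReActAgent build() {
            return new ReActAgent(this);
        }
    }
}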
zhanghangwei seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Already signed the CLA but the status is still pending? Let us recheck it.
Summary of Changes
Hello @hangweizhang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses an issue where LLM generation parameters, such as temperature and topP, were not being recorded in trace spans.
Highlights
Changelog
Code Review
This pull request does a good job of fixing the issue where GenerateOptions (such as temperature) were not passed to the model for tracing. The change is clearly structured: adding generateOptions to ReActAgent.Builder and merging the options in buildGenerateOptions implements the feature correctly, and unit tests were added to verify the new behavior. I have a few suggestions about code readability and test robustness, but overall this is a high-quality submission.
  protected GenerateOptions buildGenerateOptions() {
-   GenerateOptions.Builder builder = GenerateOptions.builder();
-   if (modelExecutionConfig != null) {
-     builder.executionConfig(modelExecutionConfig);
-   }
-   return builder.build();
+   // Build options with modelExecutionConfig (timeout, retry, etc.)
+   GenerateOptions executionOptions =
+       modelExecutionConfig != null
+           ? GenerateOptions.builder().executionConfig(modelExecutionConfig).build()
+           : null;
+   // Merge: user's generateOptions (temperature, topP, maxTokens, etc.) as base,
+   // executionOptions overrides executionConfig
+   return GenerateOptions.mergeOptions(executionOptions, generateOptions);
  }
The current implementation is functionally correct, but the ternary operator makes the code a bit dense and slightly harder to read. Handling the modelExecutionConfig == null case up front makes the logic clearer: if there is no agent-specific execution config, return generateOptions directly; otherwise, merge the agent's config on top of the base options.
Suggested change:

protected GenerateOptions buildGenerateOptions() {
  if (modelExecutionConfig == null) {
    return generateOptions;
  }
  // The agent's modelExecutionConfig should override any execution config
  // within the provided generateOptions.
  GenerateOptions executionOptions =
      GenerateOptions.builder().executionConfig(modelExecutionConfig).build();
  return GenerateOptions.mergeOptions(executionOptions, generateOptions);
}
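To make the intended precedence concrete, here is a small illustration of the merge described in the comments above. It is a sketch under the assumption, suggested by those comments, that GenerateOptions.mergeOptions gives precedence to its first argument for whatever fields it sets; modelExecutionConfig stands in for the agent's own execution config that is assumed to be in scope.

// Illustration only; not test code from this PR.
GenerateOptions userOptions = GenerateOptions.builder()
        .temperature(0.7)
        .topP(0.9)
        .build();
GenerateOptions executionOptions = GenerateOptions.builder()
        .executionConfig(modelExecutionConfig) // agent-level timeout/retry settings
        .build();

// Expected outcome per the comments: temperature/topP survive from userOptions,
// while the execution config is taken from executionOptions.
GenerateOptions effective = GenerateOptions.mergeOptions(executionOptions, userOptions);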
assertEquals(0.7, lastOptions.getTemperature(), "Temperature should be passed to model");
assertEquals(0.9, lastOptions.getTopP(), "TopP should be passed to model");
When comparing floating-point numbers, using a tolerance (delta) is best practice and avoids test failures caused by precision issues. Literal comparison probably works here, but assertEquals with a delta makes the test more robust.
Suggested change:

assertEquals(0.7, lastOptions.getTemperature(), 0.0, "Temperature should be passed to model");
assertEquals(0.9, lastOptions.getTopP(), 0.0, "TopP should be passed to model");
Fix: Trace does not record temperature parameter in span attributes (#767)
Problem
Using AgentScope-Java with TelemetryTracer (e.g. Langfuse/Jaeger), gen_ai.request.temperature and related attributes were not recorded in trace spans, even though the OpenTelemetry GenAI semantic conventions define them.
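For context, the attribute names involved come from the OpenTelemetry GenAI semantic conventions. The sketch below shows, with plain OpenTelemetry API calls, the kind of span attributes a trace is expected to carry once the options reach the tracer; it is illustrative only and does not use AgentScope-Java's TelemetryTracer.

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

// Illustrative instrumentation only; not the project's TelemetryTracer code.
class GenAiSpanSketch {
    static void recordRequestAttributes() {
        Tracer tracer = GlobalOpenTelemetry.getTracer("example-instrumentation");
        Span span = tracer.spanBuilder("chat").startSpan();
        try {
            // Attribute keys defined by the OpenTelemetry GenAI semantic conventions;
            // before this fix they stayed empty because the options never reached the model call.
            span.setAttribute("gen_ai.request.temperature", 0.7);
            span.setAttribute("gen_ai.request.top_p", 0.9);
            span.setAttribute("gen_ai.request.max_tokens", 1000L);
        } finally {
            span.end();
        }
    }
}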
Root Cause
- ReActAgent.buildGenerateOptions() only set executionConfig (timeout/retry) and did not propagate generation options such as temperature, topP, and maxTokens
- ReActAgent.Builder had no way to configure GenerateOptions, so these parameters could not be set at the agent level
- AttributesExtractors.getLLMRequestAttributes() only records attributes when the options are non-null, so temperature stayed missing in traces (see the sketch after this list)
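The non-null guard mentioned in the last bullet could behave roughly like the following. This is a hypothetical simplification for illustration; the real AttributesExtractors.getLLMRequestAttributes operates on the project's GenerateOptions and its signature may differ.

import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification; parameter types and the return shape are assumptions.
final class AttributesExtractorSketch {
    static Map<String, Object> llmRequestAttributes(Double temperature, Double topP) {
        Map<String, Object> attributes = new HashMap<>();
        // Only values that are actually present are recorded, so if the generate
        // options never reach this point, the temperature attribute is simply absent.
        if (temperature != null) {
            attributes.put("gen_ai.request.temperature", temperature);
        }
        if (topP != null) {
            attributes.put("gen_ai.request.top_p", topP);
        }
        return attributes;
    }
}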
Solution
- Added generateOptions(GenerateOptions) to ReActAgent.Builder so users can configure generation options (temperature, topP, maxTokens, etc.) at agent creation.
- Updated buildGenerateOptions() to merge the user-provided generateOptions with modelExecutionConfig via GenerateOptions.mergeOptions().
- Updated notifyPreReasoningEvent() so the event receives the effective options from buildGenerateOptions(), enabling hooks and tracing to see the full configuration.
Usage Example
ReActAgent agent = ReActAgent.builder()
.name("TestAgent")
.model(chatModel)
.generateOptions(GenerateOptions.builder()
.temperature(0.7)
.topP(0.9)
.maxTokens(1000)
.build())
.build();
agent.call(Msg.userMsg("Hello")).block();