
mito-ai: gemini-3.1-pro support #2195

Closed
aarondr77 wants to merge 5 commits into dev from gemini-3.1-pro

Conversation

aarondr77 (Member) commented Feb 27, 2026

Description

Gemini 3.1 pro support!

Testing

Try it out!

Documentation

Note if any new documentation needs to be added or reviewed.


Note

Medium Risk
Moderate risk because it changes Gemini API request/stream configuration (adds ThinkingConfig) and updates model identifiers across backend and frontend, which could affect runtime behavior if the SDK/model name expectations differ.

Overview
Adds support for the gemini-3.1-pro-preview model across the stack, replacing references to the older gemini-3-pro-preview in model selection and UI constants.

When calling Gemini directly (API key path), requests and streaming now conditionally include ThinkingConfig(thinking_level="MEDIUM") for gemini-3.1-pro-preview, while other models remain unchanged. Tests are updated to reflect the new model name and fast-model fallback behavior.
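The conditional described above can be sketched as follows. This is a simplified stand-in, not the PR's actual code: in the real client the config is a google.genai.types.GenerateContentConfig carrying a ThinkingConfig, modeled here as plain dicts so the model check stands out.

```python
# Simplified sketch of the direct (API key) path described above. The real
# client builds a google.genai.types.GenerateContentConfig with a ThinkingConfig;
# plain dicts are used here so the conditional logic is clear.
GEMINI_3_1_PRO_MODEL = "gemini-3.1-pro-preview"  # constant name assumed from this PR

def build_generate_content_config(model: str) -> dict:
    config: dict = {}
    if model == GEMINI_3_1_PRO_MODEL:
        # Only gemini-3.1-pro-preview gets medium-depth thinking;
        # all other models are left unchanged.
        config["thinking_config"] = {"thinking_level": "MEDIUM"}
    return config
```

The same check runs in both request_completions and stream_completions, so streaming and non-streaming requests behave identically.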

Written by Cursor Bugbot for commit 6a40b14.

vercel bot commented Feb 27, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: monorepo | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Mar 2, 2026 3:56pm


ngafar (Collaborator) commented Feb 27, 2026

This works, but boy is it slow. I'm talking several minutes with no response.

We might want to adjust the thinking_level (like we did with gpt-5).

aarondr77 (Member, Author) commented

@cursoragent can you update this so that we turn on medium reasoning for gemini-3.1-pro. High reasoning is too slow. You can see an example of how we do this for gpt-5.2 in the OpenAI client or the OpenAI client utils.

cursor bot commented Feb 27, 2026

I've implemented medium reasoning for gemini-3.1-pro-preview, following the same pattern used for gpt-5.2 with reasoning_effort. Here's what I changed:

Changes Made:

  • mito-ai/mito_ai/utils/gemini_utils.py:

    • Added GEMINI_3_1_PRO_MODEL constant
    • Updated get_gemini_completion_function_params to add thinking_config with thinking_level: "MEDIUM" for gemini-3.1-pro-preview
    • Updated _prepare_gemini_request_data_and_headers to include thinking_config in server requests
  • mito-ai/mito_ai/gemini_client.py:

    • Imported ThinkingConfig from google.genai.types
    • Updated request_completions to build and pass ThinkingConfig to GenerateContentConfig
    • Updated stream_completions to also use the thinking_config
  • mito-ai/mito_ai/tests/utils/test_gemini_utils.py:

    • Added 4 new tests verifying the thinking_config is correctly set for gemini-3.1-pro-preview and not set for other models

All existing tests pass, and the new tests verify the functionality works correctly. This should reduce the response time significantly compared to the HIGH reasoning level that was causing multi-minute delays.
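A hedged sketch of what those tests might look like. The helper below is a stand-in for get_gemini_completion_function_params (only the thinking-config portion of its return value is modeled), and the non-3.1-pro model names are illustrative.

```python
# Hypothetical pytest-style sketch of the new tests described above.
GEMINI_3_1_PRO_MODEL = "gemini-3.1-pro-preview"

def thinking_config_for(model: str):
    # Stand-in for the real params builder, which attaches this
    # config to the outgoing Gemini request.
    if model == GEMINI_3_1_PRO_MODEL:
        return {"thinking_level": "MEDIUM"}
    return None

def test_thinking_config_set_for_gemini_3_1_pro():
    assert thinking_config_for(GEMINI_3_1_PRO_MODEL) == {"thinking_level": "MEDIUM"}

def test_thinking_config_not_set_for_other_models():
    # Illustrative model names; any non-3.1-pro model should get no config.
    for model in ("gemini-2.0-flash", "gemini-3-pro-preview"):
        assert thinking_config_for(model) is None
```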


- Add thinking_config with thinking_level='MEDIUM' for gemini-3.1-pro-preview
  in get_gemini_completion_function_params (similar to reasoning_effort for gpt-5.2)
- Update GeminiClient to use ThinkingConfig in both request_completions and
  stream_completions methods
- Update _prepare_gemini_request_data_and_headers to include thinking_config
  in server requests
- Add tests for thinking_config functionality

Co-authored-by: Aaron Diamond-Reivich <aarondr77@gmail.com>
Single source of truth - get_gemini_completion_function_params sets thinking_config,
and it's passed through to server functions rather than duplicating the model check.

Co-authored-by: Aaron Diamond-Reivich <aarondr77@gmail.com>
Match OpenAI pattern - only set thinking_config in get_gemini_completion_function_params.
The Mito server will handle thinking_config based on the model it receives.

Co-authored-by: Aaron Diamond-Reivich <aarondr77@gmail.com>
Remove thinking_config from get_gemini_completion_function_params entirely.
Check model == GEMINI_3_1_PRO_MODEL directly when building GenerateContentConfig.

Co-authored-by: Aaron Diamond-Reivich <aarondr77@gmail.com>

cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


model=model,
contents=contents,
message_type=message_type,
)

Unused provider_data variable in streaming method

Low Severity

The provider_data variable is computed via get_gemini_completion_function_params but never referenced anywhere in stream_completions. The comment says "Get provider data for thinking config," but the thinking config is handled inline via a direct model == GEMINI_3_1_PRO_MODEL check instead. This is dead code that adds unnecessary computation and confusion about intent.
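The fix here is simply deleting the unused call. A minimal, self-contained sketch of the cleaned-up streaming path (the function shape, GEMINI_3_1_PRO_MODEL, and the dict return are assumptions made for illustration):

```python
# Sketch of stream_completions after removing the dead provider_data call.
# Before the fix, an unused
#   provider_data = get_gemini_completion_function_params(...)
# sat at the top of this method; deleting it removes wasted computation
# without changing behavior, since the thinking config is derived inline.
GEMINI_3_1_PRO_MODEL = "gemini-3.1-pro-preview"  # assumed constant name

def stream_completions(model: str, contents: list) -> dict:
    thinking = {"thinking_level": "MEDIUM"} if model == GEMINI_3_1_PRO_MODEL else None
    return {"model": model, "contents": contents, "thinking_config": thinking}
```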


    response_mime_type=provider_data.get("config", {}).get("response_mime_type"),
-   response_schema=provider_data.get("config", {}).get("response_schema")
+   response_schema=provider_data.get("config", {}).get("response_schema"),
+   thinking_config=ThinkingConfig(thinking_level="MEDIUM") if model == GEMINI_3_1_PRO_MODEL else None

Thinking config missing for Mito server fallback path

Medium Severity

The ThinkingConfig(thinking_level="MEDIUM") is only applied in the direct API key path (self.api_key). Users without a Gemini API key who fall back to the Mito server don't receive the thinking config—neither via provider_data config nor as a separate parameter. The OpenAI equivalent (reasoning_effort for GPT 5.2) is passed through get_open_ai_completion_function_params so it reaches the Mito server, but the Gemini thinking config has no such mechanism.
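One way to close that gap, sketched under assumptions: the payload shape and key names below are hypothetical, and the real change would live in _prepare_gemini_request_data_and_headers so the Mito server fallback mirrors the direct API-key path.

```python
# Hypothetical sketch of threading the thinking config through the Mito
# server fallback path. The payload keys are assumptions; the point is that
# the same model check runs on both the direct and server request paths.
GEMINI_3_1_PRO_MODEL = "gemini-3.1-pro-preview"

def prepare_server_request_data(model: str, contents: list) -> dict:
    data = {"model": model, "contents": contents}
    if model == GEMINI_3_1_PRO_MODEL:
        # Mirror the direct API-key path so users without their own
        # Gemini key also get medium-depth thinking.
        data["thinking_config"] = {"thinking_level": "MEDIUM"}
    return data
```

This parallels how reasoning_effort for GPT 5.2 reaches the Mito server through the OpenAI params builder.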

Additional Locations (1)


@aarondr77 aarondr77 closed this Mar 2, 2026


3 participants