🚀 Feat: Model selection + tests #62
Conversation
Fixed frontend display bug where models named "default" incorrectly overrode explicit default settings. Updated lib/storage.py to strictly enforce a single default model constraint. Implemented auto-default logic: the first model created is automatically set as default. Implemented default reassignment: deleting the current default model automatically promotes another model to default. Added integration tests covering auto-default and reassignment scenarios.
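For context, the intended storage behavior can be sketched roughly as follows. This is an illustrative sketch only, assuming an aiosqlite-style connection and the `llm_models` table from this PR; the function names and signatures here are hypothetical, not copied from lib/storage.py.

```python
# illustrative sketch: `db` is assumed to be an aiosqlite-style connection;
# these helpers are hypothetical stand-ins for the lib/storage.py logic

async def add_llm_model(db, name: str, config_json: str) -> None:
    # auto-default: the first model created becomes the default
    cursor = await db.execute("SELECT COUNT(*) FROM llm_models")
    (count,) = await cursor.fetchone()
    is_default = 1 if count == 0 else 0
    await db.execute(
        "INSERT INTO llm_models (name, config, is_default) VALUES (?, ?, ?)",
        (name, config_json, is_default),
    )
    await db.commit()

async def delete_llm_model(db, name: str) -> None:
    # default reassignment: if the deleted model was the default,
    # promote any remaining model to default
    cursor = await db.execute(
        "SELECT is_default FROM llm_models WHERE name = ?", (name,)
    )
    row = await cursor.fetchone()
    was_default = bool(row and row[0])
    await db.execute("DELETE FROM llm_models WHERE name = ?", (name,))
    if was_default:
        await db.execute(
            "UPDATE llm_models SET is_default = 1 WHERE name = "
            "(SELECT name FROM llm_models LIMIT 1)"
        )
    await db.commit()
```

The embedding-model path presumably mirrors this against the `embedding_models` table.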
…shaj/DataGenFlow into feature-model-selection
Walkthrough

Adds default-model selection for LLM and embedding configs: DB/schema + storage support, API endpoints, and frontend UI.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Frontend (Client)
    participant API as API Server
    participant Mgr as LLMConfigManager
    participant Storage as Storage Layer
    participant DB as Database
    Client->>API: PUT /api/llm-models/{name}/default
    activate API
    API->>Mgr: set_default_llm_model(name)
    activate Mgr
    Mgr->>Storage: set_default_llm_model(name)
    activate Storage
    Storage->>DB: BEGIN TRANSACTION
    Storage->>DB: SELECT EXISTS FROM llm_models WHERE name={name}
    alt Not Found
        DB-->>Storage: no rows
        Storage->>DB: ROLLBACK
        Storage-->>Mgr: False
    else Found
        DB-->>Storage: row exists
        Storage->>DB: UPDATE llm_models SET is_default = FALSE
        Storage->>DB: UPDATE llm_models SET is_default = TRUE WHERE name = {name}
        Storage->>DB: COMMIT
        Storage-->>Mgr: True
    end
    deactivate Storage
    Mgr-->>API: success / raise LLMConfigNotFoundError
    deactivate Mgr
    alt Success
        API-->>Client: 200 {message: "Default model set"}
    else Not Found
        API-->>Client: 404 {detail: "Model not found"}
    end
    deactivate API
```
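The endpoint shape the diagram implies might look like the sketch below. The manager/storage wiring, the response payloads, and the `LLMConfigNotFoundError` definition are assumptions drawn from the diagram and the review comments that follow, not the PR's actual code.

```python
# sketch of the route implied by the diagram; payloads and wiring are assumed
from fastapi import FastAPI, HTTPException

app = FastAPI()

class LLMConfigNotFoundError(Exception):
    def __init__(self, message: str) -> None:
        self.message = message
        super().__init__(message)

class LLMConfigManager:
    """thin wrapper over the storage layer, as in the diagram"""
    def __init__(self, storage) -> None:
        self.storage = storage

    async def set_default_llm_model(self, name: str) -> None:
        # storage returns False when the model row does not exist
        if not await self.storage.set_default_llm_model(name):
            raise LLMConfigNotFoundError(f"Model '{name}' not found")

manager = LLMConfigManager(storage=...)  # storage wiring omitted in this sketch

@app.put("/api/llm-models/{name}/default")
async def set_default_llm_model(name: str) -> dict:
    try:
        await manager.set_default_llm_model(name)
    except LLMConfigNotFoundError as e:
        # `from e` keeps the exception chain, as the review below recommends
        raise HTTPException(status_code=404, detail=e.message) from e
    return {"message": "Default model set"}
```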
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (4 passed)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/integration/test_default_model_selection_integration.py`:
- Around line 14-20: The cleanup lambdas passed to
storage._execute_with_connection return coroutines that are never awaited;
replace them with an async function so the inner db.execute is awaited. For
example, define an async helper (e.g., async def clear_tables(db): await
db.execute("DELETE FROM llm_models"); await db.execute("DELETE FROM
embedding_models")) and pass clear_tables to _execute_with_connection (or create
two async functions to delete each table) so that the calls inside
_execute_with_connection truly await db.execute.
🧹 Nitpick comments (7)
CHANGELOG.md (1)
8-12: Consider adding an `[Unreleased]` comparison link for changelog parity. The Keep a Changelog convention typically includes an `[Unreleased]` compare link at the bottom; adding it makes release diffs easier to track.

Also applies to: 76-79
tests/test_api.py (1)
531-542: Consider verifying the model is actually marked as default. The test verifies the API returns success, but doesn't confirm the model's `is_default` flag is set. A follow-up GET call could verify the state change persisted.

```diff
 client.post("/api/llm-models", json=model_config)
 response = client.put("/api/llm-models/test-llm/default")
 assert response.status_code == 200
 assert response.json()["message"] == "llm model set as default successfully"
+# verify the model is actually marked as default
+get_response = client.get("/api/llm-models/test-llm")
+assert get_response.json()["is_default"] is True
```

This applies to the embedding model test as well (lines 549-560).
app.py (2)
701-711: Add exception chaining for LLMConfigNotFoundError. Line 708 raises HTTPException without `from e`, breaking the exception chain. This makes debugging harder when the original exception context is lost.

Proposed fix

```diff
 except LLMConfigNotFoundError as e:
-    raise HTTPException(status_code=404, detail=e.message)
+    raise HTTPException(status_code=404, detail=e.message) from e
```
769-779: Same exception chaining issue here. Line 776 also lacks `from e` for the NotFoundError case.

Proposed fix

```diff
 except LLMConfigNotFoundError as e:
-    raise HTTPException(status_code=404, detail=e.message)
+    raise HTTPException(status_code=404, detail=e.message) from e
```

tests/integration/test_default_model_selection_integration.py (1)
143-147: Consider adding an embedding error case test. The test only covers the LLM error case. For completeness, consider adding a similar test for `set_default_embedding_model` with a non-existent model.
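A hedged sketch of such a test, mirroring the LLM error case; the `storage` fixture, the pytest-asyncio marker, and the convention that the setter returns `False` for unknown models are assumptions based on the diffs in this review.

```python
import pytest

@pytest.mark.asyncio  # assumes pytest-asyncio, as the other async tests imply
async def test_set_default_embedding_model_not_found(storage):
    # setting a default on a model that was never created should fail,
    # mirroring the existing LLM error-case test
    result = await storage.set_default_embedding_model("does-not-exist")
    assert result is False
```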
lib/storage.py (2)

742-763: Minor: existence check outside transaction. The existence check (lines 746-749) happens before the transaction starts. If the model is deleted concurrently between the check and the UPDATE, the function returns `True` but no model becomes default. This is a minor edge case for SQLite (single-writer), but for correctness, consider moving the check inside the transaction.
♻️ Proposed fix
```diff
 async def _set_default(db: Connection) -> bool:
-    # check if model exists
-    cursor = await db.execute("SELECT 1 FROM llm_models WHERE name = ?", (name,))
-    if not await cursor.fetchone():
-        return False
-
     await db.execute("BEGIN")
     try:
+        # check if model exists inside transaction
+        cursor = await db.execute("SELECT 1 FROM llm_models WHERE name = ?", (name,))
+        if not await cursor.fetchone():
+            await db.execute("ROLLBACK")
+            return False
+
         # reset all to false
         await db.execute("UPDATE llm_models SET is_default = 0")
         # set selected to true
         await db.execute("UPDATE llm_models SET is_default = 1 WHERE name = ?", (name,))
         await db.execute("COMMIT")
         return True
     except Exception:
         await db.execute("ROLLBACK")
         raise
```
882-905: Same existence check concern as LLM version. Consider moving the existence check inside the transaction for consistency and correctness.
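A sketch of what the embedding variant could look like with the check moved inside the transaction, mirroring the LLM fix above; the `_execute_with_connection` wrapper matches the pattern visible elsewhere in this review, but the exact method signature is an assumption.

```python
# sketch only: mirrors the LLM fix above for the embedding_models table
async def set_default_embedding_model(self, name: str) -> bool:
    async def _set_default(db) -> bool:
        await db.execute("BEGIN")
        try:
            # existence check inside the transaction, mirroring the LLM fix
            cursor = await db.execute(
                "SELECT 1 FROM embedding_models WHERE name = ?", (name,)
            )
            if not await cursor.fetchone():
                await db.execute("ROLLBACK")
                return False
            # reset all defaults, then set the selected model
            await db.execute("UPDATE embedding_models SET is_default = 0")
            await db.execute(
                "UPDATE embedding_models SET is_default = 1 WHERE name = ?", (name,)
            )
            await db.execute("COMMIT")
            return True
        except Exception:
            await db.execute("ROLLBACK")
            raise

    return await self._execute_with_connection(_set_default)
```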
```python
# Clear any models created by auto-migration from env
await storage._execute_with_connection(lambda db: db.execute("DELETE FROM llm_models"))
await storage._execute_with_connection(lambda db: db.execute("DELETE FROM embedding_models"))

yield storage
await storage.close()
```
Lambda returns unawaited coroutine.

The lambda `lambda db: db.execute(...)` returns a coroutine that won't be awaited. This works by accident because `_execute_with_connection` calls `await func(self._conn)`, but the inner result is never awaited. Use an async function instead.
🐛 Proposed fix

```diff
-# Clear any models created by auto-migration from env
-await storage._execute_with_connection(lambda db: db.execute("DELETE FROM llm_models"))
-await storage._execute_with_connection(lambda db: db.execute("DELETE FROM embedding_models"))
+# clear any models created by auto-migration from env
+async def clear_llm(db):
+    await db.execute("DELETE FROM llm_models")
+
+async def clear_embedding(db):
+    await db.execute("DELETE FROM embedding_models")
+
+await storage._execute_with_connection(clear_llm)
+await storage._execute_with_connection(clear_embedding)
```

🤖 Prompt for AI Agents
In `@tests/integration/test_default_model_selection_integration.py` around lines 14-20: The cleanup lambdas passed to storage._execute_with_connection return
coroutines that are never awaited; replace them with an async function so the
inner db.execute is awaited. For example, define an async helper (e.g., async
def clear_tables(db): await db.execute("DELETE FROM llm_models"); await
db.execute("DELETE FROM embedding_models")) and pass clear_tables to
_execute_with_connection (or create two async functions to delete each table) so
that the calls inside _execute_with_connection truly await db.execute.
GG! I would like to ask for two edits:

Everything looks good. Be careful with the theme (dark and light mode). Nice job!
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@frontend/src/pages/Settings.tsx`:
- Line 287: The two badge texts are inconsistently capitalized: the LLM section
renders "Default" while the Embedding section renders "default"; update one to
match the other for consistency (choose and apply the same capitalization across
both places). Locate the badge JSX in the Settings component (search for the
badge text in the LLM-related JSX and the embedding-related JSX, e.g., where the
"Default"/"default" label is rendered) and change the string so both badges use
identical capitalization.
- Around line 134-154: Add and use a per-model "setting" state for the
set-default handlers: introduce a state variable like settingDefaultLlm and
settingDefaultEmbedding (strings or null) and set it to the model name at the
start of handleSetDefaultLlm / handleSetDefaultEmbedding, clear it in a finally
block (so it is reset on success or error), and optionally early-return if the
same model is already being set to prevent concurrent requests; update the UI
buttons to disable when settingDefaultLlm === model.name or
settingDefaultEmbedding === model.name and show a spinner while that state is
non-null.
🧹 Nitpick comments (2)
frontend/src/pages/Settings.tsx (2)
10-11: Unused import: `StarFillIcon`

`StarFillIcon` is imported but never used in this file. Only `StarIcon` is used for the "Set Default" button.

🧹 Remove unused import

```diff
 CheckCircleFillIcon,
-StarFillIcon,
 StarIcon,
```
235-370: Duplicated JSX pattern for model cards.The LLM model card (lines 238-368) and Embedding model card (lines 415-544) share nearly identical structure: name, provider badge, default badge, action buttons. This violates the guideline to extract repeated JSX patterns (3+ occurrences) into reusable components.
Extract a shared `ModelCard` component that accepts the model data and action handlers as props. This improves maintainability and ensures visual consistency.

```tsx
// example component signature
interface ModelCardProps<T extends LLMModelConfig | EmbeddingModelConfig> {
  model: T;
  isDefault: boolean;
  isTesting: boolean;
  onSetDefault: () => void;
  onTest: () => void;
  onEdit: () => void;
  onDelete: () => void;
  extraDetails?: React.ReactNode;
}
```

As per coding guidelines: "Reject repeated JSX in frontend - copied JSX patterns (3+ times) must be extracted into components."
Also applies to: 412-546
Created a reusable generic component that works with both LLMModelConfig and EmbeddingModelConfig
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/pages/Settings.tsx (1)
238-309: ⚠️ Potential issue | 🟡 Minor

Update `llm/state-frontend.md` for the new default-selection flow.

The Settings UI flow now includes explicit "Set Default" actions via `ModelCard`, so the frontend state doc should reflect this updated flow. Based on learnings: Update llm/state-frontend.md when UI flow changes.
🤖 Fix all issues with AI agents
In `@frontend/src/components/settings/ModelCard.tsx`:
- Around line 47-93: The JSX comments in ModelCard.tsx (the blocks above the
name/badges row and "Model details") are redundant and not following the
lowercase/concise guideline; remove those obvious structural comments or replace
them with short, lowercase, intent-focused comments (e.g., explain why a
conditional like isDefault renders the default badge or why extraDetails may be
appended), keeping references to `model`, `model.model_name`, `model.endpoint`,
`isDefault`, `extraDetails`, and `CheckCircleFillIcon` so reviewers can find the
related logic.
- Around line 11-33: ModelCard currently accepts nine props via ModelCardProps;
reduce to five by grouping related flags and callbacks into objects: replace
isDefault, isTesting, isSettingDefault with a single status: { isDefault:
boolean; isTesting: boolean; isSettingDefault: boolean } and replace
onSetDefault, onTest, onEdit, onDelete with an actions: { onSetDefault: () =>
void; onTest: () => void; onEdit: () => void; onDelete: () => void }; keep model
and extraDetails as standalone props. Update the ModelCardProps<T> type and the
ModelCard<T> parameter destructuring to accept model, status, actions,
extraDetails, then update all call sites to pass the new status and actions
objects (or move them into context if preferred).
- Around line 9-21: The ModelCardProps type uses React.ReactNode but React isn't
imported and the prop list is too long; import the ReactNode type with "import
type { ReactNode } from 'react'" and update extraDetails?: ReactNode, then
reduce props by grouping handler props into a single actions object (e.g.,
replace onSetDefault, onTest, onEdit, onDelete with actions: { setDefault: () =>
void; test: () => void; edit: () => void; remove: () => void }) or move the
action logic into a custom hook used by the ModelCard component (update usages
accordingly) and finally change the Title Case comment strings to lowercase:
replace "/* Name and badges row */", "/* Model details */", and "/* Action
buttons */" with "/* name and badges row */", "/* model details */", and "/*
action buttons */".
In `@frontend/src/pages/Settings.tsx`:
- Around line 128-155: The handlers handleSetDefaultLlm and
handleSetDefaultEmbedding (and also handleTestLlm and handleTestEmbedding)
update component state in finally blocks without checking whether the component
is still mounted and they don't log caught errors to console; add a mounted
guard (e.g., use a ref like isMountedRef.current) and only call
setSettingDefaultLlm / setSettingDefaultEmbedding (and corresponding test-state
setters in handleTestLlm/handleTestEmbedding) if the component is mounted, and
in each catch block call console.error(error) before showing toast.error so
errors are logged for debugging. Ensure you reference the existing functions
handleSetDefaultLlm, handleSetDefaultEmbedding, handleTestLlm, and
handleTestEmbedding when applying the guard and console.error calls.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@frontend/src/pages/Settings.tsx`:
- Around line 23-38: The async loaders loadLlmModels, loadEmbeddingModels, and
loadLangfuseStatus currently call setState after awaiting network operations
without checking if the component is still mounted; update each loader to wrap
every setState (e.g., setSettingDefaultLlm, setSettingDefaultEmbedding,
setLangfuseEnabled, setLangfuseHost, setLoadingLangfuse) in an if
(isMountedRef.current) guard so state updates only occur when the component is
mounted (isMountedRef is already defined and toggled in the cleanup).
In `@llm/state-frontend.md`:
- Around line 73-75: Update the bullet "provider/model selection (OpenAI,
Anthropic, Ollama, etc)" to include the missing period after "etc" so it reads
"provider/model selection (OpenAI, Anthropic, Ollama, etc.)"; locate the exact
string "provider/model selection (OpenAI, Anthropic, Ollama, etc)" in
llm/state-frontend.md and add the period to "etc" to fix the punctuation.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
frontend/src/pages/Settings.tsx (3)
41-62: ⚠️ Potential issue | 🟡 Minor

Add console.error logging in model loaders.
The LLM/embedding loaders toast errors but don’t log them, which makes debugging harder.
🛠️ Suggested fix
```diff
   } catch (error) {
+    console.error("Failed to load LLM models:", error);
     const message = error instanceof Error ? error.message : "Unknown error";
     toast.error(`Failed to load LLM models: ${message}`);
   }
 };

 const loadEmbeddingModels = async () => {
   try {
     const models = await llmConfigApi.listEmbeddingModels();
     if (isMountedRef.current) {
       setEmbeddingModels(models);
     }
   } catch (error) {
+    console.error("Failed to load embedding models:", error);
     const message = error instanceof Error ? error.message : "Unknown error";
     toast.error(`Failed to load embedding models: ${message}`);
   }
 };
```

As per coding guidelines: Log errors to console in frontend - ensure errors are logged for debugging.
65-83: ⚠️ Potential issue | 🟠 Major

Move Langfuse status fetch to service layer, add AbortController, and surface failures.
This inline fetch violates the service-layer rule and lacks abort + user feedback on failure. Please move it into the service layer, pass an AbortSignal, and toast errors.
🛠️ Suggested fix (Settings.tsx)
```diff
 useEffect(() => {
+  const controller = new AbortController();
   loadLlmModels();
   loadEmbeddingModels();
-  loadLangfuseStatus();
+  loadLangfuseStatus(controller.signal);
   return () => {
+    controller.abort();
     isMountedRef.current = false;
   };
 }, []);

-const loadLangfuseStatus = async () => {
+const loadLangfuseStatus = async (signal: AbortSignal) => {
   try {
-    const res = await fetch("/api/langfuse/status");
-    if (!res.ok) {
-      throw new Error(`http ${res.status}`);
-    }
-    const data = await res.json();
+    const data = await llmConfigApi.getLangfuseStatus({ signal });
     if (isMountedRef.current) {
       setLangfuseEnabled(data.enabled);
       setLangfuseHost(data.host);
     }
   } catch (error) {
     const message = error instanceof Error ? error.message : "Unknown error";
-    console.error("Failed to load Langfuse status:", message);
+    console.error("Failed to load Langfuse status:", error);
+    toast.error(`Failed to load Langfuse status: ${message}`);
   } finally {
     if (isMountedRef.current) {
       setLoadingLangfuse(false);
     }
   }
 };
```

You'll also need a service wrapper (e.g., `LLMConfigApi.getLangfuseStatus`) to keep fetches out of components.
226-345: ⚠️ Potential issue | 🟡 Minor

Update llm/state-frontend.md for the new default-model UI flow.
The Settings UI now supports explicit default selection via ModelCard actions; please document the updated flow and state changes.
Based on learnings: Update llm/state-frontend.md when UI flow changes.
🚀 Feat: Model selection + e2e tests
Description
Related Issue
Checklist
- `make format` passes
- `make pre-merge` passes

Summary by CodeRabbit
New Features
Behavior
Tests