feat: add datalake connection support for DBT projects#215
Conversation
📝 Walkthrough

Introduces DuckLake data lake integration across IPC, services, and UI layers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User / UI
    participant SelectProject as SelectProject Screen
    participant ConnectorsService as Connectors Service
    participant IPCHandler as IPC Handler
    participant MainService as Main Connectors Service
    participant SecureStorage as Secure Storage
    User->>SelectProject: Select datalake instance and create project
    SelectProject->>SelectProject: Sanitize datalake name
    SelectProject->>ConnectorsService: saveConnection(datalakeConnection)
    ConnectorsService->>IPCHandler: POST connector:save
    IPCHandler->>MainService: saveNewConnection(ConnectionInput)
    MainService->>SecureStorage: Store S3 credentials (region, keys)
    MainService-->>IPCHandler: Return connectionId
    IPCHandler-->>ConnectorsService: Return connectionId
    ConnectorsService-->>SelectProject: Return connectionId
    SelectProject->>SelectProject: Use connectionId for new project
```
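The "Sanitize datalake name" step in the diagram boils down to a single regex replacement (sketch only — the helper name is illustrative, but the regex mirrors the one used later in the selectProject flow):

```typescript
// Replaces every character outside [a-zA-Z0-9_] with an underscore, matching
// the sanitization applied when building the ducklake connection name.
function sanitizeDatalakeName(instanceName: string, projectName: string): string {
  return `${instanceName}_${projectName}`.replace(/[^a-zA-Z0-9_]/g, '_');
}
```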
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed (3 passed)
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/services/update.service.ts (1)
143-145: ⚠️ Potential issue | 🟠 Major

`allowPrerelease = true` conflicts with the prerelease-skip flow. Line 144 enables prerelease candidates, but `checkForUpdates()` explicitly skips prerelease updates (Lines 67-70). This can cause valid stable updates to be missed when prerelease is selected first.

In electron-updater (electron-builder), when allowPrerelease is true for GitHub provider, does checkForUpdates select prerelease releases as the update candidate over stable ones if prerelease has a higher version? Also confirm whether allowPrerelease=true sets allowDowngrade=true.

Proposed fix (if prereleases should remain blocked in app flow)

```diff
-  autoUpdater.allowPrerelease = true; // Allow beta/alpha updates
+  autoUpdater.allowPrerelease = false; // Keep candidate selection aligned with prerelease-skip logic
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/update.service.ts` around lines 143 - 145, autoUpdater.allowPrerelease = true conflicts with the explicit prerelease-skip logic in checkForUpdates() (see checkForUpdates() around lines 67-70) and can cause prerelease candidates to be favored or stable updates to be missed; fix by making the behavior consistent: either set autoUpdater.allowPrerelease = false if your app flow intends to block prereleases, or remove/adjust the prerelease skip logic inside checkForUpdates() so it respects allowPrerelease (choose one approach and apply it consistently), and update any related gating/flags that control prerelease selection to use the same source of truth.
🧹 Nitpick comments (2)
src/main/services/update.service.ts (1)
13-38: Consider deduplicating version comparison logic across main/renderer.
`compareVersions` is implemented in both `src/main/services/update.service.ts` and `src/renderer/components/settings/InstallationSettings.tsx`. A shared utility would reduce drift risk in update UI vs backend decisions.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/services/update.service.ts` around lines 13 - 38, Duplicate version comparison logic exists in compareVersions (uses isPrerelease); extract this function (and isPrerelease) into a shared utility module (e.g., versionUtils) and export it, then replace the implementations in src/main/services/update.service.ts and src/renderer/components/settings/InstallationSettings.tsx with imports from that shared module; ensure signatures remain the same, update any tests or callers to use the new exported functions, and run lint/type checks to confirm no breakages.

src/main/utils/yamlPartialUpdate.ts (1)
108-115: Consolidate DuckLake handling to avoid dead-path drift in this partial updater. These new DuckLake branches are currently bypassed by early returns in the same module’s update flow, so they can easily diverge from real behavior over time. Consider centralizing DuckLake handling in one explicit path.
Also applies to: 151-153
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/utils/yamlPartialUpdate.ts` around lines 108 - 115, The ducklake-specific partial update is duplicated and can drift from the real generator; remove the inline case 'ducklake' return and instead centralize DuckLake handling into a single helper used by the module's update flow (e.g., create or reuse a function like getDuckLakePartialFields or handleDuckLakePartialUpdate) so both places that currently special-case DuckLake (the case at 'ducklake' and the other branch at lines 151-153) delegate to that helper; ensure the helper returns the exact fields the full generator expects (path: ':memory:', schema: 'main' and any extensions comment) and replace both inline branches with calls to that helper so there is one authoritative implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/services/connectors.service.ts`:
- Around line 301-314: The S3 temporary (STS) session token isn’t being
persisted or restored; update the S3 credential handling to store and restore
the session token along with region/accessKeyId/secretAccessKey by adding
SecureStorageService.setCredential(`db-s3-session-token-${connection.name}`,
credentials.storage.s3.sessionToken) when saving and
SecureStorageService.getCredential(`db-s3-session-token-${connection.name}`)
(assigning it to credentials.storage.s3.sessionToken) when loading; apply this
change in the same code paths that use
SecureStorageService.setCredential/getCredential for S3 (the blocks that
currently write/read `db-s3-region-*`, `db-s3-access-key-*`,
`db-s3-secret-key-*`) so functions handling connection save/load use the session
token end-to-end.
- Around line 640-643: The 'ducklake' switch branch currently unconditionally
returns true; update the 'ducklake' case in the connector connection test to
actually validate the provided instanceId instead of assuming it's valid:
retrieve the instanceId from the connector config and verify it exists/is
reachable (e.g., via the existing instance store method like
getInstanceById(...) or a DuckLake client call such as
DuckLakeClient.validateInstance(instanceId)); return false or throw a
descriptive error when the instance is missing/invalid and return true only when
the validation succeeds. Ensure you modify the branch that currently reads "case
'ducklake': ... return true;" so it performs the lookup/validation and handles
errors cleanly.
- Around line 819-870: Remove the verbose console.log calls in the ducklake
branch of generateRosettaYml: delete the console.log statements that print the
full connection, instance, credentials.storage?.type, and connectionConfig
objects (the lines around the calls to DuckLakeService.getInstance,
DuckLakeInstanceStore.retrieveCredentials, and the final connectionConfig log).
If minimal tracing is needed, replace them with non-sensitive logs via the app
logger (e.g., processLogger.debug) that only emit safe identifiers (like
connection.name or instance.id) and avoid printing full objects or credential
fields; ensure references to DuckLakeService.getInstance,
DuckLakeInstanceStore.retrieveCredentials, and connectionConfig remain intact so
behavior is unchanged.
In `@src/main/services/projects.service.ts`:
- Around line 1231-1232: The switch branch for case 'ducklake' in
projects.service.ts currently returns [] which hides that DuckLake extraction is
unsupported; change this branch to surface a clear failure instead of an empty
array — e.g., throw a specific error (UnsupportedSchemaError or similar) or
return a discriminated error result/object so callers can distinguish
"unsupported" from a valid empty schema; update the logic around the function
handling schema extraction (the switch branch labeled 'ducklake') and any
callers to handle the new error/result shape.
In `@src/main/services/update.service.ts`:
- Around line 33-35: The current branch that returns v1.localeCompare(v2) when
both v1IsPrerelease and v2IsPrerelease is wrong because localeCompare does
lexicographic ordering and mis-sorts numeric prerelease identifiers (e.g.,
beta.10 vs beta.2); replace that logic with a semantic prerelease-aware
comparator: parse the prerelease segments of v1 and v2 into identifier arrays,
iterate comparing each segment by converting numeric tokens to numbers and
comparing numerically when both are numeric, otherwise compare as strings, treat
shorter arrays as smaller when all equal, and use that result instead of
localeCompare (update the code path where v1IsPrerelease and v2IsPrerelease are
checked and remove the localeCompare call).
- Around line 105-113: The returned payload can mismap release notes to a
different version — after computing latestVersion with
compareVersions(currentVersion, newVersion) and setting newVersion:
latestVersion, select releaseNotes that match latestVersion instead of always
using result.updateInfo.releaseNotes; e.g. if latestVersion === newVersion keep
result.updateInfo.releaseNotes, otherwise use the release notes tied to the
currentVersion (e.g. result.currentUpdateInfo.releaseNotes) or a safe fallback,
then return that chosen releaseNotes alongside newVersion.
In `@src/renderer/components/newProject/index.tsx`:
- Around line 355-358: The tooltip title currently interpolates
datalake.storage?.type which can be undefined; update the title expression in
the component (where isDisabled and datalake.storage?.type?.toUpperCase() are
used) to only build the string when datalake.storage?.type is truthy (e.g.,
datalake.storage?.type ? `${datalake.storage.type.toUpperCase()} storage is not
supported for DBT projects` : ''), ensuring you call toUpperCase on a defined
string to avoid showing "undefined storage" or throwing.
In `@src/renderer/screens/selectProject/index.tsx`:
- Around line 247-272: The current logic can pass the raw selectedConnection (an
instance id) as connectionId when connectionType === 'datalake' even if the
lookup via datalakeInstances.find(...) fails; update the flow around
connectionId/connectionType/selectedConnection so that when connectionType ===
'datalake' and a matching datalakeInstance is not found you explicitly set
connectionId = undefined (and only call connectorsServices.saveConnection(...)
when datalakeInstance is truthy); reference the variables/function names
datalakeInstances, datalakeInstance, connectionId, connectionType,
selectedConnection, and connectorsServices.saveConnection to locate and apply
this change.
In `@src/types/backend.ts`:
- Line 108: The backend added a new status literal 'connecting' on the status
union (status?: 'active' | 'inactive' | 'error' | 'connecting'), but UI types
and consumers still omit it; update all DuckLake-related UI type definitions,
discriminated unions, and switch/if branches (e.g., DuckLake status type
aliases, getDuckLakeStatusIcon / getDuckLakeStatusColor, DuckLakeStatusBadge /
status switch statements, and any map objects) to accept and handle 'connecting'
explicitly so the UI renders the new state consistently (add a mapping,
icon/color/tooltip, and a default/fallback where appropriate).
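As a sketch of the mapping this prompt asks for (the `DuckLakeStatus` alias and `getStatusColor` helper are illustrative, not the actual UI code from the PR):

```typescript
type DuckLakeStatus = 'active' | 'inactive' | 'error' | 'connecting';

// Exhaustive Record: adding a new literal to the union without a mapping here
// becomes a compile error, which is exactly the guard the review asks for.
const statusColor: Record<DuckLakeStatus, string> = {
  active: 'green',
  inactive: 'grey',
  error: 'red',
  connecting: 'amber',
};

function getStatusColor(status?: DuckLakeStatus): string {
  // Fallback covers undefined status values reaching the UI.
  return status ? statusColor[status] : 'grey';
}
```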
---
Outside diff comments:
In `@src/main/services/update.service.ts`:
- Around line 143-145: autoUpdater.allowPrerelease = true conflicts with the
explicit prerelease-skip logic in checkForUpdates() (see checkForUpdates()
around lines 67-70) and can cause prerelease candidates to be favored or stable
updates to be missed; fix by making the behavior consistent: either set
autoUpdater.allowPrerelease = false if your app flow intends to block
prereleases, or remove/adjust the prerelease skip logic inside checkForUpdates()
so it respects allowPrerelease (choose one approach and apply it consistently),
and update any related gating/flags that control prerelease selection to use the
same source of truth.
---
Nitpick comments:
In `@src/main/services/update.service.ts`:
- Around line 13-38: Duplicate version comparison logic exists in
compareVersions (uses isPrerelease); extract this function (and isPrerelease)
into a shared utility module (e.g., versionUtils) and export it, then replace
the implementations in src/main/services/update.service.ts and
src/renderer/components/settings/InstallationSettings.tsx with imports from that
shared module; ensure signatures remain the same, update any tests or callers to
use the new exported functions, and run lint/type checks to confirm no
breakages.
In `@src/main/utils/yamlPartialUpdate.ts`:
- Around line 108-115: The ducklake-specific partial update is duplicated and
can drift from the real generator; remove the inline case 'ducklake' return and
instead centralize DuckLake handling into a single helper used by the module's
update flow (e.g., create or reuse a function like getDuckLakePartialFields or
handleDuckLakePartialUpdate) so both places that currently special-case DuckLake
(the case at 'ducklake' and the other branch at lines 151-153) delegate to that
helper; ensure the helper returns the exact fields the full generator expects
(path: ':memory:', schema: 'main' and any extensions comment) and replace both
inline branches with calls to that helper so there is one authoritative
implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2a0973cd-49a1-4e92-aca7-668910f1cf8e
⛔ Files ignored due to path filters (2)
`package-lock.json` is excluded by `!**/package-lock.json`; `release/app/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (16)
- .github/workflows/test.yml
- package.json
- src/main/ipcHandlers/connectors.ipcHandlers.ts
- src/main/services/connectors.service.ts
- src/main/services/projects.service.ts
- src/main/services/update.service.ts
- src/main/utils/yamlPartialUpdate.ts
- src/renderer/components/newProject/index.tsx
- src/renderer/controllers/connectors.controller.ts
- src/renderer/hooks/useDbt.ts
- src/renderer/hooks/useRosettaDBT.ts
- src/renderer/hooks/useRosettaExtract.ts
- src/renderer/screens/selectProject/index.tsx
- src/renderer/services/connectors.service.ts
- src/types/backend.ts
- src/types/ipc.ts
💤 Files with no reviewable changes (1)
- .github/workflows/test.yml
```ts
if (credentials.storage?.type === 's3' && credentials.storage.s3) {
  await SecureStorageService.setCredential(
    `db-s3-region-${connection.name}`,
    credentials.storage.s3.region,
  );
  await SecureStorageService.setCredential(
    `db-s3-access-key-${connection.name}`,
    credentials.storage.s3.accessKeyId,
  );
  await SecureStorageService.setCredential(
    `db-s3-secret-key-${connection.name}`,
    credentials.storage.s3.secretAccessKey,
  );
}
```
Propagate S3 session token end-to-end for temporary credentials.
The flow stores/loads region, access key, and secret, but omits session token. STS-based S3 credentials will fail without it.
🛠️ Suggested fix
@@
if (credentials.storage?.type === 's3' && credentials.storage.s3) {
await SecureStorageService.setCredential(
`db-s3-region-${connection.name}`,
credentials.storage.s3.region,
);
await SecureStorageService.setCredential(
`db-s3-access-key-${connection.name}`,
credentials.storage.s3.accessKeyId,
);
await SecureStorageService.setCredential(
`db-s3-secret-key-${connection.name}`,
credentials.storage.s3.secretAccessKey,
);
+ await SecureStorageService.setCredential(
+ `db-s3-session-token-${connection.name}`,
+ credentials.storage.s3.sessionToken || '',
+ );
}
@@
const s3SecretKey = await SecureStorageService.getCredential(
`db-s3-secret-key-${connection.connection.name}`,
);
+ const s3SessionToken = await SecureStorageService.getCredential(
+ `db-s3-session-token-${connection.connection.name}`,
+ );
@@
if (s3SecretKey) {
process.env[`db-s3-secret-key-${connection.connection.name}`] =
s3SecretKey;
}
+ if (s3SessionToken) {
+ process.env[`db-s3-session-token-${connection.connection.name}`] =
+ s3SessionToken;
+ }
@@
if (credentials.storage?.type === 's3' && credentials.storage.s3) {
connectionConfig.s3Region = `\${db-s3-region-${connection.name}}`;
connectionConfig.s3AccessKeyId = `\${db-s3-access-key-${connection.name}}`;
connectionConfig.s3SecretAccessKey = `\${db-s3-secret-key-${connection.name}}`;
+ connectionConfig.s3SessionToken = `\${db-s3-session-token-${connection.name}}`;
}
@@
if (credentials.storage?.type === 's3' && credentials.storage.s3) {
duckLakeProfile.settings = {
s3_region: `{{ env_var("db-s3-region-${conn.name}") }}`,
s3_access_key_id: `{{ env_var("db-s3-access-key-${conn.name}") }}`,
s3_secret_access_key: `{{ env_var("db-s3-secret-key-${conn.name}") }}`,
+ s3_session_token: `{{ env_var("db-s3-session-token-${conn.name}") }}`,
};
  };
}

Also applies to: 349-371, 860-865, 1314-1319
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/connectors.service.ts` around lines 301 - 314, The S3
temporary (STS) session token isn’t being persisted or restored; update the S3
credential handling to store and restore the session token along with
region/accessKeyId/secretAccessKey by adding
SecureStorageService.setCredential(`db-s3-session-token-${connection.name}`,
credentials.storage.s3.sessionToken) when saving and
SecureStorageService.getCredential(`db-s3-session-token-${connection.name}`)
(assigning it to credentials.storage.s3.sessionToken) when loading; apply this
change in the same code paths that use
SecureStorageService.setCredential/getCredential for S3 (the blocks that
currently write/read `db-s3-region-*`, `db-s3-access-key-*`,
`db-s3-secret-key-*`) so functions handling connection save/load use the session
token end-to-end.
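The save/load symmetry this prompt describes can be sketched with an in-memory stand-in for `SecureStorageService` (the real service is asynchronous; the `Map`-backed store and the `saveS3`/`loadS3` helpers are illustrative — only the key naming scheme comes from the review):

```typescript
// In-memory stand-in for SecureStorageService; illustration only.
const store = new Map<string, string>();

interface S3Creds {
  region: string;
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string; // present only for STS temporary credentials
}

function saveS3(name: string, s3: S3Creds): void {
  store.set(`db-s3-region-${name}`, s3.region);
  store.set(`db-s3-access-key-${name}`, s3.accessKeyId);
  store.set(`db-s3-secret-key-${name}`, s3.secretAccessKey);
  // The fix: persist the session token alongside the other fields.
  store.set(`db-s3-session-token-${name}`, s3.sessionToken ?? '');
}

function loadS3(name: string): S3Creds {
  return {
    region: store.get(`db-s3-region-${name}`) ?? '',
    accessKeyId: store.get(`db-s3-access-key-${name}`) ?? '',
    secretAccessKey: store.get(`db-s3-secret-key-${name}`) ?? '',
    // Restore the token too, so STS credentials survive a round trip.
    sessionToken: store.get(`db-s3-session-token-${name}`) || undefined,
  };
}
```

Round-tripping credentials that include a `sessionToken` is the property the comment wants: without the extra key, STS-based S3 credentials load back incomplete.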
```ts
case 'ducklake':
  // DuckLake connection test - validate instance exists
  return true; // For now, assume valid if it has an instanceId
case 'redshift':
```
DuckLake test path currently accepts invalid instances.
The branch always returns true, so stale/invalid instanceId values pass testing.
🧪 Suggested fix
```diff
 case 'ducklake':
-  // DuckLake connection test - validate instance exists
-  return true; // For now, assume valid if it has an instanceId
+  await DuckLakeService.getInstance(connection.instanceId);
+  return true;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/connectors.service.ts` around lines 640 - 643, The
'ducklake' switch branch currently unconditionally returns true; update the
'ducklake' case in the connector connection test to actually validate the
provided instanceId instead of assuming it's valid: retrieve the instanceId from
the connector config and verify it exists/is reachable (e.g., via the existing
instance store method like getInstanceById(...) or a DuckLake client call such
as DuckLakeClient.validateInstance(instanceId)); return false or throw a
descriptive error when the instance is missing/invalid and return true only when
the validation succeeds. Ensure you modify the branch that currently reads "case
'ducklake': ... return true;" so it performs the lookup/validation and handles
errors cleanly.
```ts
// eslint-disable-next-line no-console
console.log(
  '[generateRosettaYml] Processing ducklake connection:',
  connection,
);
const instance = await DuckLakeService.getInstance(connection.instanceId);
// eslint-disable-next-line no-console
console.log(
  '[generateRosettaYml] Retrieved datalake instance:',
  instance,
);

// Get credentials if available
const credentials = await DuckLakeInstanceStore.retrieveCredentials(
  instance.id,
  instance.catalog as any,
  instance.storage as any,
);
// eslint-disable-next-line no-console
console.log(
  '[generateRosettaYml] Retrieved credentials, storage type:',
  credentials.storage?.type,
);

connectionConfig.ducklakeDataPath = instance.dataPath;

// Metadata DB path (from catalog config)
if (
  instance.catalog.type === 'duckdb' &&
  instance.catalog.duckdb?.metadataPath
) {
  connectionConfig.ducklakeMetadataDb =
    instance.catalog.duckdb.metadataPath;
} else if (
  instance.catalog.type === 'sqlite' &&
  instance.catalog.sqlite?.metadataPath
) {
  connectionConfig.ducklakeMetadataDb =
    instance.catalog.sqlite.metadataPath;
}

// Add S3 credentials if storage is S3 - use env vars for security
if (credentials.storage?.type === 's3' && credentials.storage.s3) {
  connectionConfig.s3Region = `\${db-s3-region-${connection.name}}`;
  connectionConfig.s3AccessKeyId = `\${db-s3-access-key-${connection.name}}`;
  connectionConfig.s3SecretAccessKey = `\${db-s3-secret-key-${connection.name}}`;
}
// eslint-disable-next-line no-console
console.log(
  '[generateRosettaYml] Final ducklake config:',
  connectionConfig,
);
```
Remove verbose console.log calls from connection config generation.
This logs full connection/instance objects from the main process and can expose sensitive operational details in app logs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/connectors.service.ts` around lines 819 - 870, Remove the
verbose console.log calls in the ducklake branch of generateRosettaYml: delete
the console.log statements that print the full connection, instance,
credentials.storage?.type, and connectionConfig objects (the lines around the
calls to DuckLakeService.getInstance, DuckLakeInstanceStore.retrieveCredentials,
and the final connectionConfig log). If minimal tracing is needed, replace them
with non-sensitive logs via the app logger (e.g., processLogger.debug) that only
emit safe identifiers (like connection.name or instance.id) and avoid printing
full objects or credential fields; ensure references to
DuckLakeService.getInstance, DuckLakeInstanceStore.retrieveCredentials, and
connectionConfig remain intact so behavior is unchanged.
```ts
case 'ducklake':
  return [];
```
Avoid silently treating DuckLake schema extraction as an empty schema.
At Line 1232, returning [] makes unsupported extraction indistinguishable from a legitimate empty database. This can mislead callers into proceeding as if extraction succeeded.
Suggested fix
```diff
 case 'ducklake':
-  return [];
+  throw new Error(
+    'Schema extraction is not supported for DuckLake connections in project:extractSchema.',
+  );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
case 'ducklake':
  throw new Error(
    'Schema extraction is not supported for DuckLake connections in project:extractSchema.',
  );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/projects.service.ts` around lines 1231 - 1232, The switch
branch for case 'ducklake' in projects.service.ts currently returns [] which
hides that DuckLake extraction is unsupported; change this branch to surface a
clear failure instead of an empty array — e.g., throw a specific error
(UnsupportedSchemaError or similar) or return a discriminated error
result/object so callers can distinguish "unsupported" from a valid empty
schema; update the logic around the function handling schema extraction (the
switch branch labeled 'ducklake') and any callers to handle the new error/result
shape.
```ts
if (v1IsPrerelease && v2IsPrerelease) {
  return v1.localeCompare(v2);
}
```
Fix prerelease ordering: localeCompare mis-sorts numeric prerelease identifiers.
At Line 34, lexicographic compare can rank 1.2.0-beta.10 as older than 1.2.0-beta.2, which can select the wrong “latest” version.
Proposed fix
```diff
-  if (v1IsPrerelease && v2IsPrerelease) {
-    return v1.localeCompare(v2);
-  }
+  if (v1IsPrerelease && v2IsPrerelease) {
+    return v1.localeCompare(v2, undefined, { numeric: true, sensitivity: 'base' });
+  }
```

📝 Committable suggestion
```ts
if (v1IsPrerelease && v2IsPrerelease) {
  return v1.localeCompare(v2, undefined, { numeric: true, sensitivity: 'base' });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/update.service.ts` around lines 33 - 35, The current branch
that returns v1.localeCompare(v2) when both v1IsPrerelease and v2IsPrerelease is
wrong because localeCompare does lexicographic ordering and mis-sorts numeric
prerelease identifiers (e.g., beta.10 vs beta.2); replace that logic with a
semantic prerelease-aware comparator: parse the prerelease segments of v1 and v2
into identifier arrays, iterate comparing each segment by converting numeric
tokens to numbers and comparing numerically when both are numeric, otherwise
compare as strings, treat shorter arrays as smaller when all equal, and use that
result instead of localeCompare (update the code path where v1IsPrerelease and
v2IsPrerelease are checked and remove the localeCompare call).
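A prerelease-aware comparator along the lines this prompt describes might look like the sketch below (the function name and tie-breaking details are assumptions; semver's full precedence rules have more cases than shown here):

```typescript
// Compare dot-separated prerelease identifier lists, e.g. 'beta.2' vs 'beta.10'.
// Numeric identifiers compare numerically; others compare as plain strings.
function comparePrereleaseIds(a: string, b: string): number {
  const as = a.split('.');
  const bs = b.split('.');
  const len = Math.max(as.length, bs.length);
  for (let i = 0; i < len; i += 1) {
    const x = as[i];
    const y = bs[i];
    if (x === undefined) return -1; // fewer identifiers sorts first
    if (y === undefined) return 1;
    const xn = Number(x);
    const yn = Number(y);
    if (!Number.isNaN(xn) && !Number.isNaN(yn)) {
      if (xn !== yn) return xn < yn ? -1 : 1; // numeric: 2 < 10
    } else if (x !== y) {
      return x < y ? -1 : 1; // plain string comparison
    }
  }
  return 0;
}
```

With this, `comparePrereleaseIds('beta.2', 'beta.10')` is negative — the ordering that a plain `localeCompare` gets wrong.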
```diff
 // Determine which version is actually latest using semantic versioning
 const versionComparison = compareVersions(currentVersion, newVersion);
 const latestVersion = versionComparison >= 0 ? currentVersion : newVersion;

 return {
   currentVersion,
-  newVersion,
+  newVersion: latestVersion,
   lastInstalledVersion,
   releaseNotes: result.updateInfo.releaseNotes,
```
Keep newVersion and releaseNotes consistent in settings payload.
newVersion is rewritten to latestVersion at Line 111, but releaseNotes still comes from result.updateInfo (Line 113). When latestVersion === currentVersion, this can return notes for a different version than the one exposed in newVersion.
Proposed fix
```diff
-    return {
-      currentVersion,
-      newVersion: latestVersion,
-      lastInstalledVersion,
-      releaseNotes: result.updateInfo.releaseNotes,
-    };
+    const isRemoteVersionLatest = latestVersion === newVersion;
+    return {
+      currentVersion,
+      newVersion: latestVersion,
+      lastInstalledVersion,
+      releaseNotes: isRemoteVersionLatest ? result.updateInfo.releaseNotes : null,
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/services/update.service.ts` around lines 105 - 113, The returned
payload can mismap release notes to a different version — after computing
latestVersion with compareVersions(currentVersion, newVersion) and setting
newVersion: latestVersion, select releaseNotes that match latestVersion instead
of always using result.updateInfo.releaseNotes; e.g. if latestVersion ===
newVersion keep result.updateInfo.releaseNotes, otherwise use the release notes
tied to the currentVersion (e.g. result.currentUpdateInfo.releaseNotes) or a
safe fallback, then return that chosen releaseNotes alongside newVersion.
```tsx
title={
  isDisabled
    ? `${datalake.storage?.type?.toUpperCase()} storage is not supported for DBT projects`
    : ''
```
Avoid showing undefined storage in the disabled-option tooltip.
For instances without storage.type, the current title text can surface a confusing message to users.
💡 Suggested fix
```diff
-  const isSupported =
-    datalake.storage?.type === 's3' ||
-    datalake.storage?.type === 'local';
+  const storageType = datalake.storage?.type;
+  const isSupported =
+    storageType === 's3' || storageType === 'local';
   const isDisabled = !isSupported;
@@
-  title={
-    isDisabled
-      ? `${datalake.storage?.type?.toUpperCase()} storage is not supported for DBT projects`
-      : ''
-  }
+  title={
+    isDisabled
+      ? `${(storageType ?? 'unknown').toUpperCase()} storage is not supported for DBT projects`
+      : ''
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/components/newProject/index.tsx` around lines 355 - 358, The
tooltip title currently interpolates datalake.storage?.type which can be
undefined; update the title expression in the component (where isDisabled and
datalake.storage?.type?.toUpperCase() are used) to only build the string when
datalake.storage?.type is truthy (e.g., datalake.storage?.type ?
`${datalake.storage.type.toUpperCase()} storage is not supported for DBT
projects` : ''), ensuring you call toUpperCase on a defined string to avoid
showing "undefined storage" or throwing.
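Extracted into a plain function, the guarded expression behaves as described (the `disabledStorageTitle` name is illustrative; in the component the same logic lives inline in the `title` prop):

```typescript
// Only interpolate the storage type when it is a defined string; otherwise
// return an empty title instead of "UNDEFINED storage ..." or a TypeError.
function disabledStorageTitle(isDisabled: boolean, storageType?: string): string {
  if (!isDisabled) return '';
  return storageType
    ? `${storageType.toUpperCase()} storage is not supported for DBT projects`
    : '';
}
```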
```tsx
let connectionId = selectedConnection || undefined;

if (connectionType === 'datalake' && selectedConnection) {
  const datalakeInstance = datalakeInstances.find(
    (dl) => dl.id === selectedConnection,
  );

  if (datalakeInstance) {
    const sanitizedName =
      `${datalakeInstance.name}_${newProject.name}`.replace(
        /[^a-zA-Z0-9_]/g,
        '_',
      );
    const datalakeConnection = {
      type: 'ducklake' as const,
      name: sanitizedName,
      host: '',
      port: 0,
      instanceId: datalakeInstance.id,
      dataPath: datalakeInstance.dataPath,
      status: datalakeInstance.status,
    };
    connectionId =
      await connectorsServices.saveConnection(datalakeConnection);
  }
}
```
Prevent invalid connectionId when datalake instance lookup fails.
In datalake mode, selectedConnection is an instance id. If lookup fails, the current flow can still pass that instance id as connectionId to project creation.
🛠️ Suggested fix

```diff
-let connectionId = selectedConnection || undefined;
+let connectionId =
+  connectionType === 'standard'
+    ? selectedConnection || undefined
+    : undefined;
@@
 if (connectionType === 'datalake' && selectedConnection) {
   const datalakeInstance = datalakeInstances.find(
     (dl) => dl.id === selectedConnection,
   );
-  if (datalakeInstance) {
-    const sanitizedName =
-      `${datalakeInstance.name}_${newProject.name}`.replace(
-        /[^a-zA-Z0-9_]/g,
-        '_',
-      );
-    const datalakeConnection = {
-      type: 'ducklake' as const,
-      name: sanitizedName,
-      host: '',
-      port: 0,
-      instanceId: datalakeInstance.id,
-      dataPath: datalakeInstance.dataPath,
-      status: datalakeInstance.status,
-    };
-    connectionId =
-      await connectorsServices.saveConnection(datalakeConnection);
-  }
+  if (!datalakeInstance) {
+    throw new Error('Selected datalake instance no longer exists');
+  }
+
+  const sanitizedName =
+    `${datalakeInstance.name}_${newProject.name}`.replace(
+      /[^a-zA-Z0-9_]/g,
+      '_',
+    );
+  const datalakeConnection = {
+    type: 'ducklake' as const,
+    name: sanitizedName,
+    host: '',
+    port: 0,
+    instanceId: datalakeInstance.id,
+    dataPath: datalakeInstance.dataPath,
+    status: datalakeInstance.status,
+  };
+  connectionId =
+    await connectorsServices.saveConnection(datalakeConnection);
 }
```

📝 Committable suggestion
> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
let connectionId =
  connectionType === 'standard'
    ? selectedConnection || undefined
    : undefined;

if (connectionType === 'datalake' && selectedConnection) {
  const datalakeInstance = datalakeInstances.find(
    (dl) => dl.id === selectedConnection,
  );

  if (!datalakeInstance) {
    throw new Error('Selected datalake instance no longer exists');
  }

  const sanitizedName =
    `${datalakeInstance.name}_${newProject.name}`.replace(
      /[^a-zA-Z0-9_]/g,
      '_',
    );
  const datalakeConnection = {
    type: 'ducklake' as const,
    name: sanitizedName,
    host: '',
    port: 0,
    instanceId: datalakeInstance.id,
    dataPath: datalakeInstance.dataPath,
    status: datalakeInstance.status,
  };
  connectionId =
    await connectorsServices.saveConnection(datalakeConnection);
}
```
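The early-return guard in the suggestion above is an instance of a generic find-or-throw pattern, which keeps `undefined` from leaking past the lookup. A hedged sketch under that framing (`findOrThrow` is illustrative, not part of the PR):

```typescript
// Generic helper: return the first matching element, or throw with a
// caller-supplied message so downstream code never sees `undefined`.
function findOrThrow<T>(
  items: readonly T[],
  predicate: (item: T) => boolean,
  message: string,
): T {
  const found = items.find(predicate);
  if (found === undefined) {
    throw new Error(message);
  }
  return found;
}

const instances = [{ id: 'a' }, { id: 'b' }];
const hit = findOrThrow(
  instances,
  (i) => i.id === 'b',
  'Selected datalake instance no longer exists',
);
console.log(hit.id); // "b"
```

Throwing here (rather than silently falling back to `undefined`) surfaces the stale-selection case to the user instead of creating a project with no connection.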
🤖 Prompt for AI Agents

```text
Verify each finding against the current code and only fix it if needed.
In `@src/renderer/screens/selectProject/index.tsx` around lines 247-272: the
current logic can pass the raw selectedConnection (an instance id) as
connectionId when connectionType === 'datalake' even if the lookup via
datalakeInstances.find(...) fails. Update the flow so that when connectionType
=== 'datalake' and no matching datalakeInstance is found, connectionId is
explicitly set to undefined, and connectorsServices.saveConnection(...) is only
called when datalakeInstance is truthy. Reference datalakeInstances,
datalakeInstance, connectionId, connectionType, selectedConnection, and
connectorsServices.saveConnection to locate and apply this change.
```
```diff
   catalogType?: 'duckdb' | 'postgresql' | 'sqlite';
   dataPath?: string;
-  status?: 'active' | 'inactive' | 'error';
+  status?: 'active' | 'inactive' | 'error' | 'connecting';
```
**Propagate the new `'connecting'` status to all DuckLake UI type/switch consumers.**

Line 108 expands the backend status union, but UI-specific status models and visual mappings still appear to exclude `'connecting'`, which can cause inconsistent status rendering and behavior for that state.
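One way to make such omissions fail at compile time is an exhaustive `switch` with a `never` check, so any new union member the UI does not handle becomes a type error. A minimal sketch, assuming a hypothetical UI mapping function (`statusColor` and the type alias are illustrative, not names from this codebase):

```typescript
type DuckLakeStatus = 'active' | 'inactive' | 'error' | 'connecting';

// Exhaustive mapping: if another literal is later added to DuckLakeStatus
// and not handled here, the `never` assignment below fails to type-check,
// flagging the missing case at compile time instead of at runtime.
function statusColor(status: DuckLakeStatus): string {
  switch (status) {
    case 'active':
      return 'green';
    case 'inactive':
      return 'gray';
    case 'error':
      return 'red';
    case 'connecting':
      return 'yellow';
    default: {
      const unreachable: never = status;
      throw new Error(`Unhandled status: ${unreachable}`);
    }
  }
}

console.log(statusColor('connecting')); // "yellow"
```

Applying this pattern in each DuckLake status consumer would have surfaced the missing `'connecting'` branch as soon as the backend union was widened.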
🤖 Prompt for AI Agents

```text
Verify each finding against the current code and only fix it if needed.
In `@src/types/backend.ts` at line 108: the backend added a new status literal
'connecting' to the status union (status?: 'active' | 'inactive' | 'error' |
'connecting'), but UI types and consumers still omit it. Update all
DuckLake-related UI type definitions, discriminated unions, and switch/if
branches (e.g., DuckLake status type aliases, getDuckLakeStatusIcon /
getDuckLakeStatusColor, DuckLakeStatusBadge status switch statements, and any
map objects) to accept and handle 'connecting' explicitly so the UI renders the
new state consistently (add a mapping, icon/color/tooltip, and a
default/fallback where appropriate).
```
Summary by CodeRabbit

- **New Features**
- **Enhancements**
- **Chores**