…xploration
This commit implements the full integration of DuckLake into the SQL Editor, enabling
users to execute queries, explore schemas, and utilize IntelliSense for DuckLake
instances alongside traditional database connections.
Key features and improvements:
- SQL Editor Integration: Added support for 'ducklake' as a first-class connection type.
- Query Execution: Implemented backend routing to DuckLakeService for SQL execution,
including support for SELECT, DDL, and DML operations with command detection.
- Schema Explorer: Developed a professional MUI TreeView for DuckLake metadata,
displaying the schemas → tables → columns hierarchy with type-specific icons.
- Monaco completions: Created a dedicated IntelliSense provider for DuckLake-specific
system tables, metadata views, functions, and time-travel syntax.
- IPC Layer: Added new handlers for query execution, cancellation, and schema extraction.
- Critical Bug Fixes:
- Resolved metadata qualification issues by automatically prefixing internal
queries with the correct metadata database.
- Fixed a critical data normalization bug in the DuckDB adapter where string
values were being incorrectly converted to undefined.
- Corrected schema extraction queries to align with the DuckLake v0.3 specification.
- Fixed infinite loading states in the SQL screen for DuckLake connections.
- UI/UX Polish: Replaced custom lists with standardized components, added
theme-aware icons, and improved connection grouping in the sidebar.
Files changed:
- types: Updated backend, frontend, duckLake, and ipc definitions.
- services: Enhanced DuckLake main and frontend services with execution/extraction logic.
- adapters: Updated DuckDB, PostgreSQL, and SQLite adapters for data normalization.
- UI: Modified SqlEditor, SqlScreen sidebar, and icon Registries.
- utils: Added Monaco completion generator for DuckLake.
Add export button to query results toolbar with dropdown menu for JSON, CSV, and Parquet formats. JSON and CSV exports are client-side downloads, while Parquet export uses DuckDB's COPY command for DuckDB and DuckLake connections. Export is disabled when there are no results.
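The client-side CSV download mentioned above depends on correct value escaping (a later commit in this series fixes exactly that). A minimal sketch of what such an export helper could look like — `escapeCsvValue` and `toCsv` are illustrative names, not the app's actual functions:

```typescript
type Field = { name: string };

// RFC 4180-style escaping: wrap the value in quotes when it contains a
// comma, quote, or newline, and double any embedded quotes.
function escapeCsvValue(value: unknown): string {
  if (value === null || value === undefined) return "";
  const s = String(value);
  return /[",\n\r]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Build CSV text from the query result shape (fields + row objects),
// emitting columns in field order so headers and cells stay aligned.
function toCsv(fields: Field[], rows: Record<string, unknown>[]): string {
  const header = fields.map((f) => escapeCsvValue(f.name)).join(",");
  const body = rows.map((row) =>
    fields.map((f) => escapeCsvValue(row[f.name])).join(",")
  );
  return [header, ...body].join("\n");
}
```

The resulting string can then be handed to a Blob/download link in the renderer.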
- Implement paginated query execution with limit/offset for DuckLake connections
- Add server-side data fetching with custom pagination controls in query results table
- Support query cancellation through adapter's getCancelFn method
- Preserve original SQL across query execution and result display
- Enhance table pagination with formatted number display and extended rows per page options
- Adjust SQL screen layout to accommodate pagination controls
…and result display limits, and correct CSV value escaping.
… 10, and enable schema level display in the schema browser.
- Remove debug console logs across duckLake service, adapter, and SQL screen
- Escape single quotes in schema names to prevent SQL injection in queries
- Add recursive sanitization to convert BigInt values to Number in schema data
- Add warnings and errors for presence of BigInt values during DuckDB adapter data processing
- Skip regular schema loading for DuckLake connections to rely on custom extraction
- Normalize numeric values including BigInt and hugeint objects in file utilities
- Remove redundant console logs from SQL screen regarding DuckLake schema and completions state
…endering
- Add sanitizeValue function to recursively convert BigInt to Number for IPC serialization
- Remove unused snapshot and schema checks in adapter query
- Add debug warnings for BigInt detection in raw DuckDB adapter rows
- Log errors if BigInt values remain after normalization in DuckDB adapter
- Update QueryResult component to render null/undefined as italicized "NULL"
- Improve cell rendering by stripping quotes from stringified JSON values
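The commit names a `sanitizeValue` helper for the BigInt-to-Number conversion before IPC serialization. A minimal sketch of the recursive shape such a helper could take (the actual implementation may differ; precision loss above `Number.MAX_SAFE_INTEGER` is a known trade-off of this approach):

```typescript
// Recursively convert BigInt to Number so values survive Electron's
// structured-clone IPC serialization. Illustrative sketch only.
function sanitizeValue(value: unknown): unknown {
  if (typeof value === "bigint") return Number(value);
  if (Array.isArray(value)) return value.map(sanitizeValue);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        sanitizeValue(v),
      ])
    );
  }
  // strings, numbers, booleans, null pass through unchanged
  return value;
}
```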
- Add isLoading state for DuckLake instances to manage loading status
- Return a minimal connection object with instanceId when instance is not found
- Prevent "Connection is still loading" errors when navigating back to SQL screen
- Update dependencies of useMemo hook to include loading state of DuckLake instances
- Added 'TRUNCATE' and 'RENAME' commands to supported DDL operations
- Changed detection to check statement start only, avoiding false positives
- Simplified DDL keyword list to command verbs only for accurate matching
- Removed specific table/view/schema suffixes from keyword checks
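The statement-start detection described above can be sketched as follows. This is a simplified stand-in for the app's `getCommandType`/`detectCommandType`; the exact keyword lists and return values in the codebase may differ (MERGE is included here because a later review comment flags its absence):

```typescript
type CommandType = "SELECT" | "DDL" | "DML" | "UNKNOWN";

const DDL_COMMANDS = ["CREATE", "ALTER", "DROP", "TRUNCATE", "RENAME"];
const DML_COMMANDS = ["INSERT", "UPDATE", "DELETE", "UPSERT", "MERGE"];

function getCommandType(sql: string): CommandType {
  // Checking only the statement start avoids false positives such as a
  // SELECT whose column name happens to contain "update".
  const normalized = sql.trim().toUpperCase();
  if (normalized.startsWith("SELECT") || normalized.startsWith("WITH")) {
    return "SELECT";
  }
  if (DDL_COMMANDS.some((cmd) => normalized.startsWith(cmd))) return "DDL";
  if (DML_COMMANDS.some((cmd) => normalized.startsWith(cmd))) return "DML";
  return "UNKNOWN";
}
```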
- Ensure LIMIT and OFFSET are appended for queries without existing limit
- Skip applying LIMIT for queries with existing limits or non-SELECT statements
- Clarify handling of LIMIT clause for DML/DDL queries to avoid invalid syntax
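The pagination rule above — append LIMIT/OFFSET only to SELECT statements that do not already carry a LIMIT — can be sketched as below. Function and parameter names are illustrative, not the adapter's real API:

```typescript
// Append LIMIT/OFFSET for pagination, but only when safe:
// - DML/DDL statements would become syntactically invalid, so skip them
// - queries with an explicit LIMIT are left untouched
function applyPagination(query: string, limit?: number, offset?: number): string {
  const normalized = query.trim().toUpperCase();
  const isSelect =
    normalized.startsWith("SELECT") || normalized.startsWith("WITH");
  const hasExistingLimit = /\bLIMIT\s+\d+/i.test(query);
  if (!isSelect || !limit || hasExistingLimit) return query;
  let paginated = `${query} LIMIT ${limit}`;
  if (offset) paginated += ` OFFSET ${offset}`;
  return paginated;
}
```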
…ling
- Refactor isDDLOperation to use startsWith for precise DDL detection
- Add error toast when no connection is selected in connection mode
- Add info toast if DuckLake connection is still loading and block query run
- Prevent error toast duplicate by refining loading state checks
- Show error when no connection or project is selected outside connection mode
- Limit DuckLake query results only for SELECT commands using getCommandType
- Clean up redundant code and improve query execution flow for DuckLake connections
- Add refetch function for DuckLake instances in useHook response
- Trigger refetch when active tab connection ID changes for DuckLake
- Display connection not found message if DuckLake instance is loading but missing
- Provide retry button to allow manual refetch of DuckLake instances
- Style connection error UI with centered box and descriptive text
…ter data processing, and qualify metadata tables for SQLite.
…s and add export progress UI.
… for parquet export operations.
…with JSON.stringify and fall back to String(), and use String() for non-object types.
… result fetching using a sequence ref.
…ing any query suffixes in database adapters.
…it` is specified.
…ctionality and prevent SQL editor cursor jumps by optimizing initial query handling.
📝 Walkthrough

Adds DuckLake support across backend/types, main process IPC and service, catalog adapters, renderer UI and utilities: query execution (cancellation, timing, pagination, numeric normalization), schema extraction, completions, icons, and related plumbing.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant SQLScreen as SQL Screen (Renderer)
    participant RendererSvc as DuckLakeService (Renderer)
    participant IPC as Electron IPC
    participant MainSvc as DuckLakeService (Main)
    participant Adapter as Catalog Adapter
    participant DB as Database
    User->>SQLScreen: select instance / execute query
    SQLScreen->>RendererSvc: extractSchema(instanceId) / executeQuery({ query, queryId, ... })
    RendererSvc->>IPC: ducklake:schema:extract / ducklake:query
    IPC->>MainSvc: forward schema extract / query
    MainSvc->>Adapter: run metadata/query (may qualify metadata tables)
    Adapter->>DB: execute SQL
    DB-->>Adapter: rows, columns, metadata
    Adapter->>MainSvc: normalized structured response
    MainSvc->>MainSvc: sanitize BigInt, store cancel fn if queryId
    MainSvc-->>IPC: response (schema or query result)
    IPC-->>RendererSvc: response
    RendererSvc-->>SQLScreen: render schema/completions or query result
    User->>SQLScreen: cancel query
    SQLScreen->>RendererSvc: cancelQuery(queryId)
    RendererSvc->>IPC: ducklake:query:cancel
    IPC->>MainSvc: cancelQuery(queryId)
    MainSvc->>MainSvc: invoke stored cancel function, cleanup
    MainSvc-->>SQLScreen: cancellation acknowledgement
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
…rizontal padding, bottom border, and box-sizing.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
src/main/services/duckLake/adapters/sqlite.adapter.ts (1)
274-356: ⚠️ Potential issue | 🟠 Major

`changes()` may not be supported by DuckLake — affects `updateRows`, `deleteRows`, and `upsertRows`.

Per prior learnings, DuckLake does not support DuckDB's `changes()` function. The learning states this guidance applies to all adapter files in `src/main/services/duckLake/adapters/`. These three methods all rely on `SELECT changes() as changes` to determine affected row counts, which may silently return incorrect results. Consider using a `RETURNING` clause with an explicit count, or `result.affectedRows` if available in the driver API. Based on learnings: "In the DuckLake adapter for DuckDB, do not rely on result.getRows().length to determine affected row counts because DuckLake changes() is not supported… This guidance should apply to all TypeScript files in the src/main/services/duckLake/adapters directory."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/sqlite.adapter.ts` around lines 274 - 356, The three methods updateRows, deleteRows, and upsertRows currently call 'SELECT changes()' which DuckLake/DuckDB may not support; instead, change the logic to obtain affected row counts from a reliable source: either use a RETURNING clause and count returned rows (use the result of connectionInfo.connection.run(...) and derive count from result.getRows().length when the DML includes RETURNING), or if the driver returns an affectedRows/rowCount field on the run() result, read that (e.g., result.affectedRows or result.rowCount) from connectionInfo.connection.run; update updateRows, deleteRows and upsertRows (and any similar adapter methods) to avoid 'SELECT changes()' and use one of these two approaches consistently, falling back to 0 if no count is available.

src/main/services/duckLake/adapters/postgresql.adapter.ts (1)
294-376: ⚠️ Potential issue | 🟠 Major

`changes()` may not be supported by DuckLake — same issue as SQLite adapter.

All three DML methods (`updateRows`, `deleteRows`, `upsertRows`) rely on `SELECT changes() as changes`. Per learnings, this guidance applies to all adapter files in this directory. Based on learnings: "This guidance should apply to all TypeScript files in the src/main/services/duckLake/adapters directory that perform DML and infer counts from returned rows."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/postgresql.adapter.ts` around lines 294 - 376, The adapter methods updateRows, deleteRows, and upsertRows currently rely on executing 'SELECT changes()' which DuckLake may not support; change each method to read the affected-row count from the DML execution result instead of always calling connection.run('SELECT changes() as changes'). Concretely: after await this.connectionInfo.connection.run(<dml>), inspect the returned result object from that run call for any metadata (e.g., a changes/rowsAffected property or similar) and use that when present; only fall back to a secondary approach (e.g., parsing returned rows or returning 0) if no metadata is available — update updateRows, deleteRows, and upsertRows to use the run result metadata rather than assuming SELECT changes() exists.

src/renderer/components/dataLake/tableDetails/TableDataRowsTab.tsx (1)
1234-1269: ⚠️ Potential issue | 🟠 Major

Render row cells by `fields` order, not `Object.values(row)`.

Current rendering can misalign data under headers when object key enumeration order differs from `queryResult.fields`.

💡 Proposed fix

```diff
- {queryResult?.data?.map((row, rowIndex) => (
+ {queryResult?.data?.map((row, rowIndex) => (
    <TableRow
      key={rowIndex}
      hover
@@
-     {Object.values(row).map((cell: any, cellIndex: number) => (
+     {queryResult?.fields?.map((field, cellIndex) => {
+       const cell = (row as Record<string, any>)[field.name];
+       return (
        <TableCell
          key={cellIndex}
@@
-       </TableCell>
-     ))}
+       </TableCell>
+       );
+     })}
    </TableRow>
  ))}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/dataLake/tableDetails/TableDataRowsTab.tsx` around lines 1234 - 1269, The rows are rendered using Object.values(row) which can misalign cells vs headers; update the TableRow rendering in TableDataRowsTab to iterate over queryResult.fields (e.g., queryResult.fields.map) and for each field access the cell value via row[field.name] (or row[field]) so cells follow the header order defined by queryResult.fields; keep the existing TableCell/NULL rendering logic but replace Object.values(row).map usage with mapping over queryResult.fields to preserve column alignment.

src/renderer/screens/sql/index.tsx (1)
370-375: ⚠️ Potential issue | 🔴 Critical

Fix DuckLake query cancellation routing.

The SQL screen's `handleCancelQuery` (line 374) always routes to `connectorsServices.cancelQuery()`, but DuckLake queries execute via `DuckLakeService.executeQuery()` and are tracked in `DuckLakeService.activeQueries`, not `ConnectorsService.runningQueries`. This causes DuckLake cancel requests to fail silently.

Route cancellation based on connection type (similar to the execution routing at lines 130–160): call `DuckLakeService.cancelQuery()` for DuckLake tabs and `connectorsServices.cancelQuery()` for connector tabs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/index.tsx` around lines 370 - 375, handleCancelQuery currently always calls connectorsServices.cancelQuery, which fails for DuckLake queries; update it to check the active tab's connection type (same logic used in the execution routing around the earlier executeQuery flow) and route cancellation accordingly: if the tab's connection type is DuckLake call DuckLakeService.cancelQuery(activeQueryId) and otherwise call connectorsServices.cancelQuery(activeQueryId); preserve the existing try/catch and success/error toasts (e.g., 'Query execution cancelled' / error toast) and reference activeTabId, tabQueryIds, DuckLakeService.cancelQuery, and connectorsServices.cancelQuery when making the change.

src/main/services/connectors.service.ts (2)
761-794: ⚠️ Potential issue | 🟠 Major

`validateConnection` rejects the newly added `ducklake` type

Now that `ducklake` is part of `SupportedConnectionTypes`, this switch needs an explicit branch. Otherwise valid ducklake configs fail with "Unsupported connection type!".

💡 Suggested fix

```diff
  switch (conn.type) {
+   case 'ducklake':
+     if (!conn.instanceId) throw new Error('DuckLake instanceId is required');
+     break;
    case 'postgres':
    case 'redshift':
      if (!conn.host) throw new Error('Host is required');
      if (!conn.port) throw new Error('Port is required');
      break;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/connectors.service.ts` around lines 761 - 794, The validateConnection function is missing a branch for the new 'ducklake' connection type so valid ducklake configs fall through to the default error; add a case 'ducklake' in the switch inside validateConnection (near the existing 'duckdb' case) and implement the appropriate validations (at minimum require conn.database_path or other ducklake-specific fields your app expects, mirroring the duckdb checks or adding any extra required properties), then return/break so ducklake no longer triggers the "Unsupported connection type!" error.
749-754: ⚠️ Potential issue | 🟡 Minor

Skip USER/PASSWORD placeholders for ducklake in generated config
DuckLake connection configs do not carry DB username/password fields; emitting placeholders here can create unresolved env var references.
💡 Suggested fix
```diff
  ...(connection.type !== 'databricks' &&
    connection.type !== 'duckdb' &&
-   connection.type !== 'bigquery' && {
+   connection.type !== 'bigquery' &&
+   connection.type !== 'ducklake' && {
      userName: `\${${USER}}`,
      password: `\${${PASSWORD}}`,
    }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/connectors.service.ts` around lines 749 - 754, The generated config spread currently adds userName/password placeholders when connection.type is not 'databricks', 'duckdb', or 'bigquery' but misses 'ducklake', causing unresolved env var references; update the condition in the spread that inserts `userName: \`${${USER}}\`` and `password: \`${${PASSWORD}}\`` (the code around connection.type checks in connectors.service.ts) to also exclude 'ducklake' (e.g., add connection.type !== 'ducklake') so that ducklake configs do not receive USER/PASSWORD placeholders.

src/renderer/components/sqlEditor/index.tsx (1)
74-88: ⚠️ Potential issue | 🟠 Major

Keep DuckLake command classification consistent with backend metadata

`getCommandType` here can diverge from `DuckLakeService.detectCommandType`, but this value controls both SELECT limiting and `isCommand` rendering. That can misclassify some queries and produce inconsistent UI behavior.

💡 Suggested fix

```diff
  if (isDuckLakeConnection && instanceId) {
    const duckLakeQueryLimit = 10;
-   const commandType = getCommandType(selectedQuery);
    const duckLakeResult = await DuckLakeService.executeQuery({
      instanceId,
      query: selectedQuery,
      queryId,
-     limit: commandType === 'SELECT' ? duckLakeQueryLimit : undefined,
+     limit:
+       (duckLakeResult?.commandType ?? getCommandType(selectedQuery)) ===
+         'SELECT'
+         ? duckLakeQueryLimit
+         : undefined,
    });
+   const duckLakeCommandType =
+     duckLakeResult.commandType ?? getCommandType(selectedQuery);
+
    // Transform DuckLakeQueryResult to QueryResponseType format
    // Map fields to ensure type is number (QueryResponseType expects number)
    const mappedFields = duckLakeResult.fields?.map((field) => ({
      name: field.name,
      type: typeof field.type === 'number' ? field.type : 0, // Convert string types to 0 (unknown)
    }));
    result = {
      success: duckLakeResult.success,
      data: duckLakeResult.data,
      fields: mappedFields,
      rowCount: duckLakeResult.rowCount,
      error: duckLakeResult.error,
+     isCommand:
+       duckLakeResult.isCommand ?? duckLakeCommandType !== 'SELECT',
+     commandType: duckLakeCommandType,
    };
  }
@@
- const commandType = getCommandType(selectedQuery);
+ const commandType =
+   (result as any).commandType ?? getCommandType(selectedQuery);
  const enrichedResult = {
    ...result,
-   isCommand: commandType === 'DDL' || commandType === 'DML',
+   isCommand:
+     (result as any).isCommand ??
+     (commandType === 'DDL' || commandType === 'DML'),
    commandType,
    originalSql: selectedQuery,
  };
```

Also applies to: 130-154, 182-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/sqlEditor/index.tsx` around lines 74 - 88, getCommandType can diverge from DuckLakeService.detectCommandType causing misclassification; replace the ad-hoc logic in getCommandType (and the other duplicated blocks) by delegating to the shared detection routine used by the backend (e.g., call DuckLakeService.detectCommandType or a shared util) so the same classification values ('SELECT', 'DML', 'DDL', etc.) are returned and used for both SELECT limiting and isCommand rendering; update all occurrences (the getCommandType function and the two other duplicated blocks) to import/use that central detector and remove the local keyword-based checks.
🧹 Nitpick comments (5)
src/main/services/duckLake/adapters/postgresql.adapter.ts (3)
25-25: Same cross-boundary import issue as in `sqlite.adapter.ts`.

`normalizeNumericValue` is imported from renderer utils into a main-process file. Same recommendation applies — move to a shared utility location.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/postgresql.adapter.ts` at line 25, The file postgresql.adapter.ts imports normalizeNumericValue from the renderer layer, causing a cross-boundary dependency; move normalizeNumericValue into a shared utilities module (e.g., main/shared or common/utils) and update imports in postgresql.adapter.ts (and sqlite.adapter.ts) to import from that shared location; ensure the exported function name stays normalizeNumericValue so references in functions like any numeric-parsing helpers inside postgresql.adapter.ts continue to work without changing runtime behavior.
867-900: Pagination logic correctly scoped within `isSelectQuery` — but count subquery should have an alias.

Unlike the SQLite adapter (where LIMIT/OFFSET leaks outside the `isSelectQuery` block), here the placement is correct. However, `SELECT COUNT(*) as total FROM (${query})` should use a subquery alias for robustness:

Suggested fix

```diff
- const countQuery = `SELECT COUNT(*) as total FROM (${query})`;
+ const countQuery = `SELECT COUNT(*) as total FROM (${query}) AS _count_sub`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/postgresql.adapter.ts` around lines 867 - 900, The COUNT(*) subquery used when building countQuery lacks a required alias which can break on PostgreSQL; when computing totalRows inside the if (limit && !hasExistingLimit) && isSelectQuery block, wrap the existing query as a subquery with an explicit alias (e.g., SELECT COUNT(*) AS total FROM (<query>) AS sub) when constructing countQuery before calling this.connectionInfo.connection.run, ensuring the alias is appended to countQuery generation (refer to the variables countQuery, query, and the call to this.connectionInfo.connection.run).
758-836: Nearly identical `executeQuery`, `getMetadataPrefix`, and `qualifyMetadataTables` across both adapters.

The only meaningful difference is the metadata prefix style (PostgreSQL: `"db".`, SQLite: `"db".main.`). This is a strong candidate for extraction into the base class.

Also applies to: 838-976
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/postgresql.adapter.ts` around lines 758 - 836, There are near-duplicate implementations of executeQuery, getMetadataPrefix, and qualifyMetadataTables across adapters; extract the shared logic into a common base class (e.g., DuckLakeAdapterBase) so both PostgreSQL and SQLite adapters reuse it, and replace the per-adapter getMetadataPrefix with a small adapter-specific override that returns only the differing suffix/format (PostgreSQL: `"db".`, SQLite: `"db".main.`) or a method like getMetadataDbQualifier used by the base qualifyMetadataTables/executeQuery logic; update PostgreSQL adapter to keep its specific prefix behavior by implementing the override and remove the duplicated methods from the adapter classes.

src/main/services/duckLake/adapters/sqlite.adapter.ts (2)
708-747: Substantial code duplication with `postgresql.adapter.ts`.

`getMetadataPrefix()`, `qualifyMetadataTables()`, and the entire `executeQuery()` method body are nearly identical between the SQLite and PostgreSQL adapters (the only difference being `".main."` vs `"."` in the metadata prefix). Consider extracting the shared logic into the `CatalogAdapter` base class or a shared mixin/utility, parameterized by the prefix style.

Also applies to: 784-922
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/sqlite.adapter.ts` around lines 708 - 747, The SQLite adapter duplicates logic in getMetadataPrefix, qualifyMetadataTables, and executeQuery that is almost identical to the Postgres adapter; refactor by extracting the shared logic into the CatalogAdapter base class (or a shared utility/mixin) and parameterize the only adapter-specific part (the metadata prefix format, i.e., '".main."' vs '"."') via an abstract method or a constructor-injected value (e.g., getMetadataPrefixFormat or metadataPrefix string). Move the common implementations of qualifyMetadataTables and executeQuery to the base class to call the shared getMetadataPrefix logic, and have SQLiteAdapter and PostgresAdapter supply only the prefix format override; ensure existing behavior and error handling are preserved and update references to getMetadataPrefix, qualifyMetadataTables, and executeQuery to use the new base implementations.
27-27: Move `normalizeNumericValue` from renderer utilities to main process utilities.

`normalizeNumericValue` is imported from `src/renderer/utils/fileUtils.ts` into main-process adapters (sqlite, postgresql, duckdb), but the function is never used by any renderer code. This creates an unnecessary cross-process coupling in an Electron application.

Since `src/main/utils/` already exists for main process utilities, move `normalizeNumericValue` there so that main process adapters can import from their own utility layer instead of renderer utilities.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/sqlite.adapter.ts` at line 27, The adapters in main process (e.g., sqlite.adapter.ts) import normalizeNumericValue from renderer utilities; move the normalizeNumericValue implementation into the main utilities folder and update imports: create or add normalizeNumericValue to src/main/utils/fileUtils (export it), change imports in sqlite.adapter.ts (and other main adapters like postgresql/duckdb adapters) to import normalizeNumericValue from src/main/utils/fileUtils, and remove or stop exporting it from src/renderer/utils/fileUtils so renderer code is no longer the source; ensure function signature/behavior and tests remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/services/connectors.service.ts`:
- Around line 207-209: The current logic in connectors.service.ts that derives
baseName by splitting duckConn.database_path on '/' is not cross-platform;
update the code to use Node's path utilities instead: import the path module and
replace the split logic with path.basename(duckConn.database_path, '.duckdb') to
get the fileName, then set baseName = fileName || duckConn.name; reference the
duckConn.database_path usage and the baseName assignment to locate the change
(and ensure any surrounding function that constructs DuckDB connection names
uses the new path-based approach).
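The `path.basename` fix suggested above can be sketched as follows. `deriveConnectionName` is an illustrative name for the surrounding logic; `path.win32.basename` is used because it accepts both `/` and `\` separators, so names derive correctly for paths written on either platform:

```typescript
import * as path from "node:path";

// Derive a display name from a DuckDB file path instead of splitting
// on '/', which breaks for Windows-style paths. Illustrative sketch.
function deriveConnectionName(databasePath: string, fallback: string): string {
  // Strip directories and the '.duckdb' extension in one call.
  const fileName = path.win32.basename(databasePath, ".duckdb");
  return fileName || fallback;
}
```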
In `@src/main/services/duckLake.service.ts`:
- Around line 1281-1284: The debug logging in extractSchema uses JSON.stringify
on rows (see schemasResult.data[0] and similar at the later log) which will
throw when rows contain BigInt; update the logging to sanitize BigInt values
before stringifying (e.g., use a safe stringify with a replacer that converts
BigInt to string or call the existing BigInt sanitization utility if one exists)
and apply the same change for the later log around lines 1344-1347 so logs never
directly JSON.stringify raw DB rows.
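The safe-stringify approach suggested above is small enough to sketch inline. `safeStringify` is an assumed helper name; it uses a `JSON.stringify` replacer so BigInt values become tagged strings instead of throwing:

```typescript
// JSON.stringify throws "Do not know how to serialize a BigInt" on raw
// DB rows; a replacer converts BigInt to a string (tagged with 'n' so
// the original type stays visible in debug logs).
function safeStringify(value: unknown): string {
  return JSON.stringify(value, (_key, v) =>
    typeof v === "bigint" ? `${v.toString()}n` : v
  );
}
```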
- Around line 1758-1764: The DML detection block that checks the normalized SQL
string (variable "normalized") currently only tests for INSERT, UPDATE, DELETE,
and UPSERT so MERGE statements are misclassified as SELECT; update that
conditional in the DML detection (the startsWith checks around variable
"normalized") to also test for 'MERGE' (ensuring case normalization is honored)
so MERGE is treated as a DML command and included in the same branch as
INSERT/UPDATE/DELETE/UPSERT.
In `@src/main/services/duckLake/adapters/duckdb.adapter.ts`:
- Around line 1062-1074: The diagnostic loop in duckdb.adapter.ts (the block
iterating over data.slice(0,3) that checks typeof value === 'bigint') currently
logs raw cell values via console.error; replace that to avoid leaking sensitive
data by removing the raw value from the message and using a safe placeholder or
metadata (e.g., log only rowIndex, column key, and the type 'BigInt' or a
hashed/obfuscated marker), and switch from console.error to the module logger
(or processLogger) used elsewhere; update the code referenced by the variables
data, rowIndex, key, value and the console.error call to emit a non-sensitive
message like "[DuckDB Adapter] ERROR: BigInt still present after normalization
in row X, column \"Y\" (value omitted)" or similar.
In `@src/main/services/duckLake/adapters/postgresql.adapter.ts`:
- Around line 855-857: The code interpolates snapshotId directly into the SQL
string (query = `${query} FOR SYSTEM_TIME AS OF SNAPSHOT '${snapshotId}'`)
creating SQL injection risk; update the postgresql.adapter.ts logic that appends
the snapshot clause to validate snapshotId is strictly numeric (e.g.,
digits-only) before interpolation, or reject/ignore non-numeric values and
throw/return an error; ensure you reference and sanitize the snapshotId variable
used when modifying query and mirror the numeric validation approach used in
sqlite.adapter.ts so only safe snapshot IDs are inserted.
In `@src/main/services/duckLake/adapters/sqlite.adapter.ts`:
- Around line 801-803: The code interpolates snapshotId directly into the SQL
(see the code that builds query variable and the snapshotId usage), which is a
SQL injection risk; change this to validate and sanitize snapshotId before
concatenation — ensure snapshotId contains only digits (e.g., /^\d+$/) or
otherwise reject/throw an error, and only then append it to query (`... FOR
SYSTEM_TIME AS OF SNAPSHOT 'snapshotId'`); if the SQL engine supports bind
parameters for this clause, prefer using a parameterized query instead of string
interpolation.
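The digits-only validation described above can be sketched as a small guard. `appendSnapshotClause` is an illustrative name; the surrounding query-building code in the adapters is simplified away:

```typescript
// Only allow plain integer snapshot IDs before string interpolation,
// closing the SQL injection vector flagged in the review.
function appendSnapshotClause(query: string, snapshotId: string): string {
  if (!/^\d+$/.test(snapshotId)) {
    throw new Error(`Invalid snapshot ID: ${snapshotId}`);
  }
  return `${query} FOR SYSTEM_TIME AS OF SNAPSHOT '${snapshotId}'`;
}
```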
- Around line 814-846: The COUNT subquery and pagination are handled
incorrectly: update the countQuery in the isSelectQuery block (used when
computing totalRows) to include an explicit subquery alias (e.g., `SELECT
COUNT(*) as total FROM (<query>) AS _sub`) so engines that require aliases
succeed, and move the pagination appends (`query += \` LIMIT ${limit}\`` and the
optional `OFFSET`) inside the same if (isSelectQuery) branch so LIMIT/OFFSET are
only added for SELECTs; modify the logic around totalRows, countQuery, and the
query string construction in the same function/class (sqlite.adapter.ts,
symbols: isSelectQuery, countQuery, this.connectionInfo.connection.run,
totalRows, query) accordingly.
In `@src/renderer/components/customTable/CustomTablePagination.tsx`:
- Around line 30-39: The labelDisplayedRows formatter (labelDisplayedRows)
hardcodes 'de-DE' in the fmt helper causing German number formatting for all
users; change the Intl.NumberFormat call inside fmt to use the runtime/default
locale (i.e., new Intl.NumberFormat() or pass
navigator.language/Intl.DateTimeFormat().resolvedOptions().locale) so numbers
are formatted according to the user's locale and keep the existing try/catch
behavior.
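The locale fix above is a one-line change; a sketch of the corrected `fmt` helper, keeping the try/catch fallback the comment describes:

```typescript
// Format row counts using the runtime's default locale instead of a
// hardcoded 'de-DE'; fall back to plain String() if Intl is unavailable.
function fmt(n: number): string {
  try {
    return new Intl.NumberFormat().format(n); // no locale argument = default
  } catch {
    return String(n);
  }
}
```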
In `@src/renderer/screens/sql/index.tsx`:
- Line 657: The dropdown list item is rendering DuckLake entries with the wrong
icon key: change the usage of connectionIcons.images.duckdb to
connectionIcons.images.ducklake in the list-item renderer (the same place that
sets src={connectionIcons.images.duckdb || defaultIcon}), so DuckLake entries
use connectionIcons.images.ducklake (falling back to defaultIcon) to match the
selected-value renderer.
- Around line 244-260: The async loadDuckLakeCompletions flow can overwrite
state with stale results if the user switches DuckLake instance mid-request;
before awaiting DuckLakeService.extractSchema, capture the current instanceId
(or create an AbortController/sequence token) inside loadDuckLakeCompletions,
and after each await (before calling setDuckLakeCompletions, setDuckLakeSchema,
setDuckLakeSchemaError, setDuckLakeSchemaLoading) verify the captured token
still matches the currently selected instanceId (or that the request wasn't
aborted); if it does not match, bail out and do not set state. Apply this guard
around the calls to DuckLakeService.extractSchema and
generateDuckLakeCompletions in loadDuckLakeCompletions so setDuckLakeCompletions
and setDuckLakeSchema only update when the response is for the active instance.
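The stale-response guard described above can be sketched with a monotonically increasing sequence token (one of the two options the prompt mentions; the other is an AbortController). The loader/setter names here are illustrative, not the component's real API:

```typescript
// Wrap an async fetch so that only the latest in-flight request is
// allowed to apply its result; earlier responses that resolve late
// are silently dropped.
function createGuardedLoader<T>(apply: (result: T) => void) {
  let seq = 0;
  return async (fetcher: () => Promise<T>) => {
    const mySeq = ++seq; // capture this request's token
    const result = await fetcher();
    if (mySeq !== seq) return; // a newer request superseded us: bail out
    apply(result);
  };
}
```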
In `@src/renderer/screens/sql/queryResult.tsx`:
- Around line 108-119: The current fetch in QueryResult treats any returned
payload from DuckLakeService.executeQuery as success; update the fetch logic in
the async page refresh (the block calling DuckLakeService.executeQuery) to first
check if res.success === false (or otherwise indicates failure) and handle it by
surfacing the error (e.g., set an error state / show a toast / call
processLogger.error) and return early instead of applying
setColumns/setRows/setTotalCount; keep the existing seq check
(fetchSeqRef.current) and only update UI state when res.success is truthy and
seq matches, referencing the existing identifiers DuckLakeService.executeQuery,
fetchSeqRef, setColumns, setRows, and setTotalCount.
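A minimal sketch of the check-success-before-applying-state pattern; QueryResponse and the setter callbacks are simplified stand-ins for the component's real types:

```typescript
// Simplified stand-in for the service's response shape.
interface QueryResponse {
  success: boolean;
  error?: string;
  rows?: unknown[];
}

// Returns true only when the response was applied; stale or failed
// responses surface an error / are dropped instead of updating results.
function applyResult(
  res: QueryResponse,
  seq: number,
  currentSeq: number,
  setRows: (rows: unknown[]) => void,
  setError: (msg: string) => void,
): boolean {
  if (seq !== currentSeq) return false; // stale response, ignore
  if (!res.success) {
    setError(res.error ?? 'Query failed');
    return false;
  }
  setRows(res.rows ?? []);
  return true;
}
```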
- Around line 334-342: The Parquet export guard uses exportContext.originalSql
directly which ignores the component's resolved SQL fallback; update the logic
to derive a single resolvedOriginalSql (e.g., from results?.originalSql or
results?.sql or exportContext?.originalSql depending on existing resolution) and
use that variable in canExportParquet and handleExportParquet instead of
exportContext.originalSql; keep the existing connectionType check
(duckdb/ducklake) and the exportContext null check, but base the "has SQL" test
on the resolvedOriginalSql value so export is enabled when results provide the
SQL.
In `@src/renderer/utils/duckLakeCompletions.ts`:
- Around line 125-145: The mergeCompletions function currently uses only
completion.label as the dedupe key causing valid DuckLake suggestions with
common names (e.g., "id") to collide; change the key to a composite identifier
(e.g., label + kind + detail and/or insertText) so completions with the same
label but different context are preserved, keep the existing behavior where
duckLakeCompletions override existingCompletions, and update the Map usage in
mergeCompletions to build and use that composite key when calling set/get.
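The composite-key dedupe can be sketched as below; CompletionItem is a pared-down stand-in for Monaco's completion type, and the key shape (label + kind + detail + insertText) follows the suggestion above:

```typescript
// Simplified stand-in for Monaco's completion item.
interface CompletionItem {
  label: string;
  kind: number;
  detail?: string;
  insertText: string;
}

function mergeCompletions(
  existing: CompletionItem[],
  duckLake: CompletionItem[],
): CompletionItem[] {
  const key = (c: CompletionItem) =>
    `${c.label}|${c.kind}|${c.detail ?? ''}|${c.insertText}`;
  const map = new Map<string, CompletionItem>();
  // Insert existing first, then DuckLake, so DuckLake wins on key collisions.
  for (const c of existing) map.set(key(c), c);
  for (const c of duckLake) map.set(key(c), c);
  return Array.from(map.values());
}
```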
---
Outside diff comments:
In `@src/main/services/connectors.service.ts`:
- Around line 761-794: The validateConnection function is missing a branch for
the new 'ducklake' connection type so valid ducklake configs fall through to the
default error; add a case 'ducklake' in the switch inside validateConnection
(near the existing 'duckdb' case) and implement the appropriate validations (at
minimum require conn.database_path or other ducklake-specific fields your app
expects, mirroring the duckdb checks or adding any extra required properties),
then return/break so ducklake no longer triggers the "Unsupported connection
type!" error.
- Around line 749-754: The generated config spread currently adds
userName/password placeholders when connection.type is not 'databricks',
'duckdb', or 'bigquery' but misses 'ducklake', causing unresolved env var
references; update the condition in the spread that inserts `userName:
\`${${USER}}\`` and `password: \`${${PASSWORD}}\`` (the code around
connection.type checks in connectors.service.ts) to also exclude 'ducklake'
(e.g., add connection.type !== 'ducklake') so that ducklake configs do not
receive USER/PASSWORD placeholders.
In `@src/main/services/duckLake/adapters/postgresql.adapter.ts`:
- Around line 294-376: The adapter methods updateRows, deleteRows, and
upsertRows currently rely on executing 'SELECT changes()' which DuckLake may not
support; change each method to read the affected-row count from the DML
execution result instead of always calling connection.run('SELECT changes() as
changes'). Concretely: after await this.connectionInfo.connection.run(<dml>),
inspect the returned result object from that run call for any metadata (e.g., a
changes/rowsAffected property or similar) and use that when present; only fall
back to a secondary approach (e.g., parsing returned rows or returning 0) if no
metadata is available — update updateRows, deleteRows, and upsertRows to use the
run result metadata rather than assuming SELECT changes() exists.
In `@src/main/services/duckLake/adapters/sqlite.adapter.ts`:
- Around line 274-356: The three methods updateRows, deleteRows, and upsertRows
currently call 'SELECT changes()' which DuckLake/DuckDB may not support;
instead, change the logic to obtain affected row counts from a reliable source:
either use a RETURNING clause and count returned rows (use the result of
connectionInfo.connection.run(...) and derive count from result.getRows().length
when the DML includes RETURNING), or if the driver returns an
affectedRows/rowCount field on the run() result, read that (e.g.,
result.affectedRows or result.rowCount) from connectionInfo.connection.run;
update updateRows, deleteRows and upsertRows (and any similar adapter methods)
to avoid 'SELECT changes()' and use one of these two approaches consistently,
falling back to 0 if no count is available.
In `@src/renderer/components/dataLake/tableDetails/TableDataRowsTab.tsx`:
- Around line 1234-1269: The rows are rendered using Object.values(row) which
can misalign cells vs headers; update the TableRow rendering in TableDataRowsTab
to iterate over queryResult.fields (e.g., queryResult.fields.map) and for each
field access the cell value via row[field.name] (or row[field]) so cells follow
the header order defined by queryResult.fields; keep the existing TableCell/NULL
rendering logic but replace Object.values(row).map usage with mapping over
queryResult.fields to preserve column alignment.
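The header-ordered rendering can be sketched as a small helper; Field is a simplified stand-in for queryResult.fields entries:

```typescript
// Simplified stand-in for the query result's field descriptor.
interface Field {
  name: string;
}

// Returns cell values in the order declared by fields, so cells always
// line up with headers regardless of the row object's key order.
function orderedCells(
  fields: Field[],
  row: Record<string, unknown>,
): unknown[] {
  return fields.map((field) => row[field.name]);
}
```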
In `@src/renderer/components/sqlEditor/index.tsx`:
- Around line 74-88: getCommandType can diverge from
DuckLakeService.detectCommandType causing misclassification; replace the ad-hoc
logic in getCommandType (and the other duplicated blocks) by delegating to the
shared detection routine used by the backend (e.g., call
DuckLakeService.detectCommandType or a shared util) so the same classification
values ('SELECT', 'DML', 'DDL', etc.) are returned and used for both SELECT
limiting and isCommand rendering; update all occurrences (the getCommandType
function and the two other duplicated blocks) to import/use that central
detector and remove the local keyword-based checks.
In `@src/renderer/screens/sql/index.tsx`:
- Around line 370-375: handleCancelQuery currently always calls
connectorsServices.cancelQuery, which fails for DuckLake queries; update it to
check the active tab's connection type (same logic used in the execution routing
around the earlier executeQuery flow) and route cancellation accordingly: if the
tab's connection type is DuckLake call
DuckLakeService.cancelQuery(activeQueryId) and otherwise call
connectorsServices.cancelQuery(activeQueryId); preserve the existing try/catch
and success/error toasts (e.g., 'Query execution cancelled' / error toast) and
reference activeTabId, tabQueryIds, DuckLakeService.cancelQuery, and
connectorsServices.cancelQuery when making the change.
---
Nitpick comments:
In `@src/main/services/duckLake/adapters/postgresql.adapter.ts`:
- Line 25: The file postgresql.adapter.ts imports normalizeNumericValue from the
renderer layer, causing a cross-boundary dependency; move normalizeNumericValue
into a shared utilities module (e.g., main/shared or common/utils) and update
imports in postgresql.adapter.ts (and sqlite.adapter.ts) to import from that
shared location; ensure the exported function name stays normalizeNumericValue
so references in functions like any numeric-parsing helpers inside
postgresql.adapter.ts continue to work without changing runtime behavior.
- Around line 867-900: The COUNT(*) subquery used when building countQuery lacks
a required alias which can break on PostgreSQL; when computing totalRows inside
the if (limit && !hasExistingLimit) && isSelectQuery block, wrap the existing
query as a subquery with an explicit alias (e.g., SELECT COUNT(*) AS total FROM
(<query>) AS sub) when constructing countQuery before calling
this.connectionInfo.connection.run, ensuring the alias is appended to countQuery
generation (refer to the variables countQuery, query, and the call to
this.connectionInfo.connection.run).
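The aliased count query can be sketched as a small helper — the explicit AS sub alias is what PostgreSQL requires on derived tables (names here are illustrative, not the adapter's actual code):

```typescript
// Wrap an arbitrary SELECT in an aliased COUNT(*) subquery; PostgreSQL
// rejects derived tables without an alias.
function buildCountQuery(query: string): string {
  // Strip a trailing semicolon so the statement can be nested.
  const inner = query.trim().replace(/;$/, '');
  return `SELECT COUNT(*) AS total FROM (${inner}) AS sub`;
}
```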
- Around line 758-836: There are near-duplicate implementations of executeQuery,
getMetadataPrefix, and qualifyMetadataTables across adapters; extract the shared
logic into a common base class (e.g., DuckLakeAdapterBase) so both PostgreSQL
and SQLite adapters reuse it, and replace the per-adapter getMetadataPrefix with
a small adapter-specific override that returns only the differing suffix/format
(PostgreSQL: `"db".`, SQLite: `"db".main.`) or a method like
getMetadataDbQualifier used by the base qualifyMetadataTables/executeQuery
logic; update PostgreSQL adapter to keep its specific prefix behavior by
implementing the override and remove the duplicated methods from the adapter
classes.
In `@src/main/services/duckLake/adapters/sqlite.adapter.ts`:
- Around line 708-747: The SQLite adapter duplicates logic in getMetadataPrefix,
qualifyMetadataTables, and executeQuery that is almost identical to the Postgres
adapter; refactor by extracting the shared logic into the CatalogAdapter base
class (or a shared utility/mixin) and parameterize the only adapter-specific
part (the metadata prefix format, i.e., '".main."' vs '"."') via an abstract
method or a constructor-injected value (e.g., getMetadataPrefixFormat or
metadataPrefix string). Move the common implementations of qualifyMetadataTables
and executeQuery to the base class to call the shared getMetadataPrefix logic,
and have SQLiteAdapter and PostgresAdapter supply only the prefix format
override; ensure existing behavior and error handling are preserved and update
references to getMetadataPrefix, qualifyMetadataTables, and executeQuery to use
the new base implementations.
- Line 27: The adapters in main process (e.g., sqlite.adapter.ts) import
normalizeNumericValue from renderer utilities; move the normalizeNumericValue
implementation into the main utilities folder and update imports: create or add
normalizeNumericValue to src/main/utils/fileUtils (export it), change imports in
sqlite.adapter.ts (and other main adapters like postgresql/duckdb adapters) to
import normalizeNumericValue from src/main/utils/fileUtils, and remove or stop
exporting it from src/renderer/utils/fileUtils so renderer code is no longer the
source; ensure function signature/behavior and tests remain unchanged.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
assets/connectionIcons/ducklake.png is excluded by !**/*.png
📒 Files selected for processing (24)
assets/connectionIcons/index.ts
src/main/ipcHandlers/duckLake.ipcHandlers.ts
src/main/services/connectors.service.ts
src/main/services/duckLake.service.ts
src/main/services/duckLake/adapters/duckdb.adapter.ts
src/main/services/duckLake/adapters/postgresql.adapter.ts
src/main/services/duckLake/adapters/sqlite.adapter.ts
src/main/utils/yamlPartialUpdate.ts
src/renderer/components/customTable/CustomTablePagination.tsx
src/renderer/components/dataLake/tableDetails/TableDataRowsTab.tsx
src/renderer/components/schemaTreeViewer/TreeItems.tsx
src/renderer/components/sqlEditor/index.tsx
src/renderer/components/sqlTabs/styles.ts
src/renderer/controllers/duckLake.controller.ts
src/renderer/screens/sql/SchemaTreeViewerWithSchema.tsx
src/renderer/screens/sql/index.tsx
src/renderer/screens/sql/queryResult.tsx
src/renderer/services/duckLake.service.ts
src/renderer/utils/duckLakeCompletions.ts
src/renderer/utils/fileUtils.ts
src/types/backend.ts
src/types/duckLake.ts
src/types/frontend.ts
src/types/ipc.ts
Replace JSON.stringify on raw DuckDB result rows with a BigInt-safe stringify helper in extractSchema so debug logging won’t throw when rows contain BigInt. Applies to both the schemas and tables “first raw row” logs.
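A BigInt-safe stringify of the kind described can be as small as a JSON.stringify replacer; this is a sketch under that assumption, not the helper actually added:

```typescript
// JSON.stringify throws on BigInt; a replacer that converts BigInt values
// to strings makes debug logging safe for raw DuckDB rows.
function safeStringify(value: unknown): string {
  return JSON.stringify(value, (_key, v) =>
    typeof v === 'bigint' ? v.toString() : v,
  );
}
```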
Include MERGE in detectCommandType() DML keyword checks so MERGE queries aren't misclassified as SELECT.
Replace console.error in the BigInt post-normalization diagnostic loop with electron-log, and omit the raw cell value from the message to prevent sensitive data leakage. Log only row index and column key.
…vel clause
Ensure snapshotId is digits-only before interpolating into FOR SYSTEM_TIME AS OF SNAPSHOT to prevent SQL injection; throw DuckLakeError.validation on invalid values.
Prevent SQL injection by rejecting non-numeric snapshotId values before appending `FOR SYSTEM_TIME AS OF SNAPSHOT` to the query.
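The digits-only validation can be sketched as below; throwing a plain Error stands in for DuckLakeError.validation:

```typescript
// Only a validated, digits-only snapshotId is ever interpolated into the
// time-travel clause, so no raw user input reaches the SQL string.
function withSnapshot(query: string, snapshotId: string): string {
  if (!/^\d+$/.test(snapshotId)) {
    // Stand-in for DuckLakeError.validation.
    throw new Error('Invalid snapshotId: must be digits only');
  }
  return `${query} FOR SYSTEM_TIME AS OF SNAPSHOT '${snapshotId}'`;
}
```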
…llisions
- Gate DuckLake QueryResult page fetches until the instance is ready to avoid executeQuery failures on refresh
- Dedupe DuckLake/DB completions using a composite key (label + kind + detail + insertText) so common labels (e.g. "id") don't collide, while keeping DuckLake precedence
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/services/duckLake/adapters/sqlite.adapter.ts (1)
274-356: ⚠️ Potential issue | 🟠 Major
`changes()` is not supported by DuckLake — `rowsAffected` will always be incorrect.
All three DML methods (`updateRows`, `deleteRows`, `upsertRows`) follow the same pattern:

await this.connectionInfo.connection.run(dmlQuery);
const changesResult = await this.connectionInfo.connection.run('SELECT changes() as changes');

The DuckLake extension does not support DuckDB's `changes()` function, so the count returned is always wrong (typically 0). The DuckDB adapter in this same directory handles this by requiring a RETURNING clause and using `result.getRows().length`. Apply the same approach here:
🔧 Proposed fix (shown for `updateRows`; apply the same pattern to `deleteRows` and `upsertRows`)

- async updateRows(
-   _tableName: string,
-   updateQuery: string,
- ): Promise<{ rowsAffected: number }> {
+ async updateRows(
+   _tableName: string,
+   updateQuery: string,
+ ): Promise<{ rowsAffected: number }> {
    try {
      if (!this.connectionInfo) {
        throw new Error('No active connection');
      }
-     await this.connectionInfo.connection.run(updateQuery);
-     const changesResult = await this.connectionInfo.connection.run(
-       'SELECT changes() as changes',
-     );
-     const rows = await changesResult.getRows();
-     const value = rows?.[0]?.[0] ?? 0;
-     const numeric =
-       typeof value === 'object' && value?.hugeint !== undefined
-         ? Number(value.hugeint)
-         : Number(value);
-
-     return { rowsAffected: Number.isFinite(numeric) ? numeric : 0 };
+     // DuckLake does not support changes(); use RETURNING clause to count affected rows.
+     const result = await this.connectionInfo.connection.run(updateQuery);
+     const rows = await result.getRows();
+     return { rowsAffected: rows.length };
    } catch (error) {
      // eslint-disable-next-line no-console
      console.error('Failed to update rows:', error);
      throw error;
    }
  }

Based on learnings: "DuckLake extension does not support DuckDB's `changes()` function for retrieving affected row counts. Ensure updateRows, deleteRows, and upsertRows either use a RETURNING clause with an explicit count or switch to a supported method."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/sqlite.adapter.ts` around lines 274 - 356, The three DML methods updateRows, deleteRows and upsertRows currently call 'SELECT changes()' which DuckLake doesn't support, so replace that logic: require callers to use a RETURNING clause (or otherwise return rows) and, after running the query via this.connectionInfo.connection.run(...), call the returned result.getRows() and use its length to compute rowsAffected (similar to the DuckDB adapter). Update each method (updateRows, deleteRows, upsertRows) to validate this.connectionInfo, run the provided query, obtain rows via changesResult.getRows(), compute rowsAffected as the number of returned rows (fallback to 0), and remove the 'SELECT changes()' usage and related hugeint handling.
♻️ Duplicate comments (1)
src/renderer/screens/sql/queryResult.tsx (1)
352-360: ⚠️ Potential issue | 🟠 Major
Parquet export still ignores the resolved SQL fallback.
Line 354 and Line 359 still gate on `exportContext.originalSql`, even though this component already resolves `originalSql` from multiple sources. That can block valid exports.
💡 Proposed fix

-  const canExportParquet =
-    !!exportContext &&
-    !!exportContext.originalSql &&
+  const canExportParquet =
+    !!exportContext &&
+    !!originalSql &&
     (exportContext.connectionType === 'duckdb' ||
       exportContext.connectionType === 'ducklake');

   const handleExportParquet = async () => {
-    if (!canExportParquet || !exportContext?.originalSql) return;
+    if (!canExportParquet || !originalSql) return;
@@
-    const baseSql =
-      (results as any).originalSql ?? exportContext.originalSql ?? '';
+    const baseSql = originalSql;

Also applies to: 380-382
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 352 - 360, canExportParquet and the guard in handleExportParquet (and the other checks around lines 380-382) incorrectly rely on exportContext.originalSql instead of the component's resolved original SQL; update these checks to use the component's resolved originalSql variable (the computed/resolved SQL value the component already derives from multiple sources) and remove the strict dependency on exportContext.originalSql so exports use the resolved SQL fallback for duckdb/ducklake exports.
🧹 Nitpick comments (1)
src/renderer/screens/sql/index.tsx (1)
153-159: Remove unused `isLoadingDuckLakeInstances` from `connectionInput` memo deps.
This dependency is not used in the memo body and can trigger unnecessary recomputation and downstream DuckLake completion reloads.
💡 Proposed fix

 }, [
   activeConnection,
   activeConnectionId,
   activeConnectionName,
   duckLakeInstances,
-  isLoadingDuckLakeInstances,
 ]);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/index.tsx` around lines 153 - 159, The memo for connectionInput includes an unused dependency isLoadingDuckLakeInstances which causes extra recomputations; remove isLoadingDuckLakeInstances from the dependency array of the connectionInput memo so the memo only depends on the actual used values (e.g., activeConnection, activeConnectionId, activeConnectionName, duckLakeInstances) and ensure connectionInput remains stable for downstream DuckLake completion logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/services/duckLake.service.ts`:
- Around line 1154-1173: The allowedPrefixes array in duckLake.service.ts is
missing 'EXPLAIN' and 'VACUUM', causing validateSingleStatement(request.query,
...) to reject those valid queries; update the allowedPrefixes array (the
constant named allowedPrefixes used with validateSingleStatement) to include
'EXPLAIN' and 'VACUUM' so requests starting with those keywords are accepted.
- Around line 1311-1341: The queryId generation inside the schemaNames.map(...)
passed to adapter.executeQuery uses Date.now() which can collide across
concurrently started promises; update the queryId in the executeQuery call so it
is unique per iteration (e.g., include the schemaName or the map index: use the
map callback's second arg or include escapedSchemaName in the template) so that
queryId (and activeQueries.set) won't overwrite other in-flight queries; keep
the rest of the executeQuery call and escaping logic unchanged.
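Making the queryId unique per iteration can be sketched by folding the map index and schema name into the id; the id format here is illustrative, not the service's actual scheme:

```typescript
// Date.now() alone can collide across concurrently started promises;
// including the map index (and schema name) guarantees uniqueness.
function makeQueryIds(schemaNames: string[]): string[] {
  const ts = Date.now();
  return schemaNames.map(
    (schemaName, index) => `schema-${ts}-${index}-${schemaName}`,
  );
}
```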
In `@src/main/services/duckLake/adapters/duckdb.adapter.ts`:
- Around line 949-952: The code interpolates snapshotId directly into SQL (in
duckdb.adapter.ts where the variable snapshotId is appended to query with "FOR
SYSTEM_TIME AS OF SNAPSHOT '<snapshotId>'"), which allows injection; validate
snapshotId before building the clause by rejecting or sanitizing non-numeric
values (e.g., require /^\d+$/ or Number.isInteger after parse) and throw an
error if invalid, or convert to a safe numeric type and then append; ensure you
only ever use the validated numeric snapshotId when modifying the query variable
so no raw user input is interpolated into SQL.
In `@src/renderer/utils/duckLakeCompletions.ts`:
- Around line 21-25: Replace hard-coded numeric CompletionItemKind IDs with the
Monaco enum constants: in the schema.systemTables.forEach block (where
completions.push currently uses kind: 17) change to kind:
Monaco.languages.CompletionItemKind.Struct; in the system/column completions
where kind: 4 use Monaco.languages.CompletionItemKind.Field; and where kind: 14
use Monaco.languages.CompletionItemKind.Keyword. Update all completions.push
calls mentioned in the review (the blocks around schema.systemTables.forEach and
the other occurrences at the noted ranges) to use the corresponding
Monaco.languages.CompletionItemKind.* constants instead of numeric literals.
---
Outside diff comments:
In `@src/main/services/duckLake/adapters/sqlite.adapter.ts`:
- Around line 274-356: The three DML methods updateRows, deleteRows and
upsertRows currently call 'SELECT changes()' which DuckLake doesn't support, so
replace that logic: require callers to use a RETURNING clause (or otherwise
return rows) and, after running the query via
this.connectionInfo.connection.run(...), call the returned result.getRows() and
use its length to compute rowsAffected (similar to the DuckDB adapter). Update
each method (updateRows, deleteRows, upsertRows) to validate
this.connectionInfo, run the provided query, obtain rows via
changesResult.getRows(), compute rowsAffected as the number of returned rows
(fallback to 0), and remove the 'SELECT changes()' usage and related hugeint
handling.
---
Duplicate comments:
In `@src/renderer/screens/sql/queryResult.tsx`:
- Around line 352-360: canExportParquet and the guard in handleExportParquet
(and the other checks around lines 380-382) incorrectly rely on
exportContext.originalSql instead of the component's resolved original SQL;
update these checks to use the component's resolved originalSql variable (the
computed/resolved SQL value the component already derives from multiple sources)
and remove the strict dependency on exportContext.originalSql so exports use the
resolved SQL fallback for duckdb/ducklake exports.
---
Nitpick comments:
In `@src/renderer/screens/sql/index.tsx`:
- Around line 153-159: The memo for connectionInput includes an unused
dependency isLoadingDuckLakeInstances which causes extra recomputations; remove
isLoadingDuckLakeInstances from the dependency array of the connectionInput memo
so the memo only depends on the actual used values (e.g., activeConnection,
activeConnectionId, activeConnectionName, duckLakeInstances) and ensure
connectionInput remains stable for downstream DuckLake completion logic.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
src/main/services/duckLake.service.ts
src/main/services/duckLake/adapters/duckdb.adapter.ts
src/main/services/duckLake/adapters/postgresql.adapter.ts
src/main/services/duckLake/adapters/sqlite.adapter.ts
src/renderer/screens/sql/index.tsx
src/renderer/screens/sql/queryResult.tsx
src/renderer/utils/duckLakeCompletions.ts
- connectors.service.ts: use path.basename for cross-platform baseName derivation
- queryResult.tsx: resolve Parquet export guard SQL fallback consistently
- duckLake.service.ts: allow EXPLAIN and VACUUM in query validation
- duckdb.adapter.ts: validate snapshotId to prevent SQL injection
- duckLakeCompletions.ts: use Monaco CompletionItemKind enum constants instead of hard-coded numbers
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/services/connectors.service.ts (1)
581-584: ⚠️ Potential issue | 🟠 Major
Add missing `ducklake` cases to methods that can receive DuckLake connections.
Several methods lack `ducklake` support but can be called with DuckLake connections via IPC: `validateConnection` (connector:validate), `testConnection` (connector:test), `generateJdbcUrl`, `mapToDbtConnection`, `mapToDbtProfileOutput`, and `extractSchemaFromConnection`. While DuckLake uses separate handlers for most operations, these methods are still reachable and will throw errors if passed a DuckLake connection.
Add `case 'ducklake':` handlers to these methods, or document that DuckLake connections must never reach them. The `executeSelectStatement` method already has proper handling and can serve as a reference for the pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/connectors.service.ts` around lines 581 - 584, Several methods can receive DuckLake connections via IPC and currently fall through to the default error: add a "case 'ducklake':" branch to validateConnection, testConnection, generateJdbcUrl, mapToDbtConnection, mapToDbtProfileOutput, and extractSchemaFromConnection so DuckLake inputs are handled consistently; follow the pattern used in executeSelectStatement (either delegate to the DuckLake-specific handler or return the appropriate success/empty response) to avoid throwing the Unsupported connection type error, or explicitly document and assert that DuckLake connections must never reach these methods if you intend not to implement handling.
♻️ Duplicate comments (2)
src/renderer/utils/duckLakeCompletions.ts (1)
84-90: ⚠️ Potential issue | 🟠 Major
Preserve same-named columns across schemas during merge.
`mergeCompletions` uses `detail` in the dedupe key, but Line 89 omits the schema name. If two schemas contain the same `table.column (type)`, one completion is overwritten.
💡 Proposed fix

 table.columns.forEach((column) => {
   completions.push({
     label: column.name,
     kind: Monaco.languages.CompletionItemKind.Field,
     insertText: column.name,
-    detail: `${table.name}.${column.name} (${column.type})`,
-    documentation: `Column: ${column.name}\nType: ${column.type}\nTable: ${table.name}`,
+    detail: `${schemaObj.name}.${table.name}.${column.name} (${column.type})`,
+    documentation: `Column: ${column.name}\nType: ${column.type}\nTable: ${schemaObj.name}.${table.name}`,
   });
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/utils/duckLakeCompletions.ts` around lines 84 - 90, The dedupe key in mergeCompletions relies on the completion.detail but the current completions.push entry omits the schema, causing same-named columns from different schemas to collide; update the completions object created in the loop (where completions.push is called) to include the schema in detail (e.g. use `${table.schema}.${table.name}.${column.name} (${column.type})`) and also update documentation to include the schema (and optionally any other human-facing fields that participate in deduplication) so completions from different schemas remain distinct; change only the completion fields (detail/documentation) within the completions.push in duckLakeCompletions.ts and leave mergeCompletions logic unchanged.
src/main/services/duckLake/adapters/duckdb.adapter.ts (1)
607-616: ⚠️ Potential issue | 🟠 Major
Avoid logging raw database list payloads.
Line 615 logs `allDatabases` directly. Those rows can include internal database names/paths and should not be dumped verbatim in logs.
💡 Proposed fix

-  // eslint-disable-next-line no-console
-  console.log('[DuckDB Adapter] All databases:', allDatabases);
+  log.warn('[DuckDB Adapter] Metadata database not found');
+  log.debug('[DuckDB Adapter] Attached database count', {
+    count: allDatabases.length,
+  });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake/adapters/duckdb.adapter.ts` around lines 607 - 616, The code currently logs the raw database rows variable allDatabases, which may expose internal database names/paths; update the DuckDB adapter to stop dumping the full payload: remove or replace the console.log('[DuckDB Adapter] All databases:', allDatabases) with a safe summary (e.g., count of rows or a redacted list) and use the project's logger (e.g., processLogger or the adapter's logger) instead of console; locate the snippet around the connection call this.connectionInfo.connection.run('SELECT database_name FROM duckdb_databases()') and change handling of allDatabases / allDatabasesResult.getRows() to log only non-sensitive info or nothing.
🧹 Nitpick comments (4)
src/main/services/duckLake.service.ts (1)
1407-1412: Avoid lossy BigInt coercion in schema sanitization.
Converting `bigint` to `Number` can lose precision. Prefer string conversion for serialization safety.
♻️ Suggested fix

 if (typeof obj === 'bigint') {
-  return Number(obj);
+  return obj.toString();
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/services/duckLake.service.ts` around lines 1407 - 1412, In sanitizeValue (the local helper function named sanitizeValue), avoid coercing bigint to Number (which can lose precision); change the bigint branch to serialize bigints as strings (e.g., use obj.toString() or String(obj)) so downstream serialization and schema handling remain lossless and safe.
src/renderer/screens/sql/queryResult.tsx (3)
72-75: Inconsistent SQL source between export handlers.
`originalSql` is used for JSON/CSV DuckLake exports (lines 235, 290) while `resolvedOriginalSql` is used for Parquet (line 382). If `results.sql` is present but `originalSql` is not, Parquet export would work but JSON/CSV DuckLake exports would skip the streaming path and fall back to in-memory blob export.
Consider using `resolvedOriginalSql` consistently across all DuckLake export handlers, or document why the distinction is intentional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 72 - 75, The export handlers are using different SQL sources—JSON/CSV handlers read originalSql while Parquet uses resolvedOriginalSql—causing inconsistent behavior; update the JSON/CSV DuckLake export handlers to use resolvedOriginalSql (the same value computed by resolvedOriginalSql) instead of originalSql so all DuckLake streaming exports consistently prefer results.sql when originalSql is absent, by replacing references to originalSql in the JSON/CSV export code paths with resolvedOriginalSql (or alternately compute and pass resolvedOriginalSql into those handlers).
72-73: Consider extending `QueryResponseType` to avoid `as any` casts.
The casts to `any` for accessing `originalSql` and `sql` suggest these fields exist at runtime but aren't in the type definition. Consider extending `QueryResponseType` to include these optional fields for better type safety.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 72 - 73, The code is using (results as any).originalSql and similar casts because QueryResponseType lacks those optional fields; update the QueryResponseType definition to include optional properties originalSql?: string and sql?: string (or the appropriate types), then remove the as any casts in queryResult.tsx and any other files (e.g., where results or exportContext are accessed) so the compiler enforces correct access; ensure any places setting these fields (response builders/parsers) are updated to return the extended type or are cast only at the creation boundary.
147-166: Consider cleanup for async fetch on unmount.
The `useEffect` calls `fetchPage` but has no cleanup function. If the component unmounts mid-fetch, the sequence check prevents stale data but setState may still be called on an unmounted component. React 18+ handles this gracefully, but a cleanup pattern would be cleaner.
💡 Optional cleanup pattern

 React.useEffect(() => {
+  let cancelled = false;
   setColumns(results.fields?.map((f) => f.name) ?? []);
   const baseTotal =
     results.rowCount ?? (results.data ? results.data.length : 0);
   if (isDuckLake) {
     setTotalCount(baseTotal);
     setPage(0);
-    if (isDuckLakeReady) {
+    if (isDuckLakeReady && !cancelled) {
       fetchPage(0, perPage);
     }
   } else {
     setRows(results.data ?? []);
     setTotalCount(baseTotal);
     setPage(0);
   }
+  return () => { cancelled = true; };
   // eslint-disable-next-line react-hooks/exhaustive-deps
 }, [results, isDuckLake, isDuckLakeReady, fetchPage]);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 147 - 166, The effect calls fetchPage asynchronously but never cleans up, so add a cancellation pattern to avoid calling setColumns/setRows/setTotalCount/setPage after unmount: inside the useEffect create a local cancelled flag (or an AbortController) and wrap the async fetchPage call in an async wrapper that checks cancelled before applying state; if fetchPage can accept a signal prefer passing an AbortSignal; finally return a cleanup function that sets cancelled=true (or calls controller.abort()) to prevent state updates on unmounted components—apply this change in the useEffect that references fetchPage, setColumns, setRows, setTotalCount, setPage and isDuckLakeReady.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/services/connectors.service.ts`:
- Around line 886-895: The databaseName field for DuckDB currently returns the
full connection.database_path; update the DuckDB branch in the
mapToRosettaConnection (or the function constructing this object) to call
extractDbNameFromPath(connection.database_path) instead of returning
connection.database_path so the returned value matches generateRosettaYml and
yamlPartialUpdate.ts; keep the ducklake and other branches unchanged and ensure
schemaName logic remains connection.type === 'ducklake' ? '' :
connection.schema.
In `@src/main/services/duckLake.service.ts`:
- Around line 1154-1176: The query validation rejects MERGE while
detectCommandType treats it as DML; update the allowedPrefixes array used by
validateSingleStatement in executeQuery to include 'MERGE' (add 'MERGE'
alongside the other SQL verbs) so validation and detectCommandType are
consistent; make the same change for the other occurrence of the allowedPrefixes
block referenced in the file (the second instance around the later execute
path).
- Around line 1260-1275: The extractSchema flow currently calls
adapter.executeQuery (e.g., the call that sets schemasResult) and only logs
failures; change it to fail fast by checking schemasResult.success immediately
after each executeQuery call (e.g., the schemasResult variable returned from
adapter.executeQuery) and throw an Error (or reject) including the failure
details (schemasResult.error or the returned payload) so callers see the failure
instead of continuing with partial data; apply the same pattern to the other
metadata queries in extractSchema (the subsequent executeQuery calls around the
later block that set variables like tablesResult/columnsResult) so every
adapter.executeQuery is validated and errors are surfaced immediately.
In `@src/main/services/duckLake/adapters/duckdb.adapter.ts`:
- Around line 973-974: The pagination values limit and offset are being
interpolated directly into the SQL and are not validated (plus the current check
`if (limit)` rejects valid 0); update the code that builds the
total-rows/base-query branch to: validate and coerce limit and offset to safe
non-negative integers (e.g., parseInt/Number and clamp >=0), treat 0 as a valid
value, and use parameterized query placeholders (or bind parameters) instead of
string interpolation when inserting limit/offset into SQL. Locate the variables
`limit` and `offset` and the block that "calculate[s] total rows for the base
query" in duckdb.adapter (around the pagination logic) and apply these changes
so only sanitized integers are used in SQL.
- Around line 891-927: qualifyMetadataTables currently rewrites every incoming
SQL which can corrupt user queries; restrict this behavior to internal metadata
queries only by gating it on a flag. Add the optional boolean
internalMetadataQuery to the DuckLakeQueryRequest type, propagate that flag
through the query-handling call path, and change qualifyMetadataTables signature
(or its caller) to only perform table qualification when internalMetadataQuery
is true; update all callers of qualifyMetadataTables (or the upstream method
that invokes it) to pass the new flag so only internalMetadataQuery=true
triggers the metadata table prefixing logic in qualifyMetadataTables.
- Around line 875-880: The returned metadata identifier is quoted but not
escaped, so embedded double quotes will break SQL; update the code that builds
metadataDatabase (the variable named metadataDatabase and the return that
produces `"${metadataDatabase}".main.` in duckdb.adapter.ts) to escape internal
double quotes first (e.g., replace all `"` with `""`) before wrapping in outer
quotes, mirroring the escaping approach used in duckLake.service.ts.
In `@src/renderer/screens/sql/queryResult.tsx`:
- Around line 133-137: The catch block in the QueryResult component currently
only logs and sets state (inside the catch handling for fetch page data) but
doesn't show a toast like the earlier error branch (the one that handles
res.success === false); update the catch block in the same function to call the
same toast/showToast helper used in that res.success === false branch, passing a
clear message that includes e?.message (or a fallback like "Failed to fetch page
data") so the user sees a toast on exceptions as well, while preserving the
existing seq check (seq !== fetchSeqRef.current) and setFetchError behavior.
---
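Two of the adapter fixes listed above (identifier escaping and limit/offset sanitization) can be sketched as small helpers; the function names are hypothetical, not the adapter's actual API:

```typescript
// Escape embedded double quotes by doubling them, then wrap the name
// in outer quotes — the escaping the metadataDatabase comment asks for.
function quoteIdent(name: string): string {
  return `"${name.replace(/"/g, '""')}"`;
}

// Coerce a pagination value to a safe non-negative integer, treating 0
// as valid (unlike a bare `if (limit)` truthiness check).
function toSafePageInt(value: unknown, fallback: number): number {
  const n = Number(value);
  if (!Number.isFinite(n)) return fallback;
  return Math.max(0, Math.trunc(n));
}
```

In the adapter, `quoteIdent(metadataDatabase)` would replace the raw `"${metadataDatabase}"` interpolation, and `toSafePageInt` would gate both `limit` and `offset` before they reach the SQL string (bind parameters remain the stronger option where the driver supports them).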
Outside diff comments:
In `@src/main/services/connectors.service.ts`:
- Around line 581-584: Several methods can receive DuckLake connections via IPC
and currently fall through to the default error: add a "case 'ducklake':" branch
to validateConnection, testConnection, generateJdbcUrl, mapToDbtConnection,
mapToDbtProfileOutput, and extractSchemaFromConnection so DuckLake inputs are
handled consistently; follow the pattern used in executeSelectStatement (either
delegate to the DuckLake-specific handler or return the appropriate
success/empty response) to avoid throwing the Unsupported connection type error,
or explicitly document and assert that DuckLake connections must never reach
these methods if you intend not to implement handling.
---
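The missing-branch comment above can be illustrated with a switch that handles `'ducklake'` explicitly instead of falling through to the default error. Everything here (the union, the return shape, the messages) is an illustrative assumption showing the pattern, not the service's actual code:

```typescript
type ConnectionType = 'postgres' | 'sqlite' | 'duckdb' | 'ducklake';

function validateConnection(type: ConnectionType): { success: boolean; message: string } {
  switch (type) {
    case 'postgres':
    case 'sqlite':
    case 'duckdb':
      return { success: true, message: `validated ${type}` };
    case 'ducklake':
      // Previously fell through to the default error; now handled
      // explicitly (in real code, delegate to the DuckLake handler).
      return { success: true, message: 'validated ducklake' };
    default:
      throw new Error(`Unsupported connection type: ${type}`);
  }
}
```

The same explicit-case pattern would apply to each method the comment lists (`testConnection`, `generateJdbcUrl`, and so on).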
Duplicate comments:
In `@src/main/services/duckLake/adapters/duckdb.adapter.ts`:
- Around line 607-616: The code currently logs the raw database rows variable
allDatabases, which may expose internal database names/paths; update the DuckDB
adapter to stop dumping the full payload: remove or replace the
console.log('[DuckDB Adapter] All databases:', allDatabases) with a safe summary
(e.g., count of rows or a redacted list) and use the project's logger (e.g.,
processLogger or the adapter's logger) instead of console; locate the snippet
around the connection call this.connectionInfo.connection.run('SELECT
database_name FROM duckdb_databases()') and change handling of allDatabases /
allDatabasesResult.getRows() to log only non-sensitive info or nothing.
In `@src/renderer/utils/duckLakeCompletions.ts`:
- Around line 84-90: The dedupe key in mergeCompletions relies on the
completion.detail but the current completions.push entry omits the schema,
causing same-named columns from different schemas to collide; update the
completions object created in the loop (where completions.push is called) to
include the schema in detail (e.g. use
`${table.schema}.${table.name}.${column.name} (${column.type})`) and also update
documentation to include the schema (and optionally any other human-facing
fields that participate in deduplication) so completions from different schemas
remain distinct; change only the completion fields (detail/documentation) within
the completions.push in duckLakeCompletions.ts and leave mergeCompletions logic
unchanged.
---
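The schema-qualified detail string from the duplicate comment above can be sketched like this; the `TableInfo`/`ColumnInfo` shapes are assumptions about the completion data:

```typescript
interface ColumnInfo { name: string; type: string; }
interface TableInfo { schema: string; name: string; columns: ColumnInfo[]; }

// Include the schema so same-named columns in different schemas
// produce distinct dedupe keys in mergeCompletions.
function columnDetail(table: TableInfo, column: ColumnInfo): string {
  return `${table.schema}.${table.name}.${column.name} (${column.type})`;
}
```

With this, `main.orders.id` and `staging.orders.id` no longer collapse into one completion entry.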
Nitpick comments:
In `@src/main/services/duckLake.service.ts`:
- Around line 1407-1412: In sanitizeValue (the local helper function named
sanitizeValue), avoid coercing bigint to Number (which can lose precision);
change the bigint branch to serialize bigints as strings (e.g., use
obj.toString() or String(obj)) so downstream serialization and schema handling
remain lossless and safe.
In `@src/renderer/screens/sql/queryResult.tsx`:
- Around line 72-75: The export handlers are using different SQL
sources—JSON/CSV handlers read originalSql while Parquet uses
resolvedOriginalSql—causing inconsistent behavior; update the JSON/CSV DuckLake
export handlers to use resolvedOriginalSql (the same value computed by
resolvedOriginalSql) instead of originalSql so all DuckLake streaming exports
consistently prefer results.sql when originalSql is absent, by replacing
references to originalSql in the JSON/CSV export code paths with
resolvedOriginalSql (or alternately compute and pass resolvedOriginalSql into
those handlers).
- Around line 72-73: The code is using (results as any).originalSql and similar
casts because QueryResponseType lacks those optional fields; update the
QueryResponseType definition to include optional properties originalSql?: string
and sql?: string (or the appropriate types), then remove the as any casts in
queryResult.tsx and any other files (e.g., where results or exportContext are
accessed) so the compiler enforces correct access; ensure any places setting
these fields (response builders/parsers) are updated to return the extended type
or are cast only at the creation boundary.
- Around line 147-166: The effect calls fetchPage asynchronously but never
cleans up, so add a cancellation pattern to avoid calling
setColumns/setRows/setTotalCount/setPage after unmount: inside the useEffect
create a local cancelled flag (or an AbortController) and wrap the async
fetchPage call in an async wrapper that checks cancelled before applying state;
if fetchPage can accept a signal prefer passing an AbortSignal; finally return a
cleanup function that sets cancelled=true (or calls controller.abort()) to
prevent state updates on unmounted components—apply this change in the useEffect
that references fetchPage, setColumns, setRows, setTotalCount, setPage and
isDuckLakeReady.
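The `sanitizeValue` nitpick above, sketched as a standalone version — only the bigint branch comes from the comment; the recursive structure around it is an assumption:

```typescript
function sanitizeValue(obj: unknown): unknown {
  if (typeof obj === 'bigint') {
    // Serialize as a string instead of Number(obj), which silently
    // loses precision above Number.MAX_SAFE_INTEGER.
    return obj.toString();
  }
  if (Array.isArray(obj)) {
    return obj.map(sanitizeValue);
  }
  if (obj !== null && typeof obj === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(obj as Record<string, unknown>)) {
      out[k] = sanitizeValue(v);
    }
    return out;
  }
  return obj;
}
```

For example, `9007199254740993n` survives as the string `"9007199254740993"`, whereas `Number()` would round it to `9007199254740992`.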
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
src/main/services/connectors.service.ts
src/main/services/duckLake.service.ts
src/main/services/duckLake/adapters/duckdb.adapter.ts
src/renderer/screens/sql/queryResult.tsx
src/renderer/utils/duckLakeCompletions.ts
- add `showSearch` flag to CustomTable/CustomTableToolbar
- disable search input in SQL QueryResult table to avoid broken behavior
🧹 Nitpick comments (4)
src/renderer/components/customTable/CustomTableToolbar.tsx (1)
65-67: Migrate `InputProps` to `slotProps` for MUI v6 compatibility.

`InputProps` is deprecated in MUI v6 and will be removed in v7. Use the `slotProps` API instead:

Migration path

```diff
- InputProps={{
-   style: { height: 32, fontSize: '0.875rem' },
- }}
+ slotProps={{
+   input: {
+     style: { height: 32, fontSize: '0.875rem' },
+   },
+ }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/components/customTable/CustomTableToolbar.tsx` around lines 65 - 67, The code uses the deprecated InputProps prop (in CustomTableToolbar) which breaks MUI v6; replace InputProps={{ style: { height: 32, fontSize: '0.875rem' } }} with slotProps that target the input slot, e.g. slotProps={{ input: { style: { height: 32, fontSize: '0.875rem' } } }} (or use sx instead of style: slotProps={{ input: { sx: { height: 32, fontSize: '0.875rem' } } }}), ensuring the change is made where InputProps is passed (the TextField / input-related component inside CustomTableToolbar).
src/renderer/screens/sql/queryResult.tsx (3)
671-679: Consider using theme color instead of hardcoded value.

Minor consistency nit: The Backdrop uses hardcoded `color: '#fff'` while the rest of the component uses theme values. Consider `theme.palette.common.white` for consistency with the theme system.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 671 - 679, Replace the hardcoded color string in the Backdrop sx with the theme's white color: change the color: '#fff' entry to use the theme accessor (e.g. color: (theme) => theme.palette.common.white) so it matches how zIndex already uses the theme; update the Backdrop instance in queryResult.tsx (the Backdrop with open={isExporting}) to use that theme expression.
65-71: Consider using the existing `formatNumber` utility for consistent locale handling.

The component defines a local `formatNumber` with hardcoded German locale ('de-DE'), while `src/renderer/helpers/utils.ts` already exports a `formatNumber` utility that uses the browser's default locale. This could lead to inconsistent number formatting across the application (e.g., "1.000" vs "1,000" for one thousand).

♻️ Suggested refactor

```diff
- const formatNumber = React.useCallback((n: number) => {
-   try {
-     return new Intl.NumberFormat('de-DE').format(n);
-   } catch {
-     return String(n);
-   }
- }, []);
```

Then import and use the existing utility:

```typescript
import { underscoreToTitleCase, formatNumber } from '../../helpers/utils';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 65 - 71, The local React.useCallback named formatNumber in queryResult.tsx hardcodes the 'de-DE' locale causing inconsistent formatting; replace its local definition by importing and using the shared formatNumber utility from src/renderer/helpers/utils (alongside underscoreToTitleCase if already imported) so the component uses the app-wide, browser-default locale formatter; update the imports at the top of queryResult.tsx to import formatNumber and remove the local formatNumber callback.
187-192: Sorting and search state are tracked but not applied to queries.

The `order`, `orderBy`, and `keyword` fields are tracked in `customPagination` but never passed to `fetchPage` or included in the DuckLake query. If these features aren't planned for this PR, consider removing them to avoid confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/screens/sql/queryResult.tsx` around lines 187 - 192, customPagination is tracking order, orderBy and keyword but those values are never passed to fetchPage or incorporated into the DuckLake query; update calls to fetchPage (and any surrounding pagination handlers) to pass the current order, orderBy and keyword and modify the code that builds the DuckLake SQL (or query params) to apply an ORDER BY using orderBy + order and a WHERE/LIKE filter using keyword (or, if sort/search are out of scope, remove the order/orderBy/keyword fields and their setters from customPagination to avoid dead state). Ensure you update the fetchPage signature (and any invocations) to accept these new parameters and use the same identifiers (customPagination, fetchPage, orderBy, order, keyword) so the state flows into query construction.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/renderer/components/customTable/CustomTableToolbar.tsx`:
- Around line 65-67: The code uses the deprecated InputProps prop (in
CustomTableToolbar) which breaks MUI v6; replace InputProps={{ style: { height:
32, fontSize: '0.875rem' } }} with slotProps that target the input slot, e.g.
slotProps={{ input: { style: { height: 32, fontSize: '0.875rem' } } }} (or use
sx instead of style: slotProps={{ input: { sx: { height: 32, fontSize:
'0.875rem' } } }}), ensuring the change is made where InputProps is passed (the
TextField / input-related component inside CustomTableToolbar).
In `@src/renderer/screens/sql/queryResult.tsx`:
- Around line 671-679: Replace the hardcoded color string in the Backdrop sx
with the theme's white color: change the color: '#fff' entry to use the theme
accessor (e.g. color: (theme) => theme.palette.common.white) so it matches how
zIndex already uses the theme; update the Backdrop instance in queryResult.tsx
(the Backdrop with open={isExporting}) to use that theme expression.
- Around line 65-71: The local React.useCallback named formatNumber in
queryResult.tsx hardcodes the 'de-DE' locale causing inconsistent formatting;
replace its local definition by importing and using the shared formatNumber
utility from src/renderer/helpers/utils (alongside underscoreToTitleCase if
already imported) so the component uses the app-wide, browser-default locale
formatter; update the imports at the top of queryResult.tsx to import
formatNumber and remove the local formatNumber callback.
- Around line 187-192: customPagination is tracking order, orderBy and keyword
but those values are never passed to fetchPage or incorporated into the DuckLake
query; update calls to fetchPage (and any surrounding pagination handlers) to
pass the current order, orderBy and keyword and modify the code that builds the
DuckLake SQL (or query params) to apply an ORDER BY using orderBy + order and a
WHERE/LIKE filter using keyword (or, if sort/search are out of scope, remove the
order/orderBy/keyword fields and their setters from customPagination to avoid
dead state). Ensure you update the fetchPage signature (and any invocations) to
accept these new parameters and use the same identifiers (customPagination,
fetchPage, orderBy, order, keyword) so the state flows into query construction.
📒 Files selected for processing (4)
src/renderer/components/customTable/CustomTableToolbar.tsx
src/renderer/components/customTable/index.tsx
src/renderer/components/customTable/types.ts
src/renderer/screens/sql/queryResult.tsx