
fix: backport drizzle plugin patch to eliminate per-block DDL overhead#135

Merged
loothero merged 1 commit into main from backport-drizzle-patch on Mar 11, 2026
Conversation

@loothero (Member) commented Mar 11, 2026

Summary

  • Pins Apibara dependencies to concrete versions (@apibara/plugin-drizzle@2.1.0-beta.55, etc.) instead of floating next tags
  • Applies the drizzle plugin patch from the next branch that eliminates per-block DDL overhead: triggers are registered once, order_key is passed via PostgreSQL session variable, and invalidation runs in a dedicated transaction
  • Adds COPY patches ./patches to both Dockerfile build stages so pnpm can apply the patch

What the patch fixes

  1. Re-creates reorg triggers on every pending block → triggers registered once (guarded by reorgTriggersRegistered flag)
  2. Removes triggers after every transaction → triggers persist across blocks
  3. Passes order_key as hardcoded trigger argument → uses set_config session variable (one SET per block vs N trigger re-creations)
  4. Runs invalidation inside the data transaction → splits into dedicated transaction to prevent reorg state corruption
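The before/after flow can be sketched as follows. This is a hypothetical TypeScript illustration, not the plugin's actual code: the `Tx` shape, schema name, table names, and indexer id are invented, and the fake transaction simply records SQL strings instead of executing them against PostgreSQL.

```typescript
// Sketch of the patched flow: triggers registered once, one set_config per block.
type Tx = { execute: (sql: string) => void };

const SCHEMA = "airfoil"; // hypothetical schema name
let reorgTriggersRegistered = false; // guard flag, as in the patch

function registerTriggers(tx: Tx, tables: string[], indexerId: string) {
  for (const table of tables) {
    // Idempotent: drop-if-exists, then create; no order_key baked into the trigger.
    tx.execute(`DROP TRIGGER IF EXISTS reorg_${table}_${indexerId} ON ${table};`);
    tx.execute(
      `CREATE CONSTRAINT TRIGGER reorg_${table}_${indexerId} ... ` +
        `EXECUTE FUNCTION ${SCHEMA}.reorg_checkpoint('${table}', 'id', '${indexerId}');`,
    );
  }
}

function setReorgOrderKey(tx: Tx, orderKey: bigint) {
  // One SET per block instead of N trigger re-creations.
  tx.execute(
    `SELECT set_config('${SCHEMA}.reorg_order_key', '${orderKey.toString()}', true);`,
  );
}

function handleBlock(tx: Tx, orderKey: bigint) {
  if (!reorgTriggersRegistered) {
    registerTriggers(tx, ["transfers", "balances"], "my-indexer");
    reorgTriggersRegistered = true; // triggers now persist across blocks
  }
  setReorgOrderKey(tx, orderKey);
  // ...data writes for this block go here...
}

// Simulate three blocks and count DDL vs SET statements.
const issued: string[] = [];
const tx: Tx = { execute: (s) => issued.push(s) };
for (const key of [100n, 101n, 102n]) handleBlock(tx, key);

const ddl = issued.filter((s) => s.startsWith("CREATE")).length;
const sets = issued.filter((s) => s.startsWith("SELECT set_config")).length;
console.log(ddl, sets); // 2 triggers created once; 3 set_config calls (one per block)
```

With the pre-patch behavior, the `CREATE`/`DROP` pair would run for every table on every block instead of once.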

Test plan

  • pnpm install — patch applies cleanly
  • pnpm exec tsc --noEmit — typecheck passes
  • pnpm build — build succeeds
  • pnpm test — 88/88 tests pass
  • Deploy to staging and confirm indexer processes blocks without trigger-related errors

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Chores
    • Updated dependencies to latest beta versions for improved stability.
    • Enhanced chain reorganization handling and trigger management.
    • Added patch support to Docker builds for better dependency management.

Pins Apibara dependencies to concrete versions and applies the
@apibara/plugin-drizzle patch from next branch that:
- Registers reorg triggers once (guarded flag) instead of every block
- Uses PostgreSQL session variable for order_key instead of trigger args
- Removes post-transaction trigger teardown
- Splits invalidation into dedicated transaction to prevent reorg corruption

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 11, 2026 01:00
@vercel

vercel bot commented Mar 11, 2026

The latest updates on your projects.

Project: summit
Status: Ready
Actions: Preview, Comment
Updated (UTC): Mar 11, 2026 1:01am


@coderabbitai

coderabbitai bot commented Mar 11, 2026

📝 Walkthrough

This PR updates the Apibara indexer to beta version 2.1.0 dependencies and introduces a reorg ordering mechanism through a patch that centralizes trigger management logic. The patch modifies trigger registration and removal flows, replaces direct cursor key usage with a configurable reorg order key, and adds state tracking for transaction-bounded trigger operations.

Changes

Docker & Package Configuration (indexer/Dockerfile, indexer/package.json):
Added patches directory to Docker image; updated all Apibara dependencies to 2.1.0-beta.x releases and configured pnpm patch entry for @apibara/plugin-drizzle.

Reorg Trigger Management Patch (indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch):
Implemented centralized reorg order key mechanism replacing direct endCursor usage; refactored registerTriggers signature (removed endCursor parameter); added setReorgOrderKey function; introduced reorgTriggersRegistered and registeredTriggersInTxn flags for state tracking; modified trigger removal logic to defer deletion when triggers are registered in the current transaction; synced changes across src/ and dist/ modules.

Sequence Diagram

sequenceDiagram
    participant Handler as Handler Execution
    participant DB as Database Config
    participant Triggers as Trigger System
    participant Storage as Storage Layer
    participant Tx as Transaction

    Handler->>DB: Check finality status
    alt Finality not finalized
        Handler->>Tx: Begin transaction
        Handler->>Storage: getReorgOrderKey() from config
        Handler->>Triggers: registerTriggers(tx, tables, idColumnMap, indexerId)
        Note over Triggers: Creates triggers using<br/>reorg_order_key from config
        Handler->>Storage: setReorgOrderKey(tx, endCursor)
        Note over Storage: Persist computed order key<br/>Set reorgTriggersRegistered=true
        Handler->>Tx: Commit
    else Skip trigger removal
        Note over Handler: Do not remove triggers<br/>in current transaction
    end
    
    alt Previous finality was pending
        Handler->>Tx: Begin separate transaction
        Handler->>Storage: Invalidate pending finality state
        Handler->>Tx: Commit
    end
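The two-transaction split shown in the diagram can be sketched as follows. This is a log-based stand-in for the database, with a hypothetical `withTransaction` helper; it illustrates the ordering guarantee (invalidation commits before the data transaction begins), not the plugin's actual implementation.

```typescript
// Sketch: invalidation runs and commits in its own transaction, so a failure
// in the subsequent data transaction cannot corrupt reorg state.
type Log = string[];

function withTransaction(log: Log, label: string, body: () => void) {
  log.push(`BEGIN ${label}`);
  try {
    body();
    log.push(`COMMIT ${label}`);
  } catch (e) {
    log.push(`ROLLBACK ${label}`);
    throw e;
  }
}

function handleBlock(log: Log, prevFinality: string) {
  // 1) If the previous block was pending, invalidate it in a dedicated
  //    transaction that commits on its own.
  if (prevFinality === "pending") {
    withTransaction(log, "invalidate", () => {
      log.push("invalidate previous pending block");
    });
  }
  // 2) Data writes for the replacement block in a separate transaction.
  withTransaction(log, "data", () => {
    log.push("insert rows for new block");
  });
}

const log: Log = [];
handleBlock(log, "pending");
console.log(log.join(" | "));
// BEGIN invalidate | invalidate previous pending block | COMMIT invalidate | BEGIN data | insert rows for new block | COMMIT data
```

The tradeoff (discussed in the review comments below): if the process dies between the two commits, readers briefly see the old tip removed with no replacement until the indexer restarts and resumes.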

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The pull request title accurately describes the main change: backporting a drizzle plugin patch to eliminate per-block DDL overhead, which is the core objective of the changeset.
Description check ✅ Passed The pull request description covers the required template sections: Summary (with bullets on what changed and why), Scope (indexer changes), Change Type (fix), and Validation (test commands with results). Risk and rollout sections are not explicitly filled but the change is low-risk.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@claude

claude bot commented Mar 11, 2026

Claude finished @loothero's task


Claude Review - General Engineering Review

Todo List

  • Run git diff command to identify changes outside core packages
  • Review changes for correctness and regression risk
  • Analyze reliability and safety concerns
  • Check performance and efficiency implications
  • Evaluate maintainability aspects
  • Identify testing gaps
  • Format findings according to severity levels
  • Post final review results

Review Findings

Analysis Scope: Reviewing changes outside contracts/, client/, indexer/, and api/ directories as requested.

Result: All changes in this PR are contained within the indexer/ directory:

  • indexer/Dockerfile
  • indexer/package.json
  • indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch
  • indexer/pnpm-lock.yaml

Since the review scope explicitly excludes the indexer/ directory, there are no changes to review under the general engineering review criteria.

No issues found.

Summary: 0 CRITICAL, 0 HIGH, 0 MEDIUM, 0 LOW, 0 INFO


run=22931536972 attempt=1 sha=da02a5b5cba5468857e35f0c391064082ea6cd8b scope=general

@railway-app

railway-app bot commented Mar 11, 2026

🚅 Deployed to the summit-pr-135 environment in Summit stack

Service: summit-indexer
Status: ◻️ Removed (View Logs)
Updated (UTC): Mar 11, 2026 at 1:04 am
1 service not affected by this PR
  • summit-api

@gemini-code-assist

Summary of Changes


This pull request significantly enhances the performance and stability of the Drizzle plugin by backporting a crucial patch that optimizes PostgreSQL DDL operations. It addresses inefficiencies related to reorg trigger management and order_key handling, which previously introduced per-block overhead. The changes also ensure dependency stability by pinning Apibara packages to specific beta versions and update the Docker build process to correctly integrate the necessary patch.

Highlights

  • Dependency Pinning: Apibara dependencies have been pinned to concrete beta versions (e.g., @apibara/plugin-drizzle@2.1.0-beta.55) instead of floating next tags, ensuring more stable builds.
  • Drizzle Plugin Patch Backport: A critical patch from the next branch has been backported to the Drizzle plugin to eliminate per-block Data Definition Language (DDL) overhead in PostgreSQL.
  • Docker Build Updates: Dockerfiles for the indexer have been updated to copy the patches directory, allowing pnpm to correctly apply the necessary patch during the build process.
  • Optimized Trigger Management: Reorg triggers are now registered only once and persist across blocks, preventing their frequent re-creation and removal, which previously caused significant overhead.
  • Improved Order Key Handling: The order_key is now passed via a PostgreSQL session variable (set_config) instead of a hardcoded trigger argument, reducing DDL operations per block.
  • Dedicated Invalidation Transaction: The invalidation logic has been moved to a dedicated transaction, separating it from the data transaction to prevent reorg state corruption.


Changelog
  • indexer/Dockerfile
    • Added COPY patches ./patches instruction to both the build and production stages to ensure patch files are available for pnpm.
  • indexer/package.json
    • Updated @apibara/indexer, @apibara/plugin-drizzle, @apibara/protocol, @apibara/starknet, and apibara dependencies from next to specific beta versions.
    • Added a pnpm.patchedDependencies entry to specify the patch for @apibara/plugin-drizzle@2.1.0-beta.55.
  • indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch
    • Introduced a new patch file for @apibara/plugin-drizzle to modify its core logic.
    • Modified initializeReorgRollbackTable to retrieve order_key from a PostgreSQL session variable (current_setting) instead of a direct trigger argument.
    • Updated registerTriggers to include DROP TRIGGER IF EXISTS before creating new triggers and removed endCursor as a direct parameter.
    • Added a new setReorgOrderKey function to set the reorg_order_key session variable.
    • Refactored the drizzleStorage middleware to use a reorgTriggersRegistered flag, call setReorgOrderKey, and move the invalidate operation to a dedicated transaction when prevFinality is 'pending'.
    • Removed removeTriggers calls from the main handler logic, allowing triggers to persist.
  • indexer/pnpm-lock.yaml
    • Updated locked versions for all @apibara related packages to their new beta versions.
    • Added patchedDependencies entry for @apibara/plugin-drizzle@2.1.0-beta.55 with its corresponding hash and path.
    • Updated libc entries for various @rolldown and @rollup packages.

@github-actions

Codex Review - General Engineering Review

No issues found.

Summary: 0 CRITICAL, 0 HIGH, 0 MEDIUM, 0 LOW, 0 INFO

@claude

claude bot commented Mar 11, 2026

Claude finished @loothero's task


Claude Review - Indexer/API Review

Critical Analysis Complete

Findings:

No issues found.

Summary: 0 CRITICAL, 0 HIGH, 0 MEDIUM, 0 LOW, 0 INFO

Detailed Review

1. Dependency Pinning (package.json)

  • ✅ All @apibara/* dependencies properly pinned to specific beta versions
  • ✅ Eliminates floating next tag drift risk
  • ✅ Consistent versioning across the stack (beta.55/beta.56)
  • ✅ Proper patchedDependencies configuration for pnpm

2. Docker Build Integration (Dockerfile)

  • COPY patches ./patches added to both build stages
  • ✅ Patch directory available before pnpm install execution
  • ✅ No changes to production runtime behavior

3. Core Patch Analysis (@apibara__plugin-drizzle@2.1.0-beta.55.patch)

A. Trigger Management Optimization:

  • One-time registration: reorgTriggersRegistered flag prevents repeated DDL ops
  • Idempotent trigger creation: DROP TRIGGER IF EXISTS before CREATE eliminates conflicts
  • Persistent triggers: Removes removeTriggers() calls, eliminating per-block teardown

B. Order Key Handling via Session Variable:

  • PostgreSQL session variable: current_setting('${SCHEMA_NAME}.reorg_order_key', true)
  • Safe null handling: Trigger returns NULL when session variable unset
  • Type safety: Proper TEXT → INTEGER conversion with validation

C. Transaction Separation for Invalidation:

  • Dedicated invalidation transaction: Prevents reorg writes from mixing with current block data
  • Correct ordering: Invalidation commits before replacement transaction begins
  • Documented tradeoff: Clear comment about partial state visibility risk on process failure

4. Reorg Safety & Finality Handling:

  • Finality-aware logic: Only registers triggers for non-finalized blocks
  • State persistence: Maintains reorg infrastructure across block processing
  • Error handling: Preserves transaction isolation on failures

5. Operational Reliability:

  • DDL reduction: From N trigger operations per block to 1 SET operation
  • PostgreSQL efficiency: Session variables avoid trigger argument parsing overhead
  • Recovery compatibility: Works with Railway restart policy + checkpoint resumption

Summary

This patch implements a well-engineered optimization that eliminates the primary DDL performance bottleneck in Apibara's drizzle plugin. The changes are:

  1. Architecturally sound: Proper separation of concerns between trigger lifecycle and per-block state
  2. Reorg-safe: Maintains data consistency guarantees during chain reorganizations
  3. Production-ready: Includes appropriate error handling and operational considerations
  4. Performance-focused: Reduces PostgreSQL overhead from O(N tables) to O(1) per block

The documented tradeoff regarding partial state visibility during process failures is acceptable given Apibara's restart-based recovery model.


run=22931536972 attempt=1 sha=da02a5b5cba5468857e35f0c391064082ea6cd8b scope=indexer-api

@codecov

codecov bot commented Mar 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@coderabbitai bot left a comment
🧹 Nitpick comments (1)
indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch (1)

76-94: Dedicated invalidation transaction introduces a brief inconsistency window.

The comment correctly documents the tradeoff: if the main data transaction fails after the invalidation transaction commits, readers see the tip removed until process restart. This is acceptable given Apibara's no-retry behavior and reliance on Railway's restart policy, but worth monitoring in staging.

Consider adding observability (metric/log) when the main transaction fails after invalidation commits, to aid debugging if this edge case occurs.

Also applies to: 211-229, 292-313

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch around lines 76-94, add observability for the window where the dedicated invalidation
transaction commits but the subsequent main transaction fails: after the
invalidation call inside the handler middleware (the withTransaction/invalidate
sequence) record a log/metric when the later replacement/main withTransaction
fails; emit a structured processLogger.error or increment a metric (include
indexerId, cursor/endCursor, prevFinality, and the caught error) and tag that
this occurred after invalidation so SRE can find these edge cases; place this
reporting in the catch handling around the main withTransaction that follows the
invalidate call and reuse existing symbols (withTransaction, invalidate,
indexerId, cursor/ endCursor, and registeredTriggersInTxn) for context.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 717546e9-fda7-44f1-8358-7941954590ea

📥 Commits

Reviewing files that changed from the base of the PR and between f47d2eb and da02a5b.

⛔ Files ignored due to path filters (1)
  • indexer/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (3)
  • indexer/Dockerfile
  • indexer/package.json
  • indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch

@github-actions
Copy link

Codex Review - Indexer/API Review

[MEDIUM] indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch:304 - Reorg invalidation is now committed in a separate transaction before replacement writes, and recovery is explicitly coupled to an external Railway restart assumption if the replacement transaction fails.
Impact: A failure between the two transactions can leave DB state with the previous pending tip removed but no replacement applied, causing stale/partial reads until an external restart/replay happens; this is a correctness gap under real transient DB/runtime failures.
Fix: Add deterministic in-process recovery instead of platform coupling, e.g. bounded retry for the replacement transaction in the middleware path before surfacing failure, and on terminal failure force a controlled fatal exit path documented as a hard runtime requirement (rather than relying on an implicit Railway policy comment).

Summary: 0 CRITICAL, 0 HIGH, 1 MEDIUM, 0 LOW, 0 INFO

Copilot AI left a comment

Pull request overview

This PR backports an upstream @apibara/plugin-drizzle patch and pins Apibara dependencies to concrete versions to eliminate per-block DDL overhead during indexing (reorg trigger creation/removal), improving indexer performance and reducing DB churn.

Changes:

  • Pin Apibara packages from floating next tags to specific 2.1.0-beta.* versions.
  • Apply a pnpm patch to @apibara/plugin-drizzle@2.1.0-beta.55 to register reorg triggers once, pass order_key via a PostgreSQL session variable, and run invalidation in a separate transaction.
  • Update Docker build stages to include patches/ so pnpm install can apply the patch in-container.

Reviewed changes

Copilot reviewed 3 out of 4 changed files in this pull request and generated 2 comments.

indexer/pnpm-lock.yaml — Adds patchedDependencies metadata and pins Apibara package resolutions to specific beta versions.
indexer/patches/@apibara__plugin-drizzle@2.1.0-beta.55.patch — Backports the drizzle plugin changes to avoid per-block trigger DDL and adjust reorg invalidation behavior.
indexer/package.json — Pins Apibara dependencies and configures pnpm patchedDependencies for the drizzle plugin patch.
indexer/Dockerfile — Copies patches/ in both build and production stages so the pnpm patch applies during container builds.
Files not reviewed (1)
  • indexer/pnpm-lock.yaml: Language not supported


+ try {
+ await tx.execute(
+ sql.raw(
+ `SELECT set_config('${SCHEMA_NAME}.reorg_order_key', '${Number(endCursor.orderKey)}', true);`,
Copilot AI commented Mar 11, 2026

setReorgOrderKey uses Number(endCursor.orderKey) when writing to set_config. In Apibara, orderKey is a bigint (e.g., status.starting?.orderKey ?? 0n in scripts/check-dna-status.ts), so converting to Number can lose precision (or stringify to scientific notation) and lead to incorrect rollback cursor values. Prefer passing the bigint as a decimal string (e.g., endCursor.orderKey.toString()) and casting to an appropriate PG integer type on the database side if needed; update both the src/storage.ts and dist/* hunks in this patch for consistency.

Suggested change
+ `SELECT set_config('${SCHEMA_NAME}.reorg_order_key', '${Number(endCursor.orderKey)}', true);`,
+ `SELECT set_config('${SCHEMA_NAME}.reorg_order_key', '${endCursor.orderKey.toString()}', true);`,
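The precision concern can be reproduced directly in plain Node, independent of any Apibara types. This is a pure illustration of `bigint` → `Number` coercion:

```typescript
// bigint order keys above Number.MAX_SAFE_INTEGER (2^53 - 1) lose precision
// when coerced with Number(); toString() preserves the exact decimal digits.
const orderKey: bigint = 9007199254740993n; // MAX_SAFE_INTEGER + 2

const viaNumber = String(Number(orderKey)); // rounds to the nearest double
const viaString = orderKey.toString();      // exact

console.log(viaNumber); // "9007199254740992" — off by one
console.log(viaString); // "9007199254740993" — exact
```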

Comment on lines +376 to +387
+ order_key_text TEXT := current_setting('${SCHEMA_NAME}.reorg_order_key', true);
+ order_key INTEGER;
+ indexer_id TEXT := TG_ARGV[2]::TEXT;
new_id_value TEXT := row_to_json(NEW.*)->>id_col;
old_id_value TEXT := row_to_json(OLD.*)->>id_col;
BEGIN
+ IF order_key_text IS NULL THEN
+ RETURN NULL;
+ END IF;
+
+ order_key := order_key_text::INTEGER;
+
Copilot AI commented Mar 11, 2026

The trigger function still declares/casts order_key as INTEGER (order_key := order_key_text::INTEGER). If orderKey is a bigint cursor, this can overflow 32-bit integers and break reorg rollback for higher block heights. Consider using BIGINT consistently (order_key BIGINT and order_key_text::BIGINT) and ensure the rollback table cursor column matches.

@gemini-code-assist bot left a comment

Code Review

This pull request backports a performance patch for the drizzle plugin, which is a significant improvement, and the changes to reorg handling are well-structured. However, the patch introduces a potential SQL Injection vulnerability in the trigger registration logic due to unsanitized string interpolation in raw SQL queries, specifically with the indexerId and tableIdColumn variables. Additionally, two important issues related to handling large block numbers (bigint values) have been identified that could lead to data corruption or errors. My review includes suggestions to address these potential bugs and the SQL injection vulnerability.

Comment on lines +377 to +386
+ order_key INTEGER;
+ indexer_id TEXT := TG_ARGV[2]::TEXT;
new_id_value TEXT := row_to_json(NEW.*)->>id_col;
old_id_value TEXT := row_to_json(OLD.*)->>id_col;
BEGIN
+ IF order_key_text IS NULL THEN
+ RETURN NULL;
+ END IF;
+
+ order_key := order_key_text::INTEGER;


critical

The order_key variable in the trigger function is declared as INTEGER. A PostgreSQL INTEGER has a maximum value of 2,147,483,647. Starknet block numbers (order_key) are u64 and will eventually exceed this limit. This will cause an 'integer out of range' error in the database trigger, breaking the reorg mechanism. This should be BIGINT to support u64 block numbers.

        order_key BIGINT;
        indexer_id TEXT := TG_ARGV[2]::TEXT;
         new_id_value TEXT := row_to_json(NEW.*)->>id_col;
         old_id_value TEXT := row_to_json(OLD.*)->>id_col;
       BEGIN
        IF order_key_text IS NULL THEN
          RETURN NULL;
        END IF;

        order_key := order_key_text::BIGINT;

+ try {
+ await tx.execute(
+ sql.raw(
+ `SELECT set_config('${SCHEMA_NAME}.reorg_order_key', '${Number(endCursor.orderKey)}', true);`,


high

There is a potential for precision loss when converting bigint to Number. The endCursor.orderKey is a bigint and can represent numbers larger than Number.MAX_SAFE_INTEGER. Converting it to Number can lead to a loss of precision for large block numbers, causing an incorrect order_key to be set. This could compromise the reorg handling mechanism. It's safer to convert the bigint to a string.

        `SELECT set_config('${SCHEMA_NAME}.reorg_order_key', '${endCursor.orderKey.toString()}', true);`,

Comment on lines +39 to +45
+ DROP TRIGGER IF EXISTS ${getReorgTriggerName(table, indexerId)} ON ${table};
CREATE CONSTRAINT TRIGGER ${getReorgTriggerName(table, indexerId)}
AFTER INSERT OR UPDATE OR DELETE ON ${table}
DEFERRABLE INITIALLY DEFERRED
- FOR EACH ROW EXECUTE FUNCTION ${constants.SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', ${Number(endCursor.orderKey)}, '${indexerId}');
+ FOR EACH ROW EXECUTE FUNCTION ${constants.SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', '${indexerId}');
`)


security-medium medium

The patch introduces a potential SQL Injection vulnerability in the registerTriggers function. It uses sql.raw to construct a DDL statement (DROP TRIGGER and CREATE CONSTRAINT TRIGGER) by interpolating variables like indexerId and tableIdColumn directly into the SQL string.

If these configuration options (e.g., indexerName passed to the plugin) are sourced from untrusted external input such as environment variables or user-provided configuration, an attacker could execute arbitrary SQL commands. For example, a crafted indexerId containing a single quote could break out of the literal and inject additional SQL statements.

To remediate this, ensure that all variables interpolated into raw SQL are properly sanitized or escaped. For identifiers like trigger names, use proper SQL identifier quoting (e.g., double quotes in PostgreSQL). For string literals, use SQL string escaping or Drizzle's built-in SQL construction methods that handle parameterization safely.
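As an illustration of the suggested remediation, here is a hand-rolled sketch of PostgreSQL-style quoting helpers. These are not Drizzle's API (drizzle-orm's parameterized `sql` template avoids this class of problem entirely); the schema and function names are invented:

```typescript
// Minimal PostgreSQL quoting helpers for values interpolated into raw DDL.
// Hand-rolled for illustration; prefer library parameterization when available.

// Quote an identifier (table/trigger name): wrap in double quotes and
// double any embedded double quotes.
function quoteIdent(name: string): string {
  return `"${name.replace(/"/g, '""')}"`;
}

// Quote a string literal: wrap in single quotes and double any embedded
// single quotes.
function quoteLiteral(value: string): string {
  return `'${value.replace(/'/g, "''")}'`;
}

// A crafted indexerId can no longer break out of the string literal.
const indexerId = "evil'; DROP TABLE transfers; --";
const stmt = `EXECUTE FUNCTION app.reorg_checkpoint(${quoteLiteral(indexerId)});`;

console.log(stmt);
// EXECUTE FUNCTION app.reorg_checkpoint('evil''; DROP TABLE transfers; --');
```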

Comment on lines +174 to +180
+ DROP TRIGGER IF EXISTS ${getReorgTriggerName(table, indexerId)} ON ${table};
CREATE CONSTRAINT TRIGGER ${getReorgTriggerName(table, indexerId)}
AFTER INSERT OR UPDATE OR DELETE ON ${table}
DEFERRABLE INITIALLY DEFERRED
- FOR EACH ROW EXECUTE FUNCTION ${SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', ${Number(endCursor.orderKey)}, '${indexerId}');
+ FOR EACH ROW EXECUTE FUNCTION ${SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', '${indexerId}');
`)


security-medium medium

Similar to the issue in dist/index.cjs, this section of the patch for dist/index.mjs uses sql.raw with unsanitized string interpolation for trigger registration, leading to a potential SQL Injection vulnerability if configuration options are untrusted.

Comment on lines +403 to +409
+ DROP TRIGGER IF EXISTS ${getReorgTriggerName(table, indexerId)} ON ${table};
CREATE CONSTRAINT TRIGGER ${getReorgTriggerName(table, indexerId)}
AFTER INSERT OR UPDATE OR DELETE ON ${table}
DEFERRABLE INITIALLY DEFERRED
- FOR EACH ROW EXECUTE FUNCTION ${SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', ${Number(endCursor.orderKey)}, '${indexerId}');
+ FOR EACH ROW EXECUTE FUNCTION ${SCHEMA_NAME}.reorg_checkpoint('${table}', '${tableIdColumn}', '${indexerId}');
`),


security-medium medium

This section of the patch for src/storage.ts also contains the same SQL Injection vulnerability where indexerId and tableIdColumn are interpolated into a raw SQL string used for trigger creation.

@loothero loothero merged commit 9020d30 into main Mar 11, 2026
29 checks passed