
Conversation

@Benzbeeb (Contributor) commented Dec 3, 2025

  • Updated BlockHandler and RichListHandler to utilize the new querier for improved data retrieval.
  • Refactored NFT handling to replace util functions with cache methods for consistency.
  • Enhanced context handling in various modules to support better concurrency and error management.
  • Removed deprecated query functions and streamlined cache initialization across the application.

Summary by CodeRabbit

  • New Features

    • Centralized, rotation-aware Querier for Cosmos/EVM/Move/WASM; richer EVM trace/internal-tx types.
  • Improvements

    • Better reliability: endpoint rotation, context-aware cancellation, timeout/backoff and retry handling.
    • Broader caching: cache-backed lookups for accounts, validators and EVM-denom resolution; updated NFT key handling.
  • Configuration Changes

    • Support lists of RPC/REST/JSON-RPC endpoints and a new validator cache size setting.

✏️ Tip: You can customize this high-level summary in your review settings.
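For orientation, here is a minimal sketch of the slice-based endpoint configuration described above. The field and accessor names (RpcUrls, RestUrls, JsonRpcUrls, GetRpcUrl, ValidatorCacheSize) come from this summary; everything else, including the selection strategy inside GetRpcUrl, is an assumption.

```go
// Illustrative sketch only; the real config package has more fields and
// validation than shown here.
type ChainConfig struct {
	ChainId     string
	RpcUrls     []string // previously a single RPC URL
	RestUrls    []string // previously a single REST URL
	JsonRpcUrls []string // previously a single JSON-RPC URL
}

// GetRpcUrl returns one endpoint from the configured list; the real accessor
// may rotate or otherwise load-balance between entries.
func (cc *ChainConfig) GetRpcUrl() string {
	if len(cc.RpcUrls) == 0 {
		return ""
	}
	return cc.RpcUrls[0]
}

type CacheConfig struct {
	CacheSize          int
	ValidatorCacheSize int // new setting introduced in this PR
}
```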


coderabbitai bot commented Dec 3, 2025

Walkthrough

Introduces a reusable, context-aware Querier and splits util into util/cache and util/querier; threads context+querier through handlers, indexer, collectors, extensions; replaces many local REST/JSON‑RPC helpers with Querier calls; converts single URL config fields to slices and adds cache accessors and new shared types.

Changes

• API handlers (block)
  Files: api/handler/block/block.go, api/handler/block/handler.go, api/handler/block/types.go, api/handler/block/cache.go
  BlockHandler gains a querier; ToBlocksResponse/ToBlockResponse now accept context.Context and *querier.Querier; validator lookup uses the Querier; validator cache accessors are used.
• API registration / init
  Files: api/handler/handler.go, cmd/api.go, cmd/indexer.go
  Initialization updated to initialize the querier and cache subpackages; the Block handler constructor now receives the config.
• Config
  Files: config/chain.go, config/config.go
  Chain URLs become slices (RpcUrls, RestUrls, JsonRpcUrls) with Get* accessors; CacheConfig adds ValidatorCacheSize; validations refactored.
• Querier (new)
  Files: util/querier/querier.go, util/querier/*.go
  New Querier type with endpoint rotation, retries/backoff, and methods for Cosmos/EVM/Move/WASM (GetLatestHeight, GetCosmosTxs, GetEvmTxs, GetCollectionName, GetTokenUri, GetMoveResource, GetAllBalances, GetNodeInfo, QueryERC20Balances, TraceCallByBlock, etc.).
• Cache utilities (moved)
  Files: util/cache/cache.go, util/common-handler/common/handler.go
  util split into a cache package; adds validator/account/evm-denom caches with Get/Set accessors; call sites updated to use the cache.GetOrCreate* helpers.
• Indexer & submodules (context propagation)
  Files: indexer/indexer.go, indexer/types/types.go, indexer/scraper/*, indexer/collector/*, indexer/collector/*/submodule.go
  Prepare signatures now accept context.Context; the Indexer and collectors use the Querier for latest height and data fetches; prepare retry/cancellation logic added; the scraper uses the GetRpcUrl accessor.
• TX collection (removed helpers)
  Files: indexer/collector/tx/*, indexer/collector/tx/query.go
  Local tx query/pagination helpers removed; TxSubmodule uses Querier.GetCosmosTxs/GetEvmTxs; RestTx types qualified as types.RestTx.
• NFT submodules
  Files: indexer/collector/evm-nft/*, indexer/collector/wasm-nft/*, indexer/collector/move-nft/*
  NftKey switched to cache.NftKey; submodules gain a querier and accept context; local on-chain query helpers removed and replaced by Querier methods where applicable.
• Extensions: internaltx & richlist
  Files: indexer/extension/internaltx/*, indexer/extension/richlist/*
  Internal-tx and richlist now use the Querier for node/trace/balances/module-account lookups; types moved/adapted to the types package; many REST/JSON-RPC helpers removed.
• Removed / disabled implementations
  Files: indexer/collector/*/query.go, indexer/collector/wasm-nft/query.go, indexer/util/query.go, indexer/extension/richlist/utils/rest.go, patcher/*/wasm-nft.go
  Multiple local REST/JSON-RPC helper files and Wasm NFT patcher implementations removed or commented out (functionality replaced by the Querier or disabled).
• Types consolidation
  Files: types/*.go
  New centralized types added for accounts, EVM, internal traces, JSON-RPC, Move resources, validators, wasm responses, etc.
• Cache & util call sites
  Files: assorted (collector/*, extension/*, tests)
  Call sites migrated from util helpers to cache/querier equivalents; tests updated to use cache.InitializeCaches and ValidatorCacheSize.
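To make the "context + querier" threading concrete, here is a self-contained sketch of the pattern used by the TX collection cohort. The method names GetCosmosTxs/GetEvmTxs come from the summary above; the signatures, the txQuerier interface, and the []string element type are simplifying assumptions, not the real types.

```go
package example

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// txQuerier abstracts the two Querier calls the tx submodule now relies on.
type txQuerier interface {
	GetCosmosTxs(ctx context.Context, height int64, txCount int) ([]string, error)
	GetEvmTxs(ctx context.Context, height int64) ([]string, error)
}

// prepareTxs fetches Cosmos and EVM txs concurrently; a failure in either
// branch cancels the shared context so the sibling can stop early.
func prepareTxs(ctx context.Context, q txQuerier, height int64, txCount int) (rest, evm []string, err error) {
	g, gCtx := errgroup.WithContext(ctx)

	g.Go(func() error {
		r, err := q.GetCosmosTxs(gCtx, height, txCount)
		if err != nil {
			return fmt.Errorf("cosmos txs at height %d: %w", height, err)
		}
		rest = r
		return nil
	})

	g.Go(func() error {
		e, err := q.GetEvmTxs(gCtx, height)
		if err != nil {
			return fmt.Errorf("evm txs at height %d: %w", height, err)
		}
		evm = e
		return nil
	})

	if err := g.Wait(); err != nil {
		return nil, nil, err
	}
	return rest, evm, nil
}
```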

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant API as API Handler
    participant Q as Querier
    participant C as Cache
    participant EP as RPC/REST Endpoints

    Client->>API: Request (GetBlocks / GetBlockByHeight)
    API->>Q: Request data (ctx, params)
    Q->>C: GetValidatorCache(addr)
    alt validator cached
        C-->>Q: validator
    else not cached
        Q->>EP: HTTP/JSON-RPC request (rotating endpoints, retries)
        EP-->>Q: response
        Q->>C: SetValidatorCache(validator)
    end
    Q-->>API: enriched block(s)
    API-->>Client: JSON response
sequenceDiagram
    participant Indexer
    participant Collector
    participant SubA as BlockSubmodule
    participant SubB as TxSubmodule
    participant SubC as NFTSubmodule
    participant Q as Querier

    Indexer->>Collector: Prepare(ctx, scrapedBlock)
    Collector->>Collector: errgroup.WithContext(ctx)
    par prepare submodules
        Collector->>SubA: Prepare(gCtx, block)
        Collector->>SubB: Prepare(gCtx, block)
        Collector->>SubC: Prepare(gCtx, block)
    and
        SubA->>Q: GetCollectionData(gCtx,...)
        SubB->>Q: GetCosmosTxs(gCtx,...)
        SubC->>Q: GetEvmTxs(gCtx,...)
    end
    Collector->>Collector: wait for all
    Collector-->>Indexer: result

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120+ minutes

  • Focus review areas:
    • util/querier package: endpoint rotation, generic executeWithEndpointRotation, retry/backoff semantics, and correctness across Cosmos/EVM/Move/WASM flows.
    • Context propagation: verify errgroup.WithContext usage, cancellation behavior, and no goroutine leaks across many Prepare changes.
    • Removed helpers vs. Querier replacements: ensure pagination, retries, batching, and edge-case handling are preserved by Querier implementations.
    • Type/package migrations: cross-check all imports and type qualifications (types, cache, querier) to avoid cycles or name collisions.
    • Indexer prepare retry/cancellation logic and interaction with collector/submodules (ensure metrics updates remain correct).
    • Disabled/commented patcher code: confirm intended behavior and downstream impacts where patching was removed.

Poem

🐰 I hopped through endpoints, nose to the breeze,

Querier in paw, rotating with ease.
Context in pockets, caches tucked neat,
Types assembled, calls routed—nice and neat.
A nimble refactor—carrot crunch for the feat.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 21.59%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately reflects the main change: integrating the querier and cache utilities across handlers is the primary refactoring throughout the changeset.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch multiple-query

Comment @coderabbitai help to get the list of available commands and usage tips.


coderabbitai bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
util/querier/query.go (1)

259-275: Rate limit retries are also unbounded.

Similar to timeout handling, 429 responses trigger infinite retries. While rate limiting is transient, a misconfigured or permanently rate-limited endpoint could cause indefinite blocking.

Consider applying the same bounded retry logic suggested for timeouts.
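As a rough illustration of what bounded 429 handling could look like (this is not the existing executeWithEndpointRotation code; the retry limit, backoff values, and the injected request function are all assumptions):

```go
package example

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// getWithBoundedRateLimitRetries retries a request on HTTP 429 up to a fixed
// number of attempts with exponential backoff, instead of looping forever.
func getWithBoundedRateLimitRetries(
	ctx context.Context,
	do func(context.Context) (*http.Response, error),
) (*http.Response, error) {
	const maxRetries = 5
	backoff := 500 * time.Millisecond

	for attempt := 0; ; attempt++ {
		resp, err := do(ctx)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()

		if attempt >= maxRetries {
			return nil, fmt.Errorf("still rate limited after %d attempts", attempt+1)
		}

		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
			backoff *= 2 // the real code would also add jitter
		}
	}
}
```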

indexer/extension/internaltx/extension.go (1)

310-325: Remove unused fiber client.

Similar to query.go, the fiber client is acquired (lines 311-312) but never used since i.querier.TraceCallByBlock now handles the HTTP calls internally.

Apply this diff to remove the dead code:

 func (i *InternalTxExtension) scrapeHeight(ctx context.Context, height int64) (*WorkItem, error) {
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
-
 	// Scrape internal transaction data
 	callTraceRes, err := i.querier.TraceCallByBlock(ctx, height)

After removing both usages in this file, verify if the fiber/v2 import (line 11) is still needed by other code in this file.

indexer/collector/wasm-nft/submodule.go (1)

29-37: Critical: querier field is not initialized in the constructor.

The querier field was added to WasmNftSubmodule but is not initialized in New(). The prepare.go file calls sub.querier.GetCollectionData(ctx, ...), which will cause a nil pointer dereference. Both evm-nft and move-nft submodules initialize querier in their constructors using the same pattern.

 func New(logger *slog.Logger, cfg *config.Config) *WasmNftSubmodule {
 	cacheSize := cfg.GetCacheSize()
 	return &WasmNftSubmodule{
 		logger:    logger.With("submodule", SubmoduleName),
 		cfg:       cfg,
 		cache:     make(map[int64]CacheData),
 		blacklist: cache.New[string, interface{}](cacheSize),
+		querier:   querier.NewQuerier(cfg.GetChainConfig()),
 	}
 }
🧹 Nitpick comments (35)
patcher/v1_0_2/patch.go (1)

31-41: WasmVM patch path is now a silent no‑op; consider making intent explicit

For types.WasmVM, Patch no longer invokes any patch logic and just falls through to the final return nil. If Wasm NFT patching is still required anywhere, this will silently skip it while reporting success; if it’s intentionally deprecated/moved, the current code doesn’t communicate that.

I’d suggest either restoring explicit behavior or at least logging the decision so operators aren’t surprised:

 func Patch(tx *gorm.DB, cfg *config.Config, logger *slog.Logger) error {
 	switch cfg.GetVmType() {
 	case types.EVM:
 		return nil
 	case types.WasmVM:
-		// return PatchWasmNFT(tx, cfg, logger)
+		logger.Info("Skipping v1_0_2 WasmVM NFT patch; logic handled elsewhere or deprecated")
+		return nil
 	case types.MoveVM:
 		return PatchMoveNFT(tx, cfg, logger)
 	}
 
 	return nil
 }

If Wasm patching is still needed, instead re‑enable the appropriate patch function or return a clear error for unsupported VM types.

types/move.go (1)

3-10: Consider extracting the anonymous struct for reusability.

The nested anonymous struct works, but if Resource needs to be used independently elsewhere, consider extracting it as a named type:

+type MoveResource struct {
+	Address      string `json:"address"`
+	StructTag    string `json:"struct_tag"`
+	MoveResource string `json:"move_resource"`
+	RawBytes     string `json:"raw_bytes"`
+}
+
 type QueryMoveResourceResponse struct {
-	Resource struct {
-		Address      string `json:"address"`
-		StructTag    string `json:"struct_tag"`
-		MoveResource string `json:"move_resource"`
-		RawBytes     string `json:"raw_bytes"`
-	} `json:"resource"`
+	Resource MoveResource `json:"resource"`
 }
config/chain.go (1)

30-39: Inconsistent error types between RPC and REST URL validation.

RPC URL validation uses NewValidationError for invalid URL format (line 35), while REST URL validation uses NewInvalidValueError for the same scenario (line 50). Consider aligning them for consistency:

 	for idx, rpcUrl := range cc.RpcUrls {
 		if len(rpcUrl) == 0 {
 			return types.NewValidationError("RPC_URL", fmt.Sprintf("URL at index %d is empty", idx))
 		}
 		if u, err := url.Parse(rpcUrl); err != nil {
-			return types.NewValidationError("RPC_URL", fmt.Sprintf("invalid URL format at index %d: %s", idx, rpcUrl))
+			return types.NewInvalidValueError("RPC_URL", rpcUrl, fmt.Sprintf("invalid URL at index %d: %v", idx, err))
 		} else if u.Scheme != "http" && u.Scheme != "https" {
-			return types.NewValidationError("RPC_URL", fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
+			return types.NewInvalidValueError("RPC_URL", rpcUrl, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
 		}
 	}

Also applies to: 45-53

util/querier/query.go (2)

31-36: Global mutable state may complicate testing and concurrency.

The package uses global variables (limiter, cfg, endpointCategorizer) initialized via InitUtil. This pattern:

  • Makes unit testing harder (requires careful setup/teardown)
  • Can cause race conditions if InitUtil is called multiple times

Consider refactoring toward a Querier struct that holds these dependencies, allowing for easier testing and avoiding global state.
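One possible shape for that refactor, sketched with placeholder interfaces: the rateLimiter/endpointCategorizer names and method sets are invented here purely to show the dependencies moving off package globals, and do not reflect the real types in util/querier.

```go
package example

import "context"

// Placeholder interfaces standing in for the real limiter and categorizer.
type rateLimiter interface {
	Wait(ctx context.Context) error
}

type endpointCategorizer interface {
	Categorize(endpointURL string) string
}

// Querier owns its dependencies instead of reading package-level vars set by
// InitUtil, so tests can construct isolated instances with fakes.
type Querier struct {
	restUrls []string

	limiter     rateLimiter
	categorizer endpointCategorizer
}

func NewQuerier(restUrls []string, l rateLimiter, c endpointCategorizer) *Querier {
	return &Querier{restUrls: restUrls, limiter: l, categorizer: c}
}
```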


123-136: Custom LCG is acceptable for jitter but consider documenting the trade-off.

The LCG implementation is thread-safe and sufficient for jitter. However, a brief comment noting that this is intentionally not cryptographically secure (and doesn't need to be for backoff jitter) would help future maintainers.

util/querier/move.go (1)

31-37: Consider simplifying the return statement.

The intermediate res variable is unnecessary here.

 func (q *Querier) GetMoveResource(ctx context.Context, addr string, structTag string, height int64) (resource *types.QueryMoveResourceResponse, err error) {
-	res, err := executeWithEndpointRotation[types.QueryMoveResourceResponse](ctx, q.RestUrls, fetchMoveResource(addr, structTag, height, queryTimeout))
-	if err != nil {
-		return nil, err
-	}
-	return res, nil
+	return executeWithEndpointRotation[types.QueryMoveResourceResponse](ctx, q.RestUrls, fetchMoveResource(addr, structTag, height, queryTimeout))
 }
types/jsonrpc.go (1)

24-28: JSONRPCErrorResponse appears redundant.

JSONRPCResponse already handles error responses via the Error field. Consider removing this type unless there's a specific use case requiring a distinct error-only response type.

util/querier/querier.go (1)

47-57: Consider adding constructor validation.

Validating that at least one endpoint URL is configured would provide earlier, clearer error messages than failing at query time.

 func NewQuerier(cfg *config.ChainConfig) *Querier {
+	if len(cfg.RestUrls) == 0 && len(cfg.RpcUrls) == 0 && len(cfg.JsonRpcUrls) == 0 {
+		// Consider logging a warning or returning an error
+	}
 	return &Querier{
 		ChainId:              cfg.ChainId,
types/account.go (2)

20-42: CosmosAccount.UnmarshalJSON drops Type/Permissions and only populates Address

Because this custom UnmarshalJSON works off a map[string]any, it bypasses the normal struct decoding and only ever sets Address. Type and Permissions remain at their zero values even if present in the JSON, which is surprising given the exported fields and tags.

Consider decoding into an alias + helper struct so all fields are populated while still supporting the base_account fallback for address:

 func (a *CosmosAccount) UnmarshalJSON(data []byte) error {
-	// Try to unmarshal as a map to check the structure
-	var raw map[string]any
-	if err := json.Unmarshal(data, &raw); err != nil {
-		return err
-	}
-
-	// Check if this is a non BaseAccount
-	if baseAccount, ok := raw["base_account"].(map[string]any); ok {
-		if address, ok := baseAccount["address"].(string); ok {
-			a.Address = address
-			return nil
-		}
-	}
-
-	// Otherwise, try direct address field (BaseAccount)
-	if address, ok := raw["address"].(string); ok {
-		a.Address = address
-		return nil
-	}
-
-	return fmt.Errorf("no address field found in account JSON")
+	type alias CosmosAccount
+	var aux struct {
+		alias
+		BaseAccount *struct {
+			Address string `json:"address"`
+		} `json:"base_account"`
+	}
+
+	if err := json.Unmarshal(data, &aux); err != nil {
+		return err
+	}
+
+	*a = CosmosAccount(aux.alias)
+
+	// Prefer base_account.address if present
+	if aux.BaseAccount != nil && aux.BaseAccount.Address != "" {
+		a.Address = aux.BaseAccount.Address
+	}
+
+	if a.Address == "" {
+		return fmt.Errorf("no address field found in account JSON")
+	}
+
+	return nil
 }

This keeps Type / Permissions populated while preserving the flexible address extraction logic.


44-48: Avoid duplicating RestTx / RestTxBody / Pagination definitions across packages

Pagination, RestTx, RestTxBody, and QueryRestTxsResponse are now defined in types, while very similar types still exist in indexer/collector/tx/types.go. This split makes it easy to accidentally mix the tx.RestTx and types.RestTx variants or let them drift structurally over time.

Given CacheData.RestTxs and helpers like grepMsgTypesFromRestTx already use types.RestTx / types.RestTxBody, consider fully centralizing these contracts in the types package and updating indexer/collector/tx to reuse them (or removing the local duplicates there). This will simplify conversions and avoid subtle type mismatches.

Also applies to: 73-83, 94-97

api/handler/richlist/richlist.go (1)

37-43: Double‑check the context type passed into GetEvmContractByDenom

The switch to h.querier.GetEvmContractByDenom(c.Context(), denom) looks good conceptually, but in Fiber v2 Ctx.Context() returns the underlying fasthttp context, not a context.Context. If GetEvmContractByDenom expects a context.Context, you likely want to pass something like c.UserContext() or a request‑scoped context propagated from middleware instead.

Please verify the expected parameter type on querier.GetEvmContractByDenom and adjust the call accordingly to avoid type mismatches or losing cancellation/timeouts.
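For reference, a minimal Fiber v2 sketch of the distinction. Only the c.UserContext() vs c.Context() point and the GetEvmContractByDenom call shape come from this comment; the handler name, the denom extraction, and the response handling are illustrative.

```go
// c.Context() returns the underlying *fasthttp.RequestCtx, whereas
// c.UserContext() returns a context.Context suitable for context-aware code.
func (h *RichListHandler) getRichList(c *fiber.Ctx) error {
	ctx := c.UserContext() // request-scoped context.Context

	contract, err := h.querier.GetEvmContractByDenom(ctx, c.Params("denom"))
	if err != nil {
		return err
	}
	return c.JSON(contract)
}
```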

cmd/api.go (1)

41-46: Querier and cache initialization order looks correct

Initializing the querier globals (querier.InitUtil(cfg)) and then dictionary caches (cache.InitializeCaches(cfg.GetCacheConfig())) before opening the DB and starting the API server is a sensible ordering and keeps shared state ready for handlers.

Optionally, once the TODO is revisited, consider renaming InitUtil to something like InitQuerier or InitEndpoints to better reflect its purpose.

indexer/collector/tx/types.go (1)

9-12: Consolidate on a single RestTx type to avoid conversions and confusion

CacheData.RestTxs now uses []types.RestTx, but this file still defines its own RestTx/RestTxBody and QueryRestTxsResponse.Txs []RestTx. That means any code that unmarshals into QueryRestTxsResponse and then wants to populate CacheData.RestTxs either has to convert element‑wise or will hit type mismatches.

To simplify things, consider:

  • Changing QueryRestTxsResponse.Txs to []types.RestTx, and
  • Removing or inlining the local RestTx / RestTxBody definitions in favor of the shared ones in the types package.

That keeps all REST tx contracts in one place and avoids accidental mixing of structurally identical but distinct types.

Also applies to: 14-28

api/handler/richlist/handler.go (1)

11-18: RichListHandler querier wiring is fine; consider injecting for testability

Storing cfg and a *querier.Querier on RichListHandler and initializing it in NewRichListHandler cleanly supports the new querier‑based calls in the richlist routes.

If you anticipate unit‑testing or swapping implementations (e.g., mocks, different endpoint policies), you might later want to inject a Querier interface from the caller instead of constructing it directly here, but that can be deferred.

Also applies to: 22-27
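If that injection is ever wanted, a sketch could look like the following. The interface name and the exact GetEvmContractByDenom signature are assumptions; only the idea of narrowing the dependency comes from this comment.

```go
// evmDenomQuerier is the narrow behavior the richlist routes need;
// *querier.Querier would satisfy it, and tests can supply a stub.
type evmDenomQuerier interface {
	GetEvmContractByDenom(ctx context.Context, denom string) (string, error)
}

type RichListHandler struct {
	querier evmDenomQuerier
	// ...existing fields (cfg, base handler) unchanged
}

func NewRichListHandler(q evmDenomQuerier /*, existing params */) *RichListHandler {
	return &RichListHandler{querier: q}
}
```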

indexer/scraper/scrape.go (1)

61-75: Using GetRpcUrl accessor is consistent; just confirm endpoint selection semantics

Switching to:

url := fmt.Sprintf("%s/block?height=%d", cfg.GetChainConfig().GetRpcUrl(), height)
...
url := fmt.Sprintf("%s/block_results?height=%d", cfg.GetChainConfig().GetRpcUrl(), height)

properly routes both calls through the new chain‑config accessor.

If GetRpcUrl() can return different endpoints on successive calls (e.g., round‑robin or random load‑balancing), there’s a small risk of fetching /block and /block_results for the same height from different nodes that are briefly out of sync. If that’s a concern, consider having scrapeBlock obtain a single base RPC URL once and pass it into both helpers so they always hit the same node for a given height.

Also applies to: 77-91
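A sketch of that pinning, assuming hypothetical fetch helpers that accept a prebuilt URL (the names and signatures are illustrative, not the existing scraper functions):

```go
// Resolve the RPC base URL once per height so /block and /block_results hit
// the same node even if GetRpcUrl rotates between calls.
func scrapeBlockPinned(ctx context.Context, cfg *config.Config, height int64) error {
	baseURL := cfg.GetChainConfig().GetRpcUrl()

	if err := fetchBlock(ctx, fmt.Sprintf("%s/block?height=%d", baseURL, height)); err != nil {
		return err
	}
	return fetchBlockResults(ctx, fmt.Sprintf("%s/block_results?height=%d", baseURL, height))
}
```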

indexer/collector/tx/util.go (1)

13-31: Aligning grepMsgTypesFromRestTx with types.RestTx/RestTxBody is good; small cleanups possible

Switching grepMsgTypesFromRestTx to accept types.RestTx and unmarshal into types.RestTxBody is consistent with the move toward shared tx types in the types package and avoids yet another local struct.

Two optional nits you can consider later:

  • Use map[string]struct{} instead of map[string]interface{} for msgTypeMap to avoid allocating empty interface values:

 -	msgTypeMap := make(map[string]interface{})
 +	msgTypeMap := make(map[string]struct{})
  	...
 -	msgTypeMap[msgType] = nil
 +	msgTypeMap[msgType] = struct{}{}

  • Once indexer/collector/tx/types.go is fully migrated to types.RestTxBody, you can delete its duplicate RestTxBody definition to keep a single source of truth.
indexer/extension/internaltx/collect.go (1)

35-47: Consider reusing the same hex canonicalization helper used in the cache

You build hashes from trace.TxHash via util.HexToBytes, while GetOrCreateEvmTxHashIds internally keys by util.BytesToHex(hash). Later you derive hashHex with strings.ToLower(strings.TrimPrefix(trace.TxHash, "0x")) to index into hashIdMap.

To avoid any subtle mismatches (e.g., if BytesToHex formatting ever changes), consider deriving the lookup key from the same canonical source as the cache, e.g. by reusing the hex produced from the bytes instead of recomputing from the original string.

Also applies to: 65-71
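A small sketch of what that would look like at the call site. The helper names come from this comment; their exact signatures and error handling are assumptions.

```go
// Build the lookup key from the same bytes that were handed to the cache,
// instead of re-normalizing trace.TxHash by hand.
hashBytes := util.HexToBytes(trace.TxHash)
hashes = append(hashes, hashBytes)

// ...after GetOrCreateEvmTxHashIds returns hashIdMap:
canonicalHex := util.BytesToHex(hashBytes) // same formatting the cache keys use
if txID, ok := hashIdMap[canonicalHex]; ok {
	_ = txID // use the id
}
```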

indexer/collector/wasm-nft/prepare.go (1)

3-5: Context-aware wasm-NFT prepare flow looks good; consider errgroup.WithContext

Updating prepare to accept ctx and passing it into sub.querier.GetCollectionData(ctx, addr, block.Height) is a solid step toward proper cancellation/timeouts. The concurrency pattern with errgroup.Group and a local mutex around colInfos is sound.

Since you now have a context, you might optionally switch to errgroup.WithContext(ctx) so that if ctx is cancelled or one goroutine fails, the remaining goroutines can stop work early.

Also applies to: 26-27, 39-58

types/wasm.go (1)

1-13: New shared WASM query response types look fine; consider brief doc comments

Centralizing QueryContractInfoResponse and QueryWasmContractResponse in types is sensible and matches their reuse across modules. To improve discoverability, consider adding short doc comments on each exported type describing which query/endpoint they model.
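For example, doc comments along these lines would do. This is a sketch: the field layout shown is inferred from how GetCollectionData reads Data.Name and ContractInfo.Creator, and the JSON tags here are assumptions.

```go
// QueryContractInfoResponse models the contract-info smart query used to
// resolve a collection's display name.
type QueryContractInfoResponse struct {
	Data struct {
		Name string `json:"name"`
	} `json:"data"`
}

// QueryWasmContractResponse models the wasm contract metadata query used to
// resolve a collection's creator address.
type QueryWasmContractResponse struct {
	ContractInfo struct {
		Creator string `json:"creator"`
	} `json:"contract_info"`
}
```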

indexer/collector/evm-nft/prepare.go (1)

25-27: Consider using errgroup.WithContext for cancellation propagation.

The plain errgroup.Group doesn't cancel sibling goroutines when one fails or when the parent context is cancelled. If GetCollectionName or GetTokenUri encounters an error or the parent ctx is cancelled, remaining goroutines will continue running unnecessarily.

-	var g errgroup.Group
+	g, gCtx := errgroup.WithContext(ctx)
	var nameMtx sync.Mutex
	var uriMtx sync.Mutex

Then pass gCtx to the querier calls instead of ctx:

-			name, err := sub.querier.GetCollectionName(ctx, addr, block.Height)
+			name, err := sub.querier.GetCollectionName(gCtx, addr, block.Height)
-				tokenUri, err := sub.querier.GetTokenUri(ctx, addr, id, block.Height)
+				tokenUri, err := sub.querier.GetTokenUri(gCtx, addr, id, block.Height)
util/querier/wasm.go (1)

46-58: Consider parallelizing the two independent queries for improved performance.

fetchContractInfo and fetchWasmContractInfo are independent queries. Executing them sequentially doubles the latency. Using errgroup could halve the query time.

 func (q *Querier) GetCollectionData(ctx context.Context, collectionAddr string, height int64) (name string, creator string, err error) {
-	contractInfo, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchContractInfo(collectionAddr, height, queryTimeout))
-	if err != nil {
-		return "", "", fmt.Errorf("failed to query contract info: %w", err)
-	}
-
-	wasmContractInfo, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchWasmContractInfo(collectionAddr, height, queryTimeout))
-	if err != nil {
-		return "", "", fmt.Errorf("failed to query wasm contract info: %w", err)
-	}
-
-	return contractInfo.Data.Name, wasmContractInfo.ContractInfo.Creator, nil
+	var contractInfo *types.QueryContractInfoResponse
+	var wasmContractInfo *types.QueryWasmContractResponse
+
+	g, gCtx := errgroup.WithContext(ctx)
+	g.Go(func() error {
+		var err error
+		contractInfo, err = executeWithEndpointRotation(gCtx, q.RestUrls, fetchContractInfo(collectionAddr, height, queryTimeout))
+		if err != nil {
+			return fmt.Errorf("failed to query contract info: %w", err)
+		}
+		return nil
+	})
+	g.Go(func() error {
+		var err error
+		wasmContractInfo, err = executeWithEndpointRotation(gCtx, q.RestUrls, fetchWasmContractInfo(collectionAddr, height, queryTimeout))
+		if err != nil {
+			return fmt.Errorf("failed to query wasm contract info: %w", err)
+		}
+		return nil
+	})
+
+	if err := g.Wait(); err != nil {
+		return "", "", err
+	}
+
+	return contractInfo.Data.Name, wasmContractInfo.ContractInfo.Creator, nil
 }
indexer/collector/move-nft/prepare.go (1)

24-41: LGTM with optional note.

The concurrent pattern using errgroup.WithContext is well-implemented. The local mutex properly protects the nftResources map from concurrent writes, and gCtx is correctly passed to the querier call for cancellation propagation.

Note: In Go 1.22+, loop variable capture (addr := nftAddr) is no longer necessary as the loop variable semantics changed. This can be removed if the project uses Go 1.22 or later.

api/handler/block/cache.go (1)

11-11: Parameter shadows package name.

The parameter querier shadows the imported package querier, which can lead to confusion and potential issues if you need to access the package within this function.

Consider renaming the parameter:

-func getValidator(ctx context.Context, querier *querier.Querier, validatorAddr string) (*types.Validator, error) {
+func getValidator(ctx context.Context, q *querier.Querier, validatorAddr string) (*types.Validator, error) {

Then update the usage on line 16:

-	validator, err := querier.GetValidator(ctx, validatorAddr)
+	validator, err := q.GetValidator(ctx, validatorAddr)
indexer/collector/tx/prepare.go (1)

13-39: Consider using errgroup.WithContext for proper context cancellation.

The current implementation uses a plain errgroup.Group. While context is passed to the querier calls, using errgroup.WithContext(ctx) would ensure that when one goroutine fails, the derived context is cancelled, allowing the other goroutine to exit early if it checks the context.

-	var g errgroup.Group
+	g, ctx := errgroup.WithContext(ctx)
 	var restTxs []types.RestTx
 	var evmTxs []types.EvmTx

 	g.Go(func() error {
 		txs, err := sub.querier.GetCosmosTxs(ctx, block.Height, len(block.Txs))

Since the errgroup-derived context shadows ctx, the querier calls inside the goroutines need no further changes. This pattern is already used in similar code (e.g., indexer/collector/collector.go per the AI summary).

config/config.go (1)

305-306: Address the TODO: ValidatorCacheSize reuses generic CACHE_SIZE.

The TODO comment indicates this needs revisiting. ValidatorCacheSize currently inherits from CACHE_SIZE (default 1000) rather than having its own dedicated environment variable like other cache sizes (e.g., ACCOUNT_CACHE_SIZE, NFT_CACHE_SIZE).

Consider adding a dedicated environment variable for consistency:

 	viper.SetDefault("EVM_DENOM_CONTRACT_CACHE_SIZE", DefaultEvmDenomContractCacheSize)
+	viper.SetDefault("VALIDATOR_CACHE_SIZE", DefaultCacheSize)

And in the config initialization:

-			// TODO: revisit
-			ValidatorCacheSize: viper.GetInt("CACHE_SIZE"),
+			ValidatorCacheSize: viper.GetInt("VALIDATOR_CACHE_SIZE"),
indexer/extension/richlist/evmrichlist/jsonrpc.go (1)

83-97: Parameter shadowing and ID mapping assumption.

  1. The parameter querier shadows the imported package name.
  2. The ID mapping idToAddr[idx+1] assumes the querier returns responses with IDs starting at 1. Verify this matches the implementation in querier.QueryERC20Balances.

Rename parameter to avoid shadowing:

-func queryBatchBalances(ctx context.Context, querier *querier.Querier, erc20Address string, batch []richlistutils.AddressWithID, height int64) (map[richlistutils.AddressWithID]sdkmath.Int, error) {
+func queryBatchBalances(ctx context.Context, q *querier.Querier, erc20Address string, batch []richlistutils.AddressWithID, height int64) (map[richlistutils.AddressWithID]sdkmath.Int, error) {

Verify ID assignment in querier:

#!/bin/bash
# Check how QueryERC20Balances assigns request IDs
ast-grep --pattern $'func ($_ *Querier) QueryERC20Balances($$$) {
  $$$
}'
api/handler/block/types.go (1)

53-63: Parameter name shadows package import.

The parameter querier *querier.Querier shadows the imported package name querier, which can lead to confusion and prevents accessing other package-level functions within this scope.

Consider renaming the parameter to avoid shadowing:

-func ToBlocksResponse(ctx context.Context, cbs []types.CollectedBlock, querier *querier.Querier) ([]Block, error) {
+func ToBlocksResponse(ctx context.Context, cbs []types.CollectedBlock, q *querier.Querier) ([]Block, error) {
 	blocks := make([]Block, 0, len(cbs))
 	for _, cb := range cbs {
-		block, err := ToBlockResponse(ctx, cb, querier)
+		block, err := ToBlockResponse(ctx, cb, q)
 		if err != nil {
 			return nil, err
 		}
indexer/extension/richlist/evmrichlist/run.go (1)

18-18: Variable name shadows package import.

Similar to the block handler, querier := querier.NewQuerier(...) shadows the imported package name.

-	querier := querier.NewQuerier(cfg.GetChainConfig())
+	q := querier.NewQuerier(cfg.GetChainConfig())

Then update usages on lines 22 and 53 to use q.

indexer/indexer.go (3)

154-169: Unused fiber client acquisition.

The fiber.AcquireClient() and fiber.ReleaseClient(client) calls appear to be unused since i.querier.GetLatestHeight(i.ctx) doesn't use the acquired client. This is likely leftover from the previous implementation.

 func (i *Indexer) wait() {
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
 	for {
 		chainHeight, err := i.querier.GetLatestHeight(i.ctx)

206-263: Well-implemented retry logic with proper cancellation handling.

The retry mechanism correctly:

  • Checks context cancellation before each attempt
  • Handles context.Canceled by stopping immediately
  • Handles context.DeadlineExceeded with retries and cooling duration
  • Properly decrements prepareCount in all exit paths
  • Checks for cancellation during the cooling wait

One consideration: the indefinite retry on timeouts may mask persistent endpoint failures. If this becomes an operational concern, adding observability (e.g., retry count metrics) or a maximum retry threshold could help.


80-86: Same unused fiber client issue.

Similar to wait(), the fiber client is acquired but not used since i.querier.GetLatestHeight(ctx) handles HTTP internally.

 	// get the current chain height for validation/clamping
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
 	chainHeight, err := i.querier.GetLatestHeight(ctx)
types/internal_tx.go (1)

3-8: Consider adding JSON tags to InternalTxInfo for consistency.

Other structs in this file include JSON tags. If InternalTxInfo is serialized/deserialized (e.g., for caching or API responses), adding tags would be beneficial. If it's strictly internal, this is fine as-is.

util/querier/evm.go (1)

154-188: Consider thread-safety for cache operations.

GetEvmContractByDenom reads then writes to the cache without synchronization. If this method is called concurrently for the same denom, multiple goroutines may query the endpoint redundantly.

This is a minor concern since the cache hit path is checked first, and redundant writes would produce the same value. Consider this if performance under concurrent load becomes an issue.
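If it ever does, one option is golang.org/x/sync/singleflight to collapse concurrent lookups for the same denom. A self-contained sketch follows; the cache and query hooks are injected here and are not the real util/cache or util/querier functions.

```go
package example

import (
	"context"

	"golang.org/x/sync/singleflight"
)

// denomResolver collapses concurrent lookups for the same denom into a single
// upstream query; the hooks stand in for the real cache and endpoint calls.
type denomResolver struct {
	sf    singleflight.Group
	get   func(denom string) (string, bool)                       // cache read
	set   func(denom, contract string)                            // cache write
	query func(ctx context.Context, denom string) (string, error) // endpoint call
}

func (r *denomResolver) Resolve(ctx context.Context, denom string) (string, error) {
	if contract, ok := r.get(denom); ok {
		return contract, nil
	}
	v, err, _ := r.sf.Do(denom, func() (interface{}, error) {
		if contract, ok := r.get(denom); ok { // re-check inside the flight
			return contract, nil
		}
		contract, err := r.query(ctx, denom)
		if err != nil {
			return "", err
		}
		r.set(denom, contract)
		return contract, nil
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}
```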

indexer/extension/richlist/utils/utils.go (1)

125-125: Redundant double ToLower call.

strings.ToLower is called twice on the same value.

-		denom := strings.ToLower(strings.ToLower(coin.Denom))
+		denom := strings.ToLower(coin.Denom)
util/querier/cosmos.go (1)

79-92: Consider extracting pagination workaround into a helper.

The same offset-fallback pagination logic is repeated in three places (fetchAllTxsWithPagination, GetAllBalances, FetchAllAccountsWithPagination). This could be centralized to reduce duplication and improve maintainability.

Also applies to: 238-251, 311-324
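One way to centralize it, sketched with Go generics. The page shape, parameter names, and the offset-fallback trigger mirror the description in this review but are otherwise assumptions.

```go
package example

import (
	"context"
	"fmt"
)

// page is a generic slice of results plus the pagination metadata the REST
// endpoints return (next_key and total).
type page[T any] struct {
	Items   []T
	NextKey []byte
	Total   int
}

// fetchAllPages walks pages via next_key and falls back to offset-based paging
// when an endpoint reports more items but stops returning a next_key.
func fetchAllPages[T any](
	ctx context.Context,
	fetch func(ctx context.Context, useOffset bool, offset int, nextKey []byte) (page[T], error),
) ([]T, error) {
	var (
		all       []T
		nextKey   []byte
		useOffset bool
		offset    int
	)
	for {
		if err := ctx.Err(); err != nil {
			return nil, err
		}
		p, err := fetch(ctx, useOffset, offset, nextKey)
		if err != nil {
			return nil, err
		}
		all = append(all, p.Items...)
		offset += len(p.Items)

		if len(p.Items) == 0 && len(all) < p.Total {
			return all, fmt.Errorf("pagination stalled: got %d of %d items", len(all), p.Total)
		}

		switch {
		case len(p.NextKey) > 0:
			nextKey = p.NextKey
		case len(all) < p.Total:
			useOffset = true // broken next_key: switch to offset paging
		default:
			return all, nil
		}
	}
}
```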

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 3453cf0 and 394cf03.

📒 Files selected for processing (77)
  • api/handler/block/block.go (3 hunks)
  • api/handler/block/cache.go (1 hunks)
  • api/handler/block/handler.go (1 hunks)
  • api/handler/block/types.go (3 hunks)
  • api/handler/handler.go (1 hunks)
  • api/handler/nft/tx.go (2 hunks)
  • api/handler/richlist/handler.go (2 hunks)
  • api/handler/richlist/richlist.go (1 hunks)
  • api/handler/tx/edge_tables_test.go (1 hunks)
  • cmd/api.go (2 hunks)
  • cmd/indexer.go (2 hunks)
  • config/chain.go (3 hunks)
  • config/config.go (3 hunks)
  • indexer/collector/block/submodule.go (2 hunks)
  • indexer/collector/collector.go (2 hunks)
  • indexer/collector/evm-nft/collect.go (6 hunks)
  • indexer/collector/evm-nft/prepare.go (4 hunks)
  • indexer/collector/evm-nft/query.go (0 hunks)
  • indexer/collector/evm-nft/submodule.go (4 hunks)
  • indexer/collector/move-nft/collect.go (2 hunks)
  • indexer/collector/move-nft/prepare.go (2 hunks)
  • indexer/collector/move-nft/query.go (0 hunks)
  • indexer/collector/move-nft/submodule.go (1 hunks)
  • indexer/collector/move-nft/types.go (0 hunks)
  • indexer/collector/tx/collect.go (6 hunks)
  • indexer/collector/tx/prepare.go (2 hunks)
  • indexer/collector/tx/query.go (0 hunks)
  • indexer/collector/tx/submodule.go (2 hunks)
  • indexer/collector/tx/types.go (1 hunks)
  • indexer/collector/tx/util.go (1 hunks)
  • indexer/collector/wasm-nft/collect.go (7 hunks)
  • indexer/collector/wasm-nft/prepare.go (3 hunks)
  • indexer/collector/wasm-nft/query.go (0 hunks)
  • indexer/collector/wasm-nft/submodule.go (4 hunks)
  • indexer/collector/wasm-nft/types.go (0 hunks)
  • indexer/extension/evmret/cleanup.go (2 hunks)
  • indexer/extension/evmret/cleanup_test.go (2 hunks)
  • indexer/extension/internaltx/address.go (1 hunks)
  • indexer/extension/internaltx/address_test.go (4 hunks)
  • indexer/extension/internaltx/collect.go (4 hunks)
  • indexer/extension/internaltx/collect_test.go (9 hunks)
  • indexer/extension/internaltx/extension.go (6 hunks)
  • indexer/extension/internaltx/query.go (1 hunks)
  • indexer/extension/internaltx/types.go (0 hunks)
  • indexer/extension/internaltx/util.go (3 hunks)
  • indexer/extension/richlist/evmrichlist/jsonrpc.go (3 hunks)
  • indexer/extension/richlist/evmrichlist/run.go (2 hunks)
  • indexer/extension/richlist/extension.go (3 hunks)
  • indexer/extension/richlist/utils/db.go (3 hunks)
  • indexer/extension/richlist/utils/rest.go (0 hunks)
  • indexer/extension/richlist/utils/types.go (0 hunks)
  • indexer/extension/richlist/utils/utils.go (10 hunks)
  • indexer/indexer.go (6 hunks)
  • indexer/scraper/scrape.go (2 hunks)
  • indexer/types/types.go (2 hunks)
  • indexer/util/query.go (0 hunks)
  • patcher/v1_0_12/patch.go (1 hunks)
  • patcher/v1_0_12/wasm-nft.go (1 hunks)
  • patcher/v1_0_2/move-nft.go (3 hunks)
  • patcher/v1_0_2/patch.go (1 hunks)
  • patcher/v1_0_2/wasm-nft.go (1 hunks)
  • types/account.go (1 hunks)
  • types/cosmos.go (1 hunks)
  • types/evm.go (1 hunks)
  • types/internal_tx.go (1 hunks)
  • types/jsonrpc.go (1 hunks)
  • types/move.go (1 hunks)
  • types/validators.go (1 hunks)
  • types/wasm.go (1 hunks)
  • util/cache/cache.go (13 hunks)
  • util/common-handler/common/handler.go (4 hunks)
  • util/querier/cosmos.go (1 hunks)
  • util/querier/evm.go (1 hunks)
  • util/querier/move.go (1 hunks)
  • util/querier/querier.go (1 hunks)
  • util/querier/query.go (4 hunks)
  • util/querier/wasm.go (1 hunks)
💤 Files with no reviewable changes (10)
  • indexer/collector/move-nft/types.go
  • indexer/util/query.go
  • indexer/collector/wasm-nft/types.go
  • indexer/extension/internaltx/types.go
  • indexer/collector/tx/query.go
  • indexer/extension/richlist/utils/types.go
  • indexer/collector/evm-nft/query.go
  • indexer/collector/move-nft/query.go
  • indexer/collector/wasm-nft/query.go
  • indexer/extension/richlist/utils/rest.go
🧰 Additional context used
🧬 Code graph analysis (52)
patcher/v1_0_2/move-nft.go (1)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
indexer/collector/evm-nft/prepare.go (2)
indexer/collector/evm-nft/submodule.go (1)
  • EvmNftSubmodule (20-27)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
cmd/indexer.go (2)
util/querier/query.go (1)
  • InitUtil (101-108)
util/cache/cache.go (1)
  • InitializeCaches (47-58)
indexer/extension/internaltx/address.go (1)
types/internal_tx.go (1)
  • EvmInternalTx (42-51)
indexer/collector/wasm-nft/prepare.go (2)
indexer/collector/wasm-nft/submodule.go (1)
  • WasmNftSubmodule (20-27)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
config/chain.go (1)
types/errors.go (2)
  • NewValidationError (52-58)
  • NewInvalidValueError (60-66)
indexer/collector/tx/types.go (1)
types/account.go (1)
  • RestTx (73-77)
indexer/collector/block/submodule.go (1)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
indexer/extension/evmret/cleanup_test.go (2)
util/cache/cache.go (1)
  • InitializeCaches (47-58)
config/config.go (1)
  • CacheConfig (104-112)
indexer/extension/richlist/utils/db.go (1)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
indexer/collector/tx/collect.go (3)
indexer/collector/tx/types.go (1)
  • RestTx (24-28)
types/account.go (1)
  • RestTx (73-77)
util/cache/cache.go (3)
  • GetOrCreateMsgTypeIds (297-354)
  • GetOrCreateTypeTagIds (357-414)
  • GetOrCreateAccountIds (153-178)
api/handler/richlist/handler.go (1)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
indexer/extension/internaltx/address_test.go (1)
types/internal_tx.go (1)
  • EvmInternalTx (42-51)
indexer/collector/tx/submodule.go (3)
indexer/types/types.go (2)
  • Submodule (11-15)
  • ScrapedBlock (17-28)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
indexer/collector/collector.go (1)
  • New (31-54)
api/handler/tx/edge_tables_test.go (2)
util/cache/cache.go (1)
  • InitializeCaches (47-58)
config/config.go (1)
  • CacheConfig (104-112)
indexer/collector/move-nft/submodule.go (3)
indexer/collector/move-nft/types.go (1)
  • CacheData (5-7)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
indexer/collector/collector.go (1)
  • New (31-54)
indexer/collector/collector.go (1)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
api/handler/handler.go (1)
api/handler/block/handler.go (1)
  • NewBlockHandler (21-23)
indexer/extension/internaltx/extension.go (3)
types/internal_tx.go (1)
  • DebugCallTraceBlockResponse (11-13)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
indexer/extension/internaltx/query.go (1)
  • CheckNodeVersion (17-38)
util/querier/querier.go (3)
types/vm.go (1)
  • VMType (3-3)
config/chain.go (1)
  • ChainConfig (10-18)
sentry_integration/sentry_integration.go (1)
  • CaptureCurrentHubException (9-11)
indexer/extension/internaltx/collect.go (2)
types/internal_tx.go (3)
  • DebugCallTraceBlockResponse (11-13)
  • InternalTransaction (30-40)
  • InternalTxInfo (3-8)
util/cache/cache.go (1)
  • GetOrCreateEvmTxHashIds (416-477)
indexer/collector/evm-nft/submodule.go (2)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
api/handler/block/types.go (2)
types/table.go (2)
  • CollectedBlock (40-51)
  • CollectedBlock (193-195)
util/querier/querier.go (1)
  • Querier (20-28)
util/common-handler/common/handler.go (1)
util/cache/cache.go (4)
  • GetOrCreateAccountIds (153-178)
  • GetOrCreateMsgTypeIds (297-354)
  • NftKey (16-19)
  • GetOrCreateNftIds (269-294)
indexer/collector/move-nft/prepare.go (2)
indexer/collector/move-nft/submodule.go (1)
  • MoveNftSubmodule (19-25)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
util/querier/query.go (2)
types/errors.go (1)
  • NewTimeoutError (108-114)
metrics/batcher.go (1)
  • GetGlobalMetricsBatcher (297-299)
indexer/collector/tx/util.go (2)
indexer/collector/tx/types.go (2)
  • RestTx (24-28)
  • RestTxBody (30-34)
types/account.go (2)
  • RestTx (73-77)
  • RestTxBody (79-83)
types/validators.go (1)
util/cache/cache.go (2)
  • Validator (21-25)
  • ConsensusPubkey (27-30)
indexer/collector/wasm-nft/submodule.go (2)
util/querier/querier.go (1)
  • Querier (20-28)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
types/cosmos.go (1)
config/config.go (1)
  • Version (20-20)
util/querier/wasm.go (3)
types/wasm.go (2)
  • QueryContractInfoResponse (3-7)
  • QueryWasmContractResponse (9-13)
util/querier/querier.go (1)
  • Querier (20-28)
util/querier/query.go (1)
  • Get (182-189)
indexer/collector/evm-nft/collect.go (1)
util/cache/cache.go (3)
  • NftKey (16-19)
  • GetOrCreateAccountIds (153-178)
  • GetOrCreateNftIds (269-294)
api/handler/nft/tx.go (2)
util/cache/cache.go (1)
  • NftKey (16-19)
util/util.go (1)
  • BytesToHexWithPrefixIfPresent (53-58)
indexer/extension/evmret/cleanup.go (1)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
indexer/indexer.go (2)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
metrics/error_tracker.go (1)
  • TrackError (9-11)
util/querier/move.go (3)
types/move.go (1)
  • QueryMoveResourceResponse (3-10)
util/querier/query.go (1)
  • Get (182-189)
util/querier/querier.go (1)
  • Querier (20-28)
api/handler/block/block.go (3)
api/handler/block/types.go (2)
  • ToBlocksResponse (53-63)
  • ToBlockResponse (65-91)
util/querier/querier.go (1)
  • NewQuerier (47-57)
config/config.go (1)
  • GetConfig (217-225)
indexer/extension/richlist/evmrichlist/run.go (2)
util/querier/querier.go (1)
  • NewQuerier (47-57)
indexer/extension/richlist/utils/utils.go (2)
  • InitializeBalances (324-350)
  • ProcessBalanceChanges (212-238)
indexer/collector/tx/prepare.go (4)
indexer/collector/tx/submodule.go (1)
  • TxSubmodule (20-27)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
indexer/collector/tx/types.go (1)
  • RestTx (24-28)
types/tx.go (1)
  • EvmTx (26-41)
indexer/collector/wasm-nft/collect.go (1)
util/cache/cache.go (3)
  • NftKey (16-19)
  • GetOrCreateAccountIds (153-178)
  • GetOrCreateNftIds (269-294)
indexer/extension/internaltx/util.go (2)
types/internal_tx.go (3)
  • InternalTxInfo (3-8)
  • InternalTransaction (30-40)
  • EvmInternalTx (42-51)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
indexer/collector/move-nft/collect.go (1)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
cmd/api.go (2)
util/querier/query.go (1)
  • InitUtil (101-108)
util/cache/cache.go (1)
  • InitializeCaches (47-58)
util/querier/cosmos.go (5)
types/account.go (4)
  • RestTx (73-77)
  • QueryRestTxsResponse (94-97)
  • Pagination (45-48)
  • BlockResponse (86-92)
util/querier/query.go (1)
  • Get (182-189)
api/handler/block/types.go (2)
  • BlockResponse (21-23)
  • Block (29-40)
util/util.go (1)
  • AccAddressFromString (12-31)
types/cosmos.go (1)
  • NodeInfo (3-7)
indexer/extension/richlist/extension.go (1)
util/querier/querier.go (1)
  • Querier (20-28)
indexer/extension/richlist/utils/utils.go (2)
util/querier/querier.go (1)
  • Querier (20-28)
util/cache/cache.go (1)
  • GetOrCreateAccountIds (153-178)
util/cache/cache.go (2)
types/validators.go (2)
  • ConsensusPubkey (13-16)
  • Validator (7-11)
util/util.go (3)
  • AccAddressFromString (12-31)
  • BytesToHexWithPrefix (49-51)
  • BytesToHex (45-47)
util/querier/evm.go (6)
types/errors.go (1)
  • NewInvalidValueError (60-66)
types/vm.go (1)
  • EVM (8-8)
util/cache/cache.go (2)
  • GetEvmDenomContractCache (479-484)
  • SetEvmDenomContractCache (486-488)
util/util.go (1)
  • AccAddressFromString (12-31)
types/internal_tx.go (1)
  • DebugCallTraceBlockResponse (11-13)
types/jsonrpc.go (3)
  • JSONRPCErrorResponse (24-28)
  • JSONRPCResponse (12-17)
  • JSONRPCRequest (4-9)
api/handler/block/cache.go (3)
util/querier/querier.go (1)
  • Querier (20-28)
types/validators.go (1)
  • Validator (7-11)
util/cache/cache.go (3)
  • Validator (21-25)
  • GetValidatorCache (498-503)
  • SetValidatorCache (505-507)
api/handler/block/handler.go (6)
util/common-handler/common/handler.go (2)
  • HandlerRegistrar (14-16)
  • BaseHandler (18-22)
util/querier/querier.go (2)
  • Querier (20-28)
  • NewQuerier (47-57)
config/config.go (1)
  • Config (128-152)
api/cache/cache.go (1)
  • Config (11-16)
api/handler/handler.go (1)
  • Register (18-31)
indexer/api/handler/handler.go (1)
  • Register (14-23)
types/account.go (1)
indexer/collector/tx/types.go (2)
  • Pagination (19-22)
  • RestTx (24-28)
indexer/extension/richlist/evmrichlist/jsonrpc.go (2)
util/querier/querier.go (2)
  • NewQuerier (47-57)
  • Querier (20-28)
indexer/extension/richlist/utils/types.go (1)
  • AddressWithID (9-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run linter
  • GitHub Check: Run tests
  • GitHub Check: Analyze (go)

- Updated BlockHandler to utilize the existing querier instance for block responses, enhancing consistency.
- Removed unnecessary querier instantiation in internal transaction queries and improved context handling.
- Simplified EVM contract queries by reusing the querier instance, optimizing performance and reducing redundancy.
- Cleaned up unused constants and functions in the querier package for better maintainability.
- Introduced ValidatorCacheSize parameter in the cleanup_test.go file to enhance cache management during batch processing.
- This addition aims to improve performance and resource allocation in the EVM retention cleanup process.

coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
util/querier/cosmos.go (1)

29-41: Bound GetCosmosTxs retries to avoid a potential infinite loop.

GetCosmosTxs will loop forever if fetchAllTxsWithPagination keeps returning a length different from txCount without error (e.g. txCount is wrong or the API never matches it). There’s also no guard for len(allTxs) > txCount.

Consider bounding attempts and respecting ctx cancellation, and surfacing a clear error when expectations are not met. For example:

-func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
-	for {
-		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
-		if err != nil {
-			return txs, err
-		}
-
-		// If we get the expected number of transactions, return immediately
-		if len(allTxs) == txCount {
-			return allTxs, nil
-		}
-	}
-}
+func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) ([]types.RestTx, error) {
+	const maxAttempts = 5
+
+	var lastTxs []types.RestTx
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		if err := ctx.Err(); err != nil {
+			return nil, err
+		}
+
+		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
+		if err != nil {
+			return nil, err
+		}
+
+		if len(allTxs) == txCount {
+			return allTxs, nil
+		}
+		if len(allTxs) > txCount {
+			return nil, fmt.Errorf("height %d: expected %d txs, got %d", height, txCount, len(allTxs))
+		}
+
+		lastTxs = allTxs
+		time.Sleep(200 * time.Millisecond) // optional: small delay before retry
+	}
+
+	return lastTxs, fmt.Errorf(
+		"height %d: failed to fetch expected %d txs after %d attempts (last len=%d)",
+		height, txCount, maxAttempts, len(lastTxs),
+	)
+}

You can tune maxAttempts and the backoff as appropriate for your environment.

🧹 Nitpick comments (5)
indexer/extension/internaltx/query.go (1)

22-22: Consider tracking the TODO for endpoint rotation.

The TODO about rotating endpoints when semver validation fails is a reasonable fallback strategy. Consider creating an issue to track this enhancement if it's important for resilience.

Would you like me to open an issue to track this TODO for implementing endpoint rotation on semver incompatibility?

indexer/extension/richlist/evmrichlist/jsonrpc.go (3)

47-47: Variable shadows imported package name.

The variable querier on line 47 shadows the imported package querier from line 14. This can cause confusion and potential issues if the package needs to be referenced later in the function.

Apply this diff to rename the variable:

-	querier := querier.NewQuerier(cfg.GetChainConfig())
+	q := querier.NewQuerier(cfg.GetChainConfig())

And update the usage on line 59:

-			batchBalances, err := queryBatchBalances(ctx, querier, erc20Address, batchData, height)
+			batchBalances, err := queryBatchBalances(ctx, q, erc20Address, batchData, height)

57-58: Address or clarify the TODO comment.

The TODO comment "revisit" is vague and doesn't explain what needs to be revisited. Consider either:

  1. Removing the TODO if the retry handling via utils.Post is confirmed sufficient
  2. Clarifying what specific aspect needs review (e.g., retry logic, error handling strategy)

Would you like me to open an issue to track this TODO for later follow-up?


94-97: Document the ID assignment contract with QueryERC20Balances.

The 1-based ID mapping (idx+1) assumes a specific ID assignment pattern from QueryERC20Balances. This implicit contract between the two functions could break if the querier implementation changes.

Consider adding a comment clarifying this dependency:

 	// Build a map from request ID to the corresponding AddressWithID
+	// Note: IDs are 1-based because QueryERC20Balances assigns ID 0 to eth_blockNumber
+	// and uses 1-indexed IDs for balance requests
 	idToAddr := make(map[int]richlistutils.AddressWithID, len(batch))
 	for idx, addrWithID := range batch {
 		idToAddr[idx+1] = addrWithID
 	}
util/querier/evm.go (1)

305-306: Clarify or address the TODO comment.

The TODO at line 305 suggests the error handling needs improvement but doesn't specify what's inappropriate about the current approach. Consider either:

  • Removing the TODO if the current Post error handling is sufficient
  • Adding detail about what specific handling is needed
  • Opening an issue to track the improvement

Would you like me to help identify specific error-handling improvements or open an issue to track this?

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 394cf03 and c64d807.

📒 Files selected for processing (5)
  • api/handler/block/block.go (2 hunks)
  • indexer/extension/internaltx/query.go (1 hunks)
  • indexer/extension/richlist/evmrichlist/jsonrpc.go (4 hunks)
  • util/querier/cosmos.go (1 hunks)
  • util/querier/evm.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
api/handler/block/block.go (1)
api/handler/block/types.go (2)
  • ToBlocksResponse (53-63)
  • ToBlockResponse (65-91)
indexer/extension/internaltx/query.go (2)
util/querier/querier.go (1)
  • Querier (20-28)
config/config.go (1)
  • Version (20-20)
indexer/extension/richlist/evmrichlist/jsonrpc.go (2)
util/querier/querier.go (2)
  • NewQuerier (47-57)
  • Querier (20-28)
indexer/extension/richlist/utils/types.go (1)
  • AddressWithID (9-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run linter
  • GitHub Check: Run tests
  • GitHub Check: Analyze (go)
🔇 Additional comments (7)
indexer/extension/internaltx/query.go (1)

16-34: LGTM! Clean refactoring to querier abstraction.

The function correctly delegates node info fetching to the querier, handles semver validation with proper "v" prefix normalization, and provides a clear error message. The previous issue about unused fiber client code has been addressed.

indexer/extension/richlist/evmrichlist/jsonrpc.go (2)

30-79: LGTM on the batch processing structure.

The parallel batch processing implementation is well-designed:

  • Querier is correctly instantiated once before the loop (addressing the previous review feedback)
  • errgroup with SetLimit provides controlled concurrency
  • Mutex properly protects the shared balances map
  • Error wrapping preserves batch context for debugging

94-124: The validation suggestion is unnecessary; the code already handles missing responses safely.

The fetchERC20Balances function already validates that the response count matches the expected count (lines 317-319 in evm.go), ensuring exactly len(addresses)+1 responses are returned. Additionally, queryBatchBalances catches all error cases and returns early—if any response has an error or fails to parse, the entire function returns an error. There is no scenario where an address can silently be missing from the result; the function either succeeds with all balances or fails explicitly.

Likely an incorrect or invalid review comment.

api/handler/block/block.go (1)

46-46: LGTM! Consistent querier usage across handler methods.

Both GetBlocks and GetBlockByHeight now correctly use h.querier and propagate the request context via c.UserContext(). This addresses the previous review concern about inconsistent querier instantiation and aligns with the PR's goal of integrating the querier across handlers.

Also applies to: 86-86

util/querier/evm.go (2)

209-212: Past review concern has been addressed.

The error from extractResponse is now properly checked before accessing errResp.Error (line 210: if err == nil && errResp.Error != nil). This prevents potential issues with zero-valued structs when parsing fails.


171-171: Past review concern has been addressed.

The code now correctly uses q.RestUrls for the GET request to the REST endpoint, matching the pattern used by other REST-style calls like fetchEvmCall.

util/querier/cosmos.go (1)

101-148: Height, validator, module-account, and node-info helpers look solid.

The helpers for latest height, validator info, minter/burner module accounts, and node info all follow a consistent pattern: centralized endpoint rotation, clear unmarshalling, and good error propagation. The filtering for minter/burner permissions combined with AccAddressFromString and skipping invalid entries is also reasonable.

No changes requested here.

Also applies to: 150-182, 331-350

- Removed the unused offset parameter from the fetchCosmosBalances function to streamline its signature.
- Updated the function calls to reflect this change, enhancing code clarity and maintainability.
- Set the pagination offset to a fixed value of "0" when the offset is not used, ensuring consistent behavior in balance queries.
- Removed the unused offset parameter from the fetchAllAccountsWithPagination function to enhance clarity.
- Set the pagination offset to a fixed value of "0" when not in use, ensuring consistent behavior in account queries.
- Improved formatting of function calls for better readability in the GetAllBalances method.
@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
util/querier/cosmos.go (3)

29-41: Bound retries in GetCosmosTxs to avoid infinite loop and endpoint hammering

As written, this loops forever if fetchAllTxsWithPagination keeps returning a length different from txCount (e.g., API bug, race with indexer), repeatedly re-querying the same block with no backoff or exit condition. This is a correctness and reliability issue.

Consider adding a bounded retry loop and returning a clear error when expectations are not met:

-func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
-	for {
-		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
-		if err != nil {
-			return txs, err
-		}
-
-		// If we get the expected number of transactions, return immediately
-		if len(allTxs) == txCount {
-			return allTxs, nil
-		}
-	}
-}
+func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) ([]types.RestTx, error) {
+	const maxRetries = 3
+
+	var lastTxs []types.RestTx
+
+	for attempt := 0; attempt < maxRetries; attempt++ {
+		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
+		if err != nil {
+			return nil, err
+		}
+
+		lastTxs = allTxs
+		if len(allTxs) == txCount {
+			return allTxs, nil
+		}
+	}
+
+	return nil, fmt.Errorf(
+		"failed to fetch expected %d txs after %d attempts (last len=%d)",
+		txCount,
+		maxRetries,
+		len(lastTxs),
+	)
+}

184-255: Offset-based fallback in GetAllBalances still re-queries the first page only

When the “broken API” fallback is triggered (NextKey == nil but Total indicates more), you flip useOffset = true, but fetchCosmosBalances always sends pagination.offset = "0" in offset mode. The result:

  • The same first page is fetched repeatedly in offset mode.
  • allBalances fills with duplicates until len(allBalances) >= total, at which point the loop breaks with incorrect data and unnecessary network/memory overhead.

To fix this, thread an offset parameter through fetchCosmosBalances and advance it each iteration, mirroring your tx pagination approach:

-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}
@@
 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0
 
 	for {
-		balancesResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchCosmosBalances(address, height, useOffset, nextKey))
+		balancesResp, err := executeWithEndpointRotation(
+			ctx,
+			q.RestUrls,
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
+		)
@@
 		for _, balance := range balancesResp.Balances {
@@
 			allBalances = append(allBalances, sdk.Coin{
 				Denom:  denom,
 				Amount: balance.Amount,
 			})
 		}
+
+		// Advance offset for potential offset-based fallback.
+		offset += len(balancesResp.Balances)

You’ll need to adjust any other call sites of fetchCosmosBalances in this package accordingly.


18-26: Fix account pagination offset fallback and replace magic 20 with a named constant

Two issues in FetchAllAccountsWithPagination:

  1. Offset fallback has the same bug as balances.
    useOffset is set to true when NextKey is empty but Total indicates more accounts, yet offset is never advanced. In offset mode you will keep re-requesting the first page while appending duplicates until len(allAddresses) >= total.

  2. Bare 20 is a magic number for EVM address length.
    len(accAddress) > 20 is non-obvious without context and should be expressed via a named constant.

Suggested changes:

 const (
 	paginationLimitInt       = 100
 	paginationLimit          = "100"
+	evmAccountAddressLen     = 20
 	cosmosTxsPath            = "/cosmos/tx/v1beta1/txs/block/%d"
@@
 func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) {
 	var allAddresses []sdk.AccAddress
 	var nextKey []sdk.AccAddress
 	useOffset := false
 	offset := 0
@@
 		// Extract addresses from accounts
 		for _, account := range accountsResp.Accounts {
 			if account.Address != "" {
 				if accAddress, err := util.AccAddressFromString(account.Address); err == nil {
-					if q.VmType == types.EVM && len(accAddress) > 20 {
+					if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 						continue
 					}
 					allAddresses = append(allAddresses, accAddress)
 				}
 			}
 		}
+
+		// Advance offset for potential offset-based fallback.
+		offset += len(accountsResp.Accounts)

With this, subsequent offset-based requests will correctly move through pages, and the EVM-specific filter becomes self-documenting.

Also applies to: 282-327

🧹 Nitpick comments (1)
util/querier/cosmos.go (1)

150-182: Consider surfacing invalid module account addresses instead of silently dropping

GetMinterBurnerModuleAccounts filters by "minter" / "burner" permissions and then ignores any AccAddressFromString errors. This means a misconfigured module account will just disappear from results with no signal.

Consider either returning an error on invalid module account addresses or at least logging/recording them so configuration issues don’t go unnoticed.
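
A minimal sketch of the logging alternative, assuming a slog logger is in scope; the account type and parsing hook are placeholders for the real response type and util.AccAddressFromString.

package example

import "log/slog"

// moduleAccount mirrors only the fields this sketch needs.
type moduleAccount struct {
	Name    string
	Address string
}

// collectModuleAccounts keeps the existing filtering behavior but records
// invalid addresses instead of dropping them silently.
func collectModuleAccounts(accs []moduleAccount, parseAddr func(string) ([]byte, error), logger *slog.Logger) [][]byte {
	var out [][]byte
	for _, acc := range accs {
		addr, err := parseAddr(acc.Address)
		if err != nil {
			logger.Warn("skipping module account with invalid address",
				slog.String("name", acc.Name),
				slog.String("address", acc.Address),
				slog.Any("error", err))
			continue
		}
		out = append(out, addr)
	}
	return out
}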

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between c444cfb and 3cff528.

📒 Files selected for processing (1)
  • util/querier/cosmos.go (1 hunks)
🔇 Additional comments (5)
util/querier/cosmos.go (5)

43-61: Tx pagination helper looks consistent

fetchTxs correctly toggles between offset and key-based pagination, encodes nextKey once, and plumbs the timeout through to Get. No issues from a correctness standpoint.


63-99: Tx pagination + offset fallback implementation looks correct

fetchAllTxsWithPagination appends into allTxs, keeps offset = len(allTxs) in sync, and only falls back to offset-based pagination when NextKey is empty but a full page is returned and Total indicates more. This fixes the classic “offset never advances” pitfall and looks sound.


101-126: Latest-height fetching and parsing is straightforward

GetLatestHeight wraps the REST call with endpoint rotation and parses Header.Height into int64, returning a typed invalid-value error on parse failure. This is clear and robust enough for this layer.


128-148: Validator fetch path and rotation wiring look fine

The validator query uses a single helper with executeWithEndpointRotation and extractResponse, with no hidden state. Assuming the /opinit/opchild/v1/validator/%s path is correct for your deployment, this looks good.


330-350: Node info helpers are straightforward and consistent

fetchNodeInfo and GetNodeInfo mirror the same endpoint-rotation + typed-extraction pattern as the other helpers. No issues from a correctness or style perspective.

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (4)
util/querier/cosmos.go (4)

301-301: Replace magic number with named constant.

The literal 20 for EVM address length filtering is non-obvious without context. A named constant improves clarity.

Define a constant:

 const (
 	paginationLimitInt       = 100
 	paginationLimit          = "100"
+	evmAccountAddressLen     = 20
 	cosmosTxsPath            = "/cosmos/tx/v1beta1/txs/block/%d"

Then use it:

-				if q.VmType == types.EVM && len(accAddress) > 20 {
+				if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 					continue
 				}

29-41: Critical: Infinite loop when transaction count never matches expected value.

This function loops indefinitely if fetchAllTxsWithPagination never returns exactly txCount transactions. Network issues, API bugs, or race conditions between fetching the count and transactions can cause this.

Add a retry limit and respect context cancellation:

 func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
+	const maxAttempts = 3
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		default:
+		}
+
-	for {
 		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
 		if err != nil {
 			return txs, err
 		}
 
 		// If we get the expected number of transactions, return immediately
 		if len(allTxs) == txCount {
 			return allTxs, nil
 		}
+		
+		// Optional: add backoff between retries
+		if attempt < maxAttempts-1 {
+			time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond)
+		}
 	}
+	return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts", txCount, maxAttempts)
 }

209-258: Critical: Offset never updated in pagination fallback.

GetAllBalances doesn't track or update an offset variable. When the fallback to offset-based pagination is triggered (line 249), subsequent iterations don't advance the offset, causing duplicate data.

Track and update the offset:

 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0
 
 	for {
 		balancesResp, err := executeWithEndpointRotation(
 			ctx, q.RestUrls,
-			fetchCosmosBalances(address, height, useOffset, nextKey),
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
 		)
 		if err != nil {
 			return nil, err
 		}
 
 		// Append balances from this page
 		for _, balance := range balancesResp.Balances {
 			denom := strings.ToLower(balance.Denom)
 			if q.VmType == types.EVM {
 				contract, err := q.GetEvmContractByDenom(ctx, denom)
 				if err != nil {
 					return nil, err
 				}
 				denom = contract
 			}
 
 			allBalances = append(allBalances, sdk.Coin{
 				Denom:  denom,
 				Amount: balance.Amount,
 			})
 		}
+		// Update offset for next iteration
+		offset = len(allBalances)
+
 		// Check if there are more pages
 		if len(balancesResp.Pagination.NextKey) == 0 {

285-330: Critical: Offset never updated in pagination fallback.

FetchAllAccountsWithPagination doesn't track or update an offset variable. When offset-based pagination is triggered (line 319), the offset remains at 0, causing duplicate data.

Track and update the offset:

 func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) {
 	var allAddresses []sdk.AccAddress
 	var nextKey []byte
 	useOffset := false
+	offset := 0
 
 	for {
 
-		accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, nextKey))
+		accountsResp, err := executeWithEndpointRotation(
+			ctx,
+			q.RestUrls,
+			fetchAllAccountsWithPagination(height, useOffset, offset, nextKey),
+		)
 		if err != nil {
 			return nil, err
 		}
 
 		// Extract addresses from accounts
 		for _, account := range accountsResp.Accounts {
 			if account.Address != "" {
 				if accAddress, err := util.AccAddressFromString(account.Address); err == nil {
 					if q.VmType == types.EVM && len(accAddress) > 20 {
 						continue
 					}
 					allAddresses = append(allAddresses, accAddress)
 				}
 			}
 		}
+
+		// Update offset for next iteration
+		offset = len(allAddresses)
 
 		// Check if there are more pages
 		if len(accountsResp.Pagination.NextKey) == 0 {
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 3cff528 and be32661.

📒 Files selected for processing (1)
  • util/querier/cosmos.go (1 hunks)
🧰 Additional context used
🪛 GitHub Check: Run linter
util/querier/cosmos.go

[failure] 310-310:
if len(accountsResp.Pagination.NextKey) == 0 has complex nested blocks (complexity: 5) (nestif)


[failure] 240-240:
if len(balancesResp.Pagination.NextKey) == 0 has complex nested blocks (complexity: 5) (nestif)


[failure] 79-79:
if len(response.Pagination.NextKey) == 0 has complex nested blocks (complexity: 5) (nestif)


[failure] 285-285:
cognitive complexity 32 of func (*Querier).FetchAllAccountsWithPagination is high (> 30) (gocognit)


Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (8)
util/querier/query.go (1)

237-254: Timeout retries remain unbounded and metrics are misleading.

The issues flagged in previous reviews still apply:

  1. Timeout errors trigger infinite retries (line 252: continue without checking retry count)
  2. Line 243 calls RecordRateLimitHit for timeout errors, conflating two distinct failure modes
util/cache/cache.go (2)

21-30: Duplicate type definitions are unused and should be removed.

These Validator and ConsensusPubkey structs duplicate types already defined in types/validators.go. Additionally, validatorCache (line 39) uses *types.Validator, so these local definitions are entirely unused dead code.

-type Validator struct {
-	Moniker         string          `json:"moniker"`
-	OperatorAddress string          `json:"operator_address"`
-	ConsensusPubkey ConsensusPubkey `json:"consensus_pubkey"`
-}
-
-type ConsensusPubkey struct {
-	Type string `json:"@type"`
-	Key  string `json:"key"`
-}

489-495: GetAccountCache should normalize addresses to prevent cache misses.

As flagged previously, checkAccountCache (line 71) normalizes addresses to bech32 before cache lookup, but GetAccountCache queries with the raw input. Callers passing hex-formatted addresses will miss entries stored under their bech32 representation.

 func GetAccountCache(account string) (int64, bool) {
+	key, err := normalizeAccountToBech32(account)
+	if err != nil {
+		return 0, false
+	}
-	if id, ok := accountCache.Get(account); ok {
+	if id, ok := accountCache.Get(key); ok {
 		return id, true
 	}
 	return 0, false
 }
util/querier/cosmos.go (5)

56-68: Infinite loop when transaction count never matches expected value.

As flagged previously, GetCosmosTxs loops forever if the API never returns exactly txCount transactions. This can occur due to network issues, API bugs, or race conditions.

 func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
+	const maxRetries = 3
+	for attempt := 0; attempt < maxRetries; attempt++ {
-	for {
 		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
 		if err != nil {
 			return txs, err
 		}
 
 		if len(allTxs) == txCount {
 			return allTxs, nil
 		}
 	}
+	return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts", txCount, maxRetries)
 }

203-226: Hardcoded offset breaks pagination fallback in fetchCosmosBalances.

Line 208 hardcodes pagination.offset to "0". When offset-based pagination is triggered, every request re-fetches the first page.

The function needs an offset int parameter:

-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) ...
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) ...
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		}

228-269: GetAllBalances never advances offset for pagination fallback.

When the offset fallback is triggered, the offset variable is never updated, causing duplicate data fetching.

Add offset tracking similar to fetchAllTxsWithPagination:

 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0

 	for {
 		balancesResp, err := executeWithEndpointRotation(
 			ctx, q.RestUrls,
-			fetchCosmosBalances(address, height, useOffset, nextKey),
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
 		)
 		// ... append balances ...
+		offset += len(balancesResp.Balances)

271-323: Same offset pagination bugs exist in fetchAllAccountsWithPagination and FetchAllAccountsWithPagination.

Line 276 hardcodes offset to "0" and the calling function never tracks/advances offset.


325-341: Magic number 20 for EVM address length should be a named constant.

Line 335 uses len(accAddress) > 20 without context. A named constant would make this self-documenting.

+const evmAccountAddressLen = 20
+
 func (q *Querier) extractAddressesFromAccounts(accounts []types.CosmosAccount) []sdk.AccAddress {
 	// ...
-		if q.VmType == types.EVM && len(accAddress) > 20 {
+		if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
🧹 Nitpick comments (1)
config/chain.go (1)

111-128: Inconsistent error types based on field name reduces clarity.

The field-specific branching (lines 117-119, 123-126) uses different error constructors (NewValidationError vs NewInvalidValueError) for the same validation failure depending on field name. This makes error handling inconsistent.

Consider unifying to a single error type:

 func validateHttpUrl(fieldName, urlStr string, idx int) error {
 	if len(urlStr) == 0 {
 		return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d is empty", idx))
 	}
 	u, err := url.Parse(urlStr)
 	if err != nil {
-		if fieldName == "RPC_URL" {
-			return types.NewValidationError(fieldName, fmt.Sprintf("invalid URL format at index %d: %s", idx, urlStr))
-		}
-		return types.NewInvalidValueError(fieldName, urlStr, fmt.Sprintf("invalid URL at index %d: %v", idx, err))
+		return types.NewValidationError(fieldName, fmt.Sprintf("invalid URL format at index %d: %s", idx, urlStr))
 	}
 	if u.Scheme != "http" && u.Scheme != "https" {
-		if fieldName == "RPC_URL" {
-			return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
-		}
-		return types.NewInvalidValueError(fieldName, urlStr, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
+		return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
 	}
 	return nil
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between be32661 and 24c4865.

📒 Files selected for processing (6)
  • config/chain.go (1 hunks)
  • indexer/extension/richlist/evmrichlist/run.go (2 hunks)
  • util/cache/cache.go (13 hunks)
  • util/querier/cosmos.go (1 hunks)
  • util/querier/querier.go (1 hunks)
  • util/querier/query.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • indexer/extension/richlist/evmrichlist/run.go
🧰 Additional context used
🧬 Code graph analysis (2)
util/cache/cache.go (3)
types/validators.go (2)
  • ConsensusPubkey (13-16)
  • Validator (7-11)
cache/cache.go (1)
  • Cache (9-11)
util/util.go (3)
  • AccAddressFromString (12-31)
  • BytesToHexWithPrefix (49-51)
  • BytesToHex (45-47)
util/querier/cosmos.go (6)
indexer/collector/tx/types.go (3)
  • Pagination (19-22)
  • RestTx (24-28)
  • QueryRestTxsResponse (14-17)
types/account.go (6)
  • Pagination (45-48)
  • RestTx (73-77)
  • QueryRestTxsResponse (94-97)
  • QueryModuleAccountsResponse (63-65)
  • CosmosAccountsResponse (51-54)
  • CosmosAccount (14-18)
util/querier/querier.go (1)
  • Querier (20-28)
types/errors.go (1)
  • NewInvalidValueError (60-66)
util/util.go (1)
  • AccAddressFromString (12-31)
types/cosmos.go (1)
  • NodeInfo (3-7)
🔇 Additional comments (12)
util/querier/query.go (2)

1-40: Constants and initialization look reasonable.

The retry constants (maxRetries=5, baseBackoffDelay=1s, maxBackoffDelay=30s) are sensible defaults. The LCG seed initialization using atomic operations is thread-safe.


293-391: HTTP request execution and error handling are well-structured.

The implementation properly:

  • Validates URLs before use
  • Tracks metrics for latency, concurrent requests, and semaphore wait times
  • Handles rate limits (429) and internal server errors appropriately
  • Uses context deadlines for timeout extraction
config/chain.go (2)

131-153: Backward compatibility accessors are well-designed.

The GetRpcUrl(), GetRestUrl(), and GetJsonRpcUrl() methods provide clean backward compatibility by returning the first element or empty string. This is a good pattern for migration.
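
For readers unfamiliar with the pattern, the accessor shape is roughly the following; this is a sketch, not the exact code in config/chain.go.

package config

// ChainConfig is trimmed to the single field this sketch needs.
type ChainConfig struct {
	RpcUrls []string
}

// GetRpcUrl returns the first configured RPC URL, or "" when none are set,
// so existing single-URL call sites keep working after the slice migration.
func (c ChainConfig) GetRpcUrl() string {
	if len(c.RpcUrls) == 0 {
		return ""
	}
	return c.RpcUrls[0]
}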


20-40: Validation refactoring improves maintainability.

Breaking validation into dedicated methods (validateChainId, validateRpcUrls, etc.) makes the code more readable and testable.

util/cache/cache.go (2)

45-57: Cache initialization with singleton pattern is well-implemented.

Using sync.Once ensures thread-safe single initialization. The cache types and sizes are configurable through CacheConfig.
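
An illustrative reduction of the singleton pattern described above; the variable names and cache type are placeholders for the real LRU-backed caches.

package cache

import "sync"

var (
	initOnce     sync.Once
	accountCache map[string]int64 // stand-in for the real account cache
)

// InitCaches can be called from both cmd/api and cmd/indexer; sync.Once
// guarantees the initialization body runs exactly once.
func InitCaches(accountCacheSize int) {
	initOnce.Do(func() {
		accountCache = make(map[string]int64, accountCacheSize)
		// ... validator, NFT and EVM-denom caches would be created here too
	})
}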


497-506: Validator cache accessors are correctly implemented.

GetValidatorCache and SetValidatorCache properly use *types.Validator and key by OperatorAddress.

util/querier/querier.go (2)

47-57: NewQuerier constructor correctly copies configuration fields.

The constructor properly initializes all fields from ChainConfig, including the new slice-based URL fields.


36-42: Generic extractResponse helper is a clean abstraction.

Using generics for JSON unmarshalling reduces boilerplate across different response types.

util/querier/cosmos.go (4)

29-54: handlePaginationNextKey helper is a good abstraction.

This centralizes the pagination logic including the workaround for broken APIs that return null next_key prematurely. The boolean return values are clearly documented.


70-118: Transaction fetching correctly implements offset-based pagination.

fetchTxs accepts an offset parameter (line 70) and fetchAllTxsWithPagination properly updates offset = len(allTxs) (line 104). This is the correct pattern that should be applied to the balance and account fetchers.


120-145: Height fetching and parsing are well-implemented.

The code properly uses endpoint rotation and handles parsing errors with appropriate error types.


343-363: Node info retrieval is cleanly implemented.

Uses the generic extractResponse helper and endpoint rotation consistently.

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (6)
util/querier/cosmos.go (6)

57-69: Critical: Infinite loop when transaction count never matches expected value.

This issue was previously flagged but remains unfixed. GetCosmosTxs loops indefinitely if the API never returns exactly txCount transactions, which can occur due to network issues, API bugs, or race conditions.

Add a retry limit or respect context deadline:

 func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
+	const maxRetries = 3
+	for attempt := 0; attempt < maxRetries; attempt++ {
-	for {
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		default:
+		}
+
 		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
 		if err != nil {
 			return txs, err
 		}
 
 		// If we get the expected number of transactions, return immediately
 		if len(allTxs) == txCount {
 			return allTxs, nil
 		}
 	}
+	return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts", txCount, maxRetries)
 }

204-227: Critical: Hardcoded offset causes infinite loop in pagination fallback.

This issue was previously flagged as addressed but remains unfixed. Line 209 hardcodes pagination.offset to "0", so when offset-based pagination fallback is triggered, every subsequent request re-fetches the first page.

Add an offset parameter and use it:

-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}

229-270: Critical: Offset never tracked in GetAllBalances, causing infinite loop in fallback pagination.

This issue was previously flagged as addressed but remains unfixed. When the fallback to offset-based pagination is triggered at line 260, useOffset is set to true but no offset variable exists to track position. Combined with the hardcoded "0" in fetchCosmosBalances, this causes an infinite loop re-fetching the first page.

Add offset tracking and pass it to the fetch function:

 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0
 
 	for {
 		balancesResp, err := executeWithEndpointRotation(
 			ctx, q.RestUrls,
-			fetchCosmosBalances(address, height, useOffset, nextKey),
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
 		)
 		if err != nil {
 			return nil, err
 		}
 
 		// Append balances from this page
 		for _, balance := range balancesResp.Balances {
 			denom := strings.ToLower(balance.Denom)
 			if q.VmType == types.EVM {
 				contract, err := q.GetEvmContractByDenom(ctx, denom)
 				if err != nil {
 					return nil, err
 				}
 				denom = contract
 			}
 
 			allBalances = append(allBalances, sdk.Coin{
 				Denom:  denom,
 				Amount: balance.Amount,
 			})
 		}
+		offset = len(allBalances)
+
 		// Check if there are more pages
 		shouldContinue, shouldBreak := handlePaginationNextKey(balancesResp.Pagination, len(allBalances), len(balancesResp.Balances), &useOffset)

272-295: Critical: Hardcoded offset causes infinite loop in pagination fallback.

This issue was previously flagged but remains unfixed. Line 277 hardcodes pagination.offset to "0", identical to the bug in fetchCosmosBalances. When offset-based pagination is triggered, every request re-fetches the first page.

Add an offset parameter:

-func fetchAllAccountsWithPagination(height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
+func fetchAllAccountsWithPagination(height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}

297-324: Critical: Offset never tracked in FetchAllAccountsWithPagination, causing infinite loop in fallback pagination.

This issue was previously flagged but remains unfixed. When fallback to offset-based pagination is triggered at line 312, useOffset is set to true but no offset variable exists. Combined with the hardcoded "0" in fetchAllAccountsWithPagination, this causes an infinite loop.

Add offset tracking and pass it to the fetch function:

 func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) {
 	var allAddresses []sdk.AccAddress
 	var nextKey []byte
 	useOffset := false
+	offset := 0
 
 	for {
-		accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, nextKey))
+		accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, offset, nextKey))
 		if err != nil {
 			return nil, err
 		}
 
 		extracted := q.extractAddressesFromAccounts(accountsResp.Accounts)
 		allAddresses = append(allAddresses, extracted...)
+		offset = len(allAddresses)
 
 		// Check if there are more pages

336-336: Replace magic number with named constant.

This issue was previously flagged but remains unfixed. The literal 20 for EVM address length filtering is not self-documenting.

Define a constant at the top of the file:

 const (
 	paginationLimitInt       = 100
 	paginationLimit          = "100"
+	evmAccountAddressLen     = 20
 	cosmosTxsPath            = "/cosmos/tx/v1beta1/txs/block/%d"

Then use it:

-		if q.VmType == types.EVM && len(accAddress) > 20 {
+		if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 			continue
 		}
🧹 Nitpick comments (1)
util/querier/cosmos.go (1)

30-55: Consider a more expressive return type for pagination control.

Returning (bool, bool) requires callers to remember which boolean means what. A custom type or named returns would improve readability.

Example refactor:

-func handlePaginationNextKey(pagination types.Pagination, currentCount, pageSize int, useOffset *bool) (bool, bool) {
+type PaginationAction int
+
+const (
+	PaginationContinueWithNextKey PaginationAction = iota
+	PaginationContinueWithOffset
+	PaginationDone
+)
+
+func handlePaginationNextKey(pagination types.Pagination, currentCount, pageSize int, useOffset *bool) PaginationAction {
 	if len(pagination.NextKey) != 0 {
-		return false, false // Continue with next key
+		return PaginationContinueWithNextKey
 	}
 
 	// Workaround for broken API that returns null next_key prematurely.
 	if pagination.Total == "" {
-		return false, true // No more pages
+		return PaginationDone
 	}
 
 	total, err := strconv.Atoi(pagination.Total)
 	if err != nil {
-		return false, true // Can't parse total, assume done
+		return PaginationDone
 	}
 
 	if currentCount < total && pageSize == paginationLimitInt {
 		*useOffset = true
-		return true, false // Continue with offset
+		return PaginationContinueWithOffset
 	}
 
-	return false, true // No more pages
+	return PaginationDone
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 24c4865 and f215d7c.

📒 Files selected for processing (1)
  • util/querier/cosmos.go (1 hunks)
🔇 Additional comments (1)
util/querier/cosmos.go (1)

91-119: LGTM! Pagination with offset tracking implemented correctly.

The function properly maintains and updates offset on line 105, ensuring that offset-based pagination fallback works as intended.

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
util/querier/querier.go (1)

62-101: Past review issue remains unaddressed: backoff calculation and endpoint exhaustion logic.

The concerns raised in the previous review comment about this section have not been resolved:

  1. Line 86: calculateBackoffDelay(retriesPerEndpoint) is called only when retriesPerEndpoint >= maxRetriesPerURL (from the line 84 condition), so the backoff delay before endpoint rotation is always computed from a retry count of at least maxRetriesPerURL, i.e. the maximum delay. This may not reflect the intended exponential backoff behavior.

  2. Lines 97-99: The endpoint exhaustion check (currentEndpointIndex == startEndpointIndex) is positioned after rotation but occurs BEFORE actually attempting the rotated endpoint. Additionally, this check is redundant because the totalRetries check at line 114 will always trigger first after len(endpoints) * maxRetriesPerURL attempts.

Please review and address the suggestions in the previous review comment to fix both the backoff calculation (use maxRetriesPerURL-1 or track attempts correctly) and the exhaustion detection logic (track distinct endpoints tried rather than relying on index equality).
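
One possible shape for both fixes, sketched under the assumption that the existing constants keep their meaning (1s base, 30s cap, maxRetriesPerURL = 5); this is not the project's executeWithEndpointRotation, just an outline of the suggested control flow.

package querier

import (
	"context"
	"fmt"
	"time"
)

const maxRetriesPerURL = 5

// calculateBackoffDelay sketches exponential backoff capped at 30s.
func calculateBackoffDelay(attempt int) time.Duration {
	d := time.Second << attempt
	if d > 30*time.Second {
		d = 30 * time.Second
	}
	return d
}

// executeWithRotation derives backoff from the attempt index within the
// current endpoint and declares exhaustion only after every endpoint has
// used its full retry budget.
func executeWithRotation(ctx context.Context, endpoints []string, call func(context.Context, string) ([]byte, error)) ([]byte, error) {
	var lastErr error
	for _, endpoint := range endpoints {
		for attempt := 0; attempt < maxRetriesPerURL; attempt++ {
			if err := ctx.Err(); err != nil {
				return nil, err
			}
			res, err := call(ctx, endpoint)
			if err == nil {
				return res, nil
			}
			lastErr = err
			if attempt < maxRetriesPerURL-1 {
				time.Sleep(calculateBackoffDelay(attempt)) // smallest delay first
			}
		}
	}
	return nil, fmt.Errorf("all %d endpoints exhausted: %w", len(endpoints), lastErr)
}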

🧹 Nitpick comments (2)
util/querier/querier.go (2)

17-17: Consider adding a comment explaining queryTimeout usage.

The queryTimeout constant is defined but not referenced within this file. While it's likely used by request function implementations in other files (e.g., evm.go, wasm.go), adding a comment would clarify its purpose and prevent confusion.

 const (
+	// queryTimeout is the default timeout for individual HTTP requests made by querier methods
 	queryTimeout     = 5 * time.Second
 	maxRetriesPerURL = 5
 )

48-58: Consider adding nil check for defensive programming.

The constructor works correctly but doesn't validate the input parameter. Adding a nil check would prevent potential panics.

 func NewQuerier(cfg *config.ChainConfig) *Querier {
+	if cfg == nil {
+		return nil
+	}
 	return &Querier{
 		ChainId:              cfg.ChainId,
 		VmType:               cfg.VmType,
 		RpcUrls:              cfg.RpcUrls,
 		RestUrls:             cfg.RestUrls,
 		JsonRpcUrls:          cfg.JsonRpcUrls,
 		AccountAddressPrefix: cfg.AccountAddressPrefix,
 		Environment:          cfg.Environment,
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between f215d7c and d6396fe.

📒 Files selected for processing (1)
  • util/querier/querier.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
types/vm.go (1)
  • VMType (3-3)
config/chain.go (1)
  • ChainConfig (10-18)
sentry_integration/sentry_integration.go (1)
  • CaptureCurrentHubException (9-11)
🔇 Additional comments (3)
util/querier/querier.go (3)

1-14: LGTM on imports and package structure.

The package declaration and imports are well-organized with appropriate dependencies for the querier functionality.


37-46: LGTM on helper function and type definitions.

The generic extractResponse function and requestFunc type alias provide clean abstractions for the endpoint rotation logic.


103-128: LGTM on request execution and failure handling logic.

The request execution flow, error handling, and backoff logic in this section work correctly:

  • Successful responses return immediately
  • Failed requests properly increment counters
  • Sentry integration captures exhaustion errors for observability
  • Context cancellation is checked appropriately
  • Error wrapping preserves the error chain

- Changed configuration keys from "RPC_URLS", "REST_URLS", and "JSON_RPC_URLS" to singular forms "RPC_URL", "REST_URL", and "JSON_RPC_URL" for consistency and clarity in configuration management.
@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
config/config.go (1)

305-306: Resolve TODO by implementing dedicated validator cache configuration.

The TODO comment and the use of the generic CACHE_SIZE environment variable for ValidatorCacheSize create an inconsistency with the other cache configurations (lines 299-304), which all use dedicated environment variables.

This concern is addressed in the review comment for line 111, which provides a complete solution: add a DefaultValidatorCacheSize constant, set a default for VALIDATOR_CACHE_SIZE in setDefaults(), and use viper.GetInt("VALIDATOR_CACHE_SIZE") here.
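
Sketched out, the suggested wiring would look roughly like this; the default value is illustrative and the function names are assumptions about where the pieces live.

package config

import "github.com/spf13/viper"

// DefaultValidatorCacheSize is an illustrative default; pick a value in line
// with the other cache sizes.
const DefaultValidatorCacheSize = 1000

// registered alongside the other cache defaults in setDefaults()
func setValidatorCacheDefault() {
	viper.SetDefault("VALIDATOR_CACHE_SIZE", DefaultValidatorCacheSize)
}

// read where CacheConfig is assembled
func validatorCacheSize() int {
	return viper.GetInt("VALIDATOR_CACHE_SIZE")
}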

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between d6396fe and 2d11ebe.

📒 Files selected for processing (1)
  • config/config.go (3 hunks)
🔇 Additional comments (1)
config/config.go (1)

262-264: LGTM! Remember to update README as previously noted.

The conversion from single URL strings to comma-separated URL slices is correctly implemented using splitAndTrim(), which properly handles empty values and whitespace. This enables multi-URL support for improved failover and load balancing.
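
For context, a splitAndTrim-style helper is expected to behave roughly like the sketch below (split on commas, trim whitespace, drop empties); this is not the project's exact implementation.

package config

import "strings"

// splitURLs turns "https://rpc-1.example, https://rpc-2.example ," into
// ["https://rpc-1.example", "https://rpc-2.example"].
func splitURLs(raw string) []string {
	var out []string
	for _, part := range strings.Split(raw, ",") {
		if p := strings.TrimSpace(part); p != "" {
			out = append(out, p)
		}
	}
	return out
}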

As noted in the previous review, ensure the README is updated to reflect the environment variable usage pattern (comma-separated values for RPC_URL, REST_URL, and JSON_RPC_URL).

- Introduced a new querier instance in the RichListExtension struct to enhance data retrieval capabilities.
- Improved the initialization process for better integration with the existing configuration.
- Temporarily disabled error handling logic in the fastSync method to prevent panic on exceeding maximum scrape error count.
- Retained logging of errors for visibility while allowing the function to continue execution.
@coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 9f1d0c4 and 78545e7.

📒 Files selected for processing (1)
  • indexer/scraper/sync.go (1 hunks)

- Commented out the error return statement in the executeWithEndpointRotation function to prevent failure when all retries are exhausted.
- This change allows the function to continue execution without returning an error, while still logging the last encountered error for visibility.
… querier

- Removed a commented-out line in the fetchTraceCallByBlock function to streamline the code.
- Added logging statements for total retries and errors in the executeWithEndpointRotation function to improve visibility during execution.
@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
util/querier/querier.go (1)

84-101: Endpoint rotation logic has known issues - addressing past comment.

The previously identified issues with backoff calculation and exhaustion detection are still present. Additionally, there's a double-backoff scenario: after the final retry on an endpoint, backoff occurs at lines 120-125, then again at lines 86-91 during rotation check.

🧹 Nitpick comments (3)
util/querier/evm.go (3)

268-273: Consider validating address length for defensive coding.

The padding logic handles addresses shorter than 64 chars but doesn't validate oversized or malformed addresses. A standard Ethereum address is 40 hex chars (20 bytes). While callers likely provide valid addresses, adding a length check would prevent subtle bugs from propagating.

 			addressParam := strings.TrimPrefix(address, "0x")
 
+			// Validate standard Ethereum address length (40 hex chars = 20 bytes)
+			if len(addressParam) > 40 {
+				return nil, fmt.Errorf("invalid address length for %s: expected 40 hex chars, got %d", address, len(addressParam))
+			}
+
 			// Pad address to 32 bytes (64 hex chars) - efficient single allocation
 			if len(addressParam) < 64 {
 				addressParam = strings.Repeat("0", 64-len(addressParam)) + addressParam
 			}

304-305: TODO: Address incomplete error handling.

The TODO comment indicates uncertainty about proper handling here. This could be related to retry logic, partial failures in batch responses, or timeout handling.

Would you like me to help design more robust error handling for this batch request? For example, handling partial failures where some responses succeed while others fail, or implementing request chunking for large batches.
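
As one hedged option for the chunking idea, a generic helper like the following could split the batch before it is sent; the element type and chunk size are illustrative.

package querier

// chunkRequests splits a JSON-RPC batch into fixed-size chunks so a very
// large address list does not become one oversized request.
func chunkRequests[T any](reqs []T, size int) [][]T {
	if size <= 0 || len(reqs) == 0 {
		return [][]T{reqs}
	}
	var chunks [][]T
	for start := 0; start < len(reqs); start += size {
		end := start + size
		if end > len(reqs) {
			end = len(reqs)
		}
		chunks = append(chunks, reqs[start:end])
	}
	return chunks
}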


230-248: Minor: Consider checking first response directly.

Since eth_blockNumber is always added as the first request with ID 0 (line 261), checking batchResponses[0] directly would be more efficient than iterating. However, the current approach is more defensive against response reordering.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 31f9d7f and 1f859b7.

📒 Files selected for processing (2)
  • util/querier/evm.go (1 hunks)
  • util/querier/querier.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
types/vm.go (1)
  • VMType (3-3)
config/chain.go (1)
  • ChainConfig (10-18)
sentry_integration/sentry_integration.go (1)
  • CaptureCurrentHubException (9-11)
🔇 Additional comments (4)
util/querier/evm.go (4)

24-42: LGTM! Clean ABI interaction pattern.

The method properly handles ABI packing/unpacking with appropriate error propagation. The pattern is consistent and reusable.


44-66: LGTM! Good input validation for token ID.

Using big.Int.SetString with proper validation and a custom error type provides clear error messaging for invalid inputs.
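
For reference, the validation idiom being praised boils down to the following sketch; the error message is illustrative rather than the custom error type used in the code.

package example

import (
	"fmt"
	"math/big"
)

// parseTokenID converts a decimal token ID string into a *big.Int and
// rejects malformed input instead of silently producing a zero value.
func parseTokenID(tokenIdStr string) (*big.Int, error) {
	id, ok := new(big.Int).SetString(tokenIdStr, 10)
	if !ok {
		return nil, fmt.Errorf("invalid token id %q", tokenIdStr)
	}
	return id, nil
}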


150-186: LGTM! Well-structured caching and normalization.

The method follows cache-first pattern, handles early return for hex addresses, normalizes denom formats appropriately, and validates the response before caching.


209-212: Past issue addressed - error handling is now correct.

The previous review flagged that the error from extractResponse was ignored. This has been fixed - the condition now properly checks err == nil before accessing errResp.Error.

- Updated the timeout parameter in the TraceCallByBlock method to 10 times the previous queryTimeout value, enhancing the robustness of the endpoint rotation during trace calls.
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
util/querier/evm.go (4)

102-134: Confirm intended behaviour when VmType != types.EVM in GetEvmTxs.

Right now, a non‑EVM querier returns (nil, nil) because of the early if q.VmType != types.EVM { return }. That’s reasonable if callers always gate on VmType or treat “no txs & no error” as “EVM not active”, but it can also silently hide misconfiguration.

Consider either:

  • returning an explicit error (e.g. “EVM VMType not enabled”), or
  • documenting that nil, nil is the sentinel for “non‑EVM chain” and ensuring callers rely on that.

136-186: Normalize denom before cache access to avoid duplicate remote lookups.

GetEvmContractByDenom normalizes denom (uppercasing the suffix of ibc/ denoms, mapping gas to GAS) only after checking the cache, but uses the normalized denom as the cache key on write. That means repeated calls with different cases (e.g. "ibc/abc" vs "ibc/ABC", "gas" vs "GAS") will miss the cache and re‑query MiniEVM.

Refactor to apply normalization first, then consistently use the normalized key for both cache read and write and for the REST call, e.g.:

 func (q *Querier) GetEvmContractByDenom(ctx context.Context, denom string) (string, error) {
 	if strings.HasPrefix(denom, "0x") {
 		return denom, nil
 	}
 
-	// Check cache first
-	if address, ok := cache.GetEvmDenomContractCache(denom); ok {
-		return address, nil
-	}
-
-	// ibc/UPPERCASE
-	// l2/lowercase
-	// evm/AnyCase
-	if strings.HasPrefix(denom, "ibc/") {
-		denom = fmt.Sprintf("ibc/%s", strings.ToUpper(denom[4:]))
-	} else if strings.ToLower(denom) == "gas" {
-		denom = "GAS"
-	}
+	normalized := denom
+
+	// ibc/UPPERCASE
+	// l2/lowercase
+	// evm/AnyCase
+	if strings.HasPrefix(normalized, "ibc/") {
+		normalized = fmt.Sprintf("ibc/%s", strings.ToUpper(normalized[4:]))
+	} else if strings.EqualFold(normalized, "gas") {
+		normalized = "GAS"
+	}
+
+	// Check cache with normalized denom
+	if address, ok := cache.GetEvmDenomContractCache(normalized); ok {
+		return address, nil
+	}
 
-	response, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchEvmContractByDenom(denom))
+	response, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchEvmContractByDenom(normalized))
@@
-	address := strings.ToLower(response.Address)
-	cache.SetEvmDenomContractCache(denom, address)
+	address := strings.ToLower(response.Address)
+	cache.SetEvmDenomContractCache(normalized, address)

This keeps semantics the same while improving cache hit rate and avoiding repeated network calls for equivalent denoms.


188-228: Trace JSON‑RPC error handling now looks correct; consider single unmarshal as a minor cleanup.

The updated logic correctly checks err == nil && errResp.Error != nil before surfacing an RPC error, and only then unmarshals into DebugCallTraceBlockResponse. That fixes the earlier bug where the parse error was ignored.

If you ever want to tighten this further, you could unmarshal once into a struct that has both Result and Error fields instead of decoding twice, but that’s purely a micro‑optimization.
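
The single-pass variant could look roughly like this; the field shapes are assumptions based on a generic JSON-RPC envelope, not the project's types.

package querier

import (
	"encoding/json"
	"fmt"
)

// traceEnvelope carries both the result and the error so the payload is
// unmarshalled only once.
type traceEnvelope struct {
	Result json.RawMessage `json:"result"`
	Error  *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

func decodeTrace(body []byte) (json.RawMessage, error) {
	var env traceEnvelope
	if err := json.Unmarshal(body, &env); err != nil {
		return nil, fmt.Errorf("failed to unmarshal trace response: %w", err)
	}
	if env.Error != nil {
		return nil, fmt.Errorf("rpc error %d: %s", env.Error.Code, env.Error.Message)
	}
	return env.Result, nil
}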


230-341: Batch ERC‑20 balances: check caller expectations re JSON‑RPC error fields and response ordering.

The batching logic and parseLatestHeightFromBatch are solid (height consistency checks are a nice touch), but two behavioural points are worth confirming:

  1. Per‑call JSON‑RPC errors:
    You currently only inspect Error for the eth_blockNumber response (ID 0). For eth_call entries (IDs ≥ 1), any EVM/JSON‑RPC errors are returned in the types.JSONRPCResponse slice but not acted on here. Ensure all callers of QueryERC20Balances explicitly inspect resp.Error for each balance entry before treating Result as valid.

  2. Response order vs. ID mapping:
    JSON‑RPC batch responses are not guaranteed to be in the same order as the requests. You already use ID == 0 when parsing eth_blockNumber, but the rest of the code only checks len(batchResponses) and then returns the slice as‑is. Confirm that downstream consumers either:

    • rely on ID to match responses to addresses, or
    • are comfortable with any ordering assumptions baked into the current RPC implementation.

If ordering matters at the API boundary, you might want to normalize by sorting or building a map[id]response before returning.
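
Normalizing by ID could be as simple as the sketch below, assuming the response type exposes an integer ID; the struct here is a stand-in for types.JSONRPCResponse.

package querier

import "encoding/json"

// jsonRPCResponse shows only the fields this sketch needs.
type jsonRPCResponse struct {
	ID     int             `json:"id"`
	Result json.RawMessage `json:"result"`
	Error  json.RawMessage `json:"error"`
}

// byID indexes batch responses by JSON-RPC ID so callers do not depend on
// the server preserving request order.
func byID(responses []jsonRPCResponse) map[int]jsonRPCResponse {
	out := make(map[int]jsonRPCResponse, len(responses))
	for _, resp := range responses {
		out[resp.ID] = resp
	}
	return out
}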

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 1f859b7 and abbfb00.

📒 Files selected for processing (1)
  • util/querier/evm.go (1 hunks)
🔇 Additional comments (1)
util/querier/evm.go (1)

24-66: ERC‑721 helpers and evmCall look correct and error‑aware.

ABI pack/unpack around evmCall is clean, errors are propagated, and invalid tokenIdStr is surfaced via NewInvalidValueError. The evmCall helper correctly decodes the hex payload and surfaces EVM call errors from QueryCallResponse. No blocking issues here.

Also applies to: 68-100

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
util/querier/querier.go (1)

21-29: Add GoDoc for Querier and polish QueryCallResponse comment.

Querier is exported but lacks a GoDoc comment, and QueryCallResponse’s comment is almost idiomatic but could be tightened. Adding proper docs will help callers and keep go doc output clean.

- type Querier struct {
+// Querier holds chain configuration and endpoint URLs used to perform queries.
+type Querier struct {
@@
-// QueryCallResponse represents the response from EVM call endpoint
+// QueryCallResponse represents the response from an EVM call endpoint.
 type QueryCallResponse struct {

Also applies to: 31-35, 48-58

🧹 Nitpick comments (2)
util/querier/querier.go (2)

37-47: Wrap JSON unmarshal errors with context for easier debugging.

extractResponse currently returns the raw json.Unmarshal error; wrapping it with context makes failures easier to trace without losing the underlying error.

 func extractResponse[T any](response []byte) (T, error) {
 	var t T
 	if err := json.Unmarshal(response, &t); err != nil {
-		return t, err
+		return t, fmt.Errorf("failed to unmarshal response: %w", err)
 	}
 	return t, nil
 }

60-125: Endpoint rotation/backoff logic looks sound; consider minor simplifications.

The loop correctly respects context cancellation, enforces maxRetriesPerURL per endpoint, rotates through all endpoints, and reports the terminal failure to Sentry; no correctness issues stand out. If you want to simplify later:

  • startEndpointIndex is always 0; you could hardcode that or derive a randomized start index and drop the separate variable.
  • loopSize / totalRetries are only used to gate Sentry reporting; alternatively, you could report in the "exhausted all endpoints" branch and drop the extra counters if you don’t need the exact attempt count.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between abbfb00 and ab5e484.

📒 Files selected for processing (2)
  • indexer/scraper/sync.go (0 hunks)
  • util/querier/querier.go (1 hunks)
💤 Files with no reviewable changes (1)
  • indexer/scraper/sync.go
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
types/vm.go (1)
  • VMType (3-3)
config/chain.go (1)
  • ChainConfig (10-18)
sentry_integration/sentry_integration.go (1)
  • CaptureCurrentHubException (9-11)
🔇 Additional comments (1)
util/querier/querier.go (1)

1-14: Package and imports look appropriate and consistent.

context, encoding/json, fmt, time, Sentry, and local packages are all used later in the file; no unused or missing imports and the package naming is clear.

- Added a totalDelay variable to track cumulative backoff delays during endpoint retries in the executeWithEndpointRotation function.
- Introduced a safeguard to return an error if the total delay exceeds 5 minutes, ensuring better control over execution time.
@earthfm1670

looks good

@songwongtp songwongtp changed the title [WIP] integrate querier and cache utilities across handlers feat: integrate querier and cache utilities across handlers Dec 11, 2025
@songwongtp
Collaborator

some of CodeRabbit's comments are not resolved yet and may be worth looking at too

- Replaced hardcoded empty address strings with constants from the types package for better maintainability.
- Introduced a new decrementPrepareCount method to encapsulate the logic for safely decrementing the prepareCount.
- Removed unused files related to internaltx and richlist utilities to clean up the codebase.
@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
indexer/extension/richlist/utils/utils.go (1)

267-300: Consider adding context cancellation checks in the account loop.

The loop processes potentially many accounts without checking for context cancellation. If the context is cancelled (e.g., due to timeout or shutdown), the loop continues until completion, wasting resources.

Apply this diff to add a cancellation check:

 	for idx, address := range accounts {
+		// Check for context cancellation
+		if err := ctx.Err(); err != nil {
+			return err
+		}
+
 		if idx%100 == 0 {
 			progress := fmt.Sprintf("%d/%d", idx, len(accounts))
 			logger.Info("fetching and accumulating balances by denomination", slog.Int64("height", height), slog.String("progress", progress))
 		}
indexer/collector/evm-nft/prepare.go (1)

17-38: Consider using errgroup.WithContext so goroutines respect ctx cancellation

Right now ctx is passed into GetCollectionName / GetTokenUri, but the errgroup itself is not context-aware. Using errgroup.WithContext would ensure that:

  • all in-flight lookups are canceled promptly when ctx is canceled, and
  • other goroutines stop work once one returns an error.

A minimal change would look like:

-	var g errgroup.Group
+	g, ctx := errgroup.WithContext(ctx)

Call sites (g.Go, g.Wait) can stay the same, now using the derived ctx.

Also applies to: 78-80
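
For completeness, a self-contained sketch of the errgroup.WithContext pattern being suggested is below; the lookup callback stands in for the real GetCollectionName / GetTokenUri calls and is not the submodule's actual API.

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// fetchAll runs one lookup per address; the first error cancels the derived
// context so the remaining goroutines can stop early.
func fetchAll(ctx context.Context, addrs []string, lookup func(context.Context, string) (string, error)) (map[string]string, error) {
	g, ctx := errgroup.WithContext(ctx)

	results := make([]string, len(addrs))
	for i, addr := range addrs {
		i, addr := i, addr // capture loop variables (needed before Go 1.22)
		g.Go(func() error {
			name, err := lookup(ctx, addr)
			if err != nil {
				return err
			}
			results[i] = name
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}

	out := make(map[string]string, len(addrs))
	for i, addr := range addrs {
		out[addr] = results[i]
	}
	return out, nil
}

func main() {
	lookup := func(_ context.Context, addr string) (string, error) { return "name-of-" + addr, nil }
	names, err := fetchAll(context.Background(), []string{"0xabc", "0xdef"}, lookup)
	fmt.Println(names, err)
}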

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 32d8773 and 3655414.

📒 Files selected for processing (8)
  • indexer/collector/evm-nft/collect.go (6 hunks)
  • indexer/collector/evm-nft/prepare.go (5 hunks)
  • indexer/collector/evm-nft/query.go (0 hunks)
  • indexer/collector/evm-nft/util.go (1 hunks)
  • indexer/extension/richlist/utils/rest.go (0 hunks)
  • indexer/extension/richlist/utils/types.go (0 hunks)
  • indexer/extension/richlist/utils/utils.go (13 hunks)
  • indexer/indexer.go (7 hunks)
💤 Files with no reviewable changes (3)
  • indexer/collector/evm-nft/query.go
  • indexer/extension/richlist/utils/rest.go
  • indexer/extension/richlist/utils/types.go
🧰 Additional context used
🧬 Code graph analysis (3)
indexer/collector/evm-nft/prepare.go (2)
indexer/collector/evm-nft/submodule.go (1)
  • EvmNftSubmodule (20-27)
indexer/types/types.go (1)
  • ScrapedBlock (17-28)
indexer/indexer.go (2)
util/querier/querier.go (2)
  • Querier (21-29)
  • NewQuerier (48-58)
metrics/error_tracker.go (1)
  • TrackError (9-11)
indexer/collector/evm-nft/collect.go (1)
util/cache/cache.go (3)
  • NftKey (16-19)
  • GetOrCreateAccountIds (152-177)
  • GetOrCreateNftIds (268-293)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests
  • GitHub Check: Run linter
  • GitHub Check: Analyze (go)
🔇 Additional comments (17)
indexer/indexer.go (4)

45-45: LGTM - Querier integration follows established patterns.

The querier field and initialization align with the NewQuerier constructor shown in the relevant snippets and supports the PR's goal of centralizing data retrieval.

Also applies to: 59-59


207-253: Well-structured retry logic with proper cancellation handling.

The retry mechanism correctly handles:

  • Context cancellation at multiple checkpoints (before attempt, during wait)
  • Indefinite retry for DeadlineExceeded with configurable cooling period
  • Preservation of panic behavior for non-timeout errors

The separation of concerns between cancellation paths (using decrementPrepareCount()) and success path (inline decrement) prevents double-counting.
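
Taken together, the retry shape described above can be summarized in a self-contained sketch; the Collector interface, the coolingPeriod field, and the omission of storing the prepared block are simplifications, so this illustrates the behavior rather than reproducing indexer/indexer.go.

package indexer

import (
	"context"
	"errors"
	"sync"
	"time"
)

// Collector is a stand-in for the real collector interface.
type Collector interface {
	Prepare(ctx context.Context, height int64) error
}

// Indexer carries only the fields this sketch needs.
type Indexer struct {
	mtx           sync.Mutex
	prepareCount  int
	collector     Collector
	coolingPeriod time.Duration
}

// decrementPrepareCount safely decrements the in-flight counter.
func (i *Indexer) decrementPrepareCount() {
	i.mtx.Lock()
	i.prepareCount--
	i.mtx.Unlock()
}

// prepareWithRetry retries DeadlineExceeded errors after a cancellable
// cooling period, returns on cancellation, and keeps panic behavior for
// other errors.
func (i *Indexer) prepareWithRetry(ctx context.Context, height int64) {
	for {
		if ctx.Err() != nil { // checkpoint before each attempt
			i.decrementPrepareCount()
			return
		}

		err := i.collector.Prepare(ctx, height)
		switch {
		case err == nil:
			i.decrementPrepareCount() // success path
			return
		case errors.Is(err, context.Canceled):
			i.decrementPrepareCount()
			return
		case !errors.Is(err, context.DeadlineExceeded):
			panic(err) // non-timeout errors keep the original panic behavior
		}

		select { // cancellable cooling period before retrying the timeout
		case <-ctx.Done():
			i.decrementPrepareCount()
			return
		case <-time.After(i.coolingPeriod):
		}
	}
}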


255-259: LGTM - success path correctly updates shared state under lock.


265-270: LGTM - clean helper for thread-safe counter decrement.

indexer/extension/richlist/utils/utils.go (8)

21-22: LGTM! Clean import additions.

The new imports for cache and querier support the refactored architecture with proper dependency injection and context propagation.


87-87: LGTM! Context and querier propagation improves cancellation support.

The updated signature properly supports context-aware operations and dependency injection.


127-131: Verify the silent error handling aligns with requirements.

When GetEvmContractByDenom fails, the code continues to the next denom without logging or accumulating the error. While this prevents blocking on individual lookup failures, it may silently skip valid denoms during transient network issues.

Consider whether this behavior is acceptable or if failed lookups should be retried, logged at a higher level, or accumulated for reporting.
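
If logging the skipped denoms is the preferred middle ground, something along these lines would surface failures without blocking the loop; the interface below is shaped around GetEvmContractByDenom as it appears here, and the real signature may differ.

package richlistutils

import (
	"context"
	"log/slog"
)

// denomQuerier captures just the lookup this sketch needs; the real
// *querier.Querier would satisfy it if its method has this shape.
type denomQuerier interface {
	GetEvmContractByDenom(ctx context.Context, denom string) (string, error)
}

// resolveEvmContracts looks up the EVM contract for each denom and logs,
// rather than silently skips, any lookup that fails.
func resolveEvmContracts(ctx context.Context, q denomQuerier, logger *slog.Logger, denoms []string) map[string]string {
	contracts := make(map[string]string, len(denoms))
	for _, denom := range denoms {
		addr, err := q.GetEvmContractByDenom(ctx, denom)
		if err != nil {
			logger.Warn("skipping denom: contract lookup failed",
				slog.String("denom", denom), slog.Any("error", err))
			continue
		}
		contracts[denom] = addr
	}
	return contracts
}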


172-172: LGTM! Centralizing constants improves maintainability.

Moving EVM-related constants to the types package ensures consistency across the codebase.

Also applies to: 188-188, 198-198


212-212: LGTM! Proper context and querier propagation.

The signature update and downstream call correctly propagate context and querier dependencies.

Also applies to: 230-230


253-260: LGTM! Context-aware balance fetching improves cancellation support.

The signature update and GetAllBalances call properly utilize the context for timeout and cancellation handling.

Also applies to: 279-279


324-324: LGTM! Proper context and querier propagation throughout initialization.

The signature update and downstream calls correctly propagate context and querier dependencies, enabling proper cancellation and dependency injection.

Also applies to: 327-327, 339-339


357-357: LGTM! Cache delegation aligns with the refactoring goals.

Moving the implementation to the cache package improves modularity and aligns with the PR's objective to streamline cache initialization.

indexer/collector/evm-nft/util.go (1)

17-19: Standardizing NFT log filtering on types.EvmTransferTopic looks good

The stricter 4-topic + empty-data check keeps this helper focused on standard EVM Transfer logs and removes local duplication of the topic constant; behavior is clear and consistent with shared types.

indexer/collector/evm-nft/prepare.go (1)

122-131: Centralizing mint/burn checks on types.EvmEmptyAddress is correct

Using types.EvmEmptyAddress for the from/to comparisons keeps mint (from == empty, to != empty) and burn (from != empty, to == empty) detection consistent with the rest of the codebase, without changing the underlying logic.
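
Together with the stricter log filter noted for util.go above, the intent can be summarized in a short sketch. The constant values here are the standard ones (the Transfer topic is keccak256 of Transfer(address,address,uint256), the empty address is 32 zero bytes), but the exact formatting in the types package and the empty-data representation are assumptions.

package evmnft

import "strings"

// Stand-ins for types.EvmTransferTopic and types.EvmEmptyAddress.
const (
	evmTransferTopic = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f413c04816028ffdc8e6115b6f"
	evmEmptyAddress  = "0x0000000000000000000000000000000000000000000000000000000000000000"
)

// EvmLog is a pared-down log shape for this sketch.
type EvmLog struct {
	Topics []string
	Data   string
}

// isNftTransferLog mirrors the stricter check: a standard ERC-721 Transfer
// log has exactly four topics (signature, from, to, tokenId) and no data.
func isNftTransferLog(l EvmLog) bool {
	return len(l.Topics) == 4 &&
		strings.EqualFold(l.Topics[0], evmTransferTopic) &&
		(l.Data == "" || l.Data == "0x")
}

// classify labels a Transfer log as mint, burn, or transfer using the
// zero-address convention from the from/to comparisons above.
func classify(l EvmLog) string {
	from, to := l.Topics[1], l.Topics[2]
	switch {
	case from == evmEmptyAddress && to != evmEmptyAddress:
		return "mint"
	case from != evmEmptyAddress && to == evmEmptyAddress:
		return "burn"
	default:
		return "transfer"
	}
}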

indexer/collector/evm-nft/collect.go (3)

20-21: Migration of mint/transfer/burn maps to cache.NftKey looks consistent

Switching mintMap/transferMap/burnMap to use cache.NftKey and constructing nftKey with {CollectionAddr, TokenId} preserves the previous behavior while aligning with the cache package’s key type. The mint/transfer/burn switch logic remains unchanged apart from using the shared types.EvmEmptyAddress constant.

Also applies to: 34-37, 71-91


108-151: Account ID resolution via cache.GetOrCreateAccountIds is wired correctly

mintedCollections groups NFTs by collection, and allAddresses aggregates owners (mints/transfers) plus collection creators before calling cache.GetOrCreateAccountIds. That ensures all subsequent uses (creatorId, minted owners, transfer owners) resolve against a single, cache-backed ID map, matching the prior behavior while leveraging the new cache helper.


283-292: Tx–NFT edge creation now correctly uses cache-backed NFT IDs

Building keys as []cache.NftKey from nftTxMap and resolving them via cache.GetOrCreateNftIds is consistent with the new cache-based ID flow and keeps the later nftIdMap[key] lookup straightforward. This maintains the existing edge-creation behavior while centralizing NFT ID handling in the cache layer.
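
As a rough illustration of this flow, the two batch resolutions might look like the sketch below; the helper signatures are assumptions, and only the NftKey field names (CollectionAddr, TokenId) come from the snippets above.

package evmnftcollect

// NftKey mirrors cache.NftKey.
type NftKey struct {
	CollectionAddr string
	TokenId        string
}

// idResolver stands in for the util/cache helpers used by collect.go.
type idResolver interface {
	GetOrCreateAccountIds(addrs []string) (map[string]int64, error)
	GetOrCreateNftIds(keys []NftKey) (map[NftKey]int64, error)
}

// resolveIds gathers every owner/creator address and every NFT key touched
// by a block, then resolves them in two batch calls so later lookups are
// plain map accesses.
func resolveIds(c idResolver, owners, creators []string, nftTxMap map[NftKey][]string) (map[string]int64, map[NftKey]int64, error) {
	allAddresses := append(append([]string{}, owners...), creators...)
	accountIds, err := c.GetOrCreateAccountIds(allAddresses)
	if err != nil {
		return nil, nil, err
	}

	keys := make([]NftKey, 0, len(nftTxMap))
	for key := range nftTxMap {
		keys = append(keys, key)
	}
	nftIds, err := c.GetOrCreateNftIds(keys)
	if err != nil {
		return nil, nil, err
	}
	return accountIds, nftIds, nil
}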

- Introduced EvmEmptyAddress constant for the zero address used in EVM transfers.
- Added EvmTransferTopic constant representing the keccak256 hash of the Transfer event signature for improved clarity and maintainability.
Copy link

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
types/evm.go (1)

11-18: Consider adding godoc comments for exported types.

Both QueryEvmTxsResponse and EvmContractByDenomResponse are exported types but lack godoc comments (except for the inline comment on line 15). Adding godoc comments would improve documentation and discoverability.

Example:

+// QueryEvmTxsResponse wraps a list of EVM transactions from the query API.
 type QueryEvmTxsResponse struct {
 	Result []EvmTx `json:"result"`
 }
 
-// EvmContractByDenomResponse represents the response from /minievm/evm/v1/contracts/by_denom
+// EvmContractByDenomResponse represents the response from /minievm/evm/v1/contracts/by_denom.
 type EvmContractByDenomResponse struct {
 	Address string `json:"address"`
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 3655414 and f38ee79.

📒 Files selected for processing (1)
  • types/evm.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
types/evm.go (1)
types/tx.go (1)
  • EvmTx (26-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run linter
  • GitHub Check: Run tests
  • GitHub Check: Analyze (go)
🔇 Additional comments (2)
types/evm.go (2)

7-8: LGTM!

The EvmTransferTopic constant correctly represents the keccak256 hash of the Transfer event signature, which is standard for ERC20/ERC721 token transfers.


5-6: The 32-byte format of EvmEmptyAddress is correct. The code uses it to compare against log.Topics[1] and log.Topics[2] (indexed Transfer event parameters), which are stored as 32-byte values in EVM logs. No changes needed.

- Updated the GetAccountCache function to normalize account addresses to Bech32 format before retrieval (a sketch of this normalization follows below).
- Removed unused Validator and ConsensusPubkey structs to streamline the codebase.
- Eliminated the acquisition and release of the Fiber client in the Run and wait methods to streamline the code.
- This change reduces unnecessary resource management and simplifies the indexer logic.
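
On the Bech32 normalization mentioned in the first commit above, one plausible shape using the Cosmos SDK address helpers is sketched here; whether GetAccountCache also accepts 0x hex input, and how it canonicalizes casing, are assumptions.

package cache

import (
	"strings"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// normalizeToBech32 turns whatever form the caller passes in (canonical
// bech32, oddly cased bech32, or a 0x hex address) into the canonical
// bech32 string used as the cache key. The bech32 prefix comes from the
// SDK's global config.
func normalizeToBech32(addr string) (string, error) {
	if strings.HasPrefix(strings.ToLower(addr), "0x") {
		acc, err := sdk.AccAddressFromHexUnsafe(addr[2:])
		if err != nil {
			return "", err
		}
		return acc.String(), nil
	}
	acc, err := sdk.AccAddressFromBech32(strings.ToLower(addr))
	if err != nil {
		return "", err
	}
	return acc.String(), nil
}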
Copy link

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
indexer/indexer.go (1)

61-65: wait() ignores context cancellation; Run can hang on shutdown or startup failure

Run blocks in i.wait() before starting goroutines, and wait() loops forever on RPC errors:

  • It calls i.querier.GetLatestHeight(i.ctx) inside a for loop.
  • On any error (including context.Canceled / DeadlineExceeded) it just logs and time.Sleep(5 * time.Second).
  • Sleeps are not cancelable and there is no ctx.Err() check.

If the passed ctx is canceled while waiting for the chain to become ready (or while the RPC endpoint is down), wait() will keep retrying and sleeping, and Run will not return, defeating graceful shutdown.

Consider making wait() context-aware and exiting when i.ctx is done, and making the backoff sleeps cancelable:

 func (i *Indexer) wait() {
 	for {
-		chainHeight, err := i.querier.GetLatestHeight(i.ctx)
-		if err != nil {
-			i.logger.Error("failed to get chain height", slog.Any("error", err))
-			time.Sleep(5 * time.Second)
-			continue
-		}
-		if chainHeight > types.MinChainHeightToStart {
-			break
-		}
-		time.Sleep(types.ChainCheckInterval)
+		// Fast‑path: respect cancellation before doing any work.
+		if err := i.ctx.Err(); err != nil {
+			i.logger.Info("wait() context done, exiting", slog.Any("error", err))
+			return
+		}
+
+		chainHeight, err := i.querier.GetLatestHeight(i.ctx)
+		if err != nil {
+			// Treat context cancellation specially: exit instead of retrying forever.
+			if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+				i.logger.Info("wait() cancelled while getting chain height", slog.Any("error", err))
+				return
+			}
+
+			i.logger.Error("failed to get chain height", slog.Any("error", err))
+			// Backoff sleep that can be interrupted by context cancel.
+			select {
+			case <-i.ctx.Done():
+				i.logger.Info("wait() context done during backoff, exiting", slog.Any("error", i.ctx.Err()))
+				return
+			case <-time.After(5 * time.Second):
+			}
+			continue
+		}
+
+		if chainHeight > types.MinChainHeightToStart {
+			break
+		}
+
+		// Periodic poll, also cancelable via context.
+		select {
+		case <-i.ctx.Done():
+			i.logger.Info("wait() context done while waiting for min height", slog.Any("error", i.ctx.Err()))
+			return
+		case <-time.After(types.ChainCheckInterval):
+		}
 	}
 }

This keeps the behavior the same in the healthy case but lets Run exit promptly when its context is canceled or when upstream height queries are being stopped.

Also applies to: 77-81, 149-162

🧹 Nitpick comments (1)
indexer/indexer.go (1)

172-256: Clarify Prepare’s context behavior and tighten prepareCount handling

The new retry loop in prepare() looks reasonable, but there are a couple of things worth tightening:

  1. Reliance on collector.Prepare honoring ctx

    The retry loop stops only when:

    • i.ctx.Done() is observed at the top of the loop or during the cooldown select, or
    • collector.Prepare returns context.Canceled.

    If collector.Prepare ever blocks on I/O without honoring the passed context, the goroutine can outlive shutdown even though Run has canceled i.ctx. It would be good to double‑check that collector.Prepare consistently propagates and checks the ctx it receives, especially around network/database calls.

  2. Centralizing prepareCount decrements / guarding underflow (optional)

    • On cancellation paths you use decrementPrepareCount().
    • On the success path you do i.prepareCount-- directly under the mutex.

    To make this counter safer against future changes, you could (optionally) route all decrements through the helper and add a defensive guard:

 // success path
-	i.mtx.Lock()
-	i.blockMap[b.Height] = b
-	i.prepareCount--
-	i.mtx.Unlock()
+	i.mtx.Lock()
+	i.blockMap[b.Height] = b
+	i.mtx.Unlock()
+	i.decrementPrepareCount()

    And in the helper:

 // decrementPrepareCount safely decrements the prepareCount counter
 func (i *Indexer) decrementPrepareCount() {
 	i.mtx.Lock()
-	i.prepareCount--
+	if i.prepareCount > 0 {
+		i.prepareCount--
+	} else {
+		i.logger.Warn("prepareCount underflow detected in decrementPrepareCount",
+			slog.Int("prepareCount", i.prepareCount))
+	}
 	i.mtx.Unlock()
 }

This keeps the inflight‑count accounting in one place and will surface any future misuse via logs instead of silently drifting negative.  




Also applies to: 258-263

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between ef91f27d8d8f0016283121b4f978ff41a0e2ce50 and d2c1a1a3fb8f90c18d547cf24ff53003cb8eabae.

📒 Files selected for processing (1)
  • indexer/indexer.go (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
indexer/indexer.go (2)
util/querier/querier.go (2)
  • Querier (21-29)
  • NewQuerier (48-58)
metrics/error_tracker.go (1)
  • TrackError (9-11)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests
  • GitHub Check: Run linter
  • GitHub Check: Analyze (go)

@Benzbeeb Benzbeeb requested a review from songwongtp December 15, 2025 06:47
@songwongtp songwongtp merged commit 35159da into main Jan 5, 2026
6 checks passed
@songwongtp songwongtp deleted the multiple-query branch January 5, 2026 08:13
Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants