feat: integrate querier and cache utilities across handlers #110
- Updated BlockHandler and RichListHandler to utilize the new querier for improved data retrieval.
- Refactored NFT handling to replace util functions with cache methods for consistency.
- Enhanced context handling in various modules to support better concurrency and error management.
- Removed deprecated query functions and streamlined cache initialization across the application.
Walkthrough

Introduces a reusable, context-aware Querier and splits util into util/cache and util/querier; threads context and the querier through handlers, indexer, collectors, and extensions; replaces many local REST/JSON-RPC helpers with Querier calls; converts single-URL config fields to slices; and adds cache accessors and new shared types.
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor Client
participant API as API Handler
participant Q as Querier
participant C as Cache
participant EP as RPC/REST Endpoints
Client->>API: Request (GetBlocks / GetBlockByHeight)
API->>Q: Request data (ctx, params)
Q->>C: GetValidatorCache(addr)
alt validator cached
C-->>Q: validator
else not cached
Q->>EP: HTTP/JSON-RPC request (rotating endpoints, retries)
EP-->>Q: response
Q->>C: SetValidatorCache(validator)
end
Q-->>API: enriched block(s)
API-->>Client: JSON response
```

```mermaid
sequenceDiagram
participant Indexer
participant Collector
participant SubA as BlockSubmodule
participant SubB as TxSubmodule
participant SubC as NFTSubmodule
participant Q as Querier
Indexer->>Collector: Prepare(ctx, scrapedBlock)
Collector->>Collector: errgroup.WithContext(ctx)
par prepare submodules
Collector->>SubA: Prepare(gCtx, block)
Collector->>SubB: Prepare(gCtx, block)
Collector->>SubC: Prepare(gCtx, block)
and
SubA->>Q: GetCollectionData(gCtx,...)
SubB->>Q: GetCosmosTxs(gCtx,...)
SubC->>Q: GetEvmTxs(gCtx,...)
end
Collector->>Collector: wait for all
Collector-->>Indexer: result
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
util/querier/query.go (1)
259-275: Rate limit retries are also unbounded.

Similar to timeout handling, 429 responses trigger infinite retries. While rate limiting is transient, a misconfigured or permanently rate-limited endpoint could cause indefinite blocking.
Consider applying the same bounded retry logic suggested for timeouts.
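A minimal sketch of what that bound could look like; maxRateLimitRetries, the fn callback shape, and the backoff constants are illustrative assumptions, not the project's actual executeWithEndpointRotation plumbing:

```go
package querier

import (
	"context"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

const maxRateLimitRetries = 5 // illustrative cap, not a project constant

// doWithRateLimitBackoff retries fn while it reports HTTP 429, up to a
// fixed number of attempts, sleeping with jittered exponential backoff
// between attempts and honoring context cancellation.
func doWithRateLimitBackoff(ctx context.Context, fn func() (*http.Response, error)) (*http.Response, error) {
	var lastErr error
	for attempt := 0; attempt < maxRateLimitRetries; attempt++ {
		resp, err := fn()
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		lastErr = fmt.Errorf("rate limited (attempt %d)", attempt+1)

		// Exponential backoff with a little jitter: ~100ms, 200ms, 400ms, ...
		backoff := time.Duration(100<<attempt)*time.Millisecond +
			time.Duration(rand.Intn(50))*time.Millisecond
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
		}
	}
	return nil, fmt.Errorf("giving up after %d rate-limit retries: %w", maxRateLimitRetries, lastErr)
}
```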
indexer/extension/internaltx/extension.go (1)
310-325: Remove unused fiber client.

Similar to query.go, the fiber client is acquired (lines 311-312) but never used since i.querier.TraceCallByBlock now handles the HTTP calls internally. Apply this diff to remove the dead code:

```diff
 func (i *InternalTxExtension) scrapeHeight(ctx context.Context, height int64) (*WorkItem, error) {
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
-
 	// Scrape internal transaction data
 	callTraceRes, err := i.querier.TraceCallByBlock(ctx, height)
```

After removing both usages in this file, verify if the fiber/v2 import (line 11) is still needed by other code in this file.

indexer/collector/wasm-nft/submodule.go (1)
29-37: Critical: querier field is not initialized in the constructor.

The querier field was added to WasmNftSubmodule but is not initialized in New(). The prepare.go file calls sub.querier.GetCollectionData(ctx, ...), which will cause a nil pointer dereference. Both evm-nft and move-nft submodules initialize querier in their constructors using the same pattern.

```diff
 func New(logger *slog.Logger, cfg *config.Config) *WasmNftSubmodule {
 	cacheSize := cfg.GetCacheSize()
 	return &WasmNftSubmodule{
 		logger:    logger.With("submodule", SubmoduleName),
 		cfg:       cfg,
 		cache:     make(map[int64]CacheData),
 		blacklist: cache.New[string, interface{}](cacheSize),
+		querier:   querier.NewQuerier(cfg.GetChainConfig()),
 	}
 }
```
🧹 Nitpick comments (35)
patcher/v1_0_2/patch.go (1)
31-41: WasmVM patch path is now a silent no-op; consider making intent explicit

For types.WasmVM, Patch no longer invokes any patch logic and just falls through to the final return nil. If Wasm NFT patching is still required anywhere, this will silently skip it while reporting success; if it's intentionally deprecated/moved, the current code doesn't communicate that. I'd suggest either restoring explicit behavior or at least logging the decision so operators aren't surprised:

```diff
 func Patch(tx *gorm.DB, cfg *config.Config, logger *slog.Logger) error {
 	switch cfg.GetVmType() {
 	case types.EVM:
 		return nil
 	case types.WasmVM:
-		// return PatchWasmNFT(tx, cfg, logger)
+		logger.Info("Skipping v1_0_2 WasmVM NFT patch; logic handled elsewhere or deprecated")
+		return nil
 	case types.MoveVM:
 		return PatchMoveNFT(tx, cfg, logger)
 	}
 	return nil
 }
```

If Wasm patching is still needed, instead re-enable the appropriate patch function or return a clear error for unsupported VM types.
types/move.go (1)
3-10: Consider extracting the anonymous struct for reusability.

The nested anonymous struct works, but if Resource needs to be used independently elsewhere, consider extracting it as a named type:

```diff
+type MoveResource struct {
+	Address      string `json:"address"`
+	StructTag    string `json:"struct_tag"`
+	MoveResource string `json:"move_resource"`
+	RawBytes     string `json:"raw_bytes"`
+}
+
 type QueryMoveResourceResponse struct {
-	Resource struct {
-		Address      string `json:"address"`
-		StructTag    string `json:"struct_tag"`
-		MoveResource string `json:"move_resource"`
-		RawBytes     string `json:"raw_bytes"`
-	} `json:"resource"`
+	Resource MoveResource `json:"resource"`
 }
```

config/chain.go (1)
30-39: Inconsistent error types between RPC and REST URL validation.

RPC URL validation uses NewValidationError for invalid URL format (line 35), while REST URL validation uses NewInvalidValueError for the same scenario (line 50). Consider aligning them for consistency:

```diff
 for idx, rpcUrl := range cc.RpcUrls {
 	if len(rpcUrl) == 0 {
 		return types.NewValidationError("RPC_URL", fmt.Sprintf("URL at index %d is empty", idx))
 	}
 	if u, err := url.Parse(rpcUrl); err != nil {
-		return types.NewValidationError("RPC_URL", fmt.Sprintf("invalid URL format at index %d: %s", idx, rpcUrl))
+		return types.NewInvalidValueError("RPC_URL", rpcUrl, fmt.Sprintf("invalid URL at index %d: %v", idx, err))
 	} else if u.Scheme != "http" && u.Scheme != "https" {
-		return types.NewValidationError("RPC_URL", fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
+		return types.NewInvalidValueError("RPC_URL", rpcUrl, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
 	}
 }
```

Also applies to: 45-53
util/querier/query.go (2)
31-36: Global mutable state may complicate testing and concurrency.

The package uses global variables (limiter, cfg, endpointCategorizer) initialized via InitUtil. This pattern:

- Makes unit testing harder (requires careful setup/teardown)
- Can cause race conditions if InitUtil is called multiple times

Consider refactoring toward a Querier struct that holds these dependencies, allowing for easier testing and avoiding global state; see the sketch below.
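A minimal sketch of that refactor, assuming the globals map onto fields roughly like this (the rate.Limiter and the categorizer type are stand-ins for whatever InitUtil currently builds):

```go
package querier

import "golang.org/x/time/rate"

// endpointCategorizer stands in for the existing categorizer type.
type endpointCategorizer struct{ /* ... */ }

// Querier owns its dependencies instead of reading package globals,
// so tests can construct independent instances without setup/teardown.
type Querier struct {
	RestUrls    []string
	limiter     *rate.Limiter
	categorizer *endpointCategorizer
}

func NewQuerier(restUrls []string, limiter *rate.Limiter, cat *endpointCategorizer) *Querier {
	return &Querier{
		RestUrls:    restUrls,
		limiter:     limiter,
		categorizer: cat,
	}
}
```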
123-136: Custom LCG is acceptable for jitter but consider documenting the trade-off.

The LCG implementation is thread-safe and sufficient for jitter. However, a brief comment noting that this is intentionally not cryptographically secure (and doesn't need to be for backoff jitter) would help future maintainers; something like the note below.
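One possible wording for such a comment (the type name is illustrative):

```go
// lcg is a small linear congruential generator used only to add jitter
// to retry backoff. It is intentionally NOT cryptographically secure,
// and doesn't need to be: predictable jitter is acceptable here.
```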
util/querier/move.go (1)
31-37: Consider simplifying the return statement.

The intermediate res variable is unnecessary here.

```diff
 func (q *Querier) GetMoveResource(ctx context.Context, addr string, structTag string, height int64) (resource *types.QueryMoveResourceResponse, err error) {
-	res, err := executeWithEndpointRotation[types.QueryMoveResourceResponse](ctx, q.RestUrls, fetchMoveResource(addr, structTag, height, queryTimeout))
-	if err != nil {
-		return nil, err
-	}
-	return res, nil
+	return executeWithEndpointRotation[types.QueryMoveResourceResponse](ctx, q.RestUrls, fetchMoveResource(addr, structTag, height, queryTimeout))
 }
```

types/jsonrpc.go (1)
24-28: JSONRPCErrorResponse appears redundant.

JSONRPCResponse already handles error responses via the Error field. Consider removing this type unless there's a specific use case requiring a distinct error-only response type.

util/querier/querier.go (1)
47-57: Consider adding constructor validation.

Validating that at least one endpoint URL is configured would provide earlier, clearer error messages than failing at query time.

```diff
 func NewQuerier(cfg *config.ChainConfig) *Querier {
+	if len(cfg.RestUrls) == 0 && len(cfg.RpcUrls) == 0 && len(cfg.JsonRpcUrls) == 0 {
+		// Consider logging a warning or returning an error
+	}
 	return &Querier{
 		ChainId: cfg.ChainId,
```

types/account.go (2)
20-42: CosmosAccount.UnmarshalJSON drops Type/Permissions and only populates Address

Because this custom UnmarshalJSON works off a map[string]any, it bypasses the normal struct decoding and only ever sets Address. Type and Permissions remain at their zero values even if present in the JSON, which is surprising given the exported fields and tags. Consider decoding into an alias + helper struct so all fields are populated while still supporting the base_account fallback for address:

```diff
 func (a *CosmosAccount) UnmarshalJSON(data []byte) error {
-	// Try to unmarshal as a map to check the structure
-	var raw map[string]any
-	if err := json.Unmarshal(data, &raw); err != nil {
-		return err
-	}
-
-	// Check if this is a non BaseAccount
-	if baseAccount, ok := raw["base_account"].(map[string]any); ok {
-		if address, ok := baseAccount["address"].(string); ok {
-			a.Address = address
-			return nil
-		}
-	}
-
-	// Otherwise, try direct address field (BaseAccount)
-	if address, ok := raw["address"].(string); ok {
-		a.Address = address
-		return nil
-	}
-
-	return fmt.Errorf("no address field found in account JSON")
+	type alias CosmosAccount
+	var aux struct {
+		alias
+		BaseAccount *struct {
+			Address string `json:"address"`
+		} `json:"base_account"`
+	}
+
+	if err := json.Unmarshal(data, &aux); err != nil {
+		return err
+	}
+
+	*a = CosmosAccount(aux.alias)
+
+	// Prefer base_account.address if present
+	if aux.BaseAccount != nil && aux.BaseAccount.Address != "" {
+		a.Address = aux.BaseAccount.Address
+	}
+
+	if a.Address == "" {
+		return fmt.Errorf("no address field found in account JSON")
+	}
+
+	return nil
 }
```

This keeps Type/Permissions populated while preserving the flexible address extraction logic.

44-48: Avoid duplicating RestTx / RestTxBody / Pagination definitions across packages

Pagination, RestTx, RestTxBody, and QueryRestTxsResponse are now defined in types, while very similar types still exist in indexer/collector/tx/types.go. This split makes it easy to accidentally mix the tx.RestTx and types.RestTx variants or let them drift structurally over time. Given CacheData.RestTxs and helpers like grepMsgTypesFromRestTx already use types.RestTx/types.RestTxBody, consider fully centralizing these contracts in the types package and updating indexer/collector/tx to reuse them (or removing the local duplicates there). This will simplify conversions and avoid subtle type mismatches.

Also applies to: 73-83, 94-97
api/handler/richlist/richlist.go (1)
37-43: Double-check the context type passed into GetEvmContractByDenom

The switch to h.querier.GetEvmContractByDenom(c.Context(), denom) looks good conceptually, but in Fiber v2, Ctx.Context() returns the underlying fasthttp context, not a context.Context. If GetEvmContractByDenom expects a context.Context, you likely want to pass something like c.UserContext() or a request-scoped context propagated from middleware instead. Please verify the expected parameter type on querier.GetEvmContractByDenom and adjust the call accordingly to avoid type mismatches or losing cancellation/timeouts.

cmd/api.go (2)
41-46: Querier and cache initialization order looks correct

Initializing the querier globals (querier.InitUtil(cfg)) and then dictionary caches (cache.InitializeCaches(cfg.GetCacheConfig())) before opening the DB and starting the API server is a sensible ordering and keeps shared state ready for handlers. Optionally, once the TODO is revisited, consider renaming InitUtil to something like InitQuerier or InitEndpoints to better reflect its purpose.

indexer/collector/tx/types.go (1)
9-12: Consolidate on a single RestTx type to avoid conversions and confusion

CacheData.RestTxs now uses []types.RestTx, but this file still defines its own RestTx/RestTxBody and QueryRestTxsResponse.Txs []RestTx. That means any code that unmarshals into QueryRestTxsResponse and then wants to populate CacheData.RestTxs either has to convert element-wise or will hit type mismatches. To simplify things, consider:

- Changing QueryRestTxsResponse.Txs to []types.RestTx, and
- Removing or inlining the local RestTx/RestTxBody definitions in favor of the shared ones in the types package.

That keeps all REST tx contracts in one place and avoids accidental mixing of structurally identical but distinct types.
Also applies to: 14-28
api/handler/richlist/handler.go (1)
11-18: RichListHandler querier wiring is fine; consider injecting for testability

Storing cfg and a *querier.Querier on RichListHandler and initializing it in NewRichListHandler cleanly supports the new querier-based calls in the richlist routes. If you anticipate unit-testing or swapping implementations (e.g., mocks, different endpoint policies), you might later want to inject a Querier interface from the caller instead of constructing it directly here, but that can be deferred.

Also applies to: 22-27
indexer/scraper/scrape.go (1)
61-75: Using GetRpcUrl accessor is consistent; just confirm endpoint selection semantics

Switching to:

```go
url := fmt.Sprintf("%s/block?height=%d", cfg.GetChainConfig().GetRpcUrl(), height)
...
url := fmt.Sprintf("%s/block_results?height=%d", cfg.GetChainConfig().GetRpcUrl(), height)
```

properly routes both calls through the new chain-config accessor. If GetRpcUrl() can return different endpoints on successive calls (e.g., round-robin or random load-balancing), there's a small risk of fetching /block and /block_results for the same height from different nodes that are briefly out of sync. If that's a concern, consider having scrapeBlock obtain a single base RPC URL once and pass it into both helpers so they always hit the same node for a given height; a sketch follows.

Also applies to: 77-91
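A minimal sketch of pinning one endpoint per height; the helper names and signatures here are hypothetical stand-ins for the existing fetch functions:

```go
package scraper

import "context"

// scrapeBlockRaw and scrapeBlockResults stand in for the existing
// /block and /block_results helpers.
func scrapeBlockRaw(ctx context.Context, baseURL string, height int64) error     { return nil }
func scrapeBlockResults(ctx context.Context, baseURL string, height int64) error { return nil }

// scrapeBlock resolves the RPC endpoint once, so both calls for a given
// height are guaranteed to hit the same node.
func scrapeBlock(ctx context.Context, getRpcUrl func() string, height int64) error {
	baseURL := getRpcUrl() // e.g. cfg.GetChainConfig().GetRpcUrl

	if err := scrapeBlockRaw(ctx, baseURL, height); err != nil {
		return err
	}
	return scrapeBlockResults(ctx, baseURL, height)
}
```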
indexer/collector/tx/util.go (1)
13-31: Aligning grepMsgTypesFromRestTx with types.RestTx/RestTxBody is good; small cleanups possible

Switching grepMsgTypesFromRestTx to accept types.RestTx and unmarshal into types.RestTxBody is consistent with the move toward shared tx types in the types package and avoids yet another local struct. Two optional nits you can consider later:

- Use map[string]struct{} instead of map[string]interface{} for msgTypeMap to avoid allocating empty interface values:

```diff
-	msgTypeMap := make(map[string]interface{})
+	msgTypeMap := make(map[string]struct{})
 	...
-	msgTypeMap[msgType] = nil
+	msgTypeMap[msgType] = struct{}{}
```

- Once indexer/collector/tx/types.go is fully migrated to types.RestTxBody, you can delete its duplicate RestTxBody definition to keep a single source of truth.

indexer/extension/internaltx/collect.go (1)
35-47: Consider reusing the same hex canonicalization helper used in the cache

You build hashes from trace.TxHash via util.HexToBytes, while GetOrCreateEvmTxHashIds internally keys by util.BytesToHex(hash). Later you derive hashHex with strings.ToLower(strings.TrimPrefix(trace.TxHash, "0x")) to index into hashIdMap. To avoid any subtle mismatches (e.g., if BytesToHex formatting ever changes), consider deriving the lookup key from the same canonical source as the cache, e.g. by reusing the hex produced from the bytes instead of recomputing from the original string.

Also applies to: 65-71
indexer/collector/wasm-nft/prepare.go (1)
3-5: Context-aware wasm-NFT prepare flow looks good; consider errgroup.WithContext

Updating prepare to accept ctx and passing it into sub.querier.GetCollectionData(ctx, addr, block.Height) is a solid step toward proper cancellation/timeouts. The concurrency pattern with errgroup.Group and a local mutex around colInfos is sound. Since you now have a context, you might optionally switch to errgroup.WithContext(ctx) so that if ctx is cancelled or one goroutine fails, the remaining goroutines can stop work early; see the sketch below.

Also applies to: 26-27, 39-58
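A minimal sketch of that errgroup.WithContext shape, with a generic fetch callback standing in for sub.querier.GetCollectionData:

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// fetchAll runs one fetch per address; gCtx is cancelled as soon as any
// goroutine returns an error, so the remaining fetches can stop early.
func fetchAll(ctx context.Context, addrs []string,
	fetch func(context.Context, string) (string, error)) (map[string]string, error) {
	g, gCtx := errgroup.WithContext(ctx)
	var mtx sync.Mutex
	results := make(map[string]string, len(addrs))

	for _, addr := range addrs {
		addr := addr // unneeded in Go 1.22+, kept for older toolchains
		g.Go(func() error {
			val, err := fetch(gCtx, addr)
			if err != nil {
				return fmt.Errorf("fetch %s: %w", addr, err)
			}
			mtx.Lock()
			results[addr] = val
			mtx.Unlock()
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}
```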
types/wasm.go (1)
1-13: New shared WASM query response types look fine; consider brief doc comments

Centralizing QueryContractInfoResponse and QueryWasmContractResponse in types is sensible and matches their reuse across modules. To improve discoverability, consider adding short doc comments on each exported type describing which query/endpoint they model.

indexer/collector/evm-nft/prepare.go (1)
25-27: Consider using errgroup.WithContext for cancellation propagation.

The plain errgroup.Group doesn't cancel sibling goroutines when one fails or when the parent context is cancelled. If GetCollectionName or GetTokenUri encounters an error or the parent ctx is cancelled, remaining goroutines will continue running unnecessarily.

```diff
-	var g errgroup.Group
+	g, gCtx := errgroup.WithContext(ctx)
 	var nameMtx sync.Mutex
 	var uriMtx sync.Mutex
```

Then pass gCtx to the querier calls instead of ctx:

```diff
-	name, err := sub.querier.GetCollectionName(ctx, addr, block.Height)
+	name, err := sub.querier.GetCollectionName(gCtx, addr, block.Height)
```

```diff
-	tokenUri, err := sub.querier.GetTokenUri(ctx, addr, id, block.Height)
+	tokenUri, err := sub.querier.GetTokenUri(gCtx, addr, id, block.Height)
```

util/querier/wasm.go (1)
46-58: Consider parallelizing the two independent queries for improved performance.
fetchContractInfo and fetchWasmContractInfo are independent queries. Executing them sequentially doubles the latency. Using errgroup could halve the query time.

```diff
 func (q *Querier) GetCollectionData(ctx context.Context, collectionAddr string, height int64) (name string, creator string, err error) {
-	contractInfo, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchContractInfo(collectionAddr, height, queryTimeout))
-	if err != nil {
-		return "", "", fmt.Errorf("failed to query contract info: %w", err)
-	}
-
-	wasmContractInfo, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchWasmContractInfo(collectionAddr, height, queryTimeout))
-	if err != nil {
-		return "", "", fmt.Errorf("failed to query wasm contract info: %w", err)
-	}
-
-	return contractInfo.Data.Name, wasmContractInfo.ContractInfo.Creator, nil
+	var contractInfo *types.QueryContractInfoResponse
+	var wasmContractInfo *types.QueryWasmContractResponse
+
+	g, gCtx := errgroup.WithContext(ctx)
+	g.Go(func() error {
+		var err error
+		contractInfo, err = executeWithEndpointRotation(gCtx, q.RestUrls, fetchContractInfo(collectionAddr, height, queryTimeout))
+		if err != nil {
+			return fmt.Errorf("failed to query contract info: %w", err)
+		}
+		return nil
+	})
+	g.Go(func() error {
+		var err error
+		wasmContractInfo, err = executeWithEndpointRotation(gCtx, q.RestUrls, fetchWasmContractInfo(collectionAddr, height, queryTimeout))
+		if err != nil {
+			return fmt.Errorf("failed to query wasm contract info: %w", err)
+		}
+		return nil
+	})
+
+	if err := g.Wait(); err != nil {
+		return "", "", err
+	}
+
+	return contractInfo.Data.Name, wasmContractInfo.ContractInfo.Creator, nil
 }
```

indexer/collector/move-nft/prepare.go (1)
24-41: LGTM with optional note.

The concurrent pattern using errgroup.WithContext is well-implemented. The local mutex properly protects the nftResources map from concurrent writes, and gCtx is correctly passed to the querier call for cancellation propagation. Note: in Go 1.22+, loop variable capture (addr := nftAddr) is no longer necessary, as the loop variable semantics changed; this can be removed if the project uses Go 1.22 or later.

api/handler/block/cache.go (1)
11-11: Parameter shadows package name.

The parameter querier shadows the imported package querier, which can lead to confusion and potential issues if you need to access the package within this function. Consider renaming the parameter:

```diff
-func getValidator(ctx context.Context, querier *querier.Querier, validatorAddr string) (*types.Validator, error) {
+func getValidator(ctx context.Context, q *querier.Querier, validatorAddr string) (*types.Validator, error) {
```

Then update the usage on line 16:

```diff
-	validator, err := querier.GetValidator(ctx, validatorAddr)
+	validator, err := q.GetValidator(ctx, validatorAddr)
```

indexer/collector/tx/prepare.go (1)
13-39: Consider using errgroup.WithContext for proper context cancellation.

The current implementation uses a plain errgroup.Group. While context is passed to the querier calls, using errgroup.WithContext(ctx) would ensure that when one goroutine fails, the derived context is cancelled, allowing the other goroutine to exit early if it checks the context.

```diff
-	var g errgroup.Group
+	g, ctx := errgroup.WithContext(ctx)
 	var restTxs []types.RestTx
 	var evmTxs []types.EvmTx

 	g.Go(func() error {
 		txs, err := sub.querier.GetCosmosTxs(ctx, block.Height, len(block.Txs))
```

This pattern is already used in similar code (e.g., indexer/collector/collector.go per the AI summary).

config/config.go (1)
305-306: Address the TODO: ValidatorCacheSize reuses generic CACHE_SIZE.

The TODO comment indicates this needs revisiting. ValidatorCacheSize currently inherits from CACHE_SIZE (default 1000) rather than having its own dedicated environment variable like other cache sizes (e.g., ACCOUNT_CACHE_SIZE, NFT_CACHE_SIZE). Consider adding a dedicated environment variable for consistency:

```diff
 viper.SetDefault("EVM_DENOM_CONTRACT_CACHE_SIZE", DefaultEvmDenomContractCacheSize)
+viper.SetDefault("VALIDATOR_CACHE_SIZE", DefaultCacheSize)
```

And in the config initialization:

```diff
-	// TODO: revisit
-	ValidatorCacheSize: viper.GetInt("CACHE_SIZE"),
+	ValidatorCacheSize: viper.GetInt("VALIDATOR_CACHE_SIZE"),
```

indexer/extension/richlist/evmrichlist/jsonrpc.go (1)
83-97: Parameter shadowing and ID mapping assumption.

- The parameter querier shadows the imported package name.
- The ID mapping idToAddr[idx+1] assumes the querier returns responses with IDs starting at 1. Verify this matches the implementation in querier.QueryERC20Balances.

Rename the parameter to avoid shadowing:

```diff
-func queryBatchBalances(ctx context.Context, querier *querier.Querier, erc20Address string, batch []richlistutils.AddressWithID, height int64) (map[richlistutils.AddressWithID]sdkmath.Int, error) {
+func queryBatchBalances(ctx context.Context, q *querier.Querier, erc20Address string, batch []richlistutils.AddressWithID, height int64) (map[richlistutils.AddressWithID]sdkmath.Int, error) {
```

Verify ID assignment in querier:

```bash
#!/bin/bash
# Check how QueryERC20Balances assigns request IDs
ast-grep --pattern $'func ($_ *Querier) QueryERC20Balances($$$) {
  $$$
}'
```

api/handler/block/types.go (1)
53-63: Parameter name shadows package import.

The parameter querier *querier.Querier shadows the imported package name querier, which can lead to confusion and prevents accessing other package-level functions within this scope. Consider renaming the parameter to avoid shadowing:

```diff
-func ToBlocksResponse(ctx context.Context, cbs []types.CollectedBlock, querier *querier.Querier) ([]Block, error) {
+func ToBlocksResponse(ctx context.Context, cbs []types.CollectedBlock, q *querier.Querier) ([]Block, error) {
 	blocks := make([]Block, 0, len(cbs))
 	for _, cb := range cbs {
-		block, err := ToBlockResponse(ctx, cb, querier)
+		block, err := ToBlockResponse(ctx, cb, q)
 		if err != nil {
 			return nil, err
 		}
```

indexer/extension/richlist/evmrichlist/run.go (1)
18-18: Variable name shadows package import.

Similar to the block handler, querier := querier.NewQuerier(...) shadows the imported package name.

```diff
-	querier := querier.NewQuerier(cfg.GetChainConfig())
+	q := querier.NewQuerier(cfg.GetChainConfig())
```

Then update usages on lines 22 and 53 to use q.

indexer/indexer.go (3)
154-169: Unused fiber client acquisition.

The fiber.AcquireClient() and fiber.ReleaseClient(client) calls appear to be unused since i.querier.GetLatestHeight(i.ctx) doesn't use the acquired client. This is likely leftover from the previous implementation.

```diff
 func (i *Indexer) wait() {
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
 	for {
 		chainHeight, err := i.querier.GetLatestHeight(i.ctx)
```

206-263: Well-implemented retry logic with proper cancellation handling.

The retry mechanism correctly:

- Checks context cancellation before each attempt
- Handles context.Canceled by stopping immediately
- Handles context.DeadlineExceeded with retries and cooling duration
- Properly decrements prepareCount in all exit paths
- Checks for cancellation during the cooling wait

One consideration: the indefinite retry on timeouts may mask persistent endpoint failures. If this becomes an operational concern, adding observability (e.g., retry count metrics) or a maximum retry threshold could help.
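For reference, the pattern described above boils down to roughly this shape (coolingDuration and the prepare callback are generic stand-ins, not the indexer's actual fields):

```go
package indexer

import (
	"context"
	"errors"
	"time"
)

// retryPrepare retries prepare on timeouts, stops immediately on
// cancellation, and stays cancellable while cooling down.
func retryPrepare(ctx context.Context, prepare func(context.Context) error, coolingDuration time.Duration) error {
	for {
		if err := ctx.Err(); err != nil {
			return err // already cancelled; don't even attempt
		}
		err := prepare(ctx)
		switch {
		case err == nil:
			return nil
		case errors.Is(err, context.Canceled):
			return err // shutdown requested: do not retry
		case errors.Is(err, context.DeadlineExceeded):
			// Transient timeout: wait out the cooling duration, but
			// abort promptly if the context is cancelled meanwhile.
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(coolingDuration):
			}
		default:
			return err // non-timeout errors are not retried
		}
	}
}
```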
80-86: Same unused fiber client issue.

Similar to wait(), the fiber client is acquired but not used since i.querier.GetLatestHeight(ctx) handles HTTP internally.

```diff
 	// get the current chain height for validation/clamping
-	client := fiber.AcquireClient()
-	defer fiber.ReleaseClient(client)
 	chainHeight, err := i.querier.GetLatestHeight(ctx)
```

types/internal_tx.go (1)
3-8: Consider adding JSON tags to InternalTxInfo for consistency.

Other structs in this file include JSON tags. If InternalTxInfo is serialized/deserialized (e.g., for caching or API responses), adding tags would be beneficial. If it's strictly internal, this is fine as-is.

util/querier/evm.go (1)
154-188: Consider thread-safety for cache operations.
GetEvmContractByDenom reads then writes to the cache without synchronization. If this method is called concurrently for the same denom, multiple goroutines may query the endpoint redundantly. This is a minor concern since the cache hit path is checked first, and redundant writes would produce the same value. Consider this if performance under concurrent load becomes an issue.

indexer/extension/richlist/utils/utils.go (1)
125-125: Redundant double ToLower call.

strings.ToLower is called twice on the same value.

```diff
-	denom := strings.ToLower(strings.ToLower(coin.Denom))
+	denom := strings.ToLower(coin.Denom)
```

util/querier/cosmos.go (1)
79-92: Consider extracting pagination workaround into a helper.

The same offset-fallback pagination logic is repeated in three places (fetchAllTxsWithPagination, GetAllBalances, FetchAllAccountsWithPagination). This could be centralized to reduce duplication and improve maintainability; one possible shape is sketched after this comment.

Also applies to: 238-251, 311-324
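One possible shape for that helper, using a generic page type; PageResult and the fetchPage signature are assumptions to be adapted to the real response structs:

```go
package querier

import "context"

// PageResult is a generic view over one page of a paginated response.
type PageResult[T any] struct {
	Items   []T
	NextKey []byte
	Total   int
}

// collectAllPages walks key-based pagination and falls back to
// offset-based paging when NextKey is empty but Total reports more
// items; the offset is simply the number of items collected so far.
func collectAllPages[T any](ctx context.Context,
	fetchPage func(ctx context.Context, useOffset bool, offset int, nextKey []byte) (PageResult[T], error)) ([]T, error) {
	var all []T
	var nextKey []byte
	useOffset := false

	for {
		page, err := fetchPage(ctx, useOffset, len(all), nextKey)
		if err != nil {
			return nil, err
		}
		all = append(all, page.Items...)

		if len(page.NextKey) > 0 {
			nextKey = page.NextKey
			continue
		}
		// Broken-API fallback: no NextKey, but more items reported.
		if page.Total > len(all) && len(page.Items) > 0 {
			useOffset = true
			continue
		}
		return all, nil
	}
}
```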
📒 Files selected for processing (77)
- api/handler/block/block.go (3 hunks)
- api/handler/block/cache.go (1 hunks)
- api/handler/block/handler.go (1 hunks)
- api/handler/block/types.go (3 hunks)
- api/handler/handler.go (1 hunks)
- api/handler/nft/tx.go (2 hunks)
- api/handler/richlist/handler.go (2 hunks)
- api/handler/richlist/richlist.go (1 hunks)
- api/handler/tx/edge_tables_test.go (1 hunks)
- cmd/api.go (2 hunks)
- cmd/indexer.go (2 hunks)
- config/chain.go (3 hunks)
- config/config.go (3 hunks)
- indexer/collector/block/submodule.go (2 hunks)
- indexer/collector/collector.go (2 hunks)
- indexer/collector/evm-nft/collect.go (6 hunks)
- indexer/collector/evm-nft/prepare.go (4 hunks)
- indexer/collector/evm-nft/query.go (0 hunks)
- indexer/collector/evm-nft/submodule.go (4 hunks)
- indexer/collector/move-nft/collect.go (2 hunks)
- indexer/collector/move-nft/prepare.go (2 hunks)
- indexer/collector/move-nft/query.go (0 hunks)
- indexer/collector/move-nft/submodule.go (1 hunks)
- indexer/collector/move-nft/types.go (0 hunks)
- indexer/collector/tx/collect.go (6 hunks)
- indexer/collector/tx/prepare.go (2 hunks)
- indexer/collector/tx/query.go (0 hunks)
- indexer/collector/tx/submodule.go (2 hunks)
- indexer/collector/tx/types.go (1 hunks)
- indexer/collector/tx/util.go (1 hunks)
- indexer/collector/wasm-nft/collect.go (7 hunks)
- indexer/collector/wasm-nft/prepare.go (3 hunks)
- indexer/collector/wasm-nft/query.go (0 hunks)
- indexer/collector/wasm-nft/submodule.go (4 hunks)
- indexer/collector/wasm-nft/types.go (0 hunks)
- indexer/extension/evmret/cleanup.go (2 hunks)
- indexer/extension/evmret/cleanup_test.go (2 hunks)
- indexer/extension/internaltx/address.go (1 hunks)
- indexer/extension/internaltx/address_test.go (4 hunks)
- indexer/extension/internaltx/collect.go (4 hunks)
- indexer/extension/internaltx/collect_test.go (9 hunks)
- indexer/extension/internaltx/extension.go (6 hunks)
- indexer/extension/internaltx/query.go (1 hunks)
- indexer/extension/internaltx/types.go (0 hunks)
- indexer/extension/internaltx/util.go (3 hunks)
- indexer/extension/richlist/evmrichlist/jsonrpc.go (3 hunks)
- indexer/extension/richlist/evmrichlist/run.go (2 hunks)
- indexer/extension/richlist/extension.go (3 hunks)
- indexer/extension/richlist/utils/db.go (3 hunks)
- indexer/extension/richlist/utils/rest.go (0 hunks)
- indexer/extension/richlist/utils/types.go (0 hunks)
- indexer/extension/richlist/utils/utils.go (10 hunks)
- indexer/indexer.go (6 hunks)
- indexer/scraper/scrape.go (2 hunks)
- indexer/types/types.go (2 hunks)
- indexer/util/query.go (0 hunks)
- patcher/v1_0_12/patch.go (1 hunks)
- patcher/v1_0_12/wasm-nft.go (1 hunks)
- patcher/v1_0_2/move-nft.go (3 hunks)
- patcher/v1_0_2/patch.go (1 hunks)
- patcher/v1_0_2/wasm-nft.go (1 hunks)
- types/account.go (1 hunks)
- types/cosmos.go (1 hunks)
- types/evm.go (1 hunks)
- types/internal_tx.go (1 hunks)
- types/jsonrpc.go (1 hunks)
- types/move.go (1 hunks)
- types/validators.go (1 hunks)
- types/wasm.go (1 hunks)
- util/cache/cache.go (13 hunks)
- util/common-handler/common/handler.go (4 hunks)
- util/querier/cosmos.go (1 hunks)
- util/querier/evm.go (1 hunks)
- util/querier/move.go (1 hunks)
- util/querier/querier.go (1 hunks)
- util/querier/query.go (4 hunks)
- util/querier/wasm.go (1 hunks)
💤 Files with no reviewable changes (10)
- indexer/collector/move-nft/types.go
- indexer/util/query.go
- indexer/collector/wasm-nft/types.go
- indexer/extension/internaltx/types.go
- indexer/collector/tx/query.go
- indexer/extension/richlist/utils/types.go
- indexer/collector/evm-nft/query.go
- indexer/collector/move-nft/query.go
- indexer/collector/wasm-nft/query.go
- indexer/extension/richlist/utils/rest.go
- Updated BlockHandler to utilize the existing querier instance for block responses, enhancing consistency.
- Removed unnecessary querier instantiation in internal transaction queries and improved context handling.
- Simplified EVM contract queries by reusing the querier instance, optimizing performance and reducing redundancy.
- Cleaned up unused constants and functions in the querier package for better maintainability.
- Introduced ValidatorCacheSize parameter in the cleanup_test.go file to enhance cache management during batch processing.
- This addition aims to improve performance and resource allocation in the EVM retention cleanup process.
Actionable comments posted: 2
♻️ Duplicate comments (1)
util/querier/cosmos.go (1)
29-41: Bound GetCosmosTxs retries to avoid a potential infinite loop.

GetCosmosTxs will loop forever if fetchAllTxsWithPagination keeps returning a length different from txCount without error (e.g. txCount is wrong or the API never matches it). There's also no guard for len(allTxs) > txCount. Consider bounding attempts and respecting ctx cancellation, and surfacing a clear error when expectations are not met. For example:

```diff
-func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
-	for {
-		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
-		if err != nil {
-			return txs, err
-		}
-
-		// If we get the expected number of transactions, return immediately
-		if len(allTxs) == txCount {
-			return allTxs, nil
-		}
-	}
-}
+func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) ([]types.RestTx, error) {
+	const maxAttempts = 5
+
+	var lastTxs []types.RestTx
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		if err := ctx.Err(); err != nil {
+			return nil, err
+		}
+
+		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
+		if err != nil {
+			return nil, err
+		}
+
+		if len(allTxs) == txCount {
+			return allTxs, nil
+		}
+		if len(allTxs) > txCount {
+			return nil, fmt.Errorf("height %d: expected %d txs, got %d", height, txCount, len(allTxs))
+		}
+
+		lastTxs = allTxs
+		time.Sleep(200 * time.Millisecond) // optional: small delay before retry
+	}
+
+	return lastTxs, fmt.Errorf(
+		"height %d: failed to fetch expected %d txs after %d attempts (last len=%d)",
+		height, txCount, maxAttempts, len(lastTxs),
+	)
+}
```

You can tune maxAttempts and the backoff as appropriate for your environment.
🧹 Nitpick comments (5)
indexer/extension/internaltx/query.go (1)
22-22: Consider tracking the TODO for endpoint rotation.

The TODO about rotating endpoints when semver validation fails is a reasonable fallback strategy. Consider creating an issue to track this enhancement if it's important for resilience.
Would you like me to open an issue to track this TODO for implementing endpoint rotation on semver incompatibility?
indexer/extension/richlist/evmrichlist/jsonrpc.go (3)
47-47: Variable shadows imported package name.

The variable querier on line 47 shadows the imported package querier from line 14. This can cause confusion and potential issues if the package needs to be referenced later in the function. Apply this diff to rename the variable:

```diff
-	querier := querier.NewQuerier(cfg.GetChainConfig())
+	q := querier.NewQuerier(cfg.GetChainConfig())
```

And update the usage on line 59:

```diff
-	batchBalances, err := queryBatchBalances(ctx, querier, erc20Address, batchData, height)
+	batchBalances, err := queryBatchBalances(ctx, q, erc20Address, batchData, height)
```
57-58: Address or clarify the TODO comment.

The TODO comment "revisit" is vague and doesn't explain what needs to be revisited. Consider either:

- Removing the TODO if the retry handling via utils.Post is confirmed sufficient
- Clarifying what specific aspect needs review (e.g., retry logic, error handling strategy)
94-97: Document the ID assignment contract with QueryERC20Balances.The 1-based ID mapping (
idx+1) assumes a specific ID assignment pattern fromQueryERC20Balances. This implicit contract between the two functions could break if the querier implementation changes.Consider adding a comment clarifying this dependency:
// Build a map from request ID to the corresponding AddressWithID + // Note: IDs are 1-based because QueryERC20Balances assigns ID 0 to eth_blockNumber + // and uses 1-indexed IDs for balance requests idToAddr := make(map[int]richlistutils.AddressWithID, len(batch)) for idx, addrWithID := range batch { idToAddr[idx+1] = addrWithID }util/querier/evm.go (1)
305-306: Clarify or address the TODO comment.

The TODO at line 305 suggests the error handling needs improvement but doesn't specify what's inappropriate about the current approach. Consider either:

- Removing the TODO if the current Post error handling is sufficient
- Adding detail about what specific handling is needed
- Opening an issue to track the improvement
Would you like me to help identify specific error-handling improvements or open an issue to track this?
📒 Files selected for processing (5)
- api/handler/block/block.go (2 hunks)
- indexer/extension/internaltx/query.go (1 hunks)
- indexer/extension/richlist/evmrichlist/jsonrpc.go (4 hunks)
- util/querier/cosmos.go (1 hunks)
- util/querier/evm.go (1 hunks)
🔇 Additional comments (7)
indexer/extension/internaltx/query.go (1)
16-34: LGTM! Clean refactoring to querier abstraction.

The function correctly delegates node info fetching to the querier, handles semver validation with proper "v" prefix normalization, and provides a clear error message. The previous issue about unused fiber client code has been addressed.
indexer/extension/richlist/evmrichlist/jsonrpc.go (2)
30-79: LGTM on the batch processing structure.

The parallel batch processing implementation is well-designed:
- Querier is correctly instantiated once before the loop (addressing the previous review feedback)
- errgroup with SetLimit provides controlled concurrency
- Mutex properly protects the shared balances map
- Error wrapping preserves batch context for debugging
94-124: The validation suggestion is unnecessary; the code already handles missing responses safely.

The fetchERC20Balances function already validates that the response count matches the expected count (lines 317-319 in evm.go), ensuring exactly len(addresses)+1 responses are returned. Additionally, queryBatchBalances catches all error cases and returns early; if any response has an error or fails to parse, the entire function returns an error. There is no scenario where an address can silently be missing from the result: the function either succeeds with all balances or fails explicitly. Likely an incorrect or invalid review comment.
api/handler/block/block.go (1)
46-46: LGTM! Consistent querier usage across handler methods.

Both GetBlocks and GetBlockByHeight now correctly use h.querier and propagate the request context via c.UserContext(). This addresses the previous review concern about inconsistent querier instantiation and aligns with the PR's goal of integrating the querier across handlers.

Also applies to: 86-86
util/querier/evm.go (2)
209-212: Past review concern has been addressed.

The error from extractResponse is now properly checked before accessing errResp.Error (line 210: if err == nil && errResp.Error != nil). This prevents potential issues with zero-valued structs when parsing fails.

171-171: Past review concern has been addressed.

The code now correctly uses q.RestUrls for the GET request to the REST endpoint, matching the pattern used by other REST-style calls like fetchEvmCall.
101-148: Height, validator, module-account, and node-info helpers look solid.The helpers for latest height, validator info, minter/burner module accounts, and node info all follow a consistent pattern: centralized endpoint rotation, clear unmarshalling, and good error propagation. The filtering for minter/burner permissions combined with
AccAddressFromStringand skipping invalid entries is also reasonable.No changes requested here.
Also applies to: 150-182, 331-350
- Removed the unused offset parameter from the fetchCosmosBalances function to streamline its signature.
- Updated the function calls to reflect this change, enhancing code clarity and maintainability.
- Set the pagination offset to a fixed value of "0" when the offset is not used, ensuring consistent behavior in balance queries.
- Removed the unused offset parameter from the fetchAllAccountsWithPagination function to enhance clarity.
- Set the pagination offset to a fixed value of "0" when not in use, ensuring consistent behavior in account queries.
- Improved formatting of function calls for better readability in the GetAllBalances method.
Actionable comments posted: 0
♻️ Duplicate comments (3)
util/querier/cosmos.go (3)
29-41: Bound retries in GetCosmosTxs to avoid infinite loop and endpoint hammering

As written, this loops forever if fetchAllTxsWithPagination keeps returning a length different from txCount (e.g., API bug, race with indexer), repeatedly re-querying the same block with no backoff or exit condition. This is a correctness and reliability issue. Consider adding a bounded retry loop and returning a clear error when expectations are not met:

```diff
-func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
-	for {
-		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
-		if err != nil {
-			return txs, err
-		}
-
-		// If we get the expected number of transactions, return immediately
-		if len(allTxs) == txCount {
-			return allTxs, nil
-		}
-	}
-}
+func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) ([]types.RestTx, error) {
+	const maxRetries = 3
+
+	var lastTxs []types.RestTx
+
+	for attempt := 0; attempt < maxRetries; attempt++ {
+		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
+		if err != nil {
+			return nil, err
+		}
+
+		lastTxs = allTxs
+		if len(allTxs) == txCount {
+			return allTxs, nil
+		}
+	}
+
+	return nil, fmt.Errorf(
+		"failed to fetch expected %d txs after %d attempts (last len=%d)",
+		txCount,
+		maxRetries,
+		len(lastTxs),
+	)
+}
```
184-255: Offset-based fallback in GetAllBalances still re-queries the first page only

When the "broken API" fallback is triggered (NextKey == nil but Total indicates more), you flip useOffset = true, but fetchCosmosBalances always sends pagination.offset = "0" in offset mode. The result:

- The same first page is fetched repeatedly in offset mode.
- allBalances fills with duplicates until len(allBalances) >= total, at which point the loop breaks with incorrect data and unnecessary network/memory overhead.

To fix this, thread an offset parameter through fetchCosmosBalances and advance it each iteration, mirroring your tx pagination approach:

```diff
-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}
@@
 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0

 	for {
-		balancesResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchCosmosBalances(address, height, useOffset, nextKey))
+		balancesResp, err := executeWithEndpointRotation(
+			ctx,
+			q.RestUrls,
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
+		)
@@
 		for _, balance := range balancesResp.Balances {
@@
 			allBalances = append(allBalances, sdk.Coin{
 				Denom:  denom,
 				Amount: balance.Amount,
 			})
 		}
+
+		// Advance offset for potential offset-based fallback.
+		offset += len(balancesResp.Balances)
```

You'll need to adjust any other call sites of fetchCosmosBalances in this package accordingly.
18-26: Fix account pagination offset fallback and replace magic 20 with a named constant

Two issues in FetchAllAccountsWithPagination:

- Offset fallback has the same bug as balances. useOffset is set to true when NextKey is empty but Total indicates more accounts, yet offset is never advanced. In offset mode you will keep re-requesting the first page while appending duplicates until len(allAddresses) >= total.
- Bare 20 is a magic number for EVM address length. len(accAddress) > 20 is non-obvious without context and should be expressed via a named constant.

Suggested changes:

```diff
 const (
 	paginationLimitInt = 100
 	paginationLimit    = "100"
+	evmAccountAddressLen = 20

 	cosmosTxsPath = "/cosmos/tx/v1beta1/txs/block/%d"
@@
 func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) {
 	var allAddresses []sdk.AccAddress
 	var nextKey []sdk.AccAddress
 	useOffset := false
 	offset := 0
@@
 		// Extract addresses from accounts
 		for _, account := range accountsResp.Accounts {
 			if account.Address != "" {
 				if accAddress, err := util.AccAddressFromString(account.Address); err == nil {
-					if q.VmType == types.EVM && len(accAddress) > 20 {
+					if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 						continue
 					}
 					allAddresses = append(allAddresses, accAddress)
 				}
 			}
 		}
+
+		// Advance offset for potential offset-based fallback.
+		offset += len(accountsResp.Accounts)
```

With this, subsequent offset-based requests will correctly move through pages, and the EVM-specific filter becomes self-documenting.

Also applies to: 282-327
🧹 Nitpick comments (1)
util/querier/cosmos.go (1)
150-182: Consider surfacing invalid module account addresses instead of silently dropping

GetMinterBurnerModuleAccounts filters by "minter"/"burner" permissions and then ignores any AccAddressFromString errors. This means a misconfigured module account will just disappear from results with no signal. Consider either returning an error on invalid module account addresses or at least logging/recording them so configuration issues don't go unnoticed; a sketch of the logging variant follows.
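A minimal sketch of the logging variant; the moduleAccount shape and the parse callback are hypothetical stand-ins for the real response type and util.AccAddressFromString:

```go
package querier

import "log/slog"

type moduleAccount struct {
	Name    string
	Address string
}

// filterModuleAccounts keeps valid addresses and logs the invalid ones
// instead of dropping them silently.
func filterModuleAccounts(accs []moduleAccount, parse func(string) ([]byte, error), logger *slog.Logger) [][]byte {
	var out [][]byte
	for _, acc := range accs {
		addr, err := parse(acc.Address)
		if err != nil {
			// Surface the misconfiguration so operators can notice it.
			logger.Warn("skipping module account with invalid address",
				"name", acc.Name, "address", acc.Address, "err", err)
			continue
		}
		out = append(out, addr)
	}
	return out
}
```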
📒 Files selected for processing (1)
- util/querier/cosmos.go (1 hunks)
🔇 Additional comments (5)
util/querier/cosmos.go (5)
43-61: Tx pagination helper looks consistent
fetchTxs correctly toggles between offset and key-based pagination, encodes nextKey once, and plumbs the timeout through to Get. No issues from a correctness standpoint.
63-99: Tx pagination + offset fallback implementation looks correct
fetchAllTxsWithPagination appends into allTxs, keeps offset = len(allTxs) in sync, and only falls back to offset-based pagination when NextKey is empty but a full page is returned and Total indicates more. This fixes the classic "offset never advances" pitfall and looks sound.
101-126: Latest-height fetching and parsing is straightforward
GetLatestHeight wraps the REST call with endpoint rotation and parses Header.Height into int64, returning a typed invalid-value error on parse failure. This is clear and robust enough for this layer.
128-148: Validator fetch path and rotation wiring look fine

The validator query uses a single helper with executeWithEndpointRotation and extractResponse, with no hidden state. Assuming the /opinit/opchild/v1/validator/%s path is correct for your deployment, this looks good.
330-350: Node info helpers are straightforward and consistent
fetchNodeInfo and GetNodeInfo mirror the same endpoint-rotation + typed-extraction pattern as the other helpers. No issues from a correctness or style perspective.
Actionable comments posted: 2
♻️ Duplicate comments (4)
util/querier/cosmos.go (4)
301-301: Replace magic number with named constant.

The literal 20 for EVM address length filtering is non-obvious without context. A named constant improves clarity. Define a constant:

```diff
 const (
 	paginationLimitInt = 100
 	paginationLimit    = "100"
+	evmAccountAddressLen = 20

 	cosmosTxsPath = "/cosmos/tx/v1beta1/txs/block/%d"
```

Then use it:

```diff
-	if q.VmType == types.EVM && len(accAddress) > 20 {
+	if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 		continue
 	}
```
29-41: Critical: Infinite loop when transaction count never matches expected value.This function loops indefinitely if
fetchAllTxsWithPaginationnever returns exactlytxCounttransactions. Network issues, API bugs, or race conditions between fetching the count and transactions can cause this.Add a retry limit and respect context cancellation:
func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) { + const maxAttempts = 3 + for attempt := 0; attempt < maxAttempts; attempt++ { + select { + case <-ctx.Done(): + return nil, ctx.Err() + default: + } + - for { allTxs, err := q.fetchAllTxsWithPagination(ctx, height) if err != nil { return txs, err } // If we get the expected number of transactions, return immediately if len(allTxs) == txCount { return allTxs, nil } + + // Optional: add backoff between retries + if attempt < maxAttempts-1 { + time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond) + } } + return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts (got %d)", txCount, maxAttempts, len(allTxs)) }
209-258: Critical: Offset never updated in pagination fallback.
GetAllBalancesdoesn't track or update anoffsetvariable. When the fallback to offset-based pagination is triggered (line 249), subsequent iterations don't advance the offset, causing duplicate data.Track and update the offset:
func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) { var allBalances []sdk.Coin var nextKey []byte useOffset := false + offset := 0 for { balancesResp, err := executeWithEndpointRotation( ctx, q.RestUrls, - fetchCosmosBalances(address, height, useOffset, nextKey), + fetchCosmosBalances(address, height, useOffset, offset, nextKey), ) if err != nil { return nil, err } // Append balances from this page for _, balance := range balancesResp.Balances { denom := strings.ToLower(balance.Denom) if q.VmType == types.EVM { contract, err := q.GetEvmContractByDenom(ctx, denom) if err != nil { return nil, err } denom = contract } allBalances = append(allBalances, sdk.Coin{ Denom: denom, Amount: balance.Amount, }) } + // Update offset for next iteration + offset = len(allBalances) + // Check if there are more pages if len(balancesResp.Pagination.NextKey) == 0 {
285-330: Critical: Offset never updated in pagination fallback.
FetchAllAccountsWithPaginationdoesn't track or update anoffsetvariable. When offset-based pagination is triggered (line 319), the offset remains at 0, causing duplicate data.Track and update the offset:
func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) { var allAddresses []sdk.AccAddress var nextKey []byte useOffset := false + offset := 0 for { - accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, nextKey)) + accountsResp, err := executeWithEndpointRotation( + ctx, + q.RestUrls, + fetchAllAccountsWithPagination(height, useOffset, offset, nextKey), + ) if err != nil { return nil, err } // Extract addresses from accounts for _, account := range accountsResp.Accounts { if account.Address != "" { if accAddress, err := util.AccAddressFromString(account.Address); err == nil { if q.VmType == types.EVM && len(accAddress) > 20 { continue } allAddresses = append(allAddresses, accAddress) } } } + + // Update offset for next iteration + offset = len(allAddresses) // Check if there are more pages if len(accountsResp.Pagination.NextKey) == 0 {
📒 Files selected for processing (1)
`util/querier/cosmos.go` (1 hunks)
🧰 Additional context used
🪛 GitHub Check: Run linter
util/querier/cosmos.go
[failure] 310-310: `if len(accountsResp.Pagination.NextKey) == 0` has complex nested blocks (complexity: 5) (nestif)

[failure] 240-240: `if len(balancesResp.Pagination.NextKey) == 0` has complex nested blocks (complexity: 5) (nestif)

[failure] 79-79: `if len(response.Pagination.NextKey) == 0` has complex nested blocks (complexity: 5) (nestif)

[failure] 285-285: cognitive complexity 32 of `func (*Querier).FetchAllAccountsWithPagination` is high (> 30) (gocognit)
Actionable comments posted: 1
♻️ Duplicate comments (8)
util/querier/query.go (1)
237-254: Timeout retries remain unbounded and metrics are misleading.

The issues flagged in previous reviews still apply:

- Timeout errors trigger infinite retries (line 252: `continue` without checking retry count)
- Line 243 calls `RecordRateLimitHit` for timeout errors, conflating two distinct failure modes

util/cache/cache.go (2)

21-30: Duplicate type definitions are unused and should be removed.

These `Validator` and `ConsensusPubkey` structs duplicate types already defined in `types/validators.go`. Additionally, `validatorCache` (line 39) uses `*types.Validator`, so these local definitions are entirely unused dead code.

```diff
-type Validator struct {
-	Moniker         string          `json:"moniker"`
-	OperatorAddress string          `json:"operator_address"`
-	ConsensusPubkey ConsensusPubkey `json:"consensus_pubkey"`
-}
-
-type ConsensusPubkey struct {
-	Type string `json:"@type"`
-	Key  string `json:"key"`
-}
```

489-495: `GetAccountCache` should normalize addresses to prevent cache misses.

As flagged previously, `checkAccountCache` (line 71) normalizes addresses to bech32 before cache lookup, but `GetAccountCache` queries with the raw input. Callers passing hex-formatted addresses will miss entries stored under their bech32 representation.

```diff
 func GetAccountCache(account string) (int64, bool) {
+	key, err := normalizeAccountToBech32(account)
+	if err != nil {
+		return 0, false
+	}
-	if id, ok := accountCache.Get(account); ok {
+	if id, ok := accountCache.Get(key); ok {
 		return id, true
 	}
 	return 0, false
 }
```

util/querier/cosmos.go (5)
56-68: Infinite loop when transaction count never matches expected value.

As flagged previously, `GetCosmosTxs` loops forever if the API never returns exactly `txCount` transactions. This can occur due to network issues, API bugs, or race conditions.

```diff
 func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
+	const maxRetries = 3
+	for attempt := 0; attempt < maxRetries; attempt++ {
-	for {
 		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
 		if err != nil {
 			return txs, err
 		}

 		if len(allTxs) == txCount {
 			return allTxs, nil
 		}
 	}
+	return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts", txCount, maxRetries)
 }
```

203-226: Hardcoded offset breaks pagination fallback in `fetchCosmosBalances`.

Line 208 hardcodes `pagination.offset` to `"0"`. When offset-based pagination is triggered, every request re-fetches the first page.

The function needs an `offset int` parameter:

```diff
-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) ...
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) ...

 	if useOffset {
-		params["pagination.offset"] = "0"
+		params["pagination.offset"] = strconv.Itoa(offset)
 	}
```

228-269: `GetAllBalances` never advances offset for pagination fallback.

When the offset fallback is triggered, the `offset` variable is never updated, causing duplicate data fetching.

Add offset tracking similar to `fetchAllTxsWithPagination`:

```diff
 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0

 	for {
 		balancesResp, err := executeWithEndpointRotation(
 			ctx,
 			q.RestUrls,
-			fetchCosmosBalances(address, height, useOffset, nextKey),
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
 		)
 		// ... append balances ...
+		offset += len(balancesResp.Balances)
```

271-323: Same offset pagination bugs exist in `fetchAllAccountsWithPagination` and `FetchAllAccountsWithPagination`.

Line 276 hardcodes offset to `"0"` and the calling function never tracks/advances offset.

325-341: Magic number `20` for EVM address length should be a named constant.

Line 335 uses `len(accAddress) > 20` without context. A named constant would make this self-documenting.

```diff
+const evmAccountAddressLen = 20
+
 func (q *Querier) extractAddressesFromAccounts(accounts []types.CosmosAccount) []sdk.AccAddress {
 	// ...
-	if q.VmType == types.EVM && len(accAddress) > 20 {
+	if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
```
🧹 Nitpick comments (1)
config/chain.go (1)
111-128: Inconsistent error types based on field name reduce clarity.

The field-specific branching (lines 117-119, 123-126) uses different error constructors (`NewValidationError` vs `NewInvalidValueError`) for the same validation failure depending on field name. This makes error handling inconsistent.

Consider unifying to a single error type:

```diff
 func validateHttpUrl(fieldName, urlStr string, idx int) error {
 	if len(urlStr) == 0 {
 		return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d is empty", idx))
 	}

 	u, err := url.Parse(urlStr)
 	if err != nil {
-		if fieldName == "RPC_URL" {
-			return types.NewValidationError(fieldName, fmt.Sprintf("invalid URL format at index %d: %s", idx, urlStr))
-		}
-		return types.NewInvalidValueError(fieldName, urlStr, fmt.Sprintf("invalid URL at index %d: %v", idx, err))
+		return types.NewValidationError(fieldName, fmt.Sprintf("invalid URL format at index %d: %s", idx, urlStr))
 	}

 	if u.Scheme != "http" && u.Scheme != "https" {
-		if fieldName == "RPC_URL" {
-			return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
-		}
-		return types.NewInvalidValueError(fieldName, urlStr, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
+		return types.NewValidationError(fieldName, fmt.Sprintf("URL at index %d must use http or https scheme, got: %s", idx, u.Scheme))
 	}

 	return nil
 }
```
📒 Files selected for processing (6)
`config/chain.go` (1 hunks)
`indexer/extension/richlist/evmrichlist/run.go` (2 hunks)
`util/cache/cache.go` (13 hunks)
`util/querier/cosmos.go` (1 hunks)
`util/querier/querier.go` (1 hunks)
`util/querier/query.go` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- indexer/extension/richlist/evmrichlist/run.go
🧰 Additional context used
🧬 Code graph analysis (2)
util/cache/cache.go (3)
- types/validators.go (2): `ConsensusPubkey` (13-16), `Validator` (7-11)
- cache/cache.go (1): `Cache` (9-11)
- util/util.go (3): `AccAddressFromString` (12-31), `BytesToHexWithPrefix` (49-51), `BytesToHex` (45-47)
util/querier/cosmos.go (6)
- indexer/collector/tx/types.go (3): `Pagination` (19-22), `RestTx` (24-28), `QueryRestTxsResponse` (14-17)
- types/account.go (6): `Pagination` (45-48), `RestTx` (73-77), `QueryRestTxsResponse` (94-97), `QueryModuleAccountsResponse` (63-65), `CosmosAccountsResponse` (51-54), `CosmosAccount` (14-18)
- util/querier/querier.go (1): `Querier` (20-28)
- types/errors.go (1): `NewInvalidValueError` (60-66)
- util/util.go (1): `AccAddressFromString` (12-31)
- types/cosmos.go (1): `NodeInfo` (3-7)
🔇 Additional comments (12)
util/querier/query.go (2)
1-40: Constants and initialization look reasonable.

The retry constants (`maxRetries=5`, `baseBackoffDelay=1s`, `maxBackoffDelay=30s`) are sensible defaults. The LCG seed initialization using atomic operations is thread-safe.

293-391: HTTP request execution and error handling are well-structured.

The implementation properly:
- Validates URLs before use
- Tracks metrics for latency, concurrent requests, and semaphore wait times
- Handles rate limits (429) and internal server errors appropriately
- Uses context deadlines for timeout extraction
config/chain.go (2)
131-153: Backward compatibility accessors are well-designed.

The `GetRpcUrl()`, `GetRestUrl()`, and `GetJsonRpcUrl()` methods provide clean backward compatibility by returning the first element or an empty string. This is a good pattern for migration.
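For illustration, a minimal sketch of the accessor pattern being praised here, assuming a trimmed-down `ChainConfig` with only the relevant slice field:

```go
package main

import "fmt"

// ChainConfig mirrors the slice-based URL fields described in the review;
// only the field relevant to the accessor is shown.
type ChainConfig struct {
	RpcUrls []string
}

// GetRpcUrl returns the first configured RPC URL, or "" when none are set,
// which lets callers written against the old single-URL field keep working.
func (c *ChainConfig) GetRpcUrl() string {
	if len(c.RpcUrls) == 0 {
		return ""
	}
	return c.RpcUrls[0]
}

func main() {
	cfg := &ChainConfig{RpcUrls: []string{"https://rpc-1.example", "https://rpc-2.example"}}
	fmt.Println(cfg.GetRpcUrl())                 // https://rpc-1.example
	fmt.Println((&ChainConfig{}).GetRpcUrl() == "") // true
}
```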
20-40: Validation refactoring improves maintainability.

Breaking validation into dedicated methods (`validateChainId`, `validateRpcUrls`, etc.) makes the code more readable and testable.

util/cache/cache.go (2)
45-57: Cache initialization with singleton pattern is well-implemented.

Using `sync.Once` ensures thread-safe single initialization. The cache types and sizes are configurable through `CacheConfig`.
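As a reference for the pattern (not the repo's actual code), here is a small runnable sketch of `sync.Once`-guarded cache initialization; the `CacheConfig` field and the map-based cache are simplified assumptions:

```go
package main

import (
	"fmt"
	"sync"
)

// CacheConfig is a simplified stand-in for the config struct the review refers to.
type CacheConfig struct {
	AccountCacheSize int
}

var (
	initOnce     sync.Once
	accountCache map[string]int64
)

// InitCaches is safe to call from multiple goroutines: sync.Once guarantees
// the initialization body runs exactly once, and later calls are no-ops.
func InitCaches(cfg CacheConfig) {
	initOnce.Do(func() {
		accountCache = make(map[string]int64, cfg.AccountCacheSize)
	})
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			InitCaches(CacheConfig{AccountCacheSize: 1024})
		}()
	}
	wg.Wait()
	fmt.Println(accountCache != nil) // true, initialized exactly once
}
```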
497-506: Validator cache accessors are correctly implemented.

`GetValidatorCache` and `SetValidatorCache` properly use `*types.Validator` and key by `OperatorAddress`.

util/querier/querier.go (2)
47-57: `NewQuerier` constructor correctly copies configuration fields.

The constructor properly initializes all fields from `ChainConfig`, including the new slice-based URL fields.
36-42: Generic `extractResponse` helper is a clean abstraction.

Using generics for JSON unmarshalling reduces boilerplate across different response types.
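The helper's shape is quoted in a later nitpick in this thread; here it is as a self-contained, runnable snippet (the `nodeInfo` type and sample JSON are illustrative only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractResponse decodes a raw HTTP body into any response type, so each
// endpoint helper avoids writing its own unmarshalling boilerplate.
func extractResponse[T any](response []byte) (T, error) {
	var t T
	if err := json.Unmarshal(response, &t); err != nil {
		return t, err
	}
	return t, nil
}

type nodeInfo struct {
	Network string `json:"network"`
}

func main() {
	body := []byte(`{"network":"example-chain-1"}`)
	info, err := extractResponse[nodeInfo](body)
	fmt.Println(info.Network, err) // example-chain-1 <nil>
}
```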
util/querier/cosmos.go (4)
29-54: `handlePaginationNextKey` helper is a good abstraction.

This centralizes the pagination logic, including the workaround for broken APIs that return a null `next_key` prematurely. The boolean return values are clearly documented.

70-118: Transaction fetching correctly implements offset-based pagination.

`fetchTxs` accepts an `offset` parameter (line 70) and `fetchAllTxsWithPagination` properly updates `offset = len(allTxs)` (line 104). This is the correct pattern that should be applied to the balance and account fetchers.

120-145: Height fetching and parsing are well-implemented.

The code properly uses endpoint rotation and handles parsing errors with appropriate error types.

343-363: Node info retrieval is cleanly implemented.

Uses the generic `extractResponse` helper and endpoint rotation consistently.
Actionable comments posted: 0
♻️ Duplicate comments (6)
util/querier/cosmos.go (6)
57-69: Critical: Infinite loop when transaction count never matches expected value.

This issue was previously flagged but remains unfixed. `GetCosmosTxs` loops indefinitely if the API never returns exactly `txCount` transactions, which can occur due to network issues, API bugs, or race conditions.

Add a retry limit or respect the context deadline:

```diff
 func (q *Querier) GetCosmosTxs(ctx context.Context, height int64, txCount int) (txs []types.RestTx, err error) {
+	const maxRetries = 3
+	for attempt := 0; attempt < maxRetries; attempt++ {
-	for {
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		default:
+		}
+
 		allTxs, err := q.fetchAllTxsWithPagination(ctx, height)
 		if err != nil {
 			return txs, err
 		}

 		// If we get the expected number of transactions, return immediately
 		if len(allTxs) == txCount {
 			return allTxs, nil
 		}
 	}
+	return nil, fmt.Errorf("failed to fetch expected %d txs after %d attempts", txCount, maxRetries)
 }
```

204-227: Critical: Hardcoded offset causes infinite loop in pagination fallback.

This issue was previously flagged as addressed but remains unfixed. Line 209 hardcodes `pagination.offset` to `"0"`, so when the offset-based pagination fallback is triggered, every subsequent request re-fetches the first page.

Add an `offset` parameter and use it:

```diff
-func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
+func fetchCosmosBalances(address sdk.AccAddress, height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.QueryAllBalancesResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}
```

229-270: Critical: Offset never tracked in `GetAllBalances`, causing infinite loop in fallback pagination.

This issue was previously flagged as addressed but remains unfixed. When the fallback to offset-based pagination is triggered at line 260, `useOffset` is set to `true` but no `offset` variable exists to track position. Combined with the hardcoded `"0"` in `fetchCosmosBalances`, this causes an infinite loop re-fetching the first page.

Add offset tracking and pass it to the fetch function:

```diff
 func (q *Querier) GetAllBalances(ctx context.Context, address sdk.AccAddress, height int64) ([]sdk.Coin, error) {
 	var allBalances []sdk.Coin
 	var nextKey []byte
 	useOffset := false
+	offset := 0

 	for {
 		balancesResp, err := executeWithEndpointRotation(
 			ctx,
 			q.RestUrls,
-			fetchCosmosBalances(address, height, useOffset, nextKey),
+			fetchCosmosBalances(address, height, useOffset, offset, nextKey),
 		)
 		if err != nil {
 			return nil, err
 		}

 		// Append balances from this page
 		for _, balance := range balancesResp.Balances {
 			denom := strings.ToLower(balance.Denom)
 			if q.VmType == types.EVM {
 				contract, err := q.GetEvmContractByDenom(ctx, denom)
 				if err != nil {
 					return nil, err
 				}
 				denom = contract
 			}
 			allBalances = append(allBalances, sdk.Coin{
 				Denom:  denom,
 				Amount: balance.Amount,
 			})
 		}

+		offset = len(allBalances)
+
 		// Check if there are more pages
 		shouldContinue, shouldBreak := handlePaginationNextKey(balancesResp.Pagination, len(allBalances), len(balancesResp.Balances), &useOffset)
```

272-295: Critical: Hardcoded offset causes infinite loop in pagination fallback.

This issue was previously flagged but remains unfixed. Line 277 hardcodes `pagination.offset` to `"0"`, identical to the bug in `fetchCosmosBalances`. When offset-based pagination is triggered, every request re-fetches the first page.

Add an `offset` parameter:

```diff
-func fetchAllAccountsWithPagination(height int64, useOffset bool, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
+func fetchAllAccountsWithPagination(height int64, useOffset bool, offset int, nextKey []byte) func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
 	return func(ctx context.Context, endpointURL string) (*types.CosmosAccountsResponse, error) {
 		// Build pagination parameters
 		params := map[string]string{"pagination.limit": paginationLimit}
 		if useOffset {
-			params["pagination.offset"] = "0"
+			params["pagination.offset"] = strconv.Itoa(offset)
 		} else if len(nextKey) > 0 {
 			params["pagination.key"] = base64.StdEncoding.EncodeToString(nextKey)
 		}
```

297-324: Critical: Offset never tracked in `FetchAllAccountsWithPagination`, causing infinite loop in fallback pagination.

This issue was previously flagged but remains unfixed. When fallback to offset-based pagination is triggered at line 312, `useOffset` is set to `true` but no `offset` variable exists. Combined with the hardcoded `"0"` in `fetchAllAccountsWithPagination`, this causes an infinite loop.

Add offset tracking and pass it to the fetch function:

```diff
 func (q *Querier) FetchAllAccountsWithPagination(ctx context.Context, height int64) ([]sdk.AccAddress, error) {
 	var allAddresses []sdk.AccAddress
 	var nextKey []byte
 	useOffset := false
+	offset := 0

 	for {
-		accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, nextKey))
+		accountsResp, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchAllAccountsWithPagination(height, useOffset, offset, nextKey))
 		if err != nil {
 			return nil, err
 		}

 		extracted := q.extractAddressesFromAccounts(accountsResp.Accounts)
 		allAddresses = append(allAddresses, extracted...)
+		offset = len(allAddresses)

 		// Check if there are more pages
```

336-336: Replace magic number with named constant.

This issue was previously flagged but remains unfixed. The literal `20` for EVM address length filtering is not self-documenting.

Define a constant at the top of the file:

```diff
 const (
 	paginationLimitInt = 100
 	paginationLimit    = "100"
+	evmAccountAddressLen = 20

 	cosmosTxsPath = "/cosmos/tx/v1beta1/txs/block/%d"
```

Then use it:

```diff
-		if q.VmType == types.EVM && len(accAddress) > 20 {
+		if q.VmType == types.EVM && len(accAddress) > evmAccountAddressLen {
 			continue
 		}
```
🧹 Nitpick comments (1)
util/querier/cosmos.go (1)
30-55: Consider a more expressive return type for pagination control.

Returning `(bool, bool)` requires callers to remember which boolean means what. A custom type or named returns would improve readability.

Example refactor:

```diff
-func handlePaginationNextKey(pagination types.Pagination, currentCount, pageSize int, useOffset *bool) (bool, bool) {
+type PaginationAction int
+
+const (
+	PaginationContinueWithNextKey PaginationAction = iota
+	PaginationContinueWithOffset
+	PaginationDone
+)
+
+func handlePaginationNextKey(pagination types.Pagination, currentCount, pageSize int, useOffset *bool) PaginationAction {
 	if len(pagination.NextKey) != 0 {
-		return false, false // Continue with next key
+		return PaginationContinueWithNextKey
 	}

 	// Workaround for broken API that returns null next_key prematurely.
 	if pagination.Total == "" {
-		return false, true // No more pages
+		return PaginationDone
 	}

 	total, err := strconv.Atoi(pagination.Total)
 	if err != nil {
-		return false, true // Can't parse total, assume done
+		return PaginationDone
 	}

 	if currentCount < total && pageSize == paginationLimitInt {
 		*useOffset = true
-		return true, false // Continue with offset
+		return PaginationContinueWithOffset
 	}

-	return false, true // No more pages
+	return PaginationDone
 }
```
📒 Files selected for processing (1)
`util/querier/cosmos.go` (1 hunks)
🔇 Additional comments (1)
util/querier/cosmos.go (1)
91-119: LGTM! Pagination with offset tracking implemented correctly.

The function properly maintains and updates `offset` on line 105, ensuring that offset-based pagination fallback works as intended.
Actionable comments posted: 1
♻️ Duplicate comments (1)
util/querier/querier.go (1)
62-101: Past review issue remains unaddressed: backoff calculation and endpoint exhaustion logic.

The concerns raised in the previous review comment about this section have not been resolved:

- Line 86: `calculateBackoffDelay(retriesPerEndpoint)` is called when `retriesPerEndpoint >= maxRetriesPerURL` (from the line 84 condition), meaning the backoff delay before endpoint rotation always uses the maximum value (at least `maxRetriesPerURL`). This may not reflect the intended exponential backoff behavior.
- Lines 97-99: The endpoint exhaustion check (`currentEndpointIndex == startEndpointIndex`) is positioned after rotation but occurs BEFORE actually attempting the rotated endpoint. Additionally, this check is redundant because the `totalRetries` check at line 114 will always trigger first after `len(endpoints) * maxRetriesPerURL` attempts.

Please review and address the suggestions in the previous review comment to fix both the backoff calculation (use `maxRetriesPerURL-1` or track attempts correctly, as in the sketch below) and the exhaustion detection logic (track distinct endpoints tried rather than relying on index equality).
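To make the intent concrete, here is a sketch of a bounded, capped backoff calculation. It reuses the `baseBackoffDelay`/`maxBackoffDelay` values mentioned earlier in this thread but is otherwise an assumption, not the repo's actual helper:

```go
package main

import (
	"fmt"
	"time"
)

const (
	baseBackoffDelay = 1 * time.Second
	maxBackoffDelay  = 30 * time.Second
)

// calculateBackoffDelay doubles the delay per attempt and caps it: attempt 0
// waits 1s, attempt 1 waits 2s, and so on up to the 30s ceiling. Passing the
// 0-based per-endpoint attempt counter, rather than a value that has already
// reached maxRetriesPerURL, avoids always sleeping the maximum before rotating.
func calculateBackoffDelay(attempt int) time.Duration {
	delay := baseBackoffDelay << attempt // 1s, 2s, 4s, 8s, ...
	if delay > maxBackoffDelay || delay <= 0 {
		delay = maxBackoffDelay // also guards against shift overflow
	}
	return delay
}

func main() {
	for attempt := 0; attempt < 7; attempt++ {
		fmt.Println(attempt, calculateBackoffDelay(attempt))
	}
}
```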
🧹 Nitpick comments (2)
util/querier/querier.go (2)
17-17: Consider adding a comment explaining queryTimeout usage.

The `queryTimeout` constant is defined but not referenced within this file. While it's likely used by request function implementations in other files (e.g., evm.go, wasm.go), adding a comment would clarify its purpose and prevent confusion.

```diff
 const (
+	// queryTimeout is the default timeout for individual HTTP requests made by querier methods
 	queryTimeout     = 5 * time.Second
 	maxRetriesPerURL = 5
 )
```

48-58: Consider adding a nil check for defensive programming.

The constructor works correctly but doesn't validate the input parameter. Adding a nil check would prevent potential panics.

```diff
 func NewQuerier(cfg *config.ChainConfig) *Querier {
+	if cfg == nil {
+		return nil
+	}
 	return &Querier{
 		ChainId:              cfg.ChainId,
 		VmType:               cfg.VmType,
 		RpcUrls:              cfg.RpcUrls,
 		RestUrls:             cfg.RestUrls,
 		JsonRpcUrls:          cfg.JsonRpcUrls,
 		AccountAddressPrefix: cfg.AccountAddressPrefix,
 		Environment:          cfg.Environment,
 	}
 }
```
📒 Files selected for processing (1)
`util/querier/querier.go` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
- types/vm.go (1): `VMType` (3-3)
- config/chain.go (1): `ChainConfig` (10-18)
- sentry_integration/sentry_integration.go (1): `CaptureCurrentHubException` (9-11)
🔇 Additional comments (3)
util/querier/querier.go (3)
1-14: LGTM on imports and package structure.

The package declaration and imports are well-organized, with appropriate dependencies for the querier functionality.

37-46: LGTM on helper function and type definitions.

The generic `extractResponse` function and `requestFunc` type alias provide clean abstractions for the endpoint rotation logic.

103-128: LGTM on request execution and failure handling logic.

The request execution flow, error handling, and backoff logic in this section work correctly:
- Successful responses return immediately
- Failed requests properly increment counters
- Sentry integration captures exhaustion errors for observability
- Context cancellation is checked appropriately
- Error wrapping preserves the error chain
- Changed configuration keys from "RPC_URLS", "REST_URLS", and "JSON_RPC_URLS" to singular forms "RPC_URL", "REST_URL", and "JSON_RPC_URL" for consistency and clarity in configuration management.
Actionable comments posted: 1
♻️ Duplicate comments (1)
config/config.go (1)
305-306: Resolve TODO by implementing dedicated validator cache configuration.

The TODO comment and use of the generic `CACHE_SIZE` environment variable for `ValidatorCacheSize` create inconsistency with the other cache configurations (lines 299-304), which all use dedicated environment variables.

This concern is addressed in the review comment for line 111, which provides a complete solution: add a `DefaultValidatorCacheSize` constant, set a default for `VALIDATOR_CACHE_SIZE` in `setDefaults()`, and use `viper.GetInt("VALIDATOR_CACHE_SIZE")` here.
📒 Files selected for processing (1)
`config/config.go` (3 hunks)
🔇 Additional comments (1)
config/config.go (1)
262-264: LGTM! Remember to update README as previously noted.

The conversion from single URL strings to comma-separated URL slices is correctly implemented using `splitAndTrim()`, which properly handles empty values and whitespace. This enables multi-URL support for improved failover and load balancing.

As noted in the previous review, ensure the README is updated to reflect the environment variable usage pattern (comma-separated values for `RPC_URL`, `REST_URL`, and `JSON_RPC_URL`).
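For readers unfamiliar with the parsing step, here is a plausible reading of what a `splitAndTrim`-style helper does; the actual implementation lives in the config package and may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// splitAndTrim splits a comma-separated value, trims whitespace, and drops
// empty entries, so "https://a, https://b," yields exactly two URLs.
func splitAndTrim(s string) []string {
	var out []string
	for _, part := range strings.Split(s, ",") {
		if trimmed := strings.TrimSpace(part); trimmed != "" {
			out = append(out, trimmed)
		}
	}
	return out
}

func main() {
	fmt.Println(splitAndTrim("https://rpc-1.example, https://rpc-2.example ,"))
	// [https://rpc-1.example https://rpc-2.example]
}
```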
- Introduced a new querier instance in the RichListExtension struct to enhance data retrieval capabilities.
- Improved the initialization process for better integration with the existing configuration.

- Temporarily disabled error handling logic in the fastSync method to prevent panic on exceeding the maximum scrape error count.
- Retained logging of errors for visibility while allowing the function to continue execution.
Actionable comments posted: 1
📒 Files selected for processing (1)
`indexer/scraper/sync.go` (1 hunks)
- Commented out the error return statement in the executeWithEndpointRotation function to prevent failure when all retries are exhausted.
- This change allows the function to continue execution without returning an error, while still logging the last encountered error for visibility.

… querier

- Removed a commented-out line in the fetchTraceCallByBlock function to streamline the code.
- Added logging statements for total retries and errors in the executeWithEndpointRotation function to improve visibility during execution.
Actionable comments posted: 1
♻️ Duplicate comments (1)
util/querier/querier.go (1)
84-101: Endpoint rotation logic has known issues - addressing past comment.

The previously identified issues with backoff calculation and exhaustion detection are still present. Additionally, there's a double-backoff scenario: after the final retry on an endpoint, backoff occurs at lines 120-125, then again at lines 86-91 during the rotation check.
🧹 Nitpick comments (3)
util/querier/evm.go (3)
268-273: Consider validating address length for defensive coding.

The padding logic handles addresses shorter than 64 chars but doesn't validate oversized or malformed addresses. A standard Ethereum address is 40 hex chars (20 bytes). While callers likely provide valid addresses, adding a length check would prevent subtle bugs from propagating.

```diff
 addressParam := strings.TrimPrefix(address, "0x")
+// Validate standard Ethereum address length (40 hex chars = 20 bytes)
+if len(addressParam) > 40 {
+	return nil, fmt.Errorf("invalid address length for %s: expected 40 hex chars, got %d", address, len(addressParam))
+}
+
 // Pad address to 32 bytes (64 hex chars) - efficient single allocation
 if len(addressParam) < 64 {
 	addressParam = strings.Repeat("0", 64-len(addressParam)) + addressParam
 }
```

304-305: TODO: Address incomplete error handling.

The TODO comment indicates uncertainty about proper handling here. This could be related to retry logic, partial failures in batch responses, or timeout handling.

Would you like me to help design more robust error handling for this batch request? For example, handling partial failures where some responses succeed while others fail, or implementing request chunking for large batches.

230-248: Minor: Consider checking first response directly.

Since `eth_blockNumber` is always added as the first request with ID 0 (line 261), checking `batchResponses[0]` directly would be more efficient than iterating. However, the current approach is more defensive against response reordering.
📒 Files selected for processing (2)
`util/querier/evm.go` (1 hunks)
`util/querier/querier.go` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
- types/vm.go (1): `VMType` (3-3)
- config/chain.go (1): `ChainConfig` (10-18)
- sentry_integration/sentry_integration.go (1): `CaptureCurrentHubException` (9-11)
🔇 Additional comments (4)
util/querier/evm.go (4)
24-42: LGTM! Clean ABI interaction pattern.

The method properly handles ABI packing/unpacking with appropriate error propagation. The pattern is consistent and reusable.

44-66: LGTM! Good input validation for token ID.

Using `big.Int.SetString` with proper validation and a custom error type provides clear error messaging for invalid inputs.

150-186: LGTM! Well-structured caching and normalization.

The method follows a cache-first pattern, handles the early return for hex addresses, normalizes denom formats appropriately, and validates the response before caching.

209-212: Past issue addressed - error handling is now correct.

The previous review flagged that the error from `extractResponse` was ignored. This has been fixed - the condition now properly checks `err == nil` before accessing `errResp.Error`.
- Updated the timeout parameter in the TraceCallByBlock method to 10 times the previous queryTimeout value, enhancing the robustness of the endpoint rotation during trace calls.
Actionable comments posted: 0
🧹 Nitpick comments (4)
util/querier/evm.go (4)
102-134: Confirm intended behaviour when `VmType != types.EVM` in `GetEvmTxs`.

Right now, a non-EVM querier returns `(nil, nil)` because of the early `if q.VmType != types.EVM { return }`. That's reasonable if callers always gate on `VmType` or treat "no txs & no error" as "EVM not active", but it can also silently hide misconfiguration.

Consider either:

- returning an explicit error (e.g. "EVM VMType not enabled"; see the sketch after this list), or
- documenting that `nil, nil` is the sentinel for "non-EVM chain" and ensuring callers rely on that.
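A runnable sketch of the explicit-error variant under simplified, assumed types (the `VMType` constants, `Querier` fields, and `EvmTx` shape are stand-ins for the repo's actual definitions):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrEVMNotEnabled makes the "non-EVM chain" case explicit instead of the
// silent (nil, nil) sentinel; callers can branch on it with errors.Is.
var ErrEVMNotEnabled = errors.New("querier: EVM VMType not enabled")

type VMType int

const (
	MOVE VMType = iota
	EVM
)

type Querier struct{ VmType VMType }

type EvmTx struct{ Hash string }

// GetEvmTxs sketches the explicit-error variant; the real fetching logic is elided.
func (q *Querier) GetEvmTxs(height int64) ([]EvmTx, error) {
	if q.VmType != EVM {
		return nil, ErrEVMNotEnabled
	}
	// ... fetch transactions for height ...
	return []EvmTx{}, nil
}

func main() {
	q := &Querier{VmType: MOVE}
	_, err := q.GetEvmTxs(100)
	fmt.Println(errors.Is(err, ErrEVMNotEnabled)) // true
}
```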
136-186: Normalize denom before cache access to avoid duplicate remote lookups.

`GetEvmContractByDenom` normalizes `denom` (ibc uppercase, `gas` → `GAS`) only after checking the cache, but uses the normalized denom as the cache key on write. That means repeated calls with different cases (e.g. `"ibc/abc"` vs `"ibc/ABC"`, `"gas"` vs `"GAS"`) will miss the cache and re-query MiniEVM.

Refactor to apply normalization first, then consistently use the normalized key for both cache read and write and for the REST call, e.g.:

```diff
 func (q *Querier) GetEvmContractByDenom(ctx context.Context, denom string) (string, error) {
 	if strings.HasPrefix(denom, "0x") {
 		return denom, nil
 	}

-	// Check cache first
-	if address, ok := cache.GetEvmDenomContractCache(denom); ok {
-		return address, nil
-	}
-
-	// ibc/UPPERCASE
-	// l2/lowercase
-	// evm/AnyCase
-	if strings.HasPrefix(denom, "ibc/") {
-		denom = fmt.Sprintf("ibc/%s", strings.ToUpper(denom[4:]))
-	} else if strings.ToLower(denom) == "gas" {
-		denom = "GAS"
-	}
+	normalized := denom
+
+	// ibc/UPPERCASE
+	// l2/lowercase
+	// evm/AnyCase
+	if strings.HasPrefix(normalized, "ibc/") {
+		normalized = fmt.Sprintf("ibc/%s", strings.ToUpper(normalized[4:]))
+	} else if strings.EqualFold(normalized, "gas") {
+		normalized = "GAS"
+	}
+
+	// Check cache with normalized denom
+	if address, ok := cache.GetEvmDenomContractCache(normalized); ok {
+		return address, nil
+	}

-	response, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchEvmContractByDenom(denom))
+	response, err := executeWithEndpointRotation(ctx, q.RestUrls, fetchEvmContractByDenom(normalized))
@@
-	address := strings.ToLower(response.Address)
-	cache.SetEvmDenomContractCache(denom, address)
+	address := strings.ToLower(response.Address)
+	cache.SetEvmDenomContractCache(normalized, address)
```

This keeps semantics the same while improving the cache hit rate and avoiding repeated network calls for equivalent denoms.

188-228: Trace JSON-RPC error handling now looks correct; consider a single unmarshal as a minor cleanup.

The updated logic correctly checks `err == nil && errResp.Error != nil` before surfacing an RPC error, and only then unmarshals into `DebugCallTraceBlockResponse`. That fixes the earlier bug where the parse error was ignored.

If you ever want to tighten this further, you could unmarshal once into a struct that has both `Result` and `Error` fields instead of decoding twice, but that's purely a micro-optimization.
230-341: Batch ERC-20 balances: check caller expectations re JSON-RPC error fields and response ordering.

The batching logic and `parseLatestHeightFromBatch` are solid (height consistency checks are a nice touch), but two behavioural points are worth confirming:

Per-call JSON-RPC errors:
You currently only inspect `Error` for the `eth_blockNumber` response (ID 0). For `eth_call` entries (IDs ≥ 1), any EVM/JSON-RPC errors are returned in the `types.JSONRPCResponse` slice but not acted on here. Ensure all callers of `QueryERC20Balances` explicitly inspect `resp.Error` for each balance entry before treating `Result` as valid.

Response order vs. ID mapping:
JSON-RPC batch responses are not guaranteed to be in the same order as the requests. You already use `ID == 0` when parsing `eth_blockNumber`, but the rest of the code only checks `len(batchResponses)` and then returns the slice as-is. Confirm that downstream consumers either:

- rely on `ID` to match responses to addresses, or
- are comfortable with any ordering assumptions baked into the current RPC implementation.

If ordering matters at the API boundary, you might want to normalize by sorting or building a `map[id]response` before returning.
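To illustrate the `map[id]response` normalization suggested above, here is a small sketch; the `JSONRPCResponse` struct is a trimmed stand-in for the repo's actual type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// JSONRPCResponse is a trimmed stand-in for the repo's batch response type.
type JSONRPCResponse struct {
	ID     int             `json:"id"`
	Result json.RawMessage `json:"result"`
	Error  *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

// indexByID removes any ordering assumption from a JSON-RPC batch: each
// response is looked up by the request ID it echoes back, and per-call
// errors stay attached to the right entry.
func indexByID(responses []JSONRPCResponse) map[int]JSONRPCResponse {
	byID := make(map[int]JSONRPCResponse, len(responses))
	for _, resp := range responses {
		byID[resp.ID] = resp
	}
	return byID
}

func main() {
	// Batch servers may answer out of order; ID 0 was eth_blockNumber here.
	responses := []JSONRPCResponse{{ID: 2}, {ID: 0}, {ID: 1}}
	byID := indexByID(responses)
	if blockNum, ok := byID[0]; ok {
		fmt.Println("eth_blockNumber response found, id:", blockNum.ID)
	}
}
```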
📒 Files selected for processing (1)
`util/querier/evm.go` (1 hunks)
🔇 Additional comments (1)
util/querier/evm.go (1)
24-66: ERC-721 helpers and `evmCall` look correct and error-aware.

ABI pack/unpack around `evmCall` is clean, errors are propagated, and an invalid `tokenIdStr` is surfaced via `NewInvalidValueError`. The `evmCall` helper correctly decodes the hex payload and surfaces EVM call errors from `QueryCallResponse`. No blocking issues here.

Also applies to: 68-100
Actionable comments posted: 0
♻️ Duplicate comments (1)
util/querier/querier.go (1)
21-29: Add GoDoc for `Querier` and polish the `QueryCallResponse` comment.

`Querier` is exported but lacks a GoDoc comment, and `QueryCallResponse`'s comment is almost idiomatic but could be tightened. Adding proper docs will help callers and keep `go doc` output clean.

```diff
-type Querier struct {
+// Querier holds chain configuration and endpoint URLs used to perform queries.
+type Querier struct {
@@
-// QueryCallResponse represents the response from EVM call endpoint
+// QueryCallResponse represents the response from an EVM call endpoint.
 type QueryCallResponse struct {
```

Also applies to: 31-35, 48-58
🧹 Nitpick comments (2)
util/querier/querier.go (2)
37-46: Wrap JSON unmarshal errors with context for easier debugging.

`extractResponse` currently returns the raw `json.Unmarshal` error; wrapping it with context makes failures easier to trace without losing the underlying error.

```diff
 func extractResponse[T any](response []byte) (T, error) {
 	var t T
 	if err := json.Unmarshal(response, &t); err != nil {
-		return t, err
+		return t, fmt.Errorf("failed to unmarshal response: %w", err)
 	}
 	return t, nil
 }
```

60-125: Endpoint rotation/backoff logic looks sound; consider minor simplifications.

The loop correctly respects context cancellation, enforces `maxRetriesPerURL` per endpoint, rotates through all endpoints, and reports the terminal failure to Sentry; no correctness issues stand out. If you want to simplify later:

- `startEndpointIndex` is always `0`; you could hardcode that or derive a randomized start index and drop the separate variable.
- `loopSize`/`totalRetries` are only used to gate Sentry reporting; alternatively, you could report in the "exhausted all endpoints" branch and drop the extra counters if you don't need the exact attempt count.
📒 Files selected for processing (2)
`indexer/scraper/sync.go` (0 hunks)
`util/querier/querier.go` (1 hunks)
💤 Files with no reviewable changes (1)
- indexer/scraper/sync.go
🧰 Additional context used
🧬 Code graph analysis (1)
util/querier/querier.go (3)
- types/vm.go (1): `VMType` (3-3)
- config/chain.go (1): `ChainConfig` (10-18)
- sentry_integration/sentry_integration.go (1): `CaptureCurrentHubException` (9-11)
🔇 Additional comments (1)
util/querier/querier.go (1)
1-14: Package and imports look appropriate and consistent.
`context`, `encoding/json`, `fmt`, `time`, Sentry, and local packages are all used later in the file; there are no unused or missing imports, and the package naming is clear.
- Added a totalDelay variable to track cumulative backoff delays during endpoint retries in the executeWithEndpointRotation function.
- Introduced a safeguard to return an error if the total delay exceeds 5 minutes, ensuring better control over execution time.
looks good

some of coderabbit's comments are not resolved yet and may be worth a look too
- Replaced hardcoded empty address strings with constants from the types package for better maintainability.
- Introduced a new decrementPrepareCount method to encapsulate the logic for safely decrementing the prepareCount.
- Removed unused files related to internaltx and richlist utilities to clean up the codebase.
Actionable comments posted: 2
🧹 Nitpick comments (2)
indexer/extension/richlist/utils/utils.go (1)
267-300: Consider adding context cancellation checks in the account loop.

The loop processes potentially many accounts without checking for context cancellation. If the context is cancelled (e.g., due to timeout or shutdown), the loop continues until completion, wasting resources.

Apply this diff to add a cancellation check:

```diff
 for idx, address := range accounts {
+	// Check for context cancellation
+	if err := ctx.Err(); err != nil {
+		return err
+	}
+
 	if idx%100 == 0 {
 		progress := fmt.Sprintf("%d/%d", idx, len(accounts))
 		logger.Info("fetching and accumulating balances by denomination", slog.Int64("height", height), slog.String("progress", progress))
 	}
```

indexer/collector/evm-nft/prepare.go (1)
17-38: Consider using `errgroup.WithContext` so goroutines respect `ctx` cancellation.

Right now `ctx` is passed into `GetCollectionName`/`GetTokenUri`, but the `errgroup` itself is not context-aware. Using `errgroup.WithContext` would ensure that:

- all in-flight lookups are canceled promptly when `ctx` is canceled, and
- other goroutines stop work once one returns an error.

A minimal change would look like:

```diff
-	var g errgroup.Group
+	g, ctx := errgroup.WithContext(ctx)
```

Call sites (`g.Go`, `g.Wait`) can stay the same, now using the derived `ctx`; a runnable illustration follows below.

Also applies to: 78-80
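Here is a self-contained example of the `errgroup.WithContext` pattern (requires the `golang.org/x/sync` module); `lookup` is a hypothetical stand-in for the per-NFT fetches:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// lookup simulates one per-NFT fetch that honors cancellation, standing in
// for calls like GetCollectionName / GetTokenUri.
func lookup(ctx context.Context, id int) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}
	if id == 2 {
		return errors.New("fetch failed")
	}
	return nil
}

func main() {
	// WithContext derives a ctx that is canceled as soon as any goroutine
	// returns an error, so the remaining lookups can bail out early.
	g, ctx := errgroup.WithContext(context.Background())
	for id := 0; id < 5; id++ {
		id := id // capture per-iteration copy (needed before Go 1.22)
		g.Go(func() error { return lookup(ctx, id) })
	}
	fmt.Println(g.Wait()) // first non-nil error: "fetch failed"
}
```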
📒 Files selected for processing (8)
`indexer/collector/evm-nft/collect.go` (6 hunks)
`indexer/collector/evm-nft/prepare.go` (5 hunks)
`indexer/collector/evm-nft/query.go` (0 hunks)
`indexer/collector/evm-nft/util.go` (1 hunks)
`indexer/extension/richlist/utils/rest.go` (0 hunks)
`indexer/extension/richlist/utils/types.go` (0 hunks)
`indexer/extension/richlist/utils/utils.go` (13 hunks)
`indexer/indexer.go` (7 hunks)
💤 Files with no reviewable changes (3)
- indexer/collector/evm-nft/query.go
- indexer/extension/richlist/utils/rest.go
- indexer/extension/richlist/utils/types.go
🧰 Additional context used
🧬 Code graph analysis (3)
indexer/collector/evm-nft/prepare.go (2)
- indexer/collector/evm-nft/submodule.go (1): `EvmNftSubmodule` (20-27)
- indexer/types/types.go (1): `ScrapedBlock` (17-28)

indexer/indexer.go (2)
- util/querier/querier.go (2): `Querier` (21-29), `NewQuerier` (48-58)
- metrics/error_tracker.go (1): `TrackError` (9-11)

indexer/collector/evm-nft/collect.go (1)
- util/cache/cache.go (3): `NftKey` (16-19), `GetOrCreateAccountIds` (152-177), `GetOrCreateNftIds` (268-293)
🔇 Additional comments (17)
indexer/indexer.go (4)
45-45: LGTM - Querier integration follows established patterns.

The querier field and initialization align with the `NewQuerier` constructor shown in the relevant snippets and support the PR's goal of centralizing data retrieval.

Also applies to: 59-59

207-253: Well-structured retry logic with proper cancellation handling.

The retry mechanism correctly handles:

- Context cancellation at multiple checkpoints (before an attempt, during the wait)
- Indefinite retry for `DeadlineExceeded` with a configurable cooling period
- Preservation of panic behavior for non-timeout errors

The separation of concerns between cancellation paths (using `decrementPrepareCount()`) and the success path (inline decrement) prevents double-counting.

255-259: LGTM - success path correctly updates shared state under lock.

265-270: LGTM - clean helper for thread-safe counter decrement.

indexer/extension/richlist/utils/utils.go (8)
21-22: LGTM! Clean import additions.

The new imports for `cache` and `querier` support the refactored architecture with proper dependency injection and context propagation.

87-87: LGTM! Context and querier propagation improves cancellation support.

The updated signature properly supports context-aware operations and dependency injection.

127-131: Verify the silent error handling aligns with requirements.

When `GetEvmContractByDenom` fails, the code continues to the next denom without logging or accumulating the error. While this prevents blocking on individual lookup failures, it may silently skip valid denoms during transient network issues.

Consider whether this behavior is acceptable or whether failed lookups should be retried, logged at a higher level, or accumulated for reporting.

172-172: LGTM! Centralizing constants improves maintainability.

Moving EVM-related constants to the `types` package ensures consistency across the codebase.

Also applies to: 188-188, 198-198

212-212: LGTM! Proper context and querier propagation.

The signature update and downstream call correctly propagate context and querier dependencies.

Also applies to: 230-230

253-260: LGTM! Context-aware balance fetching improves cancellation support.

The signature update and the `GetAllBalances` call properly utilize the context for timeout and cancellation handling.

Also applies to: 279-279

324-324: LGTM! Proper context and querier propagation throughout initialization.

The signature update and downstream calls correctly propagate context and querier dependencies, enabling proper cancellation and dependency injection.

Also applies to: 327-327, 339-339

357-357: LGTM! Cache delegation aligns with the refactoring goals.

Moving the implementation to the `cache` package improves modularity and aligns with the PR's objective to streamline cache initialization.

indexer/collector/evm-nft/util.go (1)

17-19: Standardizing NFT log filtering on `types.EvmTransferTopic` looks good.

The stricter 4-topic + empty-data check keeps this helper focused on standard EVM Transfer logs and removes local duplication of the topic constant; behavior is clear and consistent with shared types.
indexer/collector/evm-nft/prepare.go (1)
122-131: Centralizing mint/burn checks on `types.EvmEmptyAddress` is correct.

Using `types.EvmEmptyAddress` for the `from`/`to` comparisons keeps mint (`from == empty`, `to != empty`) and burn (`from != empty`, `to == empty`) detection consistent with the rest of the codebase, without changing the underlying logic. A compact illustration of the decision table follows.
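For readers new to the pattern, a minimal sketch of the mint/burn/transfer classification, using the 32-byte zero topic described in the `types/evm.go` review below (`classifyTransfer` and the sample topic are illustrative, not the repo's code):

```go
package main

import "fmt"

// EvmEmptyAddress is the 32-byte zero value that indexed address topics
// carry in EVM Transfer logs.
const EvmEmptyAddress = "0x0000000000000000000000000000000000000000000000000000000000000000"

// classifyTransfer mirrors the decision described above: Transfer(from, to)
// with a zero `from` is a mint, a zero `to` is a burn, anything else is a
// regular transfer.
func classifyTransfer(from, to string) string {
	switch {
	case from == EvmEmptyAddress && to != EvmEmptyAddress:
		return "mint"
	case from != EvmEmptyAddress && to == EvmEmptyAddress:
		return "burn"
	default:
		return "transfer"
	}
}

func main() {
	owner := "0x000000000000000000000000a0b1c2d3e4f5061728394a5b6c7d8e9f00112233"
	fmt.Println(classifyTransfer(EvmEmptyAddress, owner)) // mint
	fmt.Println(classifyTransfer(owner, EvmEmptyAddress)) // burn
	fmt.Println(classifyTransfer(owner, owner))           // transfer
}
```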
indexer/collector/evm-nft/collect.go (3)

20-21: Migration of mint/transfer/burn maps to `cache.NftKey` looks consistent.

Switching `mintMap`/`transferMap`/`burnMap` to use `cache.NftKey` and constructing `nftKey` with `{CollectionAddr, TokenId}` preserves the previous behavior while aligning with the cache package's key type. The mint/transfer/burn switch logic remains unchanged apart from using the shared `types.EvmEmptyAddress` constant.

Also applies to: 34-37, 71-91
108-151: Account ID resolution via `cache.GetOrCreateAccountIds` is wired correctly.

`mintedCollections` groups NFTs by collection, and `allAddresses` aggregates owners (mints/transfers) plus collection creators before calling `cache.GetOrCreateAccountIds`. That ensures all subsequent uses (`creatorId`, minted owners, transfer owners) resolve against a single, cache-backed ID map, matching the prior behavior while leveraging the new cache helper.

283-292: Tx-NFT edge creation now correctly uses cache-backed NFT IDs.

Building `keys` as `[]cache.NftKey` from `nftTxMap` and resolving them via `cache.GetOrCreateNftIds` is consistent with the new cache-based ID flow and keeps the later `nftIdMap[key]` lookup straightforward. This maintains the existing edge-creation behavior while centralizing NFT ID handling in the cache layer.
- Introduced EvmEmptyAddress constant for the zero address used in EVM transfers.
- Added EvmTransferTopic constant representing the keccak256 hash of the Transfer event signature for improved clarity and maintainability.
Actionable comments posted: 0
🧹 Nitpick comments (1)
types/evm.go (1)
11-18: Consider adding godoc comments for exported types.

Both `QueryEvmTxsResponse` and `EvmContractByDenomResponse` are exported types but lack godoc comments (except for the inline comment on line 15). Adding godoc comments would improve documentation and discoverability.

Example:

```diff
+// QueryEvmTxsResponse wraps a list of EVM transactions from the query API.
 type QueryEvmTxsResponse struct {
 	Result []EvmTx `json:"result"`
 }

-// EvmContractByDenomResponse represents the response from /minievm/evm/v1/contracts/by_denom
+// EvmContractByDenomResponse represents the response from /minievm/evm/v1/contracts/by_denom.
 type EvmContractByDenomResponse struct {
 	Address string `json:"address"`
 }
```
📒 Files selected for processing (1)
`types/evm.go` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
types/evm.go (1)
- types/tx.go (1): `EvmTx` (26-41)
🔇 Additional comments (2)
types/evm.go (2)
7-8: LGTM!

The `EvmTransferTopic` constant correctly represents the keccak256 hash of the Transfer event signature, which is standard for ERC20/ERC721 token transfers.

5-6: The 32-byte format of `EvmEmptyAddress` is correct.

The code uses it to compare against `log.Topics[1]` and `log.Topics[2]` (indexed Transfer event parameters), which are stored as 32-byte values in EVM logs. No changes needed.
- Updated the GetAccountCache function to normalize account addresses to Bech32 format before retrieval.
- Removed unused Validator and ConsensusPubkey structs to streamline the codebase.

- Eliminated the acquisition and release of the Fiber client in the Run and wait methods to streamline the code.
- This change reduces unnecessary resource management and simplifies the indexer logic.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
indexer/indexer.go (1)
61-65: `wait()` ignores context cancellation; `Run` can hang on shutdown or startup failure.

`Run` blocks in `i.wait()` before starting goroutines, and `wait()` loops forever on RPC errors:

- It calls `i.querier.GetLatestHeight(i.ctx)` inside a `for` loop.
- On any error (including `context.Canceled`/`DeadlineExceeded`) it just logs and `time.Sleep(5 * time.Second)`.
- Sleeps are not cancelable and there is no `ctx.Err()` check.

If the passed `ctx` is canceled while waiting for the chain to become ready (or while the RPC endpoint is down), `wait()` will keep retrying and sleeping, and `Run` will not return, defeating graceful shutdown.

Consider making `wait()` context-aware and exiting when `i.ctx` is done, and making the backoff sleeps cancelable:

```diff
 func (i *Indexer) wait() {
 	for {
-		chainHeight, err := i.querier.GetLatestHeight(i.ctx)
-		if err != nil {
-			i.logger.Error("failed to get chain height", slog.Any("error", err))
-			time.Sleep(5 * time.Second)
-			continue
-		}
-		if chainHeight > types.MinChainHeightToStart {
-			break
-		}
-		time.Sleep(types.ChainCheckInterval)
+		// Fast-path: respect cancellation before doing any work.
+		if err := i.ctx.Err(); err != nil {
+			i.logger.Info("wait() context done, exiting", slog.Any("error", err))
+			return
+		}
+
+		chainHeight, err := i.querier.GetLatestHeight(i.ctx)
+		if err != nil {
+			// Treat context cancellation specially: exit instead of retrying forever.
+			if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+				i.logger.Info("wait() cancelled while getting chain height", slog.Any("error", err))
+				return
+			}
+
+			i.logger.Error("failed to get chain height", slog.Any("error", err))
+			// Backoff sleep that can be interrupted by context cancel.
+			select {
+			case <-i.ctx.Done():
+				i.logger.Info("wait() context done during backoff, exiting", slog.Any("error", i.ctx.Err()))
+				return
+			case <-time.After(5 * time.Second):
+			}
+			continue
+		}
+
+		if chainHeight > types.MinChainHeightToStart {
+			break
+		}
+
+		// Periodic poll, also cancelable via context.
+		select {
+		case <-i.ctx.Done():
+			i.logger.Info("wait() context done while waiting for min height", slog.Any("error", i.ctx.Err()))
+			return
+		case <-time.After(types.ChainCheckInterval):
+		}
 	}
 }
```

This keeps the behavior the same in the healthy case but lets `Run` exit promptly when its context is canceled or when upstream height queries are being stopped.

Also applies to: 77-81, 149-162
🧹 Nitpick comments (1)
indexer/indexer.go (1)
172-256: Clarify `Prepare`'s context behavior and tighten `prepareCount` handling.

The new retry loop in `prepare()` looks reasonable, but there are a couple of things worth tightening:

Reliance on `collector.Prepare` honoring `ctx`

The retry loop stops only when `i.ctx.Done()` is observed at the top of the loop or during the cooldown `select`, or `collector.Prepare` returns `context.Canceled`. If `collector.Prepare` ever blocks on I/O without honoring the passed context, the goroutine can outlive shutdown even though `Run` has canceled `i.ctx`. It would be good to double-check that `collector.Prepare` consistently propagates and checks the `ctx` it receives, especially around network/database calls.

Centralizing `prepareCount` decrements / guarding underflow (optional)

- On cancellation paths you use `decrementPrepareCount()`.
- On the success path you do `i.prepareCount--` directly under the mutex.

To make this counter safer against future changes, you could (optionally) route all decrements through the helper and add a defensive guard:

```diff
 // success path
-	i.mtx.Lock()
-	i.blockMap[b.Height] = b
-	i.prepareCount--
-	i.mtx.Unlock()
+	i.mtx.Lock()
+	i.blockMap[b.Height] = b
+	i.mtx.Unlock()
+	i.decrementPrepareCount()
```

And in the helper:

```diff
 // decrementPrepareCount safely decrements the prepareCount counter
 func (i *Indexer) decrementPrepareCount() {
 	i.mtx.Lock()
-	i.prepareCount--
+	if i.prepareCount > 0 {
+		i.prepareCount--
+	} else {
+		i.logger.Warn("prepareCount underflow detected in decrementPrepareCount", slog.Int("prepareCount", i.prepareCount))
+	}
 	i.mtx.Unlock()
 }
```

This keeps the inflight-count accounting in one place and will surface any future misuse via logs instead of silently drifting negative.

Also applies to: 258-263

📥 Commits: Reviewing files that changed from the base of the PR and between ef91f27d8d8f0016283121b4f978ff41a0e2ce50 and d2c1a1a3fb8f90c18d547cf24ff53003cb8eabae.

📒 Files selected for processing (1)
`indexer/indexer.go` (7 hunks)

🧬 Code graph analysis (1)
indexer/indexer.go (2)
- util/querier/querier.go (2): `Querier` (21-29), `NewQuerier` (48-58)
- metrics/error_tracker.go (1): `TrackError` (9-11)
Summary by CodeRabbit
New Features
Improvements
Configuration Changes