Remove zombie node logic and add out-of-sync fallback selection #85
poopoothegorilla wants to merge 4 commits into main
Conversation
👋 poopoothegorilla, thanks for creating this pull request! To help reviewers, please consider creating future PRs as drafts first. This allows you to self-review and make any final changes before notifying the team. Once you're ready, you can mark it as "Ready for review" to request feedback. Thanks!
```go
// it will try to upgrade to an alive node when one becomes available.
func (c *MultiNode[CHAIN_ID, RPC]) selectNode(ctx context.Context) (node Node[CHAIN_ID, RPC], err error) {
	if c.selectionMode == NodeSelectionModeRandomRPC {
		return c.awaitNodeSelection(ctx)
```
We also need to adjust random node selection. Previously, we could abort all connections if the node was not alive. However, we now need to keep the connection to a usable non-alive node when there are no alive nodes.
Could you clarify what specific adjustment is needed here? With zombie logic retained, the last out-of-sync node is forcibly kept alive, so the existing random selection via awaitNodeSelection should work as before. Are you looking for something beyond what the zombie guards already provide?
```go
// threshold amount of time, mark it broken
lggr.Errorw(fmt.Sprintf("RPC endpoint detected out of sync; no new heads received for %s (last head received was %v)", noNewHeadsTimeoutThreshold, localHighestChainInfo.BlockNumber), "nodeState", n.getCachedState(), "latestReceivedBlockNumber", localHighestChainInfo.BlockNumber, "noNewHeadsTimeoutThreshold", noNewHeadsTimeoutThreshold)
if n.poolInfoProvider != nil {
	// if it's the only node and it's not a proxy, keep waiting for sync (check LatestChainInfo)
```
Why remove the comments?
Addressed. Restored all removed explanatory comments in aliveLoop (the zombie guards for the no-new-heads and no-new-finalized-heads sections, including the proxy notes and check interval notes) and in outOfSyncLoop (the load-balanced RPC note).
```go
tests.AssertLogEventually(t, observedLogs, fmt.Sprintf("RPC endpoint failed to respond to %d consecutive polls", pollFailureThreshold))
assert.Equal(t, nodeStateAlive, node.State())
})
t.Run("with threshold poll failures, we are the last node alive, but is a proxy, transitions to unreachable", func(t *testing.T) {
```
Why remove these tests?
Addressed. Restored both zombie behavior tests: 'forcibly keeps it alive' (with a poolInfo mock returning 0 live nodes excluding the caller) and 'but is a proxy, transitions to unreachable'. Updated the mocks to use LatestChainInfo(mock.Anything).
multinode/multi_node_test.go (Outdated)
```go
node1.AssertNotCalled(t, "UnsubscribeAllExceptAliveLoop")
node2.AssertNotCalled(t, "UnsubscribeAllExceptAliveLoop")
})
t.Run("RandomRPC falls back to out-of-sync node when no alive nodes available", func(t *testing.T) {
```
No longer needed. We do not need to fall back to an out-of-sync node, since the last out-of-sync node will be kept in the "zombie" state.
Addressed. Removed this test and all other out-of-sync fallback tests. Zombie state is kept as-is.
multinode/multi_node.go (Outdated)
```go
	TotalDifficulty: big.NewInt(0),
}
for _, n := range c.primaryNodes {
	if n.Name() == callerName {
```
Only exclude the caller from the nLiveNodes calculation. We still want to get the full ChainInfo.
Addressed. LatestChainInfo now excludes the caller only from the nLiveNodes count while still aggregating ChainInfo from all alive nodes, including the caller.
multinode/multi_node.go
Outdated
```go
// Return highest ChainInfo most recently received by the alive nodes.
// selectOutOfSyncNode picks the best out-of-sync node by highest block number.
// Returns nil if no out-of-sync nodes are available.
func (c *MultiNode[CHAIN_ID, RPC]) selectOutOfSyncNode() Node[CHAIN_ID, RPC] {
```
Addressed. Removed selectOutOfSyncNode() and isUsableState() entirely, along with all fallback logic in selectNode, awaitNodeSelection, and checkLease.
multinode/multi_node.go (Outdated)
```go
}

// If the current active node is out-of-sync, try to find an alive one first
if node != nil && isUsableState(node.State()) {
```
Addressed. Removed the out-of-sync upgrade block from selectNode. The function now only uses the normal nodeSelector path.
Description
- Remove "zombie" node logic that incorrectly kept unhealthy nodes alive when they were the last node in the pool, which caused missed state transitions and silent metric gaps (PLEX-2538)
- Add fallback node selection that uses out-of-sync nodes when no alive nodes are available, with automatic upgrade back to alive nodes once they recover
Problem
When two nodes (A and B) exist and node A is in FinalizedBlockOutOfSync state, LatestChainInfo() does not count it as alive (since State() returns nodeStateFinalizedBlockOutOfSync). If node B then fails health checks, the l < 2 guard in the alive loop mistakenly treats B as the "last alive node" and keeps it in a zombie state instead of transitioning it to Unreachable or OutOfSync. These zombie nodes are invisible to metrics, preventing proper alerting.
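The failure mode above can be made concrete with a small sketch. This is not the real node_lifecycle.go code; the state names loosely follow the PR text, and `countLive`/`nextStateOnFailure` are hypothetical helpers that isolate the two interacting behaviors: out-of-sync nodes are invisible to the live count, and the `l < 2` guard keeps the "last" live node alive.

```go
package main

import "fmt"

type nodeState int

const (
	stateAlive nodeState = iota
	stateFinalizedBlockOutOfSync
	stateUnreachable
)

// countLive mirrors the way the live-node count only includes nodes whose
// state is Alive; an out-of-sync node does not count.
func countLive(states []nodeState) (n int) {
	for _, s := range states {
		if s == stateAlive {
			n++
		}
	}
	return
}

// nextStateOnFailure sketches the buggy guard: with fewer than 2 live
// nodes, a failing node is kept "alive" (zombie) instead of transitioning
// to Unreachable, so metrics never see the failure.
func nextStateOnFailure(liveNodes int) nodeState {
	if liveNodes < 2 {
		return stateAlive // zombie: missed transition, silent metric gap
	}
	return stateUnreachable
}

func main() {
	// Node A is FinalizedBlockOutOfSync, node B is Alive but failing checks.
	states := []nodeState{stateFinalizedBlockOutOfSync, stateAlive}
	// countLive sees only B, so the l < 2 guard fires and B stays a zombie.
	fmt.Println(nextStateOnFailure(countLive(states)) == stateAlive)
}
```

The sketch shows why the guard misfires even though two nodes exist: node A is excluded from the count by its out-of-sync state, so node B looks like the last survivor.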
Changes
- node_lifecycle.go: Remove all zombie node guards from aliveLoop (4 locations) and outOfSyncLoop (1 location). Nodes now always transition to their correct state regardless of pool size. Remove zombieNodeCheckInterval(), msgCannotDisable, and msgDegradedState.
- multi_node.go: Add a selectOutOfSyncNode() fallback that picks the best out-of-sync node by highest block number. Update awaitNodeSelection() to fall back to out-of-sync nodes when no alive nodes exist. Update selectNode() to upgrade from an out-of-sync active node to an alive one when available. Update checkLease() to handle nil selector results with the out-of-sync fallback.
- Tests: Update 5 zombie behavior tests to verify correct state transitions. Add 6 new tests covering out-of-sync fallback selection, best-node picking, and alive-node upgrade.
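The selection rule behind selectOutOfSyncNode() can be sketched as follows. This is an illustration under simplified assumptions: `candidate` is a hypothetical stand-in for the generic Node interface, and `selectBestOutOfSync` is not the actual function signature.

```go
package main

import "fmt"

// candidate is a hypothetical flattened view of an out-of-sync node; the
// real code operates on the generic Node[CHAIN_ID, RPC] interface.
type candidate struct {
	name        string
	blockNumber int64
}

// selectBestOutOfSync sketches the fallback described above: among the
// out-of-sync nodes, pick the one with the highest block number, or nil
// when none are available.
func selectBestOutOfSync(nodes []candidate) *candidate {
	var best *candidate
	for i := range nodes {
		if best == nil || nodes[i].blockNumber > best.blockNumber {
			best = &nodes[i]
		}
	}
	return best
}

func main() {
	best := selectBestOutOfSync([]candidate{{"a", 5}, {"b", 9}, {"c", 7}})
	fmt.Println(best.name) // the node with the highest block number wins
}
```

Picking by highest block number favors the least-stale RPC, which also makes the later upgrade back to a truly alive node a strict improvement rather than a lateral move.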
Test plan
[x] All existing multinode tests pass (50/50)
[x] Zombie tests updated to verify nodes transition to correct unhealthy state
[x] New tests cover: fallback to OutOfSync node, fallback to FinalizedBlockOutOfSync node, best-node selection by block height, keeping out-of-sync active when no alive available, upgrading from out-of-sync to alive