[Access] Add index for account transactions#8381
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

This pull request adds account-transaction indexing: a new AccountTransaction model, storage interfaces and mocks, an account-transaction extended indexer with event-based address extraction (FT/NFT transfers), wiring in the node builders and indexer core, metrics for extended indexing, and comprehensive tests for indexing and the extended indexer.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Indexer as IndexerCore
    participant EventDecoder as Event Decoder
    participant Extended as ExtendedIndexer
    participant Storage as AccountTx Storage
    participant Metrics as Extended Metrics
    Indexer->>Indexer: Iterate block chunks & transactions
    Note over Indexer: Collect payer, proposer, authorizers
    Indexer->>EventDecoder: Decode FT/NFT transfer events (CCF → Cadence)
    EventDecoder-->>Indexer: Return extracted addresses
    Indexer->>Indexer: Build AccountTransaction entries (Address, Height, TxID, TxIndex, IsAuthorizer)
    Indexer->>Storage: Store(blockHeight, entries)
    Storage-->>Indexer: OK / Error
    Indexer->>Extended: Notify/forward block (when extended indexers present)
    Extended-->>Indexer: OK / Error
    Indexer->>Metrics: BlockIndexedExtended(indexer, height)
    Metrics-->>Indexer: Recorded
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@storage/account_transactions.go`:
- Around line 44-53: The interface comment for AccountTransactionsWriter and the
implementation storage/pebble/AccountTransactions.Store disagree about whether
repeated calls at the same blockHeight are allowed; reconcile by either (A)
updating the AccountTransactionsWriter.Store doc to state it permits idempotent
no-op calls when blockHeight == latestHeight and is only required to advance
when blockHeight == latestHeight+1 (mention AccountTransactionsWriter and
storage/pebble/AccountTransactions.Store), or (B) enforcing the stricter
contract in storage/pebble/AccountTransactions.Store by validating the incoming
blockHeight against the writer's latest height and returning an error when
blockHeight is not exactly latestHeight+1 (or when blockHeight < latestHeight),
plus add a clear error message describing the violation; pick one approach and
apply it consistently.
- Around line 56-62: The new AccountTransactions interface (which embeds
AccountTransactionsReader and AccountTransactionsWriter) requires updated mock
implementations; regenerate the test mocks by running the project's mock
generator command (e.g., run make generate-mocks) so the generated mocks for
AccountTransactions, AccountTransactionsReader, and AccountTransactionsWriter
are updated and committed.
In `@storage/pebble/account_transactions.go`:
- Around line 314-328: decodeAccountTxValue currently accepts empty or
multi-byte values and silently returns false; change its signature to return
(bool, error), validate that len(value)==1, return an error for any other length
(including zero) to avoid silent corruption, and preserve current boolean
decoding when value[0]!=0; update all callers (and any related function
decodeAccountTxKey usages) to propagate the error instead of assuming a default,
and adjust tests that expected nil/empty to assert the new error behavior.
- Around line 63-78: Replace uses of fmt.Errorf in NewAccountTransactions and
the other storage functions (InitAccountTransactions, TransactionsByAddress,
Store, decodeAccountTxKey, accountTxHeightLookup) with the project’s
irrecoverable error helper: import the irrecoverable package and call
irrecoverable.NewExceptionf(...) instead of fmt.Errorf(...) so these
initialization/access errors are classified as irrecoverable exceptions; update
the import block to include the irrecoverable package and keep error message
formatting/wrapping consistent with the original messages.
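The `(bool, error)` change suggested for `decodeAccountTxValue` can be sketched as follows; the function body is an assumption based on the review comment's description, not the actual pebble storage code.

```go
package main

import "fmt"

// decodeAccountTxValue validates that the stored value is exactly one byte
// and returns an error otherwise, instead of silently decoding to false.
func decodeAccountTxValue(value []byte) (bool, error) {
	if len(value) != 1 {
		return false, fmt.Errorf("invalid account tx value length: expected 1 byte, got %d", len(value))
	}
	// Preserve the original boolean decoding: any nonzero byte means true.
	return value[0] != 0, nil
}

func main() {
	ok, err := decodeAccountTxValue([]byte{1})
	fmt.Println(ok, err) // true <nil>
	_, err = decodeAccountTxValue(nil)
	fmt.Println(err != nil) // true: empty input now surfaces an error
}
```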
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found. Scanned files: none.
Codecov Report: ❌ Patch coverage is …

📢 Thoughts on this report? Let us know!
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@module/state_synchronization/indexer/indexer_core.go`:
- Around line 520-574: The code uses a global txIndex across chunks when looking
up events in the chunk-local eventsByTxIndex map, which will miss events if
event.TransactionIndex is chunk-local; modify the logic in the loop over
data.ChunkExecutionDatas so that you either reset txIndex to 0 at the start of
each chunk (before iterating chunk.Collection.Transactions) or introduce a new
chunkTxIndex counter used for the eventsByTxIndex lookup and for entries'
TransactionIndex, ensuring consistency between eventsByTxIndex[chunkTxIndex] and
the values stored in access.AccountTransaction; update references to txIndex in
that chunk to use the per-chunk counter while leaving other symbols like
markAddress and extractAddressFromTransferEvent unchanged.
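The per-chunk counter fix can be illustrated with simplified stand-in types (the real code iterates `data.ChunkExecutionDatas` and `chunk.Collection.Transactions`); the key point is that `chunkTxIndex` restarts at 0 for each chunk so it lines up with chunk-local `event.TransactionIndex` values.

```go
package main

import "fmt"

// chunk is a minimal stand-in for a chunk execution data record:
// its transactions and an events map keyed by chunk-local tx index.
type chunk struct {
	transactions    []string
	eventsByTxIndex map[uint32][]string
}

func main() {
	chunks := []chunk{
		{transactions: []string{"tx0", "tx1"},
			eventsByTxIndex: map[uint32][]string{0: {"evA"}, 1: {"evB"}}},
		{transactions: []string{"tx2"},
			eventsByTxIndex: map[uint32][]string{0: {"evC"}}},
	}
	for _, ch := range chunks {
		// chunkTxIndex resets to 0 per chunk, so the lookup key matches
		// the chunk-local event indices; a global counter would miss
		// every event in the second and later chunks.
		for chunkTxIndex := uint32(0); int(chunkTxIndex) < len(ch.transactions); chunkTxIndex++ {
			fmt.Println(ch.transactions[chunkTxIndex], ch.eventsByTxIndex[chunkTxIndex])
		}
	}
}
```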
🧹 Nitpick comments (2)
module/state_synchronization/indexer/indexer_core.go (1)
554-561: Error from event decoding stops indexing the entire block.

When `extractAddressFromTransferEvent` returns an error (e.g., CCF decode failure), the entire `buildAccountTxIndexEntries` function returns an error, which then fails the block indexing. This is a conservative approach that ensures data integrity, but consider whether a single malformed event should prevent indexing the rest of the block's account transactions. The current behavior is acceptable if you prefer strict consistency over availability, but you may want to log and skip malformed events instead to avoid blocking the indexer on edge cases.
module/state_synchronization/indexer/indexer_core_test.go (1)

941-953: Consider using actual system contract addresses in the event type location.

The helper creates event types with an empty `common.Address{}` in the location:

    location := common.NewAddressLocation(nil, common.Address{}, "FungibleToken")

However, `indexer.ftDepositedType` is constructed from actual system contract addresses. While this may work because the event `Type` field on `flow.Event` is what's matched (not the Cadence type's location), this inconsistency could be confusing. Consider documenting this intention or using the actual address for clarity.
Actionable comments posted: 17
🤖 Fix all issues with AI agents
In `@cmd/access/node_builder/access_node_builder.go`:
- Around line 1501-1518: Add a runtime validation after flag parsing that checks
builder.extendedIndexingEnabled against builder.executionDataIndexingEnabled (or
the equivalent executionDataIndexingEnabled flag variable) and fail fast with a
clear error if extended-indexing-enabled is true while execution-data-indexing
is false; specifically, in the same builder initialization/validation path where
flags.BoolVar sets builder.extendedIndexingEnabled, add: if
builder.extendedIndexingEnabled && !builder.executionDataIndexingEnabled {
return fmt.Errorf("extended-indexing-enabled requires execution-data-indexing to
be enabled; set --execution-data-indexing-enabled or disable
--extended-indexing-enabled") } so users get an explicit validation error
(alternatively you may auto-enable builder.executionDataIndexingEnabled instead,
but pick one consistent behavior and implement it in the builder
validation/Build method).
- Around line 964-1004: The account transactions index is bootstrapped using
builder.SealedRootBlock.Height which can be below the register DB's first
height; compute the consumer progress/startHeight from indexedBlockHeight (via
indexedBlockHeight.Initialize(...) and progress.ProcessedIndex()) first and then
call indexes.NewAccountTransactions with that startHeight instead of
builder.SealedRootBlock.Height so the initial accountTxStore aligns with the
ExtendedIndexer startHeight used in extended.NewExtendedIndexer.
In `@cmd/observer/node_builder/observer_builder.go`:
- Around line 1633-1644: The "extended indexer" Component currently reads
builder.ExtendedIndexer (used in the Component factory for "extended indexer")
but has no dependency on the DependableComponent that sets it (the "execution
data indexer"), causing a race; fix by converting this Component into a
DependableComponent that declares the execution data indexer as a dependency (so
the factory runs only after the execution data indexer has initialized and set
builder.ExtendedIndexer), or alternatively move the initialization logic for
builder.ExtendedIndexer into the "extended indexer" factory itself so the nil
check cannot fail—update the registration call where builder.Component("extended
indexer", ...) is invoked to either use builder.DependableComponent(...) with
the execution data indexer dependency name or to perform initialization of
builder.ExtendedIndexer inside that factory before returning it.
In `@module/metrics/extended_indexing.go`:
- Around line 29-34: The godoc for
ExtendedIndexingCollector.InitializeLatestHeightExtended is out of sync: it
starts with "InitializeLatestHeight" but must match the exported method name;
update the comment to begin with "InitializeLatestHeightExtended" and describe
its behavior (calls BlockIndexedExtended during startup) so it follows Go's
godoc convention and clearly references InitializeLatestHeightExtended and
BlockIndexedExtended.
In `@module/state_synchronization/indexer/extended/account_transactions_test.go`:
- Around line 36-43: The test helper newAccountTxIndexerForTest calls
NewAccountTransactions with three args but the constructor now requires a
storage.LockManager; fix by creating and passing a mock lock manager in the test
helper (e.g., call storagemock.NewLockManager(t) or equivalent mock constructor)
and supply that mock as the additional argument to NewAccountTransactions so the
call becomes NewAccountTransactions(zerolog.Nop(), store, lockManager, chainID);
ensure the mock is created before calling LatestIndexedHeight on store.
- Around line 53-60: The Store mock call uses the wrong argument count and
index: update every mock expectation for storage.AccountTransactions.Store
(e.g., the On("Store", ...) in account_transactions_test.go) to match the
signature (lockctx.Proof, storage.ReaderBatchWriter, blockHeight,
[]access.AccountTransaction) by adding placeholders for lctx and rw (use
mock.AnythingOfType("lockctx.Proof") or mock.Anything, and
mock.AnythingOfType("storage.ReaderBatchWriter") or mock.Anything), keep
testHeight as the third arg, and change the capture to
args.Get(3).([]access.AccountTransaction) (since transactions are now the fourth
argument); apply the same change to all ~13 occurrences.
In `@module/state_synchronization/indexer/extended/account_transactions.go`:
- Around line 61-71: The height validation in AccountTransactions.IndexBlockData
is inverted and always rejects the only valid height; change the future-height
check from using >= to > so that data.Header.Height == latest+1 is allowed: call
LatestIndexedHeight() as before, then if data.Header.Height > latest+1 return
ErrFutureHeight, and if data.Header.Height <= latest return ErrAlreadyIndexed
(preserve ErrFutureHeight and ErrAlreadyIndexed symbols and the
BlockData.Header.Height reference). Ensure the logic permits exactly latest+1 to
proceed.
In `@module/state_synchronization/indexer/extended/extended_indexer_test.go`:
- Around line 71-76: The mock expectations for idx1 and idx2 call IndexBlockData
with only two matchers but the real signature is IndexBlockData(lctx
lockctx.Proof, data extended.BlockData, batch storage.ReaderBatchWriter); update
both On("IndexBlockData", ...) setups to include a first argument matcher (e.g.,
mock.Anything) for the lctx parameter so the call signatures match what
ExtendedIndexer/backfillHeight actually invoke; keep the existing mock.MatchedBy
for BlockData and mock.Anything for the batch.
In `@module/state_synchronization/indexer/extended/extended_indexer.go`:
- Around line 131-191: The code currently treats a failure to update
c.latestHeight (via latest := c.latestHeight.Value() and later
c.latestHeight.Set(header.Height) in IndexBlockData) as a hard error and
returns, which wrongly fails valid concurrent/reindex cases; modify the
post-indexing branch so that when c.latestHeight.Set(header.Height) returns
false you do NOT return an error but instead log a warning (e.g., use
c.log.Warn() including the attempted header.Height and the current latest value
from c.latestHeight.Value()) and continue returning nil; keep the existing error
path if Set panics or an actual write error occurs, but treat a false return as
a non-fatal race/reindex condition.
- Around line 297-315: The doc mentions ErrFutureHeight as an expected error but
backfillHeight only tolerates ErrAlreadyIndexed; update the error handling in
backfillHeight so that after calling indexer.IndexBlockData it treats
errors.Is(err, ErrFutureHeight) the same as ErrAlreadyIndexed (i.e., do not
wrap/return them) — or if ErrFutureHeight is not expected, remove it from the
comment; specifically modify the error check around indexer.IndexBlockData in
backfillHeight to include ErrFutureHeight alongside ErrAlreadyIndexed when
deciding to skip/ignore the error.
- Around line 155-156: Remove the unused batch allocation: delete the call to
c.db.NewBatch() and the deferred batch.Close() at the start of the function (the
local variable batch) because writes are done via c.db.WithReaderBatchWriter
which manages its own batch; update any references to the removed batch (none
expected) and leave WithReaderBatchWriter usage intact to perform the actual
writes.
In `@module/state_synchronization/indexer/indexer_core_test.go`:
- Around line 725-742: The test double testExtendedIndexer has a mismatched
IndexBlockData signature; change its method to match the extended.Indexer
interface by adding the first parameter lctx lockctx.Proof and using
storage.ReaderBatchWriter for the batch parameter (i.e., implement
IndexBlockData(lctx lockctx.Proof, data extended.BlockData, batch
storage.ReaderBatchWriter) error), and update any internal uses (currently
ignoring the batch with _ and lctx) accordingly so the test type satisfies
extended.Indexer.
In `@storage/account_transactions.go`:
- Around line 14-28: The doc for TransactionsByAddress contradicts itself about
out-of-range endHeight: resolve and make behavior explicit by updating the
comment and/or implementation to match the real contract; either (A) state that
TransactionsByAddress will clamp endHeight to the latest indexed height and
remove ErrHeightNotIndexed from the "expected" list, or (B) state that
TransactionsByAddress returns ErrHeightNotIndexed when endHeight > latest
indexed height and remove the clamping sentence—ensure the comment references
the same behavior as the implementation that uses TransactionsByAddress and
ErrHeightNotIndexed so callers know whether to expect a clamp or an error.
- Around line 43-57: Update the AccountTransactions type comment to remove the
reference to the non-existent AccountTransactionsWriter interface and describe
the actual API: state that AccountTransactions provides read access via
AccountTransactionsReader and write access via the Store method defined on
AccountTransactions (reference symbols: AccountTransactions,
AccountTransactionsReader, Store). Keep the comment concise and accurate about
Store being called once per block and the sequential-height/caller-commit
responsibilities already present in the doc.
In `@storage/indexes/account_transactions_test.go`:
- Around line 379-391: The helper encodeAccountTxValue currently ignores the
error from msgpack.Marshal; change it to handle and propagate the error by
updating its signature to return ([]byte, error) (or accept testing.T and fail
on error), check the err returned from msgpack.Marshal, and return the error
instead of nil; update all callers/tests that use encodeAccountTxValue to handle
the new error (or rely on t.Fatalf when using testing.T) so encoding failures
are surfaced rather than silently ignored.
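The error-propagating variant of the `encodeAccountTxValue` test helper could look like this; `encoding/json` stands in for the msgpack call so the sketch stays self-contained, and the helper name is taken from the comment rather than the actual test file.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encodeAccountTxValue returns the encoding error instead of silently
// dropping it, per the suggested ([]byte, error) signature.
func encodeAccountTxValue(isAuthorizer bool) ([]byte, error) {
	payload, err := json.Marshal(isAuthorizer) // stand-in for msgpack.Marshal
	if err != nil {
		return nil, fmt.Errorf("could not encode account tx value: %w", err)
	}
	return payload, nil
}

func main() {
	v, err := encodeAccountTxValue(true)
	fmt.Println(string(v), err) // true <nil>
}
```

The `testing.T`-based alternative mentioned in the comment would instead call `t.Fatalf` on a non-nil error inside the helper.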
In `@storage/indexes/account_transactions.go`:
- Around line 283-286: The TODO asks for an explicit bound check or
documentation for encoding height into the 8-byte descending key: either
enforce/validate that the incoming height fits into 64 bits and then cast (e.g.,
compute onesComplement := ^uint64(height)) before calling
binary.BigEndian.PutUint64, or add a short comment explaining that the key uses
a fixed 8-byte uint64 slot so heights are implicitly bounded to [0, 2^64-1] and
therefore no additional upper bound is required; update the code around
onesComplement, binary.BigEndian.PutUint64 and the height producer to perform
the cast/validation or add the explanatory comment accordingly.
In `@storage/mock/account_transactions_writer.go`:
- Around line 1-46: The mock is stale: update AccountTransactionsWriter to match
the current interface by regenerating mocks (run make generate-mocks) so the
Store method signature becomes Store(lctx lockctx.Proof, rw ReaderBatchWriter,
blockHeight uint64, txData []accessmodel.AccountTransaction) error and any
generated constructor (NewAccountTransactionsWriter) and imports reflect
lockctx, ReaderBatchWriter, and accessmodel types; replace the old
Store(blockHeight uint64, txData []access.AccountTransaction, batch
storage.Batch) and ensure the mock package/type names match the current
interface definitions.
🧹 Nitpick comments (7)
storage/indexes/helpers.go (1)

8-12: Consider renaming the parameter from `height` to `value`.

This is a generic uint64 encoding helper, but the parameter name `height` implies it's specific to block heights. A more descriptive name like `value` or `v` would better reflect its general purpose. Also, `make([]byte, 8)` with `binary.BigEndian.PutUint64` is slightly more idiomatic than allocating a zero-length slice and appending:

♻️ Suggested simplification

```diff
-func encodedUint64(height uint64) []byte {
-	payload := make([]byte, 0, 8)
-	return binary.BigEndian.AppendUint64(payload, height)
+func encodedUint64(v uint64) []byte {
+	payload := make([]byte, 8)
+	binary.BigEndian.PutUint64(payload, v)
+	return payload
 }
```

module/state_synchronization/indexer/extended/indexer.go (2)
11-37: Clean up design/planning comments before merging.

Lines 11–37 contain brainstorming notes, API endpoint sketches (`/accounts/v1/account/{address}/transactions`), and design thoughts that read like working notes rather than package documentation. Consider either converting these into a proper package-level doc comment or removing them to keep the source clean.
44-48: Document the `Events` map key.

The `Events` field is typed `map[uint32][]flow.Event`, but the meaning of the `uint32` key (presumably the transaction index within the block) is not documented. A brief comment would improve clarity.

♻️ Suggested documentation

```diff
 type BlockData struct {
 	Header       *flow.Header
 	Transactions []*flow.TransactionBody
-	Events map[uint32][]flow.Event
+	// Events maps transaction index (within the block) to its emitted events.
+	Events map[uint32][]flow.Event
 }
```

module/state_synchronization/indexer/extended/account_transactions.go (1)
96-112: Consider whether `ParseTransferEvent` errors should be fatal.

If a single event fails to parse, the entire block indexing fails. Depending on the robustness requirements, you may want to log and skip unparseable events rather than aborting. If transfer events can have unexpected formats (e.g., from older protocol versions or third-party contracts), this could block indexing progress.
module/state_synchronization/indexer/extended/extended_indexer_test.go (1)
105-179: Verbose mock sequencing in `TestExtendedIndexer_AllBackfilling`.

The 12 sequential `LatestIndexedHeight` expectations (lines 113-125) are brittle: any change to the backfill cadence will require re-counting. Consider using a stateful mock return function instead, though this is a stylistic preference for test maintainability.

module/state_synchronization/indexer/indexer_core.go (1)
263-264: Consider pre-allocating the `txs` and `events` slices.

You know the data is coming from `data.ChunkExecutionDatas`, so you could estimate capacity to reduce allocations. This is a minor optimization for the hot indexing path.

module/state_synchronization/indexer/extended/extended_indexer.go (1)
320-362: `blockData` reconstructs full block data from storage; looks correct.

System collection handling via `c.systemCollections.ByHeight(...).SystemCollection(...)` correctly appends system transactions. Event grouping by `TransactionIndex` mirrors the same logic in `IndexBlockData`. One minor note: the event-grouping logic (lines 335–338) is duplicated between here and `IndexBlockData` (lines 145–148). Consider extracting a small helper to reduce duplication.

Optional: extract event-grouping helper

```go
func groupEventsByTxIndex(events []flow.Event) map[uint32][]flow.Event {
	m := make(map[uint32][]flow.Event)
	for _, event := range events {
		m[event.TransactionIndex] = append(m[event.TransactionIndex], event)
	}
	return m
}
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cmd/access/node_builder/access_node_builder.go`:
- Around line 1510-1513: Validate that builder.extendedIndexingBackfillWorkers
is a positive integer in the existing ValidateFlags method: check the value set
by flags.IntVar for "extended-indexing-backfill-workers"
(builder.extendedIndexingBackfillWorkers) and return an error (or adjust/clip
and log) if it's <= 0 so backfill logic never runs with zero or negative
workers; update ValidateFlags to perform this check and produce a clear error
message referencing the flag name and the invalid value.
- Around line 1093-1104: The extended indexer factory can race and observe
builder.ExtendedIndexer == nil because it runs as a normal Component; change the
registration to a DependableComponent that depends on "execution data indexer"
(or merge its ReadyDoneAware return into the existing
DependableComponent("execution data indexer") factory) so the ExtendedIndexer is
assigned before its factory runs; update the Component("extended indexer") call
to DependableComponent("extended indexer", []string{"execution data indexer"},
...) or move its return into the DependableComponent at "execution data indexer"
and remove the separate Component to eliminate the nil race on
builder.ExtendedIndexer.
🧹 Nitpick comments (1)
cmd/observer/node_builder/observer_builder.go (1)
731-748: Minor: flag description could be more specific.

Line 735: the description says "whether to enable account data indexing" but the flag is `extended-indexing-enabled`. Consider aligning the description with the other extended indexing flags.

📝 Suggested fix

```diff
 flags.BoolVar(&builder.extendedIndexingEnabled, "extended-indexing-enabled", defaultConfig.extendedIndexingEnabled,
-	"whether to enable account data indexing")
+	"whether to enable extended indexing (e.g. account transaction indexing)")
```
Co-authored-by: Leo Zhang <zhangchiqing@gmail.com>
can we add comments about the groupLookup and latestBlockData? I don't quite get why storing multiple indexer. Is it ever growing? If so, how to prevent memory leak?
I will be adding another indexer right away for account transfers (NFT/FT transfers per account). Then we're planning indexes for a few other types of things like epoch reward payments, account creation details, etc. The intention is for this to be a growing list of special indexes that an AN operator could enable, so we need them to be easy to add later and to gracefully backfill. I will add a comment.
Co-authored-by: Leo Zhang <zhangchiqing@gmail.com>
Co-authored-by: Leo Zhang <zhangchiqing@gmail.com>
zhangchiqing
left a comment
Looks good. Just 1 minor suggestion
tim-barry
left a comment
LGTM overall, just one question relating to guaranteeing forward progress of the extended indexers in the presence of potentially missing data: do we know that the block data will not be partially missing (i.e., only fully indexed blocks will ever be reachable during backfill, or an irrecoverable error in some other component would have to occur to allow missing data)?
Suggested change:

```diff
 	offset += flow.AddressLength

 	// Decode height (one's complement)
-	onesComplement := binary.BigEndian.Uint64(key[offset:])
+	onesComplement := binary.BigEndian.Uint64(key[offset : offset+8])
```
Suggested change:

```diff
 	return address, height, txIndex, nil
 }

-// accountTxHeightLookup reads a height value from the database.
+// accountTxHeightLookup reads a height boundary (first/last) for the account transactions index from the database.
```

Clarify that the height here is for the index as a whole, since the function body is pretty generic.
Suggested change:

```diff
 	return c, nil
 }

-// IndexBlockData captures the block data and makes it available to the indexers.
+// IndexBlockExecutionData captures the block data and makes it available to the indexers.
```
```go
	data, err := c.blockDataFromStorage(height, latestBlockData)
	if err != nil {
		if errors.Is(err, storage.ErrNotFound) {
			continue // skip group for this iteration
		}
```

In this case we are backfilling and some part of the block data is not found; the indexers waiting for that height will get skipped and remain waiting for that height. Do we ever expect this to happen (a non-live block to be not fully indexed yet and part of the backfill group), and if so can we safely make the assumption that either the necessary data will eventually be filled in or some other component/engine on the node will throw an error?
Summary by CodeRabbit
New Features
Tests
Chores