Conversation
Introduce AppCache<K, V> generic wrapper over moka::future::Cache with TTL support, CacheGroup for type-erased bulk invalidation, and get_or_try_insert for atomic cache-miss-fetch patterns. Register order and swap caches plus CacheGroup as Rocket managed state.
Cache GET /v1/tokens responses using get_or_try_insert so errors are never cached. Register TokenCache via the existing tokens fairing.
Cache GET /v1/order/<hash> (60s TTL, 1000 capacity) and POST /v1/swap/quote (5s TTL, 500 capacity). Each route module owns its cache type alias and configuration.
Use CacheGroup to bulk-invalidate all registry-dependent caches when PUT /admin/registry succeeds. Token cache is excluded since it fetches from GitHub, not the registry.
📝 Walkthrough

Introduces a crate-local caching subsystem (moka) with AppCache, Invalidatable, and CacheGroup; integrates caches into the order, swap, and tokens endpoints; registers caches in Rocket state; and invalidates caches on registry updates. Includes async tests for cache behaviors.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,230,255,0.5)
        participant Client
        participant Rocket as API
        participant Cache
        participant RaindexDB as Raindex/DB
    end
    Client->>API: Request (get_order / post_swap_quote / get_tokens)
    API->>Cache: get_or_try_insert(key, fetch_fn)
    alt cache hit
        Cache-->>API: value (cached)
    else cache miss
        Cache->>RaindexDB: fetch data (via shared_raindex / upstream)
        RaindexDB-->>Cache: data
        Cache-->>API: data
    end
    API-->>Client: Response (data)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cache.rs`:
- Around line 33-45: Replace the manual check-then-fetch-then-insert in
get_or_try_insert with moka's single-flight API: call self.0.try_get_with(key,
|| async { fetch().await.map_err(Arc::new) }) so concurrent misses for the same
key are coalesced and errors are not cached; update the function
signature/generic bound so the error returned is Arc<E> (Result<V, Arc<E>>) and
ensure returned V is cloned or owned as moka requires. Keep the function name
get_or_try_insert and use self.0.try_get_with to locate the change.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- Cargo.toml
- src/cache.rs
- src/main.rs
- src/routes/admin.rs
- src/routes/order/get_order.rs
- src/routes/order/mod.rs
- src/routes/swap/mod.rs
- src/routes/swap/quote.rs
- src/routes/tokens.rs
src/cache.rs (Outdated)
```rust
pub(crate) async fn get_or_try_insert<F, Fut, E>(&self, key: K, fetch: F) -> Result<V, E>
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = Result<V, E>>,
{
    if let Some(cached) = self.0.get(&key).await {
        tracing::info!("cache hit");
        return Ok(cached);
    }
    let value = fetch().await?;
    self.0.insert(key, value.clone()).await;
    Ok(value)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find Cargo.toml to check moka version
find . -name "Cargo.toml" -type f | head -5
```

Repository: ST0x-Technology/st0x.rest.api
Length of output: 85

🏁 Script executed:

```shell
# Search for moka dependency version
rg "moka" --type toml | head -20
```

Repository: ST0x-Technology/st0x.rest.api
Length of output: 134
🌐 Web query: moka 0.12 try_get_with single-flight cache API documentation

💡 Result:

In Moka v0.12.x, the "single-flight" (request coalescing) API is `try_get_with` on `Cache`:

- Sync cache: `moka::sync::Cache::try_get_with(key, init) -> Result<V, Arc<E>>`
  - If `key` is missing, it runs `init: FnOnce() -> Result<V, E>` and inserts the value only on `Ok(V)`. On `Err(E)`, nothing is inserted and you get `Err(Arc<E>)`.
  - Single-flight guarantee: concurrent calls for the same missing key are coalesced so only one `init` runs; the rest wait (as long as the error type `E` matches).
- Async cache: `moka::future::Cache::try_get_with(key, init).await -> Result<V, Arc<E>>`
  - Same behavior, but `init` is a `Future<Output = Result<V, E>>`.
  - Same single-flight/coalescing guarantee for concurrent calls on the same missing key.

Note: v0.12.0 introduced major API/internal changes (notably, removing background threads); see the migration notes/changelog if you're upgrading.
🏁 Script executed:

```shell
# Check the actual src/cache.rs file to verify the snippet and any other issues
cat -n src/cache.rs
```

Repository: ST0x-Technology/st0x.rest.api
Length of output: 6154

🏁 Script executed:

```shell
# Also check if there are imports and overall structure to understand full context
head -40 src/cache.rs
```

Repository: ST0x-Technology/st0x.rest.api
Length of output: 1178
Fix `get_or_try_insert` to eliminate concurrent cache stampedes using moka's single-flight API.

Lines 38–44 implement a manual check-then-fetch-then-insert pattern. Under concurrent misses for the same key, multiple tasks can all pass the cache check and independently execute `fetch()`, defeating single-flight semantics and amplifying upstream calls.

Use moka's `try_get_with` instead, which coalesces concurrent misses and does not cache errors. Note: the return type changes from `Result<V, E>` to `Result<V, Arc<E>>`.
💡 Suggested fix

```diff
-    pub(crate) async fn get_or_try_insert<F, Fut, E>(&self, key: K, fetch: F) -> Result<V, E>
+    pub(crate) async fn get_or_try_insert<F, Fut, E>(
+        &self,
+        key: K,
+        fetch: F,
+    ) -> Result<V, Arc<E>>
     where
         F: FnOnce() -> Fut,
         Fut: Future<Output = Result<V, E>>,
+        E: Send + Sync + 'static,
     {
-        if let Some(cached) = self.0.get(&key).await {
-            tracing::info!("cache hit");
-            return Ok(cached);
-        }
-        let value = fetch().await?;
-        self.0.insert(key, value.clone()).await;
-        Ok(value)
+        self.0.try_get_with(key, fetch()).await
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub(crate) async fn get_or_try_insert<F, Fut, E>(
    &self,
    key: K,
    fetch: F,
) -> Result<V, Arc<E>>
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = Result<V, E>>,
    E: Send + Sync + 'static,
{
    self.0.try_get_with(key, fetch()).await
}
```
🧹 Nitpick comments (2)
src/cache.rs (2)
113-140: Add a concurrent-miss test to lock in single-flight behavior.

The tests cover hit/miss/error semantics well, but a multi-task same-key miss test would protect the anti-stampede guarantee this PR depends on.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.rs` around lines 113 - 140, Add a concurrent-miss test to verify single-flight behavior by creating a new async test (e.g., test_concurrent_miss_single_flight) that spawns multiple tasks which concurrently call AppCache::get_or_try_insert on the same key; use a shared counter (AtomicUsize or Mutex) inside the fetch closure that sleeps/delays briefly and increments the counter so you can assert the fetch ran only once, ensure all tasks receive the same Ok(value), and confirm the cache.get(&"key").await returns the cached value; reference AppCache::get_or_try_insert, AppCache::get, and AppCache::insert behavior in the test.
39: Defer the `fetch()` call by wrapping it in an async block.

At line 39, `fetch()` is invoked eagerly before `try_get_with` determines if initialization is needed. Any synchronous work in the factory or before the first `.await` runs even on cache hits. Wrap the call in `async move { ... }` to defer execution until a cache miss:

♻️ Proposed fix

```diff
-        self.0.try_get_with(key, fetch()).await
+        self.0.try_get_with(key, async move { fetch().await }).await
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.rs` at line 39, the call to `fetch()` is executed eagerly in the expression `self.0.try_get_with(key, fetch()).await`; change the second argument to a deferred async block so the factory runs only on a cache miss, e.g. pass `async move { fetch().await }` (or `async move { fetch() }` if fetch already returns the final value) instead of `fetch()`, keeping the call site `self.0.try_get_with(key, ...).await` and preserving ownership by using `async move`.
Motivation
Every read endpoint hits external sources (GitHub for tokens, Raindex subgraph for orders/swaps) on every request with zero caching. The token list is static (pinned GitHub commit), order data changes infrequently, and swap quotes tolerate a few seconds of staleness. This adds in-process TTL caches to eliminate redundant network calls. Redis is unnecessary given the single-instance deployment with SQLite backend.
Solution

Introduces a generic `AppCache<K, V>` wrapper over `moka::future::Cache` in `src/cache.rs`, providing `get`, `insert`, `get_or_try_insert` (atomic miss-fetch that never caches errors), and `invalidate_all`. Each route module owns its cache type, TTL, and capacity:

- Tokens (`GET /v1/tokens`): 10-minute TTL, single entry — pinned GitHub URL rarely changes
- Order (`GET /v1/order/<hash>`): 60s TTL, 1000 capacity — keyed by order hash
- Swap quote (`POST /v1/swap/quote`): 5s TTL, 500 capacity — keyed by (input, output, amount) tuple

A `CacheGroup` with type-erased `Arc<dyn Invalidatable>` handles bulk invalidation: `PUT /admin/registry` clears all registry-dependent caches (order + swap) through a single `&State<CacheGroup>` parameter. The token cache is excluded since it fetches from GitHub, not the registry.

Not cached: `POST /v1/swap/calldata` (must produce fresh tx data), write operations (deploy, cancel), `GET /registry` (already in-memory).

Checks
By submitting this for review, I'm confirming I've done the following: