
Add in-process endpoint caching with moka #54

Open

findolor wants to merge 5 commits into main from add-endpoint-caching

Conversation


@findolor findolor commented Feb 27, 2026

Motivation

Every read endpoint hits external sources (GitHub for tokens, Raindex subgraph for orders/swaps) on every request with zero caching. The token list is static (pinned GitHub commit), order data changes infrequently, and swap quotes tolerate a few seconds of staleness. This adds in-process TTL caches to eliminate redundant network calls. Redis is unnecessary given the single-instance deployment with SQLite backend.

Solution

Introduces a generic AppCache<K, V> wrapper over moka::future::Cache in src/cache.rs, providing get, insert, get_or_try_insert (atomic miss-fetch that never caches errors), and invalidate_all. Each route module owns its cache type, TTL, and capacity:

  • Tokens (GET /v1/tokens): 10-minute TTL, single entry — pinned GitHub URL rarely changes
  • Order detail (GET /v1/order/<hash>): 60s TTL, 1000 capacity — keyed by order hash
  • Swap quote (POST /v1/swap/quote): 5s TTL, 500 capacity — keyed by (input, output, amount) tuple

A CacheGroup with type-erased Arc<dyn Invalidatable> handles bulk invalidation — PUT /admin/registry clears all registry-dependent caches (order + swap) through a single &State<CacheGroup> parameter. Token cache is excluded since it fetches from GitHub, not the registry.

Not cached: POST /v1/swap/calldata (must produce fresh tx data), write operations (deploy, cancel), GET /registry (already in-memory).

Checks

By submitting this for review, I'm confirming I've done the following:

  • made this PR as small as possible
  • unit-tested any new functionality
  • linked any relevant issues or PRs
  • included screenshots (if this involves a front-end change)

Summary by CodeRabbit

  • New Features
    • Integrated caching across order details, swap quotes, and token list endpoints with configurable TTLs and capacity; caches are managed at startup and cleared on registry updates.
  • Behavior Change
    • Endpoints now return cached responses when available and populate cache on first fetch; errors are not cached.
  • Tests
    • Added tests validating cache hits, misses, error handling, and cache invalidation on registry update.
  • Chores
    • Added a caching library dependency to support the new subsystem.

  • Introduce AppCache<K, V> generic wrapper over moka::future::Cache with TTL support, CacheGroup for type-erased bulk invalidation, and get_or_try_insert for atomic cache-miss-fetch patterns. Register order and swap caches plus CacheGroup as Rocket managed state.
  • Cache GET /v1/tokens responses using get_or_try_insert so errors are never cached. Register TokenCache via the existing tokens fairing.
  • Cache GET /v1/order/<hash> (60s TTL, 1000 capacity) and POST /v1/swap/quote (5s TTL, 500 capacity). Each route module owns its cache type alias and configuration.
  • Use CacheGroup to bulk-invalidate all registry-dependent caches when PUT /admin/registry succeeds. Token cache is excluded since it fetches from GitHub, not the registry.
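
The invalidation ordering in that last step can be sketched as follows; update_registry, the error type, and the synchronous signatures are stand-ins for the real async Rocket handler in src/routes/admin.rs:

```rust
// Minimal stand-ins; the real CacheGroup and registry writer are async
// and live in src/cache.rs and src/routes/admin.rs.
struct CacheGroup;

impl CacheGroup {
    fn invalidate_all(&self) {
        // clears every registered registry-dependent cache (order + swap)
    }
}

fn update_registry(body: &str) -> Result<(), String> {
    if body.is_empty() {
        return Err("empty registry".to_string());
    }
    Ok(())
}

// The ordering is the point: invalidate only after the update succeeds,
// so a failed PUT leaves still-valid cached data untouched.
fn put_registry(body: &str, caches: &CacheGroup) -> Result<(), String> {
    update_registry(body)?;
    caches.invalidate_all();
    Ok(())
}
```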
@findolor findolor self-assigned this Feb 27, 2026
@findolor findolor requested a review from hardyjosh February 27, 2026 09:38

coderabbitai bot commented Feb 27, 2026

📝 Walkthrough

Walkthrough

Introduces a crate-local caching subsystem (moka) with AppCache, Invalidatable, and CacheGroup; integrates caches into order, swap, and tokens endpoints; registers caches in Rocket state; and invalidates caches on registry updates. Includes async tests for cache behaviors.

Changes

Cohort / File(s) — Summary

  • Manifest (Cargo.toml): Added moka = { version = "0.12", features = ["future"] }.
  • Cache Core (src/cache.rs): New AppCache wrapper around moka::future::Cache, Invalidatable trait, CacheGroup for centralized invalidation, and async unit tests covering insert/get/get_or_try_insert/error handling and group invalidation.
  • App Wiring (src/main.rs): Registered caches (order, swap) and a CacheGroup in Rocket managed state; imported the cache module.
  • Order — module & handler (src/routes/order/mod.rs, src/routes/order/get_order.rs): Added OrderDetailCache type & factory; get_order now accepts cache State and uses get_or_try_insert to serve or compute OrderDetail. Added handler test for cache hit.
  • Swap — module & handler (src/routes/swap/mod.rs, src/routes/swap/quote.rs): Added SwapQuoteCache type & factory; post_swap_quote now accepts cache State and uses cache-backed get_or_try_insert keyed by (input, output, amount). Added handler test for cache hit.
  • Tokens (src/routes/tokens.rs): Added TokenCache type, integrated caching into get_tokens with lazy fetch-and-cache semantics, ensured errors are not cached, and updated the fairing to manage the cache; tests added/updated for hit/miss and error handling.
  • Admin & invalidation (src/routes/admin.rs): put_registry now accepts CacheGroup state, calls invalidate_all() after updating the registry, and updated test asserts caches are cleared.
  • Error conversions (src/error.rs): Added From<Arc<ApiError>> for ApiError impl to allow converting shared errors into ApiError.
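
That last conversion exists because moka's single-flight API returns errors as Arc<E>. A minimal sketch, assuming a simplified stand-in ApiError (the real type lives in src/error.rs):

```rust
use std::sync::Arc;

// Stand-in for the crate's ApiError; a newtype over String for brevity.
#[derive(Debug, Clone, PartialEq)]
pub struct ApiError(pub String);

// moka's try_get_with hands back shared errors as Arc<E>; this impl lets
// a handler convert them back into an owned ApiError via `?`.
impl From<Arc<ApiError>> for ApiError {
    fn from(err: Arc<ApiError>) -> Self {
        // Take ownership if this is the last reference, otherwise clone.
        Arc::try_unwrap(err).unwrap_or_else(|shared| (*shared).clone())
    }
}
```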

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant API
  participant Cache
  participant RaindexDB as Raindex/DB

  Client->>API: Request (get_order / post_swap_quote / get_tokens)
  API->>Cache: get_or_try_insert(key, fetch_fn)
  alt cache hit
    Cache-->>API: value (cached)
  else cache miss
    Cache->>RaindexDB: fetch data (via shared_raindex / upstream)
    RaindexDB-->>Cache: data
    Cache-->>API: data
  end
  API-->>Client: Response (data)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • 0xgleb
  • hardyjosh

Poem

🐇 I found a nook to stash and tuck,
Orders, quotes, and tokens — safe from muck.
A miss, a fetch, then cached delight,
Registry clears and all is bright!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 68.57%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title 'Add in-process endpoint caching with moka' directly and clearly describes the main change: in-process caching of endpoints using the moka library.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/cache.rs`:
- Around line 33-45: Replace the manual check-then-fetch-then-insert in
get_or_try_insert with moka's single-flight API: call self.0.try_get_with(key,
|| async { fetch().await.map_err(Arc::new) }) so concurrent misses for the same
key are coalesced and errors are not cached; update the function
signature/generic bound so the error returned is Arc<E> (Result<V, Arc<E>>) and
ensure returned V is cloned or owned as moka requires. Keep the function name
get_or_try_insert and use self.0.try_get_with to locate the change.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0468a1c and 19016f1.

📒 Files selected for processing (9)
  • Cargo.toml
  • src/cache.rs
  • src/main.rs
  • src/routes/admin.rs
  • src/routes/order/get_order.rs
  • src/routes/order/mod.rs
  • src/routes/swap/mod.rs
  • src/routes/swap/quote.rs
  • src/routes/tokens.rs

src/cache.rs Outdated
Comment on lines +33 to +45
pub(crate) async fn get_or_try_insert<F, Fut, E>(&self, key: K, fetch: F) -> Result<V, E>
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = Result<V, E>>,
{
    if let Some(cached) = self.0.get(&key).await {
        tracing::info!("cache hit");
        return Ok(cached);
    }
    let value = fetch().await?;
    self.0.insert(key, value.clone()).await;
    Ok(value)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find Cargo.toml to check moka version
find . -name "Cargo.toml" -type f | head -5

Repository: ST0x-Technology/st0x.rest.api

Length of output: 85


🏁 Script executed:

# Search for moka dependency version
rg "moka" --type toml | head -20

Repository: ST0x-Technology/st0x.rest.api

Length of output: 134


🌐 Web query:

moka 0.12 try_get_with single-flight cache API documentation

💡 Result:

In Moka v0.12.x, the “single-flight” (request coalescing) API is try_get_with on Cache:

  • Sync cache: moka::sync::Cache::try_get_with(key, init) -> Result<V, Arc<E>>

    • If key is missing, it runs init: FnOnce() -> Result<V, E> and inserts the value only on Ok(V). On Err(E), nothing is inserted and you get Err(Arc<E>).
    • Single-flight guarantee: concurrent calls for the same missing key are coalesced so only one init runs; the rest wait (as long as the error type E matches).
      Docs: [1]
  • Async cache: moka::future::Cache::try_get_with(key, init).await -> Result<V, Arc<E>>

    • Same behavior, but init is a Future<Output = Result<V, E>>.
    • Same single-flight/coalescing guarantee for concurrent calls on the same missing key.
      Docs: [2]

Note: v0.12.0 introduced major API/internal changes (notably, removing background threads); see the migration notes/changelog if you’re upgrading. [3]

Sources: [1] [2] [3]


🏁 Script executed:

# Check the actual src/cache.rs file to verify the snippet and any other issues
cat -n src/cache.rs

Repository: ST0x-Technology/st0x.rest.api

Length of output: 6154


🏁 Script executed:

# Also check if there are imports and overall structure to understand full context
head -40 src/cache.rs

Repository: ST0x-Technology/st0x.rest.api

Length of output: 1178


Fix get_or_try_insert to eliminate concurrent cache stampedes using moka's single-flight API.

Lines 38–44 implement a manual check-then-fetch-then-insert pattern. Under concurrent misses for the same key, multiple tasks can all pass the cache check and independently execute fetch(), defeating single-flight semantics and amplifying upstream calls.

Use moka's try_get_with instead, which coalesces concurrent misses and does not cache errors. Note: the return type changes from Result<V, E> to Result<V, Arc<E>>.

💡 Suggested fix
-    pub(crate) async fn get_or_try_insert<F, Fut, E>(&self, key: K, fetch: F) -> Result<V, E>
+    pub(crate) async fn get_or_try_insert<F, Fut, E>(
+        &self,
+        key: K,
+        fetch: F,
+    ) -> Result<V, Arc<E>>
     where
         F: FnOnce() -> Fut,
         Fut: Future<Output = Result<V, E>>,
+        E: Send + Sync + 'static,
     {
-        if let Some(cached) = self.0.get(&key).await {
-            tracing::info!("cache hit");
-            return Ok(cached);
-        }
-        let value = fetch().await?;
-        self.0.insert(key, value.clone()).await;
-        Ok(value)
+        self.0.try_get_with(key, fetch).await
     }


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
src/cache.rs (2)

113-140: Add a concurrent-miss test to lock in single-flight behavior.

The tests cover hit/miss/error semantics well, but a multi-task same-key miss test would protect the anti-stampede guarantee this PR depends on.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cache.rs` around lines 113 - 140, Add a concurrent-miss test to verify
single-flight behavior by creating a new async test (e.g.,
test_concurrent_miss_single_flight) that spawns multiple tasks which
concurrently call AppCache::get_or_try_insert on the same key; use a shared
counter (AtomicUsize or Mutex) inside the fetch closure that sleeps/delays
briefly and increments the counter so you can assert the fetch ran only once,
ensure all tasks receive the same Ok(value), and confirm the
cache.get(&"key").await returns the cached value; reference
AppCache::get_or_try_insert, AppCache::get, and AppCache::insert behavior in the
test.

39-39: Defer fetch() call by wrapping in async block.

At Line 39, fetch() is invoked eagerly before try_get_with determines if initialization is needed. Any synchronous work in the factory or before the first .await runs even on cache hits. Wrap the call in async move { ... } to defer execution until a cache miss:

♻️ Proposed fix
-        self.0.try_get_with(key, fetch()).await
+        self.0.try_get_with(key, async move { fetch().await }).await
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cache.rs` at line 39, The call to fetch() is executed eagerly in the
expression self.0.try_get_with(key, fetch()).await; change the second argument
to a deferred async block so the factory runs only on a cache miss, e.g. pass
async move { fetch().await } (or async move { fetch() } if fetch already returns
the final value) instead of fetch(), keeping the call site
self.0.try_get_with(key, ...).await and preserving ownership by using async
move.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 19016f1 and 2445289.

📒 Files selected for processing (2)
  • src/cache.rs
  • src/error.rs
