
Simplify priority limits #580

Open
joshuahannan wants to merge 1 commit into master from simplify-priorities

Conversation

joshuahannan commented Feb 18, 2026

Refactors the FlowTransactionScheduler contract so that each priority has its own effort bucket, with no shared limits.

Upgrade Risk Analysis

  Storage/Type Compatibility: No Issues                                                                                                                                                         

  The Config struct retains all the same fields (slotSharedEffortLimit, priorityEffortReserve, etc.) — they're just zeroed out in the new init. The stored Config on mainnet will deserialize
  correctly after the upgrade because no fields were removed or changed type. Cadence contract upgrades require this to be true, and it is. ✓

  The slotUsedEffort, slotQueue, transactions, and sortedTimestamps structures are all unchanged. ✓
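
  For orientation, a rough sketch of the retained field layout is below. It is a minimal illustration using only the field names mentioned above; the real Config has more fields ("etc.") and likely keys its dictionaries by the contract's Priority enum rather than the raw UInt8 used here.

```cadence
// Illustrative shape only; not the contract's actual definition.
access(all) struct Config {
    access(all) let priorityEffortLimit: {UInt8: UInt64}   // unchanged: 15000 / 7500 / 2500
    access(all) let priorityEffortReserve: {UInt8: UInt64} // legacy, zeroed by the new init
    access(all) let slotSharedEffortLimit: UInt64          // legacy, zeroed by the new init
    access(all) let slotTotalEffortLimit: UInt64           // 25000 after the upgrade

    init() {
        // post-upgrade values described in this analysis
        self.priorityEffortLimit = {0: 15000, 1: 7500, 2: 2500} // High / Medium / Low
        self.priorityEffortReserve = {0: 0, 1: 0, 2: 0}
        self.slotSharedEffortLimit = 0
        self.slotTotalEffortLimit = 25000
    }
}

access(all) fun main(): UInt64 {
    return Config().slotTotalEffortLimit // 25000
}
```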

  ---
  The Core Accounting Question

  The critical question is whether the pre-upgrade slotUsedEffort data will be correctly interpreted by the new getSlotAvailableEffort logic.

  Good news: the priorityEffortLimit values are identical before and after the upgrade:

  ┌──────────┬──────────────────────┬───────────┐
  │ Priority │      Old limit       │ New limit │
  ├──────────┼──────────────────────┼───────────┤
  │ High     │ 10000 + 5000 = 15000 │ 15000     │
  ├──────────┼──────────────────────┼───────────┤
  │ Medium   │ 2500 + 5000 = 7500   │ 7500      │
  ├──────────┼──────────────────────┼───────────┤
  │ Low      │ 2500                 │ 2500      │
  └──────────┴──────────────────────┴───────────┘

  So the new code's limit - used calculation uses the same ceiling values the old code enforced. Individual per-priority overflows are impossible.
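
  As a sketch of what that reduces to (illustrative names and signature; not the contract's actual getSlotAvailableEffort):

```cadence
// Illustrative only: under the new model, per-priority availability is a
// saturating `limit - used`, with no shared-pool term.
access(all) fun availableEffort(limit: UInt64, used: UInt64): UInt64 {
    // a slot that has already used its full limit reports 0 rather than underflowing
    return used >= limit ? 0 : limit - used
}

access(all) fun main(): UInt64 {
    // e.g. a slot that has used 6000 Medium effort has 1500 left against the 7500 limit
    return availableEffort(limit: 7500, used: 6000)
}
```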

  ---
  The Real Risk: In-Flight Slots with Cross-Priority Shared Pool Usage

  This is the one genuine concern. The old model had a shared pool (5000 effort units) that was jointly consumed by High and Medium. The new model treats each priority as independent.

  Concrete example:

  A slot exists at upgrade time with:
  - slotUsedEffort = {High: 14000, Medium: 0, Low: 0}
  - Under old rules: mediumSharedUsed = 0, highSharedUsed = 14000 - 10000 = 4000 → shared pool has only 1000 left → Medium available = 1000 + 2500 = 3500
  - Under new rules: Medium available = 7500 - 0 = 7500

  So after the upgrade, new Medium priority transactions can be scheduled into that slot with up to 7500 effort, instead of the 3500 the old rules would allow. The total slot effort could
  reach 14000 + 7500 = 21500, exceeding the old slotTotalEffortLimit of 17500 but within the new intentional ceiling of 25000.
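
  A small illustrative script re-deriving the two Medium availability figures above (constants are taken from this example; the names are hypothetical, not the contract's):

```cadence
// Re-derives old vs. new Medium availability for a slot with
// slotUsedEffort = {High: 14000, Medium: 0} at upgrade time.
access(all) fun main(): [UInt64] {
    let highUsed: UInt64 = 14000
    let mediumUsed: UInt64 = 0

    // Old model: Medium reserve (2500) plus whatever is left of the 5000 shared pool
    let highReserve: UInt64 = 10000
    let mediumReserve: UInt64 = 2500
    let sharedLimit: UInt64 = 5000
    let highSharedUsed: UInt64 = highUsed > highReserve ? highUsed - highReserve : 0 // 4000
    let sharedLeft: UInt64 = sharedLimit - highSharedUsed                            // 1000
    let oldMediumAvailable: UInt64 = (mediumReserve - mediumUsed) + sharedLeft       // 3500

    // New model: Medium is independent with a flat 7500 limit
    let mediumLimit: UInt64 = 7500
    let newMediumAvailable: UInt64 = mediumLimit - mediumUsed                        // 7500

    return [oldMediumAvailable, newMediumAvailable] // [3500, 7500], matching the numbers above
}
```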

  How bad is this in practice?
  - The effect is bounded by the shared pool size (5000 effort units)
  - It only affects slots that are partially filled at the moment of the upgrade
  - Scheduled transactions are typically days/weeks out, so most in-flight slots will already be in the past (processed) by the time the upgrade deploys
  - The collection effort limit (500k) and transaction count limit (150) provide a hard safety ceiling regardless
  - This cannot cause a panic — just means some slots absorb slightly more work than the old rules intended

  ---
  Low Priority Transactions: Safe

  Three points, all benign:

  1. Pre-upgrade Low txs in slots: The old model capped Low at min(17500 - highUsed - mediumUsed, 2500), which is always ≤ 2500. So slotUsedEffort[Low] can never exceed 2500 pre-upgrade. The
  new limit is also 2500. No underflow on saturatingSubtract. ✓
  2. Pre-upgrade Low txs that were bumped: These were already moved to later slots with correct slotUsedEffort accounting. They stay where they are and execute normally. ✓
  3. Post-upgrade High txs will NOT bump existing Low txs from shared slots — that rescheduleLowPriorityTransactions logic is gone. This is actually beneficial for Low tx users (their txs stay
   put).

  ---
  The setConfig Call

  Running set_config_details with the same limit values (15k/7.5k/2.5k) is safe. The only things that change in the stored Config are the legacy fields (slotSharedEffortLimit → 0,
  priorityEffortReserve → {0,0,0}, slotTotalEffortLimit → 25000). The new code doesn't use those for scheduling, so there's no behavioral change from the setConfig call itself.

  The post-condition collectionEffortLimit > slotTotalEffortLimit changes to 500000 > 25000 from 500000 > 17500 — still valid. ✓
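
  A hedged sketch of the shape of that check (the actual condition text and surrounding code in the contract may differ):

```cadence
// Illustrative: the post-condition relating the two limits.
access(all) fun setConfigSketch(collectionEffortLimit: UInt64, slotTotalEffortLimit: UInt64) {
    post {
        collectionEffortLimit > slotTotalEffortLimit:
            "collection effort limit must exceed the slot total effort limit"
    }
    // after the upgrade: 500000 > 25000 (previously 500000 > 17500), so the condition still holds
}

access(all) fun main() {
    setConfigSketch(collectionEffortLimit: 500000, slotTotalEffortLimit: 25000)
}
```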

  ---
  Summary

  ┌──────────────────────────────────────────────────┬──────────┬────────────────────────────────────────────────────────────┐
  │                       Risk                       │ Severity │                           Notes                            │
  ├──────────────────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────────┤
  │ State/type corruption                            │ None     │ All fields preserved, same types                           │
  ├──────────────────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────────┤
  │ Per-priority accounting for new txs              │ None     │ Same limits (15k/7.5k/2.5k)                                │
  ├──────────────────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────────┤
  │ In-flight slots loosening shared pool constraint │ Low      │ Transient, bounded by 5k effort, resolves as slots process │
  ├──────────────────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────────┤
  │ Low priority accounting                          │ None     │ Always bounded at 2500 pre-upgrade                         │
  ├──────────────────────────────────────────────────┼──────────┼────────────────────────────────────────────────────────────┤
  │ setConfig call safety                            │ None     │ Same effective limits, legacy fields zeroed                │
  └──────────────────────────────────────────────────┴──────────┴────────────────────────────────────────────────────────────┘

  The upgrade is safe. The only real concern is the transient slot loosening for in-flight slots that were consuming the shared pool, and that's a minor behavioral change rather than a
  correctness issue.
