This repository documents a real-world performance experiment comparing three frontend data-processing approaches in Angular:
- Naive TypeScript (object-based)
- Optimized TypeScript (data-oriented, typed arrays + indexes)
- Rust compiled to WebAssembly (WASM)
The goal was not to "prove WASM is faster" (it is), but to answer a more practical question:
How far can well-written TypeScript go before WASM becomes necessary?
Modern frontend applications increasingly deal with large datasets (analytics dashboards, logs, telemetry, financial data). A common assumption is that JavaScript quickly becomes unusable at scale and that Rust + WASM is the only viable solution.
This experiment challenges that assumption by first optimizing the data layout and algorithms in TypeScript, then comparing the results to WASM.
- Angular (standalone components)
- TypeScript
- Rust + wasm-bindgen
- Chrome DevTools (Performance & Memory profiling)
- Synthetic dataset (for reproducibility)
- Size: 20k → 50M rows
- Each row contains:
  - numeric value
  - probability
  - metadata fields
Synthetic data allows controlled scaling and consistent benchmarking across runs.
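A minimal sketch of how such a dataset might be generated. The `DataRow` shape, its field names, and the seeded `mulberry32` PRNG are illustrative assumptions, not the repository's actual code; the point is that a fixed seed makes every benchmark run see identical data:

```typescript
// Hypothetical row shape matching the description above.
interface DataRow {
  value: number;       // numeric value
  probability: number; // in [0, 1)
  category: string;    // metadata field
}

// Small seeded PRNG (mulberry32) so runs are reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generateRows(count: number, seed = 42): DataRow[] {
  const rand = mulberry32(seed);
  const rows: DataRow[] = new Array(count);
  for (let i = 0; i < count; i++) {
    rows[i] = {
      value: rand() * 1000,
      probability: rand(),
      category: `cat-${i % 8}`,
    };
  }
  return rows;
}
```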
Approach
- Operates directly on `DataRow[]`
- Uses `Array.filter`, `Array.map`, `Array.sort`
- Creates multiple intermediate arrays
Characteristics
- Easy to write and reason about
- Heavy heap allocation
- High GC pressure
- Poor cache locality
This represents how large datasets are often handled in frontend codebases.
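A sketch of what such a naive pipeline might look like. The `naiveTopValues` function, its parameters, and the probability-weighting step are illustrative, not the repository's actual code; what matters is the allocation pattern:

```typescript
interface DataRow {
  value: number;
  probability: number;
}

// Naive pipeline: each chained stage allocates a fresh intermediate
// array, and the map stage allocates one new object per surviving row.
function naiveTopValues(rows: DataRow[], minProb: number, limit: number): number[] {
  return rows
    .filter(r => r.probability >= minProb)                // intermediate array #1
    .map(r => ({ ...r, value: r.value * r.probability })) // array #2 + one object per row
    .sort((a, b) => b.value - a.value)                    // object-to-object comparisons
    .slice(0, limit)                                      // intermediate array #3
    .map(r => r.value);                                   // intermediate array #4
}
```

For 10M rows this style allocates tens of millions of short-lived objects, which is exactly the GC pressure described above.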
This version keeps the original `DataRow[]` intact and processes data using a data-oriented design.
Techniques used
- `Float64Array` for numeric columns
- Index arrays instead of object copies
- Single-pass loops
- No mutation of original objects
Why this works
- Typed arrays provide contiguous memory → better CPU cache usage
- No object churn → reduced GC pauses
- Sorting integers (indexes) instead of objects
This approach treats TypeScript more like a systems language while remaining fully idiomatic and safe for UI code.
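A data-oriented sketch of the same kind of pipeline (function and parameter names are illustrative assumptions). Numeric columns live in contiguous typed arrays, and sorting permutes small integer indexes instead of objects:

```typescript
// Data-oriented sketch: columns as typed arrays, sort over an index array.
function optimizedTopValues(
  values: Float64Array,
  probs: Float64Array,
  minProb: number,
  limit: number,
): Float64Array {
  const n = values.length;
  // Single pass: compute weighted values and collect surviving indexes.
  const weighted = new Float64Array(n);
  const idx: number[] = [];
  for (let i = 0; i < n; i++) {
    if (probs[i] >= minProb) {
      weighted[i] = values[i] * probs[i];
      idx.push(i);
    }
  }
  // Sort integers, not objects; the original columns stay untouched.
  idx.sort((a, b) => weighted[b] - weighted[a]);
  const out = new Float64Array(Math.min(limit, idx.length));
  for (let i = 0; i < out.length; i++) out[i] = weighted[idx[i]];
  return out;
}
```

The only per-row allocations here are a few typed arrays and one number array of surviving indexes, so the GC has almost nothing to collect.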
Approach
- Heavy computation moved to Rust
- Angular sends numeric arrays to WASM
- WASM returns aggregated results only
Characteristics
- Near-native performance
- Minimal memory overhead
- No DOM or framework interaction
This represents the upper bound of performance achievable in the browser today.
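A sketch of the TypeScript side of this boundary. The `aggregate` export and module path are hypothetical, and a plain function stands in for the real wasm-bindgen binding so the example is self-contained:

```typescript
// With wasm-bindgen, an exported Rust function such as
//   #[wasm_bindgen] pub fn aggregate(values: &[f64]) -> f64 { ... }
// is callable from TypeScript as aggregate(values: Float64Array): number,
// imported from the generated package, e.g.:
//   import { aggregate } from "./pkg/analytics_wasm";
// (names are hypothetical). A stub stands in for it here.
const aggregate = (values: Float64Array): number => {
  let sum = 0;
  for (let i = 0; i < values.length; i++) sum += values[i];
  return sum;
};

// Angular side: pack numbers into one Float64Array (a single contiguous
// copy into WASM linear memory) and receive only the aggregate back.
// No per-row objects ever cross the JS/WASM boundary.
function sumViaWasm(rows: { value: number }[]): number {
  const values = new Float64Array(rows.length);
  for (let i = 0; i < rows.length; i++) values[i] = rows[i].value;
  return aggregate(values);
}
```

Keeping the boundary numeric is the design choice that makes WASM pay off: crossing it with objects or strings would reintroduce the serialization cost the optimization removed.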
| Dataset Size | Naive TS | Optimized TS | Rust (WASM) |
|---|---|---|---|
| 10M rows | ~1.8 s | ~0.5 s | ~0.15 s |
| 50M rows | ~12–13 s | ~3–4 s | ~1–1.5 s |
Example output (10M rows):
- Naive TS: ~1885 ms
- Optimized TS: ~495 ms
Both TypeScript pipelines produced identical results, up to minor floating-point differences caused by accumulation order.
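The classic demonstration of why accumulation order matters: floating-point addition is not associative, so summing the same numbers in a different order can produce slightly different results.

```typescript
// IEEE 754 doubles: (0.1 + 0.2) rounds to 0.30000000000000004,
// while (0.2 + 0.3) rounds to exactly 0.5, so the two sums differ
// in the last bit even though the inputs are identical.
const leftToRight = (0.1 + 0.2) + 0.3;
const rightToLeft = 0.1 + (0.2 + 0.3);
```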
- Object-based processing caused large JS heap spikes
- Typed arrays significantly reduced heap usage
- No duplication of original dataset
- Main bottleneck was memory allocation, not arithmetic
Chrome DevTools confirmed that GC and object churn dominated runtime in the naive implementation.
Performance profiling showed:
- Naive TS blocked the main thread for several seconds
- Optimized TS reduced blocking substantially
- WASM minimized input latency and preserved responsiveness
This directly impacts INP (Interaction to Next Paint) and perceived UX quality.
- Data layout matters more than language
- TypeScript can handle millions of rows when written data-first
- WASM amplifies good design — it doesn’t replace it
- Memory allocation is often the real bottleneck in frontend performance
- Optimizing TS first simplifies WASM integration later
| Use Case | Recommendation |
|---|---|
| ≤5M rows | Optimized TypeScript |
| 5M–20M rows | TS + Web Worker |
| 20M+ rows | Rust + WASM |
| Heavy math / probabilities | WASM |
| UI filtering / interaction | TypeScript |
This experiment reshaped how I think about frontend performance. Rust and WASM are powerful tools, but the biggest gains came from rethinking data structures and memory access patterns.
Optimizing TypeScript closed most of the performance gap — and made the transition to WASM straightforward and intentional.