Implements performance optimizations across reactive components#2
michaelstonis merged 10 commits into develop from
Conversation
…StdDev)
- Convert Max operator to use sealed MaxState&lt;TSource, TProperty&gt; class
- Convert Min operator to use sealed MinState&lt;TSource, TProperty&gt; class
- Convert Avg operator to use sealed AvgState&lt;TSource, TProperty&gt; class
- Convert StdDev operator to use sealed StdDevState&lt;TSource, TProperty&gt; class

All operators now use Observable.Create&lt;T, TState&gt; with static lambdas, eliminating closure allocations in hot paths.
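The stateful pattern these commits describe can be sketched as follows. This is a minimal illustration only; the exact R3 `Observable.Create<T, TState>` overload signature and the `MaxState` members are assumptions, not the PR's actual code:

```csharp
// A closure-capturing subscribe lambda allocates a compiler-generated
// display class per call site that captures locals such as 'selector'.
// Bundling that state into a sealed class lets the lambda be marked
// 'static', so no closure is allocated on the hot path.
internal sealed class MaxState<TSource, TProperty>
{
    public Func<TSource, TProperty> Selector = null!;
    public TProperty? Current;
}

// Hypothetical usage, assuming an overload that threads the state through:
// Observable.Create<TProperty, MaxState<TSource, TProperty>>(
//     new MaxState<TSource, TProperty> { Selector = selector },
//     static (observer, state) => Subscribe(observer, state));
```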
- FilterOperator: Convert both static and dynamic predicate overloads to use sealed class state containers (FilterState, DynamicFilterState)
- TransformOperator: Convert all 3 overloads to use tuple-based state with Observable.Create&lt;T, TState&gt; and static lambdas
- All 285 DynamicData tests pass
- Replace generic ActionDisposable with dedicated SuppressDisposable/DelayDisposable classes
- These classes capture the owner reference directly instead of capturing an Action delegate
- Eliminates closure allocation for each SuppressChangeNotifications/DelayChangeNotifications call
- All 459 R3Ext tests pass
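The described disposables might look roughly like this sketch; `ResumeChangeNotifications` is a hypothetical member name standing in for whatever the owner actually calls on dispose:

```csharp
// Disposable.Create(() => this.Resume()) allocates both an Action delegate
// and a closure for the captured 'this'. A dedicated class stores the owner
// in a field instead, so disposal costs only the one disposable object.
internal sealed class SuppressDisposable : IDisposable
{
    private RxObject? _owner;

    public SuppressDisposable(RxObject owner) => _owner = owner;

    public void Dispose()
    {
        // Interlocked.Exchange makes Dispose idempotent and thread-safe:
        // only the first call re-enables change notifications.
        Interlocked.Exchange(ref _owner, null)?.ResumeChangeNotifications();
    }
}
```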
- Add initial capacity (4) to handlers list to reduce early resizes
- Replace local function wrapper with HandlerWrapper sealed class
- Replace Disposable.Create closure with dedicated HandlerUnregistration class
- Use static lambda in Select to eliminate closure in wrapper
- All 26 Interaction tests pass
- BufferUntilInactive: Add initial capacity (16) to internal list
- Reduces early resizes during buffer accumulation
- ArrayPool approach would require API changes (returning pooled arrays)
- Added optional initialCapacity parameter (default 0) to SourceCache constructor
- When capacity is specified, Dictionary is created with that capacity
- Reduces resizes when the expected size is known upfront
- All 285 DynamicData tests pass
- Create PropertyEventArgsCache utility class for caching event args
- Uses ConcurrentDictionary with 128 entry limit to prevent unbounded growth
- Update RxObject and RxRecord to use cached event args
- Reduces allocations for frequently raised property changes
- All 459 R3Ext tests pass
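Based on this description (and the code quoted in the review below), the cache is roughly shaped like the following sketch:

```csharp
using System.Collections.Concurrent;
using System.ComponentModel;

internal static class PropertyEventArgsCache
{
    private const int MaxCacheSize = 128; // raised to 256 later in this PR

    private static readonly ConcurrentDictionary<string, PropertyChangedEventArgs> ChangedCache = new();

    public static PropertyChangedEventArgs GetPropertyChanged(string propertyName)
    {
        if (ChangedCache.TryGetValue(propertyName, out var cached))
        {
            return cached;
        }

        var args = new PropertyChangedEventArgs(propertyName);

        // Approximate bound: Count may be raced past slightly under contention.
        if (ChangedCache.Count < MaxCacheSize)
        {
            ChangedCache.TryAdd(propertyName, args);
        }

        return args;
    }
}
```

PropertyChangedEventArgs is immutable, which is what makes sharing one instance across every raise of the same property safe.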
Bindings optimizations:
- Use string.Concat for key construction in BindingRegistry
- Optimize SplitTypePath to avoid typePart allocation in Release mode
- Add PropertyEventArgsCache to InternalLeaf test types
- Increase PropertyEventArgsCache size to 256

Test infrastructure:
- Add TestModuleInitializer to set DefaultFrameProvider before tests
- Fixes NotSupportedException when running tests from VS Code GUI
Removes the temporary performance optimization checklists and progress logs now that all tasks are complete. The completed optimization efforts included:

* Eliminating closures in reactive operators and disposables
* Adding initial capacities to collections for reduced reallocations
* Reducing string allocations and caching event arguments
Pull request overview
This PR implements comprehensive performance optimizations across R3.DynamicData and R3Ext libraries by eliminating closure allocations and reducing object instantiations. The changes primarily focus on converting lambda-based reactive operators to stateful patterns and introducing caching mechanisms.
- Refactors reactive operators (Max, Min, Avg, StdDev, Filter, Transform) to use stateful Observable.Create overloads instead of closure-capturing lambdas
- Introduces PropertyEventArgsCache to reuse property change event argument instances
- Adds optional capacity parameters to reduce collection reallocations
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 15 comments.
Show a summary per file
| File | Description |
|---|---|
| R3Ext/Utilities/PropertyEventArgsCache.cs | New cache implementation for reusing PropertyChangedEventArgs and PropertyChangingEventArgs instances with a 256-item size limit |
| R3Ext/Timing/TimingExtensions.Buffer.cs | Pre-allocates buffer list with initial capacity of 16 to reduce early reallocations |
| R3Ext/RxRecord.cs | Replaces closure-based disposables with dedicated SuppressDisposable and DelayDisposable classes; uses PropertyEventArgsCache |
| R3Ext/RxObject.cs | Same disposable refactoring as RxRecord; uses PropertyEventArgsCache for event args |
| R3Ext/Interactions/Interaction.cs | Eliminates closures in handler registration by introducing HandlerWrapper and HandlerUnregistration classes; pre-allocates handler list with capacity 4 |
| R3Ext/Bindings/InternalLeaf.cs | Uses PropertyEventArgsCache for property change notifications |
| R3Ext/Bindings/BindingRegistry.cs | Uses string.Concat instead of concatenation operator; adds conditional compilation to avoid unused string allocations in release builds |
| R3Ext.Tests/TestModuleInitializer.cs | New module initializer to ensure ObservableSystem.DefaultFrameProvider is set before tests run |
| R3.DynamicData/Operators/TransformOperator.cs | Refactors three Transform overloads to use stateful tuple-based approach, eliminating closure allocations |
| R3.DynamicData/Operators/FilterOperator.cs | Refactors static and dynamic filter operators to use state classes (StaticFilterState and DynamicFilterState) instead of closures |
| R3.DynamicData/List/ObservableListAggregates.cs | Refactors Max, Min, Avg, and StdDev operators to use dedicated state classes instead of closure-captured local variables |
| R3.DynamicData/Cache/SourceCache.cs | Adds optional initialCapacity parameter to constructor for pre-allocating internal dictionary |
```csharp
if (ChangedCache.Count < MaxCacheSize)
{
    ChangedCache.TryAdd(propertyName, args);
}
```
The cache implementation has a potential race condition. Between checking ChangedCache.Count < MaxCacheSize (line 32) and calling TryAdd (line 34), another thread could add entries, causing the cache to exceed the limit. While the comment acknowledges "may slightly exceed", this could lead to unbounded growth under high concurrency.
Consider using GetOrAdd with a lazy factory to ensure atomicity, or accept the race condition and document that the cache size is approximate.
```csharp
if (ChangingCache.Count < MaxCacheSize)
{
    ChangingCache.TryAdd(propertyName, args);
}
```
Same race condition as in GetPropertyChanged: checking ChangingCache.Count < MaxCacheSize before TryAdd is not atomic, potentially allowing the cache to grow beyond the intended limit under concurrent access.
```csharp
// Try to get from cache first
if (ChangedCache.TryGetValue(propertyName, out var cached))
{
    return cached;
}

// Create new and try to add to cache
var args = new PropertyChangedEventArgs(propertyName);

// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangedCache.Count < MaxCacheSize)
{
    ChangedCache.TryAdd(propertyName, args);
}

return args;
```
The cache lookup creates a new PropertyChangedEventArgs instance even when it already exists in the cache. If the cache hit rate is high, this results in unnecessary allocations that are immediately discarded.
Consider using ConcurrentDictionary.GetOrAdd with a factory function to avoid allocating when the value already exists in the cache.
```csharp
// Try to get from cache first
if (ChangingCache.TryGetValue(propertyName, out var cached))
{
    return cached;
}

// Create new and try to add to cache
var args = new PropertyChangingEventArgs(propertyName);

// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangingCache.Count < MaxCacheSize)
{
    ChangingCache.TryAdd(propertyName, args);
}

return args;
```
There was a problem hiding this comment.
Same issue as GetPropertyChanged: a new PropertyChangingEventArgs is allocated even when already cached, which defeats the purpose of the cache optimization when cache hits occur. Use GetOrAdd to only allocate when necessary.
Suggested change:

```csharp
// Before
// Try to get from cache first
if (ChangingCache.TryGetValue(propertyName, out var cached))
{
    return cached;
}

// Create new and try to add to cache
var args = new PropertyChangingEventArgs(propertyName);

// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangingCache.Count < MaxCacheSize)
{
    ChangingCache.TryAdd(propertyName, args);
}

return args;
```

```csharp
// After
// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangingCache.Count < MaxCacheSize)
{
    // Atomically get or add to cache
    return ChangingCache.GetOrAdd(propertyName, static n => new PropertyChangingEventArgs(n));
}

// If cache is full, just allocate a new instance
return new PropertyChangingEventArgs(propertyName);
```
```diff
 foreach (var i in change.Range)
 {
-    if (itemValues.TryGetValue(i, out var vR))
+    if (_itemValues.TryGetValue(i, out var vR))
     {
         Decrement(i, vR);
     }
 }
```
This foreach loop implicitly filters its target sequence - consider filtering the sequence explicitly using '.Where(...)'.
```csharp
if (_valueCounts.TryGetValue(newVal2, out var c))
{
    _valueCounts[newVal2] = c + 1;
}
else
{
    _valueCounts[newVal2] = 1;
}
```
Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.
Suggested change:

```csharp
// Before
if (_valueCounts.TryGetValue(newVal2, out var c))
{
    _valueCounts[newVal2] = c + 1;
}
else
{
    _valueCounts[newVal2] = 1;
}
```

```csharp
// After
_valueCounts[newVal2] = _valueCounts.TryGetValue(newVal2, out var c) ? c + 1 : 1;
```
```csharp
if (_valueCounts.TryGetValue(value, out var count))
{
    _valueCounts[value] = count + 1;
}
else
{
    _valueCounts[value] = 1;
}
```
Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.
Suggested change:

```csharp
// Before
if (_valueCounts.TryGetValue(value, out var count))
{
    _valueCounts[value] = count + 1;
}
else
{
    _valueCounts[value] = 1;
}
```

```csharp
// After
_valueCounts[value] = _valueCounts.TryGetValue(value, out var count) ? count + 1 : 1;
```
```csharp
if (_valueCounts.TryGetValue(newVal2, out var c))
{
    _valueCounts[newVal2] = c + 1;
}
else
{
    _valueCounts[newVal2] = 1;
}
```
Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.
Suggested change:

```csharp
// Before
if (_valueCounts.TryGetValue(newVal2, out var c))
{
    _valueCounts[newVal2] = c + 1;
}
else
{
    _valueCounts[newVal2] = 1;
}
```

```csharp
// After
_valueCounts[newVal2] = _valueCounts.TryGetValue(newVal2, out var c) ? c + 1 : 1;
```
```csharp
if (_valueCounts.TryGetValue(value, out var count))
{
    _valueCounts[value] = count + 1;
}
else
{
    _valueCounts[value] = 1;
}
```
Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.
PropertyEventArgsCache:
- Use GetOrAdd with static factory to avoid allocations on cache hit
- More atomic caching behavior

FilterOperator:
- Replace ContainsKey + indexer with TryGetValue to avoid double lookups

ObservableListAggregates:
- Simplify valueCounts increment with ternary operator
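The ContainsKey-to-TryGetValue change avoids hashing and probing the dictionary twice per access; the general shape of the fix, with placeholder names (`dict`, `key`, `Use` are illustrative, not the actual FilterOperator identifiers):

```csharp
// Before: two lookups per hit (ContainsKey, then the indexer).
// if (dict.ContainsKey(key))
// {
//     var value = dict[key];
//     Use(value);
// }

// After: a single lookup.
if (dict.TryGetValue(key, out var value))
{
    Use(value);
}
```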
Pull request overview
Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.
```csharp
// Only cache if we haven't exceeded the limit (approximate check for performance)
if (ChangedCache.Count < MaxCacheSize)
{
    // Atomically get or add to cache - avoids allocation on cache hit
    return ChangedCache.GetOrAdd(propertyName, static n => new PropertyChangedEventArgs(n));
}
```
Race condition exists between checking Count and calling GetOrAdd. Multiple threads could simultaneously see Count < MaxCacheSize and add entries, allowing the cache to grow beyond MaxCacheSize. While the comment acknowledges this as an 'approximate check,' this could lead to unbounded cache growth under high concurrent load. Consider either: (1) accepting this behavior and documenting it more explicitly, (2) using a more robust size-limiting strategy like periodically pruning the cache, or (3) using a true LRU cache implementation.
```diff
-private void AddHandler(Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
+internal void AddHandler(Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
```
Changing AddHandler and RemoveHandler from private to internal broadens the accessibility of these methods. This exposes internal implementation details that could be called from any code within the same assembly, potentially bypassing the intended RegisterHandler API and its safety checks. Consider keeping these methods private and making HandlerUnregistration a nested private class if it needs access to these methods.
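The reviewer's alternative can be sketched like this. A nested private class retains access to the enclosing type's private members, so the accessibility widening is avoidable; the member shapes below are assumptions based on the review context, not the PR's actual code:

```csharp
public sealed class Interaction<TInput, TOutput>
{
    // Can stay private if only nested types need to call it.
    private void RemoveHandler(Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
    {
        /* ... */
    }

    private sealed class HandlerUnregistration : IDisposable
    {
        private readonly Interaction<TInput, TOutput> _owner;
        private readonly Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> _handler;

        public HandlerUnregistration(
            Interaction<TInput, TOutput> owner,
            Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
        {
            _owner = owner;
            _handler = handler;
        }

        // Nested classes may invoke private members of the enclosing type.
        public void Dispose() => _owner.RemoveHandler(_handler);
    }
}
```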
```diff
 foreach (var i in change.Range)
 {
-    if (itemValues.TryGetValue(i, out var vR))
+    if (_itemValues.TryGetValue(i, out var vR))
     {
         Decrement(i, vR);
     }
 }
```
This foreach loop implicitly filters its target sequence - consider filtering the sequence explicitly using '.Where(...)'.
Introduces a series of performance optimizations aimed at reducing allocations and improving efficiency across the R3.DynamicData and R3Ext libraries.

- Refactors reactive operators (Max, Min, Avg, StdDev, Filter, Transform) in R3.DynamicData to use stateful Observable.Create overloads, minimizing heap allocations. Applies similar optimizations to RxObject/RxRecord disposables and Interaction handler registration in R3Ext.
- Introduces PropertyEventArgsCache to reuse PropertyChangedEventArgs and PropertyChangingEventArgs instances, significantly reducing allocations and garbage collection pressure.
- Adds an optional capacity parameter to the SourceCache constructor and pre-allocates the internal List in the Buffer operator, reducing reallocations.
- Uses string.Concat for binding keys and uses conditional compilation to avoid unnecessary string allocations in release builds within BindingRegistry.
- Adds a test module initializer that configures R3's ObservableSystem, resolving issues with specific test runners.