
Implements performance optimizations across reactive components#2

Merged
michaelstonis merged 10 commits into develop from feature/performance-optimizations
Nov 29, 2025

Conversation

@michaelstonis
Owner

Introduces a series of performance optimizations aimed at reducing allocations and improving efficiency across R3.DynamicData and R3Ext libraries.

  • Eliminates Closure Allocations: Refactors several reactive operators (Max, Min, Avg, StdDev, Filter, Transform) in R3.DynamicData to use stateful Observable.Create overloads, minimizing heap allocations. Applies similar optimizations to RxObject/RxRecord disposables and Interaction handler registration in R3Ext.
  • Reduces Object Instantiation: Introduces PropertyEventArgsCache to reuse PropertyChangedEventArgs and PropertyChangingEventArgs instances, significantly reducing allocation churn and garbage-collection pressure.
  • Optimizes Collection Capacities: Adds an optional initial capacity to the SourceCache constructor and pre-allocates the internal List in the Buffer operator, reducing reallocations.
  • Enhances String Handling: Employs string.Concat for binding keys and uses conditional compilation to avoid unnecessary string allocations in debug builds within BindingRegistry.
  • Ensures Test Stability: Adds a module initializer to ensure a default frame provider is set for R3.ObservableSystem, resolving issues with specific test runners.
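
The closure-elimination pattern described above can be sketched as follows. This is an illustrative sketch only — the operator name and state shape are hypothetical, not code from the PR. R3's stateful Observable.Create<T, TState> overload threads state explicitly so the subscribe lambda can be declared static:

```csharp
using R3;

public static class SketchExtensions
{
    // Hypothetical operator. A closure-based version would be:
    //   Observable.Create<int>(observer => source.Subscribe(x => ...))  // captures locals -> closure per call
    // The stateful version passes everything through TState, so the outer
    // lambda allocates no closure. (The PR's sealed *State classes extend
    // the same idea to the inner subscription callbacks as well.)
    public static Observable<int> FilterAbove(this Observable<int> source, int threshold) =>
        Observable.Create<int, (Observable<int> Source, int Threshold)>(
            (source, threshold),
            static (observer, state) =>
                state.Source.Subscribe(x =>
                {
                    if (x > state.Threshold)
                    {
                        observer.OnNext(x);
                    }
                }));
}
```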

michaelstonis and others added 9 commits November 29, 2025 00:19
…StdDev)

- Convert Max operator to use sealed MaxState<TSource, TProperty> class
- Convert Min operator to use sealed MinState<TSource, TProperty> class
- Convert Avg operator to use sealed AvgState<TSource, TProperty> class
- Convert StdDev operator to use sealed StdDevState<TSource, TProperty> class

All operators now use Observable.Create<T, TState> with static lambdas,
eliminating closure allocations in hot paths.
- FilterOperator: Convert both static and dynamic predicate overloads
  to use sealed class state containers (FilterState, DynamicFilterState)
- TransformOperator: Convert all 3 overloads to use tuple-based state
  with Observable.Create<T, TState> and static lambdas
- All 285 DynamicData tests pass
- Replace generic ActionDisposable with dedicated SuppressDisposable/DelayDisposable classes
- These classes capture the owner reference directly instead of capturing an Action delegate
- Eliminates closure allocation for each SuppressChangeNotifications/DelayChangeNotifications call
- All 459 R3Ext tests pass
- Add initial capacity (4) to handlers list to reduce early resizes
- Replace local function wrapper with HandlerWrapper sealed class
- Replace Disposable.Create closure with dedicated HandlerUnregistration class
- Use static lambda in Select to eliminate closure in wrapper
- All 26 Interaction tests pass
- BufferUntilInactive: Add initial capacity (16) to internal list
- Reduces early resizes during buffer accumulation
- ArrayPool approach would require API changes (returning pooled arrays)
- Added optional initialCapacity parameter (default 0) to SourceCache constructor
- When capacity is specified, Dictionary is created with that capacity
- Reduces resizes when the expected size is known upfront
- All 285 DynamicData tests pass
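
A minimal usage sketch of the new constructor parameter (the Person type and key selector here are hypothetical, not from the PR):

```csharp
// Assumes the SourceCache<TObject, TKey> constructor described above,
// with the new optional initialCapacity (default 0 = unsized).
public sealed record Person(int Id, string Name);

// Sizing the internal Dictionary up front avoids intermediate resizes
// when roughly 10k items are expected:
var people = new SourceCache<Person, int>(p => p.Id, initialCapacity: 10_000);
```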
- Create PropertyEventArgsCache utility class for caching event args
- Uses ConcurrentDictionary with 128 entry limit to prevent unbounded growth
- Update RxObject and RxRecord to use cached event args
- Reduces allocations for frequently raised property changes
- All 459 R3Ext tests pass
Bindings optimizations:
- Use string.Concat for key construction in BindingRegistry
- Optimize SplitTypePath to avoid typePart allocation in Release mode
- Add PropertyEventArgsCache to InternalLeaf test types
- Increase PropertyEventArgsCache size to 256

Test infrastructure:
- Add TestModuleInitializer to set DefaultFrameProvider before tests
- Fixes NotSupportedException when running tests from VS Code GUI
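
The test-infrastructure fix can be sketched like this (the provider type chosen is an assumption — R3 ships several frame providers, and the PR's actual initializer may differ):

```csharp
using System.Runtime.CompilerServices;
using R3;

internal static class TestModuleInitializer
{
    [ModuleInitializer]
    internal static void Initialize()
    {
        // Runs once, before any test in the assembly, so frame-based
        // operators find a frame provider instead of throwing
        // NotSupportedException under runners that never set one.
        ObservableSystem.DefaultFrameProvider = new NewThreadSleepFrameProvider();
    }
}
```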
Removes the temporary performance optimization checklists and progress logs now that all tasks are complete.

The completed optimization efforts included:
*   Eliminating closures in reactive operators and disposables
*   Adding initial capacities to collections for reduced reallocations
*   Reducing string allocations and caching event arguments
Copilot AI review requested due to automatic review settings November 29, 2025 07:07

Copilot AI left a comment


Pull request overview

This PR implements comprehensive performance optimizations across R3.DynamicData and R3Ext libraries by eliminating closure allocations and reducing object instantiations. The changes primarily focus on converting lambda-based reactive operators to stateful patterns and introducing caching mechanisms.

  • Refactors reactive operators (Max, Min, Avg, StdDev, Filter, Transform) to use stateful Observable.Create overloads instead of closure-capturing lambdas
  • Introduces PropertyEventArgsCache to reuse property change event argument instances
  • Adds optional capacity parameters to reduce collection reallocations

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 15 comments.

Show a summary per file
File Description
R3Ext/Utilities/PropertyEventArgsCache.cs New cache implementation for reusing PropertyChangedEventArgs and PropertyChangingEventArgs instances with a 256-item size limit
R3Ext/Timing/TimingExtensions.Buffer.cs Pre-allocates buffer list with initial capacity of 16 to reduce early reallocations
R3Ext/RxRecord.cs Replaces closure-based disposables with dedicated SuppressDisposable and DelayDisposable classes; uses PropertyEventArgsCache
R3Ext/RxObject.cs Same disposable refactoring as RxRecord; uses PropertyEventArgsCache for event args
R3Ext/Interactions/Interaction.cs Eliminates closures in handler registration by introducing HandlerWrapper and HandlerUnregistration classes; pre-allocates handler list with capacity 4
R3Ext/Bindings/InternalLeaf.cs Uses PropertyEventArgsCache for property change notifications
R3Ext/Bindings/BindingRegistry.cs Uses string.Concat instead of concatenation operator; adds conditional compilation to avoid unused string allocations in release builds
R3Ext.Tests/TestModuleInitializer.cs New module initializer to ensure ObservableSystem.DefaultFrameProvider is set before tests run
R3.DynamicData/Operators/TransformOperator.cs Refactors three Transform overloads to use stateful tuple-based approach, eliminating closure allocations
R3.DynamicData/Operators/FilterOperator.cs Refactors static and dynamic filter operators to use state classes (StaticFilterState and DynamicFilterState) instead of closures
R3.DynamicData/List/ObservableListAggregates.cs Refactors Max, Min, Avg, and StdDev operators to use dedicated state classes instead of closure-captured local variables
R3.DynamicData/Cache/SourceCache.cs Adds optional initialCapacity parameter to constructor for pre-allocating internal dictionary


Comment on lines 32 to 34
if (ChangedCache.Count < MaxCacheSize)
{
ChangedCache.TryAdd(propertyName, args);
Copilot AI Nov 29, 2025

The cache implementation has a potential race condition. Between checking ChangedCache.Count < MaxCacheSize (line 32) and calling TryAdd (line 34), another thread could add entries, causing the cache to exceed the limit. While the comment acknowledges "may slightly exceed", this could lead to unbounded growth under high concurrency.

Consider using GetOrAdd with a lazy factory to ensure atomicity, or accept the race condition and document that the cache size is approximate.

Comment on lines 55 to 57
if (ChangingCache.Count < MaxCacheSize)
{
ChangingCache.TryAdd(propertyName, args);
Copilot AI Nov 29, 2025

Same race condition as in GetPropertyChanged: checking ChangingCache.Count < MaxCacheSize before TryAdd is not atomic, potentially allowing the cache to grow beyond the intended limit under concurrent access.

Comment on lines 22 to 37
// Try to get from cache first
if (ChangedCache.TryGetValue(propertyName, out var cached))
{
return cached;
}

// Create new and try to add to cache
var args = new PropertyChangedEventArgs(propertyName);

// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangedCache.Count < MaxCacheSize)
{
ChangedCache.TryAdd(propertyName, args);
}

return args;
Copilot AI Nov 29, 2025

The cache lookup creates a new PropertyChangedEventArgs instance even when it already exists in the cache. If the cache hit rate is high, this results in unnecessary allocations that are immediately discarded.

Consider using ConcurrentDictionary.GetOrAdd with a factory function to avoid allocating when the value already exists in the cache.

Comment on lines 45 to 60
// Try to get from cache first
if (ChangingCache.TryGetValue(propertyName, out var cached))
{
return cached;
}

// Create new and try to add to cache
var args = new PropertyChangingEventArgs(propertyName);

// Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
if (ChangingCache.Count < MaxCacheSize)
{
ChangingCache.TryAdd(propertyName, args);
}

return args;
Copilot AI Nov 29, 2025

Same issue as GetPropertyChanged: a new PropertyChangingEventArgs is allocated even when already cached, which defeats the purpose of the cache optimization when cache hits occur. Use GetOrAdd to only allocate when necessary.

Suggested change — replace:

    // Try to get from cache first
    if (ChangingCache.TryGetValue(propertyName, out var cached))
    {
        return cached;
    }

    // Create new and try to add to cache
    var args = new PropertyChangingEventArgs(propertyName);

    // Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
    if (ChangingCache.Count < MaxCacheSize)
    {
        ChangingCache.TryAdd(propertyName, args);
    }

    return args;

with:

    // Only cache if we haven't exceeded the limit (simple check, may slightly exceed)
    if (ChangingCache.Count < MaxCacheSize)
    {
        // Atomically get or add to cache - avoids allocation on cache hit
        return ChangingCache.GetOrAdd(propertyName, static n => new PropertyChangingEventArgs(n));
    }

    // If cache is full, just allocate a new instance
    return new PropertyChangingEventArgs(propertyName);

Comment on lines +214 to 220
foreach (var i in change.Range)
{
    if (_itemValues.TryGetValue(i, out var vR))
    {
        Decrement(i, vR);
    }
}
Copilot AI Nov 29, 2025

This foreach loop implicitly filters its target sequence - consider filtering the sequence explicitly using '.Where(...)'.

case Kernel.ChangeReason.Remove:
_allData.Remove(key);

if (_filteredData.ContainsKey(key))
Copilot AI Nov 29, 2025

Inefficient use of 'ContainsKey' and indexer.

Comment on lines 252 to 259
if (_valueCounts.TryGetValue(newVal2, out var c))
{
_valueCounts[newVal2] = c + 1;
}
else
{
_valueCounts[newVal2] = 1;
}
Copilot AI Nov 29, 2025

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Suggested change — replace:

    if (_valueCounts.TryGetValue(newVal2, out var c))
    {
        _valueCounts[newVal2] = c + 1;
    }
    else
    {
        _valueCounts[newVal2] = 1;
    }

with:

    _valueCounts[newVal2] = _valueCounts.TryGetValue(newVal2, out var c) ? c + 1 : 1;

Comment on lines 288 to 295
if (_valueCounts.TryGetValue(value, out var count))
{
_valueCounts[value] = count + 1;
}
else
{
_valueCounts[value] = 1;
}
Copilot AI Nov 29, 2025

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Suggested change — replace:

    if (_valueCounts.TryGetValue(value, out var count))
    {
        _valueCounts[value] = count + 1;
    }
    else
    {
        _valueCounts[value] = 1;
    }

with:

    _valueCounts[value] = _valueCounts.TryGetValue(value, out var count) ? count + 1 : 1;

Comment on lines 497 to 504
if (_valueCounts.TryGetValue(newVal2, out var c))
{
_valueCounts[newVal2] = c + 1;
}
else
{
_valueCounts[newVal2] = 1;
}
Copilot AI Nov 29, 2025

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

Suggested change — replace:

    if (_valueCounts.TryGetValue(newVal2, out var c))
    {
        _valueCounts[newVal2] = c + 1;
    }
    else
    {
        _valueCounts[newVal2] = 1;
    }

with:

    _valueCounts[newVal2] = _valueCounts.TryGetValue(newVal2, out var c) ? c + 1 : 1;

Comment on lines 533 to 540
if (_valueCounts.TryGetValue(value, out var count))
{
_valueCounts[value] = count + 1;
}
else
{
_valueCounts[value] = 1;
}
Copilot AI Nov 29, 2025

Both branches of this 'if' statement write to the same variable - consider using '?' to express intent better.

PropertyEventArgsCache:
- Use GetOrAdd with static factory to avoid allocations on cache hit
- More atomic caching behavior

FilterOperator:
- Replace ContainsKey + indexer with TryGetValue to avoid double lookups

ObservableListAggregates:
- Simplify valueCounts increment with ternary operator
Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.



Comment on lines +22 to +27
// Only cache if we haven't exceeded the limit (approximate check for performance)
if (ChangedCache.Count < MaxCacheSize)
{
// Atomically get or add to cache - avoids allocation on cache hit
return ChangedCache.GetOrAdd(propertyName, static n => new PropertyChangedEventArgs(n));
}
Copilot AI Nov 29, 2025

Race condition exists between checking Count and calling GetOrAdd. Multiple threads could simultaneously see Count < MaxCacheSize and add entries, allowing the cache to grow beyond MaxCacheSize. While the comment acknowledges this as an 'approximate check,' this could lead to unbounded cache growth under high concurrent load. Consider either: (1) accepting this behavior and documenting it more explicitly, (2) using a more robust size-limiting strategy like periodically pruning the cache, or (3) using a true LRU cache implementation.

}

private void AddHandler(Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
internal void AddHandler(Func<IInteractionContext<TInput, TOutput>, Observable<Unit>> handler)
Copilot AI Nov 29, 2025

Changing AddHandler and RemoveHandler from private to internal broadens the accessibility of these methods. This exposes internal implementation details that could be called from any code within the same assembly, potentially bypassing the intended RegisterHandler API and its safety checks. Consider keeping these methods private and making HandlerUnregistration a nested private class if it needs access to these methods.

Comment on lines +446 to 452
foreach (var i in change.Range)
{
    if (_itemValues.TryGetValue(i, out var vR))
    {
        Decrement(i, vR);
    }
}
Copilot AI Nov 29, 2025

This foreach loop implicitly filters its target sequence - consider filtering the sequence explicitly using '.Where(...)'.

@michaelstonis michaelstonis merged commit cad93ea into develop Nov 29, 2025
7 checks passed
@michaelstonis michaelstonis deleted the feature/performance-optimizations branch November 29, 2025 20:03