v2.3.1
## What's changed
- Introduce a simple heuristic to estimate the optimal `ConcurrentDictionary` bucket count for `ConcurrentLru`/`ConcurrentLfu`/`ClassicLru` based on the `capacity` constructor arg. When the cache is at capacity, the `ConcurrentDictionary` will have a prime number bucket count and a load factor of 0.75.
  - When capacity is less than 150 elements, start with a `ConcurrentDictionary` capacity that is a prime number 33% larger than cache capacity. Initial size is large enough to avoid resizing.
  - For larger caches, pick the `ConcurrentDictionary` initial size using a lookup table. Initial size is approximately 10% of the cache capacity, such that 4 `ConcurrentDictionary` grow operations will arrive at a hash table size that is a prime number approximately 33% larger than cache capacity.
- `SingletonCache` sets the internal `ConcurrentDictionary` capacity to the next prime number greater than the capacity constructor argument.
- Fix ABA concurrency bug in `Scoped` by changing `ReferenceCount` to use reference equality (via `object.ReferenceEquals`).
- .NET6 target is now compiled with `SkipLocalsInit`, giving minor performance gains.
- Simplified `AtomicFactory`/`AsyncAtomicFactory`/`ScopedAtomicFactory`/`ScopedAsyncAtomicFactory` by removing redundant reads, reducing code size.
- `ConcurrentLfu.Count` no longer locks the underlying `ConcurrentDictionary`, matching `ConcurrentLru.Count`.
- Use `CollectionsMarshal.AsSpan` to enumerate candidates within `ConcurrentLfu.Trim` on .NET6.
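The small-cache sizing rule above (a prime bucket count about 33% above capacity, so a full cache sits near a 0.75 load factor) can be sketched language-agnostically as follows. This is an illustrative approximation of the described heuristic, not the library's actual code; the function names are hypothetical, and the real implementation uses an internal lookup table for larger caches.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for the small sizes involved."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True


def initial_bucket_count(capacity: int) -> int:
    """Pick a prime ~33% larger than cache capacity, so a full cache
    leaves the hash table at roughly a 0.75 load factor.
    (Sketch of the small-capacity rule; not BitFaster's implementation.)"""
    target = capacity + capacity // 3  # ~33% headroom => ~0.75 load factor
    n = target
    while not is_prime(n):
        n += 1
    return n
```

For example, a capacity of 100 yields 137, giving a load factor of about 0.73 when the cache is full.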
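The ABA fix above hinges on the difference between value equality and reference equality: a recreated wrapper with identical state compares equal by value, so a compare-and-swap keyed on value equality can succeed against the wrong instance. A minimal Python sketch of that distinction (`RefCount` is a hypothetical stand-in for a `ReferenceCount`-style wrapper; `is` plays the role of `object.ReferenceEquals` in C#):

```python
class RefCount:
    """Hypothetical stand-in for a reference-counting wrapper."""

    def __init__(self, value, count):
        self.value = value
        self.count = count

    def __eq__(self, other):
        # Value equality: two distinct instances can compare equal.
        return (isinstance(other, RefCount)
                and (self.value, self.count) == (other.value, other.count))


a = RefCount("scoped", 1)
b = RefCount("scoped", 1)  # a recreated instance with identical state

# Value equality cannot tell the instances apart -- the ABA hazard:
assert a == b
# Reference equality can, which is what the fix relies on:
assert a is not b
```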
Full changelog: v2.3.0...v2.3.1