The app needs to load data and cache it for a period of time. I would expect that if multiple parts of the app want to access the same cache key at the same time, the cache should be smart enough to only load the data once and return the result of that call to all callers. However, MemoryCache is not doing this. If you hit the cache in parallel (which often happens in the app) it creates a task for each attempt to get the cache value. I thought that this code would achieve the desired result, but it doesn't. I would expect the cache to only run one GetDataAsync task, wait for it to complete, and use the result to get the values for other calls.
using Microsoft.Extensions.Caching.Memory;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace ConsoleApp4
{
    class Program
    {
        private const string Key = "1";
        private static int number = 0;

        static async Task Main(string[] args)
        {
            var memoryCache = new MemoryCache(new MemoryCacheOptions { });

            var tasks = new List<Task>();
            tasks.Add(memoryCache.GetOrCreateAsync(Key, (cacheEntry) => GetDataAsync()));
            tasks.Add(memoryCache.GetOrCreateAsync(Key, (cacheEntry) => GetDataAsync()));
            tasks.Add(memoryCache.GetOrCreateAsync(Key, (cacheEntry) => GetDataAsync()));

            await Task.WhenAll(tasks);

            Console.WriteLine($"The cached value was: {memoryCache.Get(Key)}");
        }

        public static async Task<int> GetDataAsync()
        {
            // Simulate getting a large chunk of data from the database
            await Task.Delay(3000);
            number++;
            Console.WriteLine(number);
            return number;
        }
    }
}
That's not what happens. The above displays these results (not necessarily in this order):
2
1
3
The cached value was: 3
It creates a task for each cache request and discards the values returned from the other two.
This needlessly wastes time, and it makes me wonder whether this class can even be called thread-safe. ConcurrentDictionary has the same behaviour; I tested it and the same thing happens.
Is there a way to achieve the desired behaviour where the task doesn't run 3 times?
MemoryCache leaves it to you to decide how to handle races to populate a cache key. In your case you don't want multiple threads to compete to populate a key presumably because it's expensive to do that.
To coordinate the work of multiple threads like that you need a lock, but using a C# lock statement in asynchronous code can lead to thread pool starvation. Fortunately, SemaphoreSlim provides a way to do async locking so it becomes a matter of creating a guarded memory cache that wraps an underlying IMemoryCache.
My first solution only had a single semaphore for the entire cache, putting all cache population tasks in a single queue, which isn't very smart. So instead, here is a more elaborate solution with a semaphore for each cache key. Another option could be a fixed number of semaphores picked by a hash of the key; a sketch of that variation follows the code below.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

sealed class GuardedMemoryCache : IDisposable
{
    readonly IMemoryCache cache;
    readonly ConcurrentDictionary<object, SemaphoreSlim> semaphores = new();

    public GuardedMemoryCache(IMemoryCache cache) => this.cache = cache;

    public async Task<TItem> GetOrCreateAsync<TItem>(object key, Func<ICacheEntry, Task<TItem>> factory)
    {
        // One semaphore per key, so unrelated keys don't block each other.
        var semaphore = GetSemaphore(key);
        await semaphore.WaitAsync();
        try
        {
            // Only the caller holding the semaphore runs the factory for this key.
            return await cache.GetOrCreateAsync(key, factory);
        }
        finally
        {
            semaphore.Release();
            RemoveSemaphore(key);
        }
    }

    public object Get(object key) => cache.Get(key);

    public void Dispose()
    {
        foreach (var semaphore in semaphores.Values)
            semaphore.Dispose();
    }

    SemaphoreSlim GetSemaphore(object key) => semaphores.GetOrAdd(key, _ => new SemaphoreSlim(1));

    void RemoveSemaphore(object key)
    {
        // Drop the per-key semaphore once the population race for that key is over.
        if (semaphores.TryRemove(key, out var semaphore))
            semaphore.Dispose();
    }
}
If multiple threads try to populate the same cache key only a single thread will actually do it. The other threads will instead return the value that was created.
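For completeness, here is a rough sketch of the striping variation mentioned above: a fixed array of semaphores selected by a hash of the key. The class name and stripe count are illustrative, and it needs the same usings as GuardedMemoryCache.
sealed class StripedMemoryCache
{
    readonly IMemoryCache cache;
    readonly SemaphoreSlim[] semaphores;

    public StripedMemoryCache(IMemoryCache cache, int stripes = 16)
    {
        this.cache = cache;
        semaphores = new SemaphoreSlim[stripes];
        for (var i = 0; i < stripes; i++)
            semaphores[i] = new SemaphoreSlim(1, 1);
    }

    public async Task<TItem> GetOrCreateAsync<TItem>(object key, Func<ICacheEntry, Task<TItem>> factory)
    {
        // Map the key to one of the stripes; the bitmask keeps the index non-negative.
        var semaphore = semaphores[(key.GetHashCode() & int.MaxValue) % semaphores.Length];

        await semaphore.WaitAsync();
        try
        {
            return await cache.GetOrCreateAsync(key, factory);
        }
        finally
        {
            semaphore.Release();
        }
    }
}
The trade-off is that unrelated keys occasionally share a lock, but there is nothing to clean up and the number of semaphores stays bounded.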
Assuming that you use dependency injection, you can let GuardedMemoryCache implement IMemoryCache by adding a few more members that forward to the underlying cache. That lets you change the caching behaviour throughout your application with very few code changes.
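As a rough sketch, assuming the GuardedMemoryCache above with its declaration changed to sealed class GuardedMemoryCache : IMemoryCache, these are the forwarding members you would add (the registration at the end is a hypothetical example):
public ICacheEntry CreateEntry(object key) => cache.CreateEntry(key);
public void Remove(object key) => cache.Remove(key);
public bool TryGetValue(object key, out object value) => cache.TryGetValue(key, out value);

// Hypothetical registration in ConfigureServices:
// services.AddSingleton<IMemoryCache>(_ =>
//     new GuardedMemoryCache(new MemoryCache(new MemoryCacheOptions())));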
There are different solutions available, the most famous of which is probably LazyCache: it's a great library.
Another one that you may find useful is FusionCache ⚡🦥, which I recently released: it has the exact same feature (although implemented differently) and much more.
The feature you are looking for is described here and you can use it like this:
var result = await fusionCache.GetOrSetAsync(
    Key,
    async _ => await GetDataAsync(),
    TimeSpan.FromMinutes(2)
);
You may also find some of the other features interesting, like fail-safe, advanced timeouts with background factory completion and support for an optional, distributed 2nd level.
If you give it a chance, please let me know what you think.
/shameless-plug
Related
This question is partially similar to this one, but since that other question is not asked properly (and not asked in full), I am asking it here in more general terms, so it should not be considered a duplicate.
The question is about understanding how AsyncLock actually works. (In this context I am referring to the Neosmart.AsyncLock library; however, I believe it uses the common approach to implementing AsyncLock.)
So. For instance, we have a main thread (let it be a UI-thread):
static void Main(string[] args)
{
    Console.WriteLine("Press Esc for exit");

    var lck = new AsyncLock();
    var doJob = new Action(async () =>
    {
        using (await lck.LockAsync())
        {
            // long lasting job
            Console.WriteLine("+++ Job starts");
            await Task.Delay(TimeSpan.FromSeconds(10));
            Console.WriteLine("--- Job finished");
        }
    });

    while (Console.ReadKey().Key != ConsoleKey.Escape)
    {
        doJob();
    }
}
so, sequentially pressing Enter starts doJob every time, without waiting for the previous job to finish.
However, when we change it to:
Task.Run(() =>
{
doJob();
});
... everything works like a charm, and no new job runs until the previous one has finished.
It's clear that async logic is very different from a classic lock(_myLock) and can't be compared directly. Still, why doesn't the first approach behave that way, with the second call to LockAsync "locking" (again, in the async sense) so that the long-lasting job cannot start until the previous one has finished?
There is actually a practical reason why I need this code to work that way (and the real question is how I can achieve it with await LockAsync):
For example, in my app (say, a mobile app), at launch I start pre-loading some data (a common service needs to keep that data in a cache for further use). Then, when the UI starts, a particular page requests the same data for the UI to appear and asks the same service to load it. So, without any custom logic, that service would start two long-running jobs to retrieve the same pack of data. Instead, I want my UI to receive the data from the cache as soon as the pre-loading finishes.
Like that (an abstract possible scenario):
class MyApp
{
    string[] _cache = null;
    AsyncLock _lock = new AsyncLock();

    async Task<IEnumerable<string>> LoadData()
    {
        using (await _lock.LockAsync())
        {
            if (_cache == null)
            {
                await Task.Delay(TimeSpan.FromSeconds(10));
                _cache = new[] { "one", "two", "three" };
            }
            return _cache;
        }
    }

    void OnAppLaunch()
    {
        LoadData();
    }

    async void OnMyCustomEvent()
    {
        var data = await LoadData();
        // to do something else with the data
    }
}
The problem would be solved if I changed it to Task.Run(async () => { var data = await LoadData(); }), but that doesn't look like a particularly clean and nice approach.
As Matthew points out in the comments, AsyncLock is reentrant, meaning that if the same thread attempts to take the lock a second time, it recognizes that and allows it to continue. The author of AsyncLock wrote a lengthy article about how reentrance was really the reason he wrote it: AsyncLock: an async/await-friendly locking library for C# and .NET
It's not a bug; it's a feature.™
After the "Update 5/25/2017" heading, there are code examples demonstrating exactly what you are experiencing here and showing how it is a feature.
Reasons to want reentrance are:
If you are just concerned with multiple threads touching the same variable (preventing race conditions), then there is simply no reason to block a thread that already has the lock.
It makes recursive functions that use locks easier to write because you don't need to test if you already have the lock. A lack of reentrance support + sloppy recursive coding = deadlock.
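To illustrate the recursion point, here is a rough sketch assuming the same AsyncLock usage pattern shown above (the Node type and the method are made up for the example). Because the lock is reentrant, the nested calls acquire it again without deadlocking:
class Node { public int Value; public Node Left, Right; }

private readonly AsyncLock _lock = new AsyncLock();

async Task<int> SumTreeAsync(Node node)
{
    if (node == null) return 0;

    using (await _lock.LockAsync())
    {
        // The recursive calls re-enter the lock we already hold; with a
        // non-reentrant lock this would deadlock.
        return node.Value + await SumTreeAsync(node.Left) + await SumTreeAsync(node.Right);
    }
}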
If you really want it to not be reentrant, you can use the type he said was not appropriate precisely because it lacks reentrance: SemaphoreSlim:
var lck = new SemaphoreSlim(1);
var doJob = new Action(async () => {
await lck.WaitAsync();
try {
// long lasting job
Console.WriteLine("+++ Job starts");
await Task.Delay(TimeSpan.FromSeconds(2));
Console.WriteLine("--- Job finished");
} finally {
lck.Release();
}
});
I have a long-running request to a web service whose result should be cached on the server side after completion. My problem is that I don't know how to prevent it from being called concurrently/simultaneously before the first request's result has been cached.
My thought is that I should create a data-request Task and store it in a concurrent dictionary, so every other request can check whether the Task is already running and wait for it to complete.
I've ended up with this:
private static ConcurrentDictionary<string, Task> tasksCache = new ConcurrentDictionary<string, Task>();

public static T GetFromCache<T>(this ICacheManager<object> cacheManager, string name, Func<T> func)
{
    if (cacheManager.Exists(name))
        return (T)cacheManager[name];

    if (tasksCache.ContainsKey(name))
    {
        tasksCache[name].Wait();
        return (tasksCache[name] as Task<T>).Result;
    }

    var runningTask = Task.Run(() => func.Invoke());
    tasksCache[name] = runningTask;
    runningTask.Wait();

    var data = runningTask.Result;
    cacheManager.Put(name, data);
    tasksCache.TryRemove(name, out Task t);

    return data;
}
But this looks messy. Is there a better way?
I'd consider wrapping these in a Lazy<T> for each task, which has built-in semantics for controlling concurrent initialization.
This example demonstrates the use of the Lazy<T> class to provide lazy initialization with access from multiple threads.
You'll want to specify an appropriate LazyThreadSafetyMode. In this case that is ExecutionAndPublication, which the documentation describes as: "Fully thread safe; uses locking to ensure that only one thread initializes the value."
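A minimal sketch of how that could look for the task cache in the question (field and method names are illustrative). With ExecutionAndPublication, even if GetOrAdd races and builds several Lazy wrappers, only the winning one runs its factory:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class LazyTaskCache
{
    // One Lazy<Task> per key; the Lazy guards the factory, the dictionary guards the key.
    private static readonly ConcurrentDictionary<string, Lazy<Task<object>>> tasksCache =
        new ConcurrentDictionary<string, Lazy<Task<object>>>();

    public static async Task<T> GetFromCacheAsync<T>(string name, Func<Task<T>> factory)
    {
        var lazy = tasksCache.GetOrAdd(name, _ =>
            new Lazy<Task<object>>(
                async () => (object)await factory(),
                LazyThreadSafetyMode.ExecutionAndPublication));

        // Every concurrent caller awaits the same underlying task.
        return (T)await lazy.Value;
    }
}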
I have a 3rd party component that is "expensive" to spin up. This component is not thread safe. Said component is hosted inside of a WCF service (for now), so... every time a call comes into the service I have to new up the component.
What I'd like to do instead is have a pool of say 16 threads that each spin up their own copy of the component and have a mechanism to call the method and have it distributed to one of the 16 threads and have the value returned.
So something simple like:
var response = threadPool.CallMethod(param1, param2);
Its fine for the call to block until it gets a response as I need the response to proceed.
Any suggestions? Maybe I'm overthinking it and a ConcurrentQueue serviced by 16 threads would do the job, but I'm not sure how the method's return value would get back to the caller.
WCF already uses the thread pool to manage its resources, so adding a layer of thread management on top of that is only going to go badly. Avoid doing that if possible, as you will get contention on your service calls.
What I would do in your situation is just use a single ThreadLocal or thread-static field that gets initialized with your expensive object once. Thereafter it is available to the thread pool thread.
That is assuming your object is fine on an MTA thread; I'm guessing it is from your post, since it sounds like things are currently working, just slow.
There is the concern that too many objects get created and you use too much memory as the pool of threads grows too large. However, see whether this is actually the case in practice before doing anything else. This is a very simple strategy to implement, so it is easy to trial; only get more complex if you really need to.
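A minimal sketch of the idea, with ExpensiveComponent, Request and Response standing in for the real third-party types:
using System.Threading;

public class ComponentService
{
    // Each thread-pool thread lazily creates and keeps its own instance,
    // so the non-thread-safe component is never shared between threads.
    private static readonly ThreadLocal<ExpensiveComponent> component =
        new ThreadLocal<ExpensiveComponent>(() => new ExpensiveComponent());

    public Response Handle(Request request)
    {
        // The first access on a given thread pays the spin-up cost once.
        return component.Value.Process(request);
    }
}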
First and foremost, I agree with @briantyler: ThreadLocal<T> or thread-static fields are probably what you want. You should go with that as a starting point and consider other options if it doesn't meet your needs.
A complicated but flexible alternative is a singleton object pool. In its most simple form your pool type will look like this:
public sealed class ObjectPool<T>
{
    private readonly ConcurrentQueue<T> __objects = new ConcurrentQueue<T>();
    private readonly Func<T> __factory;

    public ObjectPool(Func<T> factory)
    {
        __factory = factory;
    }

    public T Get()
    {
        T obj;
        return __objects.TryDequeue(out obj) ? obj : __factory();
    }

    public void Return(T obj)
    {
        __objects.Enqueue(obj);
    }
}
This doesn't seem awfully useful if you're thinking of type T in terms of primitive classes or structs (i.e. ObjectPool<MyComponent>), as the pool does not have any threading controls built in. But you can substitute your type T for a Lazy<T> or Task<T> monad, and get exactly what you want.
Pool initialisation:
Func<Task<MyComponent>> factory = () => Task.Run(() => new MyComponent());
ObjectPool<Task<MyComponent>> pool = new ObjectPool<Task<MyComponent>>(factory);
// "Pre-warm up" the pool with 16 concurrent tasks.
// This starts the tasks on the thread pool and
// returns immediately without blocking.
for (int i = 0; i < 16; i++) {
    pool.Return(pool.Get());
}
Usage:
// Get a pooled task or create a new one. The task may
// have already completed, in which case Result will
// be available immediately. If the task is still
// in flight, accessing its Result will block.
Task<MyComponent> task = pool.Get();

try
{
    MyComponent component = task.Result; // Alternatively you can "await task"
    // Do something with component.
}
finally
{
    pool.Return(task);
}
This method is more complex than maintaining your component in a ThreadLocal or thread static field, but if you need to do something fancy like limiting the number of pooled instances, the pool abstraction can be quite useful.
EDIT
Basic "fixed set of X instances" pool implementation with a Get which blocks once the pool has been drained:
public sealed class ObjectPool<T>
{
    private readonly Queue<T> __objects;

    public ObjectPool(IEnumerable<T> items)
    {
        __objects = new Queue<T>(items);
    }

    public T Get()
    {
        lock (__objects)
        {
            while (__objects.Count == 0) {
                Monitor.Wait(__objects);
            }

            return __objects.Dequeue();
        }
    }

    public void Return(T obj)
    {
        lock (__objects)
        {
            __objects.Enqueue(obj);
            Monitor.Pulse(__objects);
        }
    }
}
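Hypothetical usage of this fixed-size variant, pre-creating 16 component instances up front (Enumerable.Range needs using System.Linq; MyComponent, CallMethod and its parameters come from the question):
var pool = new ObjectPool<MyComponent>(
    Enumerable.Range(0, 16).Select(_ => new MyComponent()));

// Per service call: borrow an instance, use it, and always return it.
// Get() blocks while all 16 instances are checked out.
var component = pool.Get();
try
{
    var response = component.CallMethod(param1, param2);
}
finally
{
    pool.Return(component);
}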
So I have this small code block that will perform several Tasks in parallel.
// no wrapping in Task, it is async
var activityList = await dataService.GetActivitiesAsync();
// Select a good enough tuple
var results = (from activity in activityList
               select new {
                   Activity = activity,
                   AthleteTask = dataService.GetAthleteAsync(activity.AthleteID)
               }).ToList(); // begin enumeration

// Wait for them to finish, ie relinquish control of the thread
await Task.WhenAll(results.Select(t => t.AthleteTask));

// Set the athletes
foreach (var pair in results)
{
    pair.Activity.Athlete = pair.AthleteTask.Result;
}
So I'm downloading Athlete data for each given Activity. But it could be that we are requesting the same athlete several times.
How can we ensure that the GetAthleteAsync method will only go online to fetch the actual data if it's not yet in our memory cache?
Currently I am trying to use a ConcurrentDictionary<int, Athlete> inside the GetAthleteAsync method:
private async Task<Athlete> GetAthleteAsync(int athleteID)
{
    if (cacheAthletes.ContainsKey(athleteID))
        return cacheAthletes[athleteID];

    // else fetch from web
}
You can change your ConcurrentDictionary to cache the Task<Athlete> instead of just the Athlete. Remember, a Task<T> is a promise - an operation that will eventually result in a T. So, you can cache operations instead of results.
ConcurrentDictionary<int, Task<Athlete>> cacheAthletes;
Then, your logic will go like this: if the operation is already in the cache, return the cached task immediately (synchronously). If it's not, then start the download, add the download operation to the cache, and return the new download operation. Note that all the "download operation" logic is moved to another method:
private Task<Athlete> GetAthleteAsync(int athleteID)
{
return cacheAthletes.GetOrAdd(athleteID, id => LoadAthleteAsync(id));
}
private async Task<Athlete> LoadAthleteAsync(int athleteID)
{
// Load from web
}
This way, multiple parallel requests for the same athlete will get the same Task<Athlete>, and each athlete is only downloaded once.
You also need to skip tasks that completed unsuccessfully. Here's my snippet:
ObjectCache _cache = MemoryCache.Default;
static object _lockObject = new object();

public Task<T> GetAsync<T>(string cacheKey, Func<Task<T>> func, TimeSpan? cacheExpiration = null) where T : class
{
    // Fast path: return a cached task without taking the lock.
    var task = (Task<T>)_cache[cacheKey];
    if (task != null) return task;

    lock (_lockObject)
    {
        // Double-check inside the lock before starting the work.
        task = (Task<T>)_cache[cacheKey];
        if (task != null) return task;

        task = func();
        Set(cacheKey, task, cacheExpiration);

        // Evict the entry if the task doesn't run to completion, so failures aren't cached.
        task.ContinueWith(t =>
        {
            if (t.Status != TaskStatus.RanToCompletion)
                _cache.Remove(cacheKey);
        });
    }

    return task;
}
When caching values provided by Task-objects, you'd like to make sure the cache implementation ensures that:
No parallel or unnecessary operations to get a value will be started. In your case, this is your question about avoiding multiple GetAthleteAsync for the same id.
You don't want to have negative caching (i.e. caching failed results), or if you do want it, it needs to be an implementation decision and you need to handle eventually replacing failed results somehow.
Cache users can't get invalidated results from the cache, even if the value is invalidated during an await.
I have a blog post about caching Task objects, with example code that ensures all the points above, which could be useful in your situation. Basically my solution is to store Lazy<Task<T>> objects in a MemoryCache.
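The blog post has the full details. As a rough sketch of the general shape (not the post's code), here using System.Runtime.Caching.MemoryCache, the atomic AddOrGetExisting, and eviction of faulted tasks so failures are not cached:
using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

static class TaskCache
{
    static readonly ObjectCache Cache = MemoryCache.Default;

    public static Task<T> GetOrAddAsync<T>(string key, Func<Task<T>> factory, DateTimeOffset expiration)
    {
        // Lazy<T> defaults to ExecutionAndPublication, so the factory runs at most once.
        var newLazy = new Lazy<Task<T>>(factory);

        // AddOrGetExisting is atomic: it returns the entry already stored under
        // the key, or null if our new entry was inserted.
        var existing = (Lazy<Task<T>>)Cache.AddOrGetExisting(key, newLazy, expiration);
        var lazy = existing ?? newLazy;
        var task = lazy.Value;

        // Don't cache failures: evict the entry if the task faults or is cancelled.
        task.ContinueWith(t =>
        {
            if (t.IsFaulted || t.IsCanceled)
                Cache.Remove(key);
        }, TaskContinuationOptions.ExecuteSynchronously);

        return task;
    }
}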
For starters, let me just throw it out there that I know the code below is not thread safe (correction: it might be). What I am struggling with is finding an implementation that is, and one that I can actually get to fail under test. I am refactoring a large WCF project right now that needs some (mostly) static data cached, and it's populated from a SQL database. It needs to expire and "refresh" at least once a day, which is why I am using MemoryCache.
I know that the code below should not be thread safe, but I cannot get it to fail under heavy load, and to complicate matters a Google search shows implementations both ways (with and without locks, combined with debates over whether or not they are necessary).
Could someone with knowledge of MemoryCache in a multi-threaded environment let me know definitively whether or not I need to lock where appropriate, so that a call to Remove (which will seldom be called, but is a requirement) will not throw during retrieval/repopulation.
public class MemoryCacheService : IMemoryCacheService
{
    private const string PunctuationMapCacheKey = "punctuationMaps";
    private static readonly ObjectCache Cache;
    private readonly IAdoNet _adoNet;

    static MemoryCacheService()
    {
        Cache = MemoryCache.Default;
    }

    public MemoryCacheService(IAdoNet adoNet)
    {
        _adoNet = adoNet;
    }

    public void ClearPunctuationMaps()
    {
        Cache.Remove(PunctuationMapCacheKey);
    }

    public IEnumerable GetPunctuationMaps()
    {
        if (Cache.Contains(PunctuationMapCacheKey))
        {
            return (IEnumerable) Cache.Get(PunctuationMapCacheKey);
        }

        var punctuationMaps = GetPunctuationMappings();
        if (punctuationMaps == null)
        {
            throw new ApplicationException("Unable to retrieve punctuation mappings from the database.");
        }

        if (punctuationMaps.Cast<IPunctuationMapDto>().Any(p => p.UntaggedValue == null || p.TaggedValue == null))
        {
            throw new ApplicationException("Null values detected in Untagged or Tagged punctuation mappings.");
        }

        // Store data in the cache
        var cacheItemPolicy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTime.Now.AddDays(1.0)
        };
        Cache.AddOrGetExisting(PunctuationMapCacheKey, punctuationMaps, cacheItemPolicy);

        return punctuationMaps;
    }

    //Go oldschool ADO.NET to break the dependency on the entity framework and need to inject the database handler to populate cache
    private IEnumerable GetPunctuationMappings()
    {
        var table = _adoNet.ExecuteSelectCommand("SELECT [id], [TaggedValue],[UntaggedValue] FROM [dbo].[PunctuationMapper]", CommandType.Text);
        if (table != null && table.Rows.Count != 0)
        {
            return AutoMapper.Mapper.DynamicMap<IDataReader, IEnumerable<PunctuationMapDto>>(table.CreateDataReader());
        }

        return null;
    }
}
The default MS-provided MemoryCache is entirely thread safe. Any custom implementation that derives from MemoryCache may not be thread safe. If you're using plain MemoryCache out of the box, it is thread safe. Browse the source code of my open source distributed caching solution to see how I use it (MemCache.cs):
https://github.com/haneytron/dache/blob/master/Dache.CacheHost/Storage/MemCache.cs
While MemoryCache is indeed thread safe, as other answers have specified, it does have a common multi-threading issue: if two threads try to Get from (or check Contains on) the cache at the same time, both will miss the cache, both will end up generating the result, and both will then add the result to the cache.
Often this is undesirable: the second thread should wait for the first to complete and use its result rather than generating the result twice.
This was one of the reasons I wrote LazyCache, a friendly wrapper on MemoryCache that solves these sorts of issues. It is also available on NuGet.
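For reference, usage typically looks something like this (check the LazyCache docs for the exact current overloads; GetDataFromDbAsync stands in for whatever loads the data):
using LazyCache;

IAppCache cache = new CachingService();

// Concurrent callers for the same key share a single invocation of the factory.
var value = await cache.GetOrAddAsync("someKey", () => GetDataFromDbAsync());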
As others have stated, MemoryCache is indeed thread safe. The thread safety of the data stored within it, however, is entirely up to how you use it.
To quote Reed Copsey from his awesome post regarding concurrency and the ConcurrentDictionary<TKey, TValue> type, which is of course applicable here:
If two threads call this [GetOrAdd] simultaneously, two instances of TValue can easily be constructed.
You can imagine that this would be especially bad if TValue is expensive to construct.
To work your way around this, you can leverage Lazy<T> very easily, which coincidentally is very cheap to construct. Doing this ensures that if we get into a multithreaded situation, we're only building multiple instances of Lazy<T> (which is cheap).
GetOrAdd() (GetOrCreate() in the case of MemoryCache) will return the same, singular Lazy<T> to all threads; the "extra" instances of Lazy<T> are simply thrown away.
Since the Lazy<T> doesn't do anything until .Value is called, only one instance of the object is ever constructed.
Now for some code! Below is an extension method for IMemoryCache that implements the above. It arbitrarily sets SlidingExpiration based on an int seconds method parameter, but this is entirely customizable based on your needs.
Note this is specific to .NET Core 2.0 apps.
public static T GetOrAdd<T>(this IMemoryCache cache, string key, int seconds, Func<T> factory)
{
    return cache.GetOrCreate<T>(key, entry => new Lazy<T>(() =>
    {
        entry.SlidingExpiration = TimeSpan.FromSeconds(seconds);
        return factory.Invoke();
    }).Value);
}
To call:
IMemoryCache cache;
var result = cache.GetOrAdd("someKey", 60, () => new object());
To perform this all asynchronously, I recommend using Stephen Toub's excellent AsyncLazy<T> implementation found in his article on MSDN. Which combines the builtin lazy initializer Lazy<T> with the promise Task<T>:
public class AsyncLazy<T> : Lazy<Task<T>>
{
    public AsyncLazy(Func<T> valueFactory) :
        base(() => Task.Factory.StartNew(valueFactory))
    { }

    public AsyncLazy(Func<Task<T>> taskFactory) :
        base(() => Task.Factory.StartNew(() => taskFactory()).Unwrap())
    { }
}
Now the async version of GetOrAdd():
public static Task<T> GetOrAddAsync<T>(this IMemoryCache cache, string key, int seconds, Func<Task<T>> taskFactory)
{
    return cache.GetOrCreateAsync<T>(key, async entry => await new AsyncLazy<T>(async () =>
    {
        entry.SlidingExpiration = TimeSpan.FromSeconds(seconds);
        return await taskFactory.Invoke();
    }).Value);
}
And finally, to call:
IMemoryCache cache;
var result = await cache.GetOrAddAsync("someKey", 60, async () => new object());
Check out this link: http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache(v=vs.110).aspx
Go to the very bottom of the page (or search for the text "Thread Safety").
You will see:
Thread Safety
This type is thread safe.
As mentioned by @AmitE in a comment on @pimbrouwers' answer, his example does not work, as demonstrated here:
class Program
{
    static async Task Main(string[] args)
    {
        var cache = new MemoryCache(new MemoryCacheOptions());
        var tasks = new List<Task>();
        var counter = 0;

        for (int i = 0; i < 10; i++)
        {
            var loc = i;
            tasks.Add(Task.Run(() =>
            {
                var x = GetOrAdd(cache, "test", TimeSpan.FromMinutes(1), () => Interlocked.Increment(ref counter));
                Console.WriteLine($"Iteration {loc} got {x}");
            }));
        }

        await Task.WhenAll(tasks);
        Console.WriteLine("Total value creations: " + counter);
        Console.ReadKey();
    }

    public static T GetOrAdd<T>(IMemoryCache cache, string key, TimeSpan expiration, Func<T> valueFactory)
    {
        return cache.GetOrCreate(key, entry =>
        {
            entry.SetSlidingExpiration(expiration);
            return new Lazy<T>(valueFactory, LazyThreadSafetyMode.ExecutionAndPublication);
        }).Value;
    }
}
Output:
Iteration 6 got 8
Iteration 7 got 6
Iteration 2 got 3
Iteration 3 got 2
Iteration 4 got 10
Iteration 8 got 9
Iteration 5 got 4
Iteration 9 got 1
Iteration 1 got 5
Iteration 0 got 7
Total value creations: 10
It seems that GetOrCreate always returns the entry the calling thread itself just created, so under concurrency each caller can get a different value. Luckily, that's very easy to fix:
public static T GetOrSetValueSafe<T>(IMemoryCache cache, string key, TimeSpan expiration,
    Func<T> valueFactory)
{
    if (cache.TryGetValue(key, out Lazy<T> cachedValue))
        return cachedValue.Value;

    cache.GetOrCreate(key, entry =>
    {
        entry.SetSlidingExpiration(expiration);
        return new Lazy<T>(valueFactory, LazyThreadSafetyMode.ExecutionAndPublication);
    });

    return cache.Get<Lazy<T>>(key).Value;
}
That works as expected:
Iteration 4 got 1
Iteration 9 got 1
Iteration 1 got 1
Iteration 8 got 1
Iteration 0 got 1
Iteration 6 got 1
Iteration 7 got 1
Iteration 2 got 1
Iteration 5 got 1
Iteration 3 got 1
Total value creations: 1
I just uploaded a sample library to address this issue for .NET 2.0.
Take a look at this repo:
RedisLazyCache
I'm using a Redis cache, but it falls back to just MemoryCache if the connection string is missing.
It's based on the LazyCache library, which guarantees a single execution of the write callback when multiple threads are trying to load and save data, which matters especially if the callback is very expensive to execute.
The cache is thread safe but, like others have stated, it's possible that GetOrAdd will call the factory multiple times if called from multiple threads.
Here is my minimal fix for that:
private readonly SemaphoreSlim _cacheLock = new SemaphoreSlim(1);
and
await _cacheLock.WaitAsync();
try
{
    var data = await _cache.GetOrCreateAsync(key, entry => ...);
}
finally { _cacheLock.Release(); }
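Note that a single SemaphoreSlim serializes population of every cache key, not just concurrent requests for the same key; if that matters, a per-key (or key-hash striped) semaphore, as in the GuardedMemoryCache answer earlier in this thread, avoids unrelated keys blocking each other. Releasing in a finally block, as above, also ensures an exception in the factory doesn't leave the semaphore held forever.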