BackgroundTransferService RAM issue - C#

I have some doubts about BackgroundTransferRequest's RAM efficiency, or, more probably, I'm missing something.
BackgroundTransferRequest should initialize a web request (GET by default) and then store the response in the IsolatedStorage location specified by DownloadLocation, so we shouldn't have any open stream containing the result; we just have the physical file in IsolatedStorage.
Simple, easy, efficient.
Then why, after 200 successful downloads, do I still have 42K of RAM occupied?
Of course, if I restart the application I have 1K of RAM occupied and the previously downloaded files are still in IsolatedStorage, so something in the BackgroundTransferRequest probably occupies RAM and never frees it, despite the Dispose call.
Please correct me if I'm doing something wrong.
Below you can see the code snippets used for adding and removing transfers.
INITIALIZING TRANSFER
BackgroundTransferRequest transferRequest = new BackgroundTransferRequest(transfer.TransferUri);
transfer.RequestId = transferRequest.RequestId;
transferRequest.DownloadLocation = transfer.DestinationUri;
transferRequest.TransferPreferences = TransferPreferences.AllowCellularAndBattery;
BackgroundTransferService.Add(transferRequest);
ONCE DONE, REMOVE TRANSFER
BackgroundTransferRequest transferToRemove = BackgroundTransferService.Find(transferID);
if (transferToRemove != null)
{
    BackgroundTransferService.Remove(transferToRemove);
    transferToRemove.Dispose();
    transferToRemove = null;
}
Thank you very much!

When using the BackgroundTransferService class you have to be very careful with references to the BackgroundTransferRequest objects to avoid memory leaks.
_BackgroundRequests = BackgroundTransferService.Requests;
The previous assignment creates new references to the BackgroundTransferRequest objects, so you should always dispose of the existing ones first to avoid memory leaks. In other words, if your code keeps a local reference to the BackgroundTransferService.Requests list, always dispose of the old references before re-reading the Requests property:
foreach (var request in _BackgroundRequests)
{
    request.Dispose();
}
_BackgroundRequests = BackgroundTransferService.Requests;
Since the BackgroundTransferService allows a maximum of 5 queued BackgroundTransferRequest objects, one might be tempted to call the Count() method on the BackgroundTransferService.Requests list. Remember that this too creates new references and can cause memory leaks. The best solution is to keep an internal counter of the currently queued transfers, or to ignore the count entirely and handle the exception thrown by the service when too many requests are queued, as in the sketch below.
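A minimal sketch of the second approach (TryAddTransfer is a hypothetical helper; InvalidOperationException is hedged here as the exception the service throws when, among other things, too many requests are already queued):
private static bool TryAddTransfer(BackgroundTransferRequest request)
{
    try
    {
        BackgroundTransferService.Add(request);
        return true;
    }
    catch (InvalidOperationException)
    {
        // Thrown, among other reasons, when too many requests are already queued.
        request.Dispose();
        return false;
    }
}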
Finally, one should dispose of the BackgroundTransferRequest objects once they have completed (successfully or not), but you already do so.

HttpClient seems to be causing my app to slow down every ~3 minutes while freeing up a ton of memory

So basically I am running a program which sends up to 7,000 HTTP requests per second on average, 24/7, in order to detect changes on a website as quickly as possible.
However, every 2.5 to 3 minutes on average, my program slows down for around 10-15 seconds and goes from ~7K rq/s to less than 1,000.
Here are logs from my program, where you can see the number of requests it sends every second:
https://pastebin.com/029VLxZG
When scrolling down through the logs, you can see it goes slower every ~3 minutes. Example: https://i.imgur.com/US0wPzm.jpeg
At first I thought it was my server's Ethernet connection going into a temporary "restricted" mode, and I even tried contacting my host about it. But then I ran 2 instances of my program simultaneously just to see what would happen, and I noticed that even though the issue (downtime) was happening on both, it wasn't always happening at the same time (it depended on when each instance was started). That meant the problem wasn't coming from the internet connection, but from my program itself.
I investigated a little more and found out that as soon as my program drops from ~7K rq/s to ~700, a lot of RAM is freed up on my server.
I have taken 2 screenshots of the consecutive seconds just before and just after the downtime occurs (including RAM metrics) to compare, and you can view them here: https://imgur.com/a/sk2TYQZ (please note that I was using fewer threads here, which is why the average "normal" speed is ~2K rq/s instead of ~7K as mentioned before).
If you'd like to see more of it, here is the full recording of the issue, in a video which lasts about 40 seconds: https://i.imgur.com/z27FlVP.mp4 - As you can see, after the RAM is freed up, its usage slowly climbs again, and the same process repeats every ~3 minutes.
For more context, here is the method I am using to send the HTTP requests (it is being called from a lot of threads concurrently, as my app is multi-threaded in order to be super fast):
public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    string response = await httpClient.GetStringAsync(baseAddress + endpoint);
    return response.Contains("example");
}
One thing I did was replace the whole method body with await Task.Delay(25); return false;, and that fixed the issue: RAM usage barely increased.
This led me to believe the issue is HttpClient / my HTTP requests, and even though I tried replacing the GetStringAsync call with GetAsync using both an HttpRequestMessage and an HttpResponseMessage (and disposing of them with using), the behavior ended up exactly the same.
So here I am, desperate for a fix, and without enough knowledge about memory, the garbage collector, etc. (if that's even relevant here) to fix this myself.
Please, Stack Overflow, do you have any idea?
Thanks a lot.
Your best bet would be to stream the response and search it chunk by chunk for what you are looking for. An example implementation could be something like the following:
using var response = await Client.GetAsync(BaseUrl, HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
string line;
while ((line = await reader.ReadLineAsync()) != null)
{
    if (line.Contains("example"))
    {
        // do whatever
    }
}
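For context, a sketch of the original HasChangedAsync rewritten around this streaming approach (it stops reading, and allocating, as soon as a match is found) might be:
public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    using var response = await httpClient.GetAsync(
        baseAddress + endpoint, HttpCompletionOption.ResponseHeadersRead);
    await using var stream = await response.Content.ReadAsStreamAsync();
    using var reader = new StreamReader(stream);
    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        // Stop as soon as the marker is seen; the rest of the body is never buffered.
        if (line.Contains("example"))
            return true;
    }
    return false;
}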

How do I properly dispose and free the memory used for V8.Net.V8Engine instances?

I'm running into an issue when using my V8Engine instance; it appears to have a small memory leak, and disposing of it, as well as forcing garbage collection, doesn't seem to help much. It will eventually throw an AccessViolationException on V8Engine local_m_engine = new V8Engine(), claiming Fatal error in heap setup, Allocation failed - process out of memory and Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Monitoring the program's memory usage through Task Manager while it runs confirms that it is leaking memory, around 1000 KB every couple of seconds I think. I suspect it is the variables declared within the executed script not being collected, or something to do with the GlobalObject.SetProperty method. Calling V8Engine.ForceV8GarbageCollection(), V8Engine.Dispose() and even GC.WaitForPendingFinalizers() & GC.Collect() doesn't prevent this memory from leaking (although it is worth noting that it seems to leak more slowly with these calls in place, and I know I shouldn't force the GC, but it was there as a last resort to see if it would fix the issue).
A tangential issue that could also provide a solution is the inability to clear the execution context of a V8Engine. I am required to dispose of and re-instantiate the engine for each script, which I believe is where the memory leak is happening; otherwise I run into issues where variables have already been declared, causing V8Engine.Execute() to throw an exception saying so.
I can definitely confirm that the memory leak is something to do with the V8Engine implementation, as the older version of this program that uses Microsoft.JScript has no such leak and its memory usage remains consistent.
The affected code is as follows:
//Create the V8Engine and dispose when done
using (V8Engine local_m_engine = new V8Engine())
{
    //Set the Lookup instance as a global object so that the JS code in the V8.Net wrapper can access it
    local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);
    //Execute the script
    result = local_m_engine.Execute(script);
    //Please just clear everything I can't cope.
    local_m_engine.ForceV8GarbageCollection();
    local_m_engine.GlobalObject.Dispose();
}
EDIT:
Not sure how useful this will be, but I've been running some memory profiling tools on it and have learned that after running an isolated version of the original code, my software ends up with a large number of instances of IndexedObjectList full of null values (see here: http://imgur.com/a/bll5K). There appears to be one instance of each class for each V8Engine instance that is created, but they aren't being disposed of or freed. I can't help but feel like I'm missing a command or something here.
The code I'm using to test and recreate the memory leak that the above implementation causes is as follows:
using System;
using V8.Net;

namespace V8DotNetMemoryTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string script = @" var math1 = 5;
                               var math2 = 10;
                               result = 5 + 10;";
            Handle result;
            int i = 0;
            V8Engine local_m_engine;
            while (true)
            {
                //Create the V8Engine and dispose when done
                local_m_engine = new V8Engine();
                //Set the Lookup instance as a global object so that the JS code in the V8.Net wrapper can access it
                //local_m_engine.GlobalObject.SetProperty("Lookup", m_lookup, null, true, ScriptMemberSecurity.ReadOnly);
                //Execute the script
                result = local_m_engine.Execute(script);
                Console.WriteLine(i++);
                result.ReleaseManagedObject();
                result.Dispose();
                local_m_engine.Dispose();
                GC.WaitForPendingFinalizers();
                GC.Collect();
                local_m_engine = null;
            }
        }
    }
}
Sorry, I had no idea this question existed. Make sure to use the v8.net tag.
Your problem is this line:
result = local_m_engine.Execute(script);
The result returned is never disposed. ;) You are responsible for returned handles; those handles are struct values, not class objects.
You could also do: using (result = local_m_engine.Execute(script)) { ... }
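Applied to the test loop from the question, a sketch of that fix (based on the using form just suggested) might be:
while (true)
{
    using (var engine = new V8Engine())
    using (var result = engine.Execute(script)) // dispose the returned handle
    {
        Console.WriteLine(i++);
    }
}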
There is a new version released. I am finally resurrecting this project again as I will need it for the FlowScript VPL project - and it now supports .Net Standard as well for cross-platform support!

Loading many large photos into a Panel efficiently

How do I load many large photos from a directory and its sub-directories in such a way as to prevent an OutOfMemoryException?
I have been using:
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
    this.Controls.Add(pic);
}
which has worked until now. The photos I need to work with now are anywhere between 15 and 40 MB each, and there could be hundreds of them.
You're attacking the garbage collector with this approach. Loading 15-40 MB objects in a loop will always invite an OutOfMemoryException, because the objects go straight onto the large object heap - all objects over 85 KB do. Large objects become Gen 2 objects immediately, and the LOH is not compacted automatically: as of .NET 4.5.1 you can request compaction, and in earlier versions it will not be compacted at all.
Therefore, even if you get away with loading the objects initially and the app keeps running, there is every chance that these objects, even when completely dereferenced, will hang around and fragment the large object heap. Once fragmentation occurs - say the user closes the control, does something else for a minute or two, and opens the control again - it is much more likely that the new objects will not all be able to slot into the LOH, because the memory must be contiguous when an allocation occurs. The GC also collects Gen 2 and the LOH much less often, for performance reasons: memcpy is used by the GC in the background, and this is expensive on larger blocks of memory.
Also, the memory consumed will not be released if all of these images are referenced from a control that is still in use - imagine tabs. The whole idea of doing this is misconceived. Use thumbnails, or load full-scale images only as the user needs them, and be careful with the memory consumed; a sketch of the thumbnail idea follows.
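A minimal sketch, assuming GDI+'s Image.GetThumbnailImage and an illustrative 120×90 thumbnail size:
foreach (string file in files)
{
    using (Image full = Image.FromFile(file))
    {
        // GetThumbnailImage draws a small copy, so the full-size image can be
        // disposed immediately instead of lingering on the large object heap.
        Image thumb = full.GetThumbnailImage(120, 90, () => false, IntPtr.Zero);
        this.Controls.Add(new PictureBox { Image = thumb });
    }
}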
UPDATE
Rather than telling you what you should and should not do I have decided to try to help you do it :)
I wrote a small program that operates on a directory containing 440 jpeg files with a total size of 335 megabytes. When I first ran your code I got the OutOfMemoryException and the form remained unresponsive.
Step 1
The first thing to note is if you are compiling as x86 or AnyCpu you need to change this to x64. Right click project, go to Build tab and set the target platform to x64.
This is because the amount of memory that can be addressed on a 32 bit x86 platform is limited. All .Net processes run within a virtual address space and the CLR heap size will be whatever the process is allowed by the OS and is not really within the control of the developer. However, it will allocate as much memory as is available - I am running on 64 bit Windows 8.1 so changing the target platform gives me an almost unlimited amount of memory space to use - right up to the limit of physical memory your process will be allowed.
After doing this, running your code did not cause an OutOfMemoryException.
Step 2
I changed the target framework from the default 4.5 to 4.5.1 in VS 2013, so I could use GCSettings.LargeObjectHeapCompactionMode, which is only available in 4.5.1. I noticed that closing the form took an age because the GC was doing a crazy amount of work releasing memory. Basically, I would set this at the end of the loadPics code, as it allows the large object heap to be compacted (rather than left fragmented) on the next blocking garbage collection. I believe this will be essential for your app, so if possible try to use this version of the framework. You should test on earlier versions too, to see the difference when interacting with your app.
Step 3
As the app was still unresponsive, I made the code run asynchronously.
Step 4
As the code now runs on a separate thread from the UI thread, it caused a cross-thread exception when accessing the form, so I had to use Invoke, which posts a message back to the UI thread from the worker thread. This is because UI controls can only be accessed from the UI thread.
Code
private async void button1_Click(object sender, EventArgs e)
{
    await LoadAllPics();
}

private async Task LoadAllPics()
{
    IEnumerable<string> files = Directory.EnumerateFiles(@"C:\Dropbox\Photos", "*.JPG", SearchOption.AllDirectories);
    await Task.Run(() =>
    {
        foreach (string file in files)
        {
            Invoke((MethodInvoker)(() =>
            {
                PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
                this.Controls.Add(pic);
            }));
        }
    });
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
}
You can try resizing the image when putting it on the UI:
foreach (string file in files)
{
    // Dispose of the full-size image once the small copy has been made.
    using (Image original = Image.FromFile(file))
    {
        PictureBox pic = new PictureBox() { Image = original.resizeImage(new Size(50, 50)) };
        this.Controls.Add(pic);
    }
}

public static Image resizeImage(this Image imgToResize, Size size)
{
    // new Bitmap(image, size) draws a resized copy, so the source can be disposed afterwards.
    return (Image)(new Bitmap(imgToResize, size));
}

OutOfMemoryException # WriteableBitmap # background agent

I have a Windows Phone 8 app which uses a background agent to:
Get data from the internet;
Generate an image based on a User Control that uses the data from step 1 as its data source.
In the User Control I have a Grid and a StackPanel plus some Text and Image controls.
Some of the Images use local resources from the installation folder (/Assets/images/...);
one of them, which I use as a background, is selected by the user from the phone's photo library, so I have to set its source in C# code-behind.
However, when it runs in the background, it gets an OutOfMemoryException. Some troubleshooting so far:
When I run the process in the foreground, everything works fine;
If I comment out the update process and create the image directly, it also works fine;
If I don't set the background image, it also works fine;
The OutOfMemoryException is thrown at var bmp = new WriteableBitmap(480, 800);
I have already shrunk the image from 1280×768 to 800×480, which I think is the minimum for a full-screen background image, isn't it?
After some research, I found out this problem occurs because the agent exceeded the 11 MB memory limit for a Periodic Task.
I tried using DeviceStatus.ApplicationCurrentMemoryUsage to track the memory usage:
-- the limit is 11,534,336 (bytes);
-- when the background agent starts, even without any task in it, the memory usage is 4,648,960;
-- while getting the update from the internet, it grows to 5,079,040;
-- when finished, it drops back to 4,648,960;
-- when the invoke starts (to generate the image from the User Control), it grows to 8,499,200.
Well, I guess that's the problem: there is too little memory left to render the image via WriteableBitmap.
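(For reference, a minimal sketch of how those numbers can be logged with Microsoft.Phone.Info's DeviceStatus; the format string is illustrative:)
Debug.WriteLine("Memory: {0:N0} / {1:N0} bytes",
    DeviceStatus.ApplicationCurrentMemoryUsage,
    DeviceStatus.ApplicationMemoryUsageLimit);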
Any idea how to work out this problem?
Is there a better method to generate an image from a User Control, or anything else I could try?
Actually, the original image might be only around 100 KB, but when rendered by a WriteableBitmap, the required memory might grow to 1-2 MB.
Or can I release the memory from anywhere?
==============================================================
BTW, this Code Project article says I can use only 11 MB of memory in a Periodic Task;
however, this MSDN article says that I can use up to 20 MB, or 25 MB with Windows Phone 8 Update 3.
Which is correct? And why am I in the first situation?
==============================================================
Edit:
Speaking of the debugger, the MSDN article also states:
When running under the debugger, memory and timeout restrictions are suspended.
But why would I still hit the limitation?
==============================================================
Edit:
Well, I found something that seems to be helpful; I will check on it for now, but suggestions are still welcome.
http://writeablebitmapex.codeplex.com/
http://suchan.cz/2012/07/pro-live-tiles-for-windows-phone/
http://notebookheavy.com/2011/12/06/microsoft-style-dynamic-tiles-for-windows-phone-mango/
==============================================================
The code to generate the image:
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
    var customBG = new ImageUserControl();
    customBG.Measure(new Size(480, 800));
    var bmp = new WriteableBitmap(480, 800); //Throws the **OutOfMemoryException**
    bmp.Render(customBG, null);
    bmp.Invalidate();
    using (var isf = IsolatedStorageFile.GetUserStoreForApplication())
    {
        filename = "/Shared/NewBackGround.jpg";
        using (var stream = isf.OpenFile(filename, System.IO.FileMode.OpenOrCreate))
        {
            bmp.SaveJpeg(stream, 480, 800, 0, 100);
        }
    }
});
The XAML code for the ImageUserControl:
<UserControl blabla... d:DesignHeight="800" d:DesignWidth="480">
<Grid x:Name="LayoutRoot">
<Image x:Name="nBackgroundSource" Stretch="UniformToFill"/>
<!-- blabla... -->
</Grid>
</UserControl>
The C# code behind the ImageUserControl:
public ImageUserControl()
{
InitializeComponent();
LupdateUI();
}
public void LupdateUI()
{
DataInfo _dataInfo = new DataInfo();
LayoutRoot.DataContext = _dataInfo;
try
{
using (var isoStore = IsolatedStorageFile.GetUserStoreForApplication())
{
using (var isoFileStream = isoStore.OpenFile("/Shared/BackgroundImage.jpg", FileMode.Open, FileAccess.Read))
{
BitmapImage bi = new BitmapImage();
bi.SetSource(isoFileStream);
nBackgroundSource.Source = bi;
}
}
}
catch (Exception) { }
}
where DataInfo is another class within the settings page that holds data retrieved from the internet:
public class DataInfo
{
public string Wind1 { get { return GetValueOrDefault<string>("Wind1", "N/A"); } set { if (AddOrUpdateValue("Wind1", value)) { Save(); } } }
public string Wind2 { get { return GetValueOrDefault<string>("Wind2", "N/A"); } set { if (AddOrUpdateValue("Wind2", value)) { Save(); } } }
//blabla...
}
"If I comment out the update process and create the image directly, it also works fine" - I think you should focus on that part. It seems to indicate that some memory isn't freed after the update. Make sure all the references used during the update process go out of scope before rendering the picture. Forcing a garbage collection can help too:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // Frees the memory that was used by the finalizers
Another thing to consider is that the debugger also uses a lot of memory. Do a real test by compiling your project in "Release" mode and deploying it to a phone, to make sure you're actually running out of memory.
Still, I have been in that situation before, so I know it may not be enough. The point is: some libraries in the .NET Framework are loaded lazily. For instance, if your update process involves downloading data, then the background agent will load the network libraries. Those libraries can't be unloaded and will waste some of your agent's memory. That's why, even after freeing all the memory you used during the update process, you won't get back to the amount of free memory you had when the background agent started. Seeing this, what I did in one of my apps was to span the workload of the background agent across two executions. Basically, when the agent executes:
Check in isolated storage whether there is pending data to be processed. If not, just execute the update process and store all the needed data in isolated storage;
If there is pending data (that is, on the next execution), generate the picture and clear the data.
It means the picture will be generated only once every hour instead of once every 30 minutes, so use this workaround only if everything else fails; a sketch is below.
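A minimal sketch of that two-run split inside a ScheduledTaskAgent (the file name and the two helper methods are hypothetical):
protected override void OnInvoke(ScheduledTask task)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (store.FileExists("/Shared/PendingData.xml"))
        {
            // Second run: render only. The network libraries are never loaded
            // in this execution, leaving more of the 11 MB free for the bitmap.
            RenderPictureFromStoredData();
            store.DeleteFile("/Shared/PendingData.xml");
        }
        else
        {
            // First run: download and persist the data, nothing else.
            DownloadAndStoreData();
        }
    }
    NotifyComplete();
}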
The larger memory limit is for background audio agents; the documentation clearly states that. You're stuck with 11 MB, which can really be a big pain when you're trying to do something smart with pictures in the background.
A 480×800 bitmap takes about 1.5 MB of memory, because it needs 4 bytes per pixel: 480 × 800 × 4 = 1,536,000 bytes. When compressed as JPEG, then yes - it makes sense that the file is only around 100 KB. But whenever you use a WriteableBitmap, the uncompressed pixels get loaded into memory.
One of the things you could try, before forcing GC.Collect as mentioned in another answer, is to null things out even before they go out of scope - whether it's a BitmapImage or a WriteableBitmap. Other than that, you can try removing the Image object from the Grid programmatically when you're done, and setting its Source to null as well; a sketch follows.
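A minimal sketch of that nulling-out, applied to the ImageUserControl from the question (ReleaseBackground is a hypothetical helper; clearing UriSource is a commonly cited hint to let the decoded pixel buffer go):
public void ReleaseBackground()
{
    var bi = nBackgroundSource.Source as BitmapImage;
    nBackgroundSource.Source = null; // detach the bitmap from the Image control
    if (bi != null)
    {
        bi.UriSource = null;         // release the reference to the decoded pixels
    }
}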
Are there any other WriteableBitmap, BitmapImage or Image objects you're not showing us?
Also, try without the debugger. I've read somewhere that it adds another 1-2 MB, which is a lot when you only have 11 MB. Although, if it crashes so quickly with the debugger, I wouldn't rely on it even if it suddenly seems OK without the debugger. But just for testing purposes, you can give it a shot.
Do you need to use the ImageUserControl? Can you try creating the Image and all the other objects step by step, without the XAML, so you can measure memory at each step and see at which point it goes through the roof?

C# - Locking issues with Mutex

I've got a web application that controls which web applications get served traffic from our load balancer. The web application runs on each individual server.
It keeps track of the "in or out" state for each application in an object in the ASP.NET application state, and the object is serialized to a file on the disk whenever the state is changed. The state is deserialized from the file when the web application starts.
While the site itself only gets a couple of requests a second at most, and the file is rarely accessed, I've found it extremely easy, for some reason, to get collisions while attempting to read from or write to the file. This mechanism needs to be extremely reliable, because we have an automated system that regularly does rolling deployments to the servers.
Before anyone makes any comments questioning the prudence of any of the above, allow me to simply say that explaining the reasoning behind it would make this post much longer than it already is, so I'd like to avoid moving mountains.
That said, the code that I use to control access to the file looks like this:
internal static Mutex _lock = null;

/// <summary>Executes the specified <see cref="Func{FileStream, Object}" /> delegate on
/// the filesystem copy of the <see cref="ServerState" />.
/// The work done on the file is wrapped in a lock statement to ensure there are no
/// locking collisions caused by attempting to save and load the file simultaneously
/// from separate requests.
/// </summary>
/// <param name="func">The logic to be executed on the
/// <see cref="ServerState" /> file.</param>
/// <returns>An object containing any result data returned by <paramref name="func" />.
/// </returns>
private static Boolean InvokeOnFile(Func<FileStream, Object> func, out Object result)
{
    var l = new Logger();
    if (ServerState._lock.WaitOne(1500, false))
    {
        l.LogInformation("Got lock to read/write file-based server state.",
            (Int32)VipEvent.GotStateLock);
        var fileStream = File.Open(ServerState.PATH, FileMode.OpenOrCreate,
            FileAccess.ReadWrite, FileShare.None);
        result = func.Invoke(fileStream);
        fileStream.Close();
        fileStream.Dispose();
        fileStream = null;
        ServerState._lock.ReleaseMutex();
        l.LogInformation("Released state file lock.",
            (Int32)VipEvent.ReleasedStateLock);
        return true;
    }
    else
    {
        l.LogWarning("Could not get a lock to access the file-based server state.",
            (Int32)VipEvent.CouldNotGetStateLock);
        result = null;
        return false;
    }
}
This usually works, but occasionally I cannot get access to the mutex (I see the "Could not get a lock" event in the log). I cannot reproduce this locally - it only happens on my production servers (Win Server 2k3/IIS 6). If I remove the timeout, the application hangs indefinitely (race condition?), including on subsequent requests.
When I do get the errors, the event log tells me that the mutex lock was acquired and released by the previous request before the error was logged.
The mutex is instantiated in the Application_Start event. I get the same results when it is instantiated statically in the declaration.
Excuses, excuses: threading/locking is not my forte, as I generally don't have to worry about it.
Any suggestions as to why it randomly would fail to get a signal?
Update:
I've added proper error handling (how embarrassing!), but I am still getting the same errors - and for the record, unhandled exceptions were never the problem.
Only one process would ever be accessing the file - I don't use a web garden for this application's web pool, and no other applications use the file. The only exception I can think of would be when the app pool recycles, and the old WP is still open when the new one is created - but I can tell from watching the task manager that the issue occurs while there is only one worker process.
@mmr: How is using Monitor any different from using a Mutex? Based on the MSDN documentation, it looks like it is effectively doing the same thing - and if I can't get the lock with my Mutex, it does fail gracefully by just returning false.
Another thing to note: The issues I'm having seem to be completely random - if it fails on one request, it might work fine on the next. There doesn't seem to be a pattern, either (certainly no every other, at least).
Update 2:
This lock is not used for any other call. The only time _lock is referenced outside the InvokeOnFile method is when it is instantiated.
The Func that is invoked is either reading from the file and deserializing into an object, or serializing an object and writing it to the file. Neither operation is done on a separate thread.
ServerState.PATH is a static readonly field, which I don't expect would cause any concurrency problems.
I'd also like to re-iterate my earlier point that I cannot reproduce this locally (in Cassini).
Lessons learned:
Use proper error handling (duh!)
Use the right tool for the job (and have a basic understanding of what that tool does and how). As sambo points out, using a Mutex apparently has a lot of overhead, which was causing issues in my application, whereas Monitor is designed specifically for .NET.
You should only be using Mutexes if you need cross-process synchronization.
Although a mutex can be used for intra-process thread synchronization, using Monitor is generally preferred, because monitors were designed specifically for the .NET Framework and therefore make better use of resources. In contrast, the Mutex class is a wrapper to a Win32 construct. While it is more powerful than a monitor, a mutex requires interop transitions that are more computationally expensive than those required by the Monitor class.
If you need to support inter-process locking, you need a Global mutex.
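A minimal sketch of such a named, machine-wide mutex (the "Global\" prefix makes it visible across sessions; the name itself is illustrative):
using (var mutex = new Mutex(false, @"Global\MyAppServerStateLock"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(2)))
    {
        try
        {
            // touch the shared file here
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}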
The pattern being used is incredibly fragile: there is no exception handling, and you are not ensuring that your Mutex is released. That is really risky code, and most likely the reason you see these hangs when there is no timeout.
Also, if your file operation ever takes longer than 1.5 seconds, there is a chance that concurrent requests will not be able to grab the mutex. I would recommend getting the locking right and avoiding the timeout.
I think it's best to rewrite this to use a lock. Also, it looks like you are calling out to another method; if that takes forever, the lock will be held forever. That's pretty risky.
This is both shorter and much safer:
// If you want timeout support, use:
//   if (Monitor.TryEnter(m_syncObj, 2000)) { try { ... } finally { Monitor.Exit(m_syncObj); } }
// (only call Monitor.Exit when the lock was actually taken)
lock (m_syncObj)
{
    l.LogInformation("Got lock to read/write file-based server state.",
        (Int32)VipEvent.GotStateLock);
    using (var fileStream = File.Open(ServerState.PATH, FileMode.OpenOrCreate,
        FileAccess.ReadWrite, FileShare.None))
    {
        // The line below is risky: what will happen if the call to Invoke
        // never returns?
        result = func.Invoke(fileStream);
    }
}
l.LogInformation("Released state file lock.", (Int32)VipEvent.ReleasedStateLock);
return true;
// Note: exceptions may leak out of this method; handle them either here
// or in the calling method. For example, the file access may fail, or
// func.Invoke may fail.
If any of the file operations fail, the lock will not be released. Most probably that is the case. Put the file operations in a try/catch block, and release the lock in the finally block.
Anyway, if you read the file in your Global.asax Application_Start method, that will ensure that no one else is working on it (you said that the file is read on application start, right?). To avoid collisions on application pool restarts, etc., you can just try to read the file (assuming that the write operation takes an exclusive lock), and then wait 1 second and retry if an exception is thrown; a sketch follows.
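A minimal sketch of that retry (ReadStateFile and the retry count are assumptions for illustration):
private static ServerState LoadStateWithRetry()
{
    for (int attempt = 0; attempt < 5; attempt++)
    {
        try
        {
            return ReadStateFile(); // opens the file for exclusive read and deserializes it
        }
        catch (IOException)
        {
            Thread.Sleep(1000); // a writer holds the file; wait a second and retry
        }
    }
    throw new IOException("Could not read the server state after 5 attempts.");
}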
Now you have the problem of synchronizing the writes. Whatever method decides to change the file should take care not to invoke a write operation while another one is in progress, using a simple lock statement, for example:
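(WriteStateFile is a hypothetical helper that serializes and writes the state:)
private static readonly object _writeLock = new object();

public static void SaveState(ServerState state)
{
    lock (_writeLock)
    {
        WriteStateFile(state); // only one writer at a time
    }
}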
I see a couple of potential issues here.
Edit for Update 2: If the function is a simple serialize/deserialize combination, I'd separate the two out into two different functions: one 'serialize' function and one 'deserialize' function. They really are two different tasks. You can then have different, lock-specific tasks (see the sketch after the code below). Invoke is nifty, but I've gotten into lots of trouble myself going for 'nifty' over 'working'.
1) Is your LogInformation function locking? You call it inside the mutex first, and then once more after you release the mutex. So if there's a lock around writing to the log file/structure, you can end up with a race condition there. To avoid that, put the second log call inside the lock too.
2) Check out the Monitor class, which I know works in C# and I'd assume works in ASP.NET. With it, you can simply try to get the lock and fail gracefully otherwise. One way to use this is to just keep trying to get the lock. (Edit for why: see here; basically, a Mutex works across processes, while Monitor works within just one process but was designed for .NET and so is preferred. No other real explanation is given by the docs.)
3) What happens if opening the FileStream fails because someone else has the lock? That would throw an exception, and that could cause this code to behave badly (i.e., the lock is still held by the thread that got the exception, and another thread can get at it).
4) What about the func itself? Does it start another thread, or is it entirely within the one thread? What about accessing ServerState.PATH?
5) What other functions can access ServerState._lock? I prefer to have each function that requires a lock get its own lock, to avoid race/deadlock conditions. If you have many, many threads, and each of them tries to lock on the same object but for totally different tasks, then you could end up with deadlocks and races without any easily understandable reason. I've changed the code to reflect that idea, rather than using one global lock. (I realize other people suggest a global lock; I really don't like that idea, because of the possibility of other things grabbing it for some task that is not this task.)
private static readonly Object MyLock = new Object();

private static Boolean InvokeOnFile(Func<FileStream, Object> func, out Object result)
{
    result = null;
    Boolean success = false;
    if (Monitor.TryEnter(MyLock, 1500))
    {
        var l = new Logger();
        try
        {
            l.LogInformation("Got lock to read/write file-based server state.", (Int32)VipEvent.GotStateLock);
            using (var fileStream = File.Open(ServerState.PATH, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
            {
                result = func.Invoke(fileStream);
            } // 'using' takes care of the dispose/close requirements
            success = true;
        }
        catch
        {
            // your filestream access failed
            l.LogInformation("File access failed.", (Int32)VipEvent.ReleasedStateLock);
        }
        finally
        {
            l.LogInformation("About to release state file lock.", (Int32)VipEvent.ReleasedStateLock);
            Monitor.Exit(MyLock); // gets you out of the lock you've got
        }
    }
    // if the lock doesn't show in the log, then it wasn't acquired; again, if your
    // logger is locking, then you could have some issues here
    return success;
}
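Following the 'Edit for Update 2' point above, a sketch of splitting the single Func-based entry point into dedicated load and save methods (XmlSerializer and the lock layout are assumptions for illustration):
private static readonly object _stateLock = new object();
private static readonly XmlSerializer _serializer = new XmlSerializer(typeof(ServerState));

public static ServerState Load()
{
    lock (_stateLock)
    {
        using (var fs = File.Open(ServerState.PATH, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return (ServerState)_serializer.Deserialize(fs);
        }
    }
}

public static void Save(ServerState state)
{
    lock (_stateLock)
    {
        using (var fs = File.Open(ServerState.PATH, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            _serializer.Serialize(fs, state);
        }
    }
}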
