Memory Mapped File gets deleted from memory - c#

For some reason, when I read from a memory mapped file a couple of times, it just gets randomly deleted from memory; I don't know what's going on. Is the kernel or the GC deleting it from memory? If they are, how do I prevent them from doing so?
I am serializing an object to Json and writing it to memory.
After reading successfully a couple of times, I get a FileNotFoundException: Unable to find the specified file.
private const String Protocol = @"Global\";
Code to write to memory mapped file:
public static Boolean WriteToMemoryFile<T>(List<T> data)
{
try
{
if (data == null)
{
throw new ArgumentNullException("data", "Data cannot be null");
}
var mapName = typeof(T).FullName.ToLower();
var mutexName = Protocol + typeof(T).FullName.ToLower();
var serializedData = JsonConvert.SerializeObject(data);
var capacity = serializedData.Length + 1;
var mmf = MemoryMappedFile.CreateOrOpen(mapName, capacity);
var isMutexCreated = false;
var mutex = new Mutex(true, mutexName, out isMutexCreated);
if (!isMutexCreated)
{
var isMutexOpen = false;
do
{
isMutexOpen = mutex.WaitOne();
}
while (!isMutexOpen);
var streamWriter = new StreamWriter(mmf.CreateViewStream());
streamWriter.WriteLine(serializedData);
streamWriter.Close();
mutex.ReleaseMutex();
}
else
{
var streamWriter = new StreamWriter(mmf.CreateViewStream());
streamWriter.WriteLine(serializedData);
streamWriter.Close();
mutex.ReleaseMutex();
}
return true;
}
catch (Exception ex)
{
return false;
}
}
Code to read from memory mapped file:
public static List<T> ReadFromMemoryFile<T>()
{
try
{
var mapName = typeof(T).FullName.ToLower();
var mutexName = Protocol + typeof(T).FullName.ToLower();
var mmf = MemoryMappedFile.OpenExisting(mapName);
var mutex = Mutex.OpenExisting(mutexName);
var isMutexOpen = false;
do
{
isMutexOpen = mutex.WaitOne();
}
while (!isMutexOpen);
var streamReader = new StreamReader(mmf.CreateViewStream());
var serializedData = streamReader.ReadLine();
streamReader.Close();
mutex.ReleaseMutex();
var data = JsonConvert.DeserializeObject<List<T>>(serializedData);
mmf.Dispose();
return data;
}
catch (Exception ex)
{
return default(List<T>);
}
}

The process that created the memory mapped file must keep a reference to it for as long as you want it to live. Using CreateOrOpen is a bit tricky for exactly this reason - you don't know whether disposing the memory mapped file is going to destroy it or not.
You can easily see this at work by adding an explicit mmf.Dispose() to your WriteToMemoryFile method - it will close the file completely. The Dispose method is called from the finalizer of the mmf instance some time after all the references to it drop out of scope.
Or, to make it even more obvious that GC is the culprit, you can try invoking GC explicitly:
WriteToMemoryFile("Hi");
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
ReadFromMemoryFile().Dump(); // Nope, the value is lost now
Note that I changed your methods slightly to work with simple strings; you really want to produce the simplest possible code that reproduces the behaviour you observe. Even just having to get JsonConverter is an unnecessary complication, and might cause people to not even try running your code :)
And as a side note, you want to check for AbandonedMutexException when you're doing Mutex.WaitOne - it's not a failure, it means you took over the mutex. Most applications handle this wrong, leading to issues with deadlocks as well as mutex ownership and lifetime :) In other words, treat AbandonedMutexException as success. Oh, and it's a good idea to put stuff like Mutex.ReleaseMutex in a finally clause, to make sure it actually happens, even if you get an exception. A dead thread or process doesn't matter (that will just cause one of the other contenders to get AbandonedMutexException), but if you just get an exception that you "handle" with your return false;, the mutex will not be released until you close all your applications and start again fresh :)
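Putting those pieces together, here is a minimal sketch of a writer that follows this advice; the map name, mutex name, capacity, and method shape are illustrative assumptions, not the original poster's exact API. It keeps the mapping alive in a static field, treats AbandonedMutexException as success, and releases the mutex in a finally block:
// Sketch only: mapName/mutexName/capacity are illustrative assumptions.
private static MemoryMappedFile _mmf; // holding this reference keeps the mapping alive

public static bool WriteToMemoryFile(string mapName, string mutexName, string payload)
{
    // Create once and keep the reference; letting it be finalized would destroy the map.
    if (_mmf == null)
    {
        _mmf = MemoryMappedFile.CreateOrOpen(mapName, 10000);
    }
    using (var mutex = new Mutex(false, mutexName))
    {
        var owned = false;
        try
        {
            try
            {
                mutex.WaitOne();
            }
            catch (AbandonedMutexException)
            {
                // The previous owner died while holding the mutex; we now own it.
            }
            owned = true;
            using (var writer = new StreamWriter(_mmf.CreateViewStream()))
            {
                writer.WriteLine(payload);
            }
            return true;
        }
        finally
        {
            if (owned)
            {
                mutex.ReleaseMutex(); // runs even if the write throws
            }
        }
    }
}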

Clearly, the problem is that the MMF loses its context, as explained by Luaan. But nobody has explained how to deal with it:
The code 'Write to MMF file' must run on a separate async thread.
The code 'Read from MMF' notifies the writer once the read has completed. The notification can be a flag in a file, for example.
The async thread running 'Write to MMF file' therefore stays alive until the MMF has been read by the second part. We have thereby created the context within which the memory mapped file is valid; a sketch follows.
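A rough sketch of that arrangement; the map name, capacity, and the named EventWaitHandle used as the notification are all my assumptions for illustration (serializedData is assumed to be in scope):
Task.Run(() =>
{
    using (var mmf = MemoryMappedFile.CreateOrOpen("my.map", 10000))
    using (var readDone = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\my.map.read"))
    {
        using (var writer = new StreamWriter(mmf.CreateViewStream()))
        {
            writer.WriteLine(serializedData); // assumed to be in scope
        }
        // Holding mmf referenced here keeps the mapping alive until the reader signals.
        readDone.WaitOne();
    }
});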

Related

C# Does Process.Dispose() also cleanup the StandardInput and StandardInput.BaseStream?

Currently working with a process I am starting up and then accessing the StandardInput.BaseStream and then copying a Stream to it. Do I need to Dispose of the StandardInput and/or the StandardInput.BaseStream at all or is that handled with Process.Dispose()?
Process someProgram = null;
try
{
someProgram = new Process();
someProgram.StartInfo.RedirectStandardInput = true;
someProgram.StartInfo.FileName = @"C:\Temp\SomeProgram.exe";
someProgram.Start();
streamParamater.CopyTo(someProgram.StandardInput.BaseStream);
someProgram.WaitForExit();
}
catch
{
// Error Logging
}
finally
{
if (someProgram != null)
{
someProgram.Dispose();
}
streamParamater.Dispose();
}
The readers / writers and their base streams are not disposed by calling Close() or Dispose() on the Process instance.
The Process.Close() method just sets the references to null so that they can be collected by GC once no other references are left.
There is also this comment in the source code of Process.Close():
//Don't call close on the Readers and writers
//since they might be referenced by somebody else while the
//process is still alive but this method called.
So, you have to call Dispose() on the readers / writers if you want to make sure that the resources are freed as soon as possible.
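A hedged sketch of that, based on the code in the question (streamParamater is assumed to be in scope):
using (var someProgram = new Process())
{
    someProgram.StartInfo.RedirectStandardInput = true;
    someProgram.StartInfo.FileName = @"C:\Temp\SomeProgram.exe";
    someProgram.Start();
    // Disposing the StreamWriter also disposes its BaseStream, and closing
    // standard input signals end-of-input to the child process.
    using (var stdin = someProgram.StandardInput)
    {
        streamParamater.CopyTo(stdin.BaseStream);
    }
    someProgram.WaitForExit();
}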

Writing to file in a thread safe manner

Writing a StringBuilder to a file asynchronously. This code takes control of a file, writes a stream to it and releases it. It deals with requests from asynchronous operations, which may come in at any time.
The FilePath is set per class instance (so the lock Object is per instance), but there is potential for conflict since these classes may share FilePaths. That sort of conflict, as well as all other types from outside the class instance, would be dealt with by retries.
Is this code suitable for its purpose? Is there a better way to handle this that means less (or no) reliance on the catch-and-retry mechanic?
Also, how do I avoid catching exceptions that have occurred for other reasons?
public string Filepath { get; set; }
private Object locker = new Object();
public async Task WriteToFile(StringBuilder text)
{
int timeOut = 100;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
while (true)
{
try
{
//Wait for resource to be free
lock (locker)
{
using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
{
writer.Write(text.ToString());
}
}
break;
}
catch
{
//File not available, conflict with other class instances or application
}
if (stopwatch.ElapsedMilliseconds > timeOut)
{
//Give up.
break;
}
//Wait and Retry
await Task.Delay(5);
}
stopwatch.Stop();
}
How you approach this is going to depend a lot on how frequently you're writing. If you're writing a relatively small amount of text fairly infrequently, then just use a static lock and be done with it. That might be your best bet in any case because the disk drive can only satisfy one request at a time. Assuming that all of your output files are on the same drive (perhaps not a fair assumption, but bear with me), there's not going to be much difference between locking at the application level and the lock that's done at the OS level.
So if you declare locker as:
static object locker = new object();
You'll be assured that there are no conflicts with other threads in your program.
If you want this thing to be bulletproof (or at least reasonably so), you can't get away from catching exceptions. Bad things can happen. You must handle exceptions in some way. What you do in the face of error is something else entirely. You'll probably want to retry a few times if the file is locked. If you get a bad path or filename error or disk full or any of a number of other errors, you probably want to kill the program. Again, that's up to you. But you can't avoid exception handling unless you're okay with the program crashing on error.
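As for the asker's last point, avoiding catching exceptions that occurred for other reasons: one option (a sketch, not a complete policy) is to narrow the catch inside the retry loop to IOException, so bad paths, permission problems, and the like propagate immediately instead of being retried:
try
{
    File.AppendAllText(Filepath, text.ToString());
    break; // success, leave the retry loop
}
catch (IOException)
{
    // Sharing violation or similar: fall through to the retry/delay logic below.
}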
By the way, you can replace all of this code:
using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
{
writer.Write(text.ToString());
}
With a single call:
File.AppendAllText(Filepath, text.ToString());
Assuming you're using .NET 4.0 or later. See File.AppendAllText.
One other way you could handle this is to have the threads write their messages to a queue, and have a dedicated thread that services that queue. You'd have a BlockingCollection of messages and associated file paths. For example:
class LogMessage
{
public string Filepath { get; set; }
public string Text { get; set; }
}
BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>();
Your threads write data to that queue:
_logMessages.Add(new LogMessage { Filepath = "foo.log", Text = "this is a test" });
You start a long-running background task that does nothing but service that queue:
foreach (var msg in _logMessages.GetConsumingEnumerable())
{
// of course you'll want your exception handling in here
File.AppendAllText(msg.Filepath, msg.Text);
}
Your potential risk here is that threads create messages too fast, causing the queue to grow without bound because the consumer can't keep up. Whether that's a real risk in your application is something only you can say. If you think it might be a risk, you can put a maximum size (number of entries) on the queue so that if the queue size exceeds that value, producers will wait until there is room in the queue before they can add.
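For example, the bound is just a constructor argument (1000 here is an arbitrary illustrative capacity):
// Producers calling Add() will now block once 1000 messages are queued,
// applying back-pressure instead of letting the queue grow without bound.
BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>(1000);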
You could also use a ReaderWriterLock; it is considered a more 'appropriate' way to control thread safety when dealing with read/write operations...
To debug my web apps (when remote debug fails) I use the following ('debug.txt' ends up in the \bin folder on the server):
public static class LoggingExtensions
{
static ReaderWriterLock locker = new ReaderWriterLock();
public static void WriteDebug(string text)
{
try
{
locker.AcquireWriterLock(int.MaxValue);
System.IO.File.AppendAllLines(Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Replace("file:\\", ""), "debug.txt"), new[] { text });
}
finally
{
locker.ReleaseWriterLock();
}
}
}
Hope this saves you some time.
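A side note, not from the original answer: on .NET 3.5 and later, ReaderWriterLockSlim is generally preferred over the older ReaderWriterLock. The same helper might look like this (path shortened for the sketch):
static ReaderWriterLockSlim lockerSlim = new ReaderWriterLockSlim();

public static void WriteDebug(string text)
{
    lockerSlim.EnterWriteLock();
    try
    {
        System.IO.File.AppendAllLines("debug.txt", new[] { text });
    }
    finally
    {
        lockerSlim.ExitWriteLock();
    }
}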

System.IO.File.Move--How to wait for move completion?

I am writing a WPF application in c# and I need to move some files--the rub is that I really REALLY need to know if the files make it. To do this, I wrote a check that makes sure that the file gets to the target directory after the move--the problem is that sometimes I get to the check before the file finishes moving:
try
{
    System.IO.File.Move(file.FullName, endLocationWithFile);
    System.IO.FileInfo[] filesInDirectory = endLocation.GetFiles();
    foreach (System.IO.FileInfo temp in filesInDirectory)
    {
        if (temp.Name == shortFileName)
        {
            return true;
        }
    }
    // The file we sent over has not gotten to the correct directory....something went wrong!
    throw new IOException("File did not reach destination");
}
catch (Exception e)
{
    //Something went wrong, return a fail;
    logger.writeErrorLog(e);
    return false;
}
Could somebody tell me how to make sure that the file actually gets to the destination?--The files that I will be moving could be VERY large--(Full HD mp4 files of up to 2 hours)
Thanks!
You could use streams with async/await to ensure the file is completely copied
Something like this should work:
private void Button_Click(object sender, RoutedEventArgs e)
{
string sourceFile = @"\\HOMESERVER\Development Backup\Software\Microsoft\en_expression_studio_4_premium_x86_dvd_537029.iso";
string destinationFile = "G:\\en_expression_studio_4_premium_x86_dvd_537029.iso";
MoveFile(sourceFile, destinationFile);
}
private async void MoveFile(string sourceFile, string destinationFile)
{
try
{
using (FileStream sourceStream = File.Open(sourceFile, FileMode.Open))
{
using (FileStream destinationStream = File.Create(destinationFile))
{
await sourceStream.CopyToAsync(destinationStream);
if (MessageBox.Show("I made it in one piece :), would you like to delete me from the original file?", "Done", MessageBoxButton.YesNo) == MessageBoxResult.Yes)
{
sourceStream.Close();
File.Delete(sourceFile);
}
}
}
}
catch (IOException ioex)
{
MessageBox.Show("An IOException occured during move, " + ioex.Message);
}
catch (Exception ex)
{
MessageBox.Show("An Exception occured during move, " + ex.Message);
}
}
If you're using VS2010 you will have to install the Async CTP to use the new async/await syntax
You could watch for the files to disappear from the original directory, and then confirm that they indeed appeared in the target directory.
I have not had great experience with file watchers. I would probably have the thread doing the move wait for an AutoResetEvent while a separate thread or timer runs to periodically check for the files to disappear from the original location, check that they are in the new location, and perhaps (depending on your environment and needs) perform a consistency check (e.g. MD5 check) of the files. Once those conditions are satisfied, the "checker" thread/timer would trigger the AutoResetEvent so that the original thread can progress.
Include some "this is taking way too long" logic in the "checker".
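A rough sketch of that idea follows; the polling interval, timeout, and method shape are my assumptions, and a real version would add the consistency (e.g. MD5) check described above:
static bool MoveAndConfirm(string sourcePath, string destinationPath, TimeSpan timeout)
{
    using (var moveCompleted = new AutoResetEvent(false))
    using (var checker = new System.Threading.Timer(_ =>
    {
        // Done when the file has left the source and appeared at the destination.
        if (!File.Exists(sourcePath) && File.Exists(destinationPath))
            moveCompleted.Set();
    }, null, 0, 500))
    {
        File.Move(sourcePath, destinationPath);
        // "This is taking way too long" guard: give up after the timeout.
        return moveCompleted.WaitOne(timeout);
    }
}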
Why not manage the copy yourself by copying streams?
//http://www.dotnetthoughts.net/writing_file_with_non_cache_mode_in_c/
const FileOptions FILE_FLAG_NO_BUFFERING = (FileOptions) 0x20000000;
//experiment with different buffer sizes for optimal speed
var bufLength = 4096;
using(var outFile =
new FileStream(
destPath,
FileMode.Create,
FileAccess.Write,
FileShare.None,
bufLength,
FileOptions.WriteThrough | FILE_FLAG_NO_BUFFERING))
using(var inFile = File.OpenRead(srcPath))
{
//either
//inFile.CopyTo(outFile);
//or
var fileSizeInBytes = inFile.Length;
var buf = new byte[bufLength];
long totalCopied = 0L;
int amtRead;
while((amtRead = inFile.Read(buf,0,bufLength)) > 0)
{
outFile.Write(buf,0,amtRead);
totalCopied += amtRead;
double progressPct =
Convert.ToDouble(totalCopied) * 100d / fileSizeInBytes;
Console.WriteLine(progressPct); // report copy progress
}
}
//file is written
You most likely want the move to happen in a separate thread so that you aren't stopping the execution of your application for hours.
If the program cannot continue without the move being completed, then you could open a dialog and check in on the move thread periodically to update a progress tracker. This provides the user with feedback and will prevent them from feeling as if the program has frozen.
There's info and an example on this here:
http://hintdesk.com/c-wpf-copy-files-with-progress-bar-by-copyfileex-api/
Try checking periodically in a background task whether the copied file's size has reached the size of the original file (you can also add a hash comparison between the files); a sketch follows.
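A minimal sketch of that check, assuming you capture the source file's length before starting the copy:
static async Task WaitForCopyAsync(string destinationPath, long expectedLength)
{
    // Poll until the destination exists and has reached the expected size;
    // a production version should add a timeout and, ideally, a hash comparison.
    while (!File.Exists(destinationPath) ||
           new FileInfo(destinationPath).Length < expectedLength)
    {
        await Task.Delay(500);
    }
}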
Got a similar problem recently.
OnBackupStarts();
//.. do stuff
new TaskFactory().StartNew(() =>
{
OnBackupStarts()
//.. do stuff
OnBackupEnds();
});
void OnBackupEnds()
{
if (BackupChanged != null)
{
BackupChanged(this, new BackupChangedEventArgs(BackupState.Done));
}
}
Do not wait; react to the event.
First, consider that moving files in an operating system does not "recreate" the file in the new directory; it only changes the file's location data in the file allocation table, since physically copying all the bytes just to delete the old ones would be a waste of time.
For that reason, moving files is a very fast process, no matter the file size.
EDIT: As Mike Christiansen states in his comment, this "speedy" process only happens when files are moving inside the same volume (you know, C:\... to C:\...)
Thus, the copy/delete behaviour proposed by "sa_ddam213" in his response will work, but it is not the optimal solution (it takes longer to finish, and it will not work if, for example, you don't have enough free disk space to hold the copy of the file while the old one still exists, ...).
The MSDN documentation for the File.Move(source, destination) method does not specify whether it waits for completion, but the code given as an example makes a simple File.Exists(...) check, noting that finding the original file still there "is unexpected":
// Move the file.
File.Move(path, path2);
Console.WriteLine("{0} was moved to {1}.", path, path2);
// See if the original exists now.
if (File.Exists(path))
{
Console.WriteLine("The original file still exists, which is unexpected.");
}
else
{
Console.WriteLine("The original file no longer exists, which is expected.");
}
Perhaps you could use a similar approach, checking in a while loop for the existence of the new file and the non-existence of the old one, giving the loop a timed exit in case something unexpected happens at the operating-system level and the files get lost:
// We perform the movement of the file
File.Move(source,destination);
// Set an "exit" datetime after which the loop will end, for example 15 seconds from now. The move should always be quicker than that if the files are on the same volume (almost immediate), but not necessarily if they are on different ones
DateTime exitDateTime = DateTime.Now.AddSeconds(15);
bool exitLoopByExpiration = false;
// We stop here until the move is finished (by checking file existence) or the time limit expires
while (File.Exists(source) && !File.Exists(destination) && !exitLoopByExpiration)
{
    // Compare the current datetime with the exit one; if we have passed it, set the flag
    // so the loop exits by expiration rather than by the file move completing
    if (DateTime.Now.CompareTo(exitDateTime) > 0) { exitLoopByExpiration = true; }
    Thread.Sleep(50); // yield briefly instead of spinning a core while polling
}
//
if (exitLoopByExpiration) {
// We can perform extra work here, like logging the problem or throwing an exception, if the loop exited because the time expired
}
I have checked this solution and it seems to work without problems.

c# memory leak in loop

public void DoPing(object state)
{
string host = state as string;
m_lastPingResult = false;
while (!m_pingThreadShouldStop.WaitOne(250))
{
Ping p = new Ping();
try
{
PingReply reply = p.Send(host, 3000);
if (reply.Status == IPStatus.Success)
{
m_lastPingResult = true;
}
else
{
m_lastPingResult = false;
}
}
catch
{
}
numping = numping + 1;
}
}
Any idea why this code gives me a memory leak? I can tell it's this code, as changing the wait value to smaller or larger values changes the rate of the memory usage. Does anyone have any idea how to resolve it, or how to see what part of the code is causing it?
In some garbage collected languages, there is a limitation that the object isn't collected if the method that created it still hasn't exited.
I believe .net works this way in debug mode. Quoting from this article; note the bolded statement.
http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/
A local variable in a method that is currently running is considered
to be a GC root. The objects referenced by these variables can always
be accessed immediately by the method they are declared in, and so
they must be kept around. The lifetime of these roots can depend on
the way the program was built. In debug builds, a local variable lasts
for as long as the method is on the stack. In release builds, the JIT
is able to look at the program structure to work out the last point
within the execution that a variable can be used by the method and
will discard it when it is no longer required. This strategy isn’t
always used and can be turned off, for example, by running the program
in a debugger.
Garbage collection only happens when there is memory pressure, so just seeing your memory usage go up doesn't mean there is a memory leak, and in this code I don't see how there could be a legitimate leak. You can add
GC.Collect();
GC.WaitForPendingFinalizers();
to double-check, but you shouldn't leave that in production code.
Edit: someone in the comments pointed out that Ping is IDisposable. Not calling Dispose can cause leaks that will eventually get cleaned up, but it may take a long time and cause non-memory-related problems.
Add a finally statement to your try-catch, like this:
catch { }
finally
{
    p.Dispose();
}
using(var p = new Ping())
{
try
{
var reply = p.Send(host, 3000);
if (reply.Status == IPStatus.Success)
_lastPingResult = true;
else
_lastPingResult = false;
}
catch(Exception e)
{
//...
}
}
This can be used from a static class:
public static bool testNet(string pHost, int pTimeout)
{
Ping p = new Ping();
bool isNetOkay = false;
int netTries = 0;
do
{
PingReply reply = p.Send(pHost, pTimeout);
if (reply.Status == IPStatus.Success)
{
isNetOkay = true;
break;
}
netTries++;
} while (netTries < 4);
//Void memory leak
p.Dispose();
return isNetOkay;
}

How do I programmatically use the "using" keyword in C#?

I have some System.Diagnostics.Process instances to run. I'd like to call the Close method on them automatically. Apparently the "using" keyword does this for me.
Is this the way to use the using keyword?
foreach(string command in S) // command is something like "c:\a.exe"
{
try
{
using (Process p = Process.Start(command))
{
// I literally put nothing in here.
}
}
catch (Exception e)
{
// notify of process failure
}
}
I'd like to start multiple processes to run concurrently.
using (Process p = Process.Start(command))
This will compile, as the Process class implements IDisposable; however, you actually want to call the Close method.
Logic would have it that the Dispose method would call Close for you, and by digging into the CLR using Reflector, we can see that it does in fact do this for us. So far so good.
Again using Reflector, I looked at what the Close method does - it releases the underlying native Win32 process handle, and clears some member variables. This (releasing external resources) is exactly what the IDisposable pattern is supposed to do.
However I'm not sure if this is what you want to achieve here.
Releasing the underlying handles simply says to windows 'I am no longer interested in tracking this other process'. At no point does it actually cause the other process to quit, or cause your process to wait.
If you want to force them quit, you'll need to use the p.Kill() method on the processes - however be advised it is never a good idea to kill processes as they can't clean up after themselves, and may leave behind corrupt files, and so on.
If you want to wait for them to quit on their own, you could use p.WaitForExit() - however this will only work if you're waiting for one process at a time. If you want to wait for them all concurrently, it gets tricky.
Normally you'd use WaitHandle.WaitAll for this, but as there's no way to get a WaitHandle object out of a System.Diagnostics.Process, you can't do this (seriously, wtf was Microsoft thinking?).
You could spin up a thread for each process, and call WaitForExit in those threads, but this is also the wrong way to do it.
You instead have to use p/invoke to access the native win32 WaitForMultipleObjects function.
Here's a sample (which I've tested, and actually works)
[System.Runtime.InteropServices.DllImport( "kernel32.dll" )]
static extern uint WaitForMultipleObjects( uint nCount, IntPtr[] lpHandles, bool bWaitAll, uint dwMilliseconds );
static void Main( string[] args )
{
var procs = new Process[] {
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 2'" ),
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 3'" ),
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 4'" ) };
// all started asynchronously in the background
var handles = procs.Select( p => p.Handle ).ToArray();
WaitForMultipleObjects( (uint)handles.Length, handles, true, uint.MaxValue ); // uint.MaxValue waits forever
}
For reference:
The using keyword for IDisposable objects:
using(Writer writer = new Writer())
{
writer.Write("Hello");
}
is just compiler syntax. What it compiles down to is:
Writer writer = null;
try
{
writer = new Writer();
writer.Write("Hello");
}
finally
{
if( writer != null)
{
((IDisposable)writer).Dispose();
}
}
using is a bit better since the compiler prevents you from reassigning the writer reference inside the using block.
The framework guidelines Section 9.3.1 p. 256 state:
CONSIDER providing method Close(), in addition to the Dispose(), if close is standard terminology in the area.
In your code example, the outer try-catch is unnecessary (see above).
Using probably isn't doing what you want here, since Dispose() gets called as soon as p goes out of scope. This doesn't shut down the process (tested).
Processes are independent, so unless you call p.WaitForExit() they spin off and do their own thing completely independent of your program.
Counter-intuitively, for a Process, Close() only releases resources but leaves the program running. CloseMainWindow() can work for some processes, and Kill() will work to kill any process. Both CloseMainWindow() and Kill() can throw exceptions, so be careful if you're using them in a finally block.
To finish, here's some code that waits for processes to finish but doesn't kill off the processes when an exception occurs. I'm not saying it's better than Orion Edwards' approach, just different.
List<System.Diagnostics.Process> processList = new List<System.Diagnostics.Process>();
try
{
foreach (string command in Commands)
{
processList.Add(System.Diagnostics.Process.Start(command));
}
// loop until all spawned processes Exit normally.
while (processList.Any())
{
System.Threading.Thread.Sleep(1000); // wait and see.
List<System.Diagnostics.Process> finished = (from o in processList
where o.HasExited
select o).ToList();
processList = processList.Except(finished).ToList();
foreach (var p in finished)
{
// could inspect exit code and exit time.
// note many properties are unavailable after process exits
p.Close();
}
}
}
catch (Exception ex)
{
// log the exception
throw;
}
finally
{
foreach (var p in processList)
{
if (p != null)
{
//if (!p.HasExited)
// processes will still be running
// but CloseMainWindow() or Kill() can throw exceptions
p.Dispose();
}
}
}
I didn't bother Kill()'ing off the processes because the code starts get even uglier. Read the msdn documentation for more information.
try
{
foreach(string command in S) // command is something like "c:\a.exe"
{
using (Process p = Process.Start(command))
{
// I literally put nothing in here.
}
}
}
catch (Exception e)
{
// notify of process failure
}
The reason this works is that when the exception happens, the variable p falls out of scope, and thus its Dispose method is called, which closes the process. Additionally, I would think you'd want to spin off a thread for each command rather than wait for each executable to finish before going on to the next one, as sketched below.
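A sketch of that thread-per-command variant, reusing the question's S collection of command strings (each thread owns its own Process and still disposes it via using):
var threads = S.Select(command => new Thread(() =>
{
    try
    {
        using (Process p = Process.Start(command))
        {
            p.WaitForExit(); // hold the handle until this command finishes
        }
    }
    catch (Exception)
    {
        // notify of process failure
    }
})).ToList();

threads.ForEach(t => t.Start()); // all commands now run concurrently
threads.ForEach(t => t.Join());  // wait for every one to finish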
