I'm currently starting up a Process, accessing its StandardInput.BaseStream, and copying a Stream to it. Do I need to Dispose of StandardInput and/or StandardInput.BaseStream at all, or is that handled by Process.Dispose()?
Process someProgram = null;
try
{
someProgram = new Process();
someProgram.StartInfo.RedirectStandardInput = true;
someProgram.StartInfo.FileName = @"C:\Temp\SomeProgram.exe";
someProgram.Start();
streamParamater.CopyTo(someProgram.StandardInput.BaseStream);
someProgram.WaitForExit();
}
catch
{
// Error Logging
}
finally
{
if (someProgram != null)
{
someProgram.Dispose();
}
streamParamater.Dispose();
}
The readers / writers and their base streams are not disposed by calling Close() or Dispose() on the Process instance.
The Process.Close() method just sets the references to null so that they can be collected by the GC once there are no other references left.
There is also this comment in the source code of Process.Close():
//Don't call close on the Readers and writers
//since they might be referenced by somebody else while the
//process is still alive but this method called.
So, you have to call Dispose() on the readers / writers if you want to make sure that the resources are freed as soon as possible.
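For example, a minimal sketch based on the code in the question (streamParamater and the program path are taken from the question; note that UseShellExecute has to be false for the redirection to work):
using (var someProgram = new Process())
{
    someProgram.StartInfo.RedirectStandardInput = true;
    someProgram.StartInfo.UseShellExecute = false;       // required for redirected streams
    someProgram.StartInfo.FileName = @"C:\Temp\SomeProgram.exe";
    someProgram.Start();

    // Disposing the StandardInput writer also disposes its BaseStream
    // and signals end-of-input to the child process.
    using (var stdIn = someProgram.StandardInput)
    {
        streamParamater.CopyTo(stdIn.BaseStream);
        stdIn.BaseStream.Flush();
    }

    someProgram.WaitForExit();
}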
Related
For some reason, when I read from a memory-mapped file a couple of times, it just gets randomly deleted from memory; I don't know what's going on. Is the kernel or the GC deleting it from memory? If so, how do I prevent that?
I am serializing an object to JSON and writing it to the memory-mapped file.
When trying to read again after a couple of times, I get a FileNotFoundException: Unable to find the specified file.
private const String Protocol = @"Global\";
Code to write to memory mapped file:
public static Boolean WriteToMemoryFile<T>(List<T> data)
{
try
{
if (data == null)
{
throw new ArgumentNullException("data", "Data cannot be null");
}
var mapName = typeof(T).FullName.ToLower();
var mutexName = Protocol + typeof(T).FullName.ToLower();
var serializedData = JsonConvert.SerializeObject(data);
var capacity = serializedData.Length + 1;
var mmf = MemoryMappedFile.CreateOrOpen(mapName, capacity);
var isMutexCreated = false;
var mutex = new Mutex(true, mutexName, out isMutexCreated);
if (!isMutexCreated)
{
var isMutexOpen = false;
do
{
isMutexOpen = mutex.WaitOne();
}
while (!isMutexOpen);
var streamWriter = new StreamWriter(mmf.CreateViewStream());
streamWriter.WriteLine(serializedData);
streamWriter.Close();
mutex.ReleaseMutex();
}
else
{
var streamWriter = new StreamWriter(mmf.CreateViewStream());
streamWriter.WriteLine(serializedData);
streamWriter.Close();
mutex.ReleaseMutex();
}
return true;
}
catch (Exception ex)
{
return false;
}
}
Code to read from memory mapped file:
public static List<T> ReadFromMemoryFile<T>()
{
try
{
var mapName = typeof(T).FullName.ToLower();
var mutexName = Protocol + typeof(T).FullName.ToLower();
var mmf = MemoryMappedFile.OpenExisting(mapName);
var mutex = Mutex.OpenExisting(mutexName);
var isMutexOpen = false;
do
{
isMutexOpen = mutex.WaitOne();
}
while (!isMutexOpen);
var streamReader = new StreamReader(mmf.CreateViewStream());
var serializedData = streamReader.ReadLine();
streamReader.Close();
mutex.ReleaseMutex();
var data = JsonConvert.DeserializeObject<List<T>>(serializedData);
mmf.Dispose();
return data;
}
catch (Exception ex)
{
return default(List<T>);
}
}
The process that created the memory mapped file must keep a reference to it for as long as you want it to live. Using CreateOrOpen is a bit tricky for exactly this reason - you don't know whether disposing the memory mapped file is going to destroy it or not.
You can easily see this at work by adding an explicit mmf.Dispose() to your WriteToMemoryFile method - it will close the file completely. The Dispose method is called from the finalizer of the mmf instance some time after all the references to it drop out of scope.
Or, to make it even more obvious that GC is the culprit, you can try invoking GC explicitly:
WriteToMemoryFile("Hi");
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
ReadFromMemoryFile().Dump(); // Nope, the value is lost now
Note that I changed your methods slightly to work with simple strings; you really want to produce the simplest possible code that reproduces the behaviour you observe. Even just having to pull in JsonConvert is an unnecessary complication, and might cause people to not even try running your code :)
And as a side note, you want to check for AbandonedMutexException when you're doing Mutex.WaitOne - it's not a failure, it means you took over the mutex. Most applications handle this wrong, leading to issues with deadlocks as well as mutex ownership and lifetime :) In other words, treat AbandonedMutexException as success. Oh, and it's a good idea to put stuff like Mutex.ReleaseMutex in a finally clause, to make sure it actually happens even if you get an exception. A dead thread or process doesn't matter (that will just cause one of the other contenders to get AbandonedMutexException), but if you just get an exception that you "handle" with your return false;, the mutex will not be released until you close all your applications and start again fresh :)
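A minimal sketch of that pattern (a hypothetical helper, not part of the code above):
using System;
using System.Threading;

static class MutexHelper
{
    // Runs the given action while holding the mutex, treating
    // AbandonedMutexException as a successful acquisition and always
    // releasing the mutex in a finally block.
    public static void RunLocked(Mutex mutex, Action action)
    {
        try
        {
            mutex.WaitOne();
        }
        catch (AbandonedMutexException)
        {
            // The previous owner died without releasing the mutex;
            // this thread now owns it, so carry on.
        }

        try
        {
            action();
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}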
Clearly, the problem is that the MMF loses its context, as Luaan explained. But nobody has explained how to actually deal with that:
The code 'Write to MMF file' must run on a separate async thread.
The code 'Read from MMF' will notify once read completed that the MMF had been read. The notification can be a flag in a file for example.
The async thread running the 'Write to MMF file' code therefore keeps running for as long as the MMF is being read by the second part. This creates the context within which the memory-mapped file stays valid.
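A rough sketch of that idea (hypothetical names; it uses a named EventWaitHandle as the 'read completed' notification instead of a flag file):
using System.IO.MemoryMappedFiles;
using System.Threading;

public static class MmfWriter
{
    // Intended to run on the writer's background thread: it keeps the
    // MemoryMappedFile reference alive until the reader signals that it
    // has finished reading.
    public static void WriteAndWaitForReader(string mapName, byte[] payload)
    {
        using (var readDone = new EventWaitHandle(false, EventResetMode.ManualReset, mapName + ".read"))
        using (var mmf = MemoryMappedFile.CreateOrOpen(mapName, payload.Length))
        {
            using (var stream = mmf.CreateViewStream())
            {
                stream.Write(payload, 0, payload.Length);
            }

            // Holding on to 'mmf' here is what creates the "context";
            // it is only disposed after the reader has signalled.
            readDone.WaitOne();
        }
    }
}
The reader would open the same map name, read the data, and then call Set() on the same named EventWaitHandle.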
I have been searching for a way to do this for about 6 hours, but didn't find one.
Is there any way I can change a process's parent process? Some API maybe?
Google didn't give much, and neither did this site, so I opened a new question.
What I'm trying to do is lock a file for my own use, then delete it.
I create the file with program A and use it with program B; when B finishes using it, I delete it with A. The problem is that B creates a subprocess which doesn't have B as its parent, so when I use:
File.Open(_moviePath, FileMode.Open, FileAccess.Read, FileShare.Inheritable);
to try to lock the file (because I don't want other programs/users to be able to copy it), it fails.
Thanks.
Instead of locking the file this way, why not use a Mutex? It allows for cross-process locking, and will work fine as long as everything stays on a single box: http://msdn.microsoft.com/en-us/library/bwe34f1k(v=vs.110).aspx
And no, you cannot reassign a child process to a different parent process.
Here is an example (adapted from http://www.dotnetperls.com/mutex); I will explain below:
using System;
using System.Threading;
class Program
{
static Mutex _m;
static bool IsMutexExisting(string token)
{
try
{
// Try to open an existing mutex with that name.
Mutex.OpenExisting(token);
}
catch
{
// No mutex with that name exists (yet).
return false;
}
// The mutex exists.
return true;
}
So in your example, program A will do its thing and then wait. How do we get A to wait?
Have program A poll for a mutex that only B will create, for example (pseudocode):
while( IsMutexExisting("B Token") == false )
{
System.Threading.Thread.Sleep(500); //sleep for a 1/2 sec
}
//ok, B has created the mutex, let's wait for it to be released indicating it is complete.
Mutex m = Mutex.OpenExisting("B Token");
m.WaitOne(); // will block execution until B releases the Mutex
// lock created, this means B signaled us
// do the rest of A code here...
Program B:
<does what it does>
//Create Mutex to signal A
Mutex m = null;
try
{
m = new Mutex(true, "B Token");
...
...
}
finally
{
m.ReleaseMutex();
}
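Put together, a rough end-to-end sketch of program A's side (it assumes the IsMutexExisting helper and the "B Token" name from the snippets above):
static void WaitForB()
{
    // Poll until B has created the mutex.
    while (!IsMutexExisting("B Token"))
    {
        Thread.Sleep(500);
    }

    using (Mutex m = Mutex.OpenExisting("B Token"))
    {
        try
        {
            m.WaitOne();                 // blocks until B releases it
        }
        catch (AbandonedMutexException)
        {
            // B exited without releasing; we still acquired the mutex.
        }
        m.ReleaseMutex();
    }

    // B has finished, so A can now delete the file.
}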
public void DoPing(object state)
{
string host = state as string;
m_lastPingResult = false;
while (!m_pingThreadShouldStop.WaitOne(250))
{
Ping p = new Ping();
try
{
PingReply reply = p.Send(host, 3000);
if (reply.Status == IPStatus.Success)
{
m_lastPingResult = true;
}
else
{
m_lastPingResult = false;
}
}
catch
{
}
numping = numping + 1;
}
}
Any idea why this code gives me a memory leak? I can tell it's this code, because changing the wait value makes memory usage grow faster or slower. Does anyone have any idea how to resolve it, or how to see what part of the code is causing it?
In some garbage collected languages, there is a limitation that the object isn't collected if the method that created it still hasn't exited.
I believe .NET works this way in debug mode. Quoting from this article (note the part about debug builds):
http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/
A local variable in a method that is currently running is considered
to be a GC root. The objects referenced by these variables can always
be accessed immediately by the method they are declared in, and so
they must be kept around. The lifetime of these roots can depend on
the way the program was built. In debug builds, a local variable lasts
for as long as the method is on the stack. In release builds, the JIT
is able to look at the program structure to work out the last point
within the execution that a variable can be used by the method and
will discard it when it is no longer required. This strategy isn’t
always used and can be turned off, for example, by running the program
in a debugger.
Garbage collection only happens when there is memory pressure, so just seeing your memory usage go up doesn't mean there is a memory leak, and in this code I don't see how there could be a legitimate leak. You can add
GC.Collect();
GC.WaitForPendingFinalizers();
to double-check, but you shouldn't leave that in production code.
Edit: someone in the comments pointed out that Ping is IDisposable. Not calling Dispose can cause leaks that will eventually get cleaned up, but it may take a long time and cause non-memory-related problems.
Add a finally statement to your try-catch, like this:
catch {}
finally
{
p.Dispose();
}
using(var p = new Ping())
{
try
{
var reply = p.Send(host, 3000);
if (reply.Status == IPStatus.Success)
_lastPingResult = true;
else
_lastPingResult = false;
}
catch(Exception e)
{
//...
}
}
This can be used from a static class:
public static bool testNet(string pHost, int pTimeout)
{
Ping p = new Ping();
bool isNetOkay = false;
int netTries = 0;
do
{
PingReply reply = p.Send(pHost, pTimeout);
if (reply.Status == IPStatus.Success)
{
isNetOkay = true;
break;
}
netTries++;
} while (netTries < 4);
// Avoid a memory leak: release the Ping's resources
p.Dispose();
return isNetOkay;
}
My answer:
After getting annoyed, I found a solution. The problem was indeed either C#'s garbage collector or C#'s multithreading: it probably thought the object was no longer needed within THAT thread, and deleted it. The solution was as follows:
I moved the ClientThread method into the Server class, passing the Client object as a parameter; this minor change made it work. Thank you for all your responses. If anyone has this problem in the future, maybe it wasn't C#'s garbage collector, but rather that C# multithreading OR networking must be done within the same class. I kept my Client class and just made the thread object run the method within the Server class.
If anyone can figure out what my problem was, feel free to comment so I can expand my little knowledge of C#'s memory management.
Thanks again to all the people who attempted to help me in this thread.
Original Question
I'm a C++ programmer so I'm used to managing memory myself, and I'm really not sure how to solve this problem.
For instance in C++:
while(true)
{
void* ptr = new char[1000];
}
This would be an obvious memory leaking program, so I need to go ahead and clean it up with:
delete[] ptr;
But there are cases when I want to create memory for use in a different thread and I DO NOT WANT IT DELETED AFTER THE LOOP.
while(true)
{
socket.Accept(new Client());
}
//////////Client Constructor////////////
Client()
{
clientThread.Start();
}
This snippet is basically what I want to do in C#, but my client connects and then disconnects immediately. I'm assuming this is because, at the end of the while loop, my new Client() is being deleted by our favorite garbage collector.
So my question is: how do I get around this and make it NOT delete my object?
Many have replied saying various things about keeping other references to it in the code. I forgot to mention that I also save each new client in a global list of clients:
List<Client> clients;
//inside loop
clients.Add(new Client(socket.Accept()));
OK, because I'm unsure whether I'm missing more information, here is the ACTUAL code snippet:
// Server class
internal Socket socket { get; set; }
internal Thread thread { get; set; }
internal List<Client> clients { get; set; }
internal void Init()
{
socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
thread = new Thread(AcceptThread);
}
internal void Start(int port,List<Client> clients)
{
var ipep = new IPEndPoint(IPAddress.Any, port);
this.socket.Bind(ipep);
this.socket.Listen(10);
this.clients = clients;
this.thread.Start();
}
internal void End()
{
socket.Close();
thread.Abort();
}
internal void AcceptThread()
{
int ids = 0;
while (true)
{
Client client = new Client();
client.Init(socket.Accept());
client.clientid = ids++;
client.Start();
clients.Add(client);
}
}
// Client class
public class Client
{
.....
#region Base Code
internal void Init(Socket socket)
{
this.socket = socket;
this.status = new SocketStatus();
this.thread = new Thread(ClientThread);
this.stream = new Stream();
}
internal void Start()
{
thread.Start();
}
internal void Close()
{
socket.Close();
status = SocketStatus.Null;
thread.Abort();
}
internal void ClientThread()
{
try
{
while (true)
{
byte[] data = new byte[1];
int rec = socket.Receive(data);
if (rec == 0)
throw new Exception();
else
stream.write(data[0]);
}
}
catch(Exception e)
{
Close();
}
}
#endregion
}
I thank you for all your replies.
That's not how it works at all. If there exists any reference to the instance of Client you created, it is not garbage-collected. This doesn't just apply to your own code, either. Therefore, if GCing is indeed the source of your issue, you never could have accessed it in the first place!
If you weren't intending to access the clients again, you can hold on to them anyway by putting them in a List. However, I believe that once you actually use them in the other thread you're talking about, your problems will go away.
I've been out of the C# game for a while, but I don't see anything immediately wrong there. Garbage collection shouldn't kick in until objects are actually not referenced anymore. If your socket.Accept() doesn't keep a reference, perhaps you could do this manually:
var clients = new List<Client>();
while(true)
{
var client = new Client();
clients.Add(client);
socket.Accept(client);
}
////////// Client Constructor ////////////
Client()
{
clientThread.Start();
}
From MSDN:
If no data is available for reading, the Receive method will block until data is
available, unless a time-out value was set by using
Socket.ReceiveTimeout. If the time-out value was exceeded, the Receive
call will throw a SocketException. If you are in non-blocking mode,
and there is no data available in the protocol stack buffer,
the Receive method will complete immediately and throw a
SocketException. You can use the Available property to determine if
data is available for reading. When Available is non-zero, retry the
receive operation.
If you are using a connection-oriented Socket, the Receive method will
read as much data as is available, up to the size of the buffer. If
the remote host shuts down the Socket connection with the Shutdown
method, and all available data has been received, the Receive method
will complete immediately and return zero bytes.
This appears to be the only way to get a 0 return value from the Receive method, and not get an exception, so it would appear that whatever is on the other end is closing the connection.
The garbage collector only deletes resources that aren't reachable through any reference in your program. As long as you still have a variable that refers to the object, it'll continue to exist.
I have some System.Diagnostics.Process instances to run. I'd like to call the Close method on them automatically. Apparently the "using" keyword does this for me.
Is this the way to use the using keyword?
foreach(string command in S) // command is something like "c:\a.exe"
{
try
{
using (var p = Process.Start(command))
{
// I literally put nothing in here.
}
}
catch (Exception e)
{
// notify of process failure
}
}
I'd like to start multiple processes to run concurrently.
using (var p = Process.Start(command))
This will compile, as the Process class implements IDisposable; however, you actually want to call the Close method.
Logic would have it that the Dispose method would call Close for you, and by digging into the CLR using reflector, we can see that it does in fact do this for us. So far so good.
Again using reflector, I looked at what the Close method does - it releases the underlying native win32 process handle, and clears some member variables. This (releasing external resources) is exactly what the IDisposable pattern is supposed to do.
However I'm not sure if this is what you want to achieve here.
Releasing the underlying handles simply says to windows 'I am no longer interested in tracking this other process'. At no point does it actually cause the other process to quit, or cause your process to wait.
If you want to force them quit, you'll need to use the p.Kill() method on the processes - however be advised it is never a good idea to kill processes as they can't clean up after themselves, and may leave behind corrupt files, and so on.
If you want to wait for them to quit on their own, you could use p.WaitForExit() - however this will only work if you're waiting for one process at a time. If you want to wait for them all concurrently, it gets tricky.
Normally you'd use WaitHandle.WaitAll for this, but as there's no way to get a WaitHandle object out of a System.Diagnostics.Process, you can't do that (seriously, wtf were Microsoft thinking?).
You could spin up a thread for each process and call WaitForExit in those threads, but this is also the wrong way to do it.
You instead have to use p/invoke to access the native win32 WaitForMultipleObjects function.
Here's a sample (which I've tested, and which actually works):
[System.Runtime.InteropServices.DllImport( "kernel32.dll" )]
static extern uint WaitForMultipleObjects( uint nCount, IntPtr[] lpHandles, bool bWaitAll, uint dwMilliseconds );
static void Main( string[] args )
{
var procs = new Process[] {
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 2'" ),
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 3'" ),
Process.Start( @"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 4'" ) };
// all started asynchronously in the background
var handles = procs.Select( p => p.Handle ).ToArray();
WaitForMultipleObjects( (uint)handles.Length, handles, true, uint.MaxValue ); // uint.maxvalue waits forever
}
For reference:
The using keyword for IDisposable objects:
using(Writer writer = new Writer())
{
writer.Write("Hello");
}
is just compiler syntax. What it compiles down to is:
Writer writer = null;
try
{
writer = new Writer();
writer.Write("Hello");
}
finally
{
if( writer != null)
{
((IDisposable)writer).Dispose();
}
}
using is a bit better since the compiler prevents you from reassigning the writer reference inside the using block.
The framework guidelines Section 9.3.1 p. 256 state:
CONSIDER providing method Close(), in addition to the Dispose(), if close is standard terminology in the area.
In your code example, the outer try-catch is unnecessary (see above).
using probably isn't doing what you want here, since Dispose() gets called as soon as p goes out of scope. This doesn't shut down the process (tested).
Processes are independent, so unless you call p.WaitForExit() they spin off and do their own thing completely independent of your program.
Counter-intuitively, for a Process, Close() only releases resources but leaves the program running. CloseMainWindow() can work for some processes, and Kill() will work to kill any process. Both CloseMainWindow() and Kill() can throw exceptions, so be careful if you're using them in a finally block.
To finish, here's some code that waits for processes to finish but doesn't kill off the processes when an exception occurs. I'm not saying it's better than Orion Edwards' answer, just different.
List<System.Diagnostics.Process> processList = new List<System.Diagnostics.Process>();
try
{
foreach (string command in Commands)
{
processList.Add(System.Diagnostics.Process.Start(command));
}
// loop until all spawned processes Exit normally.
while (processList.Any())
{
System.Threading.Thread.Sleep(1000); // wait and see.
List<System.Diagnostics.Process> finished = (from o in processList
where o.HasExited
select o).ToList();
processList = processList.Except(finished).ToList();
foreach (var p in finished)
{
// could inspect exit code and exit time.
// note many properties are unavailable after process exits
p.Close();
}
}
}
catch (Exception ex)
{
// log the exception
throw;
}
finally
{
foreach (var p in processList)
{
if (p != null)
{
//if (!p.HasExited)
// processes will still be running
// but CloseMainWindow() or Kill() can throw exceptions
p.Dispose();
}
}
}
I didn't bother Kill()'ing off the processes because the code gets even uglier. Read the MSDN documentation for more information.
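If you did decide to kill off any stragglers in the finally block, a rough sketch (Kill() and CloseMainWindow() can throw if the process has already exited or access is denied, hence the extra try/catch) could look like this:
foreach (var p in processList)
{
    if (p == null)
        continue;

    try
    {
        if (!p.HasExited)
        {
            p.Kill();                // may throw if the process just exited
            p.WaitForExit(2000);     // give it a moment to go away
        }
    }
    catch (Exception)
    {
        // already exited, access denied, etc. -- nothing useful to do here
    }
    finally
    {
        p.Dispose();
    }
}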
try
{
foreach(string command in S) // command is something like "c:\a.exe"
{
using (var p = Process.Start(command))
{
// I literally put nothing in here.
}
}
}
catch (Exception e)
{
// notify of process failure
}
The reason this works is that when the exception happens, the variable p falls out of the scope of the using block, and thus its Dispose method is called, which closes the process. Additionally, I would think you'd want to spin off a thread for each command rather than wait for one executable to finish before going on to the next.