I have a bunch of incoming web requests which I need to serialize and process one after another; I cannot process more than one at a time. My current solution is to run a tool which can be accessed by a Web API through a memory-mapped file acting as shared memory. I also need a mutex to allow exclusive access to the shared memory, and an event to signal that a task was added.
So basically we have multiple producers and one consumer. What follows is my first solution. Could someone tell me whether there are any race conditions or other problems?
MemoryMappedFile MMF = MemoryMappedFile.CreateNew("Task_Queue", 5000);
MemoryMappedViewAccessor MMF_Accessor = MMF.CreateViewAccessor();
bool Mutex_Created = false;
Mutex Mutex = new System.Threading.Mutex(false, "Mutex", out Mutex_Created);
if(Mutex_Created==false)
{
// bad error
return;
}
EventWaitHandle EWH = new EventWaitHandle(false, EventResetMode.ManualReset);
Random Rand = new Random();
// Consumer
// work on the Tasks
Task.Factory.StartNew(() =>
{
while (true)
{
// wait until a task has been added
EWH.WaitOne();
// get exclusive access to read in task
Mutex.WaitOne();
byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(new string(' ', 5000));
MMF_Accessor.ReadArray(0, Buffer, 0, Buffer.Length);
int Position = 0;
foreach (var b in Buffer) { if (b == 0) break; Position++; }
string Message = string.Empty;
for (int i = 0; i < Position; ++i)
{
Message += (char)Buffer[i];
}
if(string.IsNullOrEmpty(Message) == false)
{
Console.WriteLine(Message);
}
Mutex.ReleaseMutex();
EWH.Reset();
}
});
// Producer
// via a web request
Task.Factory.StartNew(() =>
{
while (true)
{
if(EWH.WaitOne(0))
{
// consumer must take in the task first
// wait a bit and then try again
Thread.Sleep(100);
break;
}
// wait until we access Shared Memory and claim it when we can
Mutex.WaitOne();
string Request = "Task 1 ";
byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(Request);
Buffer[Buffer.Length - 1] = 0;
MMF_Accessor.WriteArray(0, Buffer, 0, Buffer.Length);
// Signal that a task has been added to shared memory
EWH.Set();
// release the mutex so that others can use the shared memory
Mutex.ReleaseMutex();
Thread.Sleep(Rand.Next(10, 1000));
}
});
// Producer
// via a web request
Task.Factory.StartNew(() =>
{
while (true)
{
// wait until we access Shared Memory and claim it when we can
Mutex.WaitOne();
string Request = "Task 2 ";
byte[] Buffer = ASCIIEncoding.ASCII.GetBytes(Request);
Buffer[Buffer.Length - 1] = 0;
MMF_Accessor.WriteArray(0, Buffer, 0, Buffer.Length);
// Signal that a task has been added to shared memory
EWH.Set();
// release the mutex so that others can use the shared memory
Mutex.ReleaseMutex();
Random r = new Random();
Thread.Sleep(Rand.Next(10, 1000));
}
});
while (Console.Read() != 'q') ;
Please note the posted code is just for demonstration purposes.
I read data from the serial port and parse it in a separate class. However, the data is incorrectly parsed: some samples are repeated while others are missing.
Here is an example of the parsed packets. Each line starts with the packetIndex (it should start at 1 and increment). You can see how the packetIdx repeats and some of the other values repeat as well. I think that's due to multithreading, but I'm not sure how to fix it.
2 -124558.985180734 -67934.4168823262 -164223.049786454 -163322.386243628
2 -124619.580759952 -67962.535376851 -164191.757344217 -163305.68949052
3 -124685.719571795 -67995.8394760894 -164191.042088394 -163303.119039907
5 -124801.747477263 -68045.7062179692 -164195.288919841 -163299.140429394
6 -124801.747477263 -68045.7062179692 -164221.105184687 -163297.46404856
6 -124832.8387538 -68041.9287731563 -164214.936103217 -163294.983004926
This is what I should receive:
1 -124558.985180734 -67934.4168823262 -164223.049786454 -163322.386243628
2 -124619.580759952 -67962.535376851 -164191.757344217 -163305.68949052
3 -124685.719571795 -67995.8394760894 -164191.042088394 -163303.119039907
4 -124801.747477263 -68045.7062179692 -164195.288919841 -163299.140429394
...
This is the SerialPort DataReceived handler:
public void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
lock (_lock)
{
byte[] buffer = new byte[_serialPort1.BytesToRead];
_serialPort1.Read(buffer, 0, buffer.Length);
for (int i = 0; i < buffer.Length; i++)
{
//Parse data
double[] samplesAtTimeT = DataParserObj.interpretBinaryStream(buffer[i]);
//Add data to BlockingCollection when parsed
if (samplesAtTimeT != null)
_bqBufferTimerSeriesData.Add(samplesAtTimeT);
}
}
}
And the class that parses the data:
public class DataParser
{
private int packetSampleCounter = 0;
private int localByteCounter = 0;
private int packetState = 0;
private byte[] tmpBuffer = new byte[3];
private double[] ParsedData = new double[5]; //[0] packetIdx (0-255), [1-4] signal
public double[] interpretBinaryStream(byte actbyte)
{
bool returnDataFlag = false;
switch (packetState)
{
case 0: // end packet indicator
if (actbyte == 0xC0)
packetState++;
break;
case 1: // start packet indicator
if (actbyte == 0xA0)
packetState++;
else
packetState = 0;
break;
case 2: // packet Index
packetSampleCounter = 0;
ParsedData[packetSampleCounter] = actbyte;
packetSampleCounter++;
localByteCounter = 0;
packetState++;
break;
case 3: //channel data (4 channels x 3byte/channel)
// 3 bytes
tmpBuffer[localByteCounter] = actbyte;
localByteCounter++;
if (localByteCounter == 3)
{
ParsedData[packetSampleCounter] = Bit24ToInt32(tmpBuffer);
if (packetSampleCounter == 5)
packetState++; //move to next state, end of packet
else
localByteCounter = 0;
}
break;
case 4: // end packet
if (actbyte == 0xC0)
{
returnDataFlag = true;
packetState = 1;
}
else
packetState = 0;
break;
default:
packetState = 0;
break;
}
if (returnDataFlag)
return ParsedData;
else
return null;
}
}
Get rid of the DataReceived event and instead use await serialPort.BaseStream.ReadAsync(...) to get notified when data comes in. async/await is much cleaner and doesn't force you into multithreaded data processing. For high-speed networking, parallel processing is great, but serial ports are slow, so extra threads have no benefit.
Also, BytesToRead is buggy (it does return the number of queued bytes, but it destroys other state) and you should never call it.
Finally, do NOT ignore the return value from Read (or BaseStream.ReadAsync). You need to know how many bytes were actually placed into your buffer, because it is not guaranteed to be the same number you asked for.
private async void ReadTheSerialData()
{
var buffer = new byte[200];
while (serialPort.IsOpen) {
var valid = await serialPort.BaseStream.ReadAsync(buffer, 0, buffer.Length);
for (int i = 0; i < valid; ++i)
{
//Parse data
double[] samplesAtTimeT = DataParserObj.interpretBinaryStream(buffer[i]);
//Add data to BlockingCollection when parsed
if (samplesAtTimeT != null)
_bqBufferTimerSeriesData.Add(samplesAtTimeT);
}
}
}
Just call this function after opening the port and setting your flow control, timeouts, etc. You may find that you no longer need the blocking queue, but can just handle the contents of samplesAtTimeT directly.
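For example, a minimal setup might be (the port name, baud rate, and handshake setting here are assumptions; match them to your device):
serialPort = new SerialPort("COM3", 115200, Parity.None, 8, StopBits.One);
serialPort.Handshake = Handshake.None;   // flow control, per the note above
serialPort.Open();
ReadTheSerialData();                     // starts the read loop shown above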
So I've got a server that shoots off the task below in a while loop; it's functioning as my client listener. It loops through the clients as fast as possible, which is great, but it fires tasks off too fast: before the previous task for a client is done, a new one starts (for the same client).
I don't want it to wait for the task to complete! I want it to keep firing off tasks, just not create any more tasks for a specific client that already has one running.
What is the best way to go about this? I see a lot of people using WhenAll or something, but I don't care about all the tasks.
//the below is in a while loop which goes through the clients that are connected.
if (stream.DataAvailable)
{
// the below task is the one that fires before the previously fired one completes.
Task DataInterpetTask = Task.Factory.StartNew(() =>
{
int totalBuffer, totalRecieved = 0;
byte[] totalBufferByte = new byte[4];
byte[] buffer = new byte[0];
byte[] tbuffer;
int rLength, prevLength;
stream.Read(totalBufferByte, 0, 4);
totalBuffer = BitConverter.ToInt32(totalBufferByte, 0);
Console.WriteLine("got stuff: " + totalBuffer);
byte[] buf = new byte[c.getClientSocket().ReceiveBufferSize];
while (totalBuffer > totalRecieved)
{
rLength = stream.Read(buf, 0, buf.Length);
totalRecieved = rLength + totalRecieved;
if (rLength < buf.Length)
{
byte[] temp = new byte[rLength];
Array.Copy(buf, temp, rLength);
buf = temp;
}
prevLength = buffer.Length;
tbuffer = buffer;
buffer = new byte[buffer.Length + rLength];
Array.Copy(tbuffer, buffer, tbuffer.Length);
buf.CopyTo(buffer, prevLength);
}
String msg = Encoding.ASCII.GetString(buffer);
if (msg.Contains("PNG") || msg.Contains("RBG") || msg.Contains("HDR"))
{
Console.WriteLine("Receiving Picture");
RowContainer tempContainer;
if ((tempContainer = MainWindow.mainWindow.RowExists(c)) != null)
{
tempContainer.Image = buffer;
Console.WriteLine("Updating row: " + tempContainer.rowNumber);
MainWindow.mainWindowDispacter.BeginInvoke(new Action(() =>
MainWindow.mainWindow.UpdateRowContainer(tempContainer, 0)));
}
else
{
Console.WriteLine("Adding row for Picture");
MainWindow.mainWindowDispacter.BeginInvoke(new Action(() =>
MainWindow.mainWindow.CreateClientRowContainer(c, buffer)));
}
return;
}
String switchString = msg.Substring(0, 4);
if (msg.Length > 4)
msg = msg.Substring(4);
MainWindow.mainWindowDispacter.BeginInvoke(new Action(() =>
{
if (MainWindow.debugWindow != null)
MainWindow.debugWindow.LogTextBox.AppendText("Received message " + msg + " from client: " + c.getClientIp() + " as a " + switchString + " type" + Environment.NewLine);
}));
RowContainer tContain = MainWindow.mainWindow.RowExists(c);
if(tContain == null)
return;
switch (switchString)
{
case "pong":
c.RespondedPong();
break;
case "stat":
tContain.SetState(msg);
MainWindow.mainWindow.UpdateRowContainer(tContain, 1);
break;
case "osif":
tContain.Os = msg;
MainWindow.mainWindowDispacter.BeginInvoke(new Action(() =>
MainWindow.mainWindow.UpdateRowContainer(tContain, 2)));
break;
case "idle":
tContain.idle = msg;
MainWindow.mainWindowDispacter.BeginInvoke(new Action(() =>
MainWindow.mainWindow.UpdateRowContainer(tContain, 3)));
break;
}
});
}
If you don't want to start a new operation for a given client until the previous one has completed, then just keep track of that. Either move the client object to some "in progress" list, so that it's not even in the list of clients that you are processing in your loop, or add a flag to the client object class that indicates that the operation is "in progress", so that your loop can ignore the client until the current operation has completed (e.g. if (c.OperationInProgress) continue;).
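For the flag approach, a minimal sketch could look like this (connectedClients, Stream, OperationInProgress, and ReadAndDispatch are illustrative names, not from your code):
foreach (var c in connectedClients)
{
    if (c.OperationInProgress || !c.Stream.DataAvailable)
        continue;                          // busy or nothing to read, skip this client

    c.OperationInProgress = true;          // claim the client before the task starts
    Task.Factory.StartNew(() =>
    {
        try
        {
            ReadAndDispatch(c);            // your existing per-client read/parse code
        }
        finally
        {
            c.OperationInProgress = false; // release the client, even if something throws
        }
    });
}
If you want to be strict about cross-thread visibility, declare the flag as volatile.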
That said, it appears that you are using polling (i.e. checking DataAvailable) to manage your I/O. This is a really inefficient approach, and on top of that is the root cause of your current problem. If you were using the much better asynchronous model, your code would be more efficient and you wouldn't even have this problem, because the asynchronous API would provide all of the state management you need in order to avoid overlapping operations for a given client.
Unfortunately, your code example is very sparse, missing all of the detail that would be required to offer specific advice on how to change the implementation to use the asynchronous model (and what little code there is, includes a lot of extra code that would not be found in a good code example). So hopefully the above is enough to get you on track to the right solution.
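For what it's worth, here is a very rough sketch of that asynchronous shape, using the same 4-byte length prefix your code reads; the helper names are assumptions, since the full implementation isn't shown. One such loop per client, started when the client connects, means a second operation for the same client can never start while the first is still running:
// needs using System.IO, System.Net.Sockets and System.Threading.Tasks
private static async Task HandleClientAsync(NetworkStream stream)
{
    var lengthBytes = new byte[4];
    while (true)
    {
        await ReadExactAsync(stream, lengthBytes, 4);    // 4-byte length prefix
        int length = BitConverter.ToInt32(lengthBytes, 0);
        var payload = new byte[length];
        await ReadExactAsync(stream, payload, length);   // message body
        // hand the complete message to your existing parsing/dispatch code here
    }
}

private static async Task ReadExactAsync(NetworkStream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("client disconnected");
        offset += read;
    }
}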
I need to parse a large text that is similar to XML. The text is not in memory (I have a StreamReader object), and reading that stream into memory is where I spend the most time. So one thread reads the stream into an array (memory), and another thread processes that array. But I am seeing weird behaviour. For example, take a look at this image:
Note that listToProcess[counter] = buffer, and right now that should be listToProcess[10] = buffer. Yet the debugger says that listToProcess[10] = null. Why!? The other thread just reads the items; it does not modify them. At first I thought that maybe the other thread was setting that item to null, but that is not the case. Why am I experiencing this behaviour?
In case you want to see my code, here it is:
Semaphore sem = new Semaphore(0, 1000000);
bool w;
bool done = false;
// this task is responsible for parsing text created by main thread. Main thread
// reads text from the stream and places chunks in listToProces[]
var task1 = Task.Factory.StartNew(() =>
{
sem.WaitOne(); // wait so there are items on list (listToProcess) to work with
// counter to identify which chunk of char[] in listToProcess we are ading to the dictionary
int indexOnList = 0;
while (true)
{
if (listToProcess[indexOnList] == null)
{
if (done)
break;
w = true;
sem.WaitOne();
w = false;
if (done)
break;
if (listToProcess[indexOnList] == null)
{
throw new NotFiniteNumberException();
}
}
// add chunk to dictionary
ProcessChunk(listToProcess[indexOnList]);
indexOnList++;
}
}); // close task1
bool releaseSem = false;
// this main thread is responsible for placing the streamreader into chunks of char[] so that
// task1 can start processing those chunks
int counter = 0;
while (true)
{
char[] buffer = new char[2048];
// unparsedDebugInfo is a streamReader object
var charsRead = unparsedDebugInfo.Read(buffer, 0, buffer.Length);
if (charsRead < 1)
{
listToProcess[counter] = pattern;
break;
}
listToProcess[counter] = buffer;
counter++;
if (releaseSem)
{
sem.Release();
releaseSem = false;
}
if (counter == 10 || w)
{
releaseSem = true;
}
}
done = true;
sem.Release();
task1.Wait();
Edit
Sorry, in other words: why do I hit this breakpoint?
I thought that counter was the problem, but maybe I am doing something wrong with the semaphore...
You have a counter++ after the assignment, so the element you updated before it was at index 9, not index 10. Your claim that it set listToProcess[10] = buffer is incorrect: it set listToProcess[9] = buffer.
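In code terms, the element just written is always at counter - 1:
listToProcess[counter] = buffer;   // when counter == 9, this writes index 9
counter++;                         // counter is now 10, but index 10 has not been written yet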
I have several threads consuming tasks from a queue using something similar to the code below. The problem is that there is one type of task which cannot run while any other tasks are being processed.
Here is what I have:
while (true) // Threaded code
{
while (true)
{
lock(locker)
{
if (close_thread)
return;
task = GetNextTask(); // Get the next task from the queue
}
if (task != null)
break;
wh.WaitOne(); // Wait until a task is added to the queue
}
task.Run();
}
And this is kind of what I need:
while (true)
{
while (true)
{
lock(locker)
{
if (close_thread)
return;
if (disable_new_tasks)
{
task = null;
}
else
{
task = GetNextTask();
}
}
if (task != null)
break;
wh.WaitOne();
}
if(!task.IsThreadSafe())
{
// I would set this to false inside task.Run() at
// the end of the non-thread safe task
disable_new_tasks = true;
Wait_for_all_threads_to_finish_their_current_tasks();
}
task.Run();
}
The problem is I don't know how to achieve this without creating a mess.
Try looking into using a ThreadPool and then using the WaitHandle.WaitAll method to determine when all threads have finished executing.
MSDN
Maybe WaitHandle.WaitAll(autoEvents) is what you want.
class Calculate
{
double baseNumber, firstTerm, secondTerm, thirdTerm;
AutoResetEvent[] autoEvents;
ManualResetEvent manualEvent;
// Generate random numbers to simulate the actual calculations.
Random randomGenerator;
public Calculate()
{
autoEvents = new AutoResetEvent[]
{
new AutoResetEvent(false),
new AutoResetEvent(false),
new AutoResetEvent(false)
};
manualEvent = new ManualResetEvent(false);
}
void CalculateBase(object stateInfo)
{
baseNumber = randomGenerator.NextDouble();
// Signal that baseNumber is ready.
manualEvent.Set();
}
// The following CalculateX methods all perform the same
// series of steps as commented in CalculateFirstTerm.
void CalculateFirstTerm(object stateInfo)
{
// Perform a precalculation.
double preCalc = randomGenerator.NextDouble();
// Wait for baseNumber to be calculated.
manualEvent.WaitOne();
// Calculate the first term from preCalc and baseNumber.
firstTerm = preCalc * baseNumber *
randomGenerator.NextDouble();
// Signal that the calculation is finished.
autoEvents[0].Set();
}
void CalculateSecondTerm(object stateInfo)
{
double preCalc = randomGenerator.NextDouble();
manualEvent.WaitOne();
secondTerm = preCalc * baseNumber *
randomGenerator.NextDouble();
autoEvents[1].Set();
}
void CalculateThirdTerm(object stateInfo)
{
double preCalc = randomGenerator.NextDouble();
manualEvent.WaitOne();
thirdTerm = preCalc * baseNumber *
randomGenerator.NextDouble();
autoEvents[2].Set();
}
public double Result(int seed)
{
randomGenerator = new Random(seed);
// Simultaneously calculate the terms.
ThreadPool.QueueUserWorkItem(
new WaitCallback(CalculateBase));
ThreadPool.QueueUserWorkItem(
new WaitCallback(CalculateFirstTerm));
ThreadPool.QueueUserWorkItem(
new WaitCallback(CalculateSecondTerm));
ThreadPool.QueueUserWorkItem(
new WaitCallback(CalculateThirdTerm));
// Wait for all of the terms to be calculated.
WaitHandle.WaitAll(autoEvents);
// Reset the wait handle for the next calculation.
manualEvent.Reset();
return firstTerm + secondTerm + thirdTerm;
}
}
You can think of this as similar to a data structure that allows any number of readers, or one writer. That is, any number of threads can read the data structure, but a thread that writes to the data structure needs exclusive access.
In your case, you can have any number of "regular" threads running, or you can have one thread that requires exclusive access.
.NET has the ReaderWriterLock and ReaderWriterLockSlim classes that you could use to implement this kind of sharing. Unfortunately, neither of those classes is available on the xbox.
However, it is possible to implement a reader/writer lock from a combination of Monitor and ManualResetEvent objects. I don't have a C# example (why would I, since I have access to the native objects?), but there's a simple Win32 implementation that shouldn't be terribly difficult to port.
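For reference, a bare-bones sketch of such a gate built on Monitor alone (Wait/PulseAll standing in for the ManualResetEvent; no fairness guarantees, so a steady stream of readers can starve the writer):
// needs using System.Threading
public sealed class SimpleReaderWriterLock
{
    private readonly object _gate = new object();
    private int _readers;          // number of "normal" tasks currently running
    private bool _writerActive;    // true while the exclusive task runs

    public void EnterRead()
    {
        lock (_gate)
        {
            while (_writerActive)
                Monitor.Wait(_gate);       // block while the exclusive task runs
            _readers++;
        }
    }

    public void ExitRead()
    {
        lock (_gate)
        {
            if (--_readers == 0)
                Monitor.PulseAll(_gate);   // last reader out wakes a waiting writer
        }
    }

    public void EnterWrite()
    {
        lock (_gate)
        {
            while (_writerActive || _readers > 0)
                Monitor.Wait(_gate);       // wait until nothing else is running
            _writerActive = true;
        }
    }

    public void ExitWrite()
    {
        lock (_gate)
        {
            _writerActive = false;
            Monitor.PulseAll(_gate);       // wake both readers and writers
        }
    }
}
Wrap each normal task in EnterRead/ExitRead and the exclusive task in EnterWrite/ExitWrite.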
You can use something like this:
ExecutorService workers = Executors.newFixedThreadPool(10);
for(int i=0; i<input.length; i++) {
Teste task = new Teste(rowArray,max);//your thread class
workers.execute(task);
}
workers.shutdown();//ask for shut down
while(!workers.isTerminated()) {//wait until all finishes.
try {
Thread.sleep(100);//
} catch (InterruptedException exception) {
}
System.out.println("waiting for submitted task to finish operation");
}
Hope this helps.
Question: I want to search the subnet for all computers on it, so I send a ping to all IP addresses in the subnet.
The problem is that it works fine if I only scan 192.168.0.*, but if I scan 192.168.*.*, I get an "Out of memory" exception.
Why? Do I have to limit the threads? Is the problem the memory consumed by each new Ping that doesn't get released once it finishes? Or do I need to call GC.Collect()?
static void Main(string[] args)
{
string strFromIP = "192.168.0.1";
string strToIP = "192.168.255.255";
Oyster.Math.IntX omiFromIP = 0;
Oyster.Math.IntX omiToIP = 0;
IsValidIP(strFromIP, ref omiFromIP);
IsValidIP(strToIP, ref omiToIP);
for (Oyster.Math.IntX omiThisIP = omiFromIP; omiThisIP <= omiToIP; ++omiThisIP)
{
Console.WriteLine(IPn2IPv4(omiThisIP));
System.Net.IPAddress sniIPaddress = System.Net.IPAddress.Parse(IPn2IPv4(omiThisIP));
SendPingAsync(sniIPaddress);
}
Console.WriteLine(" --- Press any key to continue --- ");
Console.ReadKey();
} // Main
// http://pberblog.com/post/2009/07/21/Multithreaded-ping-sweeping-in-VBnet.aspx
// http://www.cyberciti.biz/faq/how-can-ipv6-address-used-with-webbrowser/#comments
// http://www.kloth.net/services/iplocate.php
// http://bytes.com/topic/php/answers/829679-convert-ipv4-ipv6
// http://stackoverflow.com/questions/1434342/ping-class-sendasync-help
public static void SendPingAsync(System.Net.IPAddress sniIPaddress)
{
int iTimeout = 5000;
System.Net.NetworkInformation.Ping myPing = new System.Net.NetworkInformation.Ping();
System.Net.NetworkInformation.PingOptions parmPing = new System.Net.NetworkInformation.PingOptions();
System.Threading.AutoResetEvent waiter = new System.Threading.AutoResetEvent(false);
myPing.PingCompleted += new System.Net.NetworkInformation.PingCompletedEventHandler(AsyncPingCompleted);
string data = "ABC";
byte[] dataBuffer = Encoding.ASCII.GetBytes(data);
parmPing.DontFragment = true;
parmPing.Ttl = 32;
myPing.SendAsync(sniIPaddress, iTimeout, dataBuffer, parmPing, waiter);
//waiter.WaitOne();
}
private static void AsyncPingCompleted(Object sender, System.Net.NetworkInformation.PingCompletedEventArgs e)
{
System.Net.NetworkInformation.PingReply reply = e.Reply;
((System.Threading.AutoResetEvent)e.UserState).Set();
if (reply.Status == System.Net.NetworkInformation.IPStatus.Success)
{
Console.WriteLine("Address: {0}", reply.Address.ToString());
Console.WriteLine("Roundtrip time: {0}", reply.RoundtripTime);
}
}
According to this thread, System.Net.NetworkInformation.Ping seems to allocate one thread per async request, and "ping-sweeping a class-B network creates 100's of threads and eventually results in an out-of-memory error."
The workaround that person used was to write their own implementation using raw sockets. You don't have to do that in C#, of course, but there are a number of advantages in doing so.
First: only start around 1000 pings the first time (in the loop in Main).
Second: move the following to the Program class as member variables:
Oyster.Math.IntX omiFromIP = 0;
Oyster.Math.IntX omiToIP = 0;
Oyster.Math.IntX omiCurrentIp = 0;
object syncLock = new object();
Third: in AsyncPingCompleted, do something like this at the bottom:
public void AsyncPingCompleted (bla bla bla)
{
//[..other code..]
lock (syncLock)
{
if (omiCurrentIp < omiToIP)
{
++omiCurrentIp;
System.Net.IPAddress sniIPaddress = System.Net.IPAddress.Parse(IPn2IPv4(omiCurrentIp));
SendPingAsync(sniIPaddress);
}
}
}
Update with complete code example
public class Example
{
// Number of pings that can be pending at the same time
private const int InitalRequests = 10000;
// variables from your Main method
private static Oyster.Math.IntX _omiFromIP = 0;
private static Oyster.Math.IntX _omiToIP = 0;
private static Oyster.Math.IntX _omiCurrentIp = 0;
// synchronoize so that two threads
// cannot ping the same IP.
private static object _syncLock = new object();
static void Main(string[] args)
{
string strFromIP = "192.168.0.1";
string strToIP = "192.168.255.255";
IsValidIP(strFromIP, ref _omiFromIP);
IsValidIP(strToIP, ref _omiToIP);
for (_omiCurrentIp = _omiFromIP; _omiCurrentIp <= _omiFromIP + InitalRequests; ++_omiCurrentIp)
{
Console.WriteLine(IPn2IPv4(_omiCurrentIp));
System.Net.IPAddress sniIPaddress = System.Net.IPAddress.Parse(IPn2IPv4(_omiCurrentIp));
SendPingAsync(sniIPaddress);
}
Console.WriteLine(" --- Press any key to continue --- ");
Console.ReadKey();
} // Main
// http://pberblog.com/post/2009/07/21/Multithreaded-ping-sweeping-in-VBnet.aspx
// http://www.cyberciti.biz/faq/how-can-ipv6-address-used-with-webbrowser/#comments
// http://www.kloth.net/services/iplocate.php
// http://bytes.com/topic/php/answers/829679-convert-ipv4-ipv6
// http://stackoverflow.com/questions/1434342/ping-class-sendasync-help
public static void SendPingAsync(System.Net.IPAddress sniIPaddress)
{
int iTimeout = 5000;
System.Net.NetworkInformation.Ping myPing = new System.Net.NetworkInformation.Ping();
System.Net.NetworkInformation.PingOptions parmPing = new System.Net.NetworkInformation.PingOptions();
System.Threading.AutoResetEvent waiter = new System.Threading.AutoResetEvent(false);
myPing.PingCompleted += new System.Net.NetworkInformation.PingCompletedEventHandler(AsyncPingCompleted);
string data = "ABC";
byte[] dataBuffer = Encoding.ASCII.GetBytes(data);
parmPing.DontFragment = true;
parmPing.Ttl = 32;
myPing.SendAsync(sniIPaddress, iTimeout, dataBuffer, parmPing, waiter);
//waiter.WaitOne();
}
private static void AsyncPingCompleted(Object sender, System.Net.NetworkInformation.PingCompletedEventArgs e)
{
System.Net.NetworkInformation.PingReply reply = e.Reply;
((System.Threading.AutoResetEvent)e.UserState).Set();
if (reply.Status == System.Net.NetworkInformation.IPStatus.Success)
{
Console.WriteLine("Address: {0}", reply.Address.ToString());
Console.WriteLine("Roundtrip time: {0}", reply.RoundtripTime);
}
// Keep starting those async pings until all ips have been invoked.
lock (_syncLock)
{
if (_omiCurrentIp < _omiToIP)
{
++_omiCurrentIp;
System.Net.IPAddress sniIPaddress = System.Net.IPAddress.Parse(IPn2IPv4(_omiCurrentIp));
SendPingAsync(sniIPaddress);
}
}
}
}
I guess the problem is that you are spawning roughly 63K ping requests near-simultaneously. Without further memory profiling it is hard to say which parts consume the memory. You are working with network resources, which are probably limited. Throttling the number of active pings will ease the use of local resources, and also the network traffic.
Again, I would look into the Task Parallel Library; the Parallel.For construct combined with Task<T> should make it easy for you.
Note: for .NET 3.5 users, there is hope.
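For illustration, a minimal sketch of that idea; the sweep range, the 1000 ms timeout, and the cap of 100 simultaneous pings are all assumptions, and Ping.Send is the synchronous call, so MaxDegreeOfParallelism is what bounds how many pings are in flight at once:
// needs using System.Threading.Tasks and System.Net.NetworkInformation
var options = new ParallelOptions { MaxDegreeOfParallelism = 100 };
Parallel.For(0, 256 * 256, options, i =>
{
    string address = "192.168." + (i / 256) + "." + (i % 256);
    using (var ping = new Ping())
    {
        try
        {
            var reply = ping.Send(address, 1000);
            if (reply.Status == IPStatus.Success)
                Console.WriteLine("{0} responded in {1} ms", reply.Address, reply.RoundtripTime);
        }
        catch (PingException)
        {
            // name resolution or transport failure; skip this address
        }
    }
});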
I did something similar to this. The way I solved the problem on my project was to cast the ping instance to IDisposable:
(myPing as IDisposable).Dispose()
So get a list of, say, 254 Ping instances running asynchronously (X.X.X.1 through .254) and keep track of when all of them have reported in. When they have, iterate through your list of Ping instances, run the above code on each instance, and then dump the list.
Works like a charm.
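A rough sketch of that bookkeeping, assuming one /24 (hosts .1 through .254) and a CountdownEvent standing in for the "have they all reported in" check:
// needs using System.Collections.Generic, System.Net.NetworkInformation and System.Threading
var pings = new List<Ping>();
var allDone = new CountdownEvent(254);

for (int host = 1; host <= 254; host++)
{
    var ping = new Ping();
    ping.PingCompleted += (s, e) =>
    {
        if (e.Reply != null && e.Reply.Status == IPStatus.Success)
            Console.WriteLine("{0} is up", e.Reply.Address);
        allDone.Signal();                         // one more ping has reported in
    };
    pings.Add(ping);
    ping.SendAsync("192.168.0." + host, 1000, null);
}

allDone.Wait();                                   // block until all 254 have completed
foreach (var p in pings)
    (p as IDisposable).Dispose();                 // release each instance, as above
pings.Clear();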
pseudo-code
do
    if pings_running > 100 then
        sleep 100ms
    else
        start ping
    endif
loop while morepings
Finally... no ping required at all:
http://www.codeproject.com/KB/cs/c__ip_scanner.aspx
All I needed to do was make it thread-safe for debugging.
Changing Add to:
void Add( string m )
{
Invoke(new MethodInvoker(
delegate
{
add.Items.Add(m);
}));
//add.Items.Add( m );
}
Private Sub Add(m As String)
    Invoke(New MethodInvoker(Sub()
                                 add.Items.Add(m)
                             End Sub))
    'add.Items.Add(m)
End Sub