I'm trying to write a client application that connects to a server on a specified port and receives data.
I found this fine example, but my problem is that the server sends me data all the time (it never closes the connection), so ReceiveCallback never ends, because client.EndReceive(ar) never returns 0.
As a result, my WinForms UI freezes while data is being received.
The idea is to monitor all the incoming data and make some callbacks on certain occasions.
I'm new to C#; could you point me in the right direction? Multithreading?
Have two threads: one for the user interface, and another that reads from the "infinite socket". The infinite thread loops forever reading from the socket in appropriately sized chunks, perhaps doing some preprocessing. It then uses Control.Invoke() to call a method on the UI thread, passing the chunk to it in a parameter. Make sure the chunks are small enough that the UI thread can process them without locking up, say in 0.1 secs.
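A minimal sketch of that two-thread split (the variable names and the MemoryStream stand-in are mine, not from the original post): the reader thread blocks on the stream, and the delivery delegate is where Control.Invoke() would marshal each chunk onto the UI thread.

```csharp
using System;
using System.IO;
using System.Threading;

// Demo: a MemoryStream stands in for the connected socket's stream.
var payload = new byte[10_000];
int received = 0;
Stream stream = new MemoryStream(payload);
Action<byte[]> deliver = chunk => received += chunk.Length;  // in WinForms: chunk => form.Invoke(handler, chunk)

// The reader thread blocks on the stream, so the UI thread never does.
var reader = new Thread(() =>
{
    var buffer = new byte[4096];   // small chunks keep the UI responsive
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        var chunk = new byte[read];
        Array.Copy(buffer, chunk, read);
        deliver(chunk);            // marshal to the UI thread here in a real app
    }
}) { IsBackground = true };
reader.Start();
reader.Join();
Console.WriteLine(received);       // total bytes delivered
```

The key design point is that only the reader thread ever blocks on the socket; the UI thread only sees small, already-read chunks.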
Related
I have my main WinForms application.
There are 6 threads working in parallel plus the main thread, or at least that's how it's meant to be.
I have created one thread that is a TCP server, listening on a specific port.
listenerThread = new Thread(new ThreadStart(AsynchronousSocketListener.StartListening));
listenerThread.Start();
I also have 5 different threads doing different kinds of work (for example updating a database, computing averages/sums, acting as TCP clients, etc.).
My question is:
Is it possible that my TCP server (which runs on one of the 6 threads) won't read a message because one of those 5 other threads is taking the CPU, so the TCP server's thread has to wait?
Another question: if that can happen, how can I avoid it?
This is a summary of my comments above
"Is it possible that my TCP server (which runs on one of the 6 threads) won't read a message because one of those 5 other threads is taking the CPU, so the TCP server's thread has to wait?"
Received data is buffered to an extent; however, if your code does not respond in an appropriate time, data can be dropped.
The same could be said of a single-core system where another app, say Prime95, is not playing nice and is busy calculating prime numbers right next to yours.
"Another question: if that can happen, how can I avoid it?"
When handling I/O (I'll focus on TCP here), the key is to perform the minimal amount of processing in your data-received handler, irrespective of whether that handler uses the IAsyncResult pattern or async/await.
A good general flow is:
start an asynchronous read
read the result of the asynchronous read
place the result in a queue
loop back to step 1
Meanwhile, you process the queued results using a different mechanism, whether that be a timer, a GUI app idle loop, or a different thread, so long as the thread processing the results has nothing to do with the flow above.
The reason is that in a scenario involving reasonably high data-transfer rates, if you read a block of data and then proceed to immediately update that Telerik data grid showing thousands of rows, there is a high chance the next read operation will result in dropped data, because you didn't respond in a sufficient amount of time.
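The flow above can be sketched as follows (the names are mine, and a MemoryStream stands in for the socket's stream; with a real Socket you would use BeginReceive/EndReceive in the same shape):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

Stream stream = new MemoryStream(new byte[8192]);   // stand-in for the socket stream
var queue = new BlockingCollection<byte[]>();
var buffer = new byte[4096];
int processed = 0;

// Consumer: processing happens here, completely outside the read flow.
var consumer = new Thread(() =>
{
    foreach (var chunk in queue.GetConsumingEnumerable())
        processed += chunk.Length;                  // stand-in for real processing
});
consumer.Start();

// Steps 1-4: begin read -> end read -> enqueue -> begin the next read.
void OnRead(IAsyncResult ar)
{
    int read = stream.EndRead(ar);                            // step 2
    if (read == 0) { queue.CompleteAdding(); return; }        // peer closed
    var chunk = new byte[read];
    Array.Copy(buffer, chunk, read);
    queue.Add(chunk);                                         // step 3
    stream.BeginRead(buffer, 0, buffer.Length, OnRead, null); // step 4: loop
}
stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);     // step 1

consumer.Join();
Console.WriteLine(processed);
```

Because the read handler does nothing but copy and enqueue, the socket is serviced again as quickly as possible regardless of how slow the consumer is.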
When talking sockets programming in C#, what does the term "blocking" mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take say 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (send/receive) does not return ('blocks') until the underlying socket operation has completed.
For a read, that means until some data has been received or the socket has been closed.
For a write, it means that all the data in the buffer has been handed to the network stack for sending.
To deal with multiple clients, start a new thread for each client, or give the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it must be one socket per client anyway.
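A minimal sketch of the thread-per-client approach (the echo handler and the loopback demo are mine, purely for illustration; a thread-pool work item would do just as well for short-lived requests):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

// Start the blocking accept loop on its own thread so the demo can continue.
var listener = new TcpListener(IPAddress.Loopback, 0);   // port 0: OS picks a free port
listener.Start();
int port = ((IPEndPoint)listener.LocalEndpoint).Port;
new Thread(() => AcceptLoop(listener)) { IsBackground = true }.Start();

// Demo client: the blocking Read/Write calls hold only this thread.
using var client = new TcpClient("127.0.0.1", port);
var stream = client.GetStream();
stream.Write(new byte[] { 1, 2, 3 }, 0, 3);
var reply = new byte[3];
int got = 0;
while (got < 3) got += stream.Read(reply, got, 3 - got);
Console.WriteLine(reply[2]);

// One blocking socket per client, each served by its own thread.
static void AcceptLoop(TcpListener listener)
{
    while (true)
    {
        TcpClient c = listener.AcceptTcpClient();        // blocks until a client connects
        new Thread(() => Echo(c)) { IsBackground = true }.Start();
    }
}

static void Echo(TcpClient client)
{
    using (client)
    {
        var s = client.GetStream();
        var buf = new byte[1024];
        int n;
        while ((n = s.Read(buf, 0, buf.Length)) > 0)     // blocks this client's thread only
            s.Write(buf, 0, n);                          // echo the request back
    }
}
```

Each client's 10-second processing would then block only its own thread, so a second client connecting 2 seconds later is accepted and served immediately.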
This means you can't use the socket for anything else on the currently executing thread.
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until data is available or the ReadTimeout has expired; note that Read returns as soon as some data arrives, so bytesRead may be fewer than 10.
There are Async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The Async variants save you from dedicating a thread (which would be blocked) to each client, which increases your scalability.
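Sketching the APM variant of the same 10-byte read (a MemoryStream stands in for the NetworkStream so the snippet is self-contained; on a real NetworkStream the BeginRead/EndRead calls look the same):

```csharp
using System;
using System.IO;
using System.Threading;

Stream stream = new MemoryStream(new byte[10]);   // stand-in for a NetworkStream
byte[] buf = new byte[10];
var done = new ManualResetEventSlim();
int bytesRead = 0;

stream.BeginRead(buf, 0, buf.Length, ar =>
{
    bytesRead = stream.EndRead(ar);   // completes the read; may run on a pool thread
    done.Set();
}, null);

// The calling thread is free to do other work here instead of blocking.
done.Wait();
Console.WriteLine(bytesRead);
```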
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays within a function or block of code (such as ReadFile() in C++) until it returns, and does not move on to the code following that block.
This can happen in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can use the asynchronous methods, for example Socket.BeginReceive()/EndReceive() and BeginSend()/EndSend() in the sockets context, which will not block your calls. This is called the asynchronous programming model.
You can also call BeginInvoke() and EndInvoke() on a delegate or a control, depending on which asynchronous approach you follow.
You can use the function Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple sockets for readability or writability. The advantage is that this is simple: it can be done from a single thread, and you can specify how long you want to wait, either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e., you can keep them blocking).
It also works for listening sockets: it will report readability when there is a connection to accept. From experimenting, I can say that it also reports readability for graceful disconnects.
It's probably not as fast as asynchronous sockets, and it's not ideal for detecting errors: I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
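A small self-contained sketch of Socket.Select() with a listening socket on the loopback interface (port chosen by the OS); note that Select removes the sockets that are not ready from each list, so the lists must be rebuilt before every call:

```csharp
using System;
using System.Collections;
using System.Net;
using System.Net.Sockets;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
listener.Listen(1);

// A client connects, which makes the listening socket report readability.
var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.Connect(listener.LocalEndPoint);

// Rebuild the list on every call: Select removes sockets that are not ready.
var checkRead = new ArrayList { listener };
Socket.Select(checkRead, null, null, 1_000_000);   // wait up to 1 second

bool accepted = false;
if (checkRead.Contains(listener))
{
    using var conn = listener.Accept();            // readability on a listener = pending Accept
    accepted = true;
}
Console.WriteLine(accepted);

client.Close();
listener.Close();
```

In a real loop you would put every blocking socket (listener and connected clients) into checkRead each iteration and dispatch on which ones remain after Select returns.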
You should use one socket per thread. Blocking (synchronous) sockets wait for a response before returning. Non-blocking (asynchronous) sockets can peek to see whether any data has been received and return if no data is there yet.
How exactly does a handle relate to a thread? I am writing a service that accepts an HTTP request and calls a method before returning a response. I have written a test client that sends out 10,000 HTTP requests (using a semaphore to make sure that only 1,000 requests are made at a time).
If I call the method (the method processed before returning a response) through the ThreadPool, or through a generic Action<T>.BeginInvoke, the service's handle count goes way up and stays there until all the requests have finished, but the thread count of the service stays pretty much flat.
However, if I synchronously call the method before returning the response, the thread count goes up, but the handle count goes through extreme peaks and valleys.
This is C# on a Windows machine (Server 2008).
Your description is too vague to give a good diagnosis. But the ThreadPool was designed to carefully throttle the number of active threads: it will avoid running more threads than you have CPU cores, and only when a thread gets "stuck" will it schedule an extra thread. That explains why the number of threads doesn't increase wildly, and, indirectly, why the handle count stays stable: the machine is doing less work.
You can think of a handle as an abstraction of a pointer. Lots of things in Windows use handles (when you open a file at the API level, you get a handle to the file; when you create a window, the window has a handle; a thread has a handle; etc.). So your handle count probably relates to the operations occurring on your threads: if you have more threads running, more things are going on at the same time, so you will see more handles open.
I'm writing a server in C# which creates a (long, possibly even infinite) IEnumerable<Result> in response to a client request, and then streams those results back to the client.
Can I set it up so that, if the client is reading slowly (or possibly not at all for a couple of seconds at a time), the server won't need a thread stalled waiting for buffer space to clear up so that it can pull the next couple of Results, serialize them, and stuff them onto the network?
Is this how NetworkStream.BeginWrite works? The documentation is unclear (to me) about when the callback method will be called. Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen? Does it happen when some sort of lower-level buffer in the sockets API underflows? Does it happen when the data has been actually written to the network? Does it happen when it has been acknowledged?
I'm confused, so there's a possibility that this whole question is off-base. If so, could you turn me around and point me in the right direction to solve what I'd expect is a fairly common problem?
I'll answer the third part of your question in a bit more detail.
The MSDN documentation states that:
When your application calls BeginWrite, the system uses a separate thread to execute the specified callback method, and blocks on EndWrite until the NetworkStream sends the number of bytes requested or throws an exception.
As far as my understanding goes, whether or not the callback method is called immediately after calling BeginSend depends upon the underlying implementation and platform. For example, if IO completion ports are available on Windows, it won't be: a thread from the thread pool will block before calling it.
In fact, the NetworkStream's BeginWrite method simply calls the underlying socket's BeginSend method in my .NET implementation, which uses the underlying WSASend Winsock function with completion ports where available. This makes it far more efficient than simply creating your own thread for each send/write operation, even if you were to use a thread pool.
The Socket.BeginSend method then calls the OverlappedAsyncResult.CheckAsyncCallOverlappedResult method if the result of WSASend was IOPending, which in turn calls the native RegisterWaitForSingleObject Win32 function. This will cause one of the threads in the thread pool to block until the WSASend method signals that it has completed, after which the callback method is called.
The Socket.EndSend method, called by NetworkStream.EndWrite, will wait for the send operation to complete. It has to do this because, if IO completion ports are not available, the callback method is called straight away.
I must stress again that these details are specific to my implementation of .NET and my platform, but that should hopefully give you some insight.
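For reference, the BeginWrite/EndWrite shape being described looks like this; a MemoryStream stands in for the NetworkStream so the sketch is self-contained, and with a real socket the callback timing follows the rules discussed above:

```csharp
using System;
using System.IO;
using System.Threading;

Stream stream = new MemoryStream();   // stand-in for a NetworkStream
byte[] data = { 1, 2, 3, 4 };
var done = new ManualResetEventSlim();

stream.BeginWrite(data, 0, data.Length, ar =>
{
    stream.EndWrite(ar);   // completes the write, or rethrows any send error
    done.Set();            // safe to start the next write from here
}, null);

done.Wait();               // in real code you would carry on instead of waiting
Console.WriteLine(stream.Length);
```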
First, the only way your main thread can keep executing while other work is being done is through the use of another thread; a thread can't do two things at once.
However, I think what you're trying to avoid is messing with the Thread object, and yes, that is possible through the use of BeginWrite. As per your questions:
"The documentation is unclear (to me) about when the callback method will be called."
The callback is made after the network driver has read the data into its buffers.
"Does it happen basically immediately, just on another thread which then blocks on EndWrite waiting for the actual writing to happen?"
Nope, just once it's in the buffers handled by the network driver.
"Does it happen when some sort of lower-level buffer in the sockets API underflows?"
If by underflow you mean it has room for the data, then yes.
"Does it happen when the data has been actually written to the network?"
No.
"Does it happen when it has been acknowledged?"
No.
EDIT
Personally I would try using a Thread. There's a lot of stuff that BeginWrite is doing behind the scenes that you should probably recognize... plus I'm weird and I like controlling my threads.
In C#, when receiving network data with the BeginReceive/EndReceive methods, is there any reason you shouldn't process the packets as soon as you receive them? Some of the tasks can be decently CPU-intensive. I ask because I've seen some implementations that push the packets onto a processing queue and then handle them there. To me this seems redundant because, as far as I know, the async methods also operate on a thread pool.
Generally, you need to receive 'enough' packets to have a data item that is 'processable'.
IMO, it's better to have one thread whose job is receiving data and another to actually process it.
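One common way to wire up that receive/process split (the names are mine, and BlockingCollection<T> is just one convenient queue choice):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

var frames = new BlockingCollection<byte[]>();
long total = 0;

// Processing thread: CPU-intensive work stays off the receive path.
var worker = new Thread(() =>
{
    foreach (var frame in frames.GetConsumingEnumerable())
        total += frame.Length;                 // stand-in for real processing
});
worker.Start();

// The receive side just enqueues; here three fake "frames" stand in for
// completed BeginReceive results.
for (int i = 0; i < 3; i++)
    frames.Add(new byte[100]);
frames.CompleteAdding();                       // e.g. when the socket closes

worker.Join();
Console.WriteLine(total);
```

GetConsumingEnumerable blocks the worker while the queue is empty and exits cleanly once CompleteAdding has been called and the queue is drained.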
As Mitch points out, you need to receive enough packets to have a complete message/frame. But there's no reason why you shouldn't start processing that frame immediately and issue another BeginReceive. In fact, if you believe your processing could take some time, you're better off handing it off to the worker thread pool rather than blocking a thread from the I/O pool (which is where your callback will fire).
In addition, unless you're expecting a low number of connections, spawning a thread to handle each connection is not a very scalable approach, although it does have the benefit of simplicity.
I recently wrote an article on pipelining data-processing off a network socket, which you can find here.