WCF Named Pipe Streaming - C#

I have implemented a server/client test rig that uses streamed transfer mode over a net.pipe binding. This mostly works, but I am hitting an issue when the server's stream implementation blocks on an empty buffer. Even if I remove all synchronization and set the concurrency mode to Multiple, my client application blocks on stream.Read.
My client initiates a connection to the server with a "GetStream" call (on a non-UI thread). The stream implementation returned by the server is a blocking stream (a NetworkStream, for example), so it blocks when there are no bytes available to read. This locks up the service host completely, so the client cannot make any further calls until the stream.Read operation unblocks.
Can someone shed some light on this behavior?

I have written working code for anonymous pipes and for named pipes, both synchronous and asynchronous. From that experience I can tell you that you simply cannot read from the shared memory buffer while it is empty. Even in the asynchronous case, I believe background threads are created that read and write synchronously.
So for the moment, take this as fact:
the server must first write to the buffer => the client reads it => the client writes back to that buffer => the server reads it.
That cycle has to repeat forever for the applications not to freeze. I am beginning to think this is what distinguishes shared-memory communication from a server/client socket. With the shared-memory approach you might not have to worry about synchronization, but it also ties the client to a server, such that the client cannot exist without a server.

You should look at using the older 'async'-style Begin/End Read/Write methods to allow asynchronous communication. Unfortunately, there is no async/await support for named pipes in older versions of .NET; however, you can wrap the Begin/End pairs using the TaskFactory.FromAsync methods.
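As a sketch of that wrapping approach, the snippet below pairs a NamedPipeServerStream and NamedPipeClientStream in a single process and turns their Begin/End WaitForConnection, Read, and Write pairs into Tasks via TaskFactory.FromAsync (the pipe name is made up for the example):

```csharp
using System;
using System.IO.Pipes;
using System.Text;
using System.Threading.Tasks;

class PipeFromAsyncDemo
{
    static void Main()
    {
        const string pipeName = "demo-pipe"; // illustrative name

        using (var server = new NamedPipeServerStream(pipeName, PipeDirection.InOut, 1,
                   PipeTransmissionMode.Byte, PipeOptions.Asynchronous))
        using (var client = new NamedPipeClientStream(".", pipeName, PipeDirection.InOut,
                   PipeOptions.Asynchronous))
        {
            // Wrap the APM connect pair; the client connect is synchronous for brevity.
            Task serverConnect = Task.Factory.FromAsync(
                server.BeginWaitForConnection, server.EndWaitForConnection, null);
            client.Connect();
            serverConnect.Wait();

            // Wrap BeginRead/EndRead in a Task so the blocking read
            // does not tie up the current thread.
            byte[] buffer = new byte[64];
            Task<int> readTask = Task<int>.Factory.FromAsync(
                client.BeginRead, client.EndRead, buffer, 0, buffer.Length, null);

            byte[] payload = Encoding.UTF8.GetBytes("hello");
            Task writeTask = Task.Factory.FromAsync(
                server.BeginWrite, server.EndWrite, payload, 0, payload.Length, null);

            writeTask.Wait();
            int bytesRead = readTask.Result;
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, bytesRead)); // prints "hello"
        }
    }
}
```

Because the read is a pending Task rather than a blocked thread, the caller stays free to issue other calls while waiting, which is exactly what the locked-up service host above needs.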

Related

Is multithreading dangerous in sockets writing?

I'm creating a network protocol based on TCP, using Berkeley sockets via C#.
Will the socket buffer get mixed up when two threads try to send data via the Socket.Send method at the same time?
Should I use a lock so only one thread accesses the socket at a time?
According to MSDN, the Socket class is thread-safe. That means you don't need to lock around Send, and you can safely send and receive on different threads. But be aware that the order is not guaranteed, so the data won't be mixed only if you send each message in a single call to the Send method instead of chunked calls.
In addition to that, I would recommend locking and flushing in case you don't want the server swapping the responses across multiple concurrent requests. But that doesn't seem to be the case here.
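To illustrate the "single call to Send" point, the self-contained sketch below (a made-up loopback pair) has two threads each send one complete 4-byte message in a single Send call. The arrival order of the two messages is not guaranteed, but neither message is interleaved with the other:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class SingleSendDemo
{
    static void Main()
    {
        // Loopback listener on an OS-assigned port, just for the demo.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        using (TcpClient serverSide = listener.AcceptTcpClient())
        {
            Socket s = client.Client;
            byte[] a = Encoding.ASCII.GetBytes("AAAA");
            byte[] b = Encoding.ASCII.GetBytes("BBBB");

            // Two threads, one complete message per Send call each.
            Task t1 = Task.Run(() => s.Send(a));
            Task t2 = Task.Run(() => s.Send(b));
            Task.WaitAll(t1, t2);

            // Read all 8 bytes on the server side.
            byte[] buf = new byte[8];
            int total = 0;
            while (total < 8)
                total += serverSide.Client.Receive(buf, total, 8 - total, SocketFlags.None);

            string received = Encoding.ASCII.GetString(buf);
            // Either order is possible, but each message stays contiguous.
            Console.WriteLine(received == "AAAABBBB" || received == "BBBBAAAA");
        }
        listener.Stop();
    }
}
```

If each message were instead written in several chunked Send calls, the chunks from the two threads could legally interleave, which is the "mixed up" case the question asks about.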

How to distribute requests from TCP Listener

I am using the TcpListener class example from https://msdn.microsoft.com/es-es/library/system.net.sockets.tcplistener(v=vs.110).aspx in order to process TCP requests.
But it seems that this TcpListener will accept multiple requests at the same time; each must then be processed by a couple of web services, and the result returned to the TCP client.
I am thinking of doing the following:
Get a stream object for reading and writing with NetworkStream stream = client.GetStream(); and save it in a special container class.
Put this class into a special queue helper class like the one in "C#: Triggering an Event when an object is added to a Queue".
When the queue changes, fire the implemented event to process the next queue item asynchronously using a Task.
Within the Task, communicate with the web services and send the response to the TCP client.
Please let me know whether this architecture is viable and able to handle multiple requests to the TcpListener.
I'd recommend NetMQ. Have a look: https://github.com/zeromq/netmq
Using a queue is definitely a viable idea, but consider what purpose it serves: it limits how many requests you can process in parallel. You may need that limit in several cases, most commonly when each request performs CPU-bound work (heavy computation). Then your ability to process many requests in parallel is limited, and you may want to use the queue approach.
If your request processing performs IO-bound work (waiting for a web request to complete), it does not consume many server resources, and you can process a lot of such requests in parallel, so most likely no queue is needed in your case.
Even if you use a queue, it's very rarely useful to process just one item at a time. Instead, process the queue with X threads (where X again depends on whether the work is CPU- or IO-bound; for CPU-bound work X = number of cores may be fine, for IO-bound work you need more). If you use too few threads to process your queue, your clients will wait longer for basically nothing, and can even fail by timeout.
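A minimal sketch of that "queue processed by X threads" idea, using BlockingCollection. The item type and worker count here are placeholders; in the real server each item would carry the client's NetworkStream and request payload:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class QueueWorkersDemo
{
    static void Main()
    {
        // Bounded queue: producers block once 100 items are pending,
        // which is the "limit parallel work" property discussed above.
        var queue = new BlockingCollection<int>(boundedCapacity: 100);

        int processed = 0;
        const int workerCount = 4; // tune: ~core count for CPU-bound, more for IO-bound

        var workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workers[i] = Task.Run(() =>
            {
                // GetConsumingEnumerable blocks until items arrive and
                // exits cleanly once CompleteAdding has been called.
                foreach (int item in queue.GetConsumingEnumerable())
                    Interlocked.Increment(ref processed); // stand-in for real work
            });
        }

        for (int i = 0; i < 50; i++)
            queue.Add(i); // producer side: the accept loop would do this

        queue.CompleteAdding();
        Task.WaitAll(workers);

        Console.WriteLine(processed); // all 50 items handled across 4 workers
    }
}
```

This avoids the one-item-at-a-time bottleneck while still capping how much work is in flight at once.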

Sockets terminology - what does "blocking" mean?

When talking about socket programming in C#, what does the term "blocking" mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take say 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (send/receive) does not return ('blocks') until the underlying socket operation has completed.
For a read, that means until some data has been received or the socket has been closed.
For a write, it means that all data in the buffer has been sent out.
For dealing with multiple clients, start a new thread for each client, or give the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it must be one socket per client anyway.
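A minimal thread-per-client sketch of the above, with a loopback client included so it is self-contained (names and the echo protocol are illustrative):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class ThreadPerClientDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Accept loop: each accepted socket gets its own thread, so one
        // client's blocking Read never stalls the others.
        new Thread(() =>
        {
            TcpClient conn = listener.AcceptTcpClient(); // one client for the demo
            new Thread(() =>
            {
                using (conn)
                {
                    NetworkStream stream = conn.GetStream();
                    byte[] buf = new byte[16];
                    int n = stream.Read(buf, 0, buf.Length); // blocks this worker only
                    stream.Write(buf, 0, n);                 // echo the bytes back
                }
            }).Start();
        }).Start();

        // Demo client: send "ping" and wait for the 4-byte echo.
        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, port);
            NetworkStream s = client.GetStream();
            byte[] msg = Encoding.ASCII.GetBytes("ping");
            s.Write(msg, 0, msg.Length);

            byte[] reply = new byte[4];
            int total = 0;
            while (total < 4)
                total += s.Read(reply, total, 4 - total);
            Console.WriteLine(Encoding.ASCII.GetString(reply)); // prints "ping"
        }
        listener.Stop();
    }
}
```

With this shape, a client whose request takes 10 seconds only occupies its own worker thread; a second client connecting 2 seconds later is accepted and served immediately.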
This means you can't use the socket for anything else on the currently executing thread.
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until at least one byte has been read (or the ReadTimeout has expired); note that Read may return fewer than the 10 bytes requested.
There are async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The async variants save you from dedicating a thread (which would sit blocked) to each client, which improves your scalability.
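A small sketch of that APM pattern. Any Stream exposes the BeginRead/EndRead pair, so a MemoryStream stands in for the NetworkStream here to keep the example self-contained:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;

class ApmReadDemo
{
    static void Main()
    {
        // Stand-in for a NetworkStream; the APM calls are the same.
        Stream stream = new MemoryStream(Encoding.ASCII.GetBytes("0123456789"));
        byte[] buf = new byte[10];
        var done = new ManualResetEvent(false);

        // BeginRead returns immediately; the callback fires when data is ready,
        // so the current thread is never blocked on the read itself.
        stream.BeginRead(buf, 0, buf.Length, ar =>
        {
            int bytesRead = stream.EndRead(ar); // completes the pending read
            Console.WriteLine(bytesRead);
            done.Set();
        }, null);

        done.WaitOne(); // only the demo waits here; a real caller would carry on
    }
}
```

Against a real socket, the callback would typically process the bytes and then issue the next BeginRead, forming a receive loop with no dedicated blocked thread per client.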
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays within a function or block of code (such as ReadFile() in C++) until it returns, and does not move on to the code following that block.
This can happen in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can simply use asynchronous methods, for example BeginInvoke() and EndInvoke() in the sockets context, which will not block your calls. This is called the asynchronous programming model.
You can call BeginInvoke() and EndInvoke() either on a delegate or on a control, depending on which asynchronous approach you use to achieve this.
You can use the static method Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple sockets for readability or writability. The advantage is that this is simple. It can be done from a single thread, and you can specify how long you want to wait: either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e., you keep them blocking).
It also works for listening sockets: it reports readability when there is a connection to accept. From experimenting, I can say that it also reports readability on graceful disconnects.
It's probably not as fast as asynchronous sockets. It's also not ideal for detecting errors; I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
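A small self-contained sketch of Socket.Select on a loopback pair. One thing to note: Select mutates the lists you pass in, removing the sockets that are not ready, so build fresh lists before every call:

```csharp
using System;
using System.Collections;
using System.Net;
using System.Net.Sockets;
using System.Text;

class SelectDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        using (TcpClient serverSide = listener.AcceptTcpClient())
        {
            client.Client.Send(Encoding.ASCII.GetBytes("hi"));

            // After Select returns, checkRead contains only the sockets
            // that actually have data ready to read.
            var checkRead = new ArrayList { serverSide.Client };
            Socket.Select(checkRead, null, null, 1_000_000); // wait up to 1 second

            Console.WriteLine(checkRead.Count); // the server socket is readable
        }
        listener.Stop();
    }
}
```

In a real server you would put every client socket (and the listening socket) into checkRead each iteration, then Receive or Accept only on the sockets that survive the call.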
You should use one socket per thread. Blocking (synchronous) sockets wait for a response before returning. Non-blocking (asynchronous) sockets can peek to see if any data has been received and return if no data is there yet.

How much buffer does NetworkStream and TcpClient have?

We are writing a TCP server and client program. How much space is there in the TcpClient buffer? That is, at what point will it begin to throw away data? We are trying to determine whether the TcpClient can be blocking, or whether it should go into its own background thread (so that the buffer cannot get full).
You can get the buffer sizes from TcpClient.ReceiveBufferSize and TcpClient.SendBufferSize.
The available buffer space will vary as data is received/acknowledged (or not) at the TCP level. TcpClient is blocking by default.
No data will be thrown away as a result of full buffers; TCP flow control simply makes the sender wait. Data could be thrown away under error conditions, though (such as when the peer disappears/crashes/exits, etc.).
The MSDN documentation says the default size of the send and receive buffers for TcpClient is 8192 bytes, or 8K. The documentation does not specify a limit as to how big these buffers can be.
As I'm sure you're aware, you send and receive data via the TcpClient using its underlying NetworkStream object. You are in control over whether these are synchronous or asynchronous operations. If you want synchronous behavior, use the Read and Write methods of NetworkStream. If you want asynchronous behavior, use the BeginRead/EndRead and BeginWrite/EndWrite operations.
If you are receiving data as part of some front-end application, I would highly recommend doing this in a secondary thread, whether you do this using the asynchronous methods or synchronously in a separate thread. This will allow your UI to be responsive to the user while still handling the sending and receiving of data in the background.

C# TCP socket, asynchronous read and write at the "same" time (.NET CF), how?

At first glance, when using asynchronous methods at the socket level, there shouldn't be any problems with sending and receiving data from the associated network stream, or 'directly' on the socket. But, as you probably already know, there are.
The problem is particularly pronounced on the Windows Mobile platform and the Compact Framework.
I use the asynchronous methods: BeginReceive, and a callback function which ends the pending asynchronous read of the received data (EndReceive) from the async result.
As I need to constantly receive data from the socket, there is a loop which waits for the data.
The problem begins when I want to send data. For that purpose, before sending any data through the socket, I "force" the asynchronous read to end with EndReceive. There is a great delay in that (sometimes you just can't wait for this timeout). The timeout is too long, and I need to send the data immediately. How? I don't want to close the socket and reconnect.
I use the synchronous Send method for sending data (although the results are the same with the async BeginSend/EndSend methods). When sending is finished, I start receiving data again.
Resources that i know about:
stackoverflow.com...properly-handling-network-timeouts-on-windows-ce - comment about timeouts,
developerfusion.com...socket-programming-in-c-part-2/ - solution for simple client/server using asynchronous methods for receiving and synchronous method Send for sending data.
P.S.: I tried to send the data without ending the asynchronous receive, but then I got a SocketException: "A blocking operation is currently executing" (ErrorCode: 10036).
Thanks in advance! I hope I'm not alone with this 'one little problem'.. : )
Have you considered using the static Poll method (or Select, for multiple sockets) instead of BeginReceive to check whether there is data to read? In my opinion the pending BeginReceive is what is causing you the trouble.
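A self-contained sketch of that suggestion on a loopback pair: poll the socket with a short timeout and only call Receive when Poll reports it readable. Between Poll calls there is no pending receive, so the same thread is free to Send without hitting error 10036:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class PollDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        using (TcpClient serverSide = listener.AcceptTcpClient())
        {
            Socket s = serverSide.Client;

            // Nothing sent yet: Poll returns false after the 10 ms timeout,
            // leaving this thread free to Send with no receive pending.
            bool readableBefore = s.Poll(10_000, SelectMode.SelectRead);

            client.Client.Send(Encoding.ASCII.GetBytes("x"));

            // Now data is queued, so Poll reports readable and a
            // Receive call here would return immediately.
            bool readableAfter = s.Poll(1_000_000, SelectMode.SelectRead);

            Console.WriteLine(readableBefore + " " + readableAfter);
        }
        listener.Stop();
    }
}
```

The receive loop becomes "Poll, then Receive if readable, then do any pending sends", all on one thread, instead of a BeginReceive that must be forcibly ended before every send.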
