C# asynchronous socket accessed by multiple threads

I would like to use the C# asynchronous I/O model for my socket. I have multiple
threads that need to send over the socket. What is the best way of handling this?
A pool of N sockets, with access controlled by a lock? Or is an async send thread-safe
for multiple threads accessing a single socket?
Thanks!
Jacko

The async methods already create new threads to send the data, which will probably add unnecessary overhead to your application. Since you already have multiple threads, you can create an IDisposable type of object to represent access to the socket, and a manager that controls check-out and check-in of the socket. The socket checks itself back in when the IDisposable Dispose method is called. This way you can also control which methods your threads can perform on the socket.
If the socket is already checked out by another thread, the manager simply blocks until it becomes available.
using (SharedSocket socket = SocketManager.GetSocket())
{
//do things with socket
}
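A minimal sketch of that check-out/check-in pattern; SharedSocket and SocketManager are hypothetical names (they are not framework types), and a SemaphoreSlim provides the blocking:

using System;
using System.Net.Sockets;
using System.Threading;

// Hypothetical wrapper: holds the socket while checked out and returns it on Dispose.
class SharedSocket : IDisposable
{
    private readonly Socket _socket;

    internal SharedSocket(Socket socket)
    {
        _socket = socket;
    }

    public int Send(byte[] data)
    {
        return _socket.Send(data);
    }

    public void Dispose()
    {
        SocketManager.CheckIn();   // leaving the using block checks the socket back in
    }
}

// Hypothetical manager: the semaphore makes other threads block until check-in.
static class SocketManager
{
    private static Socket _socket;   // assigned once at startup via Initialize
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public static void Initialize(Socket socket)
    {
        _socket = socket;
    }

    public static SharedSocket GetSocket()
    {
        _gate.Wait();                    // blocks if another thread has the socket
        return new SharedSocket(_socket);
    }

    internal static void CheckIn()
    {
        _gate.Release();
    }
}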

System.Threading.Semaphore comes in handy for this kind of synchronization and for avoiding race conditions.
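For example, a minimal sketch (the class and field names are placeholders) that lets only one thread send on the socket at a time:

using System.Net.Sockets;
using System.Threading;

class SemaphoreGuardedSender
{
    private readonly Socket _socket;
    private readonly Semaphore _gate = new Semaphore(1, 1); // one thread in the critical section

    public SemaphoreGuardedSender(Socket socket)
    {
        _socket = socket;
    }

    public void Send(byte[] data)
    {
        _gate.WaitOne();          // wait until no other thread is sending
        try
        {
            _socket.Send(data);
        }
        finally
        {
            _gate.Release();      // let the next waiting thread proceed
        }
    }
}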

Related

Is multithreading dangerous in sockets writing?

I'm creating a network protocol based on TCP, and I am using Berkeley sockets via C#.
Is the socket buffer going to get mixed up when two threads try to send data via the Socket.Send method at the same time?
Should I use a lock so only one thread accesses it at a time?
According to MSDN, the Socket class is thread safe. That means you don't need to lock the Send method and you can safely send and receive on different threads. But be aware that the order of concurrent sends is not guaranteed, so send each message in a single call to Send rather than in several chunked calls if you don't want the data interleaved.
In addition to that, I would recommend locking and flushing just in case you don't want the server interleaving responses across multiple concurrent requests. But that doesn't seem to be the case here.
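If you do decide to serialize sends yourself, a minimal sketch (class and field names are placeholders) looks like this:

using System.Net.Sockets;

class SerializedSender
{
    private readonly object _sendLock = new object();

    // Sends one complete message while holding the lock so concurrent callers never interleave.
    public void SendMessage(Socket socket, byte[] message)
    {
        lock (_sendLock)
        {
            int sent = 0;
            while (sent < message.Length)      // Send may accept fewer bytes than requested
            {
                sent += socket.Send(message, sent, message.Length - sent, SocketFlags.None);
            }
        }
    }
}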

Accessing NetMQ sockets from multiple threads

Is it safe to access a NetMQ socket from multiple threads, as long as they are not using it simultaneously?
For example, is the following scenario OK:
Thread A uses a socket.
Thread A ends.
Thread B uses the same socket.
If not, must the sole operating thread be the very same one that created the socket?
Technically you can. However, how can you guarantee that it is actually never used concurrently? I suggest using a lock if you want to use the socket from multiple threads. Also take a look at NetMQQueue; it is new and not yet documented, and it is thread-safe for enqueueing only. It might help you synchronize threads around NetMQ sockets, because you can poll on it with the Poller.
https://github.com/zeromq/netmq/blob/master/src/NetMQ.Tests/NetMQQueueTests.cs
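A minimal sketch of the lock-based approach; it assumes NetMQ's DealerSocket, its connection-string constructor, and the SendFrame extension method, and the endpoint and names are placeholders:

using NetMQ;
using NetMQ.Sockets;

class GuardedNetMQSender
{
    private readonly object _gate = new object();
    private readonly DealerSocket _socket = new DealerSocket(">tcp://localhost:5555");

    public void Send(string message)
    {
        lock (_gate)                  // guarantees the socket is never used concurrently
        {
            _socket.SendFrame(message);
        }
    }
}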

TCP Client through multithreading in C#

I am connecting to a Gmail account using a TCP client to read emails. It returns an SslStream for the TCP connection. It works fine in a single-threaded environment, but performance is very poor in terms of speed.
I need to optimize the project so that its speed can be increased. I have implemented multithreading, which increases the speed, but the application hangs at some point.
Is it thread safe to use one TCP connection (a global member)?
Or can I create multiple TCP connections and pass them to the thread method to increase the speed?
Or is there any other better way of doing this?
TcpClient m_TCPclient;
SslStream sslStream;

private void createTCP()
{
    // creating tcp and sslstream
}

private void authenticateUser()
{
    // authenticating the user
}

private void getUserdata()
{
    // iterating folders and its items
    foreach (string emailID in IDList)
    {
        //Thread implementation
    }
}
With regard to thread safety, take a quick glance at the documentation for the TcpClient and SslStream:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
I think what you may want to look at is using the async methods of the stream to deal with the hanging when you perform I/O.
Neither the TcpClient nor the SslStream object is thread safe. You would have to add thread synchronization to avoid race conditions and hangs. However, your application's speed would still be bound by the single TCP client, which essentially renders your multithreading useless in terms of TCP throughput.
Have each thread create its own connection and stream objects instead. This will in turn increase your TCP throughput, which is most likely the bottleneck of your application.
To synchronize the threads so they don't read the same information, have the main thread fetch the list of emails and pass a subset of that list to each thread, which in turn fetches those emails using its own connection (see the sketch below).
You can also use caching to avoid getting the same information every time you restart the application.
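A minimal sketch of that partitioning; the host, port, and fetch logic are hypothetical placeholders:

using System.Collections.Generic;
using System.Linq;
using System.Net.Security;
using System.Net.Sockets;
using System.Threading;

void FetchInParallel(List<string> idList, int threadCount)
{
    var threads = new List<Thread>();

    // Give each thread its own slice of the email IDs.
    foreach (var chunk in idList.Select((id, i) => new { id, i })
                                .GroupBy(x => x.i % threadCount, x => x.id))
    {
        var ids = chunk.ToList();
        var thread = new Thread(() =>
        {
            // Each thread owns its own connection and stream; nothing is shared.
            using (var client = new TcpClient("imap.gmail.com", 993))   // hypothetical endpoint
            using (var ssl = new SslStream(client.GetStream()))
            {
                ssl.AuthenticateAsClient("imap.gmail.com");
                foreach (string emailID in ids)
                {
                    // fetch this email over 'ssl' ...
                }
            }
        });
        thread.Start();
        threads.Add(thread);
    }

    threads.ForEach(t => t.Join());   // wait for all downloads to finish
}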

Alternative to a bunch of while (true) loops when awaiting messages or client connections? (TcpClient C#)

I thought C# was an event-driven programming language.
This type of thing seems rather messy and inefficient to me:
tcpListener.Start();
while (true)
{
TcpClient client = this.tcpListener.AcceptTcpClient();
Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientCommunication));
clientThread.Start(client);
}
I also have to do the same kind of thing when waiting for new messages to arrive. Obviously these functions are enclosed within a thread.
Is there not a better way to do this that doesn't involve infinite loops and wasted CPU cycles? Events? Notifications? Something? If not, is it bad practice to add a Thread.Sleep so it isn't processing nearly as often?
There is absolutely nothing wrong with the method you posted. There are also no wasted CPU cycles like you mentioned. TcpListener.AcceptTcpClient() blocks the thread until a client connects, which means it does not take up any CPU cycles. So the only time the loop actually iterates is when a client connects.
Of course you may want to use something other than while (true) if you want a way to exit the loop and stop listening for connections, but that's another topic. In short, this is a good way to accept connections and I don't see any purpose in having a Thread.Sleep anywhere in here.
There are actually three ways to handle I/O operations for sockets. The first one is to use the blocking functions just as you do. They are usually used to handle a client socket, since the client expects an answer directly most of the time (and can therefore use blocking reads).
For any other socket handling I would recommend using one of the two asynchronous (non-blocking) models.
The first model is the easiest one to use. It's recognized by the Begin/End method names and the IAsyncResult return value from the Begin method. You pass a callback (a delegate) to the Begin method, which will be invoked when something has happened. As an example, take a look at BeginReceive.
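A minimal sketch of that Begin/End pattern applied to accepting clients, which removes the while (true) loop from the question (names and port are placeholders):

using System;
using System.Net;
using System.Net.Sockets;

class ApmListener
{
    private readonly TcpListener _listener = new TcpListener(IPAddress.Any, 9000);

    public void Start()
    {
        _listener.Start();
        _listener.BeginAcceptTcpClient(OnAccept, null);   // returns immediately
    }

    private void OnAccept(IAsyncResult ar)
    {
        TcpClient client = _listener.EndAcceptTcpClient(ar);  // completes the accept
        _listener.BeginAcceptTcpClient(OnAccept, null);       // queue the next accept
        // handle 'client' here (e.g. BeginRead on its stream) ...
    }
}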
The second asynchronous model is more like the Windows I/O model (I/O Completion Ports). It's also the newest model in .NET and should give you the best performance. A SocketAsyncEventArgs object is used to control the behavior (like which method to invoke when an operation completes). You also need to be aware that an operation can complete synchronously, in which case the callback method will not be invoked. Read more about ReceiveAsync.
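A minimal sketch of the SocketAsyncEventArgs model for accepts (names and port are placeholders); note the synchronous-completion case mentioned above, where AcceptAsync returns false and Completed is not raised:

using System.Net;
using System.Net.Sockets;

class SaeaListener
{
    private readonly Socket _listener =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Start()
    {
        _listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
        _listener.Listen(100);

        var args = new SocketAsyncEventArgs();
        args.Completed += (s, e) => ProcessAccept(e);
        StartAccept(args);
    }

    private void StartAccept(SocketAsyncEventArgs args)
    {
        args.AcceptSocket = null;              // must be cleared before reusing the args
        if (!_listener.AcceptAsync(args))      // false => completed synchronously
            ProcessAccept(args);
    }

    private void ProcessAccept(SocketAsyncEventArgs args)
    {
        Socket client = args.AcceptSocket;
        // handle 'client' here (e.g. start a ReceiveAsync on it) ...
        StartAccept(args);                     // post the next accept
    }
}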

Sockets terminology - what does "blocking" mean?

When talking about socket programming in C#, what does the term "blocking" mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take say 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (Send/Receive) does not return ('blocks') until the underlying socket operation has completed.
For a read that means until some data has been received or the socket has been closed.
For a write it means that all data in the buffer has been sent out.
For dealing with multiple clients, start a new thread for each client or give the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it must be one socket per client anyway.
This means you can't use the socket for anything else on the currently executing thread.
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until at least one byte has been read or the ReadTimeout has expired (note that Read may return fewer than the 10 bytes requested).
There are async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The async variants save you from having to dedicate a thread (which would be blocked) to each client, which increases your scalability.
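Continuing the example above, a minimal APM sketch using the same 'stream' (assumed to be a NetworkStream) so the calling thread is not blocked:

byte[] buf = new byte[10];

// BeginRead returns immediately; the callback runs when data arrives.
stream.BeginRead(buf, 0, buf.Length, ar =>
{
    int bytesRead = stream.EndRead(ar);   // completes the operation
    // process buf[0..bytesRead) here ...
}, null);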
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays within a function or block of code (such as ReadFile() in C++) until it returns, and does not move on to the code following that block.
This can be in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can simply use asynchronous methods, for example BeginInvoke() and EndInvoke() in the sockets context, which will not block your calls. This is called the asynchronous programming method.
You can call BeginInvoke() and EndInvoke() either on a delegate or on a control, depending on which asynchronous approach you follow to achieve this.
You can use the function Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple Sockets for both readability or writability. The advantage is that this is simple. It can be done from a single thread and you can specify how long you want to wait, either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e.: keep them blocking).
It also works for listening sockets: it will report readability when there is a connection to accept. From experimenting, I can say that it also reports readability for graceful disconnects.
It's probably not as fast as asynchronous sockets. It's also not ideal for detecting errors; I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
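A minimal sketch of a Select-based loop over a listening socket and its clients (the port and handling are placeholders):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
listener.Listen(100);

var clients = new List<Socket>();
var buffer = new byte[4096];

while (true)
{
    // Select trims the list in place, leaving only the sockets that are ready to read.
    var checkRead = new List<Socket>(clients) { listener };
    Socket.Select(checkRead, null, null, 1000000);   // wait up to 1 second

    foreach (Socket s in checkRead)
    {
        if (s == listener)
        {
            clients.Add(listener.Accept());           // readability on the listener = pending connection
        }
        else
        {
            int n = s.Receive(buffer);
            if (n == 0) { clients.Remove(s); s.Close(); }   // graceful disconnect also shows as readable
            // else: process buffer[0..n) ...
        }
    }
}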
You should use one socket per thread. Blocking (synchronous) sockets wait for a response before returning. Non-blocking (asynchronous) sockets can peek to see whether any data has been received and return immediately if none is there yet.
