I need to build an application (a server) that handles data sent from a client via TCP. I must be able to support (at least) 2000 connections. I've made an attempt to write the TCP server, but I find that when I reach 600-700 connections, the response from my server slows greatly (and it keeps slowing down over time as more connections come in). I don't normally write networking code, so I'm sure there are many concepts I haven't fully grasped and things that could be improved.
The main purpose of my server is to:

1. Handle data sent from clients and store it in a SQL database.
2. Decide (based upon the last message received) what the correct response should be to the client.
3. Queue up a list of responses and send them to the client one at a time.

This needs to happen for all clients. Below is the code I have implemented:
private readonly TcpListener tcpListener;
private readonly Thread listenThread;
private bool run = true;
public Server()
{
this.tcpListener = new TcpListener(IPAddress.Any, AppSettings.ListeningPort); //8880 is default for me.
this.listenThread = new Thread(new ThreadStart(ListenForClients));
this.listenThread.Start();
}
private void ListenForClients()
{
this.tcpListener.Start();
while (run) {
TcpClient client = this.tcpListener.AcceptTcpClient();
//create a thread to handle communication with connected client
Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
clientThread.Start(client);
}
}
private void HandleClientComm(object client)
{
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
Queue responseMessages = new Queue();
while (run)
{
try
{
string lastMessage = clientStream.GetMessage();
if (lastMessage.Length > 0)
{
// Process logic here...
//an item may be added to the response queue, depending on logic.
}
if (responseMessages.Count > 0)
{
string msg = (string)responseMessages.Dequeue();
clientStream.WriteMessage(msg);
clientStream.Flush();
// sleep for 250 milliseconds (or what's in the config).
Thread.Sleep(AppSettings.MessageSendDelayMilliseconds);
}
}
catch (Exception ex)
{
break;
}
}
tcpClient.Close();
}
And finally, here's an extension class I wrote to help me out:
namespace System.Net.Sockets
{
using System.Text;
public static class NetworkStreamExtension
{
private static readonly ASCIIEncoding Encoder = new ASCIIEncoding();
public static string GetMessage(this NetworkStream clientStream)
{
string message = string.Empty;
try
{
byte[] bMessage = new byte[4068];
int bytesRead = 0;
while (clientStream.DataAvailable)
{
bytesRead = clientStream.Read(bMessage, 0, 4068);
message += Encoder.GetString(bMessage, 0, bytesRead);
}
}
catch (Exception)
{
}
return message;
}
public static void WriteMessage(this NetworkStream clientStream, string message)
{
byte[] buffer = Encoder.GetBytes(message);
clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
}
}
}
Lots of articles on the subject suggest using sockets directly instead. I've also read that .NET 4.5 async/await is the best way to implement a solution.
I would like to make sure I take advantage of the newest features in .NET (even 4.5.2 if it will help) and build a server that can handle at least 2000 connections. Can someone advise?
Many thanks!
OK, we need to fix some API usage errors and then the main problem of creating too many threads. It is well established that many connections can only be handled efficiently with async IO. Hundreds of connections count as "too many".
Your client processing loop must be async. Rewrite HandleClientComm so that it uses async socket IO. This is easy with await. You need to search the web for ".net socket await".
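To give a flavor, here is a rough sketch of what an await-based per-client handler could look like, assuming a newline-delimited text protocol; ProcessMessage stands in for your own logic and is not part of the posted code:

    // requires System.IO, System.Text, System.Threading.Tasks
    private async Task HandleClientAsync(TcpClient tcpClient)
    {
        using (tcpClient)
        using (NetworkStream stream = tcpClient.GetStream())
        using (var reader = new StreamReader(stream, Encoding.ASCII))
        using (var writer = new StreamWriter(stream, Encoding.ASCII) { AutoFlush = true })
        {
            while (run)
            {
                // Awaiting the read frees the thread instead of blocking one thread per client.
                string message = await reader.ReadLineAsync();
                if (message == null)
                    break; // remote side closed the connection

                string response = ProcessMessage(message); // placeholder for your own logic
                if (response != null)
                    await writer.WriteLineAsync(response);
            }
        }
    }

The accept loop can then simply call HandleClientAsync(client) for each accepted client (ideally keeping the returned Task so faults can be logged) instead of starting a dedicated thread.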
You can continue to use synchronous accept calls. No downside there. Keep the simple synchronous code. Only make async those calls that have a significant avgWaitTime * countPerSecond product. That will be all socket calls, typically.
You are assuming that DataAvailable returns you the number of bytes in any given message. TCP does not preserve message boundaries. You need to do that yourself. The DataAvailable value is almost meaningless because it can underestimate the amount of data that will eventually arrive.
It's usually better to use a higher level serialization protocol. For example, protobuf with length prefix. Don't use ASCII. You probably have done that only because you didn't know how to do it with a "real" encoding.
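To make the framing point concrete, here is a sketch of reading length-prefixed messages; the 4-byte little-endian prefix is just an assumption, and the payload inside the frame could be protobuf, JSON, or anything else:

    // Reads one length-prefixed frame; returns null when the connection is closed.
    static async Task<byte[]> ReadFrameAsync(NetworkStream stream)
    {
        byte[] lengthBytes = await ReadExactAsync(stream, 4);
        if (lengthBytes == null) return null;
        int length = BitConverter.ToInt32(lengthBytes, 0);
        return await ReadExactAsync(stream, length);
    }

    // Read may return fewer bytes than asked for, so loop until the buffer is full.
    static async Task<byte[]> ReadExactAsync(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = await stream.ReadAsync(buffer, offset, count - offset);
            if (read == 0) return null; // remote end closed the stream mid-frame
            offset += read;
        }
        return buffer;
    }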
Dispose of all resources with using. Don't use the non-generic Queue class. Don't flush streams; it does nothing for sockets.
Why are you sleeping?
Related
There are multiple posts that describe the performance benefit of keeping a TCP connection open, instead of closing and opening each time you need to read or write. For example:
Best practice: Keep TCP/IP connection open or close it after each transfer?
I'm communicating with an RPC-based device that takes JSON commands. The example I have from the device vendor opens and closes a connection each time a command is sent. This is what I currently do via TcpClient in a using statement, but I'd like to see if there's any way I could improve upon what I've already done. In fact, I attempted this when starting the project, but couldn't figure out how to do it, so I closed the connection each time out of frustration and necessity. My latest experiment uses sockets, because all the posts indicate that doing so is necessary for lower-level control:
public class Connection
{
private Socket tcpSocket = null;
public string IpAddress = "192.168.0.30";
public int Port = 50002;
public Connection(string ipAddress, int port)
{
this.IpAddress = ipAddress;
this.Port = port;
}
public void Connect()
{
DnsEndPoint ipe = new DnsEndPoint(this.IpAddress, this.Port);
Socket tempSocket =
new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
tempSocket.Connect(ipe);
if (tempSocket.Connected)
{
this.tcpSocket = tempSocket;
this.tcpSocket.NoDelay = true;
//this.tcpSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive,true);
Console.WriteLine("Successfully connected.");
}
else
{
Console.WriteLine("Error.");
}
}
public void Disconnect()
{
this.tcpSocket.Disconnect(true);
this.tcpSocket.Dispose();
Console.WriteLine("Successfuly closed.");
}
public string SendCommand()
{
string path = @"C:\Users\me\Desktop\request.json";
string request = File.ReadAllText(path);
Byte[] bytesSent = Encoding.UTF8.GetBytes(request);
this.tcpSocket.Send(bytesSent);
this.tcpSocket.Shutdown(SocketShutdown.Send);
var respBytes = ReceiveAll();
string s = System.Text.Encoding.UTF8.GetString(respBytes, 0, respBytes.Length);
return s;
}
public byte[] ReceiveAll()
{
var buffer = new List<byte>();
var currByte = new Byte[1];
var byteCounter = this.tcpSocket.Receive(currByte, currByte.Length, SocketFlags.None);
while (this.tcpSocket.Available > 0)
{
currByte = new Byte[1];
byteCounter = this.tcpSocket.Receive(currByte, currByte.Length, SocketFlags.None);
if (byteCounter.Equals(1))
{
buffer.Add(currByte[0]);
}
}
return buffer.ToArray();
}
}
Console app:
static void Main(string[] args)
{
Connection s = new Connection("192.168.0.30", 50002);
s.Connect();
Console.WriteLine(s.SendCommand());
Console.WriteLine(s.SendCommand());
Thread.Sleep(5000);
s.Disconnect();
Console.ReadKey();
}
This approach works once, the first time I call SendCommand. It doesn't work the second time (it throws an exception), because I call socket.Shutdown() on Send in my SendCommand(). I do so because of this post:
TCPClient not receiving data
However, there doesn't seem to be a way to re-enable the ability to Send after calling Shutdown(). So now I just don't know if it's even possible to keep a TCP connection open if you have to both read and write. Moreover, I can't really find a useful example online. Does anyone know how to do so in .NET? Is this even possible?
TCP/IP is a streaming protocol. To pass messages with it you need a “framing protocol” so peers can determine when a message is finished.
One simple way to signal the end of a message is to close the socket when you’ve sent the last byte. But this prevents socket reuse. See the evolution of HTTP for an example of this.
If this is what your device does, there’s no way to reuse a socket.
Whether it is possible to keep the connection open for more messages depends on the application protocol. There is no way to force this if the protocol does not support it. Thus, ask the vendor or look into the protocol specification (if one exists) for information on whether and how this is supported.
However, there doesn't seem to be a way to re-enable the ability to Send after calling Shutdown().
There is no way. TCP write shutdown means that one does not want to send any more information. It is impossible to take this back. If the protocol supports multiple message exchanges then it needs to have a different way to detect the end of a message than calling Shutdown.
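If (and only if) the device's protocol delimits its responses itself, something along these lines would let you keep one socket open; the trailing-newline terminator here is purely an assumption to illustrate the idea, so check the device documentation for the real end-of-response marker:

    // Illustrative only: send a request and read until the assumed terminator,
    // without ever calling Shutdown, so the same socket can be reused.
    public string SendCommand(string request)
    {
        this.tcpSocket.Send(Encoding.UTF8.GetBytes(request));

        var response = new StringBuilder();
        var buffer = new byte[4096];
        while (true)
        {
            int read = this.tcpSocket.Receive(buffer);
            if (read == 0) break;                      // device closed the connection
            response.Append(Encoding.UTF8.GetString(buffer, 0, read));
            if (response.ToString().EndsWith("\n"))    // assumed end-of-response marker
                break;
        }
        return response.ToString();
    }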
I am trying to send commands to the server, for example requesting that the server send back the list of files in its directory. The problem is that when I send the "list" command to the server, I have to send it twice in order for the server to send back the list of files to the client. I am sure that the server receives the command both times, because on the server side I print the result that is supposed to be sent to the client to the console, and it appears both times.
I am using C# and TCPListeners to listen for incoming responses or commands, and TCPClient to send responses or commands between the server and the client.
The client code
private TcpListener tcpListener = new TcpListener(9090);
private void button3_Click(object sender, EventArgs e)
{
Byte[] bytesToSend = ASCIIEncoding.ASCII.GetBytes("list");
try
{
TcpClient clientSocket = new TcpClient(serverIPFinal, 8080);
if (clientSocket.Connected)
{
NetworkStream networkStream = clientSocket.GetStream();
networkStream.Write(bytesToSend, 0, bytesToSend.Length);
// networkStream.Close();
// clientSocket.Close();
thdListener = new Thread(new ThreadStart(listenerThreadList));
thdListener.Start();
}
}
catch
{
isConnectedLbl.Text = "Server not running";
}
}
//Listener Thread to receive list of files.
public void listenerThreadList()
{
tcpListener.Start();
while (true)
{
handlerSocket = tcpListener.AcceptSocket();
if (handlerSocket.Connected)
{
Control.CheckForIllegalCrossThreadCalls = false;
lock (this)
{
if (handlerSocket != null)
{
nSockets.Add(handlerSocket);
}
}
ThreadStart thdstHandler = new
ThreadStart(handlerThreadList);
Thread thdHandler = new Thread(thdstHandler);
thdHandler.Start();
}
}
}
//Handler Thread to receive list of files.
public void handlerThreadList()
{
Socket handlerSocketList = (Socket)nSockets[nSockets.Count - 1];
NetworkStream networkStreams = new NetworkStream(handlerSocketList);
int requestRead = 0;
string dataReceived;
byte[] buffer = new byte[1024];
//int iRx = soc.Receive(buffer);
requestRead = networkStreams.Read(buffer, 0, 1024);
char[] chars = new char[requestRead];
System.Text.Decoder d = System.Text.Encoding.UTF8.GetDecoder();
int charLen = d.GetChars(buffer, 0, requestRead, chars, 0);
dataReceived = new System.String(chars);
Console.WriteLine(dataReceived);
MessageBox.Show(dataReceived);
//tcpListener.Stop();
thdListener.Abort();
}
The Server code:
TcpListener tcpListener = new TcpListener(8080);
public void listenerThreadCommands()
{
tcpListener.Start();
while (true)
{
handlerSocket = tcpListener.AcceptSocket();
if (handlerSocket.Connected)
{
Control.CheckForIllegalCrossThreadCalls = false;
connections.Items.Add(
handlerSocket.RemoteEndPoint.ToString() + " connected.");
// clientIP = handlerSocket.RemoteEndPoint.ToString();
lock (this)
{
nSockets.Add(handlerSocket);
}
ThreadStart thdstHandler = new
ThreadStart(handlerThreadCommands);
Thread thdHandler = new Thread(thdstHandler);
thdHandler.Start();
//tcpListener.Stop();
//handlerSocket.Close();
}
}
}
//Handler Thread to receive commands
public void handlerThreadCommands()
{
Socket handlerSocketCommands = (Socket)nSockets[nSockets.Count - 1];
NetworkStream networkStream = new NetworkStream(handlerSocketCommands);
int requestRead = 0;
string dataReceived;
byte[] buffer = new byte[1024];
requestRead = networkStream.Read(buffer, 0, 1024);
char[] chars = new char[requestRead];
System.Text.Decoder d = System.Text.Encoding.UTF8.GetDecoder();
int charLen = d.GetChars(buffer, 0, requestRead, chars, 0);
dataReceived = new System.String(chars);
//connections.Items.Add(dataReceived);
if (dataReceived.Equals("list"))
{
localDate = DateTime.Now;
Files = Directory.GetFiles(System.IO.Directory.GetCurrentDirectory())
.Select(Path.GetFileName)
.ToArray();
String FilesString = "";
for (int i = 0; i < Files.Length; i++)
{
FilesString += Files[i] + "\n";
}
String clientIP = handlerSocketCommands.RemoteEndPoint.ToString();
int index = clientIP.IndexOf(":");
clientIP = clientIP.Substring(0, index);
WriteLogFile(logFilePath, clientIP, localDate.ToString(), " ", "list");
Console.WriteLine(clientIP);
Console.WriteLine(FilesString);
Byte[] bytesToSend = ASCIIEncoding.ASCII.GetBytes(FilesString);
try
{
WriteLogFile(logFilePath, clientIP, localDate.ToString(), " ", "list-response");
TcpClient clientSocket = new TcpClient(clientIP, 9090);
if (clientSocket.Connected)
{
NetworkStream networkStreamS = clientSocket.GetStream();
networkStreamS.Write(bytesToSend, 0, bytesToSend.Length);
networkStreamS.Close();
clientSocket.Close();
networkStream.Close();
//tcpListener.Stop();
// handlerSocketAuthenticate.Close();
}
}
catch
{
Console.WriteLine("Cant send");
}
}
else if (dataReceived.Equals("downloadfile"))
{
// handlerSocketAuthenticate.Close();
// tcpListener.Stop();
networkStream.Close();
thdListenerDownload = new Thread(new ThreadStart(listenerThreadDownloading));
thdListenerDownload.Start();
}
else
{
String clientIP1 = handlerSocketCommands.RemoteEndPoint.ToString();
int index = clientIP1.IndexOf(":");
clientIP1 = clientIP1.Substring(0, index);
// handlerSocketAuthenticate.Close();
CommandExecutor(dataReceived, clientIP1);
}
}
There are so many different things wrong with the code you posted, it's hard to know where to start, and it's impossible to have confidence that in the context of a Stack Overflow answer one could sufficiently address all of the deficiencies. That said, in the interest of helping, it seems worth a try:
Sockets are bi-directional. There is no need for the client to use TcpListener at all. (By convention, the "server" is the endpoint that "listens" for new connections, and the "client" is the endpoint that initiates new connections, by connecting to a listening server.) You should just make a single connection from client to server, and then use that socket both for sending to and receiving from the server.
You are setting the CheckForIllegalCrossThreadCalls property to false. This is evil. The exceptions that occur are there to help you. Setting that property to false disables the exceptions, but does nothing to prevent the problems that the exceptions are designed to warn you about. You should use some mechanism to make sure that when you access UI objects, you do so only in the thread that owns those objects. The most primitive approach to this is to use Control.Invoke(). In modern C#, you are better off using async/await. With TcpClient, this is easy: you already are using GetStream() to get the NetworkStream object that represents the socket, so just use the asynchronous methods on that object, such as ReadAsync(), or if you wrap the stream in a StreamWriter and StreamReader, use the asynchronous methods on that object, such as ReadLineAsync().
You are checking the Connected property of the TcpClient object. This is pointless. When the Connect() method returns, you are connected. If you weren't, an exception would have been thrown.
You are not sufficiently synchronizing access to your nSockets object. In particular, you use its indexer in the handlerThreadList() method. This is safe when using the object concurrently only if you have guaranteed that no other thread is modifying the object, which is not the case in your code.
You are writing to the stream using ASCII encoding, but reading using UTF8 encoding. In practice, this is not really a problem, because ASCII includes only the code points 0-127, and those map exactly to the same character code points in UTF8. But it's really bad form. Pick one encoding, stick with it.
You are accepting using AcceptSocket(), but then just wrapping that in a NetworkStream anyway. Why not just use AcceptTcpClient() and call GetStream() on that? Both Socket and TcpClient are fine APIs, but it's a bit weird to mix and match in the same program, and will likely lead to some confusion later on, trying to keep straight which you're using where and why.
Your code assumes that the handlerThreadCommands() method will always be called in exactly the same order in which connections are accepted. That is, you retrieve the current socket with nSockets[nSockets.Count - 1]. But, due to the way Windows thread scheduling works, it is entirely possible that two or more connections could be accepted before any one of the threads meant to handle the connection is started, with the result that only the most recent connection is handled, and it is handled by those multiple threads.
You are assuming that command strings will be received as complete units. But this isn't how TCP works. TCP guarantees only that if you receive a byte, it will be in order relative to all the bytes sent before it. But you can receive any number of bytes. In particular, you can receive just a single byte, or you can receive multiple commands concatenated with each other, or you can receive half a command string, then the other half later, or the second half of one command and the first half of the next, etc. In practice, these problems don't show up in early testing because the server isn't operating under load, but later on they very well may. And the code needs to be designed from the outset to work properly under these conditions; trying to patch bad code later is much more difficult.
I can't say that the above are the only things wrong with the code, but they are the most glaring, and in any case I think the above is sufficient food for thought for you at the moment.
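Pulling a few of those points together, a minimal sketch of the single-connection, newline-framed approach might look like this (the command name "list" comes from the question; the rest is illustrative and assumes async methods plus System.Linq/System.IO):

    // Client (inside an async method): one TcpClient for both the command and the reply.
    using (var client = new TcpClient(serverIPFinal, 8080))
    using (var stream = client.GetStream())
    using (var reader = new StreamReader(stream, Encoding.UTF8))
    using (var writer = new StreamWriter(stream, Encoding.UTF8) { AutoFlush = true })
    {
        await writer.WriteLineAsync("list");
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
            Console.WriteLine(line);                  // ends when the server closes the connection
    }

    // Server: per-connection handler, replying on the same stream it read from.
    async Task HandleConnectionAsync(TcpClient client)
    {
        using (client)
        using (var stream = client.GetStream())
        using (var reader = new StreamReader(stream, Encoding.UTF8))
        using (var writer = new StreamWriter(stream, Encoding.UTF8) { AutoFlush = true })
        {
            string command = await reader.ReadLineAsync();
            if (command == "list")
            {
                var files = Directory.GetFiles(Directory.GetCurrentDirectory()).Select(Path.GetFileName);
                foreach (var file in files)
                    await writer.WriteLineAsync(file);
            }
        }
    }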
Bottom line: you really should spend more time looking at good networking examples, and really getting to understand how they work and why they are written the way they do. You'll need to develop a good mental model for yourself of how the TCP protocol works, and make sure you are being very careful to follow the rules.
One resource I recommend highly is The Winsock Programmer's FAQ. It was written long ago, for a pre-.NET audience, but most of the information contained within is still very much relevant when using the higher-level networking APIs.
Alternatively, don't try to write low-level networking code yourself. There are a number of higher-level APIs that use various serialization techniques to encode whole objects and handle all of the lower-level network transport mechanics for you, allowing you to concentrate on the value-added features in your own program, instead of trying to reinvent the wheel.
I'm having an issue with ZeroMQ, which I believe is because I'm not very familiar with it.
I'm trying to build a very simple service where multiple clients connect to a server and sends a query. The server responds to this query.
When I use REQ-REP socket combination (client using REQ, server binding to a REP socket) I'm able to get close to 60,000 messages per second at server side (when client and server are on the same machine). When distributed across machines, each new instance of client on a different machine linearly increases the messages per second at the server and easily reaches 40,000+ with enough client instances.
Now REP socket is blocking, so I followed ZeroMQ guide and used the rrbroker pattern (http://zguide.zeromq.org/cs:rrbroker):
REQ (client) <----> [server ROUTER -- DEALER --- REP (workers running on different threads)]
However, this completely screws up the performance. I'm getting only around 4000 messages per second at the server when running across machines. Not only that, each new client started on a different machine reduces the throughput of every other client.
I'm pretty sure I'm doing something stupid. I'm wondering if ZeroMQ experts here can point out any obvious mistakes. Thanks!
Edit: Adding code as per advice. I'm using the clrzmq nuget package (https://www.nuget.org/packages/clrzmq-x64/)
Here's the client code. A timer counts how many responses are received every second.
for (int i = 0; i < numTasks; i++) { Task.Factory.StartNew(() => Client(), TaskCreationOptions.LongRunning); }
void Client()
{
using (var ctx = new Context())
{
Socket socket = ctx.Socket(SocketType.REQ);
socket.Connect("tcp://192.168.1.10:1234");
while (true)
{
socket.Send("ping", Encoding.Unicode);
string res = socket.Recv(Encoding.Unicode);
}
}
}
Server - case 1: The server keeps track of how many requests are received per second
using (var zmqContext = new Context())
{
Socket socket = zmqContext.Socket(SocketType.REP);
socket.Bind("tcp://*:1234");
while (true)
{
string q = socket.Recv(Encoding.Unicode);
if (q.CompareTo("ping") == 0) {
socket.Send("pong", Encoding.Unicode);
}
}
}
With this setup, at server side, I can see around 60,000 requests received per second (when client is on the same machine). When on different machines, each new client increases number of requests received at server as expected.
Server Case 2: This is essentially rrbroker from ZMQ guide.
void ReceiveMessages(Context zmqContext, string zmqConnectionString, int numWorkers)
{
List<PollItem> pollItemsList = new List<PollItem>();
routerSocket = zmqContext.Socket(SocketType.ROUTER);
try
{
routerSocket.Bind(zmqConnectionString);
PollItem pollItem = routerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += RouterSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
dealerSocket = zmqContext.Socket(SocketType.DEALER);
try
{
dealerSocket.Bind("inproc://workers");
PollItem pollItem = dealerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += DealerSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
// Start the worker pool; can't connect
// to the inproc socket before binding.
workerPool.Start(numWorkers);
while (true)
{
zmqContext.Poll(pollItemsList.ToArray());
}
}
void RouterSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(routerSocket, dealerSocket);
}
void DealerSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(dealerSocket, routerSocket);
}
void RelayMessage(Socket source, Socket destination)
{
bool hasMore = true;
while (hasMore)
{
byte[] message = source.Recv();
hasMore = source.RcvMore;
destination.Send(message, message.Length, hasMore ? SendRecvOpt.SNDMORE : SendRecvOpt.NONE);
}
}
Where the worker pool's start method is:
public void Start(int numWorkerTasks=8)
{
for (int i = 0; i < numWorkerTasks; i++)
{
QueryWorker worker = new QueryWorker(this.zmqContext);
Task task = Task.Factory.StartNew(() =>
worker.Start(),
TaskCreationOptions.LongRunning);
}
Console.WriteLine("Started {0} with {1} workers.", this.GetType().Name, numWorkerTasks);
}
public class QueryWorker
{
Context zmqContext;
public QueryWorker(Context zmqContext)
{
this.zmqContext = zmqContext;
}
public void Start()
{
Socket socket = this.zmqContext.Socket(SocketType.REP);
try
{
socket.Connect("inproc://workers");
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not create worker, error: {0}", ze.Message);
return;
}
while (true)
{
try
{
string message = socket.Recv(Encoding.Unicode);
if (message.CompareTo("ping") == 0)
{
socket.Send("pong", Encoding.Unicode);
}
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not receive message, error: " + ze.ToString());
}
}
}
}
Could you post some source code or at least a more detailed explanation of your test case? In general the way to build out your design is to make one change at a time, and measure at each change. You can always move stepwise from a known working design to more complex ones.
Most probably the 'ROUTER' is the bottleneck.
Check out these related questions on this:
Client maintenance in ZMQ ROUTER
Load testing ZeroMQ (ZMQ_STREAM) for finding the maximum simultaneous users it can handle
ROUTER (and ZMQ_STREAM, which is just a variant of ROUTER) internally has to maintain the client mapping, hence IMO it can accept only a limited number of connections from a particular client. It looks like ROUTER can multiplex multiple clients only as long as each client has only one active connection.
I could be wrong here, but I am not seeing much proof to the contrary (simple working code that scales to multiple clients with multiple connections with ROUTER or STREAM).
There certainly is a very severe restriction on concurrent connections with ZeroMQ, though it looks like no one knows what is causing it.
I have done performance testing on calling a native unmanaged DLL function with various methods from C#:
1. C++/CLI wrapper
2. PInvoke
3. ZeroMQ/clrzmq
The last might be interesting for you.
My finding at the end of my performance test was that using the ZMQ binding clrzmq was not useful and produced a factor-of-100 performance overhead, even after I tried to optimize the PInvoke calls within the source code of the binding. Therefore I used ZMQ without a binding, with my own PInvoke calls. These calls must be declared with the cdecl calling convention and with the "SuppressUnmanagedCodeSecurity" attribute to get the most speed.
I had to import just 5 functions, which was fairly easy.
In the end the speed was only a bit slower than a plain PInvoke call, but going through ZMQ (in my case over "inproc").
This may be a hint to try it without the binding, if speed matters to you.
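For reference, the imports might look roughly like this; the DLL name and the particular zmq_* functions shown are assumptions that have to match the libzmq build you actually deploy:

    using System;
    using System.Runtime.InteropServices;
    using System.Security;

    [SuppressUnmanagedCodeSecurity]              // skips the security stack walk on each call
    internal static class LibZmq
    {
        [DllImport("libzmq", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr zmq_ctx_new();

        [DllImport("libzmq", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr zmq_socket(IntPtr context, int socketType);

        [DllImport("libzmq", CallingConvention = CallingConvention.Cdecl)]
        public static extern int zmq_connect(IntPtr socket, string endpoint);

        [DllImport("libzmq", CallingConvention = CallingConvention.Cdecl)]
        public static extern int zmq_send(IntPtr socket, byte[] buf, UIntPtr len, int flags);

        [DllImport("libzmq", CallingConvention = CallingConvention.Cdecl)]
        public static extern int zmq_recv(IntPtr socket, byte[] buf, UIntPtr len, int flags);
    }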
This is not a direct answer for your question but may help you to increase performance in general.
A client needs to build several TCP connections to the server simultaneously.
My server's code is below.
while (_running)
{
if (!_listener.Pending())
{
Thread.Sleep(100);
continue;
}
TcpClient client = _listener.AcceptTcpClient();
}
And my client's code is below.
for (int i = 0; i < num; i++)
{
TcpClient tcp = new TcpClient();
tcp.Connect(_server);
}
The first connection succeeds, but the second connection fails because the server does not respond (even though the server is listening for TCP connections).
However, if I add Thread.Sleep(1500) after each tcp.Connect(), all connections succeed. But this only works with one client and one server. If there are many clients, how can I ensure that each connection is accepted by the server? And why does adding Thread.Sleep make the connections succeed?
I had the same task. I looked for a canonical implementation of this task for .NET with no luck.
The approach I use now is described below.
Main idea
We need the listener to receive a connection, hand the connection off to a handler, and start listening for a new connection as soon as possible.
Implementation
AutoResetEvent _stopEvent = new AutoResetEvent(false);
object _lock = new object();
public void StartListening()
{
_listener.BeginAcceptTcpClient(ConnectionHandler, null);
_stopEvent.WaitOne();//this part is different in my original code, I don't wait here
}
public void StopListening()
{
lock(_lock)
{
_listener.Stop();
_listener = null;
}
_stopEvent.Set();//this part is different in my original code
}
void ConnectionHandler(IAsyncResult asyncResult)
{
lock(_lock)
{
if(_listener == null)
return;
var tcpClient = _listener.EndAcceptTcpClient(asyncResult);
var task = new MyCustomTask(tcpClient);
ThreadPool.QueueUserWorkItem(task.Execute);
_listener.BeginAcceptTcpClient(ConnectionHandler,null);
}
}
I am still not very confident in calling _listener.BeginAcceptTcpClient in ConnectionHandler, but I haven't found an alternative way.
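If re-arming BeginAcceptTcpClient inside the callback feels fragile, an alternative sketch on .NET 4.5+ is an async accept loop (_running here is a hypothetical stop flag, and MyCustomTask is the same handler type as above):

    public async Task StartListeningAsync()
    {
        _listener.Start();
        while (_running)                               // hypothetical stop flag
        {
            TcpClient tcpClient = await _listener.AcceptTcpClientAsync();
            var task = new MyCustomTask(tcpClient);    // same handler type as above
            ThreadPool.QueueUserWorkItem(task.Execute);
            // loop straight back to awaiting the next connection
        }
    }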
Since there are still no satisfying answers, I finally used a different approach to handle my case. I found that using the Socket class is faster and more stable than using TcpListener and TcpClient. I tried different approaches with TcpListener and TcpClient. First, I used TcpListener.AcceptTcpClient to accept clients, with and without TcpListener.Pending, but some client connections were still occasionally ignored. Second, I used the asynchronous methods TcpListener.BeginAcceptTcpClient and TcpListener.EndAcceptTcpClient, but still had no success; some client connections were still ignored. Finally, using Socket.Accept instead of TcpListener.AcceptTcpClient, the former has nearly no delay and responds to clients really fast.
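For what it's worth, a bare-bones sketch of that Socket-based accept loop could look like this (the port number and HandleClient handler are illustrative, not from the original code):

    var listenSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    listenSocket.Bind(new IPEndPoint(IPAddress.Any, 8080));
    listenSocket.Listen(100);                          // backlog of pending connections

    while (true)
    {
        Socket clientSocket = listenSocket.Accept();   // blocks until a client connects
        ThreadPool.QueueUserWorkItem(_ => HandleClient(clientSocket)); // HandleClient is your own handler
    }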
I am creating my first app; it's a little server. I just wanted to know what's the best way to accept multiple connections without being flooded, say 10 connections in 10 seconds, and if flooded, close the listener. Would threads or a thread pool help me do this?
I added the ThreadPool, but I'm not sure how I should be using it for my application.
Please take a look at my code below and see what I need to do to make it secure and not get flooded.
Thanks
class Listener
{
public static TcpListener _listener;
private static TcpClient _client;
private static NetworkStream _clientStream;
public Listener(string ip, Int32 port)
{
ThreadPool.SetMaxThreads(50, 100);
ThreadPool.SetMinThreads(50, 50);
// Set the TcpListener IP & Port.
IPAddress localAddr = IPAddress.Parse(ip);
_listener = new TcpListener(localAddr, port);
}
public void Start() // Run this on a separate thread, as
{ // we did before.
_listener.Start();
Console.WriteLine("Starting server...\n");
Console.WriteLine("Listening on {0}:{1}...", Globals._localIP, Globals._port);
while (Globals._Listen)
{
try
{
if (!_listener.Pending())
{
Thread.Sleep(500); // choose a number (in milliseconds) that makes sense
continue; // skip to next iteration of loop
}
Globals._requestCounter += 1;
// Get client's request and process it for web request.
ProcessRequest();
}
catch (SocketException e)
{
// Listener Error.
}
catch (InvalidOperationException er)
{
}
}
_listener.Stop();
}
public static void Stop()
{
Globals._Listen = false;
}
}
static void Main(string[] args)
{
// Set listener settings.
var server = new Listener(Globals._localIP, Globals._port);
// Start the listener on a parallel thread:
Thread listenerThread = new Thread(server.Start);
listenerThread.Start();
Thread.Sleep(500);
}
For TCP in .NET I highly recommend using WCF rather than trying to roll your own. For your needs there is a "TCP port sharing service"; you just need to enable it. Also, things like throttling and message size limits are already taken care of; you just need to configure them. There are also a variety of ways of using WCF net.tcp: it can do streaming, peer-to-peer, full duplex, etc., so there are very few scenarios where you have to roll your own.
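As a rough sketch (IMyService/MyService are placeholder contract and implementation types, the code needs a reference to System.ServiceModel, and the Net.Tcp Port Sharing Service must be enabled on the machine), hosting such an endpoint can be as little as:

    var binding = new NetTcpBinding();
    binding.PortSharingEnabled = true;                 // relies on the Net.Tcp Port Sharing Service

    var host = new ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:808/MyService"));
    host.AddServiceEndpoint(typeof(IMyService), binding, "");
    host.Open();

    Console.WriteLine("net.tcp endpoint open; press Enter to stop.");
    Console.ReadLine();
    host.Close();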