Number of web socket clients that could be opened at once - c#

I am simply looking at the sample code found here:
When I run the server portion and start multiple instances of the client, I notice that when I launch around 40-50 of them at the same time (using Process.Start()), some clients sometimes fail to connect.
Why does this happen? What actually stops all these clients from connecting at once? Is there a request limit hidden somewhere?

Are you sure the limitation is not on the server?
I use ClientWebSocket to run simplistic stress tests against my WebSocket component, and I can reach thousands of connections and nearly 100% of my NIC's throughput. However, I do not create a process for each call. You can see the test console app source code or just download the executable here.
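If you want to reproduce that kind of load from a single process, a rough sketch looks like this; the URI and the connection count are placeholders, not the values my test app actually uses:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

class WsStressSketch
{
    static async Task Main()
    {
        var uri = new Uri("ws://localhost:8080/");       // placeholder endpoint
        var open = new ConcurrentBag<ClientWebSocket>(); // keeps successful sockets alive for the test

        // Open many connections from one process instead of one process per client.
        var tasks = Enumerable.Range(0, 500).Select(async i =>
        {
            var socket = new ClientWebSocket();
            try
            {
                await socket.ConnectAsync(uri, CancellationToken.None);
                open.Add(socket);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Client " + i + " failed: " + ex.Message);
            }
        });

        await Task.WhenAll(tasks);
        Console.WriteLine("Connected: " + open.Count + " / 500");
    }
}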

Related

Timeout error occurs on C# application

I am posting data from a C# application (on a Windows server) to a PHP page that runs on another server (Ubuntu) using the POST method, at a rate of at least 1000 requests per second. The C# application is multi-threaded: as soon as it receives data, it posts it to the PHP page. When I post data continuously, I eventually get a posting timeout error in the C# application; once I restart the application, it works fine for a few hours.
[Note: because PHP takes time to finish its task, new requests have to wait; a queue builds up, the waiting time exceeds 2 minutes, and I get the timeout error.]
Both servers use at most 50% of their CPU and RAM.
I have checked both the C# code and the PHP code and they are working fine; there are no issues or bugs. I have also checked the MySQL configuration, which is fine, but I don't know about the Apache configuration; it is still at its defaults.
What I think is that maybe I should configure Apache or PHP to handle 1000 requests per second, but I don't know exactly, because the same code worked fine until client requests increased.
Thanks in advance, buddy :)
I think you might be hitting a TCP port exhaustion issue. If you are making many sequential calls to another server and don't manage the TCP connections properly, your OS will not immediately release the TCP port it created for the outgoing call, and will assign further OS resources to the next call. I think the default TCP port release time can be as high as 2 minutes.
See How do I prevent Socket/Port Exhaustion? for further details. To be sure, we'd need to see your C# code to see how you are releasing the resources you use when creating the WebClient call.
If it is a port exhaustion issue, then you are going to have to manage your outgoing calls to the PHP server using a manually created pool of WebClient instances; even releasing the WebClient may not immediately release the OS resources that the WebClient made use of.
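Alternatively, if you are on .NET 4.5 or later, a single shared HttpClient avoids the per-request socket churn altogether. A rough sketch, with a placeholder URL standing in for your PHP page:

using System.Net.Http;
using System.Threading.Tasks;

static class PhpPoster
{
    // One shared HttpClient reuses pooled TCP connections instead of
    // opening (and leaving in TIME_WAIT) a new socket for every request.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task PostAsync(string payload)
    {
        // Placeholder endpoint for the PHP page on the Ubuntu server.
        using (var content = new StringContent(payload))
        {
            var response = await Client.PostAsync("http://example.com/receive.php", content);
            response.EnsureSuccessStatusCode();
        }
    }
}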
Thank you for the kind reply, bro. It turned out to be a configuration issue on the Ubuntu server: I hadn't enabled FastCGI. Now it works fine.

WCF prevent server disconnect

I have a small client/server application. I was using a hand-coded TCP connection to allow the client to control the server, but now I've converted it to WCF. This saved me a whole bunch of code, but it also gave me a whole new set of problems to fix...
The latest problem is that after a while, the server disconnects the client. I do not want this to ever happen, under any circumstances. Currently the client gets about a quarter of the way through its run, and then explodes with fire because the server has dropped the connection. I need to stop this happening.
I was able to write a trivial WCF client/server pair that replicates the problem. It seems that if the client calls a method, waits 15 minutes, and then calls a second method, the second call throws an exception babbling something about the socket having been closed. If I reduce the delay, everything works fine.
I read in another answer somewhere that setting ReceiveTimeout should fix this. However, when I tried it, this only fixes the problem under .NET; when running under Mono, it still breaks. Since Mono is the actual target platform, this isn't very helpful.
(Think about SSH - you would not want an SSH server to disconnect you just because you didn't type anything for a while. Perhaps you issued a long-running shell command or something... Just because the server hasn't received any data from you doesn't mean nothing is happening! It certainly doesn't mean your connection should get dropped...)
All code is C#. The server is a self-hosting console app. The client is also a console app. All configuration is in code. Binding is NetTcpBinding with default settings.
What can I do to allow the client to run to completion successfully?
I have a few ideas, but none of them are pretty:
Manually send heartbeat messages. (Yuck!)
Detect disconnection and automatically reconnect? (Again, yuck.)
Turn on "reliable mode". (I'm guessing that since the server deliberately ends the session, this won't help.)
Create one connection per method call. (That's going to be quite a lot of code...)
Stop using WCF?
In the end I "fixed" this by having the client make a new connection for every single command. This works acceptably because the client doesn't send commands all that often. It's annoying having to write the connect/disconnect code a dozen times though...
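Roughly, the per-command pattern I ended up with looks like the sketch below; the contract name and endpoint address here are placeholders rather than my real code:

using System;
using System.ServiceModel;

// Hypothetical contract standing in for the real service interface.
[ServiceContract]
public interface IControlService
{
    [OperationContract]
    void RunCommand(string command);
}

public static class ControlClient
{
    public static void Call(Action<IControlService> action)
    {
        var binding = new NetTcpBinding();
        var address = new EndpointAddress("net.tcp://localhost:8000/control"); // placeholder
        var factory = new ChannelFactory<IControlService>(binding, address);
        var channel = factory.CreateChannel();
        try
        {
            action(channel);                   // one short-lived connection per command
            ((IClientChannel)channel).Close();
            factory.Close();
        }
        catch
        {
            ((IClientChannel)channel).Abort(); // never Close a faulted channel
            factory.Abort();
            throw;
        }
    }
}

// Usage: ControlClient.Call(svc => svc.RunCommand("start"));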

Webservice for serial port devices

I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each is controlled separately) connected to a serial port. The problem is that I don't know how to handle passing back the information that a device has returned the requested data. For example, I send a move command to the motion device (which is very slow and can take a minute or more). Can I just set a big timeout on the client side (and server side) and return, for example, true/false when the operation is completed, or is this a bad idea? Is SOAP with big timeouts OK?
And the other question is whether Mono on Linux (Ubuntu 9.10, Mono 2.4) is stable enough for hosting a web service, or whether I should choose Java or some other language.
I'm open for recommendations.
Thanks for your help!
Using big timeouts is not a good idea. It wastes resources on both the server and the client and you will not be able to detect a "true" timeout condition, when the server is unavailable for example, before the allocated timeout expires.
You really have two options. The first is to use polling. Return immediately from the motion request command, acknowledging the reception of the command (not its completion). Then send requests at regular intervals, asking whether the command is completed or not.
The other alternative requires the client to be able to register a callback endpoint, which the server will call when the motion completes. This makes the whole process asynchronous, but requires the client to be able to operate in server mode. This is very easy to do with WCF - I don't know however if this functionality is available in Mono.
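A minimal sketch of what such a polling contract could look like in WCF; the interface, method, and enum names are illustrative, not taken from your application:

using System.ServiceModel;

[ServiceContract]
public interface IMotionService
{
    // Returns immediately with a job id; the move runs in the background on the server.
    [OperationContract]
    string StartMove(int deviceId, double position);

    // The client polls this until it reports Completed or Failed.
    [OperationContract]
    MoveStatus GetStatus(string jobId);
}

public enum MoveStatus
{
    Pending,
    Running,
    Completed,
    Failed
}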
Not directly related to your question..., but consider com0com and its friends hub4com and com2tcp.

Fast way to check if a server is accessible over the network in C#

I've got a project where I'm hitting a bunch of custom Windows Performance Counters on multiple servers and aggregating them into a database. If a server is down, I want to skip it, and just continue on with my day.
Currently I'm checking to see if a server is live by doing a DirectoryInfo on a share that I've got to look at later in the process anyway, then checking the .Exists property. This is my current code snippet for testing:
DirectoryInfo di = new DirectoryInfo(machine.Share_Path);
if (!di.Exists)
{
    log.Warn("Could not access " + machine.Name + "! Maybe its down?");
    continue; // Skips to the next server in my loop where this snippet exists.
}
This works, but it's pretty slow. It takes about 68 seconds on average for the di.Exists bit to finish its work, and ideally I need to know within a second whether or not a server is accessible. Pinging also isn't an option, since a server can be pingable but not "live" in our environment.
I'm still kind of fresh to the .NET world, so I'm open to any advice people can offer.
Thanks in advance.
-Weegee
Ping First, Ask Questions Later
Why not ping first, and then do the di.Exists if you get a response?
That would allow you to fail early in the case that the server is not reachable, and not waste the time for machines that are down hard.
I have, in fact, used this method successfully before.
MSDN Ping Documentation
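A minimal sketch of that pre-check, assuming a 1-second timeout is acceptable in your environment:

using System.Net.NetworkInformation;

static class Reachability
{
    // Quick pre-check before the slower DirectoryInfo probe; the 1000 ms timeout is arbitrary.
    public static bool RespondsToPing(string host)
    {
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send(host, 1000);
            return reply.Status == IPStatus.Success;
        }
    }
}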
Parallelize
Another option you have is to parallelize the checking, and act on the servers as they are known to be available.
You could use the Parallel.ForEach() method, and use a thread-safe queue along with a simple consumer thread to do the required action. Combined with the checking method above, this could alleviate almost all of your bottleneck on the up/down checking.
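Something along these lines, assuming a Machine type like the one implied by your snippet and reusing the RespondsToPing helper sketched above:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for whatever type holds Name / Share_Path in your loop.
public class Machine
{
    public string Name;
    public string Share_Path;
}

public static class AvailabilityScan
{
    public static ConcurrentQueue<Machine> FindLive(IEnumerable<Machine> machines)
    {
        var live = new ConcurrentQueue<Machine>();

        // Probe every server concurrently so one dead host no longer stalls the whole loop.
        Parallel.ForEach(machines, machine =>
        {
            if (Reachability.RespondsToPing(machine.Name))
                live.Enqueue(machine);
        });

        return live; // a consumer thread (or your existing aggregation loop) drains this
    }
}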
Knock on the Door
Yet another method would be to check whether the required remote service is running (either by hitting its port directly or by querying it with WMI).
Since WMI is almost always running when a machine is up, your connection should be very quick to either succeed or fail.
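A rough sketch of the WMI variant; note that the timeout here is best-effort, so you may still want to wrap the call in a timeout of your own:

using System;
using System.Management;

static class WmiProbe
{
    // Try to connect to the standard CIMv2 namespace on the remote box.
    public static bool WmiReachable(string host)
    {
        try
        {
            var options = new ConnectionOptions { Timeout = TimeSpan.FromSeconds(2) };
            var scope = new ManagementScope($@"\\{host}\root\cimv2", options);
            scope.Connect();
            return scope.IsConnected;
        }
        catch (Exception)
        {
            return false; // unreachable, access denied, or WMI not running
        }
    }
}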
The only "quick" way I think to see if it's up without relying on ping would be to create a socket, and see if you can actually connect to the port of the service you're trying to reach.
This would be the equivalent of telnet servername 135 to see if it's up.
Specifically...
Create a .NET TCP socket client (System.Net.Sockets.TcpClient)
Call BeginConnect() as an asynchronous operation to connect to the server in question on one of the ports that your directory-exists code would use anyway (TCP 135, 139, or 445); see the sketch below.
If you don't hear back from it within X milliseconds, call Close() to cancel the connection.
Disclaimer: I have no idea what effect this would have on any threat/firewall protection that may see this type of Connect / Disconnect with no data sent activity as a threat.
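A sketch of that approach; the default port and timeout values here are just illustrative:

using System;
using System.Net.Sockets;

static class PortProbe
{
    // Attempt a TCP connect to the given port and give up after timeoutMs.
    // 445 (SMB) is a reasonable default if the later step is a file-share access.
    public static bool CanConnect(string host, int port = 445, int timeoutMs = 1000)
    {
        using (var client = new TcpClient())
        {
            try
            {
                IAsyncResult ar = client.BeginConnect(host, port, null, null);
                if (!ar.AsyncWaitHandle.WaitOne(timeoutMs))
                    return false;          // no answer in time; disposing tears the attempt down
                client.EndConnect(ar);
                return true;
            }
            catch (SocketException)
            {
                return false;              // actively refused or unreachable
            }
        }
    }
}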
Opening a socket to a specific port usually does the trick. If you really want it to be fast, be sure to set the NoDelay property on the new socket (disabling the Nagle algorithm) so there is no buffering.
How fast it is will largely depend on latency, but this is probably the fastest way I know of to connect to an endpoint. It's pretty simple to parallelize using the async methods. How quickly you can check will largely depend on your network topology, but in tests against 1000 servers (latency between 0 and 75 ms) I've been able to get the connectivity state in about 30 seconds. Not scientific data at all, but it should give you the idea.
Also, don't ever do this through UNC file shares, because if the server no longer exists you will have a lot of dangling connections that take forever to time out. So if you have a lot of servers with invalid DNS records and you try to poll them, you will bring Windows down completely over time. Things like File.Exists and any file access will cause this.
The "full-blown" option would be to install a monitoring tool like SCOM (System Center Operations Manager); it has an SDK you can use to query SCOM for performance and maintenance information about the machines being monitored. Might be a bridge too far, though...
Telnet is another option. Try telnetting to the target machine to see if it responds.
Create a small Windows Service that you install on your target machine, and have the sysadmin stop it when they perform maintenance on the target machine (just use a batch file to net stop / net start the service).

Client/Server connection woes

I've written a client/server model in C# using .Net remoting. If I have the client connected to the server, then kill the server and restart it without trying to call any server methods from the client whilst the server is down, I can reconnect happily.
If I close the server then try to ping the server from the client (which I do from a separate thread to avoid an endless wait) then when the server comes back online, the client can never talk to it and my Ping thread that was fired during the downtime waits forever deep in the guts of the remoting libraries. I try to Abort this (if trying to Join the thread fails after a short time) but it won't abort. I'm wondering if this is part of the problem.
If I start up another client, then that client can talk to the server just fine. I figured I needed to restart some aspect of the original client but cannot see what would need to be shut down. I certainly null the server I'm connected to and call Activator.GetObject with the same address (something the second client does to connect to the server, which works fine), but re-getting the server doesn't help at all.
The server is running as a singleton via RegisterWellKnownServiceType.
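Roughly, the relevant setup looks like the sketch below; the type names, port, and URI are placeholders rather than my actual code:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical remoted type standing in for the real server object.
public class ControlServer : MarshalByRefObject
{
    public string Ping() { return "pong"; }
}

class ServerHost
{
    static void Main()
    {
        // Server side: singleton registration, as described above.
        ChannelServices.RegisterChannel(new TcpChannel(9000), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(ControlServer), "Control", WellKnownObjectMode.Singleton);
        Console.ReadLine();
    }
}

// Client side: this is the re-acquire call that does not help after the restart:
// var server = (ControlServer)Activator.GetObject(
//     typeof(ControlServer), "tcp://serverhost:9000/Control");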
I would start with wireshark and use it to see what is really going across the wire.
Is .NET remoting a requirement, or could you consider moving to WCF instead? The protocols are better factored and more clearly exposed when needed.
I was solving a similar problem. I had a working .NET remoting application that used configuration files for the remoting, and I had to integrate its remoting routines into a larger application. After integrating it into the larger project, Activator.GetObject still returned a proxy instance, but as soon as a member was called on that proxy, the call hung inside the member call and never returned. The larger application already contained various configuration files, and I had placed the .NET remoting configuration in one of them alongside unrelated settings; that turned out to be the crux of the matter. After I moved the .NET remoting configuration into a new, empty config file, remoting in the larger application started to work.
