When I post data from a C# application (Windows Server) to a PHP page running on another server (Ubuntu) using the POST method,
I am sending at least 1000 requests per second to the PHP page.
The C# application is multi-threaded; as soon as it receives data, it posts that data to the PHP page.
When I post data continuously, I get a posting timeout error in the C# application; once I restart the application, it works for a few hours.
[Note: because PHP takes time to finish the task, new requests have to wait; this creates a queue, the waiting time grows beyond 2 minutes, and I get the timeout error.]
Both of our servers use at most 50% of their CPU and RAM.
I have checked both the C# code and the PHP code; both work fine and there are no issues or bugs.
I have also checked the MySQL configuration and it is fine, but I don't know about the Apache config.
The Apache config is left at the defaults.
What I think is that maybe I should configure Apache or PHP to handle 1000 requests per second; I don't know exactly, because the same code worked fine until the clients' request rate
increased.
Thanks in advance, buddy :)
I think you might be hitting a TCP port exhaustion issue. If you are making many sequential calls to another server and don't manage the TCP connections properly, your OS will not immediately release the TCP port it created for the outgoing call, and will assign further OS resources to the next call. I think the default TCP port release time can be as high as 2 minutes.
See How do I prevent Socket/Port Exhaustion? for further details. To be sure, we'd need to see your C# code, to see how you are releasing the resources you use when making the WebClient call.
If it is a port exhaustion issue, then you are going to have to manage your outgoing calls to the PHP server using a manually created pool of WebClient instances - even releasing the WebClient may not immediately release the OS resources that the WebClient made use of.
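For illustration, here is a minimal sketch of such a pool, assuming the requests are simple string POSTs (the class name and the UploadString call are placeholders for whatever your worker threads actually do):

    using System.Collections.Concurrent;
    using System.Net;

    // Reuse WebClient instances instead of creating one per request,
    // so the OS is not asked for a fresh ephemeral port on every call.
    class WebClientPool
    {
        private readonly ConcurrentBag<WebClient> _pool = new ConcurrentBag<WebClient>();

        public string Post(string url, string data)
        {
            WebClient client;
            if (!_pool.TryTake(out client))
                client = new WebClient(); // pool empty: grow it on demand

            try
            {
                return client.UploadString(url, "POST", data);
            }
            finally
            {
                _pool.Add(client); // return the instance for reuse
            }
        }
    }

Each WebClient instance is only used by one thread at a time here, which matters because WebClient itself is not thread-safe.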
Thank you for the kind reply, bro.
It was a config issue on the Ubuntu server: I hadn't enabled FastCGI. Now it works fine.
Related
I am simply looking at the sample code found here:
When I run the server portion and start multiple instances of the client, I notice that when I start around 40-50 of them at the same time (using Process.Start()), sometimes some clients fail to connect.
Why does this happen? What actually stops all these clients from connecting at once? Is there a request limit hidden somewhere?
Are you sure the limitation is not on the server?
I use ClientWebSocket to do a simplistic stress test on my WebSocket component, and I can reach thousands of connections and almost 100% throughput on my NIC. However, I do not create a process for each call. You can see the test console app source code or just download the executable here.
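One server-side limit that is easy to overlook is the listen backlog: once the queue of pending, not-yet-accepted connections is full, further connection attempts are refused outright, which would look exactly like "some clients fail to connect". Assuming the sample server uses TcpListener (I haven't seen the code, so this is a guess), passing a larger backlog to Start() is a quick thing to test:

    using System.Net;
    using System.Net.Sockets;

    class Server
    {
        static void Main()
        {
            // Port 5000 is a placeholder for whatever the sample uses.
            var listener = new TcpListener(IPAddress.Any, 5000);
            listener.Start(200); // the default backlog can be small; raise it to test

            while (true)
            {
                var client = listener.AcceptTcpClient();
                // ... hand the client off to a worker thread ...
            }
        }
    }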
I have a small client/server application. I was using a hand-coded TCP connection to allow the client to control the server, but now I've converted it to WCF. This saved me a whole bunch of code, but it also gave me a whole new set of problems to fix...
The latest problem is that after a while, the server disconnects the client. I do not want this to ever happen, under any circumstances. Currently the client gets about a quarter of the way through its run, and then explodes with fire because the server has dropped the connection. I need to stop this happening.
I was able to write a trivial WCF client/server pair that replicates the problem. It seems that if the client calls a method, waits 15 minutes, and then calls a second method, the second call throws an exception babbling something about the socket having been closed. If I reduce the delay, everything works fine.
I read in another answer somewhere that setting ReceiveTimeout should fix this. However, when I tried it, this only fixes the problem under .NET; when running under Mono, it still breaks. Since Mono is the actual target platform, this isn't very helpful.
(Think about SSH - you would not want an SSH server to disconnect you just because you didn't type anything for a while. Perhaps you issued a long-running shell command or something... Just because the server hasn't received any data from you doesn't mean nothing is happening! It certainly doesn't mean your connection should get dropped...)
All code is C#. The server is a self-hosting console app. The client is also a console app. All configuration is in code. Binding is NetTcpBinding with default settings.
What can I do to allow the client to run to completion successfully?
I have a few ideas, but none of them are pretty:
Manually send heartbeat messages. (Yuck!)
Detect disconnection and automatically reconnect? (Again, yuck.)
Turn on "reliable mode". (I'm guessing that since the server deliberately ends the session, this won't help.)
Create one connection per method call. (That's going to be quite a lot of code...)
Stop using WCF?
In the end I "fixed" this by having the client make a new connection for every single command. This works acceptably because the client doesn't send commands all that often. It's annoying having to write the connect/disconnect code a dozen times though...
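In case it helps anyone doing the same, a small helper can hide the repetition. This is only a sketch; IControlService and the endpoint address are placeholders for your own contract:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IControlService
    {
        [OperationContract]
        void DoSomething();
    }

    public static class PerCallClient
    {
        // Opens a channel, runs one command, and closes the channel again,
        // so no connection is ever idle long enough to be dropped.
        public static void Invoke(Action<IControlService> command)
        {
            var factory = new ChannelFactory<IControlService>(
                new NetTcpBinding(), "net.tcp://localhost:8000/control");
            var channel = factory.CreateChannel();
            try
            {
                command(channel);
                ((IClientChannel)channel).Close();
                factory.Close();
            }
            catch
            {
                ((IClientChannel)channel).Abort();
                factory.Abort();
                throw;
            }
        }
    }

With that, each call site is a one-liner: PerCallClient.Invoke(svc => svc.DoSomething());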
I am developing a Windows RT application that needs to get data from a MVC WebApi server.
The problem is that the response can take from a few seconds to 3 minutes.
Which is the best approach to solve it?
For now, I call the Web API asynchronously and set a long timeout value to avoid exceptions. Is this a good approach? I don't like it much, because the server keeps a connection open the whole time. Can that significantly affect server performance?
Is there something like a "callback" for web services? I mean that the server calls the client to send the data.
Yes, there are ways to get the server to call back the client, for example WCF duplex communication. However, such techniques will usually keep the connection open (in most cases this is a TCP session). Most web servers do not support numerous concurrent requests, so each prolonged call to the server increases the number of concurrently connected clients. This leads to heavy resource utilisation where there shouldn't be any. If you have many clients, such an architecture is bound to fail.
REST requests should be lightweight, small and fast. Consider using a database to store temporary results, and worker servers to process the load. This is a server-side problem, not a client-side one.
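As a rough sketch of that shape, a Web API controller could hand out a job id immediately and let the client poll for the result. The names here are invented, and an in-memory dictionary stands in for the database and worker servers mentioned above:

    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class JobsController : ApiController
    {
        private static readonly ConcurrentDictionary<Guid, string> Results =
            new ConcurrentDictionary<Guid, string>();

        // POST: returns a job id at once instead of holding the
        // connection open for up to 3 minutes.
        [HttpPost]
        public Guid Submit()
        {
            var id = Guid.NewGuid();
            Task.Run(() =>
            {
                Thread.Sleep(TimeSpan.FromMinutes(1)); // the slow work
                Results[id] = "the result";
            });
            return id;
        }

        // GET: cheap for the client to call at intervals.
        [HttpGet]
        public IHttpActionResult Status(Guid id)
        {
            string result;
            return Results.TryGetValue(id, out result)
                ? (IHttpActionResult)Ok(result)        // finished
                : StatusCode(HttpStatusCode.Accepted); // still running
        }
    }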
Finally I solved it using WebSockets (thanks oleksii). The connection stays open, but I avoid polling for the result repeatedly. Now, when the server finishes the process, it sends the data directly to the client. WebSockets is a standardized protocol that runs over TCP.
http://en.wikipedia.org/wiki/WebSocket
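For anyone curious, the receiving side can be quite small. This sketch uses System.Net.WebSockets.ClientWebSocket with a placeholder URL; a Windows Store app would use the WinRT MessageWebSocket class instead, but the idea is the same:

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    // Connect once, then simply await the message the server pushes
    // when the long-running work is done; no repeated polling.
    static async Task<string> WaitForResultAsync()
    {
        using (var socket = new ClientWebSocket())
        {
            await socket.ConnectAsync(new Uri("ws://example.com/results"),
                                      CancellationToken.None);

            var buffer = new byte[4096];
            var result = await socket.ReceiveAsync(
                new ArraySegment<byte>(buffer), CancellationToken.None);
            return Encoding.UTF8.GetString(buffer, 0, result.Count);
        }
    }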
I've recently started hosting a side project of mine on the new Azure VMs. The app uses Redis as an in-memory cache. Everything was working fine in my local environment but now that I've moved the code to Azure I'm seeing some weird exceptions coming out of Booksleeve.
When the app first fires up, everything works fine. However, after about 5-10 minutes of inactivity, the next request to the app experiences a network exception. (I'm at work right now and don't have the exact error messages on me, so I will post them when I get home if people think they're germane to the discussion.) This causes the internal MessageQueue to close, which results in every subsequent Enqueue() throwing an exception ("The Queue Is Closed").
So after some googling I found this SO post about a DIY connection manager: Maintaining an open Redis connection using BookSleeve. I can certainly implement something similar if that's the best course of action.
So, questions:
Is it normal for the RedisConnection to close periodically after a certain amount of time?
I've seen the conn.SetKeepAlive() method but I've tried many different values and none seem to make a difference. Is there more to this or am I barking up the wrong tree?
Is the connection manager idea from the post above the best way to handle this scenario?
Can anyone shed any additional light on why hosting my Redis instance in a new Azure VM causes this issue? I can also confirm that if I run my local environment against the Azure Redis VM, I experience this issue.
Like I said, if it's unusual for a Redis connection to die after inactivity, I will post the stack traces and exceptions from my logs when I get home.
Thanks!
UPDATE
Didier pointed out in the comments that this may be related to the load balancer that Azure uses: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/12/windows-azure-load-balancer-timeout-details.aspx
Assuming that's the case, what would be the best way to implement a connection manager that could account for this goofy problem? I assume I shouldn't create a connection per unit of work, right?
From other answers/comments, it sounds like this is caused by the Azure infrastructure shutting down sockets that look idle. You could simply have a timer somewhere that performs some kind of operation periodically, but note that this is already built into Booksleeve: when it connects, it checks what the Redis connection timeout is, and configures a heartbeat to prevent Redis from closing the socket. You might be able to piggyback on this to prevent Azure closing the socket too. For example, in a redis-cli session:
config set timeout 30
should configure Redis (on the fly, without having to restart) to have a 30-second connection timeout. Booksleeve should then automatically take steps to ensure that there is a heartbeat shortly before the 30 seconds elapse. Note that if this is successful, you should also edit your configuration file so that the setting applies after the next restart too.
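For reference, the matching line in redis.conf, so the setting also survives a restart, is just:

    timeout 30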
The load balancer in Windows Azure will close a connection after a certain amount of time, depending on the total connection load on the load balancer, and because of this you will get random timeouts on your connections.
As I am not familiar with Redis connections, I am unable to suggest how to implement it correctly; however, in general the suggested workaround is to have a heartbeat pulse that keeps your session alive. Have you had a chance to look at the workaround suggested in the blog post and try to implement it in Redis, to see if that works out for you?
I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each controlled separately) connected to a serial port. The problem is that I don't know how to handle passing back the information that a device has returned the requested data. For example, I send a move command to the motion device (which is very slow and can take a minute or more). Can I just set a big timeout on the client side (and server side) and return, for example, true/false once the operation is completed, or is this a bad idea? Is SOAP with big timeouts OK?
And the other question is whether Mono on Linux (Ubuntu 9.10, Mono 2.4) is stable enough for hosting a web service, or should I choose Java or some other language?
I'm open for recommendations.
Thanks for your help!
Using big timeouts is not a good idea. It wastes resources on both the server and the client and you will not be able to detect a "true" timeout condition, when the server is unavailable for example, before the allocated timeout expires.
You really have two options. The first is to use polling: return immediately from the motion request command, acknowledging the reception of the command (and not its completion), then send requests at regular intervals asking whether the command has completed.
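As a sketch, the polling variant of the service contract could look like this (the names are invented; the point is that StartMove returns an id immediately rather than blocking for the duration of the move). It is shown as a WCF contract, but the same shape works for a plain SOAP web service:

    using System.ServiceModel;

    [ServiceContract]
    public interface IMotionService
    {
        // Returns a command id at once, acknowledging receipt only.
        [OperationContract]
        int StartMove(int deviceId, double targetPosition);

        // Called at regular intervals until it returns true.
        [OperationContract]
        bool IsCompleted(int commandId);
    }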
The other alternative requires the client to be able to register a callback endpoint, which the server will call when the motion completes. This makes the whole process asynchronous, but requires the client to be able to operate in server mode. This is very easy to do with WCF; however, I don't know if this functionality is available in Mono.
Not directly related to your question..., but consider com0com and its friends hub4com and com2tcp.