web calls never timing out - c#

I have a number of applications using various web technologies such as SOAP, WCF services, or just a simple XmlReader. However, they all seem to suffer from the same problem: missing the timeout and hanging indefinitely if the internet connection has problems at the wrong time.
I have set the timeouts in all scenarios to something small, e.g. for WCF:
closeTimeout="00:00:15" openTimeout="00:00:15"
receiveTimeout="00:00:15" sendTimeout="00:00:15"
or for SOAP:
_Session.Timeout = (int)TIMEOUT.TotalMilliseconds;
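or, for the XmlReader case, the two knobs on the underlying HttpWebRequest (note that Timeout alone does not bound reads on the response stream, so ReadWriteTimeout matters too; the URL here is illustrative):
var request = (HttpWebRequest)WebRequest.Create("http://example.com/service");
request.Timeout = 15000;          // covers connecting and GetResponse(), in ms
request.ReadWriteTimeout = 15000; // covers reads on the response stream, in ms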
These timeouts do generally get hit; however, there appears to be some special case where, if the internet drops out at just the wrong time, the call hangs and never times out (using synchronous calls).
I was considering starting a timer every time I make a call, and calling the appropriate .Abort() method to cancel the call if the timer expires. However, I was wondering if there is a simpler way to fix the issue and ensure the timeout gets hit.
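Something like the following is what I had in mind; the proxy type and method here are just placeholders:
var client = new MyServiceClient();              // hypothetical WCF proxy
var call = Task.Run(() => client.GetData());     // run the blocking call on a worker
if (!call.Wait(TimeSpan.FromSeconds(15)))        // our own hard deadline
{
    client.Abort();                              // tear the channel down so the worker unwinds
    throw new TimeoutException("Service call exceeded the hard deadline.");
}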
Does anyone know why this occurs, and if so what a clean/simple/good way is to ensure the calls always time out?

I can guess at why it occurs, but without giving a solution :(
I suspect it's getting caught up on DNS resolution. I've seen various situations where that "doesn't count" - e.g. where it ends up happening on the initiating thread of an asynchronous call, or where it's definitely not been included in timeouts.
If you're able to reproduce this by pulling out your network cable, I'd suggest using Wireshark to confirm my guess - that would at least suggest further avenues for investigation. Maybe there's a DNS timeout somewhere in the .NET stack which is normally infinite but which can be tweaked, for example.
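If DNS does turn out to be the culprit, one speculative workaround is to resolve the host under your own deadline before making the web call at all; a sketch (the host name is illustrative):
var dns = Task.Run(() => Dns.GetHostAddresses("service.example.com"));
if (!dns.Wait(TimeSpan.FromSeconds(5)))   // DNS gets its own budget
    throw new TimeoutException("DNS resolution stalled.");
// proceed with the web call; a stalled lookup has at least been detected early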

Related

C# UDP Socket.ReceiveFrom timeout without using BeginReceiveFrom or Exceptions

I'm trying to implement a basic UDP client. One of its functions is the ability to probe computers to see if a UDP server is listening. I need to scan lots of these computers quickly.
I can't use the Socket.BeginReceiveFrom method and run a timeout while waiting for it to complete, because callbacks may arrive after the timeout is over. Since many computers are being probed in quick succession, I found that late callbacks ended up using modified data, as a new probe was already underway by the time the callback was finally invoked.
I can't use the Socket.ReceiveFrom method with a Socket.ReceiveTimeout, because throwing and handling the SocketException takes a long time (not sure why; I'm not running much code to handle it), meaning it takes about 2 seconds per computer rather than the 100ms I'd hoped for.
Is there any way of running a timeout on a synchronous call to ReceiveFrom without using exceptions to determine when the call has failed/succeeded? Or is there a tactic I've not yet taken that you think could work?
Any advice is appreciated.
I decided to rewrite the probe code using TCP.
However, I later discovered the Socket.ReceiveFromAsync method which, seeing as it only receives a single datagram per call, would have made life easier.
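It also turns out Socket.Poll gives a bounded, exception-free readiness check on a synchronous socket; a rough sketch, assuming socket is the already-bound UDP Socket:
var buffer = new byte[512];
EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
if (socket.Poll(100 * 1000, SelectMode.SelectRead))      // budget in microseconds
{
    int read = socket.ReceiveFrom(buffer, ref remote);   // data is waiting, won't block
}
else
{
    // timed out quietly - treat as "no server listening" and move on
}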

How to ensure message delivery to SOAP endpoint?

I'm consuming a notoriously unreliable SOAP web service in a c# MVC project. I have no control over that service. It will occasionally just stop responding for extended periods. Is there a simple way, from the client side, to continue to retry sending the message, at a set interval, until I receive a response back from the service?
I've looked at MSMQ and ws-reliability but, it appears to me, both options require some control over the web service. Is there anything out there to help me to do this from the client side?
As you probably already figured out, your problem is a huge problem for many. Look up "idempotent" and "webservice". Idempotency means a lot more than just being able to ensure a request/response, but a search will give you plenty of good stuff to read.
If "stop responding for extended periods" means seconds, and the service is seldom called upon, DarkWanderer showed a pretty brute-force solution to such a problem.
But if you have many calls, sleeping may eat up your worker threads, so then you simply have to rely on some kind of queue.
If your calls are non-transactional and non-critical, you could surely code your own queuing mechanism. While this may seem easy, it may still require threading, complex callbacks, logging, active error handling and what not. Many poor souls have reported that what started as a simple solution turned into a maintenance nightmare.
And now I see that one of your requirements is that it must survive an app-pool recycling. Then we are in the last category: critical (and maybe transactional) queuing.
Which is why I would recommend MSMQ from the very start. It handles all of these problems, and the API is in .NET and really good nowadays. Yes, it will add complexity to your overall solution, but that stems from your problem domain.
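To give a flavour of it, durably enqueuing a payload takes only a few lines; a sketch, where the queue path and payload are illustrative (a separate worker would dequeue and retry the SOAP call):
using System.Messaging;   // reference System.Messaging.dll

const string path = @".\private$\soapOutbound";
if (!MessageQueue.Exists(path))
    MessageQueue.Create(path, true);   // true = transactional queue

using (var queue = new MessageQueue(path))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    queue.Send("<your-soap-payload/>", tx);   // survives app-pool recycling
    tx.Commit();
}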
while (true) {
    try {
        var response = webServiceClient.CallMethod();
        if (response.Successful())
            break;   // got a usable answer, stop retrying
    } catch {
        // service unreachable - swallow and retry below
    }
    Thread.Sleep(retryInterval);   // wait before the next attempt, even after an exception
}
That means you just need to keep calling the web service; no message queue or anything else is required. Does that answer your question?

IIS7 stops working after 5 requests

Here is my problem:
I have just been brought onto a massive ASP.NET C# project and I've been charged with fixing some performance issues (not my area of expertise). More specifically, after 5-7 redirects/Ajax calls the web server stops responding and the whole page (and eventually the browser) freezes.
I don't think this is a coding issue, as I've set up breakpoints in a few pages (the Page_Load method) and after the 5 requests the code does not even reach the breakpoints.
I don't believe this is related to this issue, as I've increased the browser's maximum-connections-per-server parameter and got the same behavior. Furthermore, after these 5 requests in one browser (IE), the application stops working in FF as well.
This is not a resource issue as the w3wp.exe process never exceeds 500MB memory.
One thing I've noticed when using Fiddler and other tools to monitor the requests is that the server takes a very long time when loading image files (png, jpg). I don't know if this is relevant.
I've enabled failed request tracing on the server, and the only thing I've noticed is that some requests fail with a 401 error even though I've set Anonymous Authentication to Enabled.
Here is the exact message
MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName ManagedPipelineHandler
Notification 128
HttpStatus 401
HttpReason Unauthorized
HttpSubStatus 0
ErrorCode 0
ConfigExceptionInfo
Notification EXECUTE_REQUEST_HANDLER
ErrorCode The operation completed successfully. (0x0)
This message is sometimes thrown with ModuleName: ScriptModule
I have already wasted 2 days on this thing and I'm running out of ideas so any suggestions would be appreciated.
Like any large generic problem, your best bet in diagnosing the issue is to figure out how to break it down into smaller parts, how to hypothesize the causes, and how to validate or invalidate your hypotheses. My first inclination would be to hypothesize that the server-side processes in this particular case are taking a long time, causing your client requests to block and making the whole thing seem frozen.
From there, I would attempt to replicate the long-running server-side processes by creating isolated client-side tests: perhaps if the URLs are HTTP GETs, I would test the same URLs individually. If they are HTTP POSTs, I'd create an isolated test form, if feasible, to see what happens with each request. If a long-running server-side process is found, then you have a starting point.
If there are no long-running server-side processes, then it may be JavaScript / client-side coding issues that need to be looked into. But definitely, when you're working on a large, unfamiliar project, your best bet is to figure out how to break the issue down into smaller components that can then be tested.
I finally solved the issue. Here is what I did:
- Experimented with IIS settings and app-pool recycling, and noticed that there is nothing wrong with the way IIS handles the requests that actually reach it.
- Focused on the Http.sys module and noticed that the log files contained a lot of Timer_ConnectionIdle and Client_Reset errors.
- After some more experimentation and a lot of Google searches, I accidentally found this answer and it solved my issue. As the answer suggests, the problem was caused by the AVG antivirus installed and incorrectly configured on the server.
Thanks for all the help and suggestions.
If it's Ajax calls that are causing your browser to freeze, make sure they are not blocking (synchronous) Ajax calls.
Just appending to Shan's answer, which is a good one.
First off, there is obviously a code issue as this is by no means 'normal' behavior for IIS.
That said, you must isolate it as Shan indicated. For example, given that the server itself no longer accepts connections, we can pretty well eliminate JavaScript as the source of the problem and relegate it to being just a symptom.
Typically when a worker process spins into space like this it is due to either an infinite loop or an issue where multiple threads are trying to lock the same resource. I bet if you let it run long enough, IIS itself will time out, kill and restart the process.
With that in mind you want to look for any type of multithreaded garbage (which I highly recommend you don't do in a web server) or for anything that indicates a tight infinite loop. A loop is going to become apparent if you execute the requests individually. A multi-threaded issue will only show up if you happen to get a collision.
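To illustrate the locking collision concretely, this deliberately broken sketch is all it takes to hang two requests forever:
static readonly object A = new object(), B = new object();

void RequestOne() { lock (A) { lock (B) { /* work */ } } }
void RequestTwo() { lock (B) { lock (A) { /* work */ } } }
// If one request holds A while the other holds B, neither ever proceeds.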
Run various performance counters on the web server. Also, once it locks up, let it sit that way for a while. Once IIS performs its own reset on the worker process, go look for indicators in the event log.

Service call works in main thread, but crashes when multithreaded

My company has an application that keeps track of information related to web sites that are hosted on various machines. A central server runs a windows service that gets a list of sites to check, and then queries a service running on those target sites to get a response that can be used to update the local data.
My task has been to apply multithreading to this process to reduce the time it takes to run through all the sites (almost 3000 sites that take about 8 hours to run sequentially). The service runs through successfully when it's not multithreaded, but the moment I spread the work out across multiple threads (testing with 3 right now, plus a watcher thread) there's a bizarre crash that seems to originate from the call to the remote services that are supposed to provide the data. It's a SOAP/XML call.
When run on the test server, the service just gives up and doesn't complete its task, but doesn't stop running. When run through the debugger (Dev Studio 2010), the whole thing just stops. I'll run it, and seconds later it'll stop debugging, but not because it completed. It does not throw an exception or give me any kind of message. With breakpoints I can walk through to the point where it just stops. Event logging leads me to the same spot: it stops on the line of code that tries to get a response from the web service on the other sites. And again: it only does that when multithreaded.
I found some information that suggested there's a limit to the number of connections that defaults to 2. The proposed solution is to add some tags to the app.config, but that hasn't solved the problem...
<system.net>
<connectionManagement>
<add address="*" maxconnection="20"/>
</connectionManagement>
</system.net>
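The same limit can also be set in code before any requests go out, which at least rules out the config not being picked up:
System.Net.ServicePointManager.DefaultConnectionLimit = 20;   // code equivalent of maxconnection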
I still think it might be related to the number of allowed connections, but I have been unable to find much information about it online. Is there something straightforward I'm missing? Any help would be much appreciated.
No crash, however bizarre, will escape the stack dump. Try going through that dump and see if it points to some obvious function.
Are you using some third-party tool or other component for the actual service call? If yes, then please check the documentation, or contact the person who wrote it, to confirm that the components are thread-safe. If they are not, you have a large task ahead. :) (I have worked with DB components that are not thread-safe, so trust me, it is not at all uncommon to find a few global static variables thrown around.)
Lastly, if you are 100% sure that this is due to multiple threads, put a lock in your worker thread. Initially, say it covers the entire main while-loop. Theoretically it should not crash then, because even though it is multi-threaded, you have serialized the execution.
The next step is to reduce the scope of the lock. Say there are three functions in the main while-loop, f1(), f2() and f3(); start locking f2() and f3() while leaving f1() unlocked. If things still work, then the problem is somewhere in f2() or f3().
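In code, the narrowing step looks roughly like this (f1/f2/f3 and the loop condition stand in for your real work):
static readonly object _sync = new object();

while (moreSitesToCheck)   // your main while-loop
{
    f1();                  // left unlocked
    lock (_sync)           // serialize only the suspect section
    {
        f2();
        f3();
    }
}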
I hope you get the idea of what I am suggesting.
I know this is like a blind man guessing at an elephant, but that is the best you can do if your code uses a lot of external components that are not adequately documented.

need advice for type of TCP server to cater for this type of application

The requirements of the TCP server:
- receive from each client and send the result back to the same client (that is all the server does)
- must cater for 100 clients
- speed is an important factor, i.e. even at 100 client connections it should not be laggy
For now I have been using the C# async methods, but I find that it always becomes laggy at around 20 connections. By laggy I mean taking around 15-20 seconds to get the result. At around 5-10 connections, the result comes back almost immediately.
Actually, when the TCP server gets a message, it interacts with a DLL which does some processing to return a result. I'm not exactly sure what the workflow behind it is, but at small scale you do not see any problem, so I thought the problem might be with my TCP server.
Right now I am thinking of using a sync method instead: a while loop blocking on the accept call, spawning a new thread for each client after accept (a rough sketch of this is at the end of the question). But at 100 connections, it is definitely overkill.
I chanced upon IOCP; I'm not exactly sure about it, but it seems to be like a connection pool, as the way it handles TCP is quite like the normal way.
For these TCP methods I am also not sure whether it is better to open and close the connection each time a message needs to be passed. On average, messages are passed from each client at around 5-10 minute intervals.
Another alternative might be to use the web (I'm looking at a generic handler) to form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I need advice, especially from those who have done TCP at large scale. I do not have 100 PCs to test with, so this is quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but will consider porting to C++ for speed.
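For concreteness, here is the rough sketch of the sync, thread-per-client approach I mean; Handle() and the port are placeholders:
var listener = new TcpListener(IPAddress.Any, 5000);
listener.Start();
while (true)
{
    TcpClient client = listener.AcceptTcpClient();   // blocks until a client connects
    new Thread(() => Handle(client)) { IsBackground = true }.Start();
}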
You must be doing it wrong. I personally wrote C# based servers that could handle 1000+ connections, sending more than 1 message per second, with <10ms response time, on commodity hardware.
If you have such high response times it must be your server process that is causing blocking. Perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread pool exhaustion. Unfortunately, there are plenty of ways to screw this up, and only few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, going over the SocketAsyncEventArgs example which covers the most performant way of writing socket apps in .Net since the advent of Socket Performance Enhancements in Version 3.5 and so on and so forth.
If you find yourself lost at the task ahead (as it seems you happen to be) I would urge you to embrace an established communication framework, perhaps WCF with a net binding, and use the declarative service model programming of WCF. This way you'll piggyback on the WCF performance. While this may not be enough for some, it will get you far enough, much further than you are right now for sure, with regard to performance.
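As a taste of how little code the WCF route needs, a minimal self-hosted service over NetTcpBinding might look like this; the contract, names and address are all illustrative:
[ServiceContract]
public interface IProbe
{
    [OperationContract]
    string Process(string request);
}

public class ProbeService : IProbe
{
    public string Process(string request)
    {
        // hand off to your processing DLL here
        return request;
    }
}

// hosting:
var host = new ServiceHost(typeof(ProbeService), new Uri("net.tcp://localhost:9000/probe"));
host.AddServiceEndpoint(typeof(IProbe), new NetTcpBinding(), "");
host.Open();   // WCF handles the connection management and threading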
I don't see why C# should be any worse than C++ in this situation - chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming that workload for each thread is more I/O bound than CPU intensive. Whether you spawn off a thread per connection or use a thread pool to manage a number of threads is another matter - and something to determine through experimentation and also whilst considering whether 100 clients is your maximum!
