Determining response time for IIS requests - c#

We have a web service that is called by a third party. Third party's rules are that the service must return a response within 10 seconds.
We log all of the processing time, from when we receive the request, to when the web method exits.
Our problem: the third party is telling us that we are taking over 10 seconds, but according to our logs we finish processing well within the time limit.
I suspect that the third party is having a connectivity problem, and that the time is lost after we complete processing, while the response is coming down the wire. Our in-application logging can't capture that timing (as far as I know) because our web method has already returned.
Is there any IIS logging feature we could use to capture the time spent returning the response?

The time-taken entry in your logs accounts for the time it takes for the client to send the acknowledgment back to the server or reset the connection.
http://blogs.msdn.com/b/mike/archive/2008/11/13/time-vs-time-taken-fields-in-iis-logging.aspx
You'll also want to make use of the win32-status field to determine when there is a connection issue.
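To act on this, you can scan the W3C-format IIS log for entries whose time-taken (in milliseconds) exceeds your SLA. Here is a minimal sketch in Python; it assumes the standard W3C extended log format, and reads field positions from the `#Fields:` directive rather than hard-coding them, since logged fields vary per site configuration.

```python
# Sketch: find requests in a W3C-format IIS log whose time-taken (ms)
# exceeds a threshold, to spot responses that were slow on the wire.
# Field positions are taken from the "#Fields:" directive, not hard-coded.

def slow_requests(log_lines, threshold_ms=10000):
    fields = []
    slow = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]  # e.g. date time cs-uri-stem sc-status time-taken
            continue
        if line.startswith("#") or not line.strip():
            continue  # skip other directives and blank lines
        row = dict(zip(fields, line.split()))
        if int(row.get("time-taken", 0)) > threshold_ms:
            slow.append(row)
    return slow
```

Comparing these entries against your in-application timings should show whether the extra seconds are spent on the wire after your web method returns.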

Force all requests through one ServerPipe with Fiddler

I'm trying to route matching domain requests through a single ServerPipe, but I can't figure out how to get Fiddler to reuse that ServerPipe for all ClientPipes.
The normal behaviour of a Chromium-based browser is to open up to 6 connections to a server for a given domain (when there are multiple resources to fetch). When proxying through Fiddler, this normally results in up to 6 ServerPipes. I want to permit the first to establish a server connection, skip connecting the next 5, and then have all client requests use the first server connection. From the browser's point of view it will still have 6 client connections to Fiddler.
Here is where I'm at -
Let the first CONNECT request pass as normal, which establishes the first connection to the server.
When the next 5 client CONNECTs come in, set the x-ReplyWithTunnel flag on the session in the BeforeRequest handler. This bypasses creating a new server connection, but responds to the client as though a server connection was successfully established.
The client sends a bunch of GET requests down the 6 client pipes to Fiddler, requesting the resources from the server. Those in the first pipe (the one with the actual server pipe) complete.
The GET requests in the other 5 tunnels reach Fiddler, but no response is processed and sent back to the client.
I've tried all manner of ideas but cannot get Fiddler to reuse the single server connection for all 6 client pipes.
All GET requests are to the same domain, so ServerPipe reuse should be ok, no?
Is this even possible ?
If yes, what am I missing ?
Finally worked it out after significant effort.
Under the hood Fiddler uses a "pool" of Pipes to each server.
In my initial post, I was only allowing the creation of a single connection per domain. When the first GET rolled in for a domain, that session took the only Pipe out of the pool for its use. When the other 5 GETs arrived, they couldn't find any Pipes in the pool and subsequently failed.
To work around the problem, one needs to throttle the subsequent GETs and only allow them to run one at a time, each starting once the prior GET is complete.
When this orderly process occurs, the Pipe is put back in the pool, and the next GET in the queue successfully locates a Pipe to use. Unicorns and rainbows then appear.
I used a SemaphoreSlim to "queue" the threads in BeforeRequest and a Release() in BeforeResponse, per domain. The basic functionality works for my needs, but a full implementation would require dealing with things like failed Pipes, hanging GETs and flag overrides.

Bot Framework - Prevent GatewayTimeout for Long Operation

I've built a bot using Bot Framework V4 for .NET that replies to users on both the email and directline channels.
However, some requests take more than 15 seconds to complete, so I receive a GatewayTimeout error:
These requests are heavy (fetch some data from the database, fetch other data from another server via API calls, process the data, generate HTML and send them back to the user...) therefore nothing can be done to shorten the process.
I am aware that the gateway timeout delay is by design (the 15 seconds), but the problem is that the channel automatically retries the request after a small period of time and I end up receiving multiple emails for the same query (approx. 1 minute apart each).
I noticed as well that the directline replies are much faster than the email ones (websocket vs SMTP), so this occurs mainly with the email channel. Note that the emails are kept under 300KB as per this comment, but can easily approach that limit.
Therefore, is there a way to:
Increase the timeout delay?
Disable the automatic retries?
Or perhaps a certain workaround to prevent this issue?
Remember that your bot is a web app that exposes an HTTP endpoint, and every activity sent to your bot is an API call. Long-running API calls should be designed to return a response immediately and do their processing asynchronously. For example, consider the Recognize Text Computer Vision API. It just returns an Operation-Location where the actual result will become available later.
For Bot Framework bots, all you have to do to send a message to the channel after the turn has already ended is to send a proactive message. It's often also a good idea to design your bot to give the user an indication that the result is coming, such as by sending a preliminary "processing" message or a typing indicator, though that's probably unwanted in the case of the email channel. Eric Dahlvang explained this in the issue you linked to:
If the developer knows the response will take longer than 15 seconds, it is possible, depending on the channel, to start a separate thread to handle the long running process, return a valid status code on the receiving thread, and when the process finishes, send a proactive message from the background thread.
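The pattern Eric describes can be sketched as follows (in Python for brevity; this is the acknowledge-then-notify shape, not the Bot Framework API itself). `send_proactive` is a hypothetical stand-in for the channel's proactive-message call, which in Bot Framework is `ContinueConversationAsync` with a stored `ConversationReference`.

```python
# Sketch: return a valid status immediately, run the heavy work on a
# background thread, and deliver the result via a separate "proactive"
# send once it is ready, so the gateway never waits more than a moment.

import threading

def handle_activity(conversation_ref, heavy_work, send_proactive):
    def worker():
        result = heavy_work()                    # may take > 15 seconds
        send_proactive(conversation_ref, result) # reach the user later
    threading.Thread(target=worker, daemon=True).start()
    return 200                                   # immediate HTTP response
```

Because the channel gets its 200 within the timeout, it has no reason to retry, which also stops the duplicate emails described in the question.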

Throttle outgoing connection to external API

I'm currently developing a website in ASP.NET Core 2.2. The site uses an external API, and I have one big problem I don't know how to solve: the API has a limit of 10 requests per IP per second. If 11 users click a button on my site and call the API at the same time, the API can cut me off for a couple of hours. The API owner tells clients to take care of not exceeding the limit themselves. Do you have any ideas on how to do this?
ps. Of course, a million users are a joke, but I want the site to be publicly available :)
That 10 requests/s is a hard limit, and it seems there's no way around it, so you have to solve it on your end.
There are a couple of options:
Call the API directly from the browser using JavaScript. This way each user gets their own 10 requests/s instead of 10 requests/s shared across all users (recommended)
Queue the requests and send out at most 10/s (highly not recommended: it kills your thread pool and can block everyone from accessing your site when requests arrive faster than they can be sent)
Drop the request on the server side when you reach the 10/s limit and have the client retry later (the wait time becomes unbounded when requests arrive faster than they can be sent)
And depending on the content returned by the API you might be able to cache it on server side to avoid having to request it from the 3rd party again.
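A server-side guard for the per-second limit can be sketched as a sliding one-second window of call timestamps (shown in Python for brevity; the same shape works in ASP.NET Core as a singleton service). When `try_acquire()` returns `False`, the caller can queue, serve a cached copy, or reject, per the options above.

```python
# Sketch: sliding-window rate limiter for an upstream "N requests per
# second" cap. try_acquire() returns False once the budget for the
# current one-second window is spent.

import threading
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit=10, window=1.0):
        self.limit, self.window = limit, window
        self.calls = deque()            # timestamps of recent calls
        self.lock = threading.Lock()

    def try_acquire(self):
        now = time.monotonic()
        with self.lock:
            # drop timestamps that have aged out of the window
            while self.calls and now - self.calls[0] > self.window:
                self.calls.popleft()
            if len(self.calls) < self.limit:
                self.calls.append(now)
                return True
            return False
```

Keeping the limit slightly below the provider's cap (say 8-9/s) leaves headroom for clock skew between your server and theirs.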
In this scenario you would need to account for the possibility that you can't process requests in real time. You wouldn't want to have thousands of requests waiting on access to a resource that you don't control.
I second the answer about calling the API from the client, if that's an option.
Another option is to keep a counter of current requests, limit it to ten, and return a 503 error if a request comes in that exceeds that capacity. That's practical if you really don't expect to exceed ten concurrent requests often or ever but want to be sure that in the odd chance that it happens it doesn't shut down this feature of your site.
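The counter-and-503 idea can be sketched with a bounded semaphore (again in Python for brevity; in ASP.NET Core this would be a `SemaphoreSlim` in a singleton). `upstream` is a placeholder for the actual external API call.

```python
# Sketch: allow at most N in-flight calls to the upstream API and
# return 503 when the cap would be exceeded, so one slow dependency
# cannot pile up waiting requests.

import threading

class ConcurrencyGate:
    def __init__(self, capacity=10):
        self.sem = threading.BoundedSemaphore(capacity)

    def call(self, upstream):
        if not self.sem.acquire(blocking=False):
            return 503                # over capacity: tell the client to retry
        try:
            return upstream()         # proxy the external API call
        finally:
            self.sem.release()
```

Returning 503 (ideally with a Retry-After header) pushes the backoff to the client instead of letting requests queue inside your app.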
If you actually expect large volumes where you would exceed ten concurrent requests then you would need to queue the requests, but do it in a process separate from your web application. As mentioned, if you have tons of requests waiting for the same resource your application will become overloaded. You could enqueue the request with an entirely different process, and then the client would have to poll your application with occasional requests to see if there's a response.
The big flaw in this last scenario is that it means your users could end up waiting a long time because your application depends on a finite resource that you cannot scale. You can manage it in a way that keeps your application from failing, but not in a way that makes it respond quickly.

Performance Issue in Windows.Web.Http.HttpClient

I have been using the Windows.Web.Http.HttpClient for my API Requests. My HttpClient is a singleton. I analyzed the resource timing for my API calls with the Network Profiler in Visual Studio. In the Timings split-up, I see that the Waiting (TTFB) part takes the most time (about 275ms. Sometimes it goes as high as 800ms).
As per this doc, waiting time is the
Time spent waiting for the initial response, also known as the Time To
First Byte. This time captures the latency of a round trip to the
server in addition to the time spent waiting for the server to deliver
the response.
When trying the same API call on other platforms (mac NSUrlSession, Android), the waiting time is significantly lower on the same network. My question is whether this waiting-time delay depends on the HttpClient implementation? If not, is there anything that needs to be changed in my NetworkAdapter code?
The TTFB is a function of the round-trip time (RTT) and the server’s response time (SRT), both of which are mostly outside of the client OS’ control. Just as a basic sanity check, I would recommend measuring the TTFB using Scenario 1 of the HttpClient SDK Sample app. One possible explanation would be that the Windows device doesn’t have the same network setup as the Mac/Android devices (e.g., are they all connected via WiFi? If so, are they all using the same band (2.4 GHz or 5 GHz)?). However, the most likely explanation is that the HTTP request being sent out by HttpClient differs from the one sent out by NSUrlSession (e.g., in terms of the headers), resulting in a different server-side processing time.
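As an additional sanity check that removes the HTTP stack from the equation entirely, you can measure TTFB over a raw socket; shown here in Python as a framework-independent sketch. If this number matches across the Windows and Mac/Android machines, the difference you're seeing is in the client stack or request headers rather than the network or server.

```python
# Sketch: send a minimal HTTP/1.1 request over a raw socket and time
# the gap between sending the last request byte and receiving the
# first response byte (the TTFB).

import socket
import time

def measure_ttfb(host, port=80, path="/", timeout=5.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                   "Connection: close\r\n\r\n").encode()
        start = time.monotonic()
        s.sendall(request)
        s.recv(1)                     # blocks until the first byte arrives
        return time.monotonic() - start
```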
The TTFB is very site-dependent. Here's what I’m seeing on my end using the VS2017 Network Profiler with the HttpClient SDK sample app:
(Timing screenshots for Bing.com, Amazon.com, and Microsoft.com omitted.)

php soap client and c# soap server

I am currently working on a project where I think using SOAP would be a good idea, but I can't figure out how to make it work the way I need.
I have a C# console application called ConsoleApp, which will also have a PHP web interface. What I'm thinking of doing is having the PHP web interface control ConsoleApp in some way: I click a button on the web interface, it sends a SOAP request to a SOAP service, the SOAP service passes the information on to ConsoleApp, and the result is returned back to the SOAP service and then back to PHP.
It seems like this would need two separate SOAP services, one for PHP to interface with and one within ConsoleApp, but that doesn't sound right; I think I might be misunderstanding the purpose of SOAP.
How can this be achieved? Thanks for any help you can provide.
UPDATE
As requested I thought I'd add a bit more information on what I am trying to achieve.
The console app acts as an email server: emails are handed to the program and sent on, and if one can't be sent it retries a couple of times until the email goes into a failed state.
The web interface will provide a status of what the email server is doing, i.e. how many emails are incoming, how many are yet to be processed, how many have sent and how many have failed.
From the web page you will be able to shut down or restart the email server, or put one of the failed emails back into the queue to be processed.
The idea is that when the user adds a failed email back into the queue, a SOAP message is sent that the console app receives; it adds the information back into the queue, logs the event in the console app's log file, and increments a counter, which is how it keeps track of emails that need to be processed. Once this has been done, it should send a response back to the web interface saying whether or not the email was successfully added back into the queue, or whether it failed for some reason.
I don't really want to keep polling the database every few seconds, as there could potentially be a large number of emails being processed, and polling would put a large load on the MySQL server. That's why I thought of SOAP: the email server would only need to do something when it receives a SOAP request.
Thanks for any help.
Every web service is going to need a client (in your case PHP) and a server (ConsoleApp). Even though there are two endpoints, it is still one web service. Your PHP will send a SOAP request which ConsoleApp will receive, process and respond to with a SOAP response.
So when someone clicks the button on the web page, you can use JavaScript to build and send the SOAP envelope in the browser. The alternative is to POST the values to a PHP page that will build and send the SOAP.
I have to admit, though, your scenario sounds unusual. I personally haven't heard of web pages talking directly with console apps. Web pages usually talk to web servers, and the servers are usually the ones issuing atypical requests, like your request to ConsoleApp. While it is technically possible, I think it is going to be harder than you are expecting.
Personally, I would ditch SOAP in favor of a much more simple and scalable solution. Assuming you have access to a database, I would have the PHP create a record in the database when the user clicks the button. ConsoleApp would then poll the database every X seconds to look for new records. When it finds a new record, it processes it.
This has the benefit of being simple (database access is almost always easier than SOAP) and scalable (you could easily run an arbitrary number of ConsoleApps to process all of the incoming requests if you are expecting heavy loads). Also, neither the PHP page nor the ConsoleApp have a direct dependency on the other so each individual component is less likely to cause a failure in the whole system.
