I have been using Windows.Web.Http.HttpClient for my API requests. My HttpClient is a singleton. I analyzed the resource timing for my API calls with the Network Profiler in Visual Studio. In the timings breakdown, I see that the Waiting (TTFB) part takes the most time (about 275 ms, sometimes as high as 800 ms).
As per this doc, waiting time is the
Time spent waiting for the initial response, also known as the Time To
First Byte. This time captures the latency of a round trip to the
server in addition to the time spent waiting for the server to deliver
the response.
When I try the same API call on other platforms, macOS (NSUrlSession) or Android, the waiting time is significantly lower on the same network. My question is whether this waiting-time delay depends on the HttpClient implementation. If not, is there anything that needs to be changed in my NetworkAdapter code?
The TTFB is a function of the round-trip time (RTT) and the server’s response time (SRT), both of which are mostly outside of the client OS’ control. Just as a basic sanity check, I would recommend measuring the TTFB using Scenario 1 of the HttpClient SDK Sample app. One possible explanation would be that the Windows device doesn’t have the same network setup as the Mac/Android devices (e.g., are they all connected via WiFi? If so, are they all using the same band (2.4 GHz or 5 GHz)?). However, the most likely explanation is that the HTTP request being sent out by HttpClient differs from the one sent out by NSUrlSession (e.g., in terms of the headers), resulting in a different server-side processing time.
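If you want a number to compare directly against the other platforms, a rough client-side measurement is easy to add. This is a minimal sketch, assuming a UWP app using Windows.Web.Http; with HttpCompletionOption.ResponseHeadersRead the await completes once the status line and headers have arrived, which approximates the profiler's Waiting figure:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Windows.Web.Http;

public static class TtfbProbe
{
    public static async Task<TimeSpan> MeasureAsync(HttpClient client, Uri uri)
    {
        var sw = Stopwatch.StartNew();
        // Completes as soon as the response headers are in, so the elapsed time is
        // roughly RTT + server processing time, excluding the content download.
        var response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead);
        sw.Stop();
        response.Dispose();
        return sw.Elapsed;
    }
}
```

Comparing that number against what NSUrlSession reports for the same URL on the same network should tell you whether the difference really sits in the client stack or in what the server does with each request.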
The TTFB is very site-dependent. Here's what I’m seeing on my end using the VS2017 Network Profiler with the HttpClient SDK sample app:
(Network Profiler screenshots for Bing.com, Amazon.com, and Microsoft.com.)
I'm currently developing a website in ASP.NET Core 2.2. The site uses an external API, but I have one big problem and don't know how to solve it: the external API has a limit of 10 requests per IP per second. If 11 users click a button on my site and call the API at the same time, the API can cut me off for a couple of hours. The API owner tells clients to take care not to exceed the limit. Do you have any ideas how to do this?
PS: Of course, a million users is a joke, but I want the site to be publicly available :)
That 10 requests/s is a hard limit, and it seems there's no way around it, so you have to solve it on your end.
There are a couple of options:
Call the API directly from the browser using JavaScript. That way each user gets their own 10 requests/s instead of 10 requests/s shared across all users (recommended).
Queue the requests and send out at most 10/s (strongly not recommended: it ties up your thread pool and can block everyone from accessing your site whenever requests come in faster than they go out).
Drop the request on the server side when you reach the 10/s limit and have the client retry later (the wait can grow without bound when requests come in faster than they go out).
And depending on the content returned by the API, you might be able to cache it on the server side to avoid having to request it from the third party again (a sketch follows below).
In this scenario you would need to account for the possibility that you can't process requests in real time. You wouldn't want to have thousands of requests waiting on access to a resource that you don't control.
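To illustrate the caching point above, here is a minimal ASP.NET Core sketch, assuming the external API's response can be reused for a short period; the cache key, the 30-second lifetime, and CallExternalApiAsync are illustrative, not from the question:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class CachedApiClient
{
    private readonly IMemoryCache _cache;

    public CachedApiClient(IMemoryCache cache) => _cache = cache;

    public Task<string> GetDataAsync(string query)
    {
        // All users asking for the same query within 30 seconds share one upstream call,
        // which keeps you well under the 10 requests/s limit for popular queries.
        return _cache.GetOrCreateAsync("extapi:" + query, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(30);
            return CallExternalApiAsync(query); // hypothetical call to the third-party API
        });
    }

    private Task<string> CallExternalApiAsync(string query)
    {
        // ... HttpClient call to the external API goes here ...
        throw new NotImplementedException();
    }
}
```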
I second the answer about calling the API from the client, if that's an option.
Another option is to keep a counter of current requests, limit it to ten, and return a 503 error if a request comes in that exceeds that capacity. That's practical if you really don't expect to exceed ten concurrent requests often or ever but want to be sure that in the odd chance that it happens it doesn't shut down this feature of your site.
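A rough sketch of that counter as ASP.NET Core middleware, assuming the limit only needs to hold per server instance; the path prefix, the limit of 10, and the extension method name are illustrative:

```csharp
using System.Threading;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class ExternalApiGate
{
    private static int _inFlight;

    public static IApplicationBuilder UseExternalApiGate(this IApplicationBuilder app)
    {
        return app.Use(async (context, next) =>
        {
            // Only guard the endpoints that fan out to the rate-limited third-party API.
            if (!context.Request.Path.StartsWithSegments("/api/external"))
            {
                await next();
                return;
            }

            if (Interlocked.Increment(ref _inFlight) > 10)
            {
                Interlocked.Decrement(ref _inFlight);
                context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
                context.Response.Headers["Retry-After"] = "1"; // hint to the client to retry shortly
                return;
            }

            try { await next(); }
            finally { Interlocked.Decrement(ref _inFlight); }
        });
    }
}
```

You would register it early in the pipeline with app.UseExternalApiGate(); note that the counter tracks concurrent requests on this one instance, so it only approximates the per-second limit.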
If you actually expect large volumes where you would exceed ten concurrent requests then you would need to queue the requests, but do it in a process separate from your web application. As mentioned, if you have tons of requests waiting for the same resource your application will become overloaded. You could enqueue the request with an entirely different process, and then the client would have to poll your application with occasional requests to see if there's a response.
The big flaw in this last scenario is that it means your users could end up waiting a long time because your application depends on a finite resource that you cannot scale. You can manage it in a way that keeps your application from failing, but not in a way that makes it respond quickly.
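If you do go the queue route, here is a rough in-process sketch using System.Threading.Channels and a hosted background worker. The answer's point about a separate process still applies, and the names (ExternalApiQueue, ExternalApiWorker) and the 1000-item capacity are illustrative:

```csharp
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Bounded queue shared between controllers (producers) and the worker (consumer).
public class ExternalApiQueue
{
    private readonly Channel<string> _channel =
        Channel.CreateBounded<string>(new BoundedChannelOptions(1000)
        {
            FullMode = BoundedChannelFullMode.DropWrite // refuse new work instead of growing forever
        });

    public bool TryEnqueue(string request) => _channel.Writer.TryWrite(request);
    public ChannelReader<string> Reader => _channel.Reader;
}

// Drains the queue at roughly 10 requests per second.
public class ExternalApiWorker : BackgroundService
{
    private readonly ExternalApiQueue _queue;
    public ExternalApiWorker(ExternalApiQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (await _queue.Reader.WaitToReadAsync(stoppingToken))
        {
            int sent = 0;
            while (sent < 10 && _queue.Reader.TryRead(out var request))
            {
                // TODO: call the external API with 'request' and store the result
                // somewhere the client can poll for it later.
                sent++;
            }
            await Task.Delay(1000, stoppingToken); // wait out the rest of the one-second window
        }
    }
}
```

Both pieces would be registered in DI, the queue as a singleton and the worker via AddHostedService.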
I am posting data from a C# application (on a Windows server) to a PHP page running on another server (Ubuntu) using POST.
I am posting at least 1000 requests per second to the PHP page.
The C# application is multithreaded; as soon as it receives data, it posts the data to the PHP page.
When I post data continuously, I get a posting timeout error in the C# application; once I restart the application, it works again for a few hours.
[Note: because PHP takes time to finish each task, new requests have to wait, a queue builds up, the waiting time exceeds 2 minutes, and I get the timeout error.]
Both servers stay at or below about 50% CPU and RAM usage.
I have checked both the C# code and the PHP code; both work fine and there are no bugs.
The MySQL configuration also looks fine, but I don't know about the Apache configuration; it is still at its defaults.
What I think is that maybe I should configure Apache or PHP to handle 1000 requests per second, but I don't know exactly, because the same code worked fine until the clients' request volume increased.
Thanks in advance! :)
I think you might be hitting a TCP port exhaustion issue. If you are making many sequential calls to another server and don't manage the TCP connections properly, your OS will not immediately release the port it used for the outgoing call and will assign further OS resources to the next call. I think the default TCP port release time can be as high as 2 minutes.
See How do I prevent Socket/Port Exhaustion? for further details. To be sure, we'd need to see your C# code, in particular how you release the resources used when creating the WebClient call.
If it is a port exhaustion issue, then you are going to have to manage your outgoing calls to the PHP server, for example with a manually created pool of WebClient instances; even releasing a WebClient may not immediately release the OS resources it made use of.
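If that is what's happening, the usual fix on the C# side is to stop creating a WebClient per request and reuse a single HttpClient for the whole process. A minimal sketch, assuming .NET 4.5+ and that the PHP page accepts form-encoded POSTs:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class PhpPoster
{
    // One shared HttpClient for the whole process: connections are pooled and reused,
    // so each POST does not burn a new ephemeral port.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> PostAsync(string url, Dictionary<string, string> fields)
    {
        using (var content = new FormUrlEncodedContent(fields))
        using (var response = await Client.PostAsync(url, content))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

You may also need to raise ServicePointManager.DefaultConnectionLimit, since the .NET Framework default of 2 concurrent connections per host is far too low for this volume.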
Thank you for the kind reply!
It was a config issue on the Ubuntu server: I hadn't enabled FastCGI. Now it works fine.
We have an application (a meta-search engine) that must make 50 - 250 outbound HTTP connections in response to a user action, frequently.
The way we do this is by creating a bunch of HttpWebRequests and running them asynchronously using Action.BeginInvoke. This obviously uses the ThreadPool to launch the web requests, which run synchronously on their own thread. Note that it is currently this way as this was originally a .NET 2.0 app and there was no TPL to speak of.
What we see using ETW (our event sources combined with the .NET Framework and kernel ones) and NetMon is that while the thread pool can start 200 threads running our code in about 300 ms (so, no thread-pool exhaustion issues here), it takes a variable amount of time, sometimes up to 10-15 seconds, for the Windows kernel to make all the TCP connections that have been queued up.
This is very obvious in NetMon: you see around 60-100 TCP connections open (SYN) immediately (the number varies, but it's never more than around 120), then the rest trickle in over a period of time. It's as if the connections are being queued somewhere, but I don't know where and I don't know how to tune this so we can make more concurrent outgoing connections. The Perfmon Outbound Connection Queue counter stays at 0, but in the Connections Established counter you can see an initial spike of connections and then a gradual increase as the rest filter through.
It does appear that latency to the endpoints we are connecting to plays a part, as running the code close to those endpoints doesn't show the problem as significantly.
I've taken comprehensive ETW traces but there is no decent documentation on many of the Microsoft providers, which would be a help I'm sure.
Any advice to work around this or advice on tuning windows for a large amount of outgoing connections would be great. The platform is Win7 (dev) and Win2k8R2 (prod).
It looks like slow DNS queries are the culprit here. Looking at the ETW provider "Microsoft-Windows-Networking-Correlation", I can trace the network call from inception to connection and note that many connections are taking > 1 second at the DNS resolver (Microsoft-Windows-RPC).
It appears our local DNS server is slow, can't handle the load we are throwing at it, and isn't caching aggressively. Production wasn't showing symptoms as severe because the prod DNS servers do everything right.
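If anyone else hits this, one possible mitigation (a sketch, assuming the target hosts are known before the fan-out and .NET 4.5+ is available) is to warm the Windows DNS cache before launching the requests, so the per-request lookups hit the cache instead of the slow resolver:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

public static class DnsWarmup
{
    // Resolve each distinct host once up front; Windows caches positive answers,
    // so the subsequent HttpWebRequests should not wait on the resolver.
    public static Task WarmAsync(IEnumerable<string> hosts)
    {
        return Task.WhenAll(hosts.Distinct()
                                 .Select(h => Task.Run(() => Dns.GetHostEntry(h))));
    }
}
```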
I am required to create a high-performance application where I will be getting 500 socket messages from my socket clients simultaneously. Based on my logs I can see that my dual-core system is processing 80 messages at a time.
I am using async sockets (BeginReceive) and I have set NoDelay to true.
From the logs on my clients and my server, I can see that a message written by my client is read by my server only after 3-4 seconds.
The service time of my application should be a lot lower.
First, you should post your current code so any potential bugs can be identified.
Second, if you're on .NET 3.5, you might want to look at the SocketAsyncEventArgs enhancements.
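For reference, a minimal receive loop with SocketAsyncEventArgs looks roughly like this. It is a sketch, assuming a single already-connected socket and UTF-8 text messages, with an arbitrary 8 KB buffer:

```csharp
using System;
using System.Net.Sockets;
using System.Text;

public class Receiver
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _args = new SocketAsyncEventArgs();

    public Receiver(Socket connectedSocket)
    {
        _socket = connectedSocket;
        _args.SetBuffer(new byte[8192], 0, 8192);
        _args.Completed += OnReceiveCompleted;
    }

    public void Start()
    {
        // ReceiveAsync returns false when the operation completed synchronously;
        // loop here so synchronous completions don't recurse through the event handler.
        while (!_socket.ReceiveAsync(_args))
        {
            if (!Process(_args))
                return;
        }
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
    {
        if (Process(e))
            Start(); // post the next receive
    }

    private bool Process(SocketAsyncEventArgs e)
    {
        if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
            return false; // connection closed or receive failed

        string message = Encoding.UTF8.GetString(e.Buffer, e.Offset, e.BytesTransferred);
        Console.WriteLine("Received {0} bytes: {1}", e.BytesTransferred, message);
        return true;
    }
}
```

The main advantage over BeginReceive is that the SocketAsyncEventArgs instance and its buffer are reused across receives instead of allocating per call, which matters at hundreds of messages per second.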
Start looking at your resource usages:
CPU usage - both on the overall system, as well as your specific process.
Memory usage - same as above.
Networking statistics.
Once you identify where the bottleneck is, both the community and yourself will have an easier time looking at what to focus on next in improving the performance.
A review of your code may also be necessary - but this may be more appropriate for https://codereview.stackexchange.com/.
When you call Socket.Listen, what is your backlog set to? I can't speak to .NET 4.0, but with 2.0 I have seen a problem where, once your backlog is filled up (too many connection attempts too fast), some of the sockets will get a TCP accept and then a TCP reset. The client then may or may not attempt to reconnect later. This causes a connection bottleneck rather than a data throughput or processing bottleneck.
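For illustration, the backlog is just the argument to Listen; a small sketch with a larger value (the port and the value 512 are arbitrary, and the OS may cap the backlog at its own maximum):

```csharp
using System.Net;
using System.Net.Sockets;

public static class ListenerSetup
{
    public static Socket CreateListener()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // port 9000 is just an example
        // A larger backlog lets bursts of connection attempts queue up instead of being reset.
        listener.Listen(512);
        return listener;
    }
}
```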
I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each controlled separately) connected to serial ports. The problem is that I don't know how to handle passing back the information that a device has returned the requested data. For example, I send a move command to the motion device (which is very slow and can take a minute or more). Can I just set a big timeout on the client side (and server side) and return, for example, true/false when the operation is completed, or is this a bad idea? Is SOAP with big timeouts OK?
The other question is whether Mono on Linux (Ubuntu 9.10, Mono 2.4) is stable enough for hosting a web service, or should I choose Java or some other language?
I'm open for recommendations.
Thanks for your help!
Using big timeouts is not a good idea. It wastes resources on both the server and the client and you will not be able to detect a "true" timeout condition, when the server is unavailable for example, before the allocated timeout expires.
You really have two options. The first is to use polling. Return immediately from the motion request command, acknowledging the reception of the command (not its completion). Then send requests at regular intervals, asking whether the command has completed or not.
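For the polling variant, the service contract can be as small as two operations; a rough sketch in WCF terms, with names and types that are illustrative rather than taken from your application:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMotionService
{
    // Returns immediately with a ticket; the move itself keeps running on the server.
    [OperationContract]
    string StartMove(int deviceId, double position);

    // The client calls this every few seconds until it reports completion or failure.
    [OperationContract]
    MoveStatus GetStatus(string ticket);
}

public enum MoveStatus { Pending, Completed, Failed }
```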
The other alternative requires the client to be able to register a callback endpoint, which the server will call when the motion completes. This makes the whole process asynchronous, but requires the client to be able to operate in server mode. This is very easy to do with WCF - I don't know however if this functionality is available in Mono.
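And the callback variant, again as a sketch with illustrative names; whether Mono's WCF stack supports duplex bindings is something you would need to verify:

```csharp
using System.ServiceModel;

public interface IMotionCallback
{
    // Called by the server when the motion finishes.
    [OperationContract(IsOneWay = true)]
    void MoveCompleted(string ticket, bool success);
}

[ServiceContract(CallbackContract = typeof(IMotionCallback))]
public interface IMotionServiceDuplex
{
    [OperationContract(IsOneWay = true)]
    void StartMove(int deviceId, double position);
}
```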
Not directly related to your question..., but consider com0com and its friends hub4com and com2tcp.