Throttle outgoing connections to an external API - C#

I'm currently developing a website in ASP.NET Core 2.2. The site uses an external API. But I have one big problem and don't know how to solve it: this external API has a limit of 10 requests per IP per second. If 11 users click a button on my site and call the API at the same time, the API can cut me off for a couple of hours. The API owner tells clients to take care not to exceed the limit. Do you have any idea how to do this?
P.S. Of course, a million users is a joke, but I want the site to be publicly available :)

That 10 requests/s is a hard limit, and it seems there's no way around it, so you have to handle it on your end.
There are a couple of options:
Call the API directly from the browser using JavaScript. Since the limit is per IP, each user then gets their own 10 requests/s instead of all users sharing 10 requests/s (recommended).
Queue the requests on your server and send out at most 10 per second (highly not recommended: it ties up your thread pool and can block everyone from accessing your site whenever requests come in faster than they can go out).
Drop requests on the server side once you reach the 10/s limit and have the client retry later (the wait can grow without bound when requests come in faster than they go out); a sketch of this follows below.
And depending on the content returned by the API, you may be able to cache it on the server side to avoid requesting the same data from the third party again.
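If you do have to enforce the limit on the server (option 3), here is a minimal sketch of a fixed-window counter you could use to decide when to drop. The OutgoingRateLimiter name and shape are illustrative, not from any library, and note that a fixed window can still let a short burst through right at a window boundary:

    using System;

    // Illustrative fixed-window limiter; not a library type.
    public sealed class OutgoingRateLimiter
    {
        private readonly object _gate = new object();
        private readonly int _limit;
        private readonly TimeSpan _window;
        private int _count;
        private DateTime _windowStart = DateTime.UtcNow;

        public OutgoingRateLimiter(int limit, TimeSpan window)
        {
            _limit = limit;
            _window = window;
        }

        // True if one more request may be forwarded to the external API right now.
        public bool TryAcquire()
        {
            lock (_gate)
            {
                var now = DateTime.UtcNow;
                if (now - _windowStart >= _window)
                {
                    _windowStart = now;   // start a fresh window
                    _count = 0;
                }
                if (_count >= _limit)
                    return false;         // over the limit; respond 503 and let the client retry
                _count++;
                return true;
            }
        }
    }

You'd register one shared instance, e.g. new OutgoingRateLimiter(10, TimeSpan.FromSeconds(1)) as a singleton, and have the controller return StatusCode(503) whenever TryAcquire() returns false.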

In this scenario you would need to account for the possibility that you can't process requests in real time. You wouldn't want to have thousands of requests waiting on access to a resource that you don't control.
I second the answer about calling the API from the client, if that's an option.
Another option is to keep a counter of in-flight requests, cap it at ten, and return a 503 error when a request comes in that would exceed that capacity. That's practical if you rarely or never expect to exceed ten concurrent requests but want to be sure that, in the odd case it happens, it doesn't shut down this feature of your site.
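A minimal sketch of that counter, assuming ASP.NET Core with an injected HttpClient; the controller, action, and URL are hypothetical. SemaphoreSlim does the counting, and Wait(0) refuses immediately rather than queueing:

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    public class ProxyController : Controller
    {
        // Ten slots shared across all requests; static so every request sees the same counter.
        private static readonly SemaphoreSlim ApiSlots = new SemaphoreSlim(10, 10);
        private readonly HttpClient _httpClient;

        public ProxyController(HttpClient httpClient) => _httpClient = httpClient;

        public async Task<IActionResult> CallExternalApi()
        {
            if (!ApiSlots.Wait(0))          // no free slot: refuse instead of queueing
                return StatusCode(503);     // client retries later
            try
            {
                var body = await _httpClient.GetStringAsync("https://api.example.com/data");
                return Content(body);
            }
            finally
            {
                ApiSlots.Release();         // free the slot even if the call failed
            }
        }
    }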
If you actually expect volumes that would exceed ten concurrent requests, then you need to queue the requests, but do it in a process separate from your web application. As mentioned, if you have tons of requests waiting on the same resource, your application will become overloaded. You could enqueue the request in an entirely different process, and the client would then poll your application with occasional requests to see whether there's a response.
The big flaw in this last scenario is that it means your users could end up waiting a long time because your application depends on a finite resource that you cannot scale. You can manage it in a way that keeps your application from failing, but not in a way that makes it respond quickly.
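For concreteness, a rough sketch of that enqueue-and-poll shape, assuming ASP.NET Core; IWorkQueue and IResultStore are hypothetical stand-ins for whatever external store (a table, a message queue) the separate worker process drains at no more than 10 requests per second:

    using System;
    using Microsoft.AspNetCore.Mvc;

    // Hypothetical abstractions over the out-of-process store the worker drains.
    public interface IWorkQueue { void Enqueue(Guid ticketId); }
    public interface IResultStore { bool TryGet(Guid ticketId, out string result); }

    public class TicketController : Controller
    {
        private readonly IWorkQueue _queue;
        private readonly IResultStore _results;

        public TicketController(IWorkQueue queue, IResultStore results)
        {
            _queue = queue;
            _results = results;
        }

        [HttpPost]
        public IActionResult Enqueue()
        {
            var ticketId = Guid.NewGuid();
            _queue.Enqueue(ticketId);           // the separate worker sends it at <= 10/s
            return Accepted(new { ticketId });  // 202: come back and poll for the result
        }

        [HttpGet]
        public IActionResult Status(Guid ticketId)
        {
            return _results.TryGet(ticketId, out var result)
                ? Ok(result)                      // worker finished this ticket
                : (IActionResult)StatusCode(202); // still pending; poll again later
        }
    }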

Related

How to troubleshoot MaxConcurrentSessions exceeded in IIS hosted WCF Service

I'm way out of my comfort zone, so bear with me on providing the relevant information. We have just moved an IIS-hosted WCF service to a new server, and clients calling this service started experiencing timeouts. It does fine for about 10 minutes after recycling the app pool, and then everything begins timing out. We enabled WCF tracing, where I can see it saying that MaxConcurrentSessions has been exceeded. The documentation says that value defaults to 100 x [# of processors], so it should be 200 for us.
The server is behind a load balancer but is currently the only server. In Performance Monitor we see the connections hover around 6 per second, but they climb to around 30 when the timeouts happen and keep climbing from there.
The clients connect using wsHttpBinding with TransportWithMessageCredential security. The service validates the credentials provided in the message using the ASP.NET membership provider in a custom UserNamePasswordValidator configured on the server binding behavior. The clients do not enable reliableSession on their bindings. The service uses the default SessionMode and InstanceContextMode, which I believe are Allowed and PerSession respectively. We do not call Close on the service proxies because, in a past investigation, I found that this only sets a flag on the proxy object preventing it from being re-used, and ours always go out of scope anyway... but we are now testing whether calling Close does close the connection.
If I'm interpreting the WCF trace log correctly (and I don't understand the majority of what I'm reading there), it appears we are processing around 30-40 messages per minute and each request completes in less than 300 ms (usually much less; on rare occasions nearly 1 s). I determined the message count by counting the "Processing message n" entries over a few one-minute spans. So if we're getting 40 per minute and it takes 100 s for those connections/sessions to time out and close, we would still only have about 68 open at once before the first ones begin to time out, nowhere near the 200 limit. Does the connection for a single client request get more than one session?
The strange thing is we didn't have any timeouts before, and we copied the service and web.config straight over to the new server. I believe the server and IIS versions were upgraded (Server 2016, IIS 10). Can you please help me identify the problem causing these timeouts, and point out which information would be relevant to tracking it down?
Edit:
From my reading, everything seems to indicate that the client must call Close, otherwise the server will leave the connection open until it times out. However, in our test we see one connection created in Performance Monitor, but it remains open even after Close has been called. So I can't determine whether the need to call Close is a rumor or we are misinterpreting our monitoring. The real test would be to call Close everywhere and see if it eliminates our timeouts.
After increasing our MaxConcurrentSessions to 400, we saw in Performance Monitor the number of concurrent sessions and instances steadily rise by about 1 per second, up to about 225, where it finally leveled off and is hovering around there. So it seems like sessions are not being closed.
Well, we figured it out. Nothing just popped up and told us what the problem was, and it took a lot of brainstorming, but here's what we did:
Enabled WCF tracing. Went through the traces and understood enough to see that the traffic didn't look out of the ordinary; all of the events seemed to be for the expected amount and types of service calls. Viewing them in SvcTraceViewer, it didn't seem to be a DoS attack or anything like that. We just used the default configuration from that link, but it looks like tracing can be heavily customized to provide the specific information you're after, if you know what that is.
What really helped in this case was finding the WCF performance counters. Initially we were using the ASP.NET performance counters to look at open sessions, which was not the right metric. This CodeProject guide helped us enable the WCF performance counters to give us insight into the number of sessions and the limit in real time.
It also helped to brush up on how WCF sessions and instances are related as well as creation of a security context:
https://www.codeproject.com/Articles/188749/WCF-Sessions-Brief-Introduction
http://webservices20.blogspot.com/2009/01/wcf-performance-gearing-up-your-service.html
https://learn.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/hh273122(v=vs.100)
We were able to see the percentage of the maximum WCF sessions being used and watched it climb toward the default limit of 200 (100 per processor), eventually leveling off between 150 and 200. That leveling off, together with far more sessions existing at any given time than the average number of requests per minute seen in our WCF tracing, indicated that sessions were eventually closing, but were remaining open until they timed out rather than closing as soon as the server completed the request.
Somewhere on Stack Overflow, in a question I've been unable to find again, I once asked about the purpose of the ClientBase<TChannel>.Close method (a.k.a. the close method of a WCF service proxy) and, somewhat incorrectly, came to the conclusion that all it did was set a flag on the proxy object marking it closed so that it couldn't be used again. The documentation's description of the method seems in line with that:
Causes the ClientBase<TChannel> object to transition from its current state into the closed state.
Well, at the point where I would call Close, my references always just go out of scope anyway, allowing garbage collection to clean them up, so that seemed pointless. But I think a key factor was that that conclusion was about basicHttpBinding, which is stateless. In this case we are using wsHttpBinding, which is stateful, meaning the server keeps the session and leaves the connection open after it completes a request so that subsequent calls from the client can be made on the same connection. So, though I couldn't find any documentation or track down in the source code where it happens, it seems WCF clients must call Close on their service proxy after they make their last request in order to tell the server it can close the connection and free up that session slot. I didn't have the opportunity to look for a message sent to the server upon calling Close, but using the performance counter we were able to observe the number of sessions dropping from 1 to 0, where before it would remain at 1 after our client called the service.
But we're saying a WCF client, over which we may have no control, can harm server performance and possibly create a denial of service if its developers aren't diligent about remembering to call Close, and the server has no control over its own performance?? That sounds like a recipe for disaster. Well, there are two things you can do on the server to mitigate this. First, you can increase the maximum number of sessions. In our case we were hovering around 175, occasionally exceeding 200 under traffic spikes, so we bumped it up to 800 temporarily to ensure we wouldn't hit the max. The trade-off is dedicating more server resources to holding sessions that will probably never be used again until they time out. Luckily, the server also controls the timeout: the service can control how long these sessions are held open using the ReceiveTimeout and the InactivityTimeout. Both default to 10 minutes, and the lesser of the two is used. If you're thinking, "ReceiveTimeout sounds wrong; that controls the amount of time the service can take to receive a large message," you're not alone. However, that's incorrect. On the server side:
ReceiveTimeout – used by the Service Framework Layer to initialize the session-idle timeout which controls how long a session can be idle before timing out.
And on the client side it is not used. So we set our ReceiveTimeout to 30 seconds, and the session count dropped significantly. That may actually have been too low, because some spots in code that do re-use the service proxy (making multiple calls in a loop, for instance, or doing some data processing between calls) now get an error when trying to call the service after the session has been closed. So you will have to find the right balance. But best practice, it seems, is to close your connections.
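For reference, here is roughly what both knobs look like when set in code. For an IIS-hosted service like ours they would normally go in web.config instead (the serviceThrottling behavior and the binding's receiveTimeout attribute); the self-hosted sketch below, with a placeholder MyService contract, is just to show which properties are involved:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void Ping();
    }

    public class MyService : IMyService
    {
        public void Ping() { }
    }

    class Hosting
    {
        static void Main()
        {
            // Binding matching the question's setup, with the 30-second idle timeout.
            var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
            binding.ReceiveTimeout = TimeSpan.FromSeconds(30); // server-side session-idle timeout

            var host = new ServiceHost(typeof(MyService), new Uri("https://localhost:8443/MyService"));
            host.AddServiceEndpoint(typeof(IMyService), binding, "");

            // Raise the session cap (defaults to 100 x processor count).
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentSessions = 400;

            host.Open();
            Console.ReadLine(); // keep the host alive
        }
    }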
One gotcha to watch out for is calling Dispose on your service proxy. I had always typed .dispo to see if IntelliSense would pop up a Dispose method on my proxy, found that it didn't, and assumed the proxy didn't implement IDisposable and didn't need to be closed or disposed. It turns out it does implement IDisposable, but explicitly, so you'd have to cast it to IDisposable to call Dispose on it. But wait! Don't go putting your proxy in a using statement just yet. The implementation of Dispose, unhelpfully, just calls Close on the proxy, which throws an exception if the proxy is in the faulted state (i.e. if a service call threw an exception). So you can't safely do something like this:
using (MyWcfClient proxy = new MyWcfClient())
{
    try
    {
        proxy.Calculate();
    }
    catch (Exception)
    {
    }
}
because if Calculate throws an exception, the closing brace of the using block will throw another exception when it tries to dispose your proxy. Instead, you have to call Close after your last service method call. Evidently you can also call Abort in the catch, though I'm not sure whether that actually communicates with the server to end the session.
MyWcfClient proxy = new MyWcfClient();
try
{
    proxy.Calculate();
    proxy.Close();
}
catch (Exception)
{
    proxy.Abort();
}
Addendum
We surmise that the reason we started experiencing this after moving servers, and not before, is that we were using Barracuda products before and are now using Oracle, and perhaps the old load balancer or firewall was closing the open connections for us.

ASP.NET - Requests Queued and Requests Rejected

I am running an ASP.NET service. Under high load, the service starts returning "Service Unavailable - 503".
Previously it was able to cope with these loads; I am still investigating why that has changed.
I see a high requests-rejected rate (via the ASP.NET performance counter); however, the requests-queued figure (also via the ASP.NET performance counter) varies from deployment to deployment, anywhere from 1 to 150. For some deployments that show a high requests-rejected rate, I can correlate it with a high requests-queued rate. But for some deployments the requests queued is low (1-5) while the requests-rejected rate is high.
Am I missing something here? Any pointers on how to investigate this issue further?
I'd take a peek with a profiler to see if you're getting load in areas you weren't before, such as synchronous DB and network calls.
Look at New Relic (simple to use) and identify the bottlenecks; simple code changes may get you out of your immediate hole.
Moving forward, look into making the code base more async (if it isn't already); a sketch of the kind of change involved follows below.
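To sketch what that sync-to-async change looks like (assuming MVC with Entity Framework 6; the AppDbContext and Orders set are illustrative): the blocking version pins a thread-pool thread for the whole database round trip, which under load is exactly what fills the queue and drives rejections, while the async version releases the thread while waiting.

    using System.Data.Entity;       // ToListAsync (EF6)
    using System.Linq;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        private readonly AppDbContext _db = new AppDbContext(); // illustrative EF context

        public async Task<ActionResult> Index()
        {
            // Before: var orders = _db.Orders.ToList();    // blocks a thread-pool thread
            var orders = await _db.Orders.ToListAsync();     // thread is freed during the DB call
            return View(orders);
        }
    }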

How to send email with delay?

I have an ASP.NET MVC application, and I need to send an email to a user X minutes after they leave a page (the delay is different for each user).
How can I do it?
HTTP is stateless, and once the response is sent, execution of the page is finished. You need an application that will send mail even when the website hasn't been accessed by anybody for a significant interval. You can put the emails that need to be sent later into a database, along with their send times. A separate application, such as a Windows service, can then poll the database at a fixed interval, say every 30 seconds, and send the emails whose send time has been reached, as sketched below.
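A rough sketch of that polling loop, assuming an EF-style AppDbContext with a QueuedEmails table (SendAt/Sent columns) and a SendEmail SMTP helper; all of those names are hypothetical:

    using System;
    using System.Linq;
    using System.Threading;

    public class EmailSenderWorker
    {
        // Called on the Windows service's worker thread; stop is signalled on shutdown.
        public void PollAndSend(CancellationToken stop)
        {
            while (!stop.IsCancellationRequested)
            {
                using (var db = new AppDbContext())               // hypothetical EF context
                {
                    var due = db.QueuedEmails                     // hypothetical table
                                .Where(e => !e.Sent && e.SendAt <= DateTime.UtcNow)
                                .ToList();
                    foreach (var mail in due)
                    {
                        SendEmail(mail.To, mail.Subject, mail.Body); // hypothetical SMTP helper
                        mail.Sent = true;                            // flag so it is not sent twice
                    }
                    db.SaveChanges();
                }
                Thread.Sleep(TimeSpan.FromSeconds(30));           // the 30-second poll interval
            }
        }
    }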
The solution I would choose depends on the needed scale and reliability of the system you're building.
If it's a low-scale (i.e. one server without too many simultaneous users), non-mission-critical system (i.e. it's OK if from time to time some emails are not actually sent, for example if your server crashes), then the solution can be as simple as managing an in-memory queue with a thread that wakes periodically to send emails to the users who recently left the page.
If you need to build something that would be very reliable and potentially have to send a very large number of emails in a short time, and if your system has to scale to a lot of machines, then you would want to build a solution based on a queue in some storage, where as many machines as needed would pick items and handle them. An API such as Windows Azure Queue Service can be a good fit for this if you need a really high scale and reliability.

Sending Email Notifications Immediately/On-Demand Versus Sending them Via Scheduler/Cron

In your opinion (hopefully one that is formed based on fact, as opposed to emotion) what is the better way to send out email notifications from a website?
For example, say User A on your site requests a friendship with User B, at which point you would generate an email to send to User B.
The question is - when is the best time to send the email? Immediately, as part of the same execution path, or scheduling the email as part of a batch?
Like I said, my question is rather generalized, so you can assume different architectures - one server dedicated to hosting, another dedicated to emailing, a single server, cloud hosting, etc... I'm curious about all answers, really.
As I see it:
With immediate emails, you get timely emails, but you can potentially bog down your server by sending too many emails should your website receive a lot of traffic. That being said, because you're not sending a batch of emails, they are all one-offs.
If you batch your emails and have a scheduled task or cron job pick them up and send them, your emails are not as immediate, so assume you decrease the interval so that batches are sent every minute. The issue, as I see it, is concurrency: if another batch kicks off before the first one completes, you risk sending duplicate emails unless you appropriately flag or lock what you're sending (one such claiming pattern is sketched below).
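To make the flagging concrete, I'm imagining something along these lines (just a sketch; the PendingEmails table, the EF-style db context, and the SendEmail helper are all hypothetical):

    // Inside the scheduled job: claim first, then send only what was claimed.
    var batchId = Guid.NewGuid();

    // One atomic UPDATE stamps the pending rows with this run's id, so an
    // overlapping run sees no 'Pending' rows left to claim.
    db.Database.ExecuteSqlCommand(
        "UPDATE PendingEmails SET BatchId = @p0, Status = 'Sending' WHERE Status = 'Pending'",
        batchId);

    var claimed = db.PendingEmails
                    .Where(e => e.BatchId == batchId)
                    .ToList();
    foreach (var mail in claimed)
    {
        SendEmail(mail);        // hypothetical send helper
        mail.Status = "Sent";   // a second run never sees these rows as 'Pending'
    }
    db.SaveChanges();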
In my personal experience, when I've had emails sent off immediately on a high traffic site, performance wasn't impacted too much, though a number of emails failed to send out.
Thoughts?
I would say definitely schedule them. There has to be some tolerance between the user's request and the server acting on it: if someone can accept a friendship, they presumably (I hope) can also refuse a friendship from the same person. If so, what happens if I click accept and refuse in rapid succession on your website?
You have two options in this case, imo:
like SO does, add some timing to user clicks (you cannot accept and then refuse within 2 seconds),
or you can allow it, but then the final message to the person whose friendship was accepted/requested is scheduled on the server and sent after, say, 30 minutes (or less; a matter of architectural choice).
Hope this helps.

ASP.NET background processing blocks status or UI feedback

I know this question has been asked many times, but my problem is a little different.
I have a page which lets users download and upload Excel files. Generating the Excel file for download takes approximately 2 minutes. I have added checkpoints which update the database with statuses (started processing, working on header, etc.). I have done the same for upload.
I also have an AJAX request which checks the database at a fixed interval and prints the status, to give the user feedback (started processing, working on header, etc.).
The problem is, I get the feedback only when the process is complete. It looks like the session is blocked during the background process, and any other requests (AJAX) only complete once the background process is over. The AJAX code makes roughly 10 requests at 4-second intervals, and I get all 10 responses back only at the end.
I have tried two iframes and also frames, one running the AJAX and the other running the process; that doesn't work. I tried separate browsers (process running in IE, AJAX running in Firefox) and that works, so I know my code works. Can anybody advise? Thanks.
P.S. My environment is IIS 6 and ASP.NET 3.5 with MVC 1.0; the browser is IE 6.0.
Your browser limits the number of connections it will open concurrently.
I believe IE6 has a limit of 2 connections per host. That means that even if you fire off more AJAX requests, only two requests can be running at the same time.
That is most likely why you're not seeing results until the end: the browser is busy processing other connections and doesn't get to the status request until the work is already done. It also explains why it works from two different browsers, because then you don't share the same connection limit.
Here's an article that details the issue.
This is exactly what I was looking for (asynchronous-processing-in-asp-net-mvc-with-ajax-progress-bar).
Using delegate BeginInvoke with IAsyncResult helped with the blocked session.
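For anyone else who lands here, a minimal sketch of that pattern on classic .NET Framework (GenerateExcel stands in for the long-running method that writes the checkpoint statuses): the controller action returns immediately, releasing the request (and its session lock), so the polling AJAX calls are no longer stuck behind the export.

    using System.Web.Mvc;

    public class ExportController : Controller
    {
        public ActionResult StartExport()
        {
            System.Action work = GenerateExcel;       // hypothetical long-running method
            work.BeginInvoke(work.EndInvoke, null);   // fire-and-forget on a thread-pool thread
            return Json(new { started = true });      // request ends; AJAX polls read the status rows
        }

        private void GenerateExcel()
        {
            // ...generate the file, updating checkpoint statuses in the database as it goes...
        }
    }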
