I'm using the Gmail API .NET client to send and get emails.
Recently I started getting exceptions with this message for some Gmail accounts, for both sending and getting emails:
Google.Apis.Requests.RequestError
User-rate limit exceeded. Retry after 2018-09-25T13:31:30.444Z [429]
Errors [
Message[User-rate limit exceeded. Retry after 2018-09-25T13:31:30.444Z] Location[ - ] Reason[rateLimitExceeded] Domain[usageLimits]
]
I'd like to know if it's possible to check the per-user quota usage for my project for a specific account. In the console I found this:
In the "Queries per 100 seconds per user" column there are no numbers, and the hint from the question mark icon just says: Per user quota usage is not displayed.
From the Gmail API Docs we can find:
Per User-rate limit: 250 quota units per user per second, moving average (allows short bursts)
messages.send method consumes 100 quota units
messages.get method consumes 5 quota units
messages.list method consumes 5 quota units
messages.attachments.get method consumes 5 quota units
I don't think I'm reaching 250 quota units per second for any user, yet I'd like to make sure and check that in the Google Console for a specific user account. Is that possible?
I've heard of exponential backoff, which is suitable if you indeed make many calls. In my case, I shouldn't be making many calls, so I'd like to investigate and fix the root cause rather than just implement a backoff.
The console doesn't show the per-user quota usage because it is different for every user - it doesn't make sense to list every single user's quota usage.
Exponential back-off is recommended. Not only does it allow your usage to be throttled to the rate limit, but it is also the correct way to handle server-side errors.
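To make that concrete, here is a minimal sketch (not code from the question) that wraps any Gmail API call and retries on a 429 with exponentially growing delays plus jitter:

using System;
using System.Threading.Tasks;
using Google;

// Minimal exponential back-off wrapper; the delegate stands in for whatever
// Gmail API call (send/get/list) is actually being made.
static async Task<T> WithBackoffAsync<T>(Func<Task<T>> apiCall, int maxRetries = 5)
{
    var random = new Random();
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return await apiCall();
        }
        catch (GoogleApiException ex) when ((int)ex.HttpStatusCode == 429 && attempt < maxRetries)
        {
            // Wait 2^attempt seconds plus up to 1s of jitter before retrying.
            var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt))
                        + TimeSpan.FromMilliseconds(random.Next(0, 1000));
            await Task.Delay(delay);
        }
    }
}

The Google .NET client also ships its own back-off support (for example Google.Apis.Http.BackOffHandler with ExponentialBackOff), which may be preferable to hand-rolling the loop.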
I know this has been asked a few times but I'm trying to track down what my exact issue could be.
I've got a C# app which queues up messages to be sent (using Azure Storage Queues), and these are processed by an Azure WebJob. We're using the twilio-csharp NuGet package to send the messages.
The code to send a message is pretty simple:
MessageResource.Create(
    body: message.Message,
    from: new Twilio.Types.PhoneNumber(TwilioFromNumber),
    to: new Twilio.Types.PhoneNumber(message.SendToPhoneNumber));
By default, the Webjob will process up to 16 messages at a time but to combat this issue we've set:
context.BatchSize = 2;
context.NewBatchThreshold = 0;
So, at any given point, we're not making more than 2 requests at a time.
Even with this low threshold, we still see these errors in the log periodically:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: TextMessageFunctions.SendTextMessage ---> Twilio.Exceptions.ApiException: Too Many Requests
at Twilio.Clients.TwilioRestClient.ProcessResponse(Response response)
Some other thoughts:
The answer on this question, from a Twilio Developer Evangelist, suggests the REST API's concurrency limit is 100 by default. Is this still true, or is there a way for me to check this on my account? There's no way we're close to 100. We never queue up more than 20-30 messages at a time, and that is on the extreme end of things.
We're using a Toll-Free US number to send from. According to Twilio, we should be able to queue up 43,200 messages on their end.
That same article says:
Notice: You can send messages to Twilio at a rapid rate, as long as the requests do not max out Twilio's REST API concurrency limit.
This makes me think I'm doing something wrong, because surely "a rapid rate" could be more than 2 requests at a time (and I still wonder about the rate of 100 mentioned above). Can we truly not call the Twilio API with 2 concurrent requests without getting this error?
Twilio developer evangelist here.
There has been a bit of a change in the concurrency limits recently that has affected you here. New accounts are now receiving a much lower concurrency allowance for POST requests, as low as 1 concurrent request. This was to combat a recent rise in fraudulent activity.
I am sure your activity isn't fraudulent, so here's what you should do:
For now, reduce your batch size to 1 so that you only make 1 request at a time to the Twilio API.
Add code to catch errors and, if they are 429 responses, re-queue the job to happen later (with exponential back-off if possible); see the sketch after this answer.
Get in touch with Twilio Sales to talk to them about your use case and request an increased concurrency limit
I am sure this limit is not going to be the long-term solution to the issues we were facing, and I am sorry that you are experiencing problems with this.
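To make the second suggestion above concrete, here is a rough sketch of catching the 429 and re-queueing with a growing delay. It is an illustration only: TextMessage, RetryCount, the CloudQueue instance and the JSON serialization are assumptions rather than the asker's actual types, and it assumes the older WindowsAzure.Storage queue client.

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;
using Twilio.Exceptions;
using Twilio.Rest.Api.V2010.Account;

// Sketch only: sends one queued text and, on a 429 from Twilio, puts the message
// back on the storage queue with an exponentially growing visibility delay.
public static async Task SendWithRequeueAsync(
    TextMessage message, CloudQueue queue, string twilioFromNumber)
{
    try
    {
        MessageResource.Create(
            body: message.Message,
            from: new Twilio.Types.PhoneNumber(twilioFromNumber),
            to: new Twilio.Types.PhoneNumber(message.SendToPhoneNumber));
    }
    catch (ApiException ex) when (ex.Status == 429)
    {
        message.RetryCount++;

        // Hide the re-queued message for 2^RetryCount seconds so the WebJob
        // retries later instead of hammering the API.
        var delay = TimeSpan.FromSeconds(Math.Pow(2, message.RetryCount));
        await queue.AddMessageAsync(
            new CloudQueueMessage(JsonConvert.SerializeObject(message)),
            timeToLive: null,
            initialVisibilityDelay: delay,
            options: null,
            operationContext: null);
    }
}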
I created a program that uploads videos to YouTube using the YouTube Data API.
My program's flow is:
Log in to the Google account on the Google page using the URL
https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent&include_granted_scopes=true&client_id={YouTubeAppId}&redirect_uri={RedirectURL}&response_type=code&scope={Uri.EscapeDataString(https://www.googleapis.com/auth/youtube.readonly https://www.googleapis.com/auth/youtube.upload https://www.googleapis.com/auth/youtube.force-ssl https://www.googleapis.com/auth/userinfo.profile)}
Exchange the code for an access token: POST https://oauth2.googleapis.com/token
Get the user's channel IDs: GET https://youtube.googleapis.com/youtube/v3/channels?part=snippet,statistics&mine=true&access_token={AccessToken}.
Then I select the video and split it into 1 MB chunks.
Get upload URL: POST https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status,contentDetails&access_token={token} with model
Send the video chunks to the URL from the Location header returned in the previous step (a sketch of one such chunk request is included after the token-refresh code below).
Also I've got a 10-minute timer which refreshes the access token.
foreach (SocialLogin item in logins.Where(x =>
    (x.UpdatedOn.AddSeconds(x.ExpirationSeconds) - DateTime.Now).TotalSeconds < 600))
{
    SocialLogin youModel = await _youtubeService.RefreshTokenAsync(item);
    youModel.UpdatedOn = DateTimeOffset.Now;
    youModel.ExpirationSeconds = 3600;
    await _loginRepository.UpdateAsync(youModel);
}
This loop gets all YouTube logins from the database and refreshes tokens for those that expire in less than 10 minutes, using a POST to https://oauth2.googleapis.com/token.
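For the chunk-sending step mentioned above, here is a rough sketch of what one request to the resumable-upload URL can look like. It assumes HttpClient; uploadUrl, chunk, offset and totalSize are placeholders for the asker's own variables.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Sends one chunk of the file to the resumable-session URL (the Location header
// value from the earlier POST). The caller checks the response: 308 means
// "send the next chunk", 200/201 means the upload is finished.
static async Task<HttpResponseMessage> SendChunkAsync(
    HttpClient httpClient, string uploadUrl, byte[] chunk, long offset, long totalSize)
{
    var request = new HttpRequestMessage(HttpMethod.Put, uploadUrl)
    {
        Content = new ByteArrayContent(chunk)
    };
    // Content-Range tells YouTube which byte range of the whole file this chunk
    // covers, e.g. "bytes 0-1048575/31457280".
    request.Content.Headers.ContentRange =
        new ContentRangeHeaderValue(offset, offset + chunk.Length - 1, totalSize);

    return await httpClient.SendAsync(request);
}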
I uploaded a couple of 15 MB videos and it worked well, but afterwards the Queries per day quota showed about 2,500 queries. I added breakpoints to all methods which make API calls and counted them: there were only about 50 requests in total. A couple of hours later I sent another two 30 MB videos and the quota showed about 8,000 queries. How is this quota calculated, and what does it actually count?
Now it's 0, but yesterday it was 8,120.
First, you'll have to acknowledge that the YouTube Data API does not account for the number of megabytes of video content that you're uploading via its Videos.insert endpoint.
Then you'll have to acknowledge that the API is not accounting for the raw number of endpoint calls either.
Each endpoint has a quota cost attached, and the YouTube Data API accounts for the sum of the quota costs of all the endpoint calls you make on a particular day.
For example, with a daily quota allocation of 10,000 units and a cost of 1,600 units per video upload, you're allowed to upload at most 6 videos per day, irrespective of their actual size (ignoring the cost of any other API calls you make).
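As a back-of-the-envelope illustration of that accounting (the 10,000-unit allocation and the 1,600-unit upload cost are the figures above; the arithmetic is all there is to it):

using System;

// Quota is counted in cost units per call, not in requests or megabytes.
const int DailyQuota = 10_000;   // default daily allocation (units)
const int UploadCost = 1_600;    // videos.insert cost (units)

int fourUploads = 4 * UploadCost;                 // 6,400 units for four uploads
int maxUploadsPerDay = DailyQuota / UploadCost;   // 6, regardless of file size
Console.WriteLine($"{fourUploads} units used, max {maxUploadsPerDay} uploads/day");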
We are using APIM for all our API requests and enabled Application Insights to make sure we get all information like country, request body, IP address, HTTP status code, etc.
We are using the AppInsights API to get the APIM data, as in the UI there is a limit of 10K rows per query.
https://api.applicationinsights.io/v1/apps/
It was working fine while we had a limited number of calls on APIM, around 7K-10K per day.
Now we are getting around 40K-80K requests per day.
Now when I write a Kusto query in the AppInsights UI, it gives me counts of 38,648, 29,493 and 26,847 for 3 days:
requests
| where url contains 'abc'
| where timestamp >= startofday(datetime('30-Apr-20')) and timestamp <= endofday(datetime('02-May-20'))
| summarize count(), avg(duration) by bin(timestamp, 1d)
But when I run an API query request, it gives me around 54K records, whereas I should get around 94K.
When I run it for date ranges with even more requests (150K+), it still gives around 54K records.
I checked the limits on the number of queries: they mention 200 queries per 30 seconds and 86,400 per day. Nothing is mentioned about data size.
It seems there is a limitation on data size in the AppInsights API:
When I download for 30-Apr to 01-May, file download size is around 74K
When I download for 30-Apr to 02-May, still file download size is around 74K
I have used the AppInsights API in a C# console application, using the WebClient DownloadString/DownloadFile methods to get this data.
The query is as follows:
https://api.applicationinsights.io/v1/apps/<code/query?query=requests|where url contains 'abc'|where timestamp >= startofday(datetime('30-Apr-20'))and timestamp <= endofday(datetime('02-May-20'))
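For reference, a minimal sketch of the same call with HttpClient (appId and apiKey are placeholders; the endpoint shape and the x-api-key header follow the documented Application Insights REST API conventions):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch of an Application Insights query API call; the KQL is
// URL-encoded and sent as the "query" parameter.
static async Task<string> RunQueryAsync(string appId, string apiKey, string kql)
{
    using var client = new HttpClient();
    client.DefaultRequestHeaders.Add("x-api-key", apiKey);

    string url = $"https://api.applicationinsights.io/v1/apps/{appId}/query" +
                 $"?query={Uri.EscapeDataString(kql)}";

    HttpResponseMessage response = await client.GetAsync(url);
    response.EnsureSuccessStatusCode();   // throttling (429) and errors surface here
    return await response.Content.ReadAsStringAsync();
}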
You have to set the Sampling value to 100. From How to integrate Azure API Management with Azure Application Insights:
Sampling (%): decimal, values from 0 to 100 (percent). Specifies what percentage of requests will be logged to Azure Application Insights. 0% sampling means zero requests logged, while 100% sampling means all requests logged.
This setting is used for reducing performance implications of logging requests to Azure Application Insights (see the section below).
We are using the Partner WSDL in our C# integration with Salesforce and we are receiving the following error when trying to update more than 200 records:
Error updating Contact: EXCEEDED_ID_LIMIT: record limit reached. cannot submit more than 200 records into this call
How do we go about increasing this number? Is it possible or are we stuck with 200 records?
Thanks ahead of time for your response.
You can only update 200 records in a single request; you need to chunk your updates into sets of 200 and make multiple calls.
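A rough sketch of that chunking, assuming the generated partner-WSDL proxy class SforceService and an already-prepared sObject[] (the names and the lowercase property casing follow the generated proxy; adjust to your own code):

using System;
using System.Linq;

// The partner API caps update() at 200 records per call, so send the records
// in batches and check each batch's results (a batch can partially succeed).
public static void UpdateInBatches(SforceService binding, sObject[] allRecords)
{
    const int batchSize = 200;

    for (int offset = 0; offset < allRecords.Length; offset += batchSize)
    {
        sObject[] batch = allRecords.Skip(offset).Take(batchSize).ToArray();
        SaveResult[] results = binding.update(batch);

        foreach (SaveResult failure in results.Where(r => !r.success))
        {
            Console.WriteLine($"Update failed for {failure.id}: {failure.errors?[0]?.message}");
        }
    }
}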
The web service administrators have probably limited each call to 200 records as a safeguard. This means less load on their servers and quicker response to the client.
You probably cannot change this limit unless you contact the web service administrator directly.
For now you should keep the limit in mind and make multiple requests of 200 records each instead of a single request.
Note: Web services that limit the number of records returned per request will sometimes return an ID number. This usually allows the client to continue picking up records where they left off. Keep an eye out for this.
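In Salesforce's partner API that continuation mechanism is the query locator: query() returns a batch plus a queryLocator, and queryMore() continues from where the last batch ended. A rough sketch (binding and the SOQL are illustrative):

// Read-side continuation with the partner API's query locator.
public static void ReadAllContacts(SforceService binding)
{
    QueryResult result = binding.query("SELECT Id, Email FROM Contact");
    while (true)
    {
        foreach (sObject record in result.records)
        {
            // process the record...
        }

        if (result.done)
        {
            break;
        }
        result = binding.queryMore(result.queryLocator);
    }
}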
We have a web service using ServiceStack (v3.9.60) that currently gets an average (per New Relic monitoring) of 600 requests per minute (load balanced across two Windows 2008 web servers).
The actual time spent in the coded request Service (including the Request Filter) averages about 5ms (from what we see in the recorded log4net logs). It offloads the request to an ActiveMQ endpoint and automatically has ServiceStack generate a 204 (Return204NoContentForEmptyResponse enabled with "public void Post(request)").
On top of that we have:
PreRequestFilters.Insert(0, (httpReq, httpRes) =>
{
    httpReq.UseBufferedStream = true;
});
since we use the raw body to validate a salted hash value (passed as a custom header) during a Request Filter, to verify that the request comes from a correct source.
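For context, the hash check itself can look something like the sketch below. This is an illustration only: HMAC-SHA256, Base64 encoding and a signature-style header are assumptions, since the question doesn't describe the actual scheme; obtaining rawBody is what the buffered stream above enables.

using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative only: checks that the signature header matches an HMAC-SHA256
// of the raw request body computed with a shared secret.
static bool IsValidSignature(string rawBody, string signatureHeader, string sharedSecret)
{
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
    {
        byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(rawBody));
        string expected = Convert.ToBase64String(computed);

        // A constant-time comparison is preferable in production code.
        return string.Equals(expected, signatureHeader, StringComparison.Ordinal);
    }
}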
Overall we see in New Relic that the whole web service call averages around 700ms, which is a lot compared to the 5ms it actually takes to perform the coded process. When we looked deeper into the data New Relic reports, we saw that some requests periodically take quite some time (10-150 seconds per request). Drilling down in New Relic's reporting, we see that applying the Pre-Request Filter is what takes the time (see image below). We were wondering why this could be the case, whether it is related to the buffered stream on the HTTP Request object, and what could be done to correct it.
EDIT
Have been playing around with this some and still haven't found an answer.
Things done:
Moved the Virtual Folder out from a sub-folder location of the actual site folder (there are about 11 other Web Services located under this site)
Assigned this Web Service to use its own Application Pool so it is not shared with the main site and other Web Services under the site
Added the requirement to Web.Config for usage of Server GC as Phil suggested
Disabled the pre-request filter that turned on the usage of buffered stream (and bypass the code that used the RawBody)
Added more instrumentation to New Relic for a better drill-down (see image below)
I'm starting to wonder if this is a Windows Server/IIS limitation due to load, but I would like to hear from someone who is more familiar with this.