I have to display the total billing amount for an Azure subscription. Currently I am using the Resource Usage API (Preview) and the Resource RateCard API (Preview), which give me very large, detailed data: the rate for each meter and its usage.
As per this article, I have to multiply the RateCard details with the Usage details.
In my case I only have to show the total billing amount, say 500 USD, so this is more than I require and involves a lot of manual work.
Is there any simple way to directly request the total amount?
Calling Usage API:
string requesturl = String.Format("https://management.azure.com/subscriptions/{0}/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedstartTime=2015-05-15+00%3a00%3a00Z&reportedEndTime=2015-05-16+00%3a00%3a00Z", subscriptionId);
Calling RateCard API:
string requesturl = String.Format("{0}/{1}/{2}/{3}",
"https://management.azure.com",
"subscriptions",
subscriptionId,
"providers/Microsoft.Commerce/RateCard?api-version=2015-06-01-preview&$filter=OfferDurableId eq 'MS-AZR-0121p' and Currency eq 'USD' and Locale eq 'en-US' and RegionInfo eq 'US'");
Saurabh - unfortunately, we don't have an API that gives you the outstanding bill / running total for the open billing period, but the combination of the Usage and RateCard APIs can get you close.
We are actively discussing this scenario, but you might also want to reach out to two partners who have integrated with the two Azure Billing APIs and might be interested in building out a "get outstanding bill" API: Cloudyn and Cloud Cruiser.
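For anyone looking for the multiplication itself, here is a minimal sketch of the join the article describes: match each usage aggregate to its RateCard meter by MeterId and sum quantity times rate. The DTO names are illustrative (the real responses are JSON you must deserialize), and it assumes a flat per-unit rate, ignoring IncludedQuantity and tiered MeterRates.
// Minimal sketch: join RateCard meters with usage aggregates and sum.
// DTO names are illustrative; assumes a flat per-unit rate and ignores
// IncludedQuantity and tiered MeterRates present in the real payload.
using System.Collections.Generic;
using System.Linq;

public class UsageAggregate
{
    public string MeterId { get; set; }
    public decimal Quantity { get; set; }
}

public class Meter
{
    public string MeterId { get; set; }
    public decimal RatePerUnit { get; set; } // first tier of MeterRates
}

public static class BillingEstimator
{
    public static decimal EstimateTotal(IEnumerable<UsageAggregate> usage, IEnumerable<Meter> meters)
    {
        // Look up each usage record's meter and multiply quantity by its rate.
        var rates = meters.ToDictionary(m => m.MeterId, m => m.RatePerUnit);
        return usage.Sum(u => rates.TryGetValue(u.MeterId, out var rate) ? u.Quantity * rate : 0m);
    }
}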
Related
I know this has been asked a few times but I'm trying to track down what my exact issue could be.
I've got a C# app which queues up messages to be sent (using Azure Storage Queues); these are processed by an Azure WebJob. We're using the twilio-csharp NuGet package to send the messages.
The code to send a message is pretty simple:
MessageResource.Create(
    body: message.Message,
    from: new Twilio.Types.PhoneNumber(TwilioFromNumber),
    to: new Twilio.Types.PhoneNumber(message.SendToPhoneNumber));
By default, the WebJob will process up to 16 messages at a time, but to combat this issue we've set:
context.BatchSize = 2;
context.NewBatchThreshold = 0;
So, at any given point, we're not making more than 2 requests at a time.
Even with this low threshold, we still see these errors in the log periodically:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: TextMessageFunctions.SendTextMessage ---> Twilio.Exceptions.ApiException: Too Many Requests
at Twilio.Clients.TwilioRestClient.ProcessResponse(Response response)
Some other thoughts:
The answer on this question, from a Twilio Developer Evangelist, suggests the REST API's concurrency limit is 100 by default. Is this still true, or is there a way for me to check this on my account? There's no way we're close to 100. We never queue up more than 20-30 messages at a time, and that is on the extreme end of things.
We're using a Toll-Free US number to send from. According to Twilio, we should be able to queue up 43,200 messages on their end.
That same article says:
Notice: You can send messages to Twilio at a rapid rate, as long as the requests do not max out Twilio's REST API concurrency limit.
This makes me think I'm doing something wrong, because surely "a rapid rate" could be more than 2 requests at a time (and I still wonder about the limit of 100 mentioned above). Can we truly not call the Twilio API with 2 concurrent requests without getting this error?
Twilio developer evangelist here.
There has been a bit of a change in the concurrency limits recently that has affected you here. New accounts are now receiving a much lower concurrency allowance for POST requests, as low as 1 concurrent request. This was to combat a recent rise in fraudulent activity.
I am sure your activity isn't fraudulent, so here's what you should do:
For now, reduce your batch size to 1 so that you only make 1 request at a time to the Twilio API.
Add code to catch errors and, if they are 429 responses, re-queue the job to run later, with exponential back-off if possible (see the sketch at the end of this answer).
Get in touch with Twilio Sales to talk about your use case and request an increased concurrency limit.
I am sure this limit is not going to be the long-term solution to the issues we were facing, and I am sorry that you are experiencing problems because of it.
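For the second point, here is a rough sketch of what catching the 429 and re-queueing could look like. ApiException.Status is assumed to carry the HTTP status code, and QueuedText, QueueMessageAsync, and message.Attempts are hypothetical stand-ins for your own queue layer (Azure Storage Queues let you set an initial visibility timeout on a message):
// Sketch: send one message; on a 429, re-queue it with exponential back-off.
// QueuedText, QueueMessageAsync, and message.Attempts are hypothetical
// stand-ins for whatever your queue layer provides.
public async Task SendTextMessageAsync(QueuedText message)
{
    try
    {
        MessageResource.Create(
            body: message.Message,
            from: new Twilio.Types.PhoneNumber(TwilioFromNumber),
            to: new Twilio.Types.PhoneNumber(message.SendToPhoneNumber));
    }
    catch (Twilio.Exceptions.ApiException ex) when (ex.Status == 429)
    {
        // Delay doubles with each attempt: 2s, 4s, 8s, ... capped at 5 minutes.
        var delay = TimeSpan.FromSeconds(Math.Min(Math.Pow(2, message.Attempts + 1), 300));
        await QueueMessageAsync(message, visibilityDelay: delay);
    }
}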
I have been following the IBM API documentation and SDK documentation for .NET for IBM Watson Assistant.
I can see in the documentation that rate limiting is applied, so I am curious how to obtain:
X-RateLimit-Reset: the time the current timer expires (in UNIX epoch time)
X-RateLimit-Remaining: the number of requests that remain in the current time window
X-RateLimit-Limit: the total number of requests allowed within the time window
I have used the API function assistantService.ListLogs(workspaceId: workspaceId, filter: filter, cursor: Pagination.NextCursor), but ran into:
ServiceResponseException: The API query failed with status code TooManyRequests: Too Many Requests | x-global-transaction-id: | error: {"error":"Rate limit exceeded","code":429}
Some questions:
Is it possible to change these parameters? If yes, how?
After a 429 exception, how long do I normally need to wait before sending a new request?
In C#, using AssistantService and calling assistantService.ListLogs(...), how can I obtain the response headers?
Is it possible to change the X-RateLimit-Limit value?
I don't have experience with .NET, but some with Watson Assistant. The rate limit information is returned in the HTTP response headers. Your code sends an HTTP request to Watson Assistant and it sends back an HTTP response. In that response you have the headers and the payload. Check the headers.
The rate limit depends on your IBM Watson Assistant service plan, so you can change it by upgrading. The value of X-RateLimit-Reset is the time when the current measurement window expires, so you can check it to see when new requests are possible again.
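Since the SDK does not make the raw headers obvious, one option is to call the REST endpoint directly and inspect the response headers yourself. A minimal sketch with plain HttpClient follows; the instance URL, workspace ID, and version date are placeholders for your own service instance, and basic auth with the literal user "apikey" is the usual IBM Cloud convention:
// Sketch: call the Watson Assistant REST endpoint directly and read the
// rate-limit headers from the raw HTTP response. URL parts are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RateLimitProbe
{
    static async Task Main()
    {
        var apiKey = Environment.GetEnvironmentVariable("WATSON_APIKEY");
        var url = "https://api.us-south.assistant.watson.cloud.ibm.com/instances/<instance-id>" +
                  "/v1/workspaces/<workspace-id>/logs?version=2019-02-28";

        using var client = new HttpClient();
        // IBM Cloud services accept the IAM API key as basic auth with user "apikey".
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes($"apikey:{apiKey}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        using HttpResponseMessage response = await client.GetAsync(url);
        foreach (var name in new[] { "X-RateLimit-Limit", "X-RateLimit-Remaining", "X-RateLimit-Reset" })
        {
            if (response.Headers.TryGetValues(name, out var values))
                Console.WriteLine($"{name}: {string.Join(",", values)}");
        }
    }
}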
I created a program that uploads videos to YouTube using the YouTube Data API.
My program's flow is:
Log in to the Google account on the Google sign-in page using the URL
https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent&include_granted_scopes=true&client_id={YouTubeAppId}&redirect_uri={RedirectURL}&response_type=code&scope={Uri.EscapeDataString(https://www.googleapis.com/auth/youtube.readonly https://www.googleapis.com/auth/youtube.upload https://www.googleapis.com/auth/youtube.force-ssl https://www.googleapis.com/auth/userinfo.profile)}
Exchange the code for an access token: POST https://oauth2.googleapis.com/token
Get user's channel's IDs: GET https://youtube.googleapis.com/youtube/v3/channels?part=snippet,statistics&mine=true&access_token={AccessToken}.
Then I select the video and split it into 1 MB chunks.
Get the upload URL: POST https://www.googleapis.com/upload/youtube/v3/videos?uploadType=resumable&part=snippet,status,contentDetails&access_token={token} with the video metadata model in the request body.
Send the chunks of video using the URL from the Location header in the previous step (a sketch of this follows below).
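Here is a minimal sketch of step 5, to drop into an uploader class, assuming you already have the session URL from the Location header. In the resumable upload protocol a 308 status means YouTube expects more chunks, while 200/201 means the upload is complete:
// Sketch: PUT one chunk to the resumable-session URL with a Content-Range
// header. Returns the status code: 308 = send next chunk, 200/201 = done.
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<HttpStatusCode> UploadChunkAsync(
    HttpClient client, string sessionUrl, byte[] chunk, long offset, long totalSize)
{
    var request = new HttpRequestMessage(HttpMethod.Put, sessionUrl)
    {
        Content = new ByteArrayContent(chunk)
    };
    request.Content.Headers.ContentRange =
        new ContentRangeHeaderValue(offset, offset + chunk.Length - 1, totalSize);

    HttpResponseMessage response = await client.SendAsync(request);
    return response.StatusCode;
}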
Also, I have a 10-minute timer which refreshes access tokens:
// Refresh every stored login whose token expires in less than 10 minutes.
foreach (SocialLogin item in logins.Where(x =>
    (x.UpdatedOn.AddSeconds(x.ExpirationSeconds) - DateTime.Now).TotalSeconds < 600))
{
    SocialLogin youModel = await _youtubeService.RefreshTokenAsync(item);
    youModel.UpdatedOn = DateTimeOffset.Now;
    youModel.ExpirationSeconds = 3600;
    await _loginRepository.UpdateAsync(youModel);
}
This function gets all YouTube logins from the database and refreshes the tokens of those that expire in less than 10 minutes, using a POST to https://oauth2.googleapis.com/token.
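For context, the refresh itself is the standard OAuth 2.0 refresh-token grant, so RefreshTokenAsync presumably posts something like the sketch below. Note that the token endpoint belongs to Google's OAuth service, not the YouTube Data API, so these refreshes should not be consuming YouTube quota:
// Sketch: the standard OAuth 2.0 refresh-token grant against Google's token
// endpoint. Field names follow RFC 6749; the response JSON carries access_token.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> RefreshAccessTokenAsync(
    HttpClient client, string clientId, string clientSecret, string refreshToken)
{
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["client_id"] = clientId,
        ["client_secret"] = clientSecret,
        ["refresh_token"] = refreshToken,
        ["grant_type"] = "refresh_token",
    });

    HttpResponseMessage response = await client.PostAsync("https://oauth2.googleapis.com/token", form);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync(); // JSON with access_token / expires_in
}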
I uploaded a couple of 15 MB videos and it worked well, but afterwards the Queries per day quota showed about 2,500 queries. I added breakpoints to every method that makes an API call and counted them all: there were about 50 requests in total. A couple of hours later I sent another two 30 MB videos, and the quota showed about 8,000 queries. How is this quota calculated? And what does it count?
Now it's 0, but yesterday it was 8,120.
First, you'll have to acknowledge that the YouTube Data API does not account for the number of megabytes of video content that you're uploading via its Videos.insert endpoint.
Then you'll have to acknowledge that the API does not account for the number of endpoint calls either.
Since each endpoint has a quota cost attached, the YouTube Data API accounts for the sum of the quota costs of all the endpoint calls you make on a given day.
For example, with a daily quota allocation of 10,000 units, since the cost of one video upload is 1,600 units, you're allowed to upload at most 6 videos per day, irrespective of their actual size (not counting the cost of any other API calls you make).
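To make the accounting concrete, here is a rough tally for a day like the one described above; the call counts are illustrative, and the unit costs are the documented ones (videos.insert = 1,600 units, channels.list = 1 unit):
// Rough tally using the documented per-call costs of the YouTube Data API v3.
// The 1,600 units cover the whole videos.insert, regardless of file size or
// chunk count; OAuth token refreshes are not Data API calls and cost nothing.
const int UploadCost = 1600;      // videos.insert
const int ChannelsListCost = 1;   // channels.list

int uploads = 4;                  // illustrative counts
int channelLookups = 4;

int estimatedUnits = uploads * UploadCost + channelLookups * ChannelsListCost;
// estimatedUnits == 6404 for this day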
We are using APIM for all our API requests and have enabled Application Insights to make sure we get information like country, request body, IP address, HTTP status code, etc.
We use the AppInsights API to retrieve the APIM data, as in the UI there is a limit of 10K records per query.
https://api.applicationinsights.io/v1/apps/
It was working fine while we had a limited number of calls on APIM, around 7K-10K per day.
Now we are getting around 40K-80K records per day.
When I write a Kusto query in the AppInsights UI, it gives me counts of 38,648, 29,493, and 26,847 for the 3 days.
requests
| where url contains 'abc'
| where timestamp >= startofday(datetime('30-Apr-20')) and timestamp <= endofday(datetime('02-May-20'))
| summarize count(), avg(duration) by bin(timestamp, 1d)
But when I run the same query through the API, it returns around 54K records, whereas I should get around 94K.
When I run it for days with even more requests (150K+), it still returns around 54K records.
I checked the limits on the number of queries: they mention 200 queries per 30 seconds and 86,400 per day. Nothing is mentioned about data size.
It seems there is a limitation on the size of the data returned by the AppInsights API:
When I download 30-Apr to 01-May, the downloaded file is around 74K.
When I download 30-Apr to 02-May, the downloaded file is still around 74K.
I have used the AppInsights API in a C# console application, using the webClient.DownloadString/DownloadFile methods to get this data.
The query is as follows:
https://api.applicationinsights.io/v1/apps/<code/query?query=requests|where url contains 'abc'|where timestamp >= startofday(datetime('30-Apr-20'))and timestamp <= endofday(datetime('02-May-20'))
You have to set the sampling value to '100'.
How to integrate Azure API Management with Azure Application Insights
Sampling (%): decimal. Values from 0 to 100 (percent).
Specifies what percentage of requests will be logged to Azure Application Insights. 0% sampling means zero requests logged, while 100% sampling means all requests logged.
This setting is used for reducing performance implications of logging requests to Azure Application Insights (see the section below).
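Separately from the sampling fix, for reference, the raw API call described in the question could look like the sketch below with HttpClient instead of WebClient; <app-id> and the API key come from the portal's API Access blade, and the query string is illustrative:
// Sketch: query the Application Insights REST API with HttpClient.
// <app-id> and the API key are placeholders from the "API Access" blade.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AppInsightsQuery
{
    static async Task Main()
    {
        var appId = "<app-id>";
        var apiKey = Environment.GetEnvironmentVariable("APPINSIGHTS_API_KEY");
        var kusto = Uri.EscapeDataString(
            "requests | where url contains 'abc' | summarize count() by bin(timestamp, 1d)");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", apiKey);

        var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={kusto}";
        string json = await client.GetStringAsync(url); // JSON table of results
        Console.WriteLine(json);
    }
}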
I'm using Gmail API .NET client to send/get emails.
Recently I started getting exceptions with this message for some Gmail accounts, for both sending and getting emails:
Google.Apis.Requests.RequestError
User-rate limit exceeded. Retry after 2018-09-25T13:31:30.444Z [429]
Errors [
Message[User-rate limit exceeded. Retry after 2018-09-25T13:31:30.444Z] Location[ - ] Reason[rateLimitExceeded] Domain[usageLimits]
]
I'd like to know if it's possible to check the per-user quota usage of my project for a specific account. In the console I found this:
In the Queries per 100 seconds per user row there are no numbers, and the hint from the question-mark icon just says: Per user quota usage is not displayed.
From the Gmail API docs we can find:
Per User-rate limit: 250 quota units per user per second, moving average (allows short bursts)
messages.send method consumes 100 quota units
messages.get method consumes 5 quota units
messages.list method consumes 5 quota units
messages.attachments.get method consumes 5 quota units
I don't think I'm reaching 250 quota units per second for any user, but I'd like to make sure and check that in the Google Console for a specific user account. Is that possible?
I've heard of exponential backoff, which is suitable if you really do make many calls. In my case I shouldn't be making many calls, so I'd like to investigate and fix the problem rather than just implement a backoff.
The console doesn't show the per-user quota usage because it is different for every user; it wouldn't make sense to list every single user's quota usage.
Exponential back-off is recommended. Not only does it throttle your usage down to the rate limit, it is also the correct way to handle server-side errors.
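Even if you never expect to hit the limit, a small retry wrapper is cheap insurance. A sketch, assuming the .NET client's Google.GoogleApiException exposes the HTTP status code and that the wrapped call is safe to retry:
// Sketch: retry an API call with exponential back-off on 429 responses.
// maxAttempts and the base delay are arbitrary; jitter avoids retry storms.
using System;
using System.Threading.Tasks;

static async Task<T> WithBackoffAsync<T>(Func<Task<T>> call, int maxAttempts = 5)
{
    var random = new Random();
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return await call();
        }
        catch (Google.GoogleApiException ex)
            when ((int)ex.HttpStatusCode == 429 && attempt < maxAttempts - 1)
        {
            // 1s, 2s, 4s, ... plus up to 1s of random jitter.
            var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt)) +
                        TimeSpan.FromMilliseconds(random.Next(1000));
            await Task.Delay(delay);
        }
    }
}
Usage would look something like: var sent = await WithBackoffAsync(() => service.Users.Messages.Send(request, "me").ExecuteAsync());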