How to measure Azure CosmosDB Request Units per web request? - c#

In an ASP.NET Core Web Application I want to measure how many request units have been spent in a single web request. With this information I could identify "expensive" calls and look into optimizing them.
I know how to obtain the request unit count from a single CosmosDB REST call. But this is a layered application where the persistence layer that interacts with the CosmosDB does not have access to the request object in ASP.NET Core.
Is there a way to somehow obtain some kind of request ID? I wonder how Application Insights keeps track of which internal dependency calls belong to a specific web request.
Or is there a way to get this information in Application Insights?

This depends on multiple things including which SDK you are using.
If you are using the default Cosmos DB SDK, also known as the v2 SDK, then (assuming you enabled Application Insights support) Cosmos DB will only log its dependency calls if you are using the HTTP/HTTPS connection mode. TCP mode won't be captured by Application Insights. This means you would either have to use HTTPS, which is bad in terms of performance, or code something custom.
If you are using Cosmonaut then it comes out of the box with a general-purpose EventSource which tracks each call as a dependency no matter the connection type, and it also collects multiple metrics such as the RUs and a lot more. You would need to reference the Cosmonaut.ApplicationInsights NuGet package and initialise the AppInsightsTelemetryModule like this:
AppInsightsTelemetryModule.Instance.Initialize(TelemetryConfiguration.Active);
or use the IoC alternative of:
services.AddSingleton(AppInsightsTelemetryModule.Instance);
This will give you logging for every action with detailed metrics, including the request charge.
You can then use a query like this to see spikes and further investigate, or just query for requests with Cosmos dependencies which exceed a threshold.
dependencies
| where type contains "Cosmos" and customDimensions.RequestCharge != ""
| summarize sum(toint(customDimensions.RequestCharge)) by bin(timestamp, 1m)
PS: You don't have to use the CosmosStore if you don't need it. Using the CosmonautClient instead of the DocumentClient will do the logging job as well.

This is available in the CosmosDB REST API response headers. You will need to create a correlation between your web call and the CosmosDB operations and then aggregate.
From the docs:
x-ms-request-charge: This is the number of normalized requests, a.k.a. request units (RU), for the operation. For more information, see Request units in Azure Cosmos DB.
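One way to create that correlation in ASP.NET Core, sketched below under a few assumptions: the RequestChargeAccumulator and RequestChargeMiddleware names are made up for illustration, and the persistence layer uses the v2 SDK, whose ResourceResponse/FeedResponse objects expose a RequestCharge property. An AsyncLocal carries the accumulator down the call tree, so the persistence layer never needs the ASP.NET Core request object:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

// Illustrative ambient per-request RU counter. A mutable box is stored in
// the AsyncLocal so increments made deep in the call tree stay visible to
// the middleware that created the box. (Not safe for Cosmos calls running
// in parallel within one request; add locking if you need that.)
public static class RequestChargeAccumulator
{
    private sealed class Box { public double Total; }
    private static readonly AsyncLocal<Box> Current = new AsyncLocal<Box>();

    public static void Reset() => Current.Value = new Box();

    // Call after every Cosmos DB operation with response.RequestCharge.
    public static void Add(double requestCharge)
    {
        var box = Current.Value;
        if (box != null) box.Total += requestCharge;
    }

    public static double? Total => Current.Value?.Total;
}

public class RequestChargeMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestChargeMiddleware> _logger;

    public RequestChargeMiddleware(RequestDelegate next, ILogger<RequestChargeMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        RequestChargeAccumulator.Reset();
        await _next(context);

        // Log (or emit as a custom metric) the total RUs for this web request.
        _logger.LogInformation("{Path} consumed {RequestUnits} RUs",
            context.Request.Path, RequestChargeAccumulator.Total);
    }
}

In the persistence layer you would then add one line after each call, e.g. RequestChargeAccumulator.Add(response.RequestCharge); after a ReadDocumentAsync or CreateDocumentAsync.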

Related

Storing user data in session store or retrieving from database in each request in asp.net core?

I am migrating my old ASP.NET Web Forms project to an ASP.NET Core Web API with an Angular frontend. My older application stores a user information instance and its values (like assigned groups, permissions, and other user information) in session. I am going to use JWT, but I can't store all of that information in the token, so should I continue using session in my ASP.NET Core application, or retrieve this information from the database on each request?
Are there any other best practices for this in modern application development?
Multiple options for this, depending on what you need:
Angular cache. If the data is not sensitive, you can use RxJS observables to cache some data on the application side. Nothing wrong with some data stored in the browser. Since you are coming from a full-postback application, SPA caching is most times the equivalent of the old Session object.
Server-side cache. Depending on the implementation you might need some caching on the server side too. Since, as mentioned, you'll have multiple servers, I'd suggest caching only lookups and the like, not user-related data. If you implement stickiness between servers and sessions (not recommended), this is still an option.
Distributed cache. You might have heard of Redis and such? This is also an option: store the cached data in a separate service, accessible by all server instances.
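For the distributed cache option, a minimal sketch in ASP.NET Core, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package, a Redis instance at localhost:6379, and an illustrative PermissionService:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// In Startup.ConfigureServices:
// services.AddStackExchangeRedisCache(options =>
// {
//     options.Configuration = "localhost:6379"; // assumption: local Redis
//     options.InstanceName = "myapp:";
// });

public class PermissionService
{
    private readonly IDistributedCache _cache;

    public PermissionService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetPermissionsAsync(string userId)
    {
        var key = "permissions:" + userId;

        // Serve from Redis when possible...
        var cached = await _cache.GetStringAsync(key);
        if (cached != null) return cached;

        // ...otherwise hit the database once and cache the result briefly.
        var permissions = await LoadFromDatabaseAsync(userId);
        await _cache.SetStringAsync(key, permissions, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
        });
        return permissions;
    }

    // Placeholder for your real data access.
    private Task<string> LoadFromDatabaseAsync(string userId) =>
        Task.FromResult("group1,group2");
}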
It all comes down to complexity vs speed. If the queries are simple enough and lightning fast, then it might be useless to store them in any cache anyway.

What is the best way to provide Asp.Net Core health information to end users?

Context
I'm looking for a way to tell my users that one of the dependencies of my Asp.Net Core site is down and that therefore the site is currently not available. Preferably using an error page.
Asp.Net Core provides the Health Checks functionality to manage the logic of dependency health checking, but it only provides an endpoint meant for load-balancers.
There is also a kind of dashboard functionality available for the health checks, but that is not meant as an error page for end users; it is aimed at administrators.
Why am I looking into this functionality?
I am using Azure Front Door. This product works as a load balancer. It can look at the health status endpoint provided by Asp.Net Core health checks and will take unhealthy backend nodes out of rotation.
However, it does not offer custom error pages and in the case that all backend nodes are down, it will assume that all nodes are healthy. One of the dependencies of my site is an external service that, if it is down, will be down for all instances of my site. It contains e.g. the user accounts that are needed for users to interact with my site. Therefore, I believe I need to implement an error page in the Asp.Net Core site that will show an error page when that external dependency is down.
Suggested solution
One of my ideas would be to have middleware that, when the site is degraded, always returns 503 Service Unavailable or throws an exception. The ASP.NET Core status code pages functionality could then turn that into an appropriate error page.
Question 1
Is this the best architecture? How have other people done this?
Question 2
What is the most practical way to access the current health status?
Technically it is possible to call the HealthCheckService.CheckHealthAsync(...) method directly, but awaiting that method takes some time (especially if one of the dependency services does not respond). Therefore it is not a good idea to make that a blocking call in the request pipeline.
I could use a Health Check Publisher to cache the health status by publishing it to some custom HealthStatusCache service, but it feels a bit like a workaround. Is this how other people would do it?
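Here is roughly what I have in mind, as a sketch only; HealthStatusCache and HealthGateMiddleware are names I made up, while IHealthCheckPublisher and HealthReport are the standard ASP.NET Core health checks types:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Caches the last published health report so the request pipeline never
// has to await a live health check.
public class HealthStatusCache : IHealthCheckPublisher
{
    public HealthStatus Current { get; private set; } = HealthStatus.Healthy;

    public Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
    {
        Current = report.Status;
        return Task.CompletedTask;
    }
}

// Short-circuits requests with 503 while the cached status is Unhealthy;
// the status code pages middleware can then render a friendly error page.
public class HealthGateMiddleware
{
    private readonly RequestDelegate _next;
    private readonly HealthStatusCache _cache;

    public HealthGateMiddleware(RequestDelegate next, HealthStatusCache cache)
    {
        _next = next;
        _cache = cache;
    }

    public async Task Invoke(HttpContext context)
    {
        if (_cache.Current == HealthStatus.Unhealthy)
        {
            context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
            return;
        }
        await _next(context);
    }
}

The cache would be registered once and also exposed as the publisher, e.g. services.AddSingleton<HealthStatusCache>(); services.AddSingleton<IHealthCheckPublisher>(sp => sp.GetRequiredService<HealthStatusCache>()); so the background publisher keeps the cached status fresh and the gate itself never blocks.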

Prevent a public service from being overused

I have a Web API 2 service that I want to host on Azure App Service. The service should be called by JavaScript applications, so as far as I know it has to be open to the public (right?).
However, if I leave it totally open it is vulnerable to DoS. What is the best way to prevent that?
The first thing that came to my mind was to implement a custom IP filter that keeps track of the requests from the last x minutes and only lets those with fewer than y occurrences pass.
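Roughly like this sketch, assuming IIS/Azure App Service hosting and ASP.NET Web API 2; the ThrottlingHandler name and the limit and window values are all placeholders:

using System;
using System.Net;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading;
using System.Threading.Tasks;

public class ThrottlingHandler : DelegatingHandler
{
    private const int Limit = 100;                                      // y occurrences...
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(5);  // ...per x minutes
    private static readonly MemoryCache Cache = MemoryCache.Default;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var key = "throttle:" + GetClientIp(request);

        // AddOrGetExisting returns null when the entry was just created.
        var fresh = new Counter();
        var counter = (Counter)Cache.AddOrGetExisting(
            key, fresh, DateTimeOffset.UtcNow.Add(Window)) ?? fresh;

        if (Interlocked.Increment(ref counter.Count) > Limit)
            return new HttpResponseMessage((HttpStatusCode)429)
            {
                Content = new StringContent("Too many requests"),
                RequestMessage = request
            };

        return await base.SendAsync(request, cancellationToken);
    }

    private class Counter { public int Count; }

    private static string GetClientIp(HttpRequestMessage request)
    {
        // Works when hosted on IIS / Azure App Service; self-host exposes
        // the client address through a different request property.
        object ctx;
        return request.Properties.TryGetValue("MS_HttpContext", out ctx)
            ? ((System.Web.HttpContextBase)ctx).Request.UserHostAddress
            : "unknown";
    }
}

It would be registered in WebApiConfig with config.MessageHandlers.Add(new ThrottlingHandler()); a real implementation would also need to consider proxies (X-Forwarded-For) and the fact that an in-memory cache is per server instance.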
Is there any other way? Is there any specific way to do it on the azure without writing code?
This is not a broad question! I think it is clear what I am asking!
I have a service on Azure and I want to protect it from overuse. How broad is that?!?!
If it's a public API (i.e. something a mobile app would talk to), it has to be... well, public, of course. :)
If your users have to sign up before consuming your API (or if this is an option), you could use API keys. That does not prevent DoS, and is not a form of authentication if given to clients, but at least you can quickly revoke offending keys to somewhat mitigate DoS.
Other than that, your primary concern with regard to DoS is application level DoS. You should try to avoid API calls that put a strain on your backend, limit response sizes (which probably implies paging on a client), etc. With these things done in your API, let your provider deal with the network level stuff.
By default, Azure services are protected against DDoS and MITM attacks, and all communication is over HTTPS and encrypted.
As far as the application design goes, you need to take care of the following: SQL injection, session hijacking, cross-site scripting, application-level MITM, and application-level DDoS.
Further, you can run vulnerability checks on your App Service using the Tinfoil Security scanning tests.
https://azure.microsoft.com/en-us/blog/web-vulnerability-scanning-for-azure-app-service-powered-by-tinfoil-security/
Also, using the Azure API Management service, you can use the API gateway to control API calls and routing, enforce usage quotas, and do throttling based on the traffic to the API.
https://azure.microsoft.com/en-in/documentation/articles/api-management-howto-product-with-rules/

ASP.NET WebApi: How to check and create per-request context?

In a self-hosted ASP.NET Web API, how can I:
Detect from a class if there's an "ambient" Web API context. This is needed to avoid passing in metadata information on every service call. I'm looking for the equivalent of
System.Web.HttpContext.Current != null
How can I attach metadata associated with the current request? Again, some of this metadata is so prevalent that including it in every method call is way too painful. Think transactions, multi-tenant architecture, and credentials. I need a way to make this sort of information flow through with each request without cluttering the code.
In other words, I also need the equivalent of this, as explained here:
HttpContext.Current.Items["user" + X.ToString()]
I think I can still access them as long as the Web API is hosted on IIS, but I have this self-hosted, and I need a way to keep track of the ambient UoW information. How can I do so?
A few notes:
I have also contemplated using per-request DI and injecting a request context into the managers. There is, however, a ton of legacy code that wasn't set up for that (some of which is static), and I don't have the guts to blow up production by doing such a major refactor.
I have also used a thread-static variable. The problem with that is that threads get recycled, and the process hosts multiple services, some of which aren't even Web API, so sometimes my managers thought they were handling a Web API request when in fact they were serving a WCF one.
The HttpRequestMessage instance has a Properties dictionary that is intended for holding arbitrary per-request context.
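For example, in a sketch where the RequestContextHandler name and the property keys are illustrative:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Attaches ambient metadata once, at the edge of the pipeline, so
// downstream code can read it from the request instance instead of
// having it passed through every method signature.
public class RequestContextHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Properties["TenantId"] = ResolveTenant(request);  // illustrative keys
        request.Properties["UnitOfWork"] = new object();          // your UoW instance
        return base.SendAsync(request, cancellationToken);
    }

    private static string ResolveTenant(HttpRequestMessage request)
    {
        IEnumerable<string> values;
        return request.Headers.TryGetValues("X-Tenant", out values)
            ? string.Join(",", values)
            : "default";
    }
}

// In a controller: var tenant = (string)Request.Properties["TenantId"];
// Registration: config.MessageHandlers.Add(new RequestContextHandler());

Because Properties lives on the message rather than on System.Web.HttpContext, this works the same under self-hosting.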

New Relic ASP.NET Web API

I am trying to use New Relic's .NET Agent in my Web API, but all requests are being shown as System.Web.Http.WebHost.HttpControllerHandler, which is exactly what the docs' known issues section says:
MVC 4 (Note: New Relic provides limited support for the ASP .NET Web API for MVC4. All Web API transactions will appear as
HttpControllerHandler, not as the name of the web API controller.)
I am looking for any workaround that results in a more human-readable dashboard. Is there any configuration in my app or IIS that I could change to get a more meaningful metric in my dashboard? Or is there a way of using the agent API calls to change this behavior?
New Relic has released an update to the .NET Agent which should solve your problem.
See: https://newrelic.com/docs/releases/dotnet. "Improved WebAPI support. You should now see Web Transactions grouped by [controller].[action] rather than all WebAPI transactions reporting as System.Web.Http.WebHost.HttpControllerHandler."
You may get some better results by setting transaction names via the API. But until New Relic improves overall support for ASP.NET Web API, there isn't a way to arbitrarily stuff things into web transactions.
https://newrelic.com/docs/dotnet/the-net-agent-api
SetTransactionName()
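For example, a small Web API action filter could apply it per controller/action; the TransactionNameFilter name is illustrative, while NewRelic.Api.Agent.NewRelic.SetTransactionName(category, name) is the agent API call from the NewRelic.Api.Agent NuGet package:

using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Names each Web API transaction [controller]/[action] instead of the
// generic HttpControllerHandler. The agent ignores these calls when it
// is not attached, so they are safe in all environments.
public class TransactionNameFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var name = actionContext.ControllerContext.ControllerDescriptor.ControllerName
                   + "/" + actionContext.ActionDescriptor.ActionName;
        NewRelic.Api.Agent.NewRelic.SetTransactionName("WebApi", name);
    }
}

Registered globally via config.Filters.Add(new TransactionNameFilter());, transactions should then report under the controller/action name rather than HttpControllerHandler.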
Also, if you specify certain methods to trace, then when things are slow and a transaction trace gets generated, you'll see these custom method tracers appear in the trace details tree view.
https://newrelic.com/docs/dotnet/CustomInstrumentation
This is quite an old post, but I spent quite a bit of time looking into a similar issue, and in my case these delays only appeared during POSTs where the HTTP message had request content.
In the end, this was due to network performance problems (mobile clients): the POST was trying to read the body of the message, which was taking a long time to transmit. The takeaway is that these delays, while showing up in the controller handler, were actually just time spent waiting for the request body to be transmitted.
