I am trying to use New Relic's .NET Agent in my Web API, but all requests are being shown as System.Web.Http.WebHost.HttpControllerHandler, which is exactly what the known issues section of the docs says:
MVC 4 (Note: New Relic provides limited support for the ASP.NET Web API for MVC 4. All Web API transactions will appear as HttpControllerHandler, not as the name of the Web API controller.)
I am looking for any workaround that results in a more human-readable dashboard. Is there any configuration in my app or IIS that I could change to get a more meaningful metric in my dashboard? Or is there a way to change this behavior by implementing the API calls differently?
New Relic has released an update to the .NET Agent which should solve your problem.
See: https://newrelic.com/docs/releases/dotnet. "Improved WebAPI support. You should now see Web Transactions grouped by [controller].[action] rather than all WebAPI transactions reporting as System.Web.Http.WebHost.HttpControllerHandler."
You may get better results by setting transaction names via the API. But until New Relic improves overall support for ASP.NET Web API, there isn't a way to arbitrarily stuff things into web transactions.
https://newrelic.com/docs/dotnet/the-net-agent-api
SetTransactionName()
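For example, a minimal sketch of what that could look like in a Web API controller (the category and name strings here are my own choices):

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class ProductsController : ApiController
{
    public HttpResponseMessage Get(int id)
    {
        // Rename the transaction so it groups as WebApi/Products/Get
        // instead of HttpControllerHandler. Requires a reference to
        // the NewRelic.Api.Agent assembly shipped with the agent.
        NewRelic.Api.Agent.NewRelic.SetTransactionName("WebApi", "Products/Get");

        return Request.CreateResponse(HttpStatusCode.OK);
    }
}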
Also, if you specify certain methods to trace, then when things are slow and a transaction trace is generated, you'll see those custom method tracers appear in the trace details tree view.
https://newrelic.com/docs/dotnet/CustomInstrumentation
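As a sketch of the attribute-based flavor of custom instrumentation (assuming an agent version that supports instrumentation via attributes; the class and method names are invented):

using NewRelic.Api.Agent;

public class OrderService
{
    // With this in place, the method shows up as its own segment
    // in the transaction trace details tree.
    [Trace]
    public void ProcessOrder(int orderId)
    {
        // slow work worth seeing in the trace tree goes here
    }
}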
This is quite an old post, but I spent quite a bit of time looking into a similar issue, and in my case the delays only appeared on POST requests whose HTTP message carried request content.
It turned out to be down to network performance problems (mobile clients): the POST handler was trying to read the body of the message, which was taking a long time to transmit. The takeaway is that these delays, while attributed to the controller handler, were actually just time spent waiting for the request body to be transmitted.
Related
I'm adding tracing to my .NET Core Web API projects.
When I call into my endpoints using Swagger, I see the traces in Zipkin that go across multiple services because of the OpenTelemetry.Instrumentation.Http package that I installed.
So far so good.
The problem comes when I try to hit those same exact endpoints from my actual web application. No traces are captured. Best guess is that something needs to be added to the website as well.
Optimally, what I would like to do is...
start a new span (System.Diagnostics.Activity) if one doesn't come across the wire. That way, regardless of what calls me, I can still capture a trace through my various systems.
Anyone have an idea on how to do that?
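For reference, the kind of setup I'm experimenting with looks like this (a sketch; the package names are from the OpenTelemetry .NET project and the service name is a placeholder). My understanding is that AddAspNetCoreInstrumentation starts an Activity for every incoming request, continuing an incoming traceparent if one is present and starting a new root span otherwise, and that AlwaysOnSampler rules out an unsampled upstream context suppressing my spans:

// Program.cs (.NET 6+ minimal hosting), assuming the
// OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore,
// OpenTelemetry.Instrumentation.Http and OpenTelemetry.Exporter.Zipkin packages.
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("my-api"))    // placeholder service name
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()    // Activity per request; new root if no traceparent
        .AddHttpClientInstrumentation()    // propagates context to downstream services
        .SetSampler(new AlwaysOnSampler()) // sample even without a sampled parent
        .AddZipkinExporter());

var app = builder.Build();
app.MapGet("/ping", () => "pong");
app.Run();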
In an ASP.NET Core Web Application I want to measure how many request units are spent in a single web request. With this information I could identify "expensive" calls and look into optimizing them.
I know how to obtain the request unit count from a single CosmosDB REST call. But this is a layered application where the persistence level that interacts with the CosmosDB does not have access to the request object in ASP.NET Core.
Is there a way to obtain some kind of request id? I wonder how Application Insights keeps track of which dependent internal calls belong to a specific web request.
Or is there a way to get this information in Application Insights?
This depends on multiple things including which SDK you are using.
If you are using the default Cosmos DB SDK, also known as the v2 SDK, then (assuming you enabled Application Insights support) Cosmos DB will only log its dependency calls if you are using an HTTP/HTTPS connection. TCP mode won't be captured by Application Insights. This means you would either have to use HTTPS, which is bad in terms of performance, or code something custom.
If you are using Cosmonaut, then it comes out of the box with a general-purpose EventSource which tracks each call as a dependency no matter the connection type, and it also collects multiple metrics such as the RUs and a lot more. You would need to reference the Cosmonaut.ApplicationInsights NuGet package and initialise the AppInsightsTelemetryModule like this:
AppInsightsTelemetryModule.Instance.Initialize(TelemetryConfiguration.Active);
or use the IoC alternative of:
services.AddSingleton(AppInsightsTelemetryModule.Instance);
This will give you logging for every action, with detailed metrics that include the request charge. You can then use a query like the following to see spikes and investigate further, or just query for requests with Cosmos dependencies which exceed a threshold:
dependencies
| where type contains "Cosmos" and customDimensions.RequestCharge != ""
| summarize sum(toint(customDimensions.RequestCharge)) by bin(timestamp, 1m)
PS: You don't have to use the CosmosStore if you don't need it. Using the CosmonautClient instead of the DocumentClient will do the logging job as well.
This is available in the CosmosDB REST API response headers. You will need to create a correlation between your web call and the CosmosDB operations and then aggregate.
From the docs:
x-ms-request-charge — This is the number of normalized requests, a.k.a. request units (RU), for the operation. For more information, see Request units in Azure Cosmos DB.
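As one possible shape for that correlation (a sketch; every name below is invented, and RequestCharge is the property the SDK responses expose alongside the x-ms-request-charge header), you can stash a mutable accumulator in an AsyncLocal so it flows with the async context of a single web request:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// A mutable holder is stored in an AsyncLocal so the reference flows down
// the async call chain of one web request; the persistence layer mutates
// it without ever touching the HttpContext.
public static class RequestChargeAccumulator
{
    private sealed class Holder { public double Total; }
    private static readonly AsyncLocal<Holder> Current = new AsyncLocal<Holder>();

    public static void BeginRequest() => Current.Value = new Holder();

    public static void Add(double ru)
    {
        var h = Current.Value;
        if (h != null) h.Total += ru;
    }

    public static double Total => Current.Value?.Total ?? 0;
}

public class RequestChargeMiddleware
{
    private readonly RequestDelegate _next;
    public RequestChargeMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        RequestChargeAccumulator.BeginRequest();
        await _next(context);
        var ru = RequestChargeAccumulator.Total;
        // log ru here, or report it as a custom metric to Application Insights
    }
}

// Persistence layer, after each Cosmos call (v3 SDK property shown):
//   ItemResponse<MyDoc> response = await container.ReadItemAsync<MyDoc>(id, pk);
//   RequestChargeAccumulator.Add(response.RequestCharge);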
I'm having trouble getting the authentication portion working, particularly the external authentication. I'm using a client project to call my API, which then handles all the OAuth processing.
My issue is that once you authenticate through Facebook, it redirects to my API URL, and that redirect URL carries the access_code needed to authorize subsequent API calls from the client. Is there a best practice for dealing with this situation? For instance, should I parse the access_code out of the URL and somehow send it back to the client project?
Searching for how to handle this yields vague results. Most everything I come across leads back to one of two links:
This is helpful understanding the high level concept
This implies that you should just dig around in the SPA template and figure it out on your own
neither of which really helps me out much in a "how-to" sense.
The client project I'm ultimately working with is a Xamarin project, so I'm looking for C# or Xamarin library code how-tos in particular. If anyone can help, I'd appreciate it.
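To illustrate the kind of interception I'm imagining (a sketch only; the redirect URI, the fragment parameter name, and the token format are guesses that would have to match the API's actual behavior), something like this in a Xamarin.Forms WebView:

using System;
using System.Web;
using Xamarin.Forms;

var browser = new WebView
{
    Source = "https://myapi.example.com/api/Account/ExternalLogin?provider=Facebook"
};

browser.Navigating += (sender, e) =>
{
    // Watch for the API's post-login redirect and pull the token out of the URL.
    if (e.Url.StartsWith("https://myapi.example.com/signin-callback"))
    {
        var uri = new Uri(e.Url);
        // Tokens often come back in the fragment (#access_token=...)
        // rather than the query string.
        var fragment = HttpUtility.ParseQueryString(uri.Fragment.TrimStart('#'));
        var accessToken = fragment["access_token"];

        e.Cancel = true; // stop the WebView from loading the redirect page
        // hand accessToken to the rest of the client app from here
    }
};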
I do not know how to explain this in technical terms. So let me begin with an example:
Story
I have an online e-commerce site, www.ABCStore.com. I built it using MVC 4 (Razor) in .NET.
My friend has a travel agency whose online site is www.DEFAgency.com. He had it built in Java.
Both our websites were up and running. One fine day I got a call from a company, FicticiousServiceProvider, asking if I would be interested in getting customer feedback as a functionality on my website without having to write any code myself. All I would have to do was include a single line of code in the footer of my master page (or layout page), and then customers who logged on to the site would see a small icon on the pages and would be able to provide their feedback.
The feedback will not be available directly to me. The FicticiousServiceProvider guys will analyze the data and provide it to me on a regular basis or on a need basis.
There were other services too which they offered.
I was really happy to have functionality like that, especially without having to write any code. I tried it and it worked fine on my .NET website. My friend (with a Java website) also added a single line to his code and it worked for him too.
My questions here are:
What is this process called?
If I were FicticiousServiceProvider, how would I have developed this using .NET? I mean, how do you develop functionality so that a consumer can consume the service using a single line provided by the service provider? Data transfer from my site, in the form of feedback, to FicticiousServiceProvider is also happening, without me being able to see anything.
How was it possible for FicticiousServiceProvider to provide the functionality to a .NET app and a Java app without any change in the line provided by them?
I have given the description from a consumer's perspective. Please answer from a developer's perspective. Many thanks.
These things, like the Google Analytics tracking code, are usually some kind of JavaScript injection. They use JavaScript to 'inject' a bit of code that sends a request to their servers (what their server side is coded in is really irrelevant). They then handle the request, which includes the information gathered by JavaScript on the client side, store it, and use server-side software to analyse that data and produce reports, etc.
So, to answer your questions separately:
I'd call the process JavaScript injection.
You would have to find the best way to send a request to your servers and handle that request. It could be done with ASP.NET MVC quite easily, but any server-side technology that can handle requests and send data to a store would do (see the sketch below).
They use JavaScript, which is separate from any server-side code and works across browsers on the client side; that's why the same line works on both a .NET and a Java site.
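As a rough illustration of the service provider's side (a sketch only: the controller, route, and FeedbackStore are invented names, and any server-side stack could fill this role), an ASP.NET MVC endpoint receiving the feedback beacon might look like:

using System.Web.Mvc;

public class FeedbackController : Controller
{
    // Hypothetical collection endpoint; the injected script on the customer's
    // site posts feedback here, so the host site never sees the data.
    [HttpPost]
    public ActionResult Collect(string siteId, string rating, string comment)
    {
        // FeedbackStore is a stand-in for whatever persistence you choose.
        FeedbackStore.Save(siteId, rating, comment, Request.UserHostAddress);
        return new HttpStatusCodeResult(204); // nothing to render on the page
    }
}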
I am creating an ASP.NET MVC website that uses a 3rd party API (web service) as a data source. It is read-only, and to date has been accessed by individuals using desktop applications (most in C#). I would like to consume this API using a web site in order to centralize information and give users historical information, automate certain repetitive tasks, and more easily allow sharing of information among users.
The desktop clients today experience throttling, and if you make repeated requests to the API using a client your IP will be throttled and/or banned. I think that if I made the requests to the API from my website, its IP would be banned the moment it saw any significant use.
Let's assume that I cannot work something out with the API owners. Probably the easiest way to work around this problem is to do all of the API access using AJAX. When the user visits the website, he makes the requests to the API using AJAX, then turns around and posts the results to my website. I don't like this idea for multiple reasons: first, it'll be slow, and second, I couldn't guarantee that the data sent to my website was genuine. A malicious user could, for whatever reason, send me bad information.
So I thought a better idea would be to establish a man-in-the-middle. The user would still be forced to make the AJAX request, but they would make it to a proxy or something else that I control, which would then forward it on to the real API and intercept the response so I could be a little more certain that the data I retrieved was genuine.
Is it possible to create such a "proxy"? What would it entail? I would like to do it using a .NET technology but I'm open to any and all ideas.
EDIT: It seems I caused confusion by using the word "proxy." I don't want a proxy; what I want is a pass-through that allows me to intercept the response from the API. I could have the client make the request and then subsequently upload the result, but I don't want to trust the client, I want to trust the API.
Let me explain this in shorter form. There is a client on a user's machine which can make a request to an API to get current information. I would like to create a website that does the same thing, but I am considering the possibility that the API web service may notice that while previously it was receiving ten requests for ten users from ten different IPs, it is now receiving ten requests for ten users from one IP, and block that IP, seeing it as a bot, even though every request was kicked off by a user action just as before. The easiest way to work around this is to have the user make the request and then upload the response to me, but if I do that I am forced to blindly accept data from a client, which is a huge no-no for any website in any situation. If instead I can place something that forwards the request along to the API, preserving the IP of the user, but is also capable of intercepting the response, thereby proving that the data is authoritative, that would be preferred. However, I can't think of a software mechanism to do this; it seems like it would need to be done at a different layer.
As for legal concerns, this is a widely used API with many applications and users (and there are other websites I have found using the API), but I was unable to find any legal information like terms of service beyond forum postings in the API's tech support section amounting to "don't make repeated requests, obey our caching instructions" etc. I can't find anything that would indicate this is an illegal or incorrect use of the web service.
You could implement your proxy. It wouldn't need to be AJAX though; it could just be a normal web page request that displays the API results if you wanted.
Either way, in .NET you could do it using ASP.NET MVC. If you want AJAX, use a Web API controller action that wraps the source API; if you want a web page, just use a regular MVC controller/action.
Inside your controller, you would just make a web request to the source, passing through the parameters.
In order to avoid throttling, you could cache the results of each request you make from your server (using the normal ASP.NET cache), so that if another client attempted to make the same request, or perhaps a similar one, you could return the cached results instead of making another request to the API.
You would have to determine how long the results should be cached for, depending on how up to date the data needs to be in your client. For weather data, for example, caching for an hour would seem OK; for faster-moving data it would have to be less. You have to strike a balance between avoiding throttling and keeping data fresh.
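A rough sketch of what that could look like with Web API 2 (the upstream URL, the cache key scheme, and the ten-minute lifetime are all placeholders to adapt):

using System;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading.Tasks;
using System.Web.Http;

public class PassThroughController : ApiController
{
    private static readonly HttpClient Client = new HttpClient();
    private static readonly MemoryCache Cache = MemoryCache.Default;

    [HttpGet]
    public async Task<IHttpActionResult> Get(string query)
    {
        var cacheKey = "api:" + query;
        if (Cache.Get(cacheKey) is string cached)
            return Ok(cached); // serve from cache; no upstream call, no throttling risk

        // Forward the request to the real API, passing the parameters through.
        var response = await Client.GetStringAsync(
            "https://thirdparty.example.com/api?" + query);

        // Cache the trusted upstream response so repeated requests don't hit the API.
        Cache.Set(cacheKey, response, DateTimeOffset.Now.AddMinutes(10));
        return Ok(response);
    }
}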
You could also intelligently fetch more data than you need at each request and then filter the result set that you return to your client. This could give you a better cache hit rate.