We had an issue this week where a non-performant SQL query caused an app to start returning 502s once the query's execution time exceeded the Azure App Service request timeout (which does not appear to be user-definable).
I'm trying to find a way to detect and alert on these issues. I've gone through the Microsoft documentation but only find recommendations like "scale out". If I look at Application Insights, it doesn't seem to log the 502 since, from the app's perspective, the script ran successfully. Is there a way to log these instances with some context, so we can fix non-performant scripts before a user calls in about a 502? The only option I see is to NOT use the built-in load balancing/app gateway features of Azure App Service and roll a separate load balancer/gateway to have more control over it.
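The closest workaround I've sketched so far (assuming ASP.NET Core and the Application Insights SDK; the middleware name and the 30-second threshold are my own placeholder choices): measure request duration server-side and emit a custom event whenever a request runs long, so slow queries surface in App Insights before the App Service front end gives up (roughly 230 seconds) and hands the caller a 502.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.AspNetCore.Http;

// Emits a "SlowRequest" custom event for any request that runs longer than
// the threshold; you can then set an App Insights alert on that event.
public class SlowRequestLoggingMiddleware
{
    private static readonly TimeSpan WarnThreshold = TimeSpan.FromSeconds(30);

    private readonly RequestDelegate _next;
    private readonly TelemetryClient _telemetry; // requires AddApplicationInsightsTelemetry()

    public SlowRequestLoggingMiddleware(RequestDelegate next, TelemetryClient telemetry)
    {
        _next = next;
        _telemetry = telemetry;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            await _next(context);
        }
        finally
        {
            stopwatch.Stop();
            if (stopwatch.Elapsed > WarnThreshold)
            {
                var evt = new EventTelemetry("SlowRequest");
                evt.Properties["Path"] = context.Request.Path;
                evt.Metrics["DurationSeconds"] = stopwatch.Elapsed.TotalSeconds;
                _telemetry.TrackEvent(evt);
            }
        }
    }
}

// Registration: app.UseMiddleware<SlowRequestLoggingMiddleware>();
```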
I am using several Azure services (Kubernetes cluster, API, Key Vault, IoT Hub, Cosmos DB, storage account, Data Lake, AD B2C, Power BI). I want the failure message and time of these services in my C# (or any other language) application. Is there any API for this purpose, or any other way to get the failure message and time?
By "failure" I mean a failed or non-responding state of an Azure service.
I only want failure or fault messages, not normal service messages. I didn't find any such filter, REST API, or type.
Since you are already using multiple Azure services, your best bet would be to integrate your application with Azure Application Insights, a monitoring and diagnostics tool provided by Azure. Configuring Application Insights is extremely easy. You can check this link.
Depending on your framework and language of choice there are multiple options. Once you have installed the Application Insights SDK in your solution, it will automatically start monitoring and reporting all failures. All external dependencies in your application are tracked automatically, and their failures are logged (in roughly 90% of scenarios you won't have to write custom code to track these errors). Other details, such as the time and the failure message, are logged as well. If you want to see which Azure services are monitored, check the link here.
Along with this, you also get the option to log custom messages, events, metrics, exceptions, or dependencies.
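Manual tracking looks roughly like this (a sketch; the connection string and the dependency values are placeholders):

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class TelemetryDemo
{
    static void Main()
    {
        // Placeholder connection string; use the one from your AI resource.
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
        var client = new TelemetryClient(config);

        try
        {
            // ... call an Azure service here ...
        }
        catch (Exception ex)
        {
            client.TrackException(ex); // failure message and timestamp are recorded
        }

        // Manually record a failed dependency call (type/name/data are illustrative).
        client.TrackDependency("Azure Cosmos DB", "query-orders", "SELECT * FROM c",
            DateTimeOffset.UtcNow.AddSeconds(-2), TimeSpan.FromSeconds(2), success: false);

        client.Flush(); // push buffered telemetry before the process exits
    }
}
```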
I don't know the exact purpose of your question, but if you want to check whether a service is available (or has failed due to some internal Azure issue), use Resource Health:
https://learn.microsoft.com/en-us/azure/service-health/resource-health-faq
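As a sketch (the resource ID is a placeholder, and the api-version may need adjusting for your tenant), the current availability status of a single resource can be pulled from the Resource Health REST API like this:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class ResourceHealthCheck
{
    static async Task Main()
    {
        // Placeholder ARM resource ID; any resource you own works.
        const string resourceId =
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<hub-name>";

        var credential = new DefaultAzureCredential();
        var token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }),
            CancellationToken.None);

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        // Returns availabilityState (Available/Degraded/Unavailable), a summary
        // message, and the time of the last state transition.
        var url = $"https://management.azure.com{resourceId}" +
                  "/providers/Microsoft.ResourceHealth/availabilityStatuses/current" +
                  "?api-version=2020-05-01";
        Console.WriteLine(await http.GetStringAsync(url));
    }
}
```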
If you want to monitor Azure services, you must create a diagnostic setting for each service to send its logs to a Log Analytics workspace for use with Azure Monitor. For archiving, you can use the Azure Storage archive/cool tiers, or use Azure Event Hubs to forward the logs outside of Azure (for example, into Kafka).
For more information visit https://learn.microsoft.com/en-us/azure/azure-monitor/
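Once the logs land in a Log Analytics workspace, you can pull failure records from C# with the Azure.Monitor.Query package. A sketch (the workspace ID and the KQL are placeholders; the exact table and columns depend on which service wrote the logs):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class WorkspaceQuery
{
    static async Task Main()
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // Illustrative query: last day of error-level diagnostics records.
        var response = await client.QueryWorkspaceAsync(
            "<workspace-id>",
            "AzureDiagnostics | where Level == 'Error' " +
            "| project TimeGenerated, ResourceProvider, Message | take 50",
            new QueryTimeRange(TimeSpan.FromDays(1)));

        foreach (var row in response.Value.Table.Rows)
            Console.WriteLine(string.Join(" | ", row));
    }
}
```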
I have a performance issue with an ASP.NET Web API app hosted as an Azure Web App. After deploying, the first request to the web service is really slow (we are talking about seconds here). Subsequent requests work just fine without extra delay.
"Always on" feature works fine keeping the app from unloading but this does not solve my issue. I do not want this first request to warm up the service (BTW - should it be warmed up?).
I've used the diagnostic and profiling tools in Azure without finding the root cause, and I've used Application Insights as well. It seems like one function of mine needs much more time to execute during this first request; debugging the app locally, I did not notice any performance issue with that function.
How can I fix this?
Thanks!
This bit me as well. "Always On" will only make automated calls to your service root: think of it as nudging the process every so often so it won't fall asleep. We don't use this in our PROD services; instead we have an Azure Availability Test invoking a Ping() endpoint every 5 minutes, which kills two birds with one stone. Besides, Always On will generate 404 errors in App Insights if you don't have anything at the root.
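The Ping() endpoint can be as small as this (a sketch assuming classic ASP.NET Web API 2 with attribute routing enabled; the route name is our own choice):

```csharp
using System.Web.Http;

public class PingController : ApiController
{
    // Target for the Azure Availability Test to hit every 5 minutes.
    [HttpGet]
    [Route("ping")]
    public IHttpActionResult Get() => Ok("pong");
}
```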
A totally different thing is warming up each of the endpoints so they get JIT-compiled and ready, and I have not found anything better than a warm-up script with the whole list of endpoints to call; it is not perfect, but it works. Every time you do a deployment or a restart, it runs automatically and your first calls won't be hurt. A sketch follows.
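Something along these lines (the endpoint list and host name are obviously placeholders for yours):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Post-deployment warm-up: hit every endpoint once so the code paths get
// JIT-compiled before real traffic arrives. Warm-up is best-effort, so
// failures are logged and skipped rather than aborting the run.
class WarmUp
{
    static async Task Main()
    {
        string[] endpoints =
        {
            "https://myapp.azurewebsites.net/ping",
            "https://myapp.azurewebsites.net/api/orders",
            "https://myapp.azurewebsites.net/api/customers",
        };

        using var http = new HttpClient { Timeout = TimeSpan.FromMinutes(2) };
        foreach (var url in endpoints)
        {
            try
            {
                var response = await http.GetAsync(url);
                Console.WriteLine($"{url} -> {(int)response.StatusCode}");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"{url} -> {ex.Message}");
            }
        }
    }
}
```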
Have a look at this article.
I hope this helps
I have been monitoring my ASP.NET application with Application Insights (AI). Lately, I also installed AI Status Monitor on my web server (Windows Server 2012 R2 with IIS) to get more detailed stats about my app. As the documentation says, AI Status Monitor reports dependency diagnostics, i.e. calls to databases, REST APIs, etc. I therefore expected to get diagnostics for the database calls my app performs via Entity Framework.
However, no database call diagnostics appear in AI for the app, even though AI Status Monitor itself works: I started to receive diagnostics about other dependencies, just not the database (so blocked communication ports on the firewall are unlikely to be the issue here).
Has anyone successfully set up AI Status Monitor to report database diagnostics with Entity Framework? Am I missing any configuration that needs to be added to either the app's code or AI Status Monitor?
One possible problem is that the identity of the IIS application pool needs to be added to the "Performance Monitor Users" group. If you are also not getting any performance counter data sent up, then this is definitely the cause of that part.
The other possibility is that profiling is not enabled on your IIS site. When you launch AI Status Monitor on the web server, there will be an "Update config" button in the upper-right corner if COR profiling has been disabled for IIS (click that button, then a few seconds later click the Restart IIS button). If you are in this situation, you might have a conflict if your corporate environment uses SCOM: it will work for a few days, but eventually SCOM will notice a piece of itself is no longer working, override the profiler, and you'll lose dependency data again. The conflict between AI Status Monitor and SCOM's MMA won't be resolved until SCOM 2016.
I have a database and an MVC application hosted on IIS. I periodically gather data from the internet and save it in the SQL database, then calculate statistics and graphs from the data and publish them in the MVC application.
The problem is that IIS has a recycling period of about one hour, meaning my timer (the function that gathers data from the internet) is stopped whenever there is a server restart, a recycle, or no requests to the web page.
Solutions I have found so far:
Turn off recycling - I don't own the server, so I can't do that.
A Windows service - 99% of hosts don't allow hosting one...
So is there any solution, service, or framework whose purpose is to gather data, where I can be sure it will not stop after some inactivity time or a server restart? Or is my logic completely wrong and I need to gather data differently? Can it be done on hosting I don't own? Can it be done using IIS?
Can it be done using IIS?
If the IIS in question has AppFabric installed, it supports an auto-start feature, which effectively lets you write service-like code that keeps running in the background.
Quick overview here
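In managed code the hook for this is IProcessHostPreloadClient (a sketch; DataCollector.StartTimer() stands in for your own background logic, and the preload class still has to be registered as a serviceAutoStartProvider in applicationHost.config):

```csharp
using System.Web.Hosting;

// Runs when the app pool starts, before any request arrives, so the
// background timer is alive even if nobody has hit the site yet.
public class Preloader : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        DataCollector.StartTimer(); // placeholder for your data-gathering setup
    }
}
```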
I am developing a project for college and I need some suggestions about the development. It's a website which shows information from other websites, such as links, images, etc.
I have prepared the model given below for the website.
A Home.aspx page which shows data from tables (SQL Server).
I have coded a crawler (in C#) which can crawl (fetch) the required website data.
I want some way to run the crawler in the back end at some time interval so it can insert updates into the tables. That way my database stays current and Home.aspx shows updated info. (It's like a smaller version of the Google News website.)
I want to host the website in a shared hosting environment (i.e. a third-party hosting provider, which may use the IIS platform).
I posted a similar situation to different .NET forums and communities and they suggested lots of different things, such as:
Create a web service (is it really necessary?)
Use WCF
Create a console application and run it via the Windows Task Scheduler (is that okay with an ASP.NET Web Forms website and on shared hosting?)
Run the crawler on a local machine and update the database accordingly (no, I want everything online), etc.
Please suggest a clear way for me to complete the task, with elaborated technologies and methods which suit my project.
Waiting...
Thanks...
Your shared-host constraint really restricts which technologies you can use.
In theory, the best way to host your crawler would have been a Windows service, since you could take advantage of Windows service configuration: a service is always up, can be started automatically at boot, writes errors to the event log, can be restarted automatically after a failure...
Then your Home.aspx would have been a regular website in IIS.
If you really must stay on a shared host (where you cannot set up a service), I would make the crawler a module which is run on your application's startup.
The problem is that an IIS application pool doesn't live forever if your website is not in use, and when it stops it may take the crawler down with it. This is configurable, but I don't know how much of it you can change on a shared host.
In IIS 7.5, think about starting your module at application warm-up.
Finally, if you need to run the crawler at set intervals (like every day at midnight) and your shared host does not let you set up task scheduling, consider the Quartz framework, which lets you perform task scheduling inside your application (without the intervention of the OS).
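With Quartz.NET (the .NET port of Quartz), the wiring could look roughly like this; Crawler.CrawlSite() is a placeholder for your own code, and StartAsync() would be called from Application_Start:

```csharp
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// The unit of work Quartz runs on each trigger firing.
public class CrawlJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        await Crawler.CrawlSite(); // fetch data and update the SQL tables
    }
}

public static class CrawlerScheduler
{
    public static async Task StartAsync()
    {
        IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<CrawlJob>()
            .WithIdentity("crawl-job")
            .Build();

        // Cron expression: every day at midnight. Adjust to taste.
        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0 0 * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```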
Integrate your crawler code into an .aspx page.
Set up a task scheduler on your host to call that page every X minutes.
When the page is called, check whether localhost made the call.
If localhost called it, run the crawl routine.
If localhost didn't call it, throw a 404 error (a sketch of this scheme follows).
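A sketch of the last three steps (classic Web Forms assumed; Crawler.Run() is a placeholder for your crawl routine):

```csharp
using System.Web.UI;

// The page runs the crawl only when called from the machine itself
// (i.e. by the host's task scheduler), and pretends not to exist otherwise.
public partial class Crawl : Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        if (!Request.IsLocal)
        {
            Response.StatusCode = 404;
            Response.End();
            return;
        }
        Crawler.Run(); // fetch data and update the tables
    }
}
```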