I have a bunch of ASP.NET Core Web API apps running in AKS, and here is what I observe:
The first API request of the day completes in 5009 ms.
Subsequent requests within a few seconds complete in 367 ms.
After 20 minutes, when I make an API call again,
the first call takes 2590 ms. Why? Is this cold-start behavior? How can I avoid it?
Subsequent calls complete in 372 ms.
I came to know that App Service has a solution for this: I can set the service to Always On to avoid delayed API responses after idle time.
The question is: what are the solutions for Web APIs deployed to AKS? Will a Kubernetes liveness probe help here (so far I haven't used one)?
You should always have a liveness probe set up; it is the only way Kubernetes knows that your container hasn't died.
If you call e.g. api/version in the liveness probe, or if you have external dependencies, you could chain those into an api/healthcheck endpoint and then point your deployment at that check.
Configure liveness, readiness and startup probes
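A minimal sketch of how those probes might look in the Deployment manifest, assuming a hypothetical api/healthcheck endpoint on port 80 (the path, port, and timings are all placeholders to adjust for your app):

```yaml
# Probes for an ASP.NET Core container; all values are illustrative.
livenessProbe:
  httpGet:
    path: /api/healthcheck
    port: 80
  periodSeconds: 10
  failureThreshold: 3
startupProbe:
  httpGet:
    path: /api/healthcheck
    port: 80
  # Give the app up to 30 * 5 = 150 s to start before liveness applies.
  periodSeconds: 5
  failureThreshold: 30
```

Note that probes are for restarting unhealthy containers, not for warm-up; as a side effect, though, the periodic probe traffic does keep the request pipeline warm, somewhat like Always On's synthetic pings.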
I'm developing a Web API and I hosted it on Azure. I have a call that takes about 2.5 seconds on my local machine but a lot longer when the app is hosted in Azure, as you can see in this figure:
It's taking 12.8 seconds, which is not expected. Why is this happening, and what is the part highlighted in red? Why does it take about 10 seconds to reach the first operation in the code? I have Always On enabled, so this is not my API going to sleep. Also, sometimes the call takes less time (4-6 seconds), which is an inconsistency. Please enlighten me.
If CPU usage is not high, one reason could be SNAT port exhaustion or pending SNAT allocations: if you have too many open TCP connections (including SQL Server's), new connections will have to wait.
You can check that from your App Service under "Diagnose and solve problems" -> "Availability and Performance" -> "SNAT Port Exhaustion".
If this is the case, this is a good place to start: https://learn.microsoft.com/en-us/aspnet/web-api/overview/advanced/calling-a-web-api-from-a-net-client
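The most common code-level cause is creating a new HttpClient per request, which leaves sockets in TIME_WAIT and burns through SNAT ports. A sketch of the usual fix (the class and method names here are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Anti-pattern: new HttpClient() per request opens a fresh TCP connection
// each time and can exhaust SNAT ports under load.

// Fix: share a single HttpClient instance (its request methods are
// thread-safe), or use IHttpClientFactory in ASP.NET Core.
public static class ApiClient
{
    private static readonly HttpClient Client = new HttpClient();

    public static Task<string> GetAsync(string url) =>
        Client.GetStringAsync(url);
}
```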
Have you tried increasing the tier of your App Service plan? This will help you understand whether it's an infrastructure problem or a code problem.
I need help understanding where my time problem is. I have a WinForms/WPF application which communicates with a WCF service through a Web API 2 service and a System.Net.Http.HttpClient.
Client => HttpClient => webapi => wcf service.
When I deploy this and run it, the first request takes a very long time to get an answer back, but the second and subsequent requests are very fast.
If I don't run it for a while, it goes to sleep again.
Why is it so slow in the beginning, and what should I look at?
On the first call, the Web API has to initialize (IIS only starts the API when the first request arrives). This takes some time. IIS also has a default application pool Idle Time-out of 20 minutes, so after 20 minutes of inactivity the app goes to sleep and IIS has to wake it up again.
WebApi why 1st call is slow?
Almost the same problem exists with WCF:
WCF why 1st call is slow?
So in your app you have a slow first API call, and after that a slow first WCF call. You have doubled the slowness.
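If you control the IIS machine, the usual mitigations are to disable the idle timeout and keep the pool always running; for example with appcmd ("MyApiPool" is a placeholder for your application pool name):

```bat
%windir%\system32\inetsrv\appcmd.exe set apppool "MyApiPool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd.exe set apppool "MyApiPool" /startMode:AlwaysRunning
```

On IIS 8+ you can additionally enable the Application Initialization module so the app is warmed up before the first real request arrives.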
I have a .NET Core console application that I need to deploy to Azure and schedule to run once a day. The application creates a TCP socket to get market data. I need to schedule it to run in the morning; the application will receive a close message from the market near the end of the day and close automatically. The approximate run time is estimated at 16 hours, 5 days a week.
Here are the options I've researched:
Cloud Services, which might be deprecated (I'm having a hard time validating the comments I've read to this effect)
Service Fabric - but this really looks like it's tailored for stateless applications that spin up and down for scale. In my case it should always be a single instance (I do like the self-healing: if my service goes down, it would be great if it were automatically restarted or a new one spun up)
Azure WebJobs with the Azure Scheduler. It looks like I could set this to Always On and add a settings file with a cron configuration, but it seems like a waste of resources to keep it always on. This option also appears to be limited in its deployment options - I can't set up (that I can see) Git integration and auto-deploy. This does seem like the way to go, though.
I'm looking for the pros and cons of the options above for my use case, or any other options I might have missed.
There's one thing that seems to be overlooked here. This part:
The application is creating a TCP Socket to get market data.
Is that 80/TCP or 443/TCP? Does it talk HTTP over one of those ports?
Because if your application talks a custom protocol over an arbitrary TCP port, you can't use WebJobs to listen for it: the App Service sandbox does not allow arbitrary port binding. This applies to ingress only. For egress (outbound) there's no restriction - you can make raw TCP requests from the WebJob to any destination and port.
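So an outbound market-data connection from a WebJob would be fine. A minimal sketch, where the host name and port are placeholders for the real feed:

```csharp
using System.IO;
using System.Net.Sockets;

class FeedReader
{
    static void Main()
    {
        // Outbound (egress) raw TCP from a WebJob is allowed by the sandbox.
        // "feed.example.com" and 5555 are placeholder values.
        using (var client = new TcpClient("feed.example.com", 5555))
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Process market data until the feed sends its end-of-day close.
            }
        }
    }
}
```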
From https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#network-endpoint-listening:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
There's no need to involve the Azure Scheduler service. WebJobs have a built-in cron implementation which is completely free.
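For instance, a triggered WebJob is scheduled by dropping a settings.job file next to the executable; a "weekday mornings at 08:00" schedule (six-field NCRONTAB, seconds first) would look roughly like this - the time itself is just an example:

```json
{
  "schedule": "0 0 8 * * 1-5"
}
```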
Also, the Always On feature really means "hit this site with synthetic requests every couple of minutes so it serves a 200 OK", thus preventing the application pool from being unloaded from memory due to inactivity.
I would use the tank. I can't find anything wrong with the tank if you can pick your tank size. There's also very little maintenance with tanks.
Size (id)     Cores   RAM       Net bandwidth   Total disk size
------------  ------  --------  --------------  ---------------
ExtraSmall    1       0.75 GB   Low             19 GB
Small         1       1.75 GB   Moderate        224 GB
...
ServiceDefinition.csdef if you need to listen on a socket:
<Endpoints>
  <InputEndpoint name="aRawTCPEndpoint" protocol="tcp" port="54321" localPort="54321" />
</Endpoints>
Where does your application keep state? Memory, disk, or an off-the-box database? Cloud Service roles are stateless in nature: if one instance gets sick, it's barbecued and a new one is spun up. It's crucial that state be kept off the box, in durable storage - Blob/Table storage, Azure SQL, DocumentDB, etc.
Imagine you built your house 6 years ago using a material called ClassicBrick. It is a good material: strong, waterproof, scratch-resistant. But recently a newer and better material - let's call it Armritis (which, by the way, is designed to be used in BRIDGES, not houses, but I digress) - came out, which everybody tells you is better in every way. Do you tear down the house? Cloud Services are not deprecated, and until I see an official Microsoft roadmap saying otherwise, I'm not going to entertain this in any way.
On the topic of Service Fabric, it CAN do stateful and it's actually one of its biggest selling points:
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-quick-start/:
Create a stateful service
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
Azure Functions is also worth a good look if you can speak HTTP over standard ports.
First, if you don't want to waste resources, we need to compare the pricing of Cloud Services, Service Fabric, and WebJobs & Scheduler. Here is the pricing calculator. Since your console application runs only at specific scheduled times, it is better not to pay while the job isn't working. So WebJobs (if you have a web app in the meantime) with the Scheduler would be a good choice to achieve your purpose.
Using a Cloud Service for such a tiny job is like using a tank to go to work.
Service Fabric is mainly for building microservices-style applications, not console apps or jobs that run once a day.
WebJobs require a web app, which serves as the base for the WebJob - so that is the remaining option.
You can create a schedule to make it run every day at a specific time, or execute it manually on demand.
I would go with one of these solutions, in this order of priority:
App Service: Web Apps (Web Job)
Virtual Machine
Cloud Service
App Service: Web Apps (WebJobs) provides a free plan, and it has started to support WebJobs on the free plan. You will be able to work with files, should you need that. As mentioned in the other answers, just set up a schedule. If you have doubts and think it is not kosher to run it on a website, then think of it as getting a free website as a bonus (if you use a paid plan). Either way, everything runs on a machine - be it with a web server or without. Maybe you will even revive some long-forgotten web project of yours?
Cloud Services and Virtual Machines are both straightforward and simple. To be honest, I haven't used Cloud Services, but I think you can connect to one via Remote Desktop just like an ordinary VM, and you get complete control. I would choose a Virtual Machine over a Cloud Service, though, simply because it is cheaper.
Solutions that will NOT work:
Azure Scheduler does not fit, because it only allows HTTP/S requests and posting messages to Azure Storage queues, Azure Service Bus queues, or Azure Service Bus topics.
Personally, I would go with a WebJob without Always On and use the Azure Scheduler to fire the WebJob at the desired times over HTTPS. The WebJob can then make the calls needed to get the data. This doesn't need Always On, since the Scheduler call wakes the app up anyway.
Azure Functions might also be worth a look, though they aren't meant for long-running tasks.
This would probably also be the cheapest option, as the Web App can likely run on the Free tier (depending on how long the job takes) and the Scheduler can also be on the Free tier.
I have a WCF service which takes computer IDs on a network as an input parameter and saves each computer's stats for the past 12 hours (how much time the computer was locked, active, idle, etc.) to the database.
I also have a website from which I can schedule a stats run for a few computers at some time t (for the past 12 hours, as mentioned above). This scheduling information (computer ID and time) is saved to the database.
Now the issue is how to make sure the WCF service runs at that particular scheduled time, and how to show the computer stats on the website once the WCF service has been called and the stats have been generated. If I use a Windows service to call the WCF service, how will I ensure it runs at the scheduled time, and how do I inform the website that the stats have been generated?
Any help would be appreciated.
Cheers!
Set up a scheduled task that calls the WCF service method you want to run, using something like cURL.
I believe the website needs refreshing so it can pick up the data after the WCF method has executed, so you can again use cURL to make a web page call.
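For example, on Windows the scheduled task could be created with schtasks; the task name and URLs below are placeholders:

```bat
REM Run daily at 06:00; the task simply curls the service endpoint.
schtasks /Create /TN "CollectStats" /SC DAILY /ST 06:00 ^
  /TR "curl http://yourserver/StatsService.svc/collect"

REM After the stats are generated, hit the website so it picks up the new data:
curl http://yourwebsite/stats/refresh
```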