I'm running AForge.NET's provided two-camera test sample on my dual-core 2.0 GHz laptop with 2 GB RAM. As soon as the application starts displaying video from the two cameras, I see a lot of CPU usage; it's currently consuming 60% to 70% of the entire CPU. Can anyone tell me why it consumes that much CPU and how I can avoid it? I have to build a similar application that requires two-camera vision, and I will be using C#.
Could be a lot of things. Turn on a profiler and find out where the time is spent. Then adjust your question!
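If the profiler points at the per-frame work, it is worth looking at how each frame is handled. Below is a minimal sketch of the usual AForge.Video.DirectShow pattern (the CameraFeed class and its member names are mine, not taken from the sample): every NewFrame event delivers a full Bitmap at the camera's frame rate, so two cameras at 30 fps means 60 bitmaps per second to clone, convert and paint. Lowering the capture resolution or frame rate, disposing each frame promptly, and repainting the UI on a timer rather than on every event are the usual ways to bring CPU down.

```csharp
using System.Drawing;
using AForge.Video;
using AForge.Video.DirectShow;

class CameraFeed
{
    private VideoCaptureDevice camera;
    private Bitmap latestFrame;

    // The moniker string comes from enumerating FilterInfoCollection(FilterCategory.VideoInputDevice).
    public void Start(string moniker)
    {
        camera = new VideoCaptureDevice(moniker);
        // Picking a modest resolution/frame rate on the device is the cheapest CPU win;
        // the exact capability to choose depends on the camera.
        camera.NewFrame += OnNewFrame;
        camera.Start();
    }

    private void OnNewFrame(object sender, NewFrameEventArgs eventArgs)
    {
        // eventArgs.Frame is only valid inside this handler, so it must be cloned;
        // disposing the previous clone avoids piling up large bitmaps.
        Bitmap frame = (Bitmap)eventArgs.Frame.Clone();
        Bitmap old = latestFrame;
        latestFrame = frame;
        if (old != null)
        {
            old.Dispose();
        }
        // Hand latestFrame to the UI on its own timer instead of painting on
        // every event if the display can't keep up with the camera.
    }

    public void Stop()
    {
        if (camera != null)
        {
            camera.SignalToStop();
            camera.WaitForStop();
        }
    }
}
```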
I have a .NET REST API written in C# with MVC 5.
The API uses a repository that fire-hoses the necessary data from the database, then analyses it and transforms it into a usable model. The transformation uses a lot of LINQ to shape the data.
On the dev box (Windows 10, i7 with 8 cores @ 3.7 GHz, 32 GB RAM) it takes 10 seconds for a large test range.
Running on a VM (Windows Server 2008 R2, virtual Xeon with 8 virtual cores @ 2.99 GHz, 8 GB RAM) it takes 300 seconds (5 minutes).
Neither exhausts memory, and neither is CPU-bound (CPU touches 50% on the VM and is barely noticeable on the dev box).
Same database, code etc.
The API uses async calls to load some peripheral data while it's doing its primary job, so I could put some logging in to measure the time, I guess.
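A minimal sketch of that kind of timing (the ReportBuilder class, its method names and types are placeholders standing in for the real repository and model, not your actual code): wrapping the main stages in a Stopwatch makes the logs show whether the database pull, the LINQ transformation or the peripheral async load is what blows up on the VM.

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

// Placeholder types standing in for the real repository output and models.
public class RawData { }
public class PeripheralData { }
public class ReportModel { public PeripheralData Peripheral { get; set; } }

public class ReportBuilder
{
    public async Task<ReportModel> BuildReportAsync()
    {
        var sw = Stopwatch.StartNew();

        // Kick off the peripheral load while the primary work runs.
        Task<PeripheralData> peripheralTask = LoadPeripheralDataAsync();

        RawData raw = LoadRawDataFromRepository();           // the fire-hose database pull
        Trace.WriteLine("Repository load: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        ReportModel model = TransformWithLinq(raw);          // the LINQ-heavy transformation
        Trace.WriteLine("Transformation: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        model.Peripheral = await peripheralTask;
        Trace.WriteLine("Awaiting peripheral data: " + sw.ElapsedMilliseconds + " ms");

        return model;
    }

    // Stubs so the sketch compiles; the real code calls the repository here.
    private RawData LoadRawDataFromRepository() { return new RawData(); }
    private ReportModel TransformWithLinq(RawData raw) { return new ReportModel(); }
    private Task<PeripheralData> LoadPeripheralDataAsync() { return Task.FromResult(new PeripheralData()); }
}
```

Comparing those three numbers between the dev box and the VM should narrow down whether the slowdown is in the database round-trip, the in-memory LINQ work, or the async I/O.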
What are the common techniques for tackling this problem? Can the CPU speed really be making that much difference?
thanks
EDIT:
Following Pieter's comment, I've increased the VM's memory to 12 GB and monitored the VM's performance while executing the operation. It's not the best visual aid (a screenshot of Task Manager at the end of the operation), but what it did show was that the vCPUs never really went above ~60%, and memory, apart from a few MB at the beginning of the request, never went above 2.7 GB.
If IIS / .NET / my operation is not maxing out the resources, what is taking so long?
I have been having some difficulty identifying the right configuration for effectively scaling my cloud service. I am assuming we just have to use the Scale section of the management portal, and that nothing needs to be done programmatically?
My current configuration for the Web Role is:
Medium-sized VM (4 GB RAM)
Autoscale on CPU
Instance range: 1 to 10
Target CPU: 50 to 80
Scale up and down by 1 instance at a time
Scale up and down wait time: 5 minutes
I used http://loader.io/ to do load testing by sending concurrent requests to an API, and it could support only 50-100 users. After that I was getting timeout (10 second) errors.
My app will be targeting millions of users at huge scale, so I am not really sure how I can efficiently scale to handle that much load on the server.
I think the problem could be the scale-up wait time of 5 minutes (I think that's very high), and in the management portal the lowest option is 5 minutes, so I don't know how I can reduce it.
Any suggestions?
Azure's auto-scaling engine examines 60-minute CPU-utilization averages every 5 minutes. This means that every 5 minutes it has a chance to decide whether your CPU utilization is too high and scale you up.
If you need something more robust, I'd recommend thinking about the following:
CPU usage is rarely a good indicator for scaling websites. Look into requests/sec or requests/current instead of CPU utilization.
Consider examining the need to scale more frequently (every 1 minute?). The Azure portal cannot do this; you'll need either WASABi or AzureWatch for that.
Depending on your usage patterns, consider looking at shorter time averages to make the decision (i.e. an average over 20 minutes, not 60 minutes). Once again, your choices here are WASABi or AzureWatch.
Consider looking at the rate of increase in the metrics and not just the latest averages themselves, e.g. requests/sec rose by 20% in the last 20 minutes. Once again, the Azure autoscaling engine cannot do this; consider either WASABi (which may do this) or AzureWatch (which definitely can).
WASABi is an application block from Microsoft (i.e. a DLL) that you'll need to configure, host and monitor somewhere yourself. It is pretty flexible, and since it is open source you can override whatever functionality you need.
AzureWatch is a third-party managed service that monitors, autoscales and heals your Azure roles, Virtual Machines, Websites, SQL Azure, etc. It costs money, but you let someone else do all the dirty work.
I recently wrote a blog post comparing the three products.
Disclosure: I'm affiliated with AzureWatch
HTH
Another reason why the minimum time is 5 minutes is that it takes Azure some time to assign additional machines to your Cloud Service and replicate your software onto them. (Web Apps don't have that 'problem'.)
In my work as a SaaS admin I have found that for Cloud Services this ramp-up time after scaling can be around 3-5 minutes for our software package.
If you want to configure scaling within the Azure portal, my suggestion would be to significantly lower your CPU range. As Igorek mentioned, Azure scaling looks at the average over the last 60 minutes.
If a Cloud Service runs at 5% CPU most of the time and then suddenly peaks at 99%, it takes a while for the average to climb and trigger your scale settings. Leaving the threshold at 80% will cause scaling to happen far too late.
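To put rough numbers on that (a hypothetical illustration, not measured data): if the service sat at 5% CPU for 40 minutes and then ran flat out at 99% for the next 20, the 60-minute average would still only be about (40 × 5 + 20 × 99) / 60 ≈ 36%, nowhere near an 80% threshold; with those figures the average would not cross 80% until roughly 48 minutes of sustained load.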
Real-life example:
I manage a portal that runs some CPU-intensive calculations. At normal usage our Cloud Services tend to run at 2-5% CPU, but on rare occasions we've seen them go up to 99% and stay there for a while.
My first scaling attempt was 2 instances, scaling up by 2 at 80% average CPU, but it took around 40 minutes for the event to trigger because the average CPU did not rise that fast. Right now I have everything set to scale when the average CPU goes over 25%, and what I see is that our services scale up after 10-12 minutes.
I'm not saying 25% is the magic number; I'm saying keep in mind that you're working with an average over 60 minutes.
The second thing is that the Azure portal only exposes a limited set of scaling options; scaling can be configured in greater detail through PowerShell or the REST API. The 60-minute interval over which the average is calculated, for example, can be lowered.
I have an ASP.NET project with many reports. Some of my reports involve heavy calculations that I perform in memory using LINQ. When I test these reports on my client machine, CPU usage is about 25%.
My question is: why does CPU usage not rise to 80% or more?
When I publish this project to the server, will it behave the same way?
You have 4 cores (or 2 hyper-threaded cores), meaning a single thread can take up to 25% of the total computing power (which shows as 25% CPU in Task Manager).
Your calculation is probably single-threaded.
Can you break the calculation into several threads? That would spread the load more evenly across the cores of your CPU.
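A minimal sketch of one way to do that with PLINQ (the ReportRow/ReportResult types and the calculation body are placeholders, and this assumes each row of the report can be computed independently): AsParallel() lets the existing LINQ query fan out across all cores, which should push CPU usage well past the single-core 25% ceiling.

```csharp
using System.Collections.Generic;
using System.Linq;

// Placeholder types standing in for the real report model.
public class ReportRow { public decimal Value { get; set; } }
public class ReportResult { public decimal Total { get; set; } }

public static class ReportCalculator
{
    public static List<ReportResult> Calculate(IEnumerable<ReportRow> rows)
    {
        return rows
            .AsParallel()                              // fan the work out across all cores
            .Select(row => ExpensiveCalculation(row))  // each row is processed independently
            .ToList();
    }

    private static ReportResult ExpensiveCalculation(ReportRow row)
    {
        // ... the existing per-row LINQ/maths goes here ...
        return new ReportResult { Total = row.Value };
    }
}
```

AsParallel() only pays off when the per-item work is substantial and the items are independent; if the results must keep their original order, add .AsOrdered() right after AsParallel().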
I'm trying to run hundreds of instances of the same app simultaneously (using C#), and after about 200 instances the GUI starts to slow down dramatically, to the point where the load time of the next instance climbs to 20 seconds (from 1 second).
The test machine is:
Xeon 5520
12 GB RAM
Windows 2008 Web 64-bit
At max load (200 instances) the CPU is at about 20% and RAM at 45%, so I'm sure it's not a hardware issue.
I have already tried configuring the session size and SharedSection values in the Windows registry, but it doesn't seem to help.
I also tried running the app in the background and across multiple (different) sessions, and it's still the same (I thought it might be a per-session limitation).
When the slowdown occurs in one session, for example, I can log in to another session and that desktop works without a problem (while the first desktop becomes unusable).
My question is: is there a way to strip the GDI objects, or maybe eliminate the use of the GUI altogether? Or is it a Windows limitation?
P.S. I can't change the app since it's third-party.
Thanks in advance.
With 200 instances running, the constant context switching is probably hurting performance. Context switching isn't counted in CPU load.
Edit: whoops, wrong link.
Try monitoring context switching on your system
http://technet.microsoft.com/en-us/library/cc938606.aspx
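A small sketch of watching that counter from C# (this uses the standard "System" performance counter category; reading it may require administrator rights or membership in the Performance Monitor Users group): sampling "Context Switches/sec" while instances are being launched would show whether the figure explodes around the 200-instance mark.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ContextSwitchMonitor
{
    static void Main()
    {
        // "Context Switches/sec" lives in the machine-wide "System" category.
        using (var counter = new PerformanceCounter("System", "Context Switches/sec"))
        {
            counter.NextValue();            // the first read is always 0; prime the counter
            for (int i = 0; i < 60; i++)    // sample once per second for a minute
            {
                Thread.Sleep(1000);
                Console.WriteLine("{0:T}  {1:N0} context switches/sec",
                                  DateTime.Now, counter.NextValue());
            }
        }
    }
}
```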
I doubt it's GDI: if you run out of GDI handles/resources you'll notice large chunks of your windows failing to redraw, rather than everything slowing down.
The most likely reason for a sudden drop in performance is that you are maxing out your RAM and thrashing virtual memory as all your processes fight for CPU time. Check memory usage, and if it's high, see whether you can reduce your application's footprint, or apply a "hardware fix" by installing more RAM. Alternatively, add sleeps into your apps where possible so they aren't demanding constant timeslices from the CPU (and thus constantly being paged back in from virtual memory).
I am developing an application (C# 3.5) which executes multiple parallel jobs. When I try one or two jobs in parallel, CPU utilization is low, around 2%. When I run 50 jobs in parallel, CPU utilization stays fine for some 20 minutes (less than 10%).
But then it suddenly increases to 99% and the PC hangs. I am using DB operations and LINQ operations. If you can give me some ideas, they might help me tune my application.
Also, is there any .NET tool which identifies code with high CPU utilization?
I know it's odd to just ask like this what might be causing the high CPU utilization.
Edit:
For a single job the CPU utilization does not increase; it only happens with multiple jobs. I don't know what is causing the high CPU utilization. Any help is appreciated.
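One common tuning technique when a batch of parallel jobs eventually saturates the CPU is to cap how many run at once. Below is a minimal, .NET 3.5-compatible sketch using a Semaphore (the QueueJob/RunJob names are placeholders, and whether too many concurrent jobs is actually the cause here is an assumption only a profiler can confirm):

```csharp
using System;
using System.Threading;

class ThrottledJobRunner
{
    // Allow at most 8 jobs to execute concurrently; the rest wait their turn.
    private static readonly Semaphore throttle = new Semaphore(8, 8);

    public static void QueueJob(int jobId)
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            throttle.WaitOne();
            try
            {
                RunJob(jobId);   // placeholder for the real DB + LINQ work
            }
            finally
            {
                throttle.Release();
            }
        });
    }

    private static void RunJob(int jobId)
    {
        Console.WriteLine("Running job " + jobId);
        // ... actual job body ...
    }
}
```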
Such a tool is called a profiler. You can see some profiler recommendations at What Are Some Good .NET Profilers?