Why is the CPU usage of my ASP.NET app often about 25%? - C#

I have an ASP.NET project with many reports. Some of the reports involve heavy calculations that I perform in memory using LINQ. When I test these reports on my client machine, CPU usage sits at about 25%.
My question is: why doesn't CPU usage rise to 80% or more?
When I publish this project to the server, will it show the same behaviour?

You have 4 cores (or 2 hyper-threaded cores), so a single thread can use at most 25% of the total computing power (which is what Task Manager shows as 25% CPU).
Your calculation is probably single-threaded.

Can you possibly break your calculation into several threads? That'll spread the load across the cores of your CPU a little more evenly.
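If the report is built from an in-memory LINQ query, one low-effort way to do that is PLINQ. A minimal sketch, where ReportRow, Year, and Amount are made-up stand-ins for whatever the real report works on:

    using System;
    using System.Linq;

    class ReportRow { public int Year; public decimal Amount; }

    class PlinqReport
    {
        static void Main()
        {
            // Made-up in-memory data standing in for the real report source.
            ReportRow[] rows = Enumerable.Range(0, 1000000)
                .Select(i => new ReportRow { Year = 2000 + i % 20, Amount = i % 97 })
                .ToArray();

            // AsParallel() lets PLINQ partition the aggregation across all cores,
            // so the work is no longer capped at one core (25% on a quad-core box).
            var totalsByYear = rows.AsParallel()
                .GroupBy(r => r.Year)
                .Select(g => new { Year = g.Key, Total = g.Sum(r => r.Amount) })
                .OrderBy(x => x.Year)
                .ToList();

            Console.WriteLine(totalsByYear.Count);
        }
    }

One caveat: on a busy ASP.NET server, letting one report saturate every core can starve other requests, so measure before and after.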

Related

Processors, cores, and various other capabilities in C#

I am writing a rather resource-intensive application, and I am trying to find a way to let the user of the app decide exactly what resources it should use.
Part of the problem is knowing what the CPUs are capable of. I imagine that if the user wants the app to use 5 cores on an AMD Phenom II X6, I can get away with throwing everything at those 5 cores. On the other hand, if the processor is an Intel i7 or an AMD FX-8510, some of those cores share various components. What should I do to make sure I don't accidentally schedule work onto a hyper-threaded sibling core, or something to that effect?
I am trying to avoid the scenario where the CPU chokes because I am throwing everything at the wrong part of it (an old problem). Any ideas?
The operating system's thread scheduler already has all the information needed for efficient CPU resource allocation: the number of physical CPU cores, which logical cores share resources, which cores are parked to save energy, whether hyper-threading is enabled, which cores are running at turbo frequency, what resources other programs are using, and so on. Your program has no knowledge of these things, so the best you can do is let the OS decide how to allocate resources. Even without any special support in your program, the user can limit the resources it uses by setting the process priority or processor affinity.
One thing you can try is to create as many threads as there are CPU cores (returned by Environment.ProcessorCount). If you think that scheduling one thread per physical core (instead of one per logical core) will perform better, you can experiment with the Process.GetCurrentProcess().ProcessorAffinity property, for example setting it to 0x55555555 on CPUs with hyper-threading. But this can also make things worse than before (what if a future CPU has 3 logical cores per physical core instead of 2, or what if another program sets its affinity to the same cores?).
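A rough sketch of both ideas (one thread per logical core, plus the optional affinity experiment); DoWork is just a placeholder for the real computation, and the 0x55555555 mask bakes in the fragile assumption of exactly 2 logical cores per physical core:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CoreCountExample
    {
        static void Main()
        {
            // Logical processor count: includes hyper-threaded cores.
            int logicalCores = Environment.ProcessorCount;
            Console.WriteLine("Logical cores: " + logicalCores);

            // One worker thread per logical core; the OS decides where they run.
            var threads = new Thread[logicalCores];
            for (int i = 0; i < logicalCores; i++)
            {
                threads[i] = new Thread(DoWork);
                threads[i].Start();
            }
            foreach (var t in threads) t.Join();

            // Optional and usually counterproductive: pin the process to every
            // second logical core, assuming exactly 2 logical cores per physical core.
            // Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x55555555;
        }

        // Placeholder for the real CPU-bound work.
        static void DoWork()
        {
            double x = 0;
            for (int i = 0; i < 10000000; i++) x += Math.Sqrt(i);
        }
    }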

Increase performance in Long Operations

I have a file encryption program. When the program is encrypting files, it doesn't exceed 25% CPU usage, hence it is slow.
How can I make the OS give it more CPU time? (WinRAR, for example, reaches 100% CPU load when it compresses files.)
[Edit]: Since I have 4 cores, it doesn't use more than one of them. How can I make it use the rest of the cores?
Unless you are otherwise throttling the application it will use as much CPU as the OS allows it to - which should be up to 100% by default. I would guess that some other resource is the bottleneck.
Are you streaming the data to encrypt from a remote location? From a disk that is for some reason quite slow?
If your tool is a single-threaded program, it only uses one core, and it will only reach 100% on that core if it is doing pure computation (a tight loop). If the tool has to do I/O, it will never reach maximum CPU usage. Also, the 25% you see is measured across all CPU cores; as I recall, there are posts showing how to display the usage of each core separately.
Just in case: if you are using .NET 4.0, rather than trying to assign more CPU load, try the Parallel Framework (PFX). It is optimized for multi-core processors.
Parallel.Invoke(() => DoCompress());
Also, Threading in C# is the best threading related resource in the universe.
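For example, if each file can be encrypted independently, Parallel.ForEach (also part of PFX) spreads the files across cores. A sketch only, where EncryptFile stands in for the program's existing single-file routine:

    using System.IO;
    using System.Threading.Tasks;

    class ParallelEncrypt
    {
        static void EncryptAll(string folder)
        {
            string[] files = Directory.GetFiles(folder);

            // Each file is processed independently; the scheduler uses as many
            // cores as it finds useful, so CPU usage can go well past one core.
            Parallel.ForEach(files, file => EncryptFile(file));
        }

        // Hypothetical stand-in for the existing single-file encryption routine.
        static void EncryptFile(string path)
        {
            // ... read, encrypt, write ...
        }
    }

If the disk turns out to be the bottleneck, as other answers here suggest, this won't help much.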
Sometimes people think a high CPU percent means an efficient program.
If that were so, an infinite loop would be the most efficient of all.
When you have a program that basically processes files off a mechanical hard drive, ideally it should be IO bound, because reading the file simply has to be done.
i.e. The CPU part should be efficient enough that it takes a low percent of time compared to moving the file off disk.
Anything you can do to reduce CPU time will reduce the CPU percent, because I/O takes a larger percentage of the total, and vice-versa.
If you can go back-and-forth between the two, reducing first CPU (program tuning), then I/O (ex. solid-state drive), you can make it really fly.
Then, if the CPU part is still taking longer than you would like, by all means, farm it out over multiple cores.
This has nothing to do with the processor or assigning resources: the tool you use is simply not designed to use all (I guess) 4 cores of the CPU.

MaxDegreeOfParallelism: deciding on the optimal value

Simple question.
How do you decide what the optimal value for MaxDegreeOfParallelism is for any given algorithm? What are the factors to consider and what are the trade-offs?
I think it depends: if all your tasks are CPU bound, the optimal value is probably equal to the number of CPUs in your machine. If your tasks are I/O bound, however, you can start to increase the number.
When the CPU has to switch from one thread to another (a context switch) there is a cost, so if you use too many threads and the CPU is switching all the time, performance drops. On the other hand, if you limit the parameter too much and the operations are long I/O-bound operations, the CPU sits idle a lot of the time waiting for those tasks to complete, and you are not making the most of your machine's resources (which is what multithreading is about).
I think it depends on each algorithm, as Amdahl's law points out; there is no master rule of thumb you can follow. You will have to plan it and tune it :D
Cheers.
For local compute-intensive processes you should try two values:
the number of physical cores or processors
the number of logical cores (physical cores including hyper-threading)
One of these is optimal in my experience. Sometimes hyper-threading slows things down; usually it helps. In C#, use Environment.ProcessorCount to find the number of cores including the hyper-threaded logical cores. The actual number of physical cores is more difficult to determine; check other questions for that.
For processes that have to wait for resources (database queries, file retrieval), scaling up further can help. If a process spends 80% of its time waiting, the rule of thumb is to start 5 threads, so that one thread is busy at any given time, making the most of the 5 x 20% of local CPU each process needs.
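A sketch of trying both values with ParallelOptions; note that dividing the logical count by two is only a guess at the physical core count, exactly as described above:

    using System;
    using System.Threading.Tasks;

    class DegreeOfParallelismTest
    {
        static void Run(Action<int> work, int itemCount)
        {
            int logical = Environment.ProcessorCount;      // includes hyper-threading
            int physicalGuess = Math.Max(1, logical / 2);  // rough guess, not reliable

            foreach (int degree in new[] { physicalGuess, logical })
            {
                var options = new ParallelOptions { MaxDegreeOfParallelism = degree };
                Parallel.For(0, itemCount, options, work);
                // Time each run (e.g. with Stopwatch) and keep whichever degree wins.
            }
        }
    }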

Which will cause more CPU utilization (C#)

I am developing an application (C# 3.5) which executes multiple parallel jobs. When I try one or two jobs in parallel, CPU utilization is low, around 2%. When I run 50 jobs in parallel, CPU utilization stays fine for some 20 minutes (less than 10%).
But then it suddenly jumps to 99% and the PC hangs. I am using DB operations and LINQ operations. Any idea that could point me in the right direction for tuning my application would help.
Also, is there any .NET tool that identifies code with potentially high CPU utilization?
I know it's weird to ask, just like that, what will cause more CPU utilization.
Edit:
For a single job the CPU utilization does not increase; it only happens with multiple jobs. I don't know what causes the higher CPU utilization. Any help is appreciated.
Such a tool is called a profiler. You can see some profiler recommendations at What Are Some Good .NET Profilers?

Will multi-threading increase the speed of a calculation on a single processor?

On a single processor, will multi-threading increase the speed of a calculation? As we all know, multi-threading is commonly used to improve user responsiveness, by separating the UI thread from the calculation thread. But let's talk only about console applications: does multi-threading increase the speed of the calculation? Do we get the calculation result faster when we calculate using multiple threads?
And what about on multiple cores: will multi-threading increase the speed there or not?
Please help me. If you have any material for learning more about threading, please post it.
Edit:
I have been asked a question: at any given time, only one thread is allowed to run on a single core. If so, why do people use multithreading in a console application?
Thanks in advance,
Harsha
In general terms, no it won't speed up anything.
Presumably the same work overall is being done, but now there is the overhead of additional threads and context switches.
On a single processor with HyperThreading (two virtual processors), the answer becomes "maybe".
Finally, even though there is only one CPU, perhaps some of the threads can be pushed to the GPU or other hardware? This is getting away from the "single processor" scenario, but it could technically be a way of achieving a speed increase from multithreading on a single-core PC.
Edit: your question now mentions multithreaded apps on a multicore machine.
Again, in very general terms, this will provide an overall speed increase to your calculation.
However, the increase (or lack thereof) will depend on how parallelizable the algorithm is, the contention for memory and cache, and the skill of the programmer when it comes to writing parallel code without locking or starvation issues.
A few threads on 1 CPU:
may increase performance if you can continue with another thread instead of waiting for an I/O-bound operation
may decrease performance if, say, there are too many threads and work is wasted on context switching
A few threads on N CPUs:
may increase performance if you can cut the job into independent chunks and process them independently
may decrease performance if you rely heavily on communication between threads and the bus becomes a bottleneck
So it really is task specific: some things are very easy to parallelize (see the sketch after the links below), while for others it is almost impossible. It may be a bit of an advanced read for someone new to this, but there are two great resources on the topic in the C# world:
Joe Duffy's web log
PFX team blog - they have a very good set of articles for parallel programming in .NET world including patterns and practices.
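As a minimal illustration of cutting a job into independent chunks, here is a sum over an array where each worker keeps a private partial result and the pieces are merged once at the end:

    using System.Threading.Tasks;

    class ChunkedSum
    {
        static long Sum(int[] data)
        {
            long total = 0;
            object gate = new object();

            // Each worker accumulates a private partial sum over its own chunk,
            // so the threads never share data, then merges it once at the end.
            Parallel.For(0, data.Length,
                () => 0L,                                    // per-thread initial state
                (i, state, partial) => partial + data[i],    // independent work
                partial => { lock (gate) total += partial; } // cheap final merge
            );

            return total;
        }
    }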
What is your calculation doing? You won't be able to speed it up by using multithreading if it is processor bound, but if for some reason your calculation writes to disk or waits for some other sort of I/O, you may be able to improve performance using threading. However, when you say "calculation" I assume you mean some sort of processor-intensive algorithm, so adding threads is unlikely to help and could even slow you down, because context switching between threads adds extra work.
If the task is compute bound, threading will not make it faster unless the calculation can be split in multiple independent parts. Even so you will only be able to achieve any performance gains if you have multiple cores available. From the background in your question it will just add overhead.
However, you may still want to run any complex and long running calculations on a separate thread in order to keep the application responsive.
No, no and no.
Unless you write parallelized code to take advantage of multiple cores, it will always be slower if there are no other blocking operations.
Exactly like the user-input example: one thread might be waiting for a disk operation to complete, and other threads can take that CPU time.
As described in the other answers, multi-threading on a single core won't give you any extra performance (hyperthreading notwithstanding). However, if your machine sports an Nvidia GPU, you should be able to use CUDA to push calculations to the GPU. See http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx and C#: Perform Operations on GPU, not CPU (Calculate Pi).
The answers above cover most of it.
Running multiple threads on one processor can increase performance if you manage to get more work done at the same time, instead of letting the processor wait between different operations. However, it can also cause a severe loss of performance due to, for example, synchronization, or because the processor is overloaded and can't keep up with the requirements.
As for multiple cores, threading can improve performance significantly. However, much depends on finding the hotspots and not overdoing it. Using threads everywhere, with the synchronization that requires, can even lower performance. Optimizing with threads on multiple cores takes a lot of up-front study and planning to get a good result. You need, for example, to think about how many threads to use in different situations; you do not want threads sitting and waiting for information held by another thread.
http://www.intel.com/intelpress/samples/mcp_samplech01.pdf
https://computing.llnl.gov/tutorials/parallel_comp/
https://computing.llnl.gov/tutorials/pthreads/
http://en.wikipedia.org/wiki/Superscalar
http://en.wikipedia.org/wiki/Simultaneous_multithreading
I have been doing some intensive C++ mathematical simulation runs using 24 core servers. If I run 24 separate simulations in parallel on the 24 cores of a single server, then I get a runtime for each of my simulations of say X seconds.
The bizarre thing I have noticed is that when I run only 12 simulations, using 12 of the 24 cores and leaving the other 12 idle, each simulation takes Y seconds, where Y is much greater than X! The Task Manager graph of processor usage makes it obvious that a process does not stick to one core but keeps moving between cores, and this migration between cores slows the calculation down.
The way I maintained the runtime when running only 12 simulations, is to run another 12 "junk" simulations on the side, using the remaining 12 cores!
Conclusion: when using a multi-core machine, use all the cores at 100%; at lower utilisation, the runtime increases!
For a single-core CPU:
Performance actually depends on the kind of job you are referring to.
In your case, for a calculation done by the CPU, overclocking would help if your motherboard supports it. Otherwise there is no way for the CPU to do the calculation faster than its clock speed allows.
For a multi-core CPU:
As the answers above say, if properly designed, performance may increase if all cores are fully used.
On a single-core CPU, if the threads are implemented at user level, multithreading won't help when there are blocking system calls in a thread, such as an I/O operation, because the kernel doesn't know about the user-level threads.
So if the process does I/O, you can implement the threads in kernel space and then use different threads for different jobs.
(This answer is theory-based.)
Even a CPU-bound task might run faster multi-threaded if it is designed to take advantage of cache memory and the pipelining done by the processor. Modern processors spend a lot of time twiddling their thumbs, even when nominally fully "busy".
Imagine a process that uses a small chunk of memory very intensively. Processing the same chunk of memory 1000 times would be much faster than processing 1000 chunks of similar memory.
You could certainly design a multi-threaded program that would be faster than a single thread.
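A rough microbenchmark of that idea; the array sizes are arbitrary and the exact timings will vary from machine to machine:

    using System;
    using System.Diagnostics;

    class CacheLocality
    {
        static void Main()
        {
            const int passes = 1000;
            var small = new double[8192];           // about 64 KB, stays in cache
            var large = new double[8192 * passes];  // same total work, far more memory touched

            double sum = 0;
            var sw = Stopwatch.StartNew();
            for (int p = 0; p < passes; p++)
                for (int i = 0; i < small.Length; i++) sum += small[i];
            Console.WriteLine("Same small chunk " + passes + "x: " + sw.ElapsedMilliseconds + " ms");

            sw.Restart();
            for (int i = 0; i < large.Length; i++) sum += large[i];
            Console.WriteLine("One pass over large array: " + sw.ElapsedMilliseconds + " ms (sum=" + sum + ")");
        }
    }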
Threads don't increase performance. Threads sacrifice performance in favor of keeping parts of the code responsive.
The only exception is if you are doing a computation that is so parallelizable that you can run different threads on different cores (which is the exception, not the rule).
