Throttle CPU Usage of Application - c#

A new game server just came out which our company would like to offer for rental. However, the game developers did not create any sort of hibernation mode to shut down the physics when no players are connected, so an empty server is eating 30% or so CPU.
I found this game panel addon which limits the CPU usage of applications.
I have written a few small apps in C# .NET for our company to help improve our services and I am wondering how I would go about creating something like this. Is it possible?

You might consider simply lowering the priority of the process. This won't limit CPU usage directly, but it will cause the process's threads to be scheduled less often than threads of processes with normal and higher priorities.
Check System.Diagnostics.Process.PriorityClass (doc)
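A minimal sketch of what that looks like from C# (the process name "GameServer" is just a placeholder for the actual server executable):

using System.Diagnostics;

class LowerPriority
{
    static void Main()
    {
        foreach (var proc in Process.GetProcessesByName("GameServer")) // hypothetical name
        {
            // BelowNormal (or Idle) threads only get scheduled when
            // higher-priority threads don't need the CPU; note this does
            // not cap total usage on an otherwise idle machine.
            proc.PriorityClass = ProcessPriorityClass.BelowNormal;
        }
    }
}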

My guess is that the server app is polling instead of being event driven. Polling will keep using CPU until that code is converted to be event driven; an event-driven application sleeps until it receives an event from the OS that it needs to process, while a polling loop just spins looking for an event and wastes the CPU. Reducing the priority of the process will not actually reduce CPU usage in any way. This app needs to be rewritten to be more CPU efficient.
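To illustrate the difference, here is a toy sketch of the two styles; the polling loop burns a core, while the event-driven loop blocks at ~0% CPU until signaled:

using System.Threading;

class PollingVsEvents
{
    static readonly ManualResetEventSlim Signal = new ManualResetEventSlim(false);
    static volatile bool _workReady;

    static void PollingLoop()
    {
        while (true)
        {
            if (_workReady) { _workReady = false; /* handle work */ }
            // No wait here: this loop alone pegs a core at 100%.
        }
    }

    static void EventDrivenLoop()
    {
        while (true)
        {
            Signal.Wait();   // thread sleeps here, using no CPU, until Set() is called
            Signal.Reset();
            /* handle work */
        }
    }
}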

This answer might be interesting for you and that's how I would do it.
How to restrict the CPU usage a C# program takes?
I don't know if you can do that, but you can change the thread priority of the executing thread via the Priority property. You would set that by:
Thread.CurrentThread.Priority = ThreadPriority.Lowest;
Also, I don't think you really want to cap it. If the machine is otherwise idle, you'd like it to get on with the task, right? ThreadPriority helps communicate this to the scheduler.

I'm assuming the game server is threaded. If so, you may be able to programmatically force CPU affinity on the application. If you had a way to tell whether the game had users, e.g. whether UDP packets are coming in on the assigned port, you could say "hey, no one is connected" and have your program force all working threads onto the same core.
So, if you had an 8 core cpu and all the threads were on one core, then at most it would use 12.5% cpu.
Once you see packets coming in on the assigned port, you could assign the affinity back to all cores.
You could take this a step further and ask, "are there any idle games?" If there are, and they are all on, let's say, core 7, you could run an infinite loop of the HLT instruction at a higher priority than the game, but force that thread to sleep periodically so it doesn't completely starve the game.
This would cause the CPU to use less power, but would be a lot more work and have a higher chance of problems.
I would stick to forcing affinity only, and just let all the idle games share some given core.
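A sketch of the affinity idea, assuming an 8-core machine; the process handle and the decision of when the server is idle are placeholders you would fill in (e.g. with the UDP port check described above):

using System;
using System.Diagnostics;

class AffinityThrottle
{
    static void SetIdle(Process gameServer, bool idle)
    {
        // ProcessorAffinity is a bitmask: bit n set = may run on core n.
        gameServer.ProcessorAffinity = idle
            ? (IntPtr)0b0000_0001   // core 0 only -> at most 12.5% of an 8-core box
            : (IntPtr)0b1111_1111;  // all 8 cores again once packets arrive
    }
}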

Related

Is Thread.Sleep now using high res timer or have Windows 10 changed default system clock frequency?

Thread.Sleep used to be tied to the system clock, which ticked at an interval of roughly 16 ms, so anything below 16 ms would yield a 16 ms sleep. That behaviour seems to have changed somewhere down the line.
What has changed? Is Thread.Sleep no longer tied to the system clock but to the high res timer? Or has the default system clock frequency increased in Windows 10?
edit:
It seems people are interested in knowing why I use Thread.Sleep. That's actually out of the scope of the question; the question is why the behavior has changed. Anyway, I noticed the change in my open source project FreePIE: https://andersmalmgren.github.io/FreePIE/
It's an input/output emulator controlled by the end user using IronPython. It runs in the background and has no natural interrupts, so I need to throttle the scripting thread so it does not starve an entire core.
Thanks to Hans Passant's comment, which I at first missed, I found the problem. It turns out Nvidia's driver suite is the troublemaker.
Platform Timer Resolution: Outstanding Timer Request. A program or service has requested a timer resolution smaller than the platform maximum timer resolution. Requested Period: 10000. Requesting Process ID: 13396. Requesting Process Path: \Device\HarddiskVolume2\Program Files\NVIDIA Corporation\NVIDIA GeForce Experience\NVIDIA Share.exe
This is so idiotic on so many levels. In the long run it is even bad for the environment, since any computer with Nvidia hardware will use more power.
edit: Hans' comment, relevant part:
Too many programs want to mess with it; on top of the list is a free product of a company that sells a mobile operating system. Run powercfg /energy from an elevated command prompt to find the evildoer, "Platform Timer Resolution" in the generated report.
The newer hardware does have a more precise timer, indeed. But the purpose of Thread.Sleep is not to be precise, and it should not be used as a timer.
It is all in the documentation:
If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
Also very important: after the time elapses, your thread is not automatically run. Instead, it is marked as ready to run. This means it will get CPU time only when there are no higher-priority threads that are also ready to run, and not before the currently running threads consume their quantum or enter a waiting state.
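You can observe the effective resolution yourself with a quick sketch like this; on the default 15.6 ms clock the printed values cluster around 15, while with a raised platform timer (e.g. the NVIDIA case above) they drop to 1-2:

using System;
using System.Diagnostics;
using System.Threading;

class SleepResolution
{
    static void Main()
    {
        var sw = new Stopwatch();
        for (int i = 0; i < 5; i++)
        {
            sw.Restart();
            Thread.Sleep(1);   // request 1 ms
            sw.Stop();
            Console.WriteLine($"Slept {sw.ElapsedMilliseconds} ms");
        }
    }
}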

RabbitMQ: erl.exe taking high CPU usages

I have implemented RabbitMQ in my application and it's running on Windows Server 2008. The problem is that erl.exe takes high CPU: sometimes it reaches 40-45%, and even when idle (not processing any queue) it takes at least 4-15% CPU.
What could be the reason for the high CPU usage? Is there any setting or anything else I need to do?
You say that even when not processing a queue it is still at 4-15%, but is your application running? If you haven't already, try monitoring erl.exe while no application is using Rabbit.
One thing that comes to mind is that you might be using QueueingBasicConsumer in a loop, and that could be contributing to the CPU usage. If QueueingBasicConsumer is what is causing the hit, try substituting it with EventingBasicConsumer (so that you don't busy-wait) and see if you get an improvement.
Also, how is your application using Rabbit? According to the documentation, every IConnection is backed by a background thread, and if you're creating a bunch of connections in your application it could be another reason for the slowdown.
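For reference, a minimal event-driven consumer sketch (this assumes the RabbitMQ .NET client 6.x API and a queue named "myqueue"; adjust for your setup):

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Consumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            // This callback fires only when a message arrives,
            // so the thread sleeps instead of spinning.
            Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
        };
        channel.BasicConsume(queue: "myqueue", autoAck: true, consumer: consumer);
        Console.ReadLine(); // keep the process alive
    }
}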

Infinite looping consumes 100% CPU

I am stuck in a situation where I need to generate a signal at a defined frequency of some Hz. I have tried multimedia timers and everything else available on the internet, but so far an infinite loop with some if-else conditions has given me the best results. The problem with this approach is that it consumes almost all of the CPU, leaving no room for other applications to work properly.
I need an algorithm that generates a frequency from some Hz up to kHz.
I am using the Windows platform with C#.
You can't accurately generate a signal of a fixed frequency on a non-realtime platform. At any moment, a high-priority thread from the same or another process could block the execution of your thread; e.g. when the GC kicks in, worker threads are suspended.
That being said, what is the jitter you are allowed to have and what is the highest frequency you need to support?
The way I would approach the problem would be to generate a "sound wave" and output in on the sound card.
There are ways to access your sound card from C#, e.g. using XNA.
As others have pointed out, using the CPU for that isn't a good approach.
Use Thread.Sleep(delay); in your loop. It reduces processor usage.
As you are generating a timer, you would want to sleep based on the period of your frequency. Run your frequency generator in its own thread and call
Thread.Sleep(yourFrequencyPeriodMs);
in each iteration of the period.
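Putting that together, a rough sketch of a low-frequency tick loop that yields the CPU between ticks (the target frequency is an assumption, and accuracy is limited by the ~15.6 ms default system timer, so this only suits low frequencies):

using System;
using System.Threading;

class TickLoop
{
    static void Main()
    {
        const double frequencyHz = 10.0;             // assumed target frequency
        int periodMs = (int)(1000.0 / frequencyHz);  // 100 ms per tick

        while (true)
        {
            DoTick();                // your signal generation goes here
            Thread.Sleep(periodMs);  // sleep instead of spinning at 100% CPU
        }
    }

    static void DoTick() => Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
}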

how much cpu should a single thread application use?

I have a single thread console application.
I am confused about the concept of CPU usage. Should a good single-threaded application use ~100% of the CPU (since it is available), or should it avoid using lots of CPU (since that can cause the computer to slow down)?
I have done some research but haven't found an answer to my confusion. I am a student and still learning so any feedback will be appreciated. Thanks.
It depends on what the program needs the CPU for. If it has to do a lot of work, it's common to use all of one core for some period of time. If it spends most of its time waiting for input, it will naturally tend to use the CPU less frequently. I say "less frequently" instead of "less" because:
Single threaded programs are, at any given time, either running, or they're not, so they are always using either 100% or 0% of one CPU core. Programs that appear to be only using 50% or 30% or whatever are actually just balancing periods of computational work with periods of waiting for input. Devices like hard drives are very slow compared to the CPU, so a program that's reading a lot of data from disk will use less CPU resources than one that crunches lots of numbers.
It's normal for a program to use 100% of the CPU sometimes, often even for a long time, but it's not polite to use it if you don't need it (i.e. busylooping). Such behavior crowds out other programs that could be using the CPU.
The same goes for the hard drive. People forget that the hard drive is a finite resource too, mostly because Task Manager doesn't show hard drive usage as a percentage. It's difficult to gauge disk usage as a percentage of the total, since disk accesses don't have a fixed speed, unlike the processor. However, it takes much longer to move 1 GB of data on disk than to move 1 GB of data in memory with the CPU, and the performance impact of HDD hogging is as bad as or worse than CPU hogging: it tends to slow your system to a crawl without showing any CPU usage. You have probably seen it before.
Chances are that any small academic programs you write at first will use all of one core for a short period of time, and then wait. Simple stuff like prompting for a number at the command prompt is the waiting part; doing whatever operation the assignment calls for afterwards is the active part.
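A tiny example of the two phases (waiting at ~0% CPU, then pegging one core until the work is done):

using System;

class CpuPhases
{
    static void Main()
    {
        Console.Write("Enter n: ");
        int n = int.Parse(Console.ReadLine() ?? "1");  // blocking here costs ~0% CPU

        long sum = 0;
        for (long i = 0; i < (long)n * 1_000_000; i++)
            sum += i;                                  // this loop runs one core at 100%

        Console.WriteLine(sum);
    }
}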
It depends on what it's doing. Different types of operations have different needs.
There is no non-subjective way to answer this question that applies across the board.
The only answer that's true is "it should use only the amount of CPU necessary to do the job, and no more."
In other words, optimize as much as is reasonable. In general, the lower the CPU usage the better: the program will perform faster and annoy your users less.
Typically an algorithmically heavy task, such as predicting the weather, will have to be managed by the OS, because it will need all of the CPU for as much time as it is allowed to run (until it's done).
On the other hand, a graphical application with a static user interface, like a Windows Forms application for storing a bit of data for record-keeping, should require very low CPU usage, since it's mainly waiting for the user to do something.

Will Multi threading increase the speed of the calculation on Single Processor

On a single processor, will multi-threading increase the speed of a calculation? As we all know, multi-threading is used to increase user responsiveness, achieved by separating the UI thread from the calculation thread. But let's talk only about console applications: will multi-threading increase the speed of the calculation? Do we get the result faster when we calculate with multiple threads?
What about on multiple cores: will multi-threading increase the speed or not?
Please help me. If you have any material for learning more about threading, please post it.
Edit:
I have been asked a question: at any given time, only one thread is allowed to run on a single core. If so, why do people use multithreading in a console application?
Thanks in advance,
Harsha
In general terms, no it won't speed up anything.
Presumably the same work overall is being done, but now there is the overhead of additional threads and context switches.
On a single processor with HyperThreading (two virtual processors), the answer becomes "maybe".
Finally, even though there is only one CPU, perhaps some of the threads can be pushed to the GPU or other hardware? This is getting away from the "single processor" scenario, but it could technically be a way of achieving a speed increase from multithreading on a single-core PC.
Edit: your question now mentions multithreaded apps on a multicore machine.
Again, in very general terms, this will provide an overall speed increase to your calculation.
However, the increase (or lack thereof) will depend on how parallelizable the algorithm is, the contention for memory and cache, and the skill of the programmer when it comes to writing parallel code without locking or starvation issues.
Few threads on 1 CPU:
may increase performance if you can continue with another thread instead of waiting for an I/O-bound operation
may decrease performance if, let's say, there are too many threads and work is wasted on context switching
Few threads on N CPUs:
may increase performance if you are able to cut the job into independent chunks and process them independently (see the sketch at the end of this answer)
may decrease performance if you rely heavily on communication between threads and the bus becomes a bottleneck.
So actually it's very task specific: you can parallelize some things very easily, while for others it's almost impossible. Perhaps it's a bit of advanced reading for a new person, but there are 2 great resources on this topic in the C# world:
Joe Duffy's web log
PFX team blog - they have a very good set of articles for parallel programming in .NET world including patterns and practices.
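As a concrete example of cutting a job into independent chunks, here is a minimal Parallel.For sketch using the TPL that the PFX team blog covers (the sizes are arbitrary; this only pays off on multiple cores and when iterations share no mutable state):

using System;
using System.Threading.Tasks;

class ChunkedSum
{
    static void Main()
    {
        var data = new double[10_000_000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        object gate = new object();
        double total = 0;

        // Each worker accumulates a private partial sum over its own range,
        // then the partials are combined once per partition under a lock.
        Parallel.For(0, data.Length,
            () => 0.0,
            (i, state, partial) => partial + data[i],
            partial => { lock (gate) total += partial; });

        Console.WriteLine(total);
    }
}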
What is your calculation doing? You won't be able to speed it up by using multithreading if it is processor bound, but if for some reason your calculation writes to disk or waits for some other sort of IO, you may be able to improve performance using threading. However, when you say "calculation" I assume you mean some sort of processor-intensive algorithm, so adding threads is unlikely to help, and could even slow you down as the context switching between threads adds extra work.
If the task is compute bound, threading will not make it faster unless the calculation can be split into multiple independent parts. Even so, you will only achieve performance gains if you have multiple cores available. Given the background in your question, it will just add overhead.
However, you may still want to run any complex and long running calculations on a separate thread in order to keep the application responsive.
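A minimal sketch of that pattern with Task.Run (the calculation here is a stand-in for your own):

using System;
using System.Threading.Tasks;

class Responsive
{
    static async Task Main()
    {
        Console.WriteLine("Starting long calculation in the background...");
        // The calling thread stays free to handle input/UI while this runs.
        long result = await Task.Run(() => LongCalculation(500_000_000));
        Console.WriteLine($"Done: {result}");
    }

    static long LongCalculation(long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++) sum += i;  // CPU-bound work
        return sum;
    }
}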
No, no and no.
Unless you write parallelized code to take advantage of multiple cores, multithreading will always be slower when there are no blocking operations involved.
Exactly like the user input example, one thread might be waiting for a disk operation to complete, and other threads can take that CPU time.
As described in the other answers, multi-threading on a single core won't give you any extra performance (hyperthreading notwithstanding). However, if your machine sports an Nvidia GPU you should be able to use CUDA to push calculations to the GPU. See http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx and C#: Perform Operations on GPU, not CPU (Calculate Pi).
The answers above cover most of it.
Running multiple threads on one processor can increase performance if you manage to get more work done at the same time, instead of letting the processor wait between different operations. However, it can also be a severe loss of performance due to, for example, synchronization, or because the processor is overloaded and can't keep up with the requirements.
As for multiple cores, threading can improve performance significantly. However, much depends on finding the hotspots and not overdoing it. Using threads everywhere, with the synchronization they require, can even lower performance. Optimizing with threads on multiple cores takes a lot of up-front study and planning to get a good result. You need, for example, to think about how many threads to use in different situations: you do not want threads sitting and waiting for information held by another thread.
http://www.intel.com/intelpress/samples/mcp_samplech01.pdf
https://computing.llnl.gov/tutorials/parallel_comp/
https://computing.llnl.gov/tutorials/pthreads/
http://en.wikipedia.org/wiki/Superscalar
http://en.wikipedia.org/wiki/Simultaneous_multithreading
I have been doing some intensive C++ mathematical simulation runs using 24 core servers. If I run 24 separate simulations in parallel on the 24 cores of a single server, then I get a runtime for each of my simulations of say X seconds.
The bizarre thing I have noticed is that, when running only 12 simulations, using 12 of the 24 cores with the other 12 idle, each simulation runs in Y seconds, where Y is much greater than X! Viewing the Task Manager graph of processor usage, it is obvious that a process does not stick to one core but alternates between a number of cores; that is to say, the switching between cores slows down the calculation.
The way I maintained the runtime when running only 12 simulations was to run another 12 "junk" simulations on the side, using the remaining 12 cores!
Conclusion: when using multiple cores, use them all at 100%; at lower utilisation, the runtime increases!
For a single-core CPU, the performance actually depends on the job you are referring to. In your case, a calculation done by the CPU, overclocking would help if your motherboard supports it; otherwise there is no way for the CPU to do calculations faster than its clock speed allows.
For a multi-core CPU, as the answers above say, performance may increase if the program is properly designed and all cores are fully used.
On a single-core CPU, if the threads are implemented at user level, multithreading won't help when a thread makes blocking system calls, like an I/O operation, because the kernel doesn't know about user-level threads.
So if the process does I/O, you can implement the threads in kernel space and then use different threads for different jobs.
(This answer is theory-based.)
Even a CPU-bound task might run faster multi-threaded if it is properly designed to take advantage of cache memory and the pipelining done by the processor. Modern processors spend a lot of time twiddling their thumbs, even when nominally fully "busy".
Imagine a process that uses a small chunk of memory very intensively. Processing the same chunk of memory 1000 times would be much faster than processing 1000 chunks of similar memory.
You could certainly design a multi threaded program that would be faster than a single thread.
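A rough benchmark sketch of that cache effect: summing one small array 1000 times versus striding once through memory 1000 times that size (sizes are arbitrary assumptions; absolute timings vary by machine):

using System;
using System.Diagnostics;

class CacheDemo
{
    static void Main()
    {
        const int small = 16 * 1024;          // 64 KB of ints, fits in cache
        const int passes = 1000;
        var hot = new int[small];
        var cold = new int[small * passes];   // ~64 MB, far larger than cache

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int p = 0; p < passes; p++)
            for (int i = 0; i < small; i++)
                sum += hot[i];                // same chunk every pass, stays cached
        Console.WriteLine($"hot:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (long i = 0; i < cold.LongLength; i++)
            sum += cold[i];                   // fresh memory, cache misses dominate
        Console.WriteLine($"cold: {sw.ElapsedMilliseconds} ms ({sum})");
    }
}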
Threads don't increase performance. Threads sacrifice performance in favor of keeping parts of the code responsive.
The only exception is if you are doing a computation that is so parallelizable that you can run different threads on different cores (which is the exception, not the rule).
