I tried to monitor a Windows service's CPU usage on a Windows server with 24 processors. I used
_cpuCounter = new PerformanceCounter(
    "Process", "% Processor Time", Process.GetCurrentProcess().ProcessName, true);
_cpuCounter.NextValue() / 100 // most of the time it is more than 100%, which is not correct
But it does not give me the correct process CPU usage.
The results are not incorrect. It's just that NextValue() returns values greater than 100 when your process uses more than one CPU and the per-CPU values add up to "more" than 100%.
For instance, if your process uses 50% of CPU0 and 60% of CPU1, the result is 110% of a single CPU's capacity. The total potentially available CPU cycles across all CPUs/cores amount to 200% on a dual-core machine.
If you want a value that takes the number of processors into account (i.e. mapping 110 to 55 on a dual-core machine), divide the retrieved value by the number of processors instead:
_cpuCounter.NextValue() / Environment.ProcessorCount
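Putting the pieces together, a minimal polling loop might look like the sketch below. Note that NextValue() returns 0 on the first call and needs a sampling interval between reads:
using System;
using System.Diagnostics;
using System.Threading;

class CpuMonitor
{
    static void Main()
    {
        // The instance name is the process name without ".exe"; if several
        // processes share a name, Windows appends #1, #2, ... and this lookup
        // may pick the wrong instance.
        var cpuCounter = new PerformanceCounter(
            "Process", "% Processor Time",
            Process.GetCurrentProcess().ProcessName, true);

        cpuCounter.NextValue(); // first call only primes the counter and returns 0

        while (true)
        {
            Thread.Sleep(1000); // rate counters need an interval between samples
            float raw = cpuCounter.NextValue();                  // 0..(100 * cores)
            float normalized = raw / Environment.ProcessorCount; // 0..100
            Console.WriteLine($"CPU: {normalized:F1}% of total capacity");
        }
    }
}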
I'm working on an application that needs to monitor CPU usage on Windows 10. I'm using PerformanceCounter with the "% Processor Utility" counter. When the CPU usage in Windows Task Manager reaches 100%, the PerformanceCounter returns more than 100% (like 120% or 170%).
PerformanceCounter cpuUsage = new PerformanceCounter("Processor Information", "% Processor Utility", "_Total", true);
Searching, I've found that it is sometimes normal for Processor Utility to return more than 100%. But Task Manager doesn't show values higher than 100%.
I want to know: is there a way to "convert" the Processor Utility reading to a 100% scale? What is the highest value Processor Utility can reach? Is its maximum tied to the number of CPU cores?
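As far as I can tell, "% Processor Utility" is scaled against the processor's base frequency, so with Turbo Boost it can legitimately exceed 100%; its ceiling depends on how far above base frequency the cores can run, not directly on the core count. Task Manager simply caps the displayed value, and you can mimic that by clamping. A minimal sketch:
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        var cpuUsage = new PerformanceCounter(
            "Processor Information", "% Processor Utility", "_Total", true);

        cpuUsage.NextValue(); // prime the counter; the first sample is always 0

        while (true)
        {
            Thread.Sleep(1000);
            float raw = cpuUsage.NextValue();    // can exceed 100 under Turbo Boost
            float display = Math.Min(raw, 100f); // mimic Task Manager's capped display
            Console.WriteLine($"CPU: {display:F0}%");
        }
    }
}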
The server app uses Postgres on localhost. It works well on a Xeon E3-1270 V2 @ 3.50GHz with 16 GB RAM and handles more than 1k DB requests/second. The app creates ~100 ThreadPool threads.
The same app, when launched on an E5 (same configuration), uses 500 or more threads until it reaches max_connections. Sometimes transactions execute very slowly (BEGIN takes 0.18 s on average and 15.94 s max; COMMIT takes 0.47 s on average and 15.93 s max). The slow queries can be very simple, like updating two integer columns in one row. There are no problematic queries in pg_stat_statements. I had to limit the ThreadPool min/max threads to 100, otherwise Postgres runs out of RAM with 600+ connections.
Typical code that in some cases takes ~12 sec to execute:
using (var s = HibernateSessionFactory.OpenSession())
using (var tr = s.BeginTransaction())
{
    try
    {
        try
        {
            // Re-associate the detached entity with the session, without version checks
            s.Lock(User, LockMode.None);
        }
        catch
        {
            // Retry once if the first Lock attempt fails
            s.Lock(User, LockMode.None);
        }
        User.Guild = null;
        tr.Commit();
    }
    catch
    {
        tr.Rollback();
        throw;
    }
}
When the app stops responding to client requests pgAdmin "Server Status" shows these queries:
set extra_float_digits=3; set ssl_renegotiation_limit=0; select 'npgsql12345';
DISCARD ALL
COMMIT
BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
and ~2000 granted locks
What could be causing this?
Based on the data you've provided, the crux of the issue seems to be the 5x increase in thread count -- 100 on the E3 vs. 500 on the E5. You've said they are the same configuration hardware-wise, which I assume means each has 4 hyperthreaded cores, since that's what the E3 model you listed has according to Intel's spec sheet.
That means that with the same number of CPU threads available, you're trying to run 5x as many software threads. This greatly increases the memory requirements and also adds CPU overhead, as the machine will likely be thrashing while trying to context-switch between all those threads. Given that the E5 also has 16 GB of RAM (based on your same-config comment), it likely cannot cope with the added overhead.
I would check whether you're swapping heavily to disk, which would cause terrible I/O performance, and whether things are CPU-bound or I/O-bound. I'm guessing you're running Windows given the use of C#, so I'd recommend a tool like Resource Monitor to dig deeper: use it to watch the Postgres processes and look at their disk usage, CPU usage, and so on. The tool offers a wide variety of monitoring options.
However, that aside, why not just run the E5 with the same workload -- 100 threads -- that works fine on the E3? If the machines are otherwise identically configured, the main difference (depending on the exact E5 model) would be CPU frequency. While a higher clock provides some marginal edge per CPU thread over the E3, it is unlikely to allow a vast performance advantage (as opposed to, say, an E5 with 24 cores and 48 threads). Some performance testing and tuning would be needed to find the true redline, but I suspect it's a lot closer to 100 threads than 500.
If you run with 100 threads max on the E5, just as on the E3, is the performance fine (essentially the same)? You said it "helped", but it's unclear from that whether it was still worse.
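If the extra threads come from the ThreadPool growing on its own, the limit the question author applied can be pinned explicitly. A sketch (the 100/100 values mirror the limit mentioned in the question, not a tuned recommendation; SetMaxThreads returns false if the requested maximum is below the current minimum or the processor count):
using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        // Lower the minimums first so a maximum of 100 is always legal.
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.SetMinThreads(Math.Min(minWorker, 100), Math.Min(minIo, 100));

        if (!ThreadPool.SetMaxThreads(100, 100))
            Console.WriteLine("SetMaxThreads rejected the limits");
    }
}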
When you need many connections or high performance, use a connection pooler like pgBouncer or pgPool. Your application then connects to the pooler rather than directly to the database. With hardware like this, some 20 to 50 connections between the pooler and the database is all you need; additional connections will slow the database down. The exact number depends on the usage pattern, but hundreds of connections is never a good idea -- it simply doesn't perform well.
I have a machine with 2 processors and 12 cores (an E5, 24 hyperthreads in total) that gives me maximum performance with just 20 connections: 2500 tps. And it's using a FusionIO card, so I/O isn't a real problem. The application is written in PL/pgSQL and does fairly complex calculations on 200 GB of data.
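If an external pooler isn't an option, Npgsql's built-in pool can at least enforce a hard ceiling from the application side. A sketch (host, database, and credentials are placeholders; MinPoolSize, MaxPoolSize, and Timeout are standard Npgsql connection-string parameters):
using Npgsql;

static class Db
{
    // A small, fixed pool keeps the database at the 20-50 connections
    // recommended above, no matter how many threads the app spins up.
    const string ConnString =
        "Host=localhost;Database=app;Username=app;Password=secret;" +
        "MinPoolSize=10;MaxPoolSize=50;Timeout=15";

    public static NpgsqlConnection Open()
    {
        var conn = new NpgsqlConnection(ConnString);
        conn.Open(); // blocks until a pooled connection frees up (up to Timeout seconds)
        return conn;
    }
}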
I used pgbench to check the server's performance. It turned out to be either a PostgreSQL server problem or a machine problem (HDD or something similar). In either case, it was not programming-related.
I have 200,000 tasks that should run in parallel for speed. I'm using ParallelEnumerable.Range(0, 200000).Sum(a => /*do_something*/).
As the task counter goes from 0 to 200,000, the required number of iterations decreases. The task with a=0 requires the most iterations, while tasks with a>100,000 finish in one iteration or none.
Because of this, my quad-core machine doesn't stay at peak CPU utilization as the tasks progress. It seems the workload is distributed to all 4 cores at the start, and some cores go idle early because their portion consisted mainly of tasks with high values of a. CPU utilization starts at 100% but drops gradually to 75%, 50%, 25%. How can I achieve full CPU utilization from start to finish?
Very simple self-answer: replacing ParallelEnumerable.Range(0, 200000) with Enumerable.Range(...).AsParallel() was enough to solve this problem. It seems the workload is distributed dynamically with Enumerable.Range(...).AsParallel().
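The difference comes from PLINQ's partitioning strategy: ParallelEnumerable.Range produces a range-partitioned source (contiguous index blocks assigned up front), while Enumerable.Range(...).AsParallel() is chunk-partitioned (workers grab small batches on demand). A sketch contrasting the two, with a hypothetical stand-in for the real work:
using System;
using System.Linq;

class Partitioning
{
    // Hypothetical placeholder for the real calculation: cost shrinks as a grows,
    // just like in the question.
    static long Work(int a) => 200000 - a;

    static void Main()
    {
        // Range partitioning: contiguous index blocks are fixed up front, so the
        // worker holding the low (expensive) indices finishes last while the
        // others go idle.
        long staticSum = ParallelEnumerable.Range(0, 200000).Sum(a => Work(a));

        // Chunk partitioning: workers pull small batches dynamically, keeping
        // all cores busy until the end.
        long dynamicSum = Enumerable.Range(0, 200000).AsParallel().Sum(a => Work(a));

        Console.WriteLine($"{staticSum} == {dynamicSum}"); // same result either way
    }
}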
I have an ASP.NET project with many reports. Some of the reports involve heavy calculations that I perform in memory using LINQ. When I test these reports on my client machine, CPU usage is about 25%.
My question is: why doesn't CPU usage increase to 80% or more?
And when I publish this project to the server, will it behave the same way?
You have 4 cores (or 2 hyperthreaded cores), meaning a single thread can use at most 25% of the total computing power (which is what Task Manager shows as 25% CPU).
Your calculation is probably single-threaded.
Can you break the calculation into several threads? That would spread the load across your CPU's cores more evenly.
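If the per-row computations are independent, PLINQ is one low-effort way to do that. A sketch with hypothetical ReportRow/Calculate stand-ins for the report's data and math:
using System.Collections.Generic;
using System.Linq;

class ReportRow { public decimal Value; } // hypothetical row type

static class ReportBuilder
{
    // Stand-in for the heavy per-row LINQ computation from the question.
    static decimal Calculate(ReportRow r) => r.Value * r.Value;

    public static decimal BuildTotal(IEnumerable<ReportRow> rows)
    {
        return rows
            .AsParallel()            // spread the rows across all cores
            .Sum(r => Calculate(r)); // instead of pinning one core at ~25%
    }
}
As for the server question: ASP.NET already handles requests in parallel, so saturating every core for a single report can starve concurrent requests. The 25%-per-thread ceiling is the same there, but the trade-off differs.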
I am writing a rather resource-intensive application, and I am trying to find a way to let the user of said app decide exactly what resources the app should use.
Part of the problem is knowing what the CPUs are capable of. I imagine that if the user wants the app to use 5 cores on an AMD Phenom II X6, I can get away with throwing everything at those 5 cores. On the other hand, if the processor is an Intel i7 or an AMD FX-8510, some of those cores share various components. What should I do to make sure I don't accidentally schedule work onto two logical cores that share the same physical resources, or something to that effect?
I am trying to avoid the scenario where a CPU chokes because I am throwing everything at the wrong part of it (an old problem of mine). Any ideas?
The thread scheduler in the operating system has all the information necessary for efficient CPU resource allocation (the number of physical CPU cores, which logical cores share resources, which cores are parked to save energy, whether hyperthreading is enabled, which cores run at turbo frequency, what resources other programs are using, etc.). Your program has none of this knowledge, so the best thing you can do is let the OS decide how to allocate resources. Even without any special support in your program, the user can limit the resources it uses by setting the process priority or processor affinity.
One thing you can try is to create a number of threads matching the number of CPU cores (returned by Environment.ProcessorCount). If you think that scheduling one thread per physical core (instead of one per logical core) will perform better, you can experiment with the Process.GetCurrentProcess().ProcessorAffinity property -- set it to 0x55555555 on CPUs with hyperthreading -- but this can also make things worse than before (what if some future CPU has 3 logical cores per physical core instead of 2, or another program sets its processor affinity to the same cores?).
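For completeness, that affinity experiment might look like the sketch below. It assumes classic Hyper-Threading with two logical cores per physical core, numbered as siblings (0,1), (2,3), ...; on any other topology the mask would be wrong:
using System;
using System.Diagnostics;

class AffinityExperiment
{
    static void Main()
    {
        // Build a mask selecting every second logical core, i.e. (by the
        // assumption above) one logical core per physical core.
        long mask = 0;
        for (int i = 0; i < Environment.ProcessorCount; i += 2)
            mask |= 1L << i;

        // Equivalent to the 0x55555555 idea above, but sized to this machine.
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)mask;
        Console.WriteLine($"Affinity mask: 0x{mask:X}");
    }
}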