I am currently running nginx on my Windows system and am making a little control panel to show statistics for my web server.
I'm trying to get the performance counters for CPU usage and memory usage for the process, but nginx shows up as more than one process; the count can vary from 2 to 5 depending on a setting in the configuration file. My current setting gives two processes, so nginx.exe and nginx.exe.
I know which performance counters to use, % Processor Time and Working Set - Private, but how would I be able to get the individual values for both processes so I can add them together for a final value?
I tried using the code found at Waffles' question, but it could only output the values for the first of the two processes.
Thanks.
EDIT - Working Code
for (int i = 0; i < instances.Length; i++)
{
    if (i == 0)
    {
        toPopulate = new PerformanceCounter(
            "Process", "Working Set - Private",
            toImport[i].ProcessName, true);
    }
    else
    {
        // Subsequent instances are suffixed: "nginx#1", "nginx#2", ...
        toPopulate = new PerformanceCounter(
            "Process", "Working Set - Private",
            toImport[i].ProcessName + "#" + i, true);
    }
    totalNginRam += toPopulate.NextValue();
    instances[i] = toPopulate;
}
Look at the accepted answer to that question. Try running perfmon. Processes that have the same name will be identified as something like process#1, process#2, etc. In your case it could be nginx#1, nginx#2, etc.
Edit:
You need to pass the instance name to either the appropriate constructor overload or the InstanceName property. According to this, it looks like the proper format is to use underscore. So, process_1, process_2.
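If you'd rather not construct the #N suffixes by hand, you can enumerate the live instance names and sum the matching ones. A minimal sketch (PerformanceCounterCategory.GetInstanceNames is the real API; the prefix match below is a simplification that assumes no other process name starts with "nginx"):
using System;
using System.Diagnostics;
using System.Linq;

class NginxTotals
{
    static void Main()
    {
        // Instance names look like "nginx", "nginx#1", "nginx#2", ...
        var category = new PerformanceCounterCategory("Process");
        var instances = category.GetInstanceNames()
                                .Where(name => name.StartsWith("nginx"));

        float totalRam = 0;
        foreach (string instance in instances)
        {
            using (var counter = new PerformanceCounter(
                "Process", "Working Set - Private", instance, true))
            {
                totalRam += counter.NextValue();
            }
        }
        Console.WriteLine("Total nginx private working set: " + totalRam + " bytes");
    }
}
This also sidesteps the #-versus-underscore question, since whatever format the system uses is what GetInstanceNames returns.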
When using Azure Log Analytics, you can specify a path such as
Process(nginx*)\% Processor Time
This seems to collect data from all processes that match the wildcard pattern at any given time. I can confirm that it picks up data from new processes (started after the settings were changed) and that it does not pick up data from "dead" processes. However, an InstanceName (such as nginx#3) may be reused, making it hard to tell when a process was "replaced" by a new one.
I have not been able to do this in Performance Monitor. The closest thing is to type "nginx*" in the search box of the "Add Counters" dialog and then select <All searched instances>. This creates one counter per process, but counters will not be dynamically added or removed as processes start or stop.
Perhaps it can be done with data collector sets created via PowerShell. However, even if you are able to set a path with a wildcard in the instance part, it is not guaranteed to behave as you expect (i.e., to automatically collect data from all processes that are running at any given time).
I'm messing around with a scanning engine I'm working on and I'm trying to read the memory of a process. My code is below (it's a little messy) but for some reason if I read the memory of an application in different states, or after it has a lot of things loaded into memory, I get the same memory size no matter what. Are my entry point addresses and length incorrect?
If I use a memory editor I don't get the same results I do with this.
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress,
    byte[] lpBuffer, int dwSize, out int lpNumberOfBytesRead);

Process process = Process.GetProcessesByName(processName)[0];
List<byte[]> moduleMemory = new List<byte[]>();
byte[] temp;
foreach (ProcessModule pm in process.Modules)
{
    //MessageBox.Show(pm.FileName);
    temp = new byte[pm.ModuleMemorySize];
    int read;
    if (ReadProcessMemory(process.Handle, pm.BaseAddress, temp, temp.Length, out read))
    {
        moduleMemory.Add(temp);
    }
}
//string d = Encoding.Default.GetString(moduleMemory[0]);
MessageBox.Show("Size: " + moduleMemory[0].Length);
Your problem is probably caused by the fact that the Process class caches values:
The process component obtains information about a group of properties
all at once. After the Process component has obtained information
about one member of any group, it will cache the values for the other
properties in that group and not obtain new information about the
other members of the group until you call the Refresh method.
Therefore, a property value is not guaranteed to be any newer than the
last call to the Refresh method. The group breakdowns are
operating-system dependent.
Therefore, after the target process loads some additional modules, the Process instance will still return the old values. Calling process.Refresh() should discard all cached values and fix the issue.
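A minimal sketch of what that looks like, assuming a process variable like the one in the question (the target process name here is made up):
using System;
using System.Diagnostics;

class RefreshExample
{
    static void Main()
    {
        Process process = Process.GetProcessesByName("notepad")[0];

        // ... time passes, the target loads more modules ...

        // Discard every cached value before reading again; without this,
        // Modules and ModuleMemorySize keep returning the original snapshot.
        process.Refresh();

        foreach (ProcessModule pm in process.Modules)
        {
            Console.WriteLine(pm.ModuleName + ": " + pm.ModuleMemorySize);
        }
    }
}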
As far as I can see, this code does nothing more than read the memory layout of the executable module (the .exe file) the process was created from, so it's no wonder you get the same size every time.
I assume you actually want to read the "operational" memory of the process. If so, you should have a look at this discussion.
We have created a monitoring application for our enterprise app that monitors our application's performance counters. We monitor a couple of system counters (memory, CPU) and 10 or so of our own custom performance counters. We have 7 or 8 exes that we monitor, so we check 80 counters every couple of seconds.
Everything works great except that when we loop over the counters the CPU takes a hit: 15% or so on my pretty good machine, but we have seen it much higher on other machines. We want our monitoring app to run discreetly in the background looking for issues, not eating up a significant amount of CPU.
This can easily be reproduced with the simple C# class below. It loads all processes and gets Private Bytes for each. My machine has 150 processes; CallNextValue takes 1.4 seconds or so and 16% CPU.
class test
{
List<PerformanceCounter> m_counters = new List<PerformanceCounter>();
public void Load()
{
var processes = System.Diagnostics.Process.GetProcesses();
foreach (var p in processes)
{
var Counter = new PerformanceCounter();
Counter.CategoryName = "Process";
Counter.CounterName = "Private Bytes";
Counter.InstanceName = p.ProcessName;
m_counters.Add(Counter);
}
}
private void CallNextValue()
{
foreach (var c in m_counters)
{
var x = c.NextValue();
}
}
}
Doing this same thing in Perfmon.exe in Windows, adding the counter Process - Private Bytes with all processes selected, I see virtually NO CPU taken up, and it's also graphing all processes.
So how is Perfmon getting the values? Is there a better/different way to get these performance counters in C#?
I've tried using RawValue instead of NextValue and I don't see any difference.
I've played around with PDH calls in C++ (PdhOpenQuery, PdhCollectQueryData, ...). From my first tests these don't seem any easier on the CPU, but I haven't built a good sample yet.
I'm not very familiar with the .NET performance counter API, but I have a guess about the issue.
The Windows kernel doesn't actually have an API to get detailed information about just one process. Instead, it has an API that can be called to "get all the information about all the processes". It's a fairly expensive API call. Every time you do c.NextValue() for one of your counters, the system makes that API call, throws away 99% of the data, and returns the data about the single process you asked about.
PerfMon.exe uses the same PDH APIs, but it uses a wildcard query -- it creates a single query that gets data for all of the processes at once, so it essentially only calls c.NextValue() once every second instead of calling it N times (where N is the number of processes). It gets a huge chunk of data back (data for all of the processes), but it's relatively cheap to scan through that data.
I'm not sure that the .NET performance counter API supports wildcard queries. The PDH API does, and it would be much cheaper to perform one wildcard query than to perform a whole bunch of single-instance queries.
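One thing worth trying from .NET is PerformanceCounterCategory.ReadCategory(), which pulls every counter and instance in the category back in one scan; that is much closer to what PerfMon does. A rough sketch, assuming the standard "Process" category and "Private Bytes" counter:
using System;
using System.Diagnostics;

class WholeCategoryRead
{
    static void Main()
    {
        // One call reads the entire "Process" category: every counter for
        // every instance, in a single scan of the underlying perf data.
        var category = new PerformanceCounterCategory("Process");
        InstanceDataCollectionCollection data = category.ReadCategory();

        // Pick out the one counter we care about and walk its instances.
        InstanceDataCollection privateBytes = data["Private Bytes"];
        foreach (InstanceData instance in privateBytes.Values)
        {
            Console.WriteLine("{0}: {1:N0}", instance.InstanceName, instance.RawValue);
        }
    }
}
I can't promise it is as cheap as PDH's wildcard query, but it replaces 150 per-instance reads with one bulk read, which is the shape of the fix the answer above describes.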
Sorry for the long response, but I've only just found your question. Anyway, if anyone needs additional help, here's a solution:
I did a little research on my custom process and understood that when we have a code snippet like
PerformanceCounter ourPC = new PerformanceCounter("Process", "% Processor Time", "processname", true);
ourPC.NextValue();
then the performance counter's NextValue() will show you (number of logical cores * the Task Manager CPU load of the process), which is kind of logical, I suppose.
So your problem may be that you see only a slight CPU load in Task Manager because it accounts for your multi-core CPU, while the performance counter counts by the formula above.
I see one (somewhat hacky) possible solution for your problem, so your code could be rewritten like this:
private void CallNextValue()
{
foreach (var c in m_counters)
{
var x = c.NextValue() / Environment.ProcessorCount;
}
}
Anyway, I don't recommend using Environment.ProcessorCount even though I've used it here; I just didn't want to add too much code to my short snippet.
You can find a good way to count your logical cores (yes, on a Core i7, for example, you have to count logical cores, not physical ones) by following this link:
How to find the Number of CPU Cores via .NET/C#?
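For reference, a hedged sketch of the WMI route that question describes (it requires a reference to System.Management.dll, and Win32_ComputerSystem.NumberOfLogicalProcessors is available on Vista and later):
using System;
using System.Management; // add a reference to System.Management.dll

class CoreCount
{
    static void Main()
    {
        // Sum NumberOfLogicalProcessors across Win32_ComputerSystem entries.
        int logicalCores = 0;
        var searcher = new ManagementObjectSearcher(
            "SELECT NumberOfLogicalProcessors FROM Win32_ComputerSystem");
        foreach (ManagementObject mo in searcher.Get())
        {
            logicalCores += Convert.ToInt32(mo["NumberOfLogicalProcessors"]);
        }
        Console.WriteLine("Logical cores: " + logicalCores);
    }
}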
Good luck!
Is there a way for me to minimize the performance hit when I'm either running or debugging my Coded UI tests? Currently it's taking me a long time to run them because they take too long to execute. I've timed it, and "too long" means that checking whether a screen exists and performing an action takes over a minute, so it's taking me too long to debug and finish.
To give some more background: these if statements are all inside one test method, where I'm checking for different screens. It's very dynamic but takes too long to run. I've read that I can use ordered tests, but I don't think I can create ordered tests with these dynamic screens (I don't think ordered tests can act as if statements to account for dynamic dialogs and screens), and besides, I think it's too late in the process to switch to that architecture.
I've tried various playback settings with little or no improvement.
Here are my current playback settings:
Playback.PlaybackSettings.WaitForReadyLevel = WaitForReadyLevel.Disabled;
//Playback.PlaybackSettings.SmartMatchOptions = SmartMatchOptions.None;
Playback.PlaybackSettings.MaximumRetryCount = 10;
Playback.PlaybackSettings.ShouldSearchFailFast = false;
Playback.PlaybackSettings.DelayBetweenActions = 1000;
Playback.PlaybackSettings.SearchTimeout = 2000;
None of these settings has helped, not even turning off the smart match options.
I could have sworn I read somewhere that replacing my if statements with try/catch would help, but I may be totally wrong, since I'm just grasping at straws to try to improve performance by at least 40% or so.
Would anyone have any tips or tricks for dealing with if statements in Coded UI code?
I'm guessing your if statements are of this kind:
if (uiTestControl.Exists)
{
    // do something
}
If that's the case, your delays are the result of Coded UI searching for the control, a time-costly operation, especially when searching for a control that doesn't exist.
There are a number of ways to handle this. If my guess is in the ballpark, please confirm and I'll detail the options.
Update:
The main reason for the delay is MaximumRetryCount = 10. In addition, try the following settings:
Playback.PlaybackSettings.MaximumRetryCount = 3;
Playback.PlaybackSettings.DelayBetweenActions = 100;
Playback.PlaybackSettings.SearchTimeout = 15000;
When waiting for a control to exist, use:
uiTestControl.WaitForControlExist(5000)
This tells the playback to search for the control for a maximum of 5 seconds.
In addition, you should reduce Playback.PlaybackSettings.SearchTimeout before searching for a control that you know might not exist:
var defaultTimeout = Playback.PlaybackSettings.SearchTimeout;
Playback.PlaybackSettings.SearchTimeout = 5000;
and after you finish searching, restore the default value:
Playback.PlaybackSettings.SearchTimeout = defaultTimeout;
This should do the trick.
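If you end up doing that save/lower/restore dance at many if statements, you could wrap it in a small helper. A sketch (SearchHelper and ExistsWithin are made-up names, not part of the Coded UI API; UITestControl.TryFind and PlaybackSettings.SearchTimeout are real):
using Microsoft.VisualStudio.TestTools.UITesting;

static class SearchHelper
{
    // Checks for a control with a short, local timeout, then restores the
    // global SearchTimeout even if the search throws.
    public static bool ExistsWithin(UITestControl control, int timeoutMs)
    {
        int defaultTimeout = Playback.PlaybackSettings.SearchTimeout;
        Playback.PlaybackSettings.SearchTimeout = timeoutMs;
        try
        {
            // TryFind returns quickly when the control is present and gives
            // up after SearchTimeout when it is not.
            return control.TryFind();
        }
        finally
        {
            Playback.PlaybackSettings.SearchTimeout = defaultTimeout;
        }
    }
}
Your checks then become: if (SearchHelper.ExistsWithin(uiTestControl, 2000)) { ... }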
Update: The answers from Andrew and Conrad were both equally helpful. The easy fix for the timing issue fixed the problem, and caching the bigger object references instead of rebuilding them every time removed the source of the problem. Thanks for the input, guys.
I'm working with a C# .NET API, and for some reason the following code executes extremely slowly, as far as I can tell.
This is the handler for a System.Timers.Timer that triggers its elapsed event every 5 seconds.
private static void TimerGo(object source, System.Timers.ElapsedEventArgs e)
{
tagList = reader.GetData(); // This is a collection of 10 objects.
storeData(tagList); // This calls the 'storeData' method below
}
And the storeData method:
private static void storeData(List<obj> tagList)
{
    TimeSpan t = (DateTime.UtcNow - new DateTime(1970, 1, 1));
    long timestamp = (long)t.TotalSeconds;
    foreach (obj tag in tagList)
    {
        string file = @"path\to\file" + tag.name + ".rrd";
        RRD dbase = RRD.load(file);
        // Update rrd with current time timestamp and data.
        dbase.update(timestamp, new object[1] { tag.data });
    }
}
Am I missing some glaring resource sink? The RRD stuff you see is from the NHawk C# wrapper for rrdtool; in this case I update 10 different files with it, but I see no reason why it should take so long.
When I say "so long", I mean the timer was triggering a second time before the first update was done, so eventually "update 2" would happen before "update 1", which breaks things because "update 1" has a timestamp earlier than "update 2".
I increased the timer interval to 10 seconds; it ran longer but still eventually out-raced itself and tried to update a file with an earlier timestamp. What can I do differently to make this more efficient? Obviously I'm doing something drastically wrong...
This doesn't really answer your perf question, but if you want to fix the reentrancy bit, set your timer's AutoReset to false and then call Start() at the end of the method, e.g.
private static void TimerGo(object source, System.Timers.ElapsedEventArgs e)
{
tagList = reader.GetData(); // This is a collection of 10 objects.
storeData(tagList); // This calls the 'storeData' method below
timer.Start();
}
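For completeness, a sketch of the setup that goes with this (the 5-second interval and the TimerGo handler are from the question; the rest is assumed):
using System.Timers;

static class TimerSetup
{
    static Timer timer;

    public static void Start()
    {
        timer = new Timer(5000);
        timer.AutoReset = false;   // fire once per Start(); no overlap possible
        timer.Elapsed += TimerGo;
        timer.Start();
    }

    static void TimerGo(object source, ElapsedEventArgs e)
    {
        // ... do the slow work ...
        timer.Start();             // re-arm only after the work is finished
    }
}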
Is there a different RRD file for each tag in your tagList? In your pseudocode you open each file N times. (You stated there are only 10 objects in the list, though.) Then you perform an update. I can only assume that you dispose of your RRD file after you have updated it; if you do not, you are keeping references to an open file.
If the RRD is the same, but you are just putting different types of plot data into a single file, then you only need to keep it open for as long as you want exclusive write access to it.
Without profiling the code you have a few options (I recommend profiling, by the way):
Keep the RRD files open
Cache the opened file references to avoid having to open, write, and close every 5 seconds for each file. Just cache the 10 opened file references and write to them every 5 seconds.
Separate the data collection from data writing
It appears you are taking metric samples from some object every 5 seconds. If you don't have something "tailing" your file, separate the collection from the writing: take your data sample and throw it onto a queue to be processed. The processor will dequeue each tagList and write it as fast as it can, going back to the queue for more lists (see the sketch below).
This way you can always be sure you are getting ~5 second samples even if the writing mechanism slows down.
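A minimal sketch of that queue, assuming .NET 4's BlockingCollection is available (QueuedWriter and TagSample are placeholder names standing in for the question's reader/storeData types):
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class QueuedWriter
{
    private readonly BlockingCollection<List<TagSample>> queue =
        new BlockingCollection<List<TagSample>>();

    public QueuedWriter()
    {
        // One consumer drains the queue and does the slow RRD writes,
        // so the sampling timer is never blocked on file IO.
        Task.Factory.StartNew(() =>
        {
            foreach (List<TagSample> tagList in queue.GetConsumingEnumerable())
            {
                StoreData(tagList); // the slow part, now off the timer thread
            }
        }, TaskCreationOptions.LongRunning);
    }

    // Called from the timer handler: cheap, returns immediately.
    public void Enqueue(List<TagSample> tagList)
    {
        queue.Add(tagList);
    }

    private void StoreData(List<TagSample> tagList)
    {
        // ... the existing RRD update loop goes here ...
    }
}

// Placeholder for whatever reader.GetData() actually returns.
class TagSample { }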
Use a profiler. JetBrains is my personal recommendation. Run the profiler with your program and look for the threads / methods taking the longest time to run. This sounds very much like an IO or data issue, but that's not immediately obvious from your example code.
I'm trying to parse through e-mails in Outlook 2007. I need to make this as fast as possible and seem to be having some trouble.
Basically it's:
foreach (Folder fld in outlookApp.Session.Folders)
{
    foreach (MailItem mailItem in fld.Items)
    {
        string body = mailItem.Body;
    }
}
and for 5000 e-mails this takes over 100 seconds. It doesn't seem like it should take anywhere near that long.
If I add:
string entry = mailItem.EntryID;
it ends up adding an extra 30 seconds.
I'm doing all sorts of string manipulation, including regular expressions, with these strings and writing out to a database, and still those two lines take 50% of my runtime.
I'm using Visual Studio 2008.
Doing this kind of thing will take a long time, as you have to pull the data from the Exchange store for each item.
I think you have a couple of options here:
Process this information out of band using CDO/RDO in some other process.
Or
Use MAPI tables, as this is the fastest way to get properties. There are caveats with this, though, and you may be doing things in your processing that can be brought into a table.
Redemption wrapper - http://www.dimastr.com/redemption/mapitable.htm
MAPI Tables http://msdn.microsoft.com/en-us/library/cc842056.aspx
I do not know if this will address your specific issue, but the latest Office 2007 service pack made a significant performance difference (improvement) for Outlook with large numbers of messages.
Are you just reading in those strings in this loop, or are you reading in a string, processing it, and then moving on to the next? You could try reading all the messages into a Hashtable inside your loop and then processing them after they've been loaded; it might buy you some gains.
Any kind of UI update is extremely expensive; if you're writing out text or incrementing a progress bar, it's best to do so sparingly.
We had exactly the same problem even when the folders were local and there was no network delay.
We got a 10x speedup by storing a copy of every e-mail in a local SQL Server CE table tuned for the search we needed. We also used update events to make sure the local database remains in sync with the Outlook/Exchange folders.
To totally eliminate user lag, we took the search out of the Outlook thread and put it in its own thread. The perception of lag was worse than the actual delay, it seems.
I encountered a similar situation while trying to access Outlook mail via VBA (in Excel).
However, it was far slower in my case: one e-mail per second! (Maybe it was slower for me than in your case because I implemented it in VBA.)
Anyway, I successfully managed to improve the speed by using SetColumns (e.g. https://learn.microsoft.com/en-us/office/vba/api/Outlook.Items.SetColumns).
I know, I know... this only works for a few properties, like "Subject" and "ReceivedTime", and not for the body!
But think again: do you really want to read through the body of all your e-mails? Or just a subset, maybe based on the 'Subject' line or 'ReceivedTime'?
My requirement was to go into the body of an e-mail only if its subject matched a specific string!
Hence, I did the following:
I added a second 'Outlook.Items' object called 'myFilterItemCopyForBody' and applied the same filter I had on the other 'Outlook.Items'.
So now I have two 'Outlook.Items' collections, 'myFilterItem' and 'myFilterItemCopyForBody', both with the same e-mail items, since the same Restrict conditions are applied to both:
'myFilterItem' holds only the 'Subject' and 'ReceivedTime' properties of the relevant mails (done by using SetColumns).
'myFilterItemCopyForBody' holds all the properties of the mails (including Body).
Now both 'myFilterItem' and 'myFilterItemCopyForBody' are sorted by 'ReceivedTime' to get them into the same order.
Once sorted, both are looped over simultaneously in a nested For Each loop, picking corresponding properties (with the help of a counter), as in the code below.
Dim myItems As Outlook.Items
Dim myFilterItem As Outlook.Items
Dim myFilterItemCopyForBody As Outlook.Items

Set myItems = olFldr.Items
Set myFilterItemCopyForBody = myItems.Restrict("@SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")
Set myFilterItem = myItems.Restrict("@SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")

myFilterItemCopyForBody.Sort ("ReceivedTime")
myFilterItem.Sort ("ReceivedTime")
myFilterItem.SetColumns ("Subject, ReceivedTime")

For Each myItem1 In myFilterItem
    iCount = iCount + 1
    For Each myItem2 In myFilterItemCopyForBody
        jCount = jCount + 1
        If iCount = jCount Then
            'Display myItem2.Body if myItem1.Subject contains a specific string
            'MsgBox myItem2.Body
            jCount = 0
            Exit For
        End If
    Next myItem2
Next myItem1
Note 1: Notice that the Body property is accessed via 'myItem2', which corresponds to 'myFilterItemCopyForBody'.
Note 2: The fewer times the code enters the inner loop to access the Body property, the better! You can further improve efficiency by playing with Restrict and the logic to reduce the number of times the code has to loop through it.
Hope this helps, even though this is not something new!