ProcessCpuUsage class - Resource Monitor - C#

I want to create a resource monitor as a UWP application. I read a lot about the PerformanceCounter class from System.Diagnostics, but that class is not available in UWP. Then I read about some native classes, but my skills are too low to implement those yet ;/.
Then I found the ProcessCpuUsage class in Windows.System.Diagnostics and tried to use it, but I can't find any info about its constructors and I don't know how to implement it. What info can I get from this class?
In my app I need a string with CPU/RAM usage and info about free disk space, and I want to show it as a widget. Please help.

ProcessCpuUsage has no constructor; it provides access to data about the CPU usage of a process. The class has a single GetReport method, which returns a ProcessCpuUsageReport for the process. From the ProcessCpuUsageReport we can read the KernelTime and UserTime consumed by the process.
To get a ProcessCpuUsage object, we need to use the ProcessDiagnosticInfo.CpuUsage property. This is one of the properties of the ProcessDiagnosticInfo class, which provides diagnostic information about a process, such as CPU usage, disk usage, memory usage and so on. ProcessDiagnosticInfo has two static methods, GetForCurrentProcess and GetForProcesses, that let us obtain a ProcessDiagnosticInfo.
However, please note that these two methods can only get ProcessDiagnosticInfo related to your own app. GetForProcesses can return a list of ProcessDiagnosticInfo objects for all running processes, but here "all running processes" means all running processes in the same app container. For example, for a UWP app with out-of-process background tasks, GetForProcesses may return several ProcessDiagnosticInfo objects, including the running background tasks. For a simple UWP app, it may always return just one ProcessDiagnosticInfo. The method can't return diagnostic information for every process on the computer/device, because UWP apps run in app containers and are isolated from each other.
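As a minimal sketch (assuming a recent Windows 10 SDK; the helper name is illustrative), you can sample your own process's CPU report twice and compute the usage over the interval:

using System;
using System.Threading.Tasks;
using Windows.System.Diagnostics;

public static class CpuSampler
{
    // Returns a rough CPU percentage (of one core) for the current process over the given interval.
    public static async Task<double> SampleOwnCpuPercentAsync(TimeSpan interval)
    {
        ProcessCpuUsage cpu = ProcessDiagnosticInfo.GetForCurrentProcess().CpuUsage;

        ProcessCpuUsageReport first = cpu.GetReport();
        await Task.Delay(interval);
        ProcessCpuUsageReport second = cpu.GetReport();

        // CPU time consumed by this process during the interval (kernel + user).
        TimeSpan used = (second.KernelTime - first.KernelTime) + (second.UserTime - first.UserTime);
        return used.TotalMilliseconds / interval.TotalMilliseconds * 100.0;
    }
}

ProcessDiagnosticInfo also exposes MemoryUsage and DiskUsage properties with their own GetReport methods, which is where the RAM figures for your own process would come from.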
So if you are going to create a resource monitor application, UWP may not be a good choice. A classic desktop app might be better for your scenario.

Related

Accessing windows 10 task manager process list

When it comes to the Windows 7 Task Manager, I was easily able to get the process count, etc., because the processes were stored in a traditional listview control that I could access using SendMessage functions. For Explorer / the Windows 10 Task Manager, etc., however, the list control that the processes are stored in does not seem to be a traditional control; it appears to be a custom one. I was wondering if there is any documentation on the custom controls that Microsoft uses in their newer system applications, and/or whether I can access them using SendMessage or something of the sort like I did before?
//Get the handle of the process list..you can find the parent handle in Win7 & Win10 pretty easily
IntPtr listHandle = FindWindowExA(parentHandle, IntPtr.Zero, "SysListView32", "Processes"); //= listview handle
//Send LVM_GETITEMCOUNT (0x1004) to get the number of processes, for instance; works in Windows 7 only
var processCount = SendMessageA(listHandle, (IntPtr)0x1004, IntPtr.Zero, IntPtr.Zero); //= process count
If not, is it worth trying to work out myself how to access the list, or is that a bad idea? Why? I have a C# application and have no problem porting C++ methods with P/Invoke. Thanks
Don't do that. Task Manager isn't doing any magic. It uses Windows APIs to get the list of processes, enumerate it, and obtain details. What you're doing is actually harder to get right than simply replicating what Task Manager does internally. A good starting point would be the Process Status API (psapi), and there are other APIs that will let you get resource usage information etc. All it takes to figure this out is a "stroll" through the WINAPI documentation (admittedly sparse here and there). If in doubt, attach a debugger to Task Manager and see what API calls it makes, or use the Dependency Walker (depends) tool to see what APIs it imports so that you get an idea of where to look. You can also use reverse-engineering tools like IDA Pro to do a partial disassembly and look for API calls.
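Since your app is C#, the managed System.Diagnostics.Process class (which wraps those native process APIs) already gives you the process list without touching another app's UI; a minimal sketch:

using System;
using System.Diagnostics;

class ProcessLister
{
    static void Main()
    {
        Process[] all = Process.GetProcesses();
        Console.WriteLine("Process count: " + all.Length);

        foreach (Process p in all)
        {
            try
            {
                // WorkingSet64 is the physical memory currently used by the process.
                Console.WriteLine("{0,6}  {1,-30}  {2} MB", p.Id, p.ProcessName, p.WorkingSet64 / (1024 * 1024));
            }
            catch (Exception)
            {
                // Some system processes deny access to their details; skip them.
            }
        }
    }
}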

Is there a way to associate arbitrary data to a Windows Process?

I have many instances of a process I've written on a server. I'd like to associate some information with each process. In this specific case I'd like to store the "CurrentState" of the process - "RUNNING|DRAINING|STOPPING", but it would be useful for me to store a "Friendly Name" and so on.
I want to query this information from another "mother" process - this mother process will query the processes running and collate the data.
I've thought of a couple of different ways I could achieve this. For example, I might open up a NetPipe to each process of interest and ask for the data, or have each process broadcast its state regularly.
I was wondering: is there a way to store key value pair information against a process built into Windows itself? Is there an accepted pattern for doing this?
I control the source for the child processes and the mother process. They are written in C#, P/Invoking is fine. The operating system is Windows 2012 R2.
You can host WCF services that use named pipes:
http://msdn.microsoft.com/en-us/library/ms733769(v=vs.110).aspx
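For example, each child process could self-host a small WCF service over NetNamedPipeBinding and the mother process could poll it. A rough sketch follows; the contract, addresses, and state values are illustrative, and each child would need a unique address (e.g. one that includes its process id):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IStatusService
{
    [OperationContract]
    string GetCurrentState();   // e.g. "RUNNING", "DRAINING", "STOPPING"
}

public class StatusService : IStatusService
{
    public string GetCurrentState() { return "RUNNING"; }
}

class ChildHost
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(StatusService),
                   new Uri("net.pipe://localhost/MyChild")))
        {
            host.AddServiceEndpoint(typeof(IStatusService), new NetNamedPipeBinding(), "status");
            host.Open();
            Console.ReadLine();   // keep the child alive; the mother connects on demand
        }
    }
}

// The mother process would query a child roughly like this:
// var factory = new ChannelFactory<IStatusService>(
//     new NetNamedPipeBinding(),
//     new EndpointAddress("net.pipe://localhost/MyChild/status"));
// string state = factory.CreateChannel().GetCurrentState();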
Based on some of your comments, it looks like you could also consider the System.AddIn (aka Managed AddIn Framework, MAF) functionality to create, host, and communicate with add-ins. MAF supports loading add-ins in your app domain, in a separate app domain, or in a completely separate process. The downside with MAF is that it requires 5 DLLs to get started, but in exchange it gives you a lot of flexibility with API compatibility as you version and change your pipeline.
If you're controlling the data from a mother process, you can also use AppDomains to load your other processes and communicate via marshaled data such as a Status class, or use the AppDomains to set and get data.
Be aware that any status data you transfer needs to either be a class which derives from MarshalByRefObject or be marked as Serializable. The reason is that AppDomains are treated by the OS much like separate processes, so they can't access each other's memory and actually have to serialize data as if it were being passed through IPC.
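To make the distinction concrete, here is a hypothetical status type for each option (pass-by-value via [Serializable], pass-by-reference via MarshalByRefObject):

using System;

// Copied across the AppDomain boundary (serialized, then deserialized on the other side).
[Serializable]
public class Status
{
    public string CurrentState { get; set; }   // e.g. "RUNNING", "DRAINING", "STOPPING"
    public string FriendlyName { get; set; }
}

// Accessed through a proxy; calls are remoted back to the owning AppDomain.
public class StatusProvider : MarshalByRefObject
{
    public Status GetStatus()
    {
        return new Status { CurrentState = "RUNNING", FriendlyName = "Worker 1" };
    }
}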
Take a look at the .Net Process Class:
http://msdn.microsoft.com/en-us/library/system.diagnostics.process(v=vs.110).aspx
You can use it to get all running processes, start a process, get a process's unique Id, and be alerted when the process exits. This should give you everything you need to track processes.
Children can call Process.GetCurrentProcess to get their own process id, then make a call to the "mother" process to associate arbitrary data about itself.
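A small sketch of that approach (the process name is hypothetical):

using System;
using System.Diagnostics;

class Mother
{
    static void Main()
    {
        // Find all running instances of the child executable by name.
        foreach (Process child in Process.GetProcessesByName("MyChildProcess"))
        {
            Console.WriteLine("Tracking child process {0}", child.Id);

            // Get notified when a child exits.
            child.EnableRaisingEvents = true;
            child.Exited += (s, e) => Console.WriteLine("Child {0} exited", child.Id);
        }
        Console.ReadLine();
    }
}

// Inside each child, the id it should report to the mother is simply:
// int myId = Process.GetCurrentProcess().Id;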

How to make a process fire an event in another process in c#/.net?

How can I make process-1 fire an event in process-2, send along a few arguments to signal the second process to do a specific action, and optionally receive a reply?
It is possible to do this using the filesystem: there could be a file where process-1 dumps some commands/queries and process-2 constantly reads from it, but this solution is not nice.
Any other way to do it?
(I know that it's easy in VB.NET to fire an event in a running process whenever a new process is started, IF "single instance" is enabled in the project properties.)
You can use a named EventWaitHandle to achieve cross-process synchronization.
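A minimal sketch (the event name is illustrative; note that an EventWaitHandle carries no payload, so any arguments or replies need a separate channel such as a named pipe or a memory-mapped file):

using System.Threading;

static class CrossProcessSignal
{
    // Call this in process-2: it blocks until process-1 signals.
    public static void WaitForWork()
    {
        using (var signal = new EventWaitHandle(false, EventResetMode.AutoReset, "MyApp_DoWork"))
        {
            signal.WaitOne();
            // ... perform the requested action here ...
        }
    }

    // Call this in process-1 to "fire the event" in process-2.
    public static void SignalWork()
    {
        using (var signal = new EventWaitHandle(false, EventResetMode.AutoReset, "MyApp_DoWork"))
        {
            signal.Set();
        }
    }
}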
This article seems to do what you are used to with the VB.NET single-instance model (and it still seems to be a viable option).
In short, it seems there are three approaches to accomplishing single-instance-like solutions:
Use a Mutex (a minimal sketch follows this list)
Cycle through the process list to see if a process with the same name is already running
Use the Visual Basic system for single instance apps (which you can access from C#)
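A minimal sketch of the first approach (the mutex name is illustrative; prefix it with "Global\" if it should span user sessions):

using System;
using System.Threading;

static class Program
{
    static void Main()
    {
        bool createdNew;
        using (var mutex = new Mutex(true, "MyApp_SingleInstance", out createdNew))
        {
            if (!createdNew)
            {
                // Another instance already owns the mutex; signal it if needed, then exit.
                return;
            }

            // ... run the application as the single instance ...
            Console.ReadLine();
        }
    }
}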
If by "process" you mean "app-domain", it's fairly easy to set up eventing between the two. In fact if you have two classes in two separate app-domains (where each class has MarshalByRefObject as a base class), then .net will automatically set up a remoting structure that will make events appear to behave as they would in a single app-domain. (Example here: http://msdn.microsoft.com/en-us/library/system.marshalbyrefobject.aspx)
The key here, though, is "appear". App-domain and process separation are intended to keep resources isolated. To access anything outside of your process you really need help from the operating system, like a shared file, an internet connection, or named pipes - something to that effect. But .NET concepts like events don't exist outside of your space in the runtime.
In other words, you'd have to use something like Named-Pipes (http://msdn.microsoft.com/en-us/library/system.io.pipes.namedpipeserverstream.aspx) if both processes are on the same machine, TCPClient/TCPListener (http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx) if communicating across multiple systems, or WCF if you need something more heavy duty.
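If you go the named-pipe route on one machine, the raw streams are enough for a simple command/reply exchange; a rough sketch (pipe name and message format are made up):

using System.IO;
using System.IO.Pipes;

static class PipeDemo
{
    // Runs in the receiving process: waits for one command and replies.
    public static void RunServer()
    {
        using (var server = new NamedPipeServerStream("MyApp_Events", PipeDirection.InOut))
        {
            server.WaitForConnection();
            var reader = new StreamReader(server);
            var writer = new StreamWriter(server) { AutoFlush = true };

            string command = reader.ReadLine();   // e.g. "DoWork arg1 arg2"
            // ... act on the command ...
            writer.WriteLine("OK");               // optional reply
        }
    }

    // Runs in the sending process: sends a command and reads the reply.
    public static string SendCommand(string command)
    {
        using (var client = new NamedPipeClientStream(".", "MyApp_Events", PipeDirection.InOut))
        {
            client.Connect();
            var writer = new StreamWriter(client) { AutoFlush = true };
            var reader = new StreamReader(client);

            writer.WriteLine(command);
            return reader.ReadLine();
        }
    }
}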
If you'd like to see a specific example of one of these technologies in practice, I can write one up for you, btw.

Pass data to background agent, C# windows phone

I did quite a bit of reading on how to get data from my main app to the background agent. Microsoft's suggestion seems to be to use isolated storage with a mutex.
It is suggested in a few places that you can create a static class in a third project (referenced by both the main app and the agent) and pass the data that way (but with no samples). I could not get that to work; the background agent did not seem to have access to the static class created by the main app.
Has anyone got that to work? Or is isolated storage the best way?
What you have read in a few places is completely impossible by design.
Background agents live in a separate process. If you define a static variable in a shared library, you'll get two completely independent copies of that variable: one in the GUI process and another in the background agent process.
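For completeness, a rough sketch of the isolated-storage-plus-mutex pattern the question mentions (file and mutex names are made up; both the foreground app and the agent would call this same class from the shared library):

using System.IO;
using System.IO.IsolatedStorage;
using System.Threading;

static class SharedState
{
    private static readonly Mutex _mutex = new Mutex(false, "MyApp_SharedFileMutex");

    public static void Write(string text)
    {
        _mutex.WaitOne();
        try
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.OpenFile("shared.txt", FileMode.Create, FileAccess.Write))
            using (var writer = new StreamWriter(stream))
            {
                writer.Write(text);
            }
        }
        finally { _mutex.ReleaseMutex(); }
    }

    public static string Read()
    {
        _mutex.WaitOne();
        try
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (!store.FileExists("shared.txt")) return null;
                using (var stream = store.OpenFile("shared.txt", FileMode.Open, FileAccess.Read))
                using (var reader = new StreamReader(stream))
                {
                    return reader.ReadToEnd();
                }
            }
        }
        finally { _mutex.ReleaseMutex(); }
    }
}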

Question about how to implement a c# host application with a plugin-like architecture

I want to have an application that works as a host to many other small applications. Each of those applications should work as a kind of plugin to this main application. I call them plugins not in the sense that they add something to the main application, but because they can only work with this host application, as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application should expose a set of services that my plugins will want to use, and from my understanding, making data flow in and out of different app domains is not that simple.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they will often need to use the host application's services), but on the other hand I'd like my main application not to suffer if any of them crashes.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't want function calls to be much slower than they are in a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
I've implemented something along these lines using the Managed Addin Framework (MAF) in the System.AddIn namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in its own app domain, in one separate domain for all of the addins, or with each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they report to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
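For a flavour of the host side, here is a rough sketch of discovering and activating addins with System.AddIn (the pipeline path and the IMyAddin host view are hypothetical; a real MAF solution also needs the contract, view, and adapter assemblies laid out in the pipeline folders):

using System;
using System.AddIn.Hosting;

// Host-side view of the addin (hypothetical); in MAF this lives in its own HostView assembly.
public interface IMyAddin
{
    void Start();
}

class AddinHostApp
{
    static void Main()
    {
        string pipelineRoot = @"C:\MyApp\Pipeline";

        // Rebuild the addin cache, then look for addins that match the host view.
        AddInStore.Rebuild(pipelineRoot);
        foreach (AddInToken token in AddInStore.FindAddIns(typeof(IMyAddin), pipelineRoot))
        {
            // Activating with a security level spins the addin up in its own AppDomain,
            // so a misbehaving addin can't take the host down.
            IMyAddin addin = token.Activate<IMyAddin>(AddInSecurityLevel.FullTrust);
            addin.Start();
        }
    }
}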
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regards to Unloading, yes, you can only unload the code and metadata for an assembly if you place it in an AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead associated with keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop 'using' it, so the fact that it's still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way)
My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not prefer AppDomains for the sole purpose of reducing possible memory footprint by allowing Unload.
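If you do end up needing unload, the pattern looks roughly like this (assembly and type names are hypothetical; the plugin type must derive from MarshalByRefObject and the shared IPlugin interface must live in an assembly both domains can load):

using System;

public interface IPlugin
{
    void Run();
}

class PluginHost
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("PluginDomain");
        try
        {
            // The call returns a transparent proxy; the real object lives in PluginDomain.
            IPlugin plugin = (IPlugin)domain.CreateInstanceAndUnwrap(
                "MyPlugins",                  // assembly name (hypothetical)
                "MyPlugins.SamplePlugin");    // type name (hypothetical)

            plugin.Run();   // each call crosses the AppDomain boundary
        }
        finally
        {
            // Unloading the domain releases the plugin assembly's code and metadata.
            AppDomain.Unload(domain);
        }
    }
}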
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through Reflection, but this would only allow communication and would not bring a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" that publishes your plugins in a ServiceHost on a different thread. Since this needs to be really responsive and work without any configuration effort, the host application could communicate with the plugins over a named-pipe channel (NetNamedPipeBinding in WCF), which only talks to local pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
Regards.
