Using Performance Counters to track Windows services - C#

I'm working with a system that consists of several applications and services, almost all using a SQL database.
The Windows services do different things at different times, and I would like to track them: on some deployed systems we see the machine running high on CPU and the SQL process consuming most of it, but we can't tell which service is responsible for it.
I wonder if Performance Counters are good for this job.
Basically I would like to be able to see at a certain moment which service woke up and is processing something.
It seems to me that I can end up having a perfcounter that only has the value 0 or 1 for each service to show if it is doing something, but this doesn't seem like a normal usage for perfcounters.
Are performance counters suitable?
Do you think I should track this in a different way?

If your monitoring framework/approach already centers around monitoring performance counters, this is a viable approach.
Personally I find more detailed instrumentation necessary to really understand what's happening in my services (though maybe that has to do with the nature of my services).
I use .NET Logging Framework because it's simple and can write to multiple targets including log files, the event log, and a TCP socket (I have a simple monitor that listens on the logging socket for each app server and shows me in real-time what's happening).

Performance Counters are attractive because they are really lightweight, but as you say, they only allow you to capture numeric values. Sure, there's a slew of different types of values you can record, such as averages, deltas and totals, but they have to be numbers.
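That said, the 0/1 "busy" flag from the question is easy to expose. Here is a minimal sketch, with hypothetical category and counter names (creating a category requires admin rights, so it is normally a one-time install step):

using System.Diagnostics;

class BusyFlag
{
    // Hypothetical names; pick one counter per service you want to watch.
    const string Category = "MyServices";
    const string Counter = "OrderServiceBusy";

    public static void EnsureCategory()
    {
        if (!PerformanceCounterCategory.Exists(Category))
            PerformanceCounterCategory.Create(Category, "Per-service activity flags",
                PerformanceCounterCategoryType.SingleInstance,
                Counter, "1 while the service is processing, 0 when idle");
    }

    public static void RunJob()
    {
        using (var busy = new PerformanceCounter(Category, Counter, readOnly: false))
        {
            busy.RawValue = 1;   // visible in perfmon: the service woke up
            // ... do the actual work ...
            busy.RawValue = 0;   // back to idle
        }
    }
}

Watching these counters side by side in perfmon then shows which service is active when the CPU spikes.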
If you need more information than that, you must resort to some other type of instrumentation. In your case, it sounds like your need goes more in that direction.
If your services don't wake up and suspend themselves too often, it sounds like logging an informational message to a custom event log might be a good idea. Create a custom event log for the application if you expect a fair number of these, so as to not flood the regular Application event log.
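A minimal sketch of that, with hypothetical source and log names (creating the source needs admin rights, so it is usually done once at install time):

using System.Diagnostics;

class ServiceEventLog
{
    // One-time setup, usually done by an installer:
    public static void Setup()
    {
        if (!EventLog.SourceExists("OrderService"))
            EventLog.CreateEventSource("OrderService", "MyCompanyServices");
    }

    // At runtime, whenever the service wakes up or suspends itself:
    public static void LogWake()
    {
        EventLog.WriteEntry("OrderService", "Woke up, processing pending work",
            EventLogEntryType.Information);
    }
}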
The .NET Trace API will be a better option if you expect the instrumentation to generate too much data for the normal event log. You can configure your application(s) to trace or not based on app/web.config, although a change will require a restart of the app. This is a good option if you only wish to use the instrumentation for troubleshooting, but it otherwise generates too much data or if tracing itself degrades performance too much. Another good thing about the Tracing API is that you can Trace on multiple levels, so even if you have written code to Trace very verbosely, you will only see that verbose trace data if you enable verbose tracing. That gives you better control of just what is being traced.
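To illustrate the level control, here is a sketch using TraceSource (the source name and event ids are arbitrary; the switch level and listeners come from app.config):

using System.Diagnostics;

class OrderProcessor
{
    // The switch level for "OrderService" is set in app.config, so the
    // verbose line below only appears when verbose tracing is enabled there.
    static readonly TraceSource Trace = new TraceSource("OrderService");

    public void ProcessBatch(int rows)
    {
        Trace.TraceEvent(TraceEventType.Information, 1, "Batch started");
        for (int i = 0; i < rows; i++)
        {
            Trace.TraceEvent(TraceEventType.Verbose, 2, "Processing row {0}", i);
        }
        Trace.TraceEvent(TraceEventType.Information, 3, "Batch finished");
    }
}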

Eric J has a good point. I think if you really want to capture timing performance, you'll have to use some other sort of logging with start and stop log entries. I personally like log4net, though it can be a pain to configure the first time around.
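A rough sketch of the start/stop idea with log4net (the logger name is arbitrary; the elapsed time could also be computed later from the two timestamps):

using System.Diagnostics;
using log4net;

class ImportJob
{
    static readonly ILog Log = LogManager.GetLogger(typeof(ImportJob));

    public void Run()
    {
        Log.Info("ImportJob start");
        var sw = Stopwatch.StartNew();
        // ... the actual work ...
        sw.Stop();
        Log.InfoFormat("ImportJob stop, elapsed {0} ms", sw.ElapsedMilliseconds);
    }
}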

Related

How to run C# Task Parallel Library across multiple machines (like a render farm)?

I'm writing a calculation intensive program in C# using the TPL. Some preliminary benchmarking shows good reduction in computation time through using processors with more cores/threads.
However, there is a limit to how many threads are available on a single CPU (I think even the best Xeons money can buy currently top out at around 16).
I've been reading about how render farms with a 'grid' of multiple inexpensive CPUs in their own machines are a good way to increase the overall core count, but I have no idea how I go about implementing one of these. Is it implemented at the OS level with Microsoft server technology (and if so, how?), or do I also need to modify the C# code itself?
Any help or links to existing information would be greatly appreciated.
If you want to do this at scale (100s of nodes) then developing your own system is hard. You have to handle nodes becoming unavailable, data replication to each node, tracking job progress... it's a long list. You also need to consider the sort of communication you're going to require between your nodes. Remember that the cost of sending a message (data) from one thread to another is tiny compared to the cost of sending it to another machine across a network (even a fast one). You may have to completely rewrite your multithreaded application to run well on a distributed system, even to the point of using a completely different algorithm.
Hadoop
Microsoft had plans to commercialize Dryad as LINQ to HPC, but this project was sidelined a while back (I worked on this project before I left Microsoft). I believe you can still get the final "public preview", but it's unsupported. The SQL team opted to work with the Hadoop/Hortonworks people on getting a Windows/Azure/.NET-friendly Hadoop distribution off the ground. As far as I know, the only thing they have delivered is HDInsight, a Hadoop service running in Azure.
There is now a Microsoft .NET SDK For Hadoop which will allow you to manage a cluster, submit jobs, etc. It does not seem to let you write code that executes on the Hadoop nodes. You can, however, use the Hadoop streaming API. This is fairly low-level but language-agnostic, so you can pretty much use it to integrate map-reduce code written in any language with Hadoop. More details on this can be found in this blog post:
Hadoop for .NET Developers
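To give a flavor of the streaming API: a mapper is just a console app that reads input lines from stdin and writes tab-separated key/value pairs to stdout, so a C# word-count mapper could look roughly like this (the word-count logic is illustrative only):

using System;

class WordCountMapper
{
    static void Main()
    {
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            foreach (var word in line.Split(' ', '\t'))
            {
                if (word.Length > 0)
                    Console.WriteLine("{0}\t1", word); // key TAB value
            }
        }
    }
}

The compiled .exe is then handed to the streaming job as its mapper command.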
If you want to do this at a smaller scale (10s of nodes) then I would look for something like MPI .NET. It looks like this project has been abandoned, but something similar is probably what you want.
You might look into something like Dryad - http://research.microsoft.com/en-us/projects/dryadlinq/default.aspx
It might on the other hand be a bit too much for your situation, but the ideas in Dryad could be simplified to your needs.
You might also look into making your own TaskScheduler, which could handle distributing the work to agents running on other boxes, but you would have to implement simple socket client/server communication to push and pull the data.
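A skeleton of such a scheduler, showing only the overrides TaskScheduler requires. The remote dispatch itself is left as a comment because Task delegates cannot be shipped across machines directly; the work would have to be described as serializable job data:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class RemoteAgentScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        // Hypothetical: serialize a description of the work, send it to an
        // agent over your socket protocol, and complete the task when the
        // result comes back. Here we just fall back to the local thread pool.
        ThreadPool.UnsafeQueueUserWorkItem(_ => TryExecuteTask(task), null);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return TryExecuteTask(task); // local fallback on the calling thread
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return new Task[0]; // for debugger support only
    }
}

You would then pass an instance to Task.Factory.StartNew via its TaskScheduler parameter.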
Another and a bit odd suggestion, which might be okay for investigating things, is to do the following.
Let the master of the calculation cut the problem into pieces, one per available client computer.
Write the parameters to kick off the calculation for each client to a file shared by all on the network.
Let the clients look for files dedicated to them, and kick off the calculation for their piece when the file appears. The output is written back to a result file.
The server will sit and listen for all clients completing their jobs.
The files could be replaced with a database, low-level sockets, REST services, web services, etc., depending on your needs.
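A hedged sketch of the client side of this file-drop scheme (the share path and naming convention are invented):

using System;
using System.IO;

class FileDropWorker
{
    static void Main()
    {
        var share = @"\\server\jobs"; // hypothetical shared folder
        var watcher = new FileSystemWatcher(share, Environment.MachineName + ".job");

        watcher.Created += (s, e) =>
        {
            var parameters = File.ReadAllText(e.FullPath);
            var result = Calculate(parameters); // the actual calculation
            File.WriteAllText(
                Path.Combine(share, Environment.MachineName + ".result"), result);
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Waiting for job files; press Enter to quit.");
        Console.ReadLine();
    }

    static string Calculate(string parameters)
    {
        return parameters; // placeholder for the real work
    }
}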

Write custom events that can be used by 3rd party applications

Is it possible to write custom events that can be handled by 3rd party applications?
We have an existing app, and we're finding that many people who use it are writing SQL triggers to add custom functionality of their own when certain things happen in our app.
This has led to some instances where our own processes have slowed down due to shoddy 3rd-party triggers that block our app.
I was thinking we could make this easier for 3rd party devs if we could raise events that they could handle in their own services or apps instead of having to use triggers.
That way we'd lose the blocking because we can just fire the event and continue. Also their slowness/potential crashes would happen outside of our process.
A) is this a reasonable approach?
B) Is this possible? Can I scope an event beyond the scope of my app?
EDIT
I have since found other related questions to be of interest:
wcf cross application communication
Interprocess pubsub without network dependency
Listen for events in another application (This seems very close to what I'm after)
I guess I'm looking for the simplest approach but if we wanted to adopt this method across a number of other apps within our company we'd have some further challenges:
We have some older apps in vb6 and delphi - from those I'd just like to be able to listen for their events in my (or 3rd party) newer C# apps or services.
For now, I'll look at:
Managed Spy and http://pubsub.codeplex.com
No, events are only usable by code that's loaded into your own process. If you don't trust these people now, you really don't want to expose yourself to shoddy code loaded into your own process that throws unhandled exceptions and terminates your app. You'll get the phone call, not them. Besides, they'll use such an event to run code that slows down your app.
In general, anything you do with a dbase will run with an entirely unpredictable amount of overhead. Not just because of triggers added by others; the dbase server could simply be bogged down by other work, and it will naturally slow down over time as it stores more and more data. Make sure that doesn't make your app difficult or unpleasant to use: dbase operations typically must run on a worker thread or be done asynchronously with, say, BeginExecuteXxxx(). And make it obvious in your UI that progress is stalled by the dbase server, not by any code that you wrote. Saves you from having to do a lot of explaining.
What you're looking to do is basically to send messages to other processes. For this, you need some sort of IPC mechanism. Since it sounds like you'll have multiple listeners to each message, a mailslot is probably the best way. Unfortunately, .NET doesn't have built-in support for mailslots, so you'll have to use P/Invoke.
If you're looking for a built-in solution, then you could use named pipes, WCF, .NET Remoting, or bare TCP or UDP. With any of these, though, you'll have to loop through all of your listeners and send the message one at a time to each of them, which is not that big of a deal, but maintaining the separate connections is a little more of a hassle.
Note that with WCF and .NET Remoting, you're pretty much limiting your clients to using .NET as well. If your clients might be native or some other platform, then mailslots, named pipes, and TCP/IP are your best bet.
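Since mailslots need P/Invoke from .NET, here is a hedged minimal sketch of both sides (the mailslot name is invented; each write becomes one discrete message, and reads block until a message arrives):

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;

class MailslotSketch
{
    const string Name = @"\\.\mailslot\MyAppEvents"; // hypothetical name

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateMailslot(string name, uint maxMessageSize,
        int readTimeout, IntPtr securityAttributes);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(string name, uint access, uint shareMode,
        IntPtr securityAttributes, uint creationDisposition, uint flags, IntPtr template);

    // Listener (the 3rd-party side): owns the mailslot and blocks on reads.
    public static void Listen()
    {
        using (var handle = CreateMailslot(Name, 0, -1 /* wait forever */, IntPtr.Zero))
        using (var reader = new StreamReader(
            new FileStream(handle, FileAccess.Read), Encoding.UTF8))
        {
            string message;
            while ((message = reader.ReadLine()) != null)
                Console.WriteLine("event received: " + message);
        }
    }

    // Publisher (your app's side): open for write, fire the message, move on.
    public static void Publish(string message)
    {
        const uint GENERIC_WRITE = 0x40000000, FILE_SHARE_READ = 1, OPEN_EXISTING = 3;
        using (var handle = CreateFile(Name, GENERIC_WRITE, FILE_SHARE_READ,
            IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        using (var writer = new StreamWriter(
            new FileStream(handle, FileAccess.Write), Encoding.UTF8))
        {
            writer.WriteLine(message); // one write = one mailslot message
        }
    }
}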

Measure the time spent in each layer of a Web Service

I have a WCF REST Web Service (.NET 4) which is design based on multitier architecture (i.e. presentation, logic, data access, etc.). At runtime and without instrumentation, I would like to measure how much time a single request takes for each layer (e.g. Presentation = 2 ms, Logic = 50 ms, Data = 250 ms).
Considering that I cannot change the method signature to pass in a StopWatch (or anything similar), how would you accomplish such thing?
Thanks!
If you cannot add code to your solution, you will end up using profilers to examine what the code is doing. I would only suggest doing that in an environment other than production, unless the performance issues only show up in production. There are plenty of ways to set up another environment and profile under load.
Profilers will hook into your code and examine how long each method takes. There is no "this is the business layer performance" magic, as the profiler recognizes physical boundaries (class, method) rather than logical ones, but you can examine the output and determine speed.
If you can touch code there are other options you can add which can be turned on and off via configuration. Since you have stated you cannot change code, I would imagine this is a non-option.
There's a tracer in the MS Ent Libs; syntactically you wrap your code in a using statement, and the duration of everything that occurs inside the using statement is logged into the standard Ent Lib logging ecosystem.
using (new Tracer("DataAccessLayer")) // the operation name recorded with the timing entries
{
    // Your code here.
}
More basic info on MSDN here, and see here for its constructors; there are different constructors that allow you to pass in different identifiers to help you keep track of what's being recorded.
I've recently been working on performance-enhancements and looking at .NET performance tools. DotTrace seems to be the best candidate so far.
In particular, the Performance Profiler can produce Tracing Profiles. These profiles detail how long each method is taking:
Tracing profiling is a very accurate way of profiling that involves getting notifications from the CLR whenever a function is entered or left. The time between these two notifications is taken as the execution time of the function. Tracing profiling helps you learn precise timing information and the number of calls on the function level.
From the per-method stats the tool produces, you should be able to see exactly how much time a single request takes in each layer by inspecting the profiling data.

Viewing output of multiple .Net console apps in one place

I have a C# console app which I'm deploying around 20 times (with different config settings) and running. As you might imagine it's hard to keep an eye on what's happening with 20 apps running (I'm eventually going to deploy these as windows services), so is there anything that can show the output of these in one place easily?
I've thought about log files but these could get big quite fast, and it is a lot of files to open and look at - I just want to have some output to check things are still running as expected.
Edit:
I'm going to be writing errors and stop/start information to the database. What I'm talking about here is the general processing information, which isn't all that relevant to revisit, but is interesting to look at while it's running in the console app.
I have successfully used log4net and its configurable UdpAppender. You can then point all the UdpAppenders at a single machine, where you can receive the UDP messages with, for example, Log4View.
Since it's configurable, you can use it when you install and debug in production and then increase the logging level to only output ERROR messages instead of DEBUG or INFO messages.
http://logging.apache.org/log4net/
http://www.log4view.com
http://logging.apache.org/log4net/release/config-examples.html
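For reference, the same setup can be done in code instead of XML. A sketch assuming log4net's UdpAppender and Log4View's log4j-XML format (the address and port are placeholders):

using System.Net;
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

class UdpLoggingSetup
{
    public static ILog Configure()
    {
        var layout = new XmlLayoutSchemaLog4j(); // a format Log4View understands
        layout.ActivateOptions();

        var appender = new UdpAppender
        {
            RemoteAddress = IPAddress.Parse("192.168.1.10"), // the monitoring machine
            RemotePort = 8080,                               // the port Log4View listens on
            Layout = layout
        };
        appender.ActivateOptions();
        BasicConfigurator.Configure(appender);

        return LogManager.GetLogger("ConsoleApp01"); // one logger name per deployment
    }
}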
Maybe it's because I come from a heavy DB background, but how about using SQL Server with a Log table to track activity across the different apps?
DBs are geared towards concurrency and will easily handle multiple applications inserting data into the same Log table; you also get the option of slicing and dicing the data as much as you like, taking advantage of the aggregation functions already available in a DB environment.
If you go down that route, you will probably need to consider maintaining that table (Log retention period, etc.).
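A minimal sketch of such a logger, assuming a hypothetical Log table (parameterized inserts keep concurrent writers cheap for the server):

using System;
using System.Data.SqlClient;

class DbLogger
{
    // Assumed schema:
    // CREATE TABLE Log (LoggedAt DATETIME2, AppName NVARCHAR(50), Message NVARCHAR(MAX))
    readonly string _connectionString;
    readonly string _appName;

    public DbLogger(string connectionString, string appName)
    {
        _connectionString = connectionString;
        _appName = appName;
    }

    public void Write(string message)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Log (LoggedAt, AppName, Message) VALUES (@at, @app, @msg)", conn))
        {
            cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@app", _appName);
            cmd.Parameters.AddWithValue("@msg", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}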
You could also potentially start using tools such as Splunk to collate all the log data and start correlating app failures with system or environment failures (if these are being tracked).
I'd second Mikael Östberg and recommend using a logger library (log4net, or nlog). There are many options where you can send messages to either a database or queues, etc... Since you can turn the logging on or off easily, you can even keep it in your services as a monitor hook in case something weird happens

A method for high-load web site logging to file?

I need to build in click and conversion tracking (more specific and focused than IIS log files) to an existing web site. I am expecting pretty high load. I have investigated using log4net, specifically the FileAppender Class, but the docs explicitly state: "This type is not safe for multithreaded operations."
Can someone suggest a robust approach for a solution for this type of heavy logging? I really like the flexibility log4net would give me. Can I get around the lack of safe multi-threading using lock? Would this introduce performance/contention concerns?
While FileAppender itself may not be safe for multithreaded use, I'd certainly expect the normal access routes to it via log4net to be thread-safe.
From the FAQ:
log4net is thread-safe.
In other words, either the main log4net framework does enough locking, or it has a dedicated logging thread servicing a producer/consumer queue of log messages.
Any logging framework which wasn't thread-safe wouldn't survive for long.
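To make the producer/consumer idea concrete, here is an illustration of the pattern (not log4net's actual internals): many threads enqueue lines while one dedicated thread owns the file, so single-threaded file writing stays safe.

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class QueuedFileLogger : IDisposable
{
    readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    readonly Task _writer;

    public QueuedFileLogger(string path)
    {
        // The only thread that ever touches the file.
        _writer = Task.Factory.StartNew(() =>
        {
            using (var file = new StreamWriter(path, true))
                foreach (var line in _queue.GetConsumingEnumerable())
                    file.WriteLine(line);
        }, TaskCreationOptions.LongRunning);
    }

    // Safe to call from any number of request threads.
    public void Log(string message)
    {
        _queue.Add(DateTime.UtcNow.ToString("o") + " " + message);
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // let the writer drain the queue, then exit
        _writer.Wait();
    }
}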
You could check out the Logging Application Block available in the Microsoft Enterprise Library. It offers a whole host of different types of loggers, as well as a handy GUI configurator that you can point at your app.config\web.config in order to modify it, so there's no need to sift through the XML yourself.
Here's a link to a nice tutorial on how to get started with it:
http://elegantcode.com/2009/01/20/enterprise-library-logging-101/
I'm also interested in the answer, but I'll tell you what I was told when I tried to find a solution.
An easy way around it would be to use something like a SQL database. If the data you want isn't well suited for that, you could have each page access write its own log file and then periodically merge the log files.
However, I'm sure there's a better solution.
When using syslog, you won't have any threading issues. Syslog sends the log lines over UDP to a log daemon (which could be on the same machine).
This works especially well if you have multiple processes/services running, since all log lines are aggregated in one viewing tool.
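For illustration, a syslog line from C# is just one small UDP datagram (the host and message are placeholders; <134> encodes facility local0, severity informational per RFC 3164):

using System.Net.Sockets;
using System.Text;

class SyslogSender
{
    public static void Send(string message)
    {
        using (var udp = new UdpClient("loghost.example.com", 514)) // hypothetical log host
        {
            var line = "<134>webapp: " + message;
            var bytes = Encoding.ASCII.GetBytes(line);
            udp.Send(bytes, bytes.Length);
        }
    }
}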
If you expect really heavy loads, look at how the guys from Facebook do it: http://developers.facebook.com/scribe/ You can use their open-source log tool. I don't think you'll hit their kind of load just yet, so you should be safe for some time to come!
