Measure the time spent in each layer of a Web Service - c#

I have a WCF REST web service (.NET 4) which is designed around a multitier architecture (i.e. presentation, logic, data access, etc.). At runtime, and without instrumentation, I would like to measure how much time a single request spends in each layer (e.g. Presentation = 2 ms, Logic = 50 ms, Data = 250 ms).
Considering that I cannot change the method signatures to pass in a StopWatch (or anything similar), how would you accomplish such a thing?
Thanks!

If you cannot add code to your solution, you will end up using a profiler to examine what the code is doing. I would only suggest doing that in an environment other than production, unless the performance problems only show up in production. There are plenty of ways to set up another environment and profile under load.
Profilers hook into your code and examine how long each method takes. There is no "this is the business layer performance" magic, as the profiler sees physical boundaries (class, method) rather than logical ones, but you can examine the output and work out the speed of each layer.
If you can touch code, there are other options you can add which can be turned on and off via configuration. Since you have stated you cannot change code, I would imagine this is a non-option.
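For completeness, if code changes were possible, one such configuration-toggled option might look like the sketch below; the LayerTimer helper, the appSetting key, and the Trace output are all my own illustrative assumptions, not an established pattern:

using System;
using System.Configuration;
using System.Diagnostics;

public static class LayerTimer
{
    // Hypothetical switch; read once from app/web.config appSettings.
    private static readonly bool Enabled =
        ConfigurationManager.AppSettings["EnableLayerTiming"] == "true";

    public static void Time(string layerName, Action work)
    {
        if (!Enabled)
        {
            work();
            return;
        }

        var watch = Stopwatch.StartNew();
        try
        {
            work();
        }
        finally
        {
            watch.Stop();
            Trace.WriteLine(layerName + " took " + watch.ElapsedMilliseconds + " ms");
        }
    }
}

Each layer entry point would then wrap its work in LayerTimer.Time("Data", () => ...), which is of course a code change, so it only applies if the no-changes constraint is relaxed.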

There's a Tracer class in the MS Ent Libs. Syntactically, you use a using statement, and the duration of everything that occurs inside the using statement is logged into the standard Ent Lib logging ecosystem.
using (new Tracer())
{
    // Your code here.
}
More basic info is on MSDN here, and see here for its constructors; there are different constructors that allow you to pass in different identifiers to help you keep track of what's being recorded.
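For the per-layer breakdown the question asks about, the constructor overload that takes an operation name can be used at each layer boundary. A minimal sketch (the layer names and the surrounding class are illustrative, not Ent Lib conventions):

using Microsoft.Practices.EnterpriseLibrary.Logging;

public class RequestHandler
{
    public void Handle()
    {
        // Entry/exit and elapsed time for each named block are written
        // to the configured Ent Lib trace listeners.
        using (new Tracer("Presentation"))
        {
            // presentation work...
            using (new Tracer("Logic"))
            {
                // business logic...
                using (new Tracer("Data"))
                {
                    // data access...
                }
            }
        }
    }
}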

I've recently been working on performance enhancements and looking at .NET performance tools. dotTrace seems to be the best candidate so far.
In particular, the Performance Profiler can produce Tracing Profiles. These profiles detail how long each method is taking:
Tracing profiling is a very accurate way of profiling that involves getting notifications from the CLR whenever a function is entered or left. The time between these two notifications is taken as the execution time of the function. Tracing profiling helps you learn precise timing information and the number of calls at the function level.
You should be able to see exactly how much time a single request takes in each layer by inspecting the profiling data.

Related

IIS High and unstable TTFB

I have my MVC application, with an API used in it, running on IIS 6.0 (7.0 on the production servers). For the API, I use an IHttpHandler implementation in an API.ashx file.
Many different API calls are made to my API.ashx file, but I'll describe one that has no DB calls, so it is definitely NOT a database issue.
At the very beginning of the ProcessRequest method I've added a Diagnostics.Stopwatch to track performance, stopping it at the method's last line.
The output of my stopwatch is always stable (±2 ms) and shows 5 ms (!!!) on average.
But on my site, I see an absolutely unstable and varying Time To First Byte. It may start at 15 ms and grow up to 1 SECOND, averaging 300 ms, yet in the logs I'll still have my stable 5 ms from the stopwatch.
This happens on every server I use, even locally (so this is not a network-related problem) and on production. BTW, all static resources are served really fast (<10 ms).
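For reference, the measurement described would look something like this minimal sketch (the handler body and the log call are placeholders, not the original code):

using System.Diagnostics;
using System.Web;

public class Api : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        var watch = Stopwatch.StartNew(); // first line of the method

        // ... dispatch the API call and write the response ...

        watch.Stop(); // last line of the method
        // Consistently logs ~5 ms, even when the observed TTFB is far higher.
        Trace.WriteLine("ProcessRequest: " + watch.ElapsedMilliseconds + " ms");
    }
}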
Can anyone suggest a solution to this?
This sounds like a difficult one to diagnose without a little more detail. Could you edit your question and add a waterfall chart showing the slow API call in question? A really good tool to produce waterfall charts is http://webpagetest.org
I also recommend reading this article about diagnosing slow TTFBs.
http://www.websiteoptimization.com/speed/tweak/time-to-first-byte/
It goes into great detail about some of the reasons behind a slow response.
Here are some server performance issues that may be slowing down your server.
Memory leaks
Too many processes / connections
External resource delays
Inefficient SQL Queries
Slow database calls
Insufficient server resources
Overloaded Shared Servers
Inconsistent website response times
Hope that helps!

Handling limitations in multithreaded server

In my client-server architecture I have a few API functions whose usage needs to be limited.
The server is written in C# (.NET) and runs on IIS.
Until now I didn't need to perform any synchronization. The code was written in a way that even if a client sent the same request multiple times (e.g. a create request), one call would end with success and all the others with an error (because of the server code + DB structure).
What is the best way to perform such limitations? For example, I want no more than 1 call of the API method foo() per user per minute.
I thought about some SynchronizationTable which would have just one column, unique_text, and before computing the foo() call I'd write something like foo{userId}{date}{HH:mm} to this table. If the call ends with success, I know that there wasn't a foo call from that user in the current minute.
I think there is a much better way, probably in server code, without using the DB for that. Of course, there could be thousands of users calling foo.
To clarify what I need: I think it could be some lightweight DictionaryMutex.
For example:
private static DictionaryMutex FooLock = new DictionaryMutex();

FooLock.lock(User.GUID);
try
{
    ...
}
finally
{
    FooLock.unlock(User.GUID);
}
EDIT:
A solution in which one user cannot call foo twice at the same time is also sufficient for me. By "at the same time" I mean that the server starts handling the second call before returning the result of the first.
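For illustration, such a DictionaryMutex could be sketched with a ConcurrentDictionary of per-key semaphores. This class is my own assumption of what was meant, not a standard .NET type (method names capitalized per C# convention), and it has the in-process limitations pointed out in the answer below:

using System.Collections.Concurrent;
using System.Threading;

public class DictionaryMutex
{
    // One SemaphoreSlim per key, created on demand. Entries are never
    // removed here, so memory grows with the number of distinct keys.
    private readonly ConcurrentDictionary<string, SemaphoreSlim> locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public void Lock(string key)
    {
        locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1)).Wait();
    }

    public void Unlock(string key)
    {
        SemaphoreSlim semaphore;
        if (locks.TryGetValue(key, out semaphore))
        {
            semaphore.Release();
        }
    }
}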
Note that keeping this state in memory in an IIS worker process opens up the possibility of losing all of this data at any instant. Worker processes can restart for any number of reasons.
Also, you probably want to have two web servers for high availability. Keeping the state inside of worker processes makes the application no longer clustering-ready. This is often a no-go.
Web apps really should be stateless; there are many reasons for that. If you can help it, don't manage your own data structures as suggested in the question and comments.
Depending on how big the call volume is, I'd consider these options:
SQL Server. Your queries are extremely simple and easy to optimize for. Expect thousands of such queries per second per CPU core. This can bear a lot of load. You can use SQL Server Express for free.
A specialized store like Redis. Stack Overflow is using Redis as a persistent, clustering-enabled cache. A good idea.
A distributed cache, like Microsoft Velocity. Or others.
This storage problem is rather easy because it fits a key/value store model well. And the data is nearly worthless, so you don't even need backups.
I think you're overestimating how costly this rate limitation will be. Your web service is probably doing far more costly things than a single UPDATE by primary key against a simple table.
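As a sketch of the SQL option, combined with the key format suggested in the question (the table name, column, helper class, and error handling are illustrative):

using System;
using System.Data.SqlClient;

public static class FooRateLimiter
{
    // Tries to claim the per-user, per-minute slot by inserting a unique key.
    // Returns false if foo() was already called by this user in this minute.
    public static bool TryClaimSlot(SqlConnection connection, Guid userId)
    {
        string key = "foo" + userId + DateTime.UtcNow.ToString("yyyyMMddHHmm");
        using (var command = new SqlCommand(
            "INSERT INTO SynchronizationTable (unique_text) VALUES (@key)",
            connection))
        {
            command.Parameters.AddWithValue("@key", key);
            try
            {
                command.ExecuteNonQuery();
                return true; // first call this minute
            }
            catch (SqlException ex)
            {
                if (ex.Number == 2627) // primary key violation
                {
                    return false; // slot already taken
                }
                throw;
            }
        }
    }
}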

ASP.NET Web API performance issue

I have a simple Web API that returns the list of contacts:
public class ContactsApi : ApiController
{
    public List<Contact> GetContacts()
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();

        List<Contact> contacts = ...; // doing some business to get contacts (elided)

        watch.Stop();
        // The operation only takes less than 500 milliseconds.

        return contacts; // returning list of contacts
    }
}
As I've used a Stopwatch to test data-retrieval performance, it's apparent that it takes less than a second. However, when I issue a request to the GetContacts action via the Chrome browser, it takes 4 to 5 seconds to return the data.
Apparently that delay has nothing to do with my data-retrieval code. It seems to me that Web API is running slowly, but I have no idea how to debug and trace it.
Is there any utility that logs timing for the ASP.NET HTTP request pipeline? I mean, something like Navigation Timing, to show at what time each event occurred?
How big is your response? Maybe it is the cost of serialization and transfer. There are a lot of ways to profile it; I would start with one of the tools on the market, like ANTS Performance Profiler or dotTrace.
Are you running it with the debugger? Do some tests without the debugger. I had similar problems with a Web API project I am currently developing, and for us turning off the debugger made the tests take milliseconds instead of seconds.
There also seems to be some startup cost when calling an API for the first time; subsequent requests are always faster.
Try using Fiddler (http://fiddler2.com/), a free web debugging tool. It has most of the features that you are looking for.
4.5 seconds is pretty huge. If you use EF, you could use MiniProfiler.EF.
I experienced some slowdown in the past by incorrectly using Entity Framework's IQueryable (converting it to lists, expanding, ...).
If you are using EF, keep it as an IQueryable as long as possible (.ToList() executes a query); see the sketch below.
Depending on your needs, use debugging tools like MiniProfiler and MiniProfiler.EF; the tools others suggested are probably good too (although I haven't used them).
The cost of serialization can be significant (if you are using DTOs); AutoMapper (and probably other tools) seems slow on large lists. I'd suggest mapping them manually in an extension method if you really want performance on big lists.
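To illustrate the IQueryable point, here is a hedged sketch; ContactsDbContext and the Contact properties are hypothetical:

using System.Collections.Generic;
using System.Linq;

public static class ContactQueries
{
    public static List<Contact> GetActiveContacts(ContactsDbContext context)
    {
        // Composing on IQueryable pushes the filter, sort and paging
        // into the generated SQL: one query, only 50 rows transferred.
        return context.Contacts
                      .Where(c => c.IsActive)
                      .OrderBy(c => c.Name)
                      .Take(50)
                      .ToList();

        // By contrast, context.Contacts.ToList().Where(...) would pull
        // the entire table into memory first and filter it in .NET.
    }
}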

Web server performance/test tool

I'm looking for a tool that simply tests a web server on which I've developed an application.
The tool must tell me, for the entire web server or for a single page in my application:
- at most how many users it can serve
- how much CPU it can use
- its transactions per second (TPS)
Please do not confuse my question with a kind of http(s) listening tool like Fiddler. I do not want to listen, I want to test it. (This is like a "Can You Run It" tool for a game.)
ApacheBench (don't be fooled by the name) will load up your app and give you a count of how many requests per second you can deliver. In addition, its concurrency options will give you an idea of user count.
See also the Microsoft Web Capacity Analysis Tool.
You need two things:
A load tester. See these questions/answers:
load test / stress test web services
Best way to stress test a website
Open source Tool for Stress, Load and Performance testing
On your server, use Performance Monitor to measure the things you're interested in (memory use, processor use, paging...) while it's under load. Performance Monitor also has ASP.NET-specific counters.
Like Ian said, ApacheBench is a good starting tool. If you find you need something a bit more programmable or robust, the next free step up is definitely JMeter, which also happens to be an Apache Foundation project. It is a Java client application that can record a series of user actions on your site via a built-in proxy server and then replay them for X users / N minutes / Y iterations / etc. to simulate real traffic. You can even record different activity segments and play them back at alternate ratios (e.g. 20% submit content, 80% read content).
Myra,
I would think that most application server providers have a monitoring tool that allows you to make that kind of decision. For example, JBoss has JOPR or JON (the same tool, but the latter is supported by Red Hat). Others, like webappVM, are specifically designed to run and gather metrics in a virtualized cloud. You need to look at what you have, your budget, and what is available for that environment.
Hope this helps,

Using Performance Counters to track windows services

I'm working with a system that consists of several applications and services, almost all using a SQL database.
The Windows services do different things at different times, and I would like to track them: on some deployed systems we see the machine running high on CPU, and we see that the SQL process is running high, but we can't be sure which service is responsible for it.
I wonder if Performance Counters are good for this job.
Basically I would like to be able to see at a certain moment which service woke up and is processing something.
It seems to me that I would end up with a perf counter that only has the value 0 or 1 for each service, to show whether it is doing something, but this doesn't seem like normal usage for perf counters.
Are performance counters suitable?
Do you think I should track this in a different way?
If your monitoring framework/approach already centers around monitoring performance counters, this is a viable approach.
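A minimal sketch of the 0/1 "busy" counter idea from the question follows; the category and counter names are illustrative, and creating a category requires administrative rights (typically done once at install time):

using System.Diagnostics;

public static class ServiceActivityCounter
{
    private const string Category = "My Services";
    private const string Counter = "FooService Busy";

    // Run once, elevated, e.g. from the service installer.
    public static void Install()
    {
        if (PerformanceCounterCategory.Exists(Category))
            return;

        var counters = new CounterCreationDataCollection();
        counters.Add(new CounterCreationData(
            Counter,
            "1 while FooService is processing, 0 while idle.",
            PerformanceCounterType.NumberOfItems32));

        PerformanceCounterCategory.Create(
            Category,
            "Shows which service is currently doing work.",
            PerformanceCounterCategoryType.SingleInstance,
            counters);
    }

    // Called by the service around its work.
    public static void SetBusy(bool busy)
    {
        using (var counter = new PerformanceCounter(Category, Counter, false))
        {
            counter.RawValue = busy ? 1 : 0;
        }
    }
}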
Personally I find more detailed instrumentation necessary to really understand what's happening in my services (though maybe that has to do with the nature of my services).
I use .NET Logging Framework because it's simple and can write to multiple targets including log files, the event log, and a TCP socket (I have a simple monitor that listens on the logging socket for each app server and shows me in real-time what's happening).
Performance Counters are attractive because they are really lightweight, but as you say, they only allow you to capture numeric values. Sure, there's a slew of different types of values you can record, such as averages, deltas and totals, but they have to be numbers.
If you need more information than that, you must resort to some other type of instrumentation. In your case, it sounds like your need goes more in that direction.
If your services don't wake up and suspend themselves too often, it sounds like writing an informational message to a custom event log might be a good idea. Create a custom event log for the application if you expect a fair number of these, so as not to flood the regular Application event log.
The .NET Trace API will be a better option if you expect the instrumentation to generate too much data for the normal event log. You can configure your application(s) to trace or not based on app/web.config, although a change will require a restart of the app. This is a good option if you only want to use the instrumentation for troubleshooting, and it would otherwise generate too much data or degrade performance too much. Another good thing about the Tracing API is that you can trace at multiple levels, so even if you have written code to trace very verbosely, you will only see that verbose trace data if you enable verbose tracing. That gives you better control of just what is being traced.
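A minimal sketch of the multi-level tracing point; the source name, the event IDs, and the switch level (normally set in app.config under system.diagnostics) are illustrative:

using System.Diagnostics;

public class FooService
{
    // The level filter is usually configured in app/web.config;
    // it is passed here directly for brevity.
    private static readonly TraceSource Tracing =
        new TraceSource("FooService", SourceLevels.Information);

    public void DoWork()
    {
        Tracing.TraceEvent(TraceEventType.Information, 1, "Woke up, starting batch");

        // Only emitted when the switch is set to Verbose:
        Tracing.TraceEvent(TraceEventType.Verbose, 2, "Processing item details...");

        Tracing.TraceEvent(TraceEventType.Information, 3, "Batch complete");
    }
}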
Eric J has a good point. I think if you really want to capture "timing" performance, you'll have to use some other sort of logging with start and stop time logs. I personally like log4net, though it can be a pain to configure the first time around.
