Can Visual Studio CPU sampling help identify database vs. method execution time? - c#

In Java webapps, I often do CPU sampling through jvisualvm to distinguish database bottlenecks from other processing in the application. Long-running database queries will show up as time spent within the JDBC driver's classes and methods.
It looks like ASP.NET tooling and database drivers don't work the same way. If I profile with CPU sampling, I don't see any of the time spent in methods while waiting for I/O. If I want to compare database and application bottlenecks side-by-side, I need to use instrumentation.
In other words, CPU samples are not just an approximation of elapsed time -- they are fundamentally a different metric.
Is this because of a difference in how the CPU sampling process works in Java vs. the CLR, or is it a difference in how the database drivers work? Or a difference in how the CLR vs. JVM treat time spent waiting for I/O?

Related

Comparison of two separate Data Access Layers (DALs) of a project

I have two separate DALs for a project and I want to compare them to see which one performs better. The performance metrics I have in mind are memory usage, query execution time, etc.
The problems that I have faced are:
1) I have used the Visual Studio profiler and generated two reports, but in some cases the values that are common to both projects don't match up (I have read that instrumentation overhead is to blame for this).
2) I have an insert method in both DALs whose performance I want to compare, but when I use the compare-reports option it does not show a value, since the comparison is between methods in different projects.
Any suggestions about the approach I should use would be helpful.
Also, is profiling the only way to judge application performance in my case?
Profiling tools usually distort measurements heavily, so it's not surprising that the results you get are inconsistent.
I would suggest simply using Stopwatch to time a loop of, say, 100,000 DB access operations through each of the DALs. You can measure the average time per operation as well as the total test duration.
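A minimal sketch of such a harness might look like this (MyFirstDal and its Insert method are hypothetical stand-ins for whatever your DALs actually expose):

```csharp
using System;
using System.Diagnostics;

// Stand-in for one of the DALs under test; swap in the real implementation.
class MyFirstDal
{
    public void Insert(int id, string value)
    {
        // The real DAL would execute an INSERT against the database here.
    }
}

class DalBenchmark
{
    static void Main()
    {
        const int iterations = 100000;
        var dal = new MyFirstDal();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            dal.Insert(i, "sample row");
        }
        sw.Stop();

        Console.WriteLine("Total: {0} ms", sw.ElapsedMilliseconds);
        Console.WriteLine("Average: {0:F4} ms per operation",
            (double)sw.ElapsedMilliseconds / iterations);
    }
}
```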
While the loop is running, have Perfmon display counters for CPU, .NET CLR Memory\# Bytes in all Heaps, and .NET CLR Memory\% Time in GC. It would also be useful to measure transaction throughput on the database side, e.g. MSSQL$SQLEXPRESS\Transactions/sec and MSSQL$SQLEXPRESS:SQL Statistics\Batch Requests/sec (assuming you use SQL Express; other DBMSs usually supply similar counters).
I think this should give you quite enough information to decide.

How much CPU should a single-threaded application use?

I have a single-threaded console application.
I am confused about the concept of CPU usage. Should a good single-threaded application use ~100% of the CPU (since it is available), or should it avoid using a lot of CPU (since that can cause the computer to slow down)?
I have done some research but haven't found an answer to my confusion. I am a student and still learning so any feedback will be appreciated. Thanks.
It depends on what the program needs the CPU for. If it has to do a lot of work, it's common to use all of one core for some period of time. If it spends most of its time waiting for input, it will naturally tend to use the CPU less frequently. I say "less frequently" instead of "less" because:
Single threaded programs are, at any given time, either running, or they're not, so they are always using either 100% or 0% of one CPU core. Programs that appear to be only using 50% or 30% or whatever are actually just balancing periods of computational work with periods of waiting for input. Devices like hard drives are very slow compared to the CPU, so a program that's reading a lot of data from disk will use less CPU resources than one that crunches lots of numbers.
It's normal for a program to use 100% of the CPU sometimes, often even for a long time, but it's not polite to hog it when you don't need it (i.e. busy-looping). Such behavior crowds out other programs that could be using the CPU.
The same goes for the hard drive. People forget that the hard drive is a finite resource too, mostly because Task Manager doesn't show hard drive usage as a percentage. It's difficult to gauge hard drive usage as a percentage of the total, since disk accesses don't have a fixed speed the way the processor does. However, it takes much longer to move 1GB of data on disk than it does to use the CPU to move 1GB of data in memory, and the performance impact of HDD hogging is as bad as or worse than that of CPU hogging: it tends to slow your system to a crawl without any visible CPU usage, as you have probably seen before.
Chances are that any small academic programs you write at first will use all of one core for a short period of time and then wait. Simple things like prompting for a number at the command prompt are the waiting part, and doing whatever academic operation on it afterwards is the active part.
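To make the busy-looping point above concrete, here is a minimal sketch contrasting a spin-wait with a blocking wait (purely illustrative):

```csharp
using System;
using System.Threading;

class WaitingStyles
{
    static void Main()
    {
        var signal = new ManualResetEvent(false);

        // Impolite alternative: while (!done) { } would burn 100% of one core.
        // Polite version: block on a wait handle and use ~0% CPU until signaled.

        new Thread(() =>
        {
            Thread.Sleep(2000);   // simulate slow I/O or a user taking their time
            signal.Set();         // wake the waiting thread
        }).Start();

        signal.WaitOne();         // blocks without consuming CPU
        Console.WriteLine("Work can continue now.");
    }
}
```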
It depends on what it's doing. Different types of operations have different needs.
There is no objective way to answer this question that applies across the board.
The only answer that's true is "it should use only the amount of CPU necessary to do the job, and no more."
In other words, optimize as much as you reasonably can. In general, the lower the CPU usage, the faster the application will perform, the less it will crash, and the less it will annoy your users.
Typically, an algorithmically heavy task such as predicting the weather has to be managed by the OS, because it will use all of the CPU for as much time as it is allowed to run (until it's done).
On the other hand, a graphical application with a static user interface, like a Windows Forms application for storing a bit of data for record-keeping, should require very low CPU usage, since it's mainly waiting for the user to do something.

Anyone know of a good C# code Profiler / Analyzer to help optimize a webservice

I have a webservice that is in much need of optimization. It's part of an enterprise application that resides on a virtual server machine and has become a huge bottleneck. I'm confident in my ability to make this more efficient, but I was wondering if anyone out there has had a good experience with a profiler or optimization tool that could help point me to the trouble spots.
The webservice's main function is to generate PDFs, which are created using SQL Reports and a third-party PDF writer utility. Basically it receives an ID and creates X PDFs based on the number of forms associated with that ID. So it has a loop which runs an average of 8 times per ID, and there are thousands of IDs sent daily. Needless to say, there is always a backlog of PDFs to be created, which the client would rather not see.
I have also thought about using multiple threads to generate the PDF pages asynchronously, but I'm hesitant because they said they had issues with multi-threading on the "Virtual Server". So if anyone can point me to a good tutorial or advice about multi-threading on a virtual server, I would appreciate that too.
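For what it's worth, a parallel version of that per-ID loop could look roughly like this (a sketch only; it assumes .NET 4's Parallel.ForEach is available, and GeneratePdf plus the list of form IDs are hypothetical stand-ins for the real SQL Reports / PDF writer calls):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class PdfBatchSketch
{
    // Hypothetical stand-in for the SQL Reports call plus the PDF writer.
    static void GeneratePdf(int formId)
    {
        Thread.Sleep(500);   // simulate slow report rendering
        Console.WriteLine("Generated PDF for form {0}", formId);
    }

    static void Main()
    {
        // Hypothetical: the ~8 forms associated with one incoming ID.
        var formIds = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8 };

        // Render the forms for one ID concurrently instead of one after another,
        // capping the parallelism so the (virtual) server isn't swamped.
        Parallel.ForEach(
            formIds,
            new ParallelOptions { MaxDegreeOfParallelism = 4 },
            GeneratePdf);
    }
}
```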
Thanks for any help you can give.
I've used this one before and it's great:
JetBrains dotTrace
http://www.jetbrains.com/profiler/whatsnew/
Try Telerik's JustTrace; it has a lot of neat stuff. It comes with a 60-day free trial with support, so you can try it out first.
Fast Profiling
JustTrace aims to redefine fast memory and performance profiling. It adds minimal overhead to the profiled application, allows near seamless execution, and enables analysis-in-place, thus eliminating the need to move the application from its environment. The user can examine different stages of the application’s behavior by swiftly taking multiple snapshots throughout its lifetime.
Made-to-Measure Profiling
JustTrace offers three distinct profilers – Sampling, Memory and Tracing – to meet even the most demanding profiling requirements.
Profiling of Already Running Processes
JustTrace allows for unobtrusive attaching to live processes. Should an application start experiencing higher memory or CPU consumption, analysis on its state gives the opportunity to handle scenarios that are otherwise hard to reproduce.
Simple yet Intuitive UI
By definition, a memory and performance profiling tool should enable you to speed up the performance of your apps without slowing you down or getting into your way. JustTrace employs a minimalistic yet highly intuitive user interface that allows for easy navigation of the performance and memory results. A few effortless steps take you from choosing the application being profiled to an in-depth analysis of the profiling insights made by JustTrace. Memory and performance profiling has never been easier.
Live Profiling
JustTrace enables real-time monitoring of the application’s execution. The close-up watching of the application’s behavior brings out potential performance bottlenecks to the surface, and provides reliable hints of the application’s stages that are worth investigating.
Stand-alone Tool and Seamless Visual Studio Integration
JustTrace offers seamless integration with Visual Studio and can also be used as a stand-alone tool. The integration of JustTrace into Visual Studio’s UI removes a burdensome step by cutting the time needed to jump between the development environment and the tool to test the resulting memory and CPU utilization improvements. Simply modify the code, then run it through the Visual Studio UI and get JustTrace’s core capabilities in a single tool window.
Profiling of Multiple Application Types
JustTrace enables the profiling of local applications, running applications, Silverlight applications and local ASP.NET web sites.
I would suggest taking a look at ANTS Memory & Performance Profiler from Red Gate:
ANTS Memory Profiler
ANTS Performance Profiler
The ANTS profilers do a fantastic job of identifying bottlenecks and memory leaks. They're not free, but they're very affordable and offer fully functional trials so you can evaluate the products.
There are other profilers:
ANTS: http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
which can also profile SQL calls. They also have an EAP open at the moment that gives you more functionality for database calls; it is available here:
http://help.red-gate.com/help/ANTSPerformanceProfiler/download_eap.html
There is YourKit:
http://www.yourkit.com/
Visual Studio has a profiler too, but it's not as good.

Performance and monitoring .NET Apps

We've built a web application that is performing horribly even with a lot of resources available. My boss doesn't believe me that the application is consuming a lot of hardware I/O, so I have to prove that the hardware is OK but the web app is really crap.
The app is using:
SQL Server 2000 with SP4
The main web application (.NET 3.5)
Two Web Services (.NET 1.1)
Biztalk 2004
There are 30 people using these apps.
How can I prove I am right?
You can hook up a profiler like ANTS profiler or JetBrains DotTrace and see where the application's performance bottlenecks are.
One place you could start is getting a performance profiler like Red Gate's ANTS profiler. I've used this tool and it's very useful in weeding out performance bottlenecks.
You could start by using SQL Server Profiler to get an impression of the amount of database traffic that is going on.
I'm not saying that database interaction is the bottleneck, but it often is, and the tool is already there if you are using SQL Server, so it may be a good idea to take a look at that before you go out and buy a lot of profiling tools.
Visual Studio 2008 also has built-in performance analysis tools.
Windows performance counters are a good way to get basic information about general system performance. The right counters will show you whether it really is I/O that's doing most of the work. If you take the numbers from the counters and compare them to the hardware's specs, you should be able to tell whether the system is maxing out or not.
If the system is maxing out, it's a problem with the web application, and the app should be profiled to find out where to start optimizing.
You could use the performance monitor that has been built into Windows since at least XP. You can get almost any information you could possibly need, including CPU time, .NET memory usage (gen 0, gen 1 and gen 2), native memory usage, time spent in garbage collection, disk access times, etc. If you search CodeProject or the web in general, there are many examples of using these counters to test for just about anything you want.
One of the benefits of this approach is that you don't have to change your code, and it can be used with an existing system.
I find this is the best starting point for working out where to look for bottlenecks and issues.
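For example, reading a couple of these counters from C# could look roughly like this (a sketch; the category, counter and instance names must match what Perfmon shows on your machine):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterWatch
{
    static void Main()
    {
        // Category/counter/instance names must match what Perfmon lists locally.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var gc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", "_Global_");

        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(1000);   // sample roughly once per second
            Console.WriteLine("CPU: {0,5:F1} %   Time in GC: {1,5:F1} %",
                cpu.NextValue(), gc.NextValue());
        }
    }
}
```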

How big an effect on compile times does L2 cache size have?

I am in the middle of the decision process for a new developer workstation, and the remaining question is which processor to choose; one of the early decisions is whether to go with Xeon or Core 2 processors. (We've already restricted ourselves to HP machines, so we're only looking at Intel processors.)
The main goal of the upgrade is to shorten compile times as much as we can. We're using Visual Studio 2008 targeting .NET 3.5, mostly working on a solution with about a dozen projects. We know our build is CPU-bound. Since Visual Studio can't parallelize C# builds, we know we want to maximize CPU clock frequency - but the question is, do the larger caches of the Xeon line help during compilation, and if they do is the increase justifiable given the tripling in price?
You can add a custom task to VS2008 to make it build projects in parallel, so the more (virtual) processors you have, the better. Take a look here. It helped me greatly.
I would guess that the compile process is more I/O-bound than CPU-bound. At least I could cut my compile time in half by putting my ASP.NET application on a RAM drive. (See here). As such, I would suggest not only thinking about the CPU but also about your disks, perhaps even more so.
I would really recommend that you measure this yourself. There are loads of factors affecting performance, e.g. are you compiling lots of small components or one big deliverable (i.e. how CPU-bound will this be)? What disks are you specifying? Memory? All of this will make a difference, and it would be worth borrowing some sample machines and testing out your scenarios.
As for whether the larger cache is 'worth it': again, how much are you prepared to spend on compilation servers, and how much is your time worth? I suspect that if the servers are compiling for more than a few hours a day and you have more than a couple of developers, the extra horsepower will more than pay for itself.
If I were you, I would just go for the Q9550 with 12MB L2 cache :) They are currently good value for money.
I 'unfortunately' had to get a Core i7 860 due to my previous motherboard not supporting the FSB of the quadcore. I have no complaints though :)
