This is a very strange issue that I am experiencing, and it goes against almost anything logical I can think of. I am currently profiling a website which we are building, and a page sometimes takes 5 seconds to load. This happens both on IIS and on the Visual Studio Development Server. However, when I profile it using ANTS Performance Profiler, it performs up to 5x faster and loads in less than a second.
I am quite baffled as to why this can happen, because as far as I know, profiling should increase the load time, not decrease it. Could anyone shed some light on this?
Site is developed in Visual Studio 2010, ASP.Net v4.0, C#.
This is interesting, as it's very rare (I work on ANTS support). The main difference ANTS imposes on a process is permissions, since the process is usually launched by ANTS and inherits its permissions. We have some routines that optimise the start-up procedure, but I've never heard of a speed-up like this. Using Task Manager, compare the login account the process runs under when launched by ANTS with the account it runs under normally, then try running your process under the account that ANTS uses. You may find this helps to explain the speed-up.
Performance testing needs to be done in a carefully controlled setting. Things like the system file cache, the network, machine load, NGEN status, and virus scanners can all affect the results.
Use PerfView to understand how the 5 seconds are spent (it could be waiting for disk I/O):
http://www.microsoft.com/en-us/download/details.aspx?id=28567
Related
I have a rather large solution revolving around a WebAPI project. I ran into some performance issues on a particular web service, and used the built-in performance profiler in VS2013 to find the bottlenecks and deal with them. Eventually I got the response time of an HTTP request down from around 500 ms to 50 ms (I use an external app to perform repeated requests and log the round-trip time).
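The external timing app mentioned here can be something as small as a console program that repeatedly hits one endpoint and logs round-trip times. The sketch below is purely illustrative and is not the poster's actual tool; the URL and request count are placeholders.

    using System;
    using System.Diagnostics;
    using System.Net.Http;

    class RoundTripTimer
    {
        static void Main()
        {
            const string url = "http://localhost:5000/api/values"; // placeholder endpoint
            using (var client = new HttpClient())
            {
                for (int i = 0; i < 20; i++)
                {
                    // Time one full request/response round trip.
                    var sw = Stopwatch.StartNew();
                    using (var response = client.GetAsync(url).GetAwaiter().GetResult())
                    {
                        sw.Stop();
                        Console.WriteLine("{0,3}: {1} in {2} ms",
                                          i + 1, (int)response.StatusCode, sw.ElapsedMilliseconds);
                    }
                }
            }
        }
    }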
However, I only see this improvement while running the WebAPI from the performance profiler tool. As soon as I switch back to running it straight from Visual Studio (F5) or on our test server, the response times increase to around 400ms, still an improvement on the original 500, but not exactly magnificent.
It only makes a slight difference whether I run it in debug mode or release mode. Setting "debug info" to "none" rather than the default "pdb-only" on all the involved projects helps a tiny bit, bringing the average response time down to around 350 ms.
I can't, for the life of me, figure out what the performance profiler tool does to optimize the code further. And it's killing me that I've seen how fast it can run, but I'm unable to achieve the same performance.
Turns out the performance profiler wasn't doing anything different to optimize the code. But it did run IIS Express without any debuggers enabled.
By going to the Web API project's properties and switching to the Web tab, I could uncheck all the debuggers, and now the response times during debugging match what I was seeing when running the profiler. Obviously, disabling the ASP.NET debugger prevents any debugging of the code.
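If you want to confirm at runtime which of these factors is in play, a small check like the one below can help. It is just a sketch using standard framework APIs (Debugger.IsAttached and the DEBUG conditional symbol); where you log or expose the result is up to you.

    using System.Diagnostics;

    public static class BuildDiagnostics
    {
        // Describes two conditions that commonly slow down local runs:
        // an attached debugger and a Debug-configuration build.
        public static string Describe()
        {
            bool debuggerAttached = Debugger.IsAttached;

    #if DEBUG
            bool debugBuild = true;
    #else
            bool debugBuild = false;
    #endif

            return string.Format("Debugger attached: {0}, Debug build: {1}",
                                 debuggerAttached, debugBuild);
        }
    }

Logging this once at application start makes it obvious whether an F5 run and a profiler run are actually comparable.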
Mikal is right.
This performance difference (profiling being faster than debugging) also troubled me for a few hours. I even moved code from the Web API into a console application to test, and the console application performed as well as the Web API did while being profiled.
Then I figured out that it was because the ASP.NET debugger was enabled in debugging mode, which consumed a lot of CPU. After unchecking that debugger, performance was back to being as good as in the console application and in profiling mode.
As the load on our Azure website has increased (along with the complexity of the work that it's doing), we've noticed that we're running into CPU utilization issues. CPU utilization gradually creeps upward over the course of several hours, even as traffic levels remain fairly steady. Over time, if the Azure stats are correct, we'll somehow manage to get > 60 seconds of CPU per instance (not quite clear how that works), and response times will start increasing dramatically.
If I restart the web server, CPU drops immediately, and then begins a slow creep back up. For instance, in the image below, you can see CPU creeping up, followed by the restart (with the red circle) and then a recovery of the CPU.
I'm strongly inclined to suspect that this is a problem somewhere in my own code, but I'm scratching my head as to how to figure it out. So far, any attempts to reproduce this on my dev or testing environments have proven ineffectual. Nearly all the suggestions for profiling IIS/C# performance seem to presume either direct access to the machine in question or at least a "Cloud Service" instance rather than an Azure Website.
I know this is a bit of a long shot, but... any suggestions, either for what it might be, or how to troubleshoot it?
(We're using C# 5.0, .NET 4.5.1, ASP.NET MVC 5.2.0, WebAPI 2.2, EF 6.1.1, Azure Service Bus, Azure SQL Database, Azure Redis Cache, and async for every significant code path.)
Edit 8/5/14 - I've tried some of the suggestions below. But when the website is truly busy, i.e., ~100% CPU utilization, any attempt to download a mini-dump or GC dump results in a 500 error with the message "Not enough storage." On the occasions when I have been able to download a mini-dump or GC dump, they haven't shown anything particularly interesting, at least as far as I could figure out. (For instance, the most interesting thing in the GC dump was a half dozen or so >100KB string instances - those seem to be associated with the bundling subsystem in some fashion, so I suspect they're just cached ScriptBundle or StyleBundle instances.)
Try remote debugging to your site from Visual Studio.
Try https://{sitename}.scm.azurewebsites.net/ProcessExplorer/; there you can take memory dumps and GC dumps of your w3wp process.
Then you can compare two GC dumps to find memory leaks, and open the memory dump with WinDbg/Visual Studio for further "offline" debugging.
I have a web service that is in much need of some optimization. It is part of an enterprise application that resides on a virtual server machine and has become a huge bottleneck. I'm confident in my skills and my ability to make this more efficient, but I was wondering if anyone out there has had a good experience with a profiler or optimization tool that would help point me to the trouble spots.
The web service's main function is to generate PDFs, which are created using SQL Reports and a third-party PDF writer utility. Basically, it gets an ID and creates X number of PDFs based on the number of forms associated with that ID. So it has a loop which runs an average of 8 times per ID, and there are thousands of IDs sent daily. Needless to say, there is always a backlog of PDFs to be created, which the client would rather not see.
I have also thought about using multiple threads to generate the PDF pages asynchronously, but I'm hesitant because they said they had issues with multi-threading on the "virtual server". So if anyone can point me to a good tutorial or advice about multi-threading on a virtual server, I would appreciate that too.
Thanks for any help you can give.
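To illustrate the multi-threading idea from the question, bounded parallelism over the per-ID form loop can be expressed fairly compactly with Parallel.ForEach. This is only a hypothetical sketch: GeneratePdf stands in for the existing SQL Report plus PDF writer call (which would need to be thread-safe), and the parallelism cap is an assumed value to tune for the virtual server.

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public class PdfBatchGenerator
    {
        // formIds: the forms associated with one incoming ID (averages ~8 per ID).
        public void GenerateAll(IEnumerable<int> formIds)
        {
            // Cap concurrency so the virtual server isn't flooded; tune this value.
            var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

            Parallel.ForEach(formIds, options, formId =>
            {
                GeneratePdf(formId);
            });
        }

        private void GeneratePdf(int formId)
        {
            // Placeholder for the real SQL Report + third-party PDF writer call.
            // That code must be safe to call from multiple threads at once.
        }
    }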
I've used this one before and it's great:
JetBrains dotTrace
http://www.jetbrains.com/profiler/whatsnew/
Try Telerik's JustTrace; it has a lot of neat stuff. It has a 60-day free trial with support, so you can try it out first.
Fast Profiling
JustTrace aims to redefine fast memory and performance profiling. It adds minimal overhead to the profiled application, allows near seamless execution, and enables analysis-in-place, thus eliminating the need to move the application from its environment. The user can examine different stages of the application’s behavior by swiftly taking multiple snapshots throughout its lifetime.
Made-to-Measure Profiling
JustTrace offers three distinct profilers – Sampling, Memory and Tracing – to meet even the most demanding profiling requirements.
Profiling of Already Running Processes
JustTrace allows for unobtrusive attaching to live processes. Should an application start experiencing higher memory or CPU consumption, analysis on its state gives the opportunity to handle scenarios that are otherwise hard to reproduce.
Simple yet Intuitive UI
By definition, a memory and performance profiling tool should enable you to speed up your apps without slowing you down or getting in your way. JustTrace employs a minimalistic yet highly intuitive user interface that allows for easy navigation of the performance and memory results. A few effortless steps take you from choosing the application being profiled to an in-depth analysis of the profiling insights made by JustTrace. Memory and performance profiling has never been easier.
Live Profiling
JustTrace enables real-time monitoring of the application's execution. Watching the application's behavior up close brings potential performance bottlenecks to the surface and provides reliable hints about which stages of the application are worth investigating.
Stand-alone Tool and Seamless Visual Studio Integration
JustTrace offers seamless integration with Visual Studio and can also be used as a stand-alone tool. The integration of JustTrace into Visual Studio’s UI removes a burdensome step by cutting the time needed to jump between the development environment and the tool to test the resulting memory and CPU utilization improvements. Simply modify the code, then run it through the Visual Studio UI and get JustTrace’s core capabilities in a single tool window.
Profiling of Multiple Application Types
JustTrace enables the profiling of local applications, running applications, Silverlight applications and local ASP.NET web sites.
I would suggest taking a look at ANTS Memory & Performance Profiler from Red Gate:
ANTS Memory Profiler
ANTS Performance Profiler
The ANTS profilers do a fantastic job of identifying bottlenecks and memory leaks. They're not free, but they're very affordable and offer fully functional trials so you can evaluate the products.
There are other profilers:
ANTS: http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
ANTS can also profile SQL calls. They also have an EAP open at the moment which gives you more functionality for database calls; that is here:
http://help.red-gate.com/help/ANTSPerformanceProfiler/download_eap.html
There is YourKit:
http://www.yourkit.com/
Visual Studio has a profiler too, but it's not as good.
I am using the ANTS profiler to look for performance problems. I have compiled my Windows application in release mode and started the ANTS profiler, but once I click to stop the profiler, I can't see any results. Is there anything else that needs to be done?
There are a few things that can cause this problem: have you tried the suggestions in Red Gate's support center for "Troubleshooting missing results"?
That suggests looking at the following possibilities:
Missing PDB files - either because you don't have them for the profiled code, or they're not stored in the expected directory and ANTS can't find them
No available managed code to profile - either because you're trying to profile on a remote machine, are profiling methods exclusively comprising unmanaged code, or are profiling something where no method contributes more than 1% of total CPU time
Unreadable performance counters - in which case rebuilding the performance counters will usually help.
We've built a web application which performs horribly even with a lot of resources available. My boss doesn't believe me that the application is consuming a lot of hardware I/O, so I have to prove that the hardware is okay but the web app is really crap.
The app is using:
SQL Server 2000 with SP4
The main web application (.NET 3.5)
Two Web Services (.NET 1.1)
BizTalk 2004
There are 30 people using these applications.
How can I prove I am right?
You can hook up a profiler like ANTS Profiler or JetBrains dotTrace and see where the application's performance bottlenecks are.
One place you could start is getting a performance profiler like Red Gate's ANTS Profiler. I've used this tool and it's very useful in weeding out performance bottlenecks.
Randy
You could start by using SQL Server Profiler to get an impression of the amount of database traffic that is going on.
I'm not saying that database interaction is the bottleneck, but it often is, and the tool is already there if you are using SQL Server, so it may be a good idea to take a look at that before you go out and buy a lot of profiling tools.
Visual Studio 2008 also has built-in performance analysis tools.
Windows performance counters are a good way to get some basic information about general system performance. The right counters will show you whether it really is I/O that's doing most of the work. If you take the numbers from the counters and compare them to the hardware specs, you should be able to tell whether the system is maxing out or not.
If the system is maxing out, it's a problem with the web application, and it should be profiled to find out where to start optimizing.
You could use the system performance monitor that has been built into Windows since at least XP. You can get almost any information you could possibly need, including CPU time, .NET memory usage (gen 0, gen 1 and gen 2), native memory usage, time spent in garbage collection, disk access times, etc. If you search CodeProject or just the web, there are many examples of using these counters to test for just about anything you want.
One of the benefits of this is that you don't have to change your code, and it can be used with an existing system.
I find this is the best starting point for figuring out where to look for bottlenecks and issues.
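As a rough illustration of reading these counters from code, a minimal monitoring loop might look like the following. The category and counter names are the standard built-in ones; the "w3wp" instance name for the CLR memory counter is an assumption and would need to match your actual worker process.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterMonitor
    {
        static void Main()
        {
            // Standard Windows and .NET counters; the "w3wp" instance name is an assumption.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var disk = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
            var clrHeap = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp");

            for (int i = 0; i < 60; i++)
            {
                // Rate counters return 0 on the first read, so sample on an interval.
                Thread.Sleep(1000);
                Console.WriteLine("CPU {0,5:F1}%  Disk {1,5:F1}%  CLR heap {2:N0} bytes",
                                  cpu.NextValue(), disk.NextValue(), clrHeap.NextValue());
            }
        }
    }

The same counters can also be watched in Performance Monitor (perfmon) without writing any code, which keeps the "no changes to the existing system" benefit mentioned above.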