Reading w3wp .Net Performance Counter Instances programmatically - c#

When viewing the .NET performance counters using the Performance tool, I can see the web process performance counter instances listed (w3wp, w3wp#1).
However when I run the following code as Administrator:
var instanceNames = new PerformanceCounterCategory(".NET CLR Memory")
    .GetInstanceNames()
    .OrderBy(x => x);

foreach (var name in instanceNames)
{
    Console.WriteLine(name);
}
In the output I see, the w3wp instances are not listed. Does anyone know why this is the case and how I can fix it?

The solution was that you have to run the application with the same bitness as your website. As my website was 64-bit, I needed to run the console application in 64-bit mode. To do this, right-click the console application project, click Properties, and in the Build tab untick the box that says "Prefer 32-bit".
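To confirm which mode the console application actually runs in, here is a quick probe (a sketch; Environment.Is64BitProcess requires .NET 4, while IntPtr.Size works on older frameworks too):

// Prints the bitness of the current process.
Console.WriteLine(Environment.Is64BitProcess ? "64-bit" : "32-bit"); // or check IntPtr.Size == 8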
Also, when you collect the process id for the w3wp process using the Process ID counter inside the .NET CLR Memory category, it is zero to begin with. To get the process id you have to initialize the web site and make sure at least one garbage collection happens. As this was in my test code, I could simply call GC.Collect in the Application_Start handler.
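Putting the two pieces together, here is a minimal sketch (class and variable names are mine) that maps each .NET CLR Memory instance to its process id; remember the counter reads zero until the target CLR has performed at least one garbage collection:

using System;
using System.Diagnostics;
using System.Linq;

class ClrInstances
{
    static void Main()
    {
        var category = new PerformanceCounterCategory(".NET CLR Memory");
        foreach (var instance in category.GetInstanceNames().OrderBy(x => x))
        {
            if (instance == "_Global_")
                continue; // aggregate pseudo-instance, has no single PID

            using (var pid = new PerformanceCounter(".NET CLR Memory", "Process ID", instance, true))
            {
                // RawValue is 0 until the instance's CLR has done at least one GC.
                Console.WriteLine("{0} -> PID {1}", instance, pid.RawValue);
            }
        }
    }
}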

Related

How can I debug an internal error in the .NET Runtime?

I am trying to debug some work that processes large files. The code itself works, but there are sporadic errors reported from the .NET Runtime itself. For context, the processing here is a 1.5GB file (loaded into memory once only) being processed and released in a loop, deliberately to try to reproduce this otherwise unpredictable error.
My test fragment is basically:
try {
    byte[] data = File.ReadAllBytes(path);
    for (int i = 0; i < 500; i++)
    {
        ProcessTheData(data); // deserialize and validate

        // force collection, for tidiness
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        GC.WaitForPendingFinalizers();
    }
} catch (Exception ex) {
    Console.WriteLine(ex.Message);
    // some more logging; StackTrace, recursive InnerException, etc
}
(with some timing and other stuff thrown in)
The loop will run fully successfully for a non-deterministic number of iterations - no problems whatsoever; then the process will terminate abruptly. The exception handler is not hit. The test does involve a lot of memory use, but it saw-tooths very nicely during each iteration (there is no obvious memory leak, and I have plenty of headroom - 14GB of unused primary memory at the worst point in the saw-tooth). The process is 64-bit.
The Windows error log contains 3 new entries, which (via exit code 80131506) suggest an Execution Engine error - a nasty little critter. A related answer suggests a GC error, with a "fix" to disable concurrent GC; however, this "fix" does not prevent the issue.
Clarification: this low-level error does not hit the CurrentDomain.UnhandledException event.
Clarification: the GC.Collect is there only to monitor the saw-toothing memory, to check for memory leaks and to keep things predictable; removing it does not make the problem go away: it just makes it keep more memory between iterations, and makes the dmp files bigger ;p
By adding more console tracing, I have observed it faulting during each of:
during deserialization (lots of allocations, etc)
during GC (between a GC "approach" and a GC "complete", using the GC notification API; see the sketch after this list)
during validation (just foreach over some of the data) - curiously just after a GC "complete" during the validation
So lots of different scenarios.
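For reference, the "approach"/"complete" markers mentioned above can be produced with the GC notification API; a minimal sketch (the thresholds and messages are my own):

using System;
using System.Threading;

class GcTracer
{
    static void Main()
    {
        // Register for notifications around full collections. The thresholds (1-99)
        // control how early the "approach" fires; the values here are arbitrary.
        // Requires concurrent GC to be disabled (<gcConcurrent enabled="false"/>).
        GC.RegisterForFullGCNotification(10, 10);

        var monitor = new Thread(() =>
        {
            while (true)
            {
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                    Console.WriteLine("GC approach");
                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                    Console.WriteLine("GC complete");
            }
        }) { IsBackground = true };
        monitor.Start();

        // ... run the processing loop from the question here ...
    }
}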
I can obtain crash-dump (dmp) files; how can I investigate this further, to see what the system is doing when it fails so spectacularly?
If you have memory dumps, I'd suggest using WinDbg to look at them, assuming that you're not doing that already.
Try running the command !EEStack (mixed native and managed stack trace), and see if there's anything that might jump out in the stack trace. In my test program, this was the stack trace I found one of the times a FEEE (Fatal Execution Engine Exception) happened (I was purposefully corrupting the heap):
0:000> !EEStack
---------------------------------------------
Thread 0
Current frame: ntdll!NtWaitForSingleObject+0xa
Child-SP RetAddr Caller, Callee
00000089879bd3d0 000007fc586610ea KERNELBASE!WaitForSingleObjectEx+0x92, calling ntdll!NtWaitForSingleObject
00000089879bd400 000007fc5869811c KERNELBASE!RaiseException+0x68, calling ntdll!RtlRaiseException
[...]
00000089879bec80 000007fc49109cf6 clr!WKS::gc_heap::gc1+0x96, calling clr!WKS::gc_heap::mark_phase
00000089879becd0 000007fc49109c21 clr!WKS::gc_heap::garbage_collect+0x222, calling clr!WKS::gc_heap::gc1
00000089879bed10 000007fc491092f1 clr!WKS::GCHeap::RestartEE+0xa2, calling clr!Thread::ResumeRuntime
00000089879bed60 000007fc4910998d clr!WKS::GCHeap::GarbageCollectGeneration+0xdd, calling clr!WKS::gc_heap::garbage_collect
00000089879bedb0 000007fc4910df9c clr!WKS::GCHeap::Alloc+0x31b, calling clr!WKS::GCHeap::GarbageCollectGeneration
00000089879bee00 000007fc48ff82e1 clr!JIT_NewArr1+0x481
Since this could be related to heap corruption from the garbage collector, I would try the !VerifyHeap command. At least you could make sure that the heap is intact (and your problem lies elsewhere) or discover that your issue might actually be with the GC or some P/Invoke routines corrupting it.
If you find that the heap is corrupt, I might try and discover how much of the heap is corrupted, which you might be able to do via !HeapStat. That might just show the entire heap corrupt from a certain point, though.
It's difficult to suggest any other methods to analyze this via WinDbg, since I have no real clue about what your code is doing or how it's structured.
I suppose if you find it to be an issue with the heap and thus meaning it could be GC weirdness, I would look at the CLR GC events in Event Tracing for Windows.
If the minidumps you're getting aren't cutting it and you're using Windows 7/2008R2 or later, you can use Global Flags (gflags.exe) to attach a debugger when the process terminates without an exception, if you're not getting a WER notification.
In the Silent Process Exit tab, enter the name of the executable, not the full path to it (i.e. TestProgram.exe). Use the following settings:
Check Enable Silent Process Exit Monitoring
Check Launch Monitor Process
For the Monitor Process, use {path to debugging tools}\cdb.exe -server tcp:port=5005 -g -G -p %e.
And apply the settings.
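For what it's worth, the gflags UI above just writes a few registry values, so the same monitoring can be configured programmatically. A sketch, based on my understanding of the documented Silent Process Exit registry keys (run elevated, and treat the exact values as an assumption to verify):

using Microsoft.Win32;

class SilentExitSetup
{
    static void Main()
    {
        const string exe = "TestProgram.exe"; // the example name from above

        // FLG_MONITOR_SILENT_PROCESS_EXIT (0x200) under Image File Execution Options.
        using (var ifeo = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\" + exe))
        {
            ifeo.SetValue("GlobalFlag", 0x200, RegistryValueKind.DWord);
        }

        // ReportingMode 1 = LAUNCH_MONITORPROCESS; MonitorProcess is the debugger command line.
        using (var spe = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\SilentProcessExit\" + exe))
        {
            spe.SetValue("ReportingMode", 1, RegistryValueKind.DWord);
            spe.SetValue("MonitorProcess", @"C:\Debuggers\cdb.exe -server tcp:port=5005 -g -G -p %e");
        }
    }
}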
When your test program crashes, cdb will attach and wait for you to connect to it. Start WinDbg, press Ctrl+R, and use the connection string: tcp:port=5005,server=localhost.
You might be able to skip remote debugging and instead use {path to debugging tools}\windbg.exe %e. However, the reason I suggested remote debugging is that WerFault.exe, which I believe is what reads the registry and launches the monitor process, will start the debugger in Session 0.
You can make session 0 interactive and connect to the window station, but I can't remember how that's done. It's also inconvenient, because you'd have to switch back and forth between sessions if you need to access any of your existing windows you've had open.
Tools->Options->Debugging->General->Enable .NET Framework debugging
+
Tools->Options->IntelliTrace->IntelliTrace events and call information
+
Tools->Options->IntelliTrace->Store IntelliTrace recordings in this directory
and choose a directory.
This should allow you to step INTO .NET code and trace every single function call. I tried it on a small sample project and it works.
After each debug session it is supposed to create a recording of the debug session in the set directory, even if the CLR dies, if I'm not mistaken.
This should allow you to get to the exact call before the CLR collapsed.
Try writing a generic exception handler and see if there is an unhandled exception killing your app.
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.UnhandledException += new UnhandledExceptionEventHandler(MyExceptionHandler);
static void MyExceptionHandler(object sender, UnhandledExceptionEventArgs e)
{
    Console.WriteLine(e.ExceptionObject.ToString());
    Console.WriteLine("Press Enter to continue");
    Console.ReadLine();
    Environment.Exit(1);
}
I usually investigate memory-related problems with Valgrind and gdb.
If you run your things on Windows, there are plenty of good alternatives such as verysleepy for callgrind as suggested here:
Is there a good Valgrind substitute for Windows?
If you really want to debug internal errors of the .NET runtime, you have the problem that no source is available for either the class libraries or the VM.
Since you can't debug what you don't have, I suggest that (apart from decompiling the .NET framework libraries in question with ILSpy and adding them to your project, which still doesn't cover the VM) you could use the Mono runtime.
There you have both the source of the class libraries as well as of the VM.
Maybe your program works fine with mono, then your problem would be solved, at least as long as it's only a one-time-processing task.
If not, there is an extensive FAQ on debugging, including GDB support
http://www.mono-project.com/Debugging
Miguel also has this post regarding valgrind support:
http://tirania.org/blog/archive/2007/Jun-29.html
In addition to that, if you run it on Linux, you can also use strace to see what's going on in the syscalls. If you don't have extensive WinForms usage or WinAPI calls, .NET programs usually work fine on Linux (for problems regarding file-system case-sensitivity, you can loop-mount a case-insensitive file system and/or use MONO_IOMAP).
If you're a Windows-centric person, this post says the closest thing Windows has is WinDbg's Logger.exe, but the ltrace information is not as extensive.
Mono sourcecode is available here:
http://download.mono-project.com/sources/
You are probably interested in the sources of the latest mono version
http://download.mono-project.com/sources/mono/mono-3.0.3.tar.bz2
If you need framework 4.5, you'll need Mono 3; you can find precompiled packages here
https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/
If you want to make changes to the sourcecode, this is how to compile it:
http://ubuntuforums.org/showthread.php?t=1591370
There are .NET exceptions which cannot be caught. Check out: http://msdn.microsoft.com/en-us/magazine/dd419661.aspx.
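The linked article covers corrupted state exceptions (CSEs), which .NET 4 stopped delivering to ordinary catch blocks. As a sketch (assuming .NET 4+; the helper here is hypothetical), a method can opt back in with an attribute:

using System;
using System.Runtime.ExceptionServices;
using System.Security;

static class CseGuard
{
    [HandleProcessCorruptedStateExceptions]
    [SecurityCritical]
    internal static void Run(Action work)
    {
        try
        {
            work();
        }
        catch (AccessViolationException ex)
        {
            // A CSE: not catchable on .NET 4+ without the attribute above.
            Console.WriteLine(ex);
        }
    }
}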

C# Process.GetProcessById(4) throws System.ComponentModel.Win32Exception

I am writing a piece of code whereby I am to iterate through the list of modules loaded by the System process (PID 4). The following is the code I am using to achieve it.
Process process = Process.GetProcessById(4);
foreach (ProcessModule pMod in process.Modules)
{
    Console.Write(pMod.FileName + " ");
}
Console.WriteLine();
This code throws a System.ComponentModel.Win32Exception whenever it tries to evaluate the list of Modules. In effect, any property read or method call throws the same error. Any other process works fine, and the code lists all its modules correctly. Could anyone shed light on what might be causing this behavior?
The System "process" (with PID 4 on Windows machines) is actually not a process at all, it denotes a group of processes that have SYSTEM integrity.
Try to work with a real process PID (for instance, run Internet Explorer, and use it's PID) instead, see if you`ll get the exception.
The system process is not a real user mode process, it is the Windows kernel (for want of a better description). Therefore it cannot be examined as if it were a normal process.
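If the goal is simply to enumerate modules wherever Windows allows it, a defensive variant of the loop (my own sketch) skips the processes that can't be inspected:

using System;
using System.ComponentModel;
using System.Diagnostics;

class ModuleLister
{
    static void Main()
    {
        foreach (Process process in Process.GetProcesses())
        {
            try
            {
                foreach (ProcessModule pMod in process.Modules)
                {
                    Console.Write(pMod.FileName + " ");
                }
                Console.WriteLine();
            }
            catch (Win32Exception)
            {
                // Pseudo-processes (System, Idle) and protected processes land here.
                Console.WriteLine("{0} (PID {1}): modules not accessible", process.ProcessName, process.Id);
            }
            catch (InvalidOperationException)
            {
                // The process exited between enumeration and inspection.
            }
        }
    }
}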

Local ASP.NET MVC Suddenly Very Slow; Load times > 1 minute

Over the last few weeks I've been subject to a sudden and significant performance deterioration when browsing locally hosted ASP.NET 3.5 MVC web applications (C#). Load times for a given page are on average 20 seconds (regardless of content); start up is usually over a minute. These applications run fast on production and even test systems (Test system is comparable to my development environment).
I am running IIS 6.0, VS2008, Vista Ultimate, SQL2005, .NET 3.5, MVC 1.0, and we use VisualSVN 1.7.
My SQL DB is local and IPv6 does not seem to be the cause. I browse in Firefox and IE8 outside of Debug mode using loopback, machine name, and 'localhost' and get the exact same results every time (hence DNS doesn't seem to be the issue either).
Below are screen shots of my dotTrace output.
http://www.glowfoto.com/static_image/28-100108L/3123/jpg/06/2010/img4/glowfoto
This issue has made it near impossible to debug/test any web app. Any suggestions very much appreciated!
SOLUTION: Complete re-installation of Windows, IIS, Visual Studio, etc. It wasn't the preferred solution, but it worked.
Surely the big red flag on that profiler output is the fact that AddDirectory is called 408 times and AddExistingFile is called 66,914 times?
Can you just confirm that there's not just a shed load of directories and files underneath your MVC app's root folder? Because it looks like the framework is busying itself trying to work out what files it needs to build (or add watches to) on startup.
[I am not au fait with MVC and so maybe this is not what is happening but 67k calls to a function with a name like "AddExistingFile" does smell wrong].
I've learnt that it's usually a "smell" when things fail near a power of two ...
Given
Over the last few weeks I've been subject to a sudden and significant performance deterioration
and
AddExistingFile is called 66,914 times
I'm wondering if the poor performance hit at about the time the number of files exceeded 65,535 ...
Other possibilities to consider ...
Are all 66,914 files in the same directory? If so, that's a lot of directory blocks to access ... try a hard drive defrag. In fact, it's even more directory blocks if they're distributed across a bunch of directories.
Are you storing all the files in the same list? Are you presetting the capacity of that list, or allowing it to "grow" naturally and slowly?
Are you scanning for files depth first or breadth first? Caching by the OS will favor the performance of depth first.
Update 14/7
Clarification of "Are you storing all the files in the same list?":
Naive code like this first example doesn't perform well because it needs to reallocate storage space as the list grows: List<T> allocates a new, larger internal array each time the current one fills up, and copies every element across.
var myList = new List<int>();
for (int i = 0; i < 10000; i++)
{
    myList.Add(i);
}
It's more efficient, if you know it, to initialize the list with a specific capacity to avoid the reallocation overhead:
var myList = new List<int>(10000); // Capacity is 10000
for (int i = 0; i < 10000; i++)
{
    myList.Add(i);
}
Update 15/7
Comment by OP:
These web apps are not programmatically probing files on my hard disk, at least not by my hand. If there is any recursive file scanning, it's by VS 2008.
It's not Visual Studio that's doing the file scanning - it is your web application. This can clearly be seen in the first profiler trace you posted - the call to System.Web.Hosting.HostingEnvironment.Initialize() is taking 49 seconds, largely because of 66,914 calls to AddExistingFile(). In particular, the read of the property CreationTimeUTC is taking almost all the time.
This scanning won't be random - it's either the result of your configuration of the application, or the files are in your web applications file tree. Find those files and you'll know the reason for your performance problems.
Try creating a new, default MVC2 application in a new web folder. Build and browse it. If your load times are okay with the new app, then there's something up with your application. If not, it's outside of the context of the app and you should start looking at IIS config, extensions, hardware, network, etc.
In your app, back up your web.config and start with a new, default web.config. That should disable any extensions or handlers you've installed. If that fixes your load times, start adding stuff from the old web.config into the new one in small blocks until the problem reappears, and in that way isolate the offending item.
I call this "binary search" debugging. It's tedious, but actually works pretty quickly and will most likely identify the problem when we get stuck in one of those "BUT IT SHOULD WORK!!!" modes.
Update Just a thought: to rule out IIS config, try running the site under Cassini/built-in dev server.
The solution was to format and do a clean install of Vista, SQL Server 2005, Visual Studio 2008, IIS6 and the whole lot. I am now able to debug, without consequence, the very same webapp(s) I was experiencing the problems with initially. This leads me to believe the problem lay within one of the installations above and must have been aggravated by a software update or by the addition of software.
You could download Fiddler to measure how long each call takes.
This video might help...

Performance Counters not being released

All:
I am using some custom Performance Counters that I have created. These are multi-instance, with a lifetime of "Process".
The problem: when I'm debugging in VS, if I stop the process and then start it again, I get an exception when my code attempts to create my performance counters. The exception indicates that the performance counters already exist and that I cannot create them until the owning process releases them.
Once I get this error, there seems to be only one way out - I have to close and restart Visual Studio - it's as though VS takes ownership of my process-lifetime performance counters even though they were really created by the hosted process. Any idea what I can do about this?
BTW: the problem only seems to surface if my code actually writes to a performance counter before it is shut down.
I think you're doing battle with the Visual Studio hosting process. It is a helper .exe that hosts the CLR to improve the debugging experience, it is always running while you've got a project loaded into VS. Project + Properties, Debug tab, scroll down, uncheck the "Enable the Visual Studio hosting process" checkbox.
This does affect the debugging session somewhat; most notably, the output written by Console.WriteLine() in your program no longer shows up in the Output window, and some obscure, not-at-all-well-documented security options change. I doubt you'll have a problem.

ASP.NET/C# - Custom PerformanceCounters only show up in 32-bit perfmon on 64-bit system

I'm trying to create a set of custom performance counters to be used by my ASP.NET application. I use the following code to increment the counters:
internal static void Increment(String instanceName, DistributedCacheCounterInstanceType counterInstanceType)
{
    var permission = new PerformanceCounterPermission(PerformanceCounterPermissionAccess.Write, Environment.MachineName, "CounterName");
    permission.Assert();

    var counter = new PerformanceCounter("CategoryName", "CounterName", instanceName, false);
    counter.RawValue++; // Use RawValue++ instead of Increment() to avoid locking
    counter.Close();
}
This works perfectly in unit tests and also in Cassini on my dev box (Vista Business x64), and I can watch the counters working in Performance Monitor. However, the counters don't seem to register any increments in my production environment (Win Server 2003 x64). The counter instances themselves are available, but they all just show "--" for the last/average/minimum/maximum display.
Any ideas as to what I could be doing wrong?
EDIT: Here's a [perhaps somewhat outdated] MSDN article that I used for reference
EDIT 2: I'm using VS 2008/.NET Framework v3.5 SP1, if that makes any difference.
EDIT 3: Just found this article about 32 bit/64 bit app and monitor mismatching, but I'm not sure how it applies to my situation, if at all. Cassini is indeed a 32-bit app, but I had no problem viewing the values on my 64-bit system. On my production server, both the app and the system are 64-bit, but I can't see the values.
EDIT 4: The values are showing up when I run the 32-bit perfmon on the production server. So I suppose now the question is why can't I read the values in the 64-bit perfmon?
EDIT 5: It actually does appear to be working; it was just that I had to restart my instance of perfmon because it was open before the counters were created.
I read that instantiating a PerformanceCounter is quite resource-intensive. Have you thought of caching these in a Session / Application variable?
Also, is it wise to update the counter without locking in a multithreaded ASP.NET application? (A cached, locked sketch follows below.)
Patrick
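For illustration, a cached variant of the question's Increment method might look like this (a sketch; it reuses the question's placeholder "CategoryName"/"CounterName", and uses Increment(), which performs an atomic update, instead of RawValue++):

using System.Collections.Generic;
using System.Diagnostics;

internal static class CounterCache
{
    private static readonly object Sync = new object();
    private static readonly Dictionary<string, PerformanceCounter> Counters =
        new Dictionary<string, PerformanceCounter>();

    internal static void Increment(string instanceName)
    {
        PerformanceCounter counter;
        lock (Sync)
        {
            if (!Counters.TryGetValue(instanceName, out counter))
            {
                // Create once per instance name; constructing a PerformanceCounter is expensive.
                counter = new PerformanceCounter("CategoryName", "CounterName", instanceName, false);
                Counters[instanceName] = counter;
            }
        }
        counter.Increment(); // atomic, unlike RawValue++
    }
}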
