How to debug slow Office application interop constructor? - c#

I have an application which deals with Excel. Recently I encountered a problem with very slow creation of the Excel object.
I've recreated the issue with this simple code:
Microsoft.Office.Interop.Excel.Application xlApp;
xlApp = new Microsoft.Office.Interop.Excel.Application();
The second line causes the delay.
In order to measure the time needed to allocate the new object, the code above was extended with time tracking, and the results are conclusive: in the NORMAL situation the code executes in 0.5 s, while in the FAULTY-BEHAVIOR case it can take up to 5 minutes.
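For reference, the timing wrapper is nothing more exotic than a Stopwatch around the constructor; a minimal sketch of that kind of measurement (the Quit/ReleaseComObject calls are only there so the test itself doesn't leave an Excel process behind):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using Excel = Microsoft.Office.Interop.Excel;

class InteropTimingTest
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        var xlApp = new Excel.Application();   // the call that is sometimes slow
        sw.Stop();
        Console.WriteLine("new Excel.Application() took {0:F1} s", sw.Elapsed.TotalSeconds);

        xlApp.Quit();
        Marshal.ReleaseComObject(xlApp);
    }
}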
There are no memory leaks and the Excel objects are being properly freed. My solution has been running 24/7 for a whole year without any issues. I'm not sure if it's important, but the application runs in 20 separate user sessions (on a server machine), so there are 20 copies of this application running at the same time, which may result in 20 copies of Excel running at the same time.
The issue was first noticed 2 months ago and was solved by an Office upgrade (2010 -> 2013). This time I have more time to investigate, and sadly the results aren't promising.
Facts:
only one machine is currently affected by this issue (24 CPU cores, 24 GB of RAM)
the CPU isn't stressed at all when the "delay" happens
I've tried using the Process Monitor application to verify what happens during the new Excel.Application() constructor call (to see if there is any excessive disk/memory/CPU usage) - no signs of resource limitations and no log files related to COM objects, etc.
The only issue here is these few minutes of delay. All other Excel Interop commands work as usual.
Main Question:
Is there a way to debug this Microsoft.Office.Interop.Excel.Application() constructor to see which part is an issue here?
External content
One person with a similar issue. His solution didn't help with my problem at all.
EDIT - additional test
PowerPoint constructor is not affected by the delay
ppApp = new Microsoft.Office.Interop.PowerPoint.Application();

I've found the solution on my own. I'll post it, as someone else may encounter a similar problem and it could save them hours or days of investigation.
What did I do to find the solution?
I analyzed the test application (basically only the one line where the new Excel application is created) with Process Monitor, and it didn't show anything important. Then I repeated the analysis against a newly started Excel process. It highlighted numerous reads of the Windows registry key
HKEY_USERS\S-1-5-21-2929665075-1795331740-364918325-1024\Software\Microsoft\Office\15.0\Excel\Resiliency\DocumentRecovery
Under the above location I discovered tens of thousands of keys, all created by Excel's "auto-recovery" functionality. Because of their number, loading them when starting a new Excel object was taking about 40 seconds, and that was further multiplied by the 10-20 simultaneously loaded sessions (did I mention my application runs in 20 user sessions?).
Solution:
Removal of "Resilency" registry tree does the trick.
Why were all these "auto-recovery" entries there in the first place? I guess I don't handle closing Excel very well, so it "thinks" I'm having regular crashes and "tries" to help.
Now what's left is preventing this from happening all over again, so I'll have a closer look at my ExcelClose() function.
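For reference, a shutdown sequence along these lines usually keeps Excel from thinking it crashed (a sketch only; the workbook handling is simplified and the method name just mirrors the ExcelClose() mentioned above):
using System;
using System.Runtime.InteropServices;
using Excel = Microsoft.Office.Interop.Excel;

static void ExcelClose(Excel.Application xlApp, Excel.Workbook workbook)
{
    if (workbook != null)
    {
        workbook.Close(false);        // close without saving, so no recovery entry is left behind
        Marshal.ReleaseComObject(workbook);
    }
    if (xlApp != null)
    {
        xlApp.Quit();                 // let Excel shut down normally instead of being killed
        Marshal.ReleaseComObject(xlApp);
    }

    // Collect twice so RCWs queued for finalization are actually released.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    GC.WaitForPendingFinalizers();
}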
Thanks for your attention - Adrian

I don't think the problem is with this constructor. Try to create the object dynamically:
var obj = Activator.CreateInstance(Type.GetTypeFromProgID("Excel.Application"));
Then cast it to Microsoft.Office.Interop.Excel.Application:
var xlApp = (Microsoft.Office.Interop.Excel.Application)obj;
MessageBox.Show(xlApp.Name);
I'd expect the slow-down to move to the Activator.CreateInstance call.
Anyway, you can try to work around it by placing the following into your app.config file (more details):
<runtime>
<generatePublisherEvidence enabled="false"/>
</runtime>
I'd also suggest making sure you're running the latest VSTO Runtime and the latest Office PIAs.

Related

Using filehelpers ExcelStorage - Excel File not opening

I am using filehelpers ExcelStorage somewhat like this:
ExcelStorage provider = new ExcelStorage(typeof(Img));
provider.StartRow = 2;
provider.StartColumn = 1;
provider.FileName = "Customers.xls";
provider.HeaderRows = 6;
provider.InsertRecords(imgs.ToArray()); // imgs was a list before
And when I am done inserting records, I would like to open the Excel file I created (with my software still running). But it seems that Excel is somehow locked, i.e. there is an Excel instance running in the process manager. When I kill all the Excel instances I can open the file. Do I have to dispose of the ExcelStorage in some way?
I've used FileHelpers, but not ExcelStorage. The link here suggests that you should probably be using FileHelpers.ExcelNPOIStorage instead.
Looking at the source code for ExcelStorage, there is no public dispose method. There is a private CloseAndCleanup method which is called at the end of InsertRecords. Therefore I don't think there's anything you are doing wrong.
The usage of ExcelNPOIStorage looks very much the same. There is a call to GC.Collect() within the private cleanup method here, so I'd guess there was a known issue with the cleanup in the prior version of the component.
Your best bet is to grab a copy of HANDLE.EXE which you can use with an elevated command prompt to see what has a handle to the file in question. This may be your code, anti virus or excel (if open). Excel does keep a full lock on a file when open preventing ordinary notepad access etc.
If the process owning the handle to the file is your own code, then see if the handle exists once you have exited back to the development environment. If that clears the handle, then you are not releasing the lock properly and that can be slightly trickier as it will depend on exactly what you have coded.
The CloseAndCleanup function mentioned by @timbo is only called from a few places: the Sheets property and the ExtractRecords / InsertRecords functions. The only other things to wonder are whether you are seeing any exceptions when it attempts the CloseAndCleanup, or whether the reference count on the Excel application hasn't been properly released by the COM system.
If you can replicate this with a small sample app, I will be more than willing to give it a quick test and see what happens.
Note 1: if you are running your code from within Visual Studio, it may be a process called <APPNAME>.VSHOST.EXE, which is Visual Studio's development process, or, if you've turned off Visual Studio hosting, just your <APP>.EXE. If running within IIS for a web page or web service, you will more than likely have a w3wp.exe process.
Note 2: if you run handle without being elevated, it may or may not find the handle to the file in question. Therefore, it is always recommended to run it elevated to ensure the results are accurate.
Note 3: the difference between ExcelStorage and ExcelNPOIStorage is that the former deals with .xls and the latter deals with .xlsx, if I remember rightly.

Saving .NET user settings takes a very long time

In our .NET 4.0 WinForms application, some users (all Win7 x64) recently experienced very long wait times (compared to others) when the application saves its settings using this code:
Properties.Settings.Default.Save();
Typical durations: 0.5 to 1 seconds
Extreme durations: 15 to 20 seconds
The application's settings (scope: User, everything saved in user.config under AppData\Local\) consist of several custom classes as well as two classes representing printer settings:
System.Drawing.Printing.PageSettings and
System.Drawing.Printing.PrinterSettings
Using GlowCode profiler on one of those machines, I found the following function to take 17 seconds:
<Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPrinterSettings_x003A__x003A_Write9_PrinterSettings Nodes="1" Visits="1" percent_in_Child="100.00 %" Time_in_Child="17.456" Time="17.456" Avg._Time_in_Child_="17.456" Avg._Time="17.456" Blocks_net="12" Bytes_net="1024" Blocks_gross="1087" Bytes_gross="494146" />
Of which the duration was almost equally split onto three getters (taken from GlowCode viewer):
PrinterSettings::get_PaperSizes
PrinterSettings::get_PaperSources
PrinterSettings::get_PrinterResolutions
Doing some research revealed following pages:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/8fd2132a-63e8-498e-ab27-d95cdb45ba87/printersettings-are-very-slow
and
http://www.pcreview.co.uk/forums/papersources-and-papersizes-really-slow-some-systems-t3660593.html, quote:
On some systems, particularly Vista x64 systems, it takes forever (5 to 15
seconds if compiled for x64, 10-20 seconds if compiled for x86) to enumerate
either the papersources or papersizes collection of a printersettings object.
Using a small test app that just saves PrinterSettings revealed a saving time of around 3.5 seconds on one of those "slow" machines, while the other was not impressed at all, with a duration of 0.2 seconds, which matches my fast development machine.
Any ideas on the reasons and how to improve this?
How can I find the real reasons for these delays?
Edit: Thanks for pointing out that the printer settings are acquired through the driver; this might explain delays on certain machines.
Updating the printer drivers is not possible, since I cannot access the machines this will be installed on in the future.
Also, I won't (I know, I know) reduce the PrinterSettings information being saved and eventually break backward compatibility just because some people might experience a lag...
Maybe if I try the serialization in the background (after the user has made some printer changes?) it might speed things up...
First suggestion:
The calls to retrieve paper sources and paper sizes are passed through to the driver. Your best bet is going to be making sure that the newest version of the driver is installed. It's possible that older versions of the driver (in particular, those from the CD that came in the box) are old and buggy. If you haven't already, hit the manufacturer's website and grab the latest.
Second suggestion
Apart from that, it's going to be a pain, but you could try using the underlying Win32 APIs instead of the CLR counterparts. In this case, you'd call GetPrinter, requesting a PRINTER_INFO_2 struct. Once you have that, you can examine pDevMode to get a DEVMODE struct that has all of the information you're looking for.
This question or this question should be helpful.
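If you do go the native route, the call pattern is the usual two-step buffer query; a rough sketch (the PRINTER_INFO_2/DEVMODE struct definitions and the pDevMode parsing are omitted, so treat this as a starting point rather than drop-in code):
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class NativePrinterQuery
{
    [DllImport("winspool.drv", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool OpenPrinter(string pPrinterName, out IntPtr phPrinter, IntPtr pDefault);

    [DllImport("winspool.drv", SetLastError = true)]
    static extern bool ClosePrinter(IntPtr hPrinter);

    [DllImport("winspool.drv", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool GetPrinter(IntPtr hPrinter, int level, IntPtr pPrinter, int cbBuf, out int pcbNeeded);

    // Returns the raw PRINTER_INFO_2 buffer; map it onto a PRINTER_INFO_2 struct
    // (see pinvoke.net for the layout) and follow pDevMode to read the DEVMODE.
    public static byte[] GetPrinterInfo2(string printerName)
    {
        IntPtr hPrinter;
        if (!OpenPrinter(printerName, out hPrinter, IntPtr.Zero))
            throw new Win32Exception();
        try
        {
            int needed;
            GetPrinter(hPrinter, 2, IntPtr.Zero, 0, out needed);   // first call: ask for the required size
            IntPtr buffer = Marshal.AllocHGlobal(needed);
            try
            {
                if (!GetPrinter(hPrinter, 2, buffer, needed, out needed))
                    throw new Win32Exception();
                var result = new byte[needed];
                Marshal.Copy(buffer, result, 0, needed);
                return result;
            }
            finally { Marshal.FreeHGlobal(buffer); }
        }
        finally { ClosePrinter(hPrinter); }
    }
}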
Instead of persisting the entire PrinterSettings class instance, only persist the individual settings as their base types. Keep it simple -- strings, ints, bools, etc. Clearly the serializer is requesting communication with the printer, and that's what is introducing the latency. I'm willing to bet that if you grab the individual class members and serialize them yourself, you'll see an improvement.
Obviously, this means that when you load settings, you'll need to deserialize all of these settings back into a new PrinterSettings class, and apply them.
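A minimal sketch of that approach; the setting names (SavedPrinterName, SavedCopies, SavedLandscape, SavedPaperSizeName) are hypothetical user-scoped settings you would add to the project, not part of any existing API:
using System.Drawing.Printing;

static void SavePrinterSelections(PrinterSettings ps)
{
    // Persist only cheap scalar values instead of serializing the whole class.
    Properties.Settings.Default.SavedPrinterName = ps.PrinterName;
    Properties.Settings.Default.SavedCopies = ps.Copies;
    Properties.Settings.Default.SavedLandscape = ps.DefaultPageSettings.Landscape;
    Properties.Settings.Default.SavedPaperSizeName = ps.DefaultPageSettings.PaperSize.PaperName;
    Properties.Settings.Default.Save();
}

static PrinterSettings LoadPrinterSelections()
{
    var ps = new PrinterSettings { PrinterName = Properties.Settings.Default.SavedPrinterName };
    ps.Copies = (short)Properties.Settings.Default.SavedCopies;
    ps.DefaultPageSettings.Landscape = Properties.Settings.Default.SavedLandscape;

    // Re-selecting the paper size by name still touches the driver, but only at load time.
    foreach (PaperSize size in ps.PaperSizes)
        if (size.PaperName == Properties.Settings.Default.SavedPaperSizeName)
            ps.DefaultPageSettings.PaperSize = size;
    return ps;
}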
EDIT 1, in response to question edit
That's true - you could run the Save() asynchronously in the background. Your only issue would be if the user attempts to end the process (close the app) before the save is complete. You'd have to maintain a bool indicating whether a save is in progress (set to false when the callback fires). If the user attempts to exit the app and the bool is true, put up "Please wait while settings are saved..." until the bool goes false.
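A sketch of that pattern (the field, method, and handler names are made up for illustration; Task.Factory.StartNew is used since the app targets .NET 4.0):
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MainForm : Form
{
    volatile bool _savingSettings;

    void SaveSettingsInBackground()
    {
        _savingSettings = true;
        Task.Factory.StartNew(() =>
        {
            try { Properties.Settings.Default.Save(); }
            finally { _savingSettings = false; }   // the "callback": the save has finished
        });
    }

    void MainForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        if (_savingSettings)
        {
            // Keep the form open until the background save has completed.
            e.Cancel = true;
            MessageBox.Show("Please wait while settings are saved...");
        }
    }
}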
So, it seems some machines take a long time querying the page and printer settings through the installed driver. I couldn't find any more specifics about that.
To shorten the shutdown time, the aforementioned parts of the settings are assigned and saved in a background thread after the user has made changes to the printer settings. That takes about 10 seconds.
During shutdown (form close), these settings are not assigned again, but we still save everything (using Properties.Settings.Default.Save()), and somehow the serializer recognizes that they have no changes to query, so the saving finishes very quickly:
between 0.02 and 0.05 seconds, yet all settings are still saved properly!
Fun fact: this issue was first reported in the week when we got a new office printer :)

Windows Store App Massive RAM usage / Memory Leak

After running the WACK (Windows App Certification Kit) locally, I had a failure with the app Crashes and Hangs tests (seemingly due to "startup" not being quick enough - there were no "crashes or hangs") and so refactored some of the startup code to ensure quicker start times.
I got the app to pass the local test ok.
But then I noticed I had somehow caused an odd, gigantic memory leak/issue where my C# XAML Win 8.1 Store app starts... i.e.
void OnLaunched(LaunchActivatedEventArgs args);
with around ~200MB shown in Task Manager, which quickly ramps up to ~1.5GB before causing the app to crash.
I tried using the VS2013 Performance and Diagnostic tools to try and work out how the hell I'd managed to break my app so totally, and saw the initial ~200MB being used by around ~100 RuntimeTypeCache objects, and then later on (the 0.7 - 1.5 GB and upwards stages of fun) a List<object> with various types in it.
I've tried commenting out code until the App.xaml.cs file does nothing except
this.InitializeComponent();
No joy.
I checked the Package.appxmanifest and removed everything non essential.
I removed references and commented out code until the app was essentially an empty Prism MVVM Win Store app - doing NOTHING... and it still starts at 200MB!
What on earth is going on?
Well, I solved this, and thought the answer might save someone else from pulling their hair out.
It seems that one of the performance profiling tools I had used to try and help with the startup time issue had hooked into my app and was actually CAUSING this massive memory usage and subsequent crashes.
I'm not sure which, but I used:
1. WACK - unlikely
2. Visual Studio 2013 (performance and diagnostics) - possible
3. Application Verifier (appverif.exe in C:\Windows\System32 - as recommended in the WACK documentation for solving Crashes and Hangs)
My guess is that it was 3, but this is basically an uninformed guess.
I solved this issue by renaming the .exe, i.e. changing
<AssemblyName>MyNEWAppName</AssemblyName>
in MyAppName.csproj.
After this the app returned to normal memory usage (a tiny fraction of the 200MB - 1.7GB I had been seeing).

Very High Memory Usage in .NET 4.0

I have a C# Windows Service that I recently moved from .NET 3.5 to .NET 4.0. No other code changes were made.
When running on 3.5, memory utilization for a given workload was roughly 1.5 GB of memory and throughput was 20 X per second. (The X doesn't matter in the context of this question.)
The exact same service running on 4.0 uses between 3GB and 5GB+ of memory, and gets less than 4 X per second. In fact, the service will typically end up stalling out as memory usage continues to climb, until my system is sitting at 99% utilization and page file swapping goes nuts.
I'm not sure if this has to do with garbage collection or what, but I'm having trouble figuring it out. My Windows service uses the "Server" GC via the config file switch seen below:
<runtime>
<gcServer enabled="true"/>
</runtime>
Changing this option to false didn't seem to make a difference. Furthermore, from the reading I've done on the new GC in 4.0, the big changes only affect the workstation GC mode, not the server GC mode. So perhaps GC has nothing to do with the issue.
Ideas?
Well this was an interesting one.
The root cause turns out to be a change in the behavior of SQL Server Reporting Services' LocalReport class (v2010) when running this on top of .NET 4.0.
Basically, Microsoft altered the behavior of RDLC processing so that each report is processed in a separate application domain. This was actually done specifically to address a memory leak caused by the inability to unload assemblies from app domains. When the LocalReport class processes an RDLC file, it actually creates an assembly on the fly and loads it into the app domain.
In my case, due to the large volume of reports I was processing, this resulted in very large numbers of System.Runtime.Remoting.ServerIdentity objects being created. This was my tip-off to the cause, as I was confused as to why processing an RDLC required remoting.
Of course, to call a method on a class in another app domain, remoting is exactly what you use. In .NET 3.5, this wasn't necessary as, by default, the RDLC-assembly was loaded into the same app domain. In .NET 4.0, however, a new app domain is created by default.
The fix was fairly easy. First I needed to go enable legacy security policy using the following config:
<runtime>
<NetFx40_LegacySecurityPolicy enabled="true"/>
</runtime>
Next, I needed to force the RDLCs to be processed in the same app domain as my service by calling the following:
myLocalReport.ExecuteReportInCurrentAppDomain(AppDomain.CurrentDomain.Evidence);
This resolved the issue.
I ran into this exact issue. And it is true that app domains are created and not cleaned up. However, I wouldn't recommend reverting to the legacy security policy; the app domains can be cleaned up by calling ReleaseSandboxAppDomain().
LocalReport report = new LocalReport();
...
report.ReleaseSandboxAppDomain();
Some other things I also do to clean up:
Unsubscribe from any SubreportProcessing events,
Clear the data sources,
Dispose the report.
Our Windows service processes several reports a second and there are no leaks.
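Putting those points together, the per-report lifecycle might look roughly like this (a sketch; the report path, data source name, and OnSubreportProcessing handler are hypothetical):
using Microsoft.Reporting.WinForms;

byte[] RenderReportToPdf(System.Collections.IEnumerable data)
{
    var report = new LocalReport();
    report.ReportPath = @"Reports\Invoice.rdlc";                          // hypothetical report
    report.SubreportProcessing += OnSubreportProcessing;                  // hypothetical handler
    report.DataSources.Add(new ReportDataSource("InvoiceDataSet", data)); // hypothetical data source name

    byte[] pdf = report.Render("PDF");

    // Cleanup, per the steps above.
    report.SubreportProcessing -= OnSubreportProcessing;
    report.DataSources.Clear();
    report.ReleaseSandboxAppDomain();
    report.Dispose();

    return pdf;
}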
I'm pretty late to this, but I have a real solution and can explain why!
It turns out that LocalReport uses .NET Remoting to dynamically create a sub-appdomain and run the report, in order to avoid an internal leak somewhere. We then notice that, eventually, the report releases all the memory, but only after 10 to 20 minutes. For people generating a lot of PDFs, this isn't going to work. However, the key here is that they are using .NET Remoting. One of the key parts of Remoting is something called "leasing". Leasing means that it will keep that marshalled object around for a while, since Remoting is usually expensive to set up and it's probably going to be used more than once. LocalReport's RDLC processing is abusing this.
By default, the lease time is... 10 minutes! Also, if something makes various calls into it, another 2 minutes is added to the wait time! Thus, it can randomly be between 10 and 20 minutes depending on how the calls line up. Luckily, you can change how long this timeout is. Unluckily, you can only set this once per app domain... Thus, if you need Remoting for anything other than PDF generation, you will probably need to make another service to run it so you can change the defaults. To do this, all you need to do is run these 4 lines of code at startup:
LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);            // initial lifetime of a remoted object
LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5); // how often expired leases are checked
LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);      // extra lifetime added on each call
LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);   // how long to wait for a sponsor before giving up
You'll see the memory use start to rise and then within a few seconds you should see the memory start coming back down. Took me days with a memory profiler to really track this down and realize what was happening.
You can't wrap ReportViewer in a using statement (Dispose crashes), but you should be able to if you use LocalReport directly. After that disposes, you can call GC.Collect() if you want to be doubly sure you are doing everything you can to free up that memory.
Hope this helps!
Edit
Apparently, you should call GC.Collect(0) after generating a PDF report or else it appears the memory use could still get high for some reason.
You might want to:
profile the heap
use WinDbg + SOS.dll to establish what resource is being leaked and from where the reference is held
Perhaps some API has changed semantics or there might even be a bug in the 4.0 version of the framework
Just for completeness, if anyone is looking for the equivalent ASP.Net web.config setting, it is:
<system.web>
<trust legacyCasModel="true" level="Full"/>
</system.web>
ExecuteReportInCurrentAppDomain works the same.
Thanks to this Social MSDN reference.
It seems as though Microsoft tried putting the report into its own separate memory space to work around all of the memory leaks rather than fix them. In doing so, they introduced some hard crashes, and ended up having more memory leaks anyway. They seem to cache the report definition, but never use it and never clean it up, and every new report creates a new report definition, taking up more and more memory.
I played around with doing the same thing: use a separate app domain and marshal the report over to it. I think that is a terrible solution and makes a mess very quickly.
What I did instead is similar: split the reporting part of your program out into its own separate reports program. This turns out to be a good way to organize your code anyway.
The tricky part is passing information to the separate program. Use the Process class to start a new instance of the reports program and pass any parameters it needs on the command line. The first parameter should be an enum or similar value indicating the report that should be printed. My code for this in the main program looks something like:
const string sReportsProgram = "SomethingReports.exe";

public static void RunReport1(DateTime pDate, int pSomeID, int pSomeOtherID) {
    RunWithArgs(ReportType.Report1, pDate, pSomeID, pSomeOtherID);
}

public static void RunReport2(int pSomeID) {
    RunWithArgs(ReportType.Report2, pSomeID);
}

// TODO: currently no support for quoted args
static void RunWithArgs(params object[] pArgs) {
    // .Join here is my own extension method which calls string.Join
    RunWithArgs(pArgs.Select(arg => arg.ToString()).Join(" "));
}

static void RunWithArgs(string pArgs) {
    Console.WriteLine("Running Report Program: {0} {1}", sReportsProgram, pArgs);
    var process = new Process();
    process.StartInfo.FileName = sReportsProgram;
    process.StartInfo.Arguments = pArgs;
    process.Start();
}
And the reports program looks something like:
[STAThread]
static void Main(string[] pArgs) {
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    var reportType = (ReportType)Enum.Parse(typeof(ReportType), pArgs[0]);
    using (var reportForm = GetReportForm(reportType, pArgs))
        Application.Run(reportForm);
}

static Form GetReportForm(ReportType pReportType, string[] pArgs) {
    switch (pReportType) {
        case ReportType.Report1: return GetReport1Form(pArgs);
        case ReportType.Report2: return GetReport2Form(pArgs);
        default: throw new ArgumentOutOfRangeException("pReportType", pReportType, null);
    }
}
Your GetReportForm methods should pull the report definition, make use of relevant arguments to obtain the dataset, pass the data and any other arguments to the report, and then place the report in a report viewer on a form and return a reference to the form. Note that it is possible to extract much of this process so that you can basically say 'give me a form for this report from this assembly using this data and these arguments'.
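To make that concrete, one of the GetReportXForm methods might look something like this sketch (the report path, dataset name, and Report1Data(...) query helper are all hypothetical):
using System;
using System.Windows.Forms;
using Microsoft.Reporting.WinForms;

static Form GetReport1Form(string[] pArgs) {
    // pArgs[0] is the ReportType; the remaining args match RunReport1's parameters.
    var date = DateTime.Parse(pArgs[1]);
    var someId = int.Parse(pArgs[2]);
    var someOtherId = int.Parse(pArgs[3]);

    var viewer = new ReportViewer {
        Dock = DockStyle.Fill,
        ProcessingMode = ProcessingMode.Local
    };
    viewer.LocalReport.ReportPath = @"Reports\Report1.rdlc";       // hypothetical path
    viewer.LocalReport.DataSources.Add(
        new ReportDataSource("Report1DataSet",                     // hypothetical dataset name
            Report1Data(date, someId, someOtherId)));              // hypothetical DB query

    var form = new Form { Text = "Report 1", WindowState = FormWindowState.Maximized };
    form.Controls.Add(viewer);
    form.Load += (s, e) => viewer.RefreshReport();
    return form;
}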
Also note that both programs must be able to see your data types that are relevant to this project, so hopefully you have extracted your data classes into their own library, which both of these programs can share a reference to. It would not work to have all of the data classes in the main program, because you would have a circular dependency between the main program and the report program.
Don't overdo it with the arguments, either. Do any database querying you need in the reports program; don't pass a huge list of objects (which probably wouldn't work anyway). You should just be passing simple things like database ID fields, date ranges, etc. If you have particularly complex parameters, you might need to push that part of the UI to the reports program too and not pass them as arguments on the command line.
You can also put a reference to the reports program in your main program, and the resulting .exe and any related .dlls will be copied to the same output folder. You can then run it without specifying a path and just use the executable filename by itself (ie: "SomethingReports.exe"). You can also remove the reporting dlls from the main program.
One issue with this is that you will get a manifest error if you've never actually published the reports program. Just dummy publish it once, to generate a manifest and then it will work.
Once you have this working, it's very nice to see your regular program's memory stay constant when printing a report. The reports program appears, taking up more memory than your main program, and then disappears, cleaning it up completely with your main program taking up no more memory than it already had.
Another issue might be that each report instance will now take up more memory than before, since they are now entire separate programs. If the user prints a lot of reports and never closes them, it will use up a lot of memory very fast. But I think this is still much better since that memory can easily be reclaimed simply by closing the reports.
This also makes your reports independent of your main program. They can stay open even after closing the main program, and you can generate them from the command line manually, or from other sources as well.

Local ASP.NET MVC Suddenly Very Slow; Load times > 1 minute

Over the last few weeks I've been subject to a sudden and significant performance deterioration when browsing locally hosted ASP.NET 3.5 MVC web applications (C#). Load times for a given page are on average 20 seconds (regardless of content); start up is usually over a minute. These applications run fast on production and even test systems (Test system is comparable to my development environment).
I am running IIS 6.0, VS2008, Vista Ultimate, SQL2005, .NET 3.5, MVC 1.0, and we use VisualSVN 1.7.
My SQL DB is local and IPv6 does not seem to be the cause. I browse in Firefox and IE8 outside of Debug mode using loopback, machine name, and 'localhost' and get the exact same results every time (hence DNS doesn't seem to be the issue either).
Below are screen shots of my dotTrace output.
http://www.glowfoto.com/static_image/28-100108L/3123/jpg/06/2010/img4/glowfoto
This issue has made it near impossible to debug/test any web app. Any suggestions very much appreciated!
SOLUTION: Complete re-installation of Windows, IIS, Visual Studio, etc. It wasn't the preferred solution, but it worked.
Surely the big red flag on that profiler output is the fact that AddDirectory is called 408 times and AddExistingFile is called 66,914 times?
Can you just confirm that there's not just a shed load of directories and files underneath your MVC app's root folder? Because it looks like the framework is busying itself trying to work out what files it needs to build (or add watches to) on startup.
[I am not au fait with MVC and so maybe this is not what is happening but 67k calls to a function with a name like "AddExistingFile" does smell wrong].
I've learnt that it's usually a "smell" when things fail near a power of two ...
Given
Over the last few weeks I've been subject to a sudden and significant performance deterioration
and
AddExistingFile is called 66,914 times
I'm wondering if the poor performance hit at about the time the number of files exceeded 65,535 ...
Other possibilities to consider ...
Are all 66,914 files in the same directory? If so, that's a lot of directory blocks to access ... try a hard drive defrag. In fact, it's even more directory blocks if they're distributed across a bunch of directories.
Are you storing all the files in the same list? Are you presetting the capacity of that list, or allowing it to "grow" naturally and slowly?
Are you scanning for files depth first or breadth first? Caching by the OS will favor the performance of depth first.
Update 14/7
Clarification of Are you storing all the files in the same list?
Naive code like this first example doesn't perform well, because it needs to reallocate storage space as the list grows:
var myList = new List<int>();
for (int i = 0; i < 10000; i++)
{
    myList.Add(i);
}
It's more efficient, if you know the size in advance, to initialize the list with a specific capacity and avoid the reallocation overhead:
var myList = new List<int>(10000); // Capacity is 10000
for (int i = 0; i < 10000; i++)
{
    myList.Add(i);
}
Update 15/7
Comment by OP:
These web apps are not programmatically probing files on my hard disk, at least not by my hand. If there is any recursive file scanning, its by VS 2008.
It's not Visual Studio that's doing the file scanning - it is your web application. This can clearly be seen in the first profiler trace you posted - the call to System.Web.Hosting.HostingEnvironment.Initialize() is taking 49 seconds, largely because of 66,914 calls to AddExistingFile(). In particular, the read of the property CreationTimeUTC is taking almost all the time.
This scanning won't be random - it's either the result of your configuration of the application, or the files are in your web applications file tree. Find those files and you'll know the reason for your performance problems.
Try creating a new, default MVC2 application in a new web folder. Build and browse it. If your load times are okay with the new app, then there's something up with your application. If not, it's outside of the context of the app and you should start looking at IIS config, extensions, hardware, network, etc.
In your app, back up your web config and start with a new, default web.config. That should disable any extensions or handlers you've installed. If that fixes your load times, start adding stuff from the old web.config into the new one in small blocks until the problem reappears, and in that way isolate the offending item.
I call this "binary search" debugging. It's tedious, but actually works pretty quickly and will most likely identify the problem when we get stuck in one of those "BUT IT SHOULD WORK!!!" modes.
Update: just a thought - to rule out IIS config, try running the site under Cassini/the built-in dev server.
The solution was to format and do a clean install of Vista, SQL Server 2005, Visual Studio 2008, IIS6 and the whole lot. I am now able to debug, without consequence, the very same webapp(s) I was experiencing the problems with initially. This leads me to believe the problem lay within one of the installations above and must have been aggravated by a software update or by the addition of software.
You could download Fiddler to measure how long each call takes and get some measurements.
This video might help...
