I need help with the re-hosted designer of WF4. It should be used to design very complex workflows, but there is a limit on nested activities (around 40 nested activities in one branch). If that limit is exceeded, a System.StackOverflowException occurs in PresentationCore.dll.
Is there any way to raise the limit so the System.StackOverflowException no longer occurs?
Here is the code example for download. After building and executing the application, move the scrollbars to the last activity (number 40) and the exception should occur.
I can run it without error and scroll to the very bottom, though it does get a bit slow (64-bit machine, 16 GB memory).
You can set the stack size when you create a new thread, but I don't know how you could change the size for the default UI thread in your application; and even if you can, I'm not sure it would be a good idea.
Besides, increasing the limit only hides the underlying issue, which is why you need this depth in the first place: is anyone really going to nest 40 layers of complexity in a workflow? It would be completely unwieldy and incredibly difficult to support. Couldn't the logic be split into sub-workflows, etc.?
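For what it's worth, the per-thread stack-size idea is demonstrable: in .NET the `Thread(ThreadStart, int maxStackSize)` constructor overload accepts a maximum stack size for the new thread. The same effect is shown below as a runnable Python sketch (the sizes and depth are illustrative, not tuned for the WF designer):

```python
import sys
import threading

def deep_recursion(n):
    # Each call adds a stack frame; deep nesting mirrors deeply nested activities.
    if n == 0:
        return 0
    return 1 + deep_recursion(n - 1)

def run_on_thread(stack_bytes, depth):
    """Run deep_recursion(depth) on a fresh thread with an explicit stack size."""
    result = {}
    threading.stack_size(stack_bytes)   # applies to threads created after this call
    def worker():
        result["value"] = deep_recursion(depth)
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return result["value"]

sys.setrecursionlimit(100_000)          # let the thread's stack be the real limit
print(run_on_thread(64 * 1024 * 1024, 20_000))   # a 64 MB stack handles depth 20,000
```

The same recursion run on a thread with the default stack would overflow much earlier, which is exactly the trade-off the WF designer hits: the limit is a property of the thread, not of the workflow.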
I solved the problem by increasing the size of the stack with the EDITBIN utility from Visual Studio:
editbin /STACK:6291456 "WpfApplication1.exe"
Unfortunately it doesn't work from Visual Studio when added to the Post-Build section in the project properties, so I created a .bat file that has to be executed after the build.
Related
I have an application (C# .NET 3.5 and .NET 2.0) that performs multiple file-read operations. However, the system shows hiccups (jitter) every now and then. I attached the VTune profiler and performed a Locks and Waits analysis; see the first image below.
The Locks and Waits analysis showed that a "Sync Object: Stream filepath" causes the application to be locked (waiting) on all threads. CPU utilization drops to 0% during this period.
Next, I used Sysinternals Process Monitor to log what operation was performed when the hiccups occurred. It shows a file-read operation that takes approx. 1 second, but only occasionally (jitter). See the second image.
I am puzzled. What could cause this jitter in file I/O? It is a synchronous read. I have tried reducing the read buffer from 32,768 bytes to 4,096 bytes, but this did not change anything. Maybe important to note: the machine used to collect these numbers has an SSD. However, we see similar hiccups on machines without SSDs.
Any leads in where to look would be welcome.
This question needs an update. I will post it in the form of an answer, as I have solved the issue, yet not in a way that lets me say for sure what the original issue was.
I have tried a lot of things to find out what caused the occasional spike in file-read duration. First of all, virus scanners matter; McAfee in particular caused some trouble. The comments on the question hinted at this already, and @remus rusanu's tip to use the WPA/WPR combo showed it as well. WPA/WPR pleasantly surprised me and is a valuable tool next to VTune and ProcMon. The first image shows a spike from McAfee in Task Manager just before some long-duration flushes and reads start (>1 s). The second shows that all information in WPA is nicely linked across all graphs. A nice and strong tool when searching for that needle in the haystack.
Yet when I uninstalled the virus-scan software, spikes still occurred. They were less frequent and shorter in duration, but still visible in the application. I tried numerous things to find out what it was, including VMware setups so I could completely strip the system and see if other processes might be the issue. In the end, I gave up. I implemented a workaround for the issue, and this is sufficient for now. Knowing all the actions I took, I would say there was another conflicting process. Another option is the linked unmanaged program, which used mutexes and may have been doing something problematic. I changed the mutexes to critical sections, but saw no direct improvement, so I gave up on that route.
To conclude, unfortunately I have no direct answer. Due to time constraints I was forced to work around the issue, and I will probably never know what its root cause was. I guess that is real life as well.
Thanks for all the tips, I learned some things I will certainly use in the future.
I'm working on an app for Windows Phone 8, and I'm having a memory-leak problem. But first, some background: the app works (unfortunately) using WebBrowsers as pages. The pages are pretty complex, with a lot of JavaScript involved.
The native part of the app, written in C#, is responsible for some simple communication with the JavaScript (e.g. the native side is a delegate for the JavaScript to communicate with a server), page-transition animations, tracking, persistence, etc. Everything is done in a single PhoneApplicationPage.
After I had some crashes from out-of-memory exceptions, I started profiling the app. I can see that the WebBrowsers, which are the big part of the app, are being disposed correctly.
But the problem I'm seeing is that memory continues to increase. What's worse, I have little feedback from the profiler. From what I understand, the profiler graph says there is a big problem, while the profiler numbers say there's no problem at all...
Note: the step represents a navigation from one WebBrowser to another. The spike is created (I suppose) by the animation between the two controls. In the span I've selected in the image, I was navigating forward once and backward once, with a maximum of 5 WebBrowsers (2 for menus that are always there, 1 for the index page, 1 for the page I navigate from, and 1 for the page I navigate to). At every navigation the profiler shows the correct number of WebBrowsers: 5 after navigating forward, 4 after navigating backward.
Note 2: I have added the red line to make it clearer that the memory is going up in that span of time.
As you can see from the image, the memory usage is pretty big, but the numbers say it's low, and in that span of time the retained allocation is lower than when it started...
I hope I've included enough information. I want some ideas on what could cause this problem. My ideas so far are:
- The JavaScript in the WebBrowser is doing something wrong (e.g. not cleaning up some event handler). Even if this is the case, shouldn't the WebBrowser release the memory when it is destroyed?
- Using a single PhoneApplicationPage is something evil that is not supposed to be done, and that structure may be causing this.
- Something else?
Another question: why does the graph show the correct amount of memory use while the numbers don't?
If you need more info from the profiler, ask and I will post it tomorrow.
OK, after a lot of investigation I was finally able to find the leak.
The leak is created by the WebBrowser control itself, which seems to have some event handlers that are not removed when you remove it from a Panel. The leak is reproducible by following these steps:
1. Create a new WebBrowser.
2. Add it to a Panel (or any other container).
3. Navigate to a page with a big, heavy image.
4. Tap somewhere in the blank space of the browser (tapping on the image seems not to create the leak).
5. Remove and collect the browser.
6. Repeat from step 1.
At every iteration the memory for the image is never collected, and memory continues to grow.
A ticket has already been sent to Microsoft.
The problem was resolved by using a pool of WebBrowsers.
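For reference, the pooling workaround can be sketched generically: rather than creating and disposing a WebBrowser per navigation, a fixed set of instances is recycled, so the leaky dispose path is never exercised. A runnable Python sketch of the pattern (the `FakeBrowser` class is a stand-in for the real control):

```python
class BrowserPool:
    """Keep a fixed set of browser controls alive and recycle them,
    instead of creating/destroying one per navigation."""

    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        if not self._free:
            raise RuntimeError("pool exhausted: too many simultaneous pages")
        return self._free.pop()

    def release(self, browser):
        browser.navigate("about:blank")   # reset state rather than dispose
        self._free.append(browser)

class FakeBrowser:
    """Stand-in for the real WebBrowser control."""
    def __init__(self):
        self.url = None
    def navigate(self, url):
        self.url = url

pool = BrowserPool(FakeBrowser, size=2)
first = pool.acquire()
first.navigate("http://example.com/page1")
pool.release(first)
second = pool.acquire()          # the same instance comes back out
print(second is first)           # True
```

The cost of the pattern is that pooled instances must be reset to a neutral state on release; the benefit is that whatever the control leaks on disposal is only ever paid once per pooled instance.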
I don't think there is enough information to find the cause of your leak, and without you posting your entire solution I am not sure there can be, since the question is about locating its root cause...
What I can offer is the approach I used when I had my own memory leak.
The technique was to:
1. Open a memory profiler. From your screenshot I see you are already using one; I used PerfMon. This article has some material about setting up PerfMon, and @fmunkert also explains it rather well.
2. Locate an area in the code where you suspect the leak is likely to be. This part mostly depends on you having good guesses about which part of the code is responsible for the issue.
3. Push the leak to the extreme: use labels and "goto" to isolate an area/function and repeat the suspicious code many times (a loop will work too; I find goto more convenient for this).
4. In the loop, use a breakpoint that halts every 50 hits to examine the delta in memory usage. Of course you can change the value to fit a noticeable leak change in your application.
5. If you have located the area that causes the leak, the memory usage should spike rapidly. If it does not spike, repeat steps 1-4 with another area of code you suspect of being the root cause. If it does, continue to step 6.
6. In the area you have found to be the cause, use the same technique (goto + labels) to zoom in and isolate smaller parts of the area until you find the source of the leak.
Note that the downsides of this method are:
- If you are allocating an object in the loop, its disposal should also be contained in the loop.
- If you have more than one source of leak, it makes the sources harder to spot (yet still possible).
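The repeat-and-measure step can also be automated in code rather than with breakpoints. Here is a sketch of the same idea in runnable Python using `tracemalloc` (the `leaky`/`clean` functions are made-up suspects; in C# you would compare `GC.GetTotalMemory` readings instead):

```python
import tracemalloc

def memory_delta(func, iterations=50):
    """Run func repeatedly and return the growth in tracked allocations.
    A leaking suspect shows a delta roughly proportional to iterations."""
    tracemalloc.start()
    func()                                   # warm-up: one-time caches don't count
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        func()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

retained = []
def leaky():
    retained.append(bytearray(10_000))       # every allocation stays reachable

def clean():
    bytearray(10_000)                        # allocation is freed on return

print(memory_delta(leaky) > memory_delta(clean))   # True: the leak stands out
```

The warm-up call matters: lots of code allocates caches on first use, and without it a perfectly healthy function looks like a leak on its first measurement.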
Did you clean up your event handlers? You may inadvertently still have some references if the controls are rooted.
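The "rooted through an event handler" situation is easy to demonstrate: in .NET, a subscribed handler gives the long-lived event source a strong reference to the subscriber, so the subscriber can never be collected until it unsubscribes. The same mechanic in runnable Python (the `EventSource`/`Page` classes are illustrative stand-ins):

```python
import gc
import weakref

class EventSource:
    """Long-lived publisher that keeps strong references to its handlers."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def unsubscribe(self, handler):
        self._handlers.remove(handler)

class Page:
    def on_event(self):
        pass

source = EventSource()                 # lives as long as the app

page = Page()
source.subscribe(page.on_event)        # the bound method keeps `page` alive
probe = weakref.ref(page)
del page
gc.collect()
assert probe() is not None             # leaked: still rooted via the handler

source.unsubscribe(probe().on_event)   # clean up the handler...
gc.collect()
assert probe() is None                 # ...and the page becomes collectable
```

This is why forgetting a single `-=` on a static or application-lifetime event in C# can keep an entire page (and everything it references) alive indefinitely.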
I have a complex project using the Silverlight Toolkit's ListBoxDragDropTarget for drag-drop operations, and it is maxing out the CPU. I tried to reproduce the issue in a small sample project, but there it works fine. The problem persists when I remove our custom styles and all other controls from the page, but the page is hosted in another page's ScrollView.
"EnableRedrawRegions" shows that the screen gets redrawn on every frame. My question is this: How can I track down the cause of this constant redrawing?
I have used XPerf to help track down performance issues related to redrawing in Silverlight. It is not completely straightforward or an easy process, but it can help point you in the right direction to where your problems are.
I started with a great tutorial by Seema about using the XPerf command-line tool to profile CPU usage for a Silverlight app. You basically load up your app, start sampling with XPerf, perform your CPU-intensive operations, and then stop sampling and analyze the profile XPerf generates. When you look at the XPerf charts, you can filter by a specific process (such as iexplore.exe or your browser) to see the total % CPU. You can then select a specific length of time in the profile and drill down to see which functions from which DLLs are taking the most CPU cycles. If you point XPerf to Microsoft's symbol server, you should get the specific names of the functions where the app is spending most of its time.
For a Silverlight app it's most important to look at what's going on in agcore.dll, npctrl.dll, and coreclr.dll. If your performance problems are related to redrawing, most of the CPU time is likely spent in agcore.dll since that does most of the graphics related work for Silverlight. You can then drill into that and see the specific functions in agcore.dll that are getting called most often during your sample time.
I understand it is kind of an annoying way to debug since you can only really see what is going on in the core Silverlight functions, but it may be able to help you figure out what is going on. In my case I was able to see that most of the time was spent calculating drop-shadows in agcore.dll. I was then able to figure out I stupidly had some content within a drop-shadow effect that was changing many times a second and causing constant recalculation/redraws of the entire drop-shadow effect.
Once you identify your redrawing issues, you might want to look into GPU acceleration with bitmap caching if you haven't already. That will offload some of the redrawing to the GPU and save you some CPU cycles.
My project is developed in Visual Studio 08, in C#. It's a standalone desktop application, about 60k lines of code.
Once upon a time I loved working on this software; now that the compilation time has grown to approximately 2 minutes, it has become a far less enjoyable experience...
I think my lack of experience with C# may be a factor. I have developed everything under one namespace, for example. Would a well-structured codebase enable the compiler to recompile only the necessary parts of the code when changes are made? Or do I need to separate sections into separate projects/DLLs to force this to happen?
How much of a difference would upgrading to the latest quad-core processor make?
The other thought is, perhaps this is a typical thing for programmers to deal with - is a long compile time like this simply something that must be managed?
Thanks in advance.
Things that increase compile time:
The number of projects in a solution makes more difference than the number of files in a particular project.
Custom build tasks can make a huge difference, especially if they are generating code or running post-build analysis (FxCop, StyleCop, Code Contracts).
Native code projects take longer to build.
A single project containing 60K lines of C# code with no special build features enabled should compile in seconds on any machine made in the past 5+ years.
I'm surprised that 60k lines of code take 2 minutes to compile. I have an application that is 500,000 lines of code, and it only takes about a minute and a half. Make sure you are not doing full rebuilds each time, and make sure you are not cleaning the solution between builds. A normal build should perform an incremental build, only recompiling code that has changed since the last build (along with anything affected by that change.)
Perhaps some other factors might include heavy use of large resources (images?), broad-sweeping changes in the lowest level libraries (i.e. those used by everything else), etc. Generally speaking, on a relatively modern machine, compiling 60,000 lines of C# code should take less than a minute on average, unless you are rebuilding the entire solution.
There is this thread about hardware to improve compile time. Also this really excellent blog post from Scott Guthrie on looking at hard drive speed for performance.
Splitting your project up into multiple projects will help: only the projects that have changes (and the projects that depend on them) will need recompilation.
A single namespace, however, shouldn't affect compile time. That said, if you do split your project into multiple projects/assemblies, a single namespace is definitely not a good idea.
Upgrading to a faster CPU will probably help, but you might find that faster I/O (better disks, RAID, etc.) is more useful.
And yes, avoiding long compile times is one of the things developers need to take care of. When it comes to productivity, do whatever you can (better tools, bigger screens, faster machines, etc.).
I'm working on UWP after working for a long time in WPF, and compilation is abysmally slow.
I understand why release compilation is slow (.NET Native compilation), but that's disabled in Debug, yet it still takes a lot of time between pressing F5 and the application being displayed on screen, even for a blank application.
Is there any way to speed this up (even at the cost of runtime performance)? It's really hard to work with when you're used to C# giving you extremely fast compile times and testing after every small change.
For example, just a right-click and rebuild on a very simple UWP project (4 pages, each < 200 lines of XAML, and pretty much 0 C#) takes almost exactly 20 seconds in Debug (without .NET Native), with no project references. Meanwhile, a much larger WPF application (dozens of windows, thousands of lines of code) takes a few seconds, and most of that time is copying child projects!
As suggested you can find a minimal example for download here :
https://www.dropbox.com/s/0h89qsz66erba3x/WPFUWPCompileSpeed.zip?dl=0
It's just a solution with a blank WPF and a blank UWP app. Compilation time for WPF is just over 1 second; for UWP, 12 seconds. In each case the solution was cleaned and the single project I was testing was right-clicked and rebuilt. This is in Debug (without .NET Native compilation).
I definitely agree that UWP compilation is significantly slower than WPF. I've always had to break apart my WPF assemblies whenever I reached about 5-6 dozen XAML windows or views, in order to keep incremental compilation times down at 10-20 seconds. The way things are looking, my UWP assemblies will probably grow to only about 2 or 3 dozen items before they take way too long to compile. Any assembly that takes over 10 seconds to compile is problematic when you are trying to do iterative coding and debugging.
Here are two recommendations, if you haven't tried them. The first is to go to the Build tab and always remember to uncheck "Compile with .NET Native tool chain". That is pretty useless for normal debug/test iterations.
The next is to monitor your build operations with Procmon. That will surface any issues that may be specific to your workstation (like virus scanning, or other apps that may be interfering).
Here are the factors I noticed that would slow down UWP compilation compared to WPF:
Lots of compilation passes (see them in the build output window):
> MarkupCompilePass1:
> XamlPreCompile:
> MarkupCompilePass2:
> GenerateTargetFrameworkMonikerAttribute:
> CoreCompile:
> CopyGeneratedXaml:
> CopyFilesToOutputDirectory:
> ComputeProcessXamlFiles:
> CustomOutputGroupForPackaging:
> GetPackagingOutputs:
> _GenerateProjectPriConfigurationFiles:
> _GenerateProjectPriFileCore:
.NET Core/NuGet dependencies
Unlike with the .NET Framework, all your dependencies for UWP come from .NET Core and NuGet. You should be able to use Procmon and see tons of reads against the directory where VS keeps the packages: C:\Program Files (x86)\Microsoft SDKs\UWPNuGetPackages. Note that the dependencies themselves aren't of a totally different quality from the .NET Framework dependencies (in WPF). But there is a big difference in how those dependencies are gathered and how they are pushed around in your output directories during compile operations (eventually ending up in the final Appx directory).
No "shared output directory" for optimization.
With WPF we could send class-library targets individually to a shared output directory (by checking them for build, while unchecking other libraries and the application exe itself). Then we could launch the debugger without compiling a whole ton of stuff that wasn't necessarily changing. UWP, however, requires us to build the entry-level application regardless of whether we try to configure a shared output directory.
The new "Deploy" step is required in UWP
WPF didn't have the new "deploy" step that is required every time you build a UWP app. You will see the checkbox in the build configuration; it applies to the entry-level application. If you don't deploy, you won't see all your changes while debugging.
UWP is still actively changing (unlike WPF)
They keep changing the UWP compilation process. Soon we are going to see something called the "WinUI" library, which will introduce another NuGet dependency for UWP apps. This is probably not going to help matters where compilation performance is concerned.
I think once the pace of change starts to slow down, Microsoft may decide to focus on finding ways to improve compile performance. It seems pretty clear when reading the Procmon output that less than half of the work done by devenv.exe is actually being done for my code; the rest is pulling in all the dependencies so that my code can compile against them. There has to be a way to optimize that repetitive processing.
Ronan, the (twelve-second?) compile time for an empty project does seem a bit high... Mine generally runs for under five seconds (when empty).
You might want to watch what is going on with the CPU. The compilation of a single project should be entirely CPU-bound, especially on the second or third attempt, after the file system has cached your source code (and all the NuGet dependencies are downloaded). You may want to keep an eye on Task Manager and make sure that you are running at full speed on a single core (i.e. 25% if you have four cores, 12% for eight cores, etc.). If you see CPU drop too far, then something is going wrong. Also check that the CPU is only being used by the things you would expect, e.g. devenv.exe, VBCSCompiler.exe and MSBuild.exe.
For those who claim to be compiling UWP projects so much faster than everyone else, it might be interesting to hear their benchmarks for the Windows Community Toolkit. (https://github.com/windows-toolkit/WindowsCommunityToolkit/tree/rel/5.0.0) The most interesting compile times would be for the projects that are heaviest on XAML. Here are my times:
10 seconds for Microsoft.Toolkit.Uwp.UI.Controls
8 seconds Microsoft.Toolkit.Uwp.UI.Controls.Graph
8 seconds Microsoft.Toolkit.Uwp.UI.Controls.DataGrid
The basic Van Arsdel inventory app is another thing to benchmark. https://github.com/Microsoft/InventorySample Here is my result:
14 seconds for Inventory.App
The VanArsdel sample app would be another good one to benchmark. (Careful with this one: you may have to be on a Windows Insider build, and it may mess up your VS like it did mine.) The URL is here: https://github.com/microsoft/vanarsdel Here is my result:
13 seconds for VanArsdel project
Remember that projects which are heavily XAML-oriented are usually also the slowest, because they involve more processing work during the various compilation passes.
Using Diagnostic Output
While the total elapsed time to build a project is what matters, it may also help to understand where the time goes. You can get a summary if you go to Tools -> Options -> Projects and Solutions -> Build and Run and turn the project build output verbosity up to Diagnostic. Then compile the project in question. At the end you will see a Task Performance Summary that looks like the following. Note that CompileXaml should take the most time. These five items seem to be pretty common.
Task Performance Summary:
100 ms ValidateAppxManifest 1 calls
400 ms ExpandPriContent 1 calls
400 ms Csc 2 calls
600 ms ResolveAssemblyReference 1 calls
7000 ms CompileXaml 2 calls
If you see anything else (e.g. GenerateAppxManifest) taking up a few seconds every time you compile, that is probably a corruption problem within your project. You should be able to troubleshoot by searching the build output for the word "completely", as in "Building target _GenerateCurrentProjectAppxManifest completely". When you find it, it should tell you why the extra work is being done.
Min Version Targeting
I noticed that changing the min version on the targeting tab cut three seconds out of my "CompileXaml" time.
See below that changing the min version to 15063 helps cut down on compile time. I suspect this is related to bloat in the related dependencies.
15063 (creators update) [4 seconds]
16299 (fall creators) [7 seconds]
17134 (version 1803) [7.5 seconds]
It's not always an option to target the Creators Update, so this may not be a general-purpose fix. But it is interesting to know the impact of getting an updated Windows 10 SDK.