How to speed up MonoTouch compilation time? - c#

It is well known that
If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.
Our MonoTouch app takes 40 seconds to compile on Macbook Air in Debug/Simulator configuration.
We have about 10 assemblies in the solution.
We're also linking against some native libraries with gcc_flags.
I'm sure there are ways to optimize compilation time that I'm not aware of, which might have to do with references, linker, whatever.
I'm asking this question in hope that someone with better knowledge than me will compile (no pun intended) a list of tips and things to check to reduce MonoTouch compilation time for debug builds.
Please don't suggest hardware optimizations or optimizations not directly related to MonoTouch.

Build Time Improvements in Xamarin.iOS 6.4
Xamarin.iOS 6.4 has significant build time improvements, and there is now an option to only send updated bits of code to the device. See the build time comparison chart for yourself (source: xamarin.com).
Read more and learn how to enable incremental build in Rolf's post.
Evolve 2013 Video
An updated and expanded version of this content can be seen in the video of the Advanced iOS Build mechanics talk I gave at Evolve 2013.
Original Answer
There are several factors affecting build speed. However, most of them have more impact on device builds, including the use of the managed linker that you mentioned.
Managed Linker
For device builds, Link all is the fastest, followed by Link SDK and, last of all, Don't link. The reason is that the linker can eliminate code faster than the AOT compiler can build it, so it is a net gain. The smaller .app will also upload faster to your device.
For the simulator, Don't link is always faster because there is no AOT (the JIT is used). You should not use the other linking options unless you want to test them (doing so is still faster than a device build).
Device tricks
Building a single architecture (e.g. ARMv7) is faster than a fat binary (e.g. ARMv7 + ARMv7s). A smaller application also means less time to upload to the device;
The default AOT compiler (mono) is a lot faster than using the LLVM compiler. However, the latter will generate better code and also supports ARMv7s and Thumb-2;
If you have large assets bundled in your .app then it will take time to deploy/upload them with your app (every time, since they must be signed). I wrote a blog post on how you can work around this - it can save a lot of time if you have large assets;
Object file caching was implemented in MonoTouch 5.4. Some builds will be a lot faster, others (when the cache must be purged) won't be - but builds are never slower ;-). More information on why this often happens is available here;
Debug builds take longer because of symbols, running dsymutil and, since the .app ends up being larger, the extra time to upload it to the device.
Release builds will, by default (you can turn it off), IL-strip the assemblies. That takes only a bit of time - likely gained back when deploying the (smaller) .app to the device.
Simulator tricks
As said earlier, try to avoid linking, since it takes more time and requires copying the assemblies (instead of symlinking them);
Using native libraries is slower because we cannot reuse the shared simlauncher main executable in that case and need to ask gcc to compile one specifically for the application (and that's slow).
Finally, whenever in doubt, time it! By that I mean you can add --time --time to your project's extra mtouch arguments to see a timestamp after each operation :-)

This is not really meant as an answer, rather a temporary placeholder until there is a better one.
I found this quote by Seb:
Look at your project's build options and make sure the "Linker behavior" is at the default "Link SDK assemblies".
If it's showing "Don't link" then you'll experience very long build times (a large part of it in dsymutil).
I don't know if it is still relevant though, because MonoDevelop shows a warning sign when I choose this option, and it doesn't seem to affect performance much.

You cannot expect your compiler to be lightning quick without understanding everything that it is required to do. Larger applications will naturally take longer, and different languages, or different compilers for the same language, can make a huge difference in how long it takes to compile your code.
We have a project that takes almost 2 minutes to compile. Your best solution is to figure out a way to reduce the number of times you compile your code.
Instead of fixing one line of code and rebuilding, over and over again, get a group of people together to discuss the problem, or create a list of 3 or 4 things you want to work on, complete them all, and then test.
These are just some suggestions and they will not work in all cases.

Related

"On-the-run" Debugging/Monitoring

Is there a way/system to debug/monitor code without stopping execution?
In industrial automation control programming (PLC/PAC/DCS) it is possible to connect the debugger while the program is running, and see in the code editor the value of variables and expressions, without setting breakpoints or tracepoints.
As an example, let's take an F# multithreaded application where code is executed in a continuous loop or triggered by timers. Is there a way to attach a debugger like the Visual Studio debugger and see the values of variables and expressions (in the code editor or in a watch pane) WITHOUT interrupting the execution?
It doesn't matter if it's not synchronous; it's acceptable if the debugger/monitor does not capture every code scan.
I am tasked with creating a high-level controller for a process plant and I would like to use C# or F#, or even C++ with a managed or native application, instead of a PAC system. But being forced to interrupt execution in order to debug is a huge disadvantage in this kind of application.
UPDATE
First of all thanks to all for their answer.
Based on those answers, though, I realized that I probably need to reformulate my question as follows:
Is anyone aware of any library/framework/package/extension that allows working with a native or managed application on Windows or Linux (C#, F# or C++) in the exact same way as a PAC development platform? Specifically:
1) Put the dev platform in "status" mode, where it automatically shows the runtime values of variables and expressions present in the code excerpt currently visible, without interrupting execution?
2) Create watch windows that show the runtime values of variables and expressions, again without interrupting execution?
Also, what I am looking for is something that (like any PAC platform) offers these features OUT OF THE BOX, without requiring any change to the application code (like adding log instructions).
Thank you in advance
UPDATE 2
It looks like there is something (see http://vsdevaids.webs.com/); does anyone know whether it is still available somewhere?
UPDATE 3
For those interested, I managed to download the last available release of VSDEVAIDS. I installed it and it appears to work, but it's pointless without a licence, and I couldn't find any information on how to reach the author.
http://www.mediafire.com/file/vvdk2e0g6091r4h/VSDevAidsInstaller.msi
If somebody has better luck, please let me know.
This is a normal requirement - needing instrumentation/diagnostic data from a production system. It's not really a debugger; it's usually one of the first things you should establish in your system design.
Not knowing your system at all it's hard to say what you need, but generally these fall into 2 categories (a minimal sketch of both follows this list):
human-readable trace - something like log4net is what I would recommend
machine-readable counters, etc. - say, 'number of widget shavings in the last pass'. This one is harder to generalize; you could layer it onto log4net too, or invent your own pipe
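As a rough sketch of the two categories, assuming log4net has been added via NuGet and configured in App.config (the WidgetPress class and its shavings counter are made-up examples, not something from the question):

    using System.Threading;
    using log4net;

    public class WidgetPress
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(WidgetPress));

        // Machine-readable counter: "number of widget shavings in the last pass".
        private static long _shavingsLastPass;

        public void RunPass(int shavings)
        {
            // Human-readable trace an operator can tail in the log file.
            Log.InfoFormat("Pass completed, {0} shavings removed", shavings);

            // Counter a monitoring thread (or your own pipe/perf counter
            // exporter) can read at any time while the loop keeps running.
            Interlocked.Exchange(ref _shavingsLastPass, shavings);
        }

        public long ShavingsLastPass
        {
            get { return Interlocked.Read(ref _shavingsLastPass); }
        }
    }

The counter side could just as well be exposed as a Windows performance counter or pushed over a pipe; the point is that reading it never stops the producing thread.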
With regards to your edited question, I can almost guarantee you that what you are looking for does not exist. Consequence-free debugging/monitoring of even moderate usefulness for production code with no prior effort? I'd have heard of it. Consider that both C++ and C# are extremely cross-platform. There are a few caveats:
There are almost certainly C++ compilers built for very specific hardware that do what you require. This hardware is likely to have very limited capabilities, and the compilers are likely to otherwise be inferior to their larger counterparts, such as gcc, clang, MSVC, to name a few.
Compile-time instrumentation can do what you require, although it affects speed and memory usage, and even stability, in my experience.
There ARE also frameworks that do what you require, but not without affecting your code. For example, if you are using WPF as your UI, it's possible to monitor anything directly related to the UI of your application. But...that's hardly a better solution than log4net.
Lastly, there are tools that can monitor EVERY system call your application makes for both Windows (procmon.exe/"Process Monitor" from SysInternals) and Linux (strace). There's very little you can't find out using these. That said, the ease of use is hardly what you're looking for, and strictly internal variables are still not going to be visible. Still might be something to consider if you know you'll be making system calls with the variables you're interested in and can set up adequate filtering.
Also, you should reconsider your "No impact on the code" requirement. There are .NET frameworks that can allow you to monitor an entire class merely by making a single function call during construction, or by deriving from a class in the framework. Many modern UIs are predicated on the UIs being able to be notified of any change to the data they are monitoring. Extensive effort has gone into making this as powerful and easy as possible. But it does require you to at least consider it when writing your code.
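For what it's worth, here is a minimal sketch of that kind of change notification, assuming the answer is referring to patterns like WPF's INotifyPropertyChanged (the ObservableObject base class and the Temperature property are made-up names). Any subscriber - a UI binding, a logger, a home-made watch panel - sees new values while the producing code keeps running:

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public abstract class ObservableObject : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        // Call this from property setters; subscribers are notified of the
        // new value without the producing thread ever being suspended.
        protected void Set<T>(ref T field, T value, [CallerMemberName] string name = null)
        {
            field = value;
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(name));
            }
        }
    }

    public class ProcessValues : ObservableObject
    {
        private double _temperature;

        public double Temperature
        {
            get { return _temperature; }
            set { Set(ref _temperature, value); }
        }
    }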
Many years ago (think 8-bit 6502/6809 days) you could buy (or usually rent - I seem to remember a figure of £40K to purchase one in the late 80s) a processor simulator that would allow you to replace the processor in your design with a pin-compatible device that had a flying lead to the simulator box. This would allow things like capturing the instructions/data leading up to a processor interrupt, or some other way of stopping the processor (even a 'push button to stop code' was possible). You could even step backwards, allowing you to see why an instruction or branch happened.
In these days of multi-core, nanometre-scale technology, I doubt there is such a thing.
I have been searching for this kind of feature for quite a long time with no luck, unfortunately. Submitting the question to the StackOverflow community was sort of a "last resort", so now I'm ready to conclude that it doesn't exist.
VSDevAids (as @zzxyz pointed out) is not a solution, as it requires significant support from the application itself.
Pod CPU emulators (mentioned by @Neil), aka in-circuit emulators (ICE), and their evolutions are designed to thoroughly test the interaction between firmware and hardware, so they are not very useful for high-level programming (especially managed code like .NET).
Thanks for all contributions.

Measure startup performance c# application

I noticed that sometimes a .NET 4.0 C# application takes a long time to start, without any apparent reason. How can I determine what's actually happening and which modules are loaded? I'm using a number of external assemblies. Can putting them into the GAC improve performance?
Is .NET 4 slower than .NET 2?
.NET programs have two distinct start-up behaviors. They are called cold start and warm start. The cold start is the slow one; you'll get it when no .NET program was started before, or when the program you start is large and was never run before. The operating system has to find the assembly files on disk and they won't be available in the file system cache (RAM). That takes a while: hard disks are slow and there are a lot of files to find. A small do-nothing Winforms app has to load 51 DLLs to get started. A do-nothing WPF app weighs in at 77 DLLs.
You get a warm start when the assembly files were loaded before, not too long ago. The assembly file data now comes from RAM instead of the slow disk - that's zippedy-doodah. The only startup overhead is now the jitter.
There's little you can do about cold starts; the assemblies have to come off the disk one way or another. A fast disk makes a big difference, and SSDs are especially effective. Using ngen.exe to pre-jit an assembly actually makes the problem worse: it creates another file that needs to be found and loaded, which is the reason Microsoft recommends not pre-jitting small assemblies. Seeing this problem with .NET 4 programs is also to be expected: you don't yet have a lot of programs that bind to the version 4 CLR and framework assemblies, so they are rarely already in RAM. Not yet anyway; this solves itself over time.
There's another way this problem automatically disappears. The Windows SuperFetch feature will start to notice that you often load the CLR and the jitted framework assemblies and will start to pre-load them into RAM automatically. It's the same kind of trick that the Microsoft Office and Adobe Reader 'optimizers' use; they are also programs that have a lot of DLL dependencies - unmanaged ones, the problem isn't specific to .NET. These optimizers are crude: they preload the DLLs when you log in, which is the 'I'm really important, screw everything else' approach to working around the problem. Make sure you disable them so they don't crowd out the RAM space that SuperFetch could use.
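If you want to see the DLL counts described above for your own program, a quick check along these lines (a sketch; call it once the application is up) lists every module the process has loaded, managed and native:

    using System;
    using System.Diagnostics;

    static class LoadedModules
    {
        public static void Dump()
        {
            // Every module (managed assembly or native DLL) currently mapped
            // into the process; on a cold start each of these came off disk.
            var modules = Process.GetCurrentProcess().Modules;
            Console.WriteLine("{0} modules loaded:", modules.Count);
            foreach (ProcessModule module in modules)
            {
                Console.WriteLine("  {0}", module.FileName);
            }
        }
    }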
The startup time is most likely due to the runtime JIT compiling assembly IL into machine code for execution. It can also be affected by the debugger - as another answerer has suggested.
Excluding that, I'll talk about an application run 'in the wild' on a user's machine, with no debugger etc.
The JIT compiler in .Net 4 is, I think it's fair to say, better than in .Net 2 - so no; it's not slower.
You can improve this startup time significantly by running ngen on your application's assemblies - this pre-compiles the EXEs and DLLs into native images. However you lose some flexibility by doing this and, in general, there is not much point.
You should see the startup time of some MFC apps written in C++ - all native code, and yet depending on how they are linked they can take just as long.
It does, of course, also depend on what an application is actually doing at startup!
I don't think putting your assemblies in the GAC will boost performance.
If possible, add logging for each instruction you have written in the Loading or Initialize events. This may help you identify which statement is actually taking time, and from there which library is slow to load; a rough sketch of this kind of timing is shown below.
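A rough sketch of that kind of logging, where LoadConfiguration, InitializeDatabase and BuildMainWindow are hypothetical stand-ins for whatever your own startup code does; the timestamps show which step is actually eating the time:

    using System.Diagnostics;

    static class StartupTiming
    {
        public static void Run()
        {
            var sw = Stopwatch.StartNew();

            LoadConfiguration();
            Trace.WriteLine("Config loaded after " + sw.ElapsedMilliseconds + " ms");

            InitializeDatabase();
            Trace.WriteLine("Database ready after " + sw.ElapsedMilliseconds + " ms");

            BuildMainWindow();
            Trace.WriteLine("Main window built after " + sw.ElapsedMilliseconds + " ms");
        }

        // Placeholders for the application's real initialization steps.
        static void LoadConfiguration() { /* ... */ }
        static void InitializeDatabase() { /* ... */ }
        static void BuildMainWindow() { /* ... */ }
    }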

I want to reduce my VS.NET project's compile time - what are your ideas for how to do this?

My project is developed in Visual Studio 08, in C#. It's a standalone desktop application, about 60k lines of code.
Once upon a time I loved working on this software - now that the compilation time has grown to approximately 2 minutes, it has become a far less enjoyable experience...
I think that my lack of experience in C# may be a factor; I have developed everything under one namespace for example - would having a well structured codebase enable the compiler to recompile only the necessary parts of the code when changes are made? Or do I need to separate sections into separate projects/DLLs to force this to happen?
How much of a difference would upgrading to the latest quad-core processor make?
The other thought is, perhaps this is a typical thing for programmers to deal with - is a long compile time like this simply something that must be managed?
Thanks in advance.
Things that increase compile time:
The number of projects in a solution makes more difference than the number of files in a particular project.
Custom build tasks can make a huge difference, especially if they are generating code or running post-build analysis (FxCop, StyleCop, Code Contracts).
Native code projects take longer to build.
A single project containing 60K lines of C# code with no special build features enabled should compile in seconds on any machine made in the past 5+ years.
I'm surprised that 60k lines of code take 2 minutes to compile. I have an application that is 500,000 lines of code, and it only takes about a minute and a half. Make sure you are not doing full rebuilds each time, and make sure you are not cleaning the solution between builds. A normal build should perform an incremental build, only recompiling code that has changed since the last build (along with anything affected by that change.)
Perhaps some other factors might include heavy use of large resources (images?), broad-sweeping changes in the lowest level libraries (i.e. those used by everything else), etc. Generally speaking, on a relatively modern machine, compiling 60,000 lines of C# code should take less than a minute on average, unless you are rebuilding the entire solution.
There is this thread about hardware to improve compile time. Also this really excellent blog post from Scott Guthrie on looking at hard drive speed for performance.
Splitting your project up into multiple projects will help. Only those projects that have changes (and projects that depend on it) will need recompilation.
A single namespace however, shouldn't affect compile time. However, if you do split up your project into multiple projects/assemblies, then a single namespace is definitely not a good idea.
Upgrading to a faster CPU will probably help, but you might find that faster I/O (better disks, RAID, etc will be more useful).
And yes, avoiding long compile times is one of the things developers need to take care of. When it comes to productivity, do whatever you can (better tools, bigger screens, faster machines, etc...)

Have you ever used ngen.exe?

Has anybody here ever used ngen? Where? Why? Was there any performance improvement? When and where does it make sense to use it?
I don't use it day-to-day, but it is used by tools that want to boost performance; for example, Paint.NET uses NGEN during the installer (or maybe first use). It is possible (although I don't know for sure) that some of the MS tools do, too.
Basically, NGEN performs much of the JIT for an assembly up front, so that there is very little delay on a cold start. Of course, in most typical usage, not 100% of the code is ever reached, so in some ways this does a lot of unnecessary work - but it can't tell that ahead of time.
The downside, IMO, is that you need to use the GAC to use NGEN; I try to avoid the GAC as much as possible, so that I can use robocopy-deployment (to servers) and ClickOnce (to clients).
Yes, I've seen performance improvements. My measurements indicated that it did improve startup performance if I also put my assemblies into the GAC, since my assemblies are all strong named. If your assemblies are strong named, NGen won't make any difference without using the GAC. The reason is that if you have strong-named assemblies that are not in the GAC, the .NET runtime validates that your strong-named assembly hasn't been tampered with by loading the whole managed assembly from disk so it can verify it, which circumvents one of the major benefits of NGen.
This wasn't a very good option for my application since we rely on common assemblies from our company (that are also strong named). The common assemblies are used by many products that use many different versions, putting them in the GAC meant that if one of our applications didn't say "use specific version" of one of the common assemblies it would load the GAC version regardless of what version was in its executing directory. We decided that the benefits of NGen weren't worth the risks.
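If you want to check the combination described above at runtime - strong-named assemblies that actually load from the GAC - a small sketch like this dumps both facts for every loaded assembly:

    using System;

    static class NGenPreconditions
    {
        public static void Dump()
        {
            foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
            {
                var name = asm.GetName();
                var token = name.GetPublicKeyToken();
                bool strongNamed = token != null && token.Length > 0;

                // GlobalAssemblyCache tells you whether this copy was loaded
                // from the GAC, which is where NGen pays off for strong-named
                // assemblies according to the answer above.
                Console.WriteLine("{0}: strong-named={1}, loaded from GAC={2}",
                                  name.Name, strongNamed, asm.GlobalAssemblyCache);
            }
        }
    }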
NGen mainly reduces the start-up time and the working set of a .NET application, but it has some disadvantages (from Jeffrey Richter's CLR via C#):
No Intellectual Property Protection
NGen'd files can get out of sync
Inferior Load-Time Performance (Rebasing/Binding)
Inferior Execution-Time Performance
Due to all of the issues just listed, you should be very cautious when considering the use of NGen.exe. For server-side applications, NGen.exe makes little or no sense because only the first client request experiences a performance hit; future client requests run at high speed. In addition, for most server applications, only one instance of the code is required, so there is no working set benefit.
For client applications, NGen.exe might make sense to improve startup time or to reduce working set if an assembly is used by multiple applications simultaneously. Even in a case in which an assembly is not used by multiple applications, NGen'ing an assembly could improve working set. Moreover, if NGen.exe is used for all of a client application's assemblies, the CLR will not need to load the JIT compiler at all, reducing working set even further. Of course, if just one assembly isn't NGen'd or if an assembly's NGen'd file can't be used, the JIT compiler will load, and the application's working set increases.
ngen is mostly known for improving startup time (by eliminating JIT compilation). It might improve (by reducing JIT time) or decrease overall performance of the application (since some JIT optimizations won't be available).
.NET Framework itself uses ngen for many assemblies upon installation.
I have used it, but just for research purposes. Use it ONLY if you are sure about the CPU architecture of your deployment environment (i.e. it won't change).
But let me tell you, JIT compilation is not too bad, and if you have deployments across multiple CPU environments (for example a Windows client application which is updated often), then DO NOT USE NGEN. That's because a valid NGen cache depends upon many attributes; if one of these fails, your assembly falls back to the JIT again.
The JIT is a clear winner in such cases, as it optimizes code on the fly based on the CPU architecture it's running on (for example, it can detect whether there is more than one CPU).
And the CLR is getting better with every release, so in short: stick with the JIT unless you are dead sure of your deployment environment. Even then, your performance gains would hardly justify using ngen.exe (the gains would probably be a few hundred milliseconds); IMHO it's not worth the effort.
Also check this really nice link on the topic - JIT Compilation and Performance - To NGen or Not to NGen?
Yes. I used it on a WPF application to speed up startup time; it went from 9 seconds to 5 seconds. Read about it in my blog:
I recently discovered how great NGEN can be for performance. The application I currently work on has a data access layer (DAL) that is generated. The database schema is quite large, and we also generate some of the data (lists of values) directly into the DAL. Result: many classes with many fields, and many methods. JIT overhead often showed up when profiling the application, but after a search on JIT compiling and NGEN I thought it wasn't worth it. Install-time overhead, with management my major concern, made me ignore the signs and focus on adding more functionality to the application instead. When we changed architecture to "Any CPU" running on 64-bit machines things got worse: we experienced hangs in our application of up to 10 seconds on a single statement, with the profiler showing only JIT overhead in the problem area. NGEN solved the problem: the statement went from 10 seconds to 1 millisecond. This statement was not part of the startup procedure, so I was eager to find out what NGEN'ing the whole application could do to the startup time. It went from 8 seconds to 3.5 seconds.
Conclusion: I really recommend giving NGEN a try on your application!
As an addition to Mehrdad Afshari's comment about JIT compilation: if you serialize a class with many properties via the XmlSerializer on a 64-bit system, an SGEN + NGEN combo has a potentially huge effect (in our case gigabytes and minutes).
More info here:
XmlSerializer startup HUGE performance loss on 64bit systems - see Nick Martyshchenko's answer especially. A small illustration of where that cost comes from follows below.
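As a sketch of the effect being discussed (the Order type is just a stand-in for a class with many properties): the first XmlSerializer constructed for a type generates and compiles a serialization assembly at runtime unless an sgen-generated *.XmlSerializers.dll is deployed next to the application, and that generated code is what the SGEN + NGEN combination keeps out of the startup path.

    using System;
    using System.Diagnostics;
    using System.Xml.Serialization;

    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        // ...imagine dozens more properties, as in the linked question.
    }

    static class SerializerStartupCost
    {
        public static void Measure()
        {
            var sw = Stopwatch.StartNew();
            var first = new XmlSerializer(typeof(Order));   // generates + compiles code on first use
            Console.WriteLine("First XmlSerializer took {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            var second = new XmlSerializer(typeof(Order));  // served from the internal cache
            Console.WriteLine("Second XmlSerializer took {0} ms", sw.ElapsedMilliseconds);
        }
    }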
Yes, I tried it with a small single CPU-intensive exe and with ngen it was slightly slower!
I installed and uninstalled the ngen image multiple times and ran a benchmark.
I always got the following times, reproducible to +/- 0.1 s:
33.9s without,
35.3s with

Speeding up compilation in UWP?

I'm working on UWP after working for a long time in WPF and compilation is abysmally slow.
I understand why release compilation is slow (.NET Native compilation), but that's disabled in debug, yet it still takes a lot of time between pressing F5 and the application being displayed on screen, even for a blank application.
Is there any way to speed this up (even at the cost of runtime performance)? It's really hard to work with when you're used to C# giving you extremely fast compile times and testing after every small changes.
For example, just a right-click and rebuild on a very simple (4 pages, each < 200 lines of XAML, and pretty much 0 C#) UWP project with no project references takes almost exactly 20 seconds in debug (without .NET Native). Meanwhile a much larger WPF application (dozens of windows, thousands of lines of code) takes a few seconds, and most of that time is spent copying child projects!
As suggested you can find a minimal example for download here :
https://www.dropbox.com/s/0h89qsz66erba3x/WPFUWPCompileSpeed.zip?dl=0
It's just a solution with a blank WPF and a blank UWP app. Compilation time for WPF is just over 1 second; for UWP it is 12 seconds. In each case the solution was cleaned and the single project I was testing was right-clicked and rebuilt. This is in debug (without .NET Native compilation).
I definitely agree that UWP compilation is significantly slower than WPF. I've always had to break apart my WPF assemblies whenever I reached about 5-6 dozen xaml windows or views, in order to keep the incremental compilation times down at 10-20 seconds. The way things are looking, my UWP assemblies will probably grow to only about 2 or 3 dozen items before they take way too long to compile. Any assembly that takes over 10 seconds to compile is problematic when you are trying to do iterative coding & debugging.
Here are two recommendations, if you haven't tried them. The first thing to do is go to the build tab and always remember to uncheck "compile with .NET native tool chain". That is pretty useless for normal debug/test iterations.
The next thing to do is to monitor your build operations with procmon. That will surface any issues that may be specific to your workstation (like virus scanning, other apps that may be interfering).
Here are the factors I noticed that would slow down UWP compilation compared to WPF:
Lots of compilation passes (see them in the build output window):
> MarkupCompilePass1:
> XamlPreCompile:
> MarkupCompilePass2:
> GenerateTargetFrameworkMonikerAttribute:
> CoreCompile:
> CopyGeneratedXaml:
> CopyFilesToOutputDirectory:
> ComputeProcessXamlFiles:
> CustomOutputGroupForPackaging:
> GetPackagingOutputs:
> _GenerateProjectPriConfigurationFiles:
> _GenerateProjectPriFileCore:
Core/Nuget dependencies
Unlike with the .Net Framework, all your dependencies for UWP are coming from .Net core and nuget. You should be able to use procmon and see tons of reads against the directory where VS keeps the stuff: C:\Program Files (x86)\Microsoft SDKs\UWPNuGetPackages. Note that the dependencies themselves aren't of a totally different quality than the .Net framework dependencies (in WPF). But there is a big difference in how those dependencies are gathered and how they are pushed around in your output directories during compile operations (eventually ending up in the final Appx directory).
No "shared output directory" for optimization.
With WPF we could send class library targets individually to a shared output directory (by checking them for build, while unchecking other libraries and the application exe itself). Then we could launch the debugger without compiling a whole ton of stuff that wasn't necessarily changing. However, UWP requires us to build the entry-level application, regardless of whether we try to configure a shared output directory.
The new "Deploy" step is required in UWP
WPF didn't have the new "deploy" step that is required every time you build a UWP app. You will see the checkbox in the build configuration, and it applies to the entry-level application. If you don't deploy, you won't see all your changes while debugging.
UWP is still actively changing (unlike WPF)
They keep changing the UWP compilation operation. Soon we are going to start to see something called the "WinUI" library that will introduce another dependency from NuGet for UWP apps. This is probably not going to help matters where compilation performance is concerned.
I think once the pace of change starts to slow down, Microsoft may decide to focus on finding ways to improve the performance of compiles. It seems pretty clear when reading the procmon output that less than half of the work done by devenv.exe is actually being done for me; the rest of it is pulling in all the dependencies so that my code can compile against them. There has to be a way to optimize that repetitive processing.
Ronan, the (twelve-second?) compile time for an empty project does seem a bit high... mine generally runs for under five seconds (when empty).
You might want to watch what is going on with the CPU. The compilation of a single project should be entirely CPU-bound, especially on the second or third attempt, after the file system has cached your source code (and all the NuGet dependencies are downloaded). You may want to keep an eye on Task Manager and make sure that you are running at full speed on a single core (i.e. 25% if you have four cores, 12% for eight cores, etc.). If you see CPU drop too far then something is going wrong. Also make sure that CPU is only being used by the usual things you would expect, e.g. devenv.exe, VBCSCompiler.exe and MSBuild.exe.
For those who claim to be compiling UWP projects so much faster than everyone else, it might be interesting to hear their benchmarks for the Windows Community Toolkit (https://github.com/windows-toolkit/WindowsCommunityToolkit/tree/rel/5.0.0). The most interesting compile times would be for the projects that are heaviest on XAML. Here are my times:
10 seconds for Microsoft.Toolkit.Uwp.UI.Controls
8 seconds Microsoft.Toolkit.Uwp.UI.Controls.Graph
8 seconds Microsoft.Toolkit.Uwp.UI.Controls.DataGrid
The basic Van Arsdel inventory app is another thing to benchmark. https://github.com/Microsoft/InventorySample Here is my result:
14 seconds for Inventory.App
The VanArsdel sample app would be another good one to benchmark. (Careful with this; you may have to be on a Windows insider build and it may mess up your VS like it did mine). The url is here: https://github.com/microsoft/vanarsdel Here is my result:
13 seconds for VanArsdel project
Remember that projects which are heavily xaml-oriented are also usually the slowest because they involve more processing work during the various compilation passes.
Using Diagnostic Output
While the total elapsed time to build a project is critical, it may also be helpful to understand where the time is coming from. You can get a summary of that if you go to Tools->Options->Build-and-Run and then turn up the project build output verbosity to Diagnostic. Then compile the project in question. At the end you will see a Task Performance Summary that looks like the following. Note that CompileXaml should take the most time. These five items seem to be pretty common.
Task Performance Summary:
100 ms ValidateAppxManifest 1 calls
400 ms ExpandPriContent 1 calls
400 ms Csc 2 calls
600 ms ResolveAssemblyReference 1 calls
7000 ms CompileXaml 2 calls
If you see anything else (eg GenerateAppxManifest) taking up a few seconds every time you compile then that is probably a corruption problem within your project. You should be able to troubleshoot by searching for the word "completely", as in "Building target _GenerateCurrentProjectAppxManifest completely". When you find that, it should tell you why it is doing the extra work.
Min Version Targeting
I noticed that changing the min version on the targeting tab cuts three seconds out of my "CompileXaml" time.
See below that changing min version to 15063 helps cut down on compile time. I suspect this is related to bloat in the related dependencies.
15063 (creators update) [4 seconds]
16299 (fall creators) [7 seconds]
17134 (version 1803) [ 7.5 seconds]
It's not always an option to target the Creators Update, so this may not be a general-purpose fix. But it is interesting to know about the impact of getting an updated Windows 10 SDK.
