We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX 3 project with separate outputs for x86 and x64 builds. This has been an ongoing problem that I always thought was fixed, only to find it still lurking.
The product itself is a collection of binaries that communicate with each other via .NET Remoting, including a Windows service and a small COM component that is loaded as an add-on in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, and the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET Remoting interfaces.
I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between Microsoft's anal strong-name implementation (specifically, the exact-version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted) (edit: having trouble producing said documentation...), and having to work around WOW64 virtualization (an x86 MSI can only write to registry/disk locations via WOW64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system.
What I am looking for is tips, tricks, techniques, or suggestions on how to do things properly so that I am not fighting Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around a function declaration) and GPF'ing the whole app.
I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it.
Thanks!
Tom
Start off by reading The Definitive Guide to Windows Installer.
Done? Great. Now think about scrapping your existing install, starting from scratch, and filing an application bug for everything you currently have to "work around" in setup. Nearly every single bit of "twisted logic" that you've been fighting is there for a purpose: to make your installation more reliable and repairable.
If you don't want the reliability and resiliency of Windows Installer, or if you're trying to bypass it in any way, then use something else. Windows Installer does much, much more than simply "place files and registry keys where I tell it to".
When you're writing an MSI package, you define how you want the target system to look. That's the way it looks after setup, the way it should automatically repair itself to if a file is deleted or a key data file is corrupted, and the way the system should roll back to if a user later cancels during an upgrade from 1.0 to 2.0. Windows Installer is 100% data driven. From a development point of view this is the hardest concept to grasp: you can't just edit a config file or write some more data and expect it to persist. Once you get your head around this, Windows Installer becomes a real piece of cake, and you design your programs and features to work within its limitations rather than trying to break free of them :)
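To make "data driven" concrete, here is a minimal sketch of what declaring that end state looks like in WiX (the Id, GUID and paths are illustrative placeholders, not from any real setup). You declare the file and registry value you want present; install, repair and rollback behavior all fall out of that declaration:

<DirectoryRef Id="INSTALLFOLDER">
  <!-- One component = one atomic unit of install/repair/rollback -->
  <Component Id="MyAppRegistration" Guid="PUT-GUID-HERE">
    <File Source="MyApp.exe" KeyPath="yes" />
    <RegistryValue Root="HKLM" Key="Software\MyCompany\MyApp"
                   Name="InstallDir" Type="string" Value="[INSTALLFOLDER]" />
  </Component>
</DirectoryRef>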
A few other useful links...
Tao of the Windows Installer, Part 1
Understanding UAC in MSI
Assuming you're on at least WiX 3.0, you can make use of MakeSfxCA.exe to package dependencies in a single DLL. (This comes from DTF -- Deployment Tools Foundation.) Basically, start off by making sure your project is copying your dependent DLLs. Make a CustomAction.config file. Test with a simple .bat file like:
REM MyMakeSfxCA.bat - Run under $(TargetDir); abs. paths reqd.
"%WIX%\SDK\MakeSfxCA" ^
%cd%\Managed_custom_action_pkg.dll ^
"%WIX%\SDK\x86\sfxca.dll" ^
%cd%\Managed_custom_action.dll ^
%cd%\Dependent1.dll ^
%cd%\Dependent2.dll ^
%cd%\Microsoft.Web.Administration.dll ^
%cd%\Microsoft.Deployment.WindowsInstaller.dll ^
%cd%\CustomAction.config
Once that works, convert into a Post-Build Event:
"$(WIX)\SDK\MakeSfxCA" ^
$(TargetDir)\Managed_custom_action_pkg.dll ^
"$(WIX)\SDK\x86\sfxca.dll" ^
$(TargetDir)\Managed_custom_action.dll ^
$(TargetDir)\Dependent1.dll ^
$(TargetDir)\Dependent2.dll ^
$(TargetDir)\Microsoft.Web.Administration.dll ^
$(TargetDir)\Microsoft.Deployment.WindowsInstaller.dll ^
$(TargetDir)\CustomAction.config
In your .wxs file, your Binary element will look like:
<Binary Id="Managed_custom_action_CA_dll" SourceFile="$(var.Managed_custom_action.TargetDir)$(var.Managed_custom_action.TargetName)_pkg.dll" />
For the CustomAction.config file, you can find examples online; a typical one is shown below.
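This is the shape documented with DTF; adjust the supported runtimes to whatever your custom action targets:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- Tell the DTF shim (sfxca.dll) which CLR versions may host the action -->
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>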
This is the best way that I've found when working with managed code.
I'm kind of afraid to step into this one but the limitations are only there if you haven't followed best practices upstream.
Take the "delete files" "bug". I haven't seen this one in a while, but if I recall, this is typically caused by the following scenario:
build A:
file 1: 1.0.0.0
build B:
file 1: 1.0.0.0
Now you do a major upgrade where you have RemoveExistingProducts scheduled after CostFinalize. FindRelatedProducts detects the ProductCode to be removed, costing says "I see I have 1.0.0.0, but 1.0.0.0 is already installed, so nothing for me to do here", and then RemoveExistingProducts comes by and removes the old version, thereby deleting file 1.
I always worked around this one by scheduling RemoveExistingProducts way earlier than suggested by Microsoft. It's been working for me for years.
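For reference, a minimal sketch of that early scheduling in WiX 3 authoring (with WiX 3.5.1+ the MajorUpgrade element's Schedule attribute expresses the same thing):

<InstallExecuteSequence>
  <!-- Remove the old product early, before the new files are installed -->
  <RemoveExistingProducts After="InstallInitialize" />
</InstallExecuteSequence>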
As for dependency hell, I manage a software product line that consists of hundreds of features expressed by hundreds of merge modules and thousands of files (15,000+) consumed by dozens of installers on dozens of various feature, integration, main, maintenance and maintenance_integ branches.
How do I do it? One fragment at a time, and lots of automation, SCM and virtual machines to make sure that everything actually works the way it's intended.
Being this close to your product shipping, the only "answer" I can truly offer you is to let you know that people like myself are always available for hire. You'd be surprised how fast some of us can turn projects around and get software shipping.
I have a WiX installer running an upgrade. It completes successfully the vast majority of the time, but in some cases I'm getting:
"IO.FileNotFoundException: Could not load file or assembly.... or one of its dependencies. The system cannot find the file specified."
The actual file not being found is not consistent; it changes between errors.
Note: I have a WiX installation that wraps several MSIs together. The error I'm getting happens during the upgrade process itself, where I run some custom C# code (to configure the machine and the environment). This code is NOT run as a custom action; rather, it runs after all the inner MSIs complete (and they complete successfully).
Since the same installer completes successfully almost all the time, I'm inclined to think this is an environmental issue, but I cannot come up with a plausible theory to even start testing this.
This upgrade process can run on anything from Windows Server 2008 R2 to the latest Windows Server.
The installer makes sure all .NET Framework prerequisites are already installed before proceeding.
Any clue on a possible reason for this to happen will be greatly appreciated.
Quick brainstorm:
1) You could have downgraded files?
2) You have mismatched component GUIDs?
3) You have removed files without realizing? (possible if an older setup version had the file set permanent)
4) You have missing prerequisites?
5) Custom actions could have deleted files?
6) Virus scanners could have quarantined files?
There are edge and fringe cases. Right now I can only think of transitive conditions for components (msidbComponentAttributesTransitive => component conditions are re-evaluated on reinstall, potentially removing the file), but that should not affect major upgrades, I think.
Downgraded Binary: It could be an issue of a downgraded binary. Downgrading a binary in your upgrade setup (including a lower-version file to replace a higher version from the original setup) can result in files being missing after the installation. Try to run Repair from Add/Remove Programs and launch the application again. If this solves the problem and the application launches, you very likely have this problem on your hands. An ugly fix that I have recommended for this is to update the version number of the old binary to a higher version using File => Open As Resource in Visual Studio. There are other ways, but I use that approach for pragmatic reasons.
Logging: There are several other possibilities. The first thing you should do is to make a proper log file:
msiexec.exe /i C:\Path\Your.msi /L*v C:\Your.log
and then look for entries like these:
MSI (s) (0C:5C) [16:13:25:890]: Disallowing installation of component: {015A4DC1-56F4-562B-96B5-B3BE0D45FA5F} since the same component with higher versioned keyfile exists
MSI (s) (0C:5C) [16:13:25:890]: Disallowing installation of component: {4B6A1404-3892-5BEF-AB47-8FE3149211A4} since the same component with higher versioned keyfile exists
See this old answer from Rob Mensching himself. And here is more from Chris Painter.
Logging How-To: Here is an answer on MSI logging. I would just enable the global logging policy so all MSI operations create a log file in the TEMP folder; a sketch of enabling it follows.
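This is the documented machine-wide Installer logging policy ("voicewarmupx" turns on all logging flag types; logs appear as MSI*.LOG in %TEMP%). Run from an elevated prompt:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx /f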
Mismatched Component GUIDs: You should keep component GUIDs stable between setups unless you move a file to another location in the source media - in other words, unless it installs to a different absolute target path. If you sequence the RemoveExistingProducts action to run late, component-referencing errors can cause files to go missing after a major upgrade, because the setup enforces the component rules in this scenario (with early sequencing these rules can be bent). So if you don't keep the component GUIDs stable for files targeting the same absolute path AND you use late sequencing for RemoveExistingProducts, expect exactly this kind of missing-file problem.
When should you change component GUIDs? (recommended reading).
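As a sketch of what "stable" means in WiX 3 authoring (Ids and paths here are invented): either hard-code the GUID once and never change it for that target path, or let WiX derive a stable GUID from the install path with Guid="*":

<DirectoryRef Id="INSTALLFOLDER">
  <Component Id="MainExeComponent" Guid="*">
    <!-- Guid="*" is computed from the target path, so it stays stable
         as long as the file keeps installing to the same location -->
    <File Source="$(var.SourceDir)\MyApp.exe" KeyPath="yes" />
  </Component>
</DirectoryRef>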
The situation:
I'm working on a research project which, due to some constraints, has a C# user interface (used mostly for visualization) but does most of the processing with PInvoke and unmanaged C++ code. The unmanaged code has TONS of dependencies on various third-party libraries: Boost, PCL, OpenCV, CGAL, VTK, Eigen, FLANN, OpenMesh, etc. (if you can name it, we probably depend on it!). The C# project interacts with a C++ project (which I'll simply refer to as the "wrapper" from now on). Obviously, the wrapper is where all the third-party dependencies are consumed and where the entry points for PInvoke are defined. The wrapper is compiled into a DLL and copied into the output directory of the C# project via a post-build event.
I am the sole developer of the project. My primary development platform is Windows 10 with Visual Studio 2015, and Git is my version control. I mostly develop on 3 different machines, but sometimes I need to develop on other machines which only have Visual Studio 2015 installed.
What I've done so far:
Obviously, managing all those third-party dependencies is a hassle for one person, and I'd hate to have to install those libraries on new development machines. What I've done is compile all those third-party libraries from source into static .lib files (except the header-only ones, obviously). All sources are built once for the Debug configuration and once for the Release configuration. I spent some time and integrated them into my wrapper project (i.e. defining extra include directories, using lots of #pragma comment (lib, "blah.lib") directives which reference different builds depending on the build configuration, etc.; a sketch follows this paragraph). I also followed some of the advice in Microsoft's linker best practices to reduce link times. Specifically, I'm using the incremental linker, and I've disabled /LTCG and /OPT.
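(To illustrate the per-configuration referencing - the library names here are simplified placeholders, not the real ones:)

#ifdef _DEBUG
#pragma comment (lib, "Dependencies/opencv/lib/opencv_core_d.lib")  // Debug build of the lib
#else
#pragma comment (lib, "Dependencies/opencv/lib/opencv_core.lib")    // Release build
#endif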
Now I have this gigantic "Dependencies" folder in my VS solution which is around 8 GB and is version-controlled separately from the project (using a Git submodule). The wrapper project gets statically linked against all of these; as a result, and as mentioned above, only one DLL is produced after building the wrapper project. The upside of this approach is that on any new development machine, I clone the main repository, clone the Dependencies submodule and I'm ready to roll! But...
The worst part:
You've guessed it! Terrible link times. Even on a powerful computer, after I change a single line in the wrapper project, I would have to sit for a couple of minutes till the linker finishes. The thing I didn't see when I took the above approach was that I forgot how much I valued rapid prototyping: quick and dirty testing of some random feature in the wrapper project before exposing that feature to PInvoke. Ideally, I would like to be able to change something small in the wrapper project, quickly build, run and test that change and get on with exposing the feature to PInvoke.
My Question:
Clearly, I'm inexperienced in this! How should I have done things differently, specifically given the dependencies I mentioned above? Would building DLLs instead of static libraries have been better? But then I would've had to add the Dependencies to PATH every time the C# program started (as mentioned here). As a side question, how do you evaluate my current approach?
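(For clarity, the PATH approach I mean is roughly this sketch - the folder name is invented - which would have to run before the first PInvoke so Windows can resolve the native DLLs:)

using System;
using System.IO;

static class NativeDeps
{
    // Prepend the native dependency folder to the process-local PATH so the
    // Windows loader can find the DLLs; must run before the first PInvoke.
    public static void Init()
    {
        string dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "NativeDeps");
        Environment.SetEnvironmentVariable("PATH",
            dir + Path.PathSeparator + Environment.GetEnvironmentVariable("PATH"));
    }
}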
Based on the comment by @silverscania, I decided to just take the DLL route. It was a bit of a pain rebuilding all the dependencies, but I'm now super happy about the results.
Now, building the whole solution from scratch takes 36 seconds! It used to be about 4 minutes, so I have nothing to complain about. Also, modifying a single file in the wrapper project and building again takes 3 seconds, which is amazing! The fact that all the compiled dependencies are now about 1 GB (as opposed to ~8 GB with the static libraries) is a plus! I couldn't be happier.
A couple of notes:
On the main machine where I do most of my development, I have a SanDisk SSD. I noticed that, for some reason beyond my comprehension, building the project on that drive was way slower compared to a regular HDD. I'm looking into this issue, but haven't found a reason for it (TRIM is enabled and the drive is in AHCI mode).
I played around with the flags a bit more. I noticed that the compiler flag /GL (whole program optimization) caused a considerable slowdown during linking, so I disabled that option too.
I'm currently working on a big project for a company and I'm stuck. We use TFS 2012, and we have several branches (Dev => Main => pre-prod => prod).
When a project is in prod and a bug occurs, we ship a patch. That means we only deliver the DLLs that are impacted by the bug fix.
To do this, the developer in charge of the bug fix checks in his code and gives me the changeset number, so I can know which files are impacted by the check-in and deduce the DLLs that need to be delivered.
And here is my problem: how can I determine those DLL names from the changeset number? I'm currently parsing all the .csproj files and checking whether the files in the changeset log are present in the .csproj. If yes, I look up the AssemblyName (which gives me the DLL's name).
But this doesn't feel right, since I'm parsing the project file as a string; it's not reliable and it won't evolve well.
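A sturdier variant of that lookup can go through the MSBuild object model instead of raw string parsing; a minimal sketch with invented paths (Microsoft.Build.Evaluation ships with .NET 4-era Visual Studio):

using System;
using System.IO;
using System.Linq;
using Microsoft.Build.Evaluation;

class ChangesetToDll
{
    static void Main()
    {
        // A file path taken from the changeset log (invented for illustration)
        string changedFile = Path.GetFullPath(@"C:\src\App\Import\Reader.cs");

        foreach (string csproj in Directory.EnumerateFiles(
            @"C:\src\App", "*.csproj", SearchOption.AllDirectories))
        {
            var project = new Project(csproj);
            // Does any <Compile> item of this project point at the changed file?
            bool hit = project.GetItems("Compile").Any(item =>
                Path.GetFullPath(Path.Combine(project.DirectoryPath, item.EvaluatedInclude))
                    .Equals(changedFile, StringComparison.OrdinalIgnoreCase));
            if (hit)
                Console.WriteLine("{0}.dll", project.GetPropertyValue("AssemblyName"));
            ProjectCollection.GlobalProjectCollection.UnloadProject(project);
        }
    }
}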
If you have any better way (or even something already written :) ) go for it please ;)
Thanks !
You should in fact be deploying all of the DLLs, not just the diff. If you have many separate components in your application, then it may be advisable to deploy only one component, but you need concrete interfaces for that to work.
The best way to achieve what you are talking about is to have all of your DLLs created at the same time using a build server. There are many options, from TFS through CruiseControl and Hudson, but they all create a specific 'build' of your software. This will have all of the files necessary to deploy a new version of your software packaged (however you like), and it gives you assurance that everything works together: you push one build, without ever recompiling or changing the DLLs, through each of your stage gates (Dev, Test, QA, PreProd) and into production.
When you fix a bug, even if it only hits a single DLL, you should deploy all of your DLLs, as they are known to work together in that package.
This is not something that is or can be solved by branching or changesets. You need Builds...
I do have many components to my application.
The issue is that some DLLs are in several components. Indeed, they are not absolutely independent.
Moreover, there are 2 main bricks in the app (composed of many little bricks): the Launcher + AppFabric. The Launcher is the client-side app, and AppFabric is the server-side one. My goal is to deploy only the brick that has changed since the last build.
So my question would be: what is your advice for deploying this app regardless of where the DLLs are, given that the alternative would mean going back to the beginning and parsing all the .csproj files, etc.?
FYI: the app I'm deploying weighs ~175 MB (10 developers + 5 functional analysts); it's a really big project which has been in development for 6 years now.
I'm working with Visual Studio 2010 and WinForms, .NET 4.0 (C#). I'm building an application with a lot of DLLs (150). What I provide to my client is:
The Executable (.exe)
DLL files (.dll)
Each DLL is related to a module of the application, for example:
Ado.dll (provide access to database)
AccesManagement.dll (this module manages users in the application)
Import.dll (this module lets the user import data into the application)
etc.
When my client finds a bug in the application, I fix it and provide him with the impacted DLLs (so that he doesn't have to retest the whole application). It can be, for example, the Import DLL.
The thing is, after some deliveries, we can have compatibility problems between DLLs (for example, a method that no longer exists in a new DLL). To avoid this problem, I would like to find a tool capable of checking compatibility between the different DLLs.
I would like something like:
I specify the directory of the program to analyse (executable + DLLs)
I launch the analysis
The program tells me, for example: Error between Import.dll and Ado.dll: there is a class xxx in Import.dll expecting a method named xxx in class xxx of Ado.dll
I've found some tools able to compare two versions of a DLL and report added and removed members (Libcheck, ApiChange), but that is too complicated for my purposes because there are too many changes.
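For what it's worth, a crude check along those lines can be sketched with plain reflection: load every assembly in the deployment folder and force all its types to load, which surfaces missing types and members from mismatched builds (it will not catch everything, e.g. calls resolved only at JIT time):

using System;
using System.IO;
using System.Reflection;

class CompatCheck
{
    static void Main(string[] args)
    {
        // args[0]: directory containing the executable and its DLLs
        foreach (string dll in Directory.EnumerateFiles(args[0], "*.dll"))
        {
            try
            {
                Assembly.LoadFrom(dll).GetTypes(); // forces dependency resolution
                Console.WriteLine("OK    {0}", Path.GetFileName(dll));
            }
            catch (ReflectionTypeLoadException ex)
            {
                Console.WriteLine("FAIL  {0}", Path.GetFileName(dll));
                foreach (Exception loaderEx in ex.LoaderExceptions)
                    Console.WriteLine("      {0}", loaderEx.Message);
            }
        }
    }
}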
I think you may have a configuration management problem here -- at least as much as you've got a "compatibility" problem.
I'd recommend you find a way to track what versions of which assemblies each of your customers is using so that (1) you know what they're using when you decide what to ship, and (2) when they report bugs, you can replicate their setup (and thus, replicate their bug). If that sounds like a lot of work, it is. This is why a lot of software development shops take steps to ensure that there's a limit to the variation in setups among customers. It's nearly certain that you'll end up with some variation from customer-to-customer, but anything you can do to manage that problem will be beneficial.
Beyond the process implications, if you really need to create a "pluggable" environment, you probably need to create some interfaces for your objects to control the points where they connect, and you should probably look at Microsoft's Managed Extensibility Framework (MEF). MEF can help you manage the way objects "demand" behaviors from other objects.
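A minimal MEF sketch of that idea (contract and class names invented; System.ComponentModel.Composition ships with .NET 4, which matches the VS 2010 setup here):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IImportModule { void Run(); }

// A module exports an implementation of the shared contract...
[Export(typeof(IImportModule))]
public class CsvImport : IImportModule
{
    public void Run() { Console.WriteLine("importing..."); }
}

public class Host
{
    // ...and the host demands one without referencing the module directly.
    [Import]
    public IImportModule Importer { get; set; }

    static void Main()
    {
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()), // parts in this exe
            new DirectoryCatalog("."));                           // plus any module DLLs
        using (var container = new CompositionContainer(catalog))
        {
            var host = new Host();
            container.ComposeParts(host); // satisfies [Import]s from the catalogs
            host.Importer.Run();
        }
    }
}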
I finally found a solution to my problem.
Since I'm:
Using SourceSafe and adding labels with the version of the application I'm building
Tagging each of my DLLs with the version of the application
I built a program which is capable of:
Opening each DLL in a folder to read the version of the application in it
Getting from SourceSafe each project for the version specified in the DLL (with the "Get Label" functionality)
Then I just have to build the project. If there is any compilation error, there is a compatibility problem.
This solution can avoid big compatibility problems, but you can still have compatibility problems which can't be caught by compilation...
Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code.
Rebuilding the whole solution takes several minutes. That's fine; if I want to rebuild the solution, I expect it to take some time. What is not fine is the time needed to build the solution right after a full rebuild. Imagine I just did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s when there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time.
That 35s delay is a pain in the ass. Yes, I can improve the time by not using Build Solution but only Build Project (Shift+F6). If I run Build Project on the particular test project I'm currently working on, it takes "only" 8+s. But it requires me to run the build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually costs only the 8+s compilation. My current coding kata is: don't touch Ctrl+Shift+B.
The test project build takes 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" its dependencies - in my case, it "builds" more than 20 projects - even though I made changes only to the unit test or a single dependency! I don't want it to touch the other projects.
Is there a way to simply tell VS to build only projects where some changes were done, plus the projects which depend on the changed ones (preferably the latter as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way ...
I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute).
To make this even more frustrating, I'm working in a team where most developers worked on Java projects prior to joining this one. So you can imagine how pissed off they are when they must use VS, in contrast to the full incremental compilation in Java. I don't require incremental compilation of classes; I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
By default, Visual Studio will always build every project in your solution when you run a single project, even if that project doesn't depend on every other project in the solution.
Go to Tools | Options | Projects and Solutions | Build and Run and check the box "Only build startup projects and dependencies on Run".
From now on, when you run your project (F5 key), Visual Studio will only build your startup project and those projects in your solution that it depends on.
Is there a way to simply tell VS to build only projects where some changes were done and projects which are dependent on changed ones (preferably this part as another build option)? I worry you will tell me that it is exactly what VS is doing but in MS way ...
Not really (you understand it already).
You are talking about a "build system". MSVS is not that. It is an IDE, which happens to permit you to organize your assets into projects-and-solutions, and yes, to "build". But, it is not a build system. It will never be a build system (long story, but a very different technology is required).
In contrast, MSVS is an IDE for accelerated iterative development, including the "debugging" cycle (e.g., "step-into" and "step-over" in the debugger during a system run). That's where MSVS "shines".
It does not, and will never, "shine" as a build system. That's not what it was created to do. And, this will likely never change (long story, even Microsoft will likely agree).
I'm not trying to be cute, and I sincerely apologize for delivering this news. This answer hurts me too.
I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
MSVS is an IDE for interactive debugging/development, and not a build system (see above). So, you are measuring it in a product scenario for which it was not designed, and in which it will likely never function as you desire.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
Your expectations are reasonable. I want them too. However, MSVS is not a product that will ever deliver that.
Again, I'm not trying to be "cute". If you are willing to invest in a "build system", you may find value in using something like CMake to manage your configurations and export Makefiles (or something) to perform your "real" builds, but also to "export" *.vcproj and *.sln files for when you want to work iteratively and interactively within the MSVS IDE.
EDIT: Rather, what you want is an SSD (solid-state disk) for your build workspace to get a 10x improvement in speed, or a RAM disk for a 100x improvement for builds (not kidding: 64 GB of RAM on an LGA2011 socket gives you a 32 GB RAM disk, which is what we use).
One thing you can do is break your app into small solutions, each one being a cohesive part. Build each solution separately. Have each solution use the outputs of the solutions it depends on, rather than using the source code.
This will allow for shorter feedback cycles for each component; a sketch of consuming binary outputs follows.
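Concretely, the consuming solution takes a file reference to the other solution's built output instead of a ProjectReference; a .csproj fragment sketch with invented names and paths:

<ItemGroup>
  <!-- Binary reference to a component built by its own solution/CI build -->
  <Reference Include="MyCompany.ComponentA">
    <HintPath>..\..\artifacts\ComponentA\MyCompany.ComponentA.dll</HintPath>
    <Private>True</Private>
  </Reference>
</ItemGroup>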
EDIT: Modified Solution
Additionally, you will create an integrative build that, rather than getting all of the sources and compiling and testing them, gets the binary build products of the component CI builds. This integrative build should be triggered to run after every successful component build.
This build should be the binary equivalent of a complete build (which you should still run every night), but it will take considerably less time, because it triggers after a component increment and doesn't need to compile or fetch any sources.
Moreover, if you use an enterprise-grade build system that supports distributing your builds among multiple agents, you will be able to scale your efforts and shorten your complete CI cycle to the time it takes to build the longest component and run the integrative suite (at most).
Hope this helps.
Weighing in a bit late on this, but have you considered having different build configurations?
You can tell Visual Studio not to build certain projects depending on the build configuration.
The developer could simply select the configuration relevant for the project they're working on.
Pretty ancient thread, but I can say I was suffering from a smaller version of the same thing, and upgrading to Visual Studio 2012 seems to have finally fixed it. The RedGate .NET Demon solution mentioned above also seems to work pretty well so far.
This is an old problem.
Use parallel builds and an SSD. See here (I think - quick Google):
http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx
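From a command prompt, the parallel part is just MSBuild's /m (maxcpucount) switch; a sketch with an invented solution name (the IDE-side setting lives under Tools > Options > Projects and Solutions > Build and Run):

msbuild BigSolution.sln /m /t:Build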
I found a tool which does mostly what I want (and even more): RedGate .NET Demon. It is probably still a first version, because I encountered a few issues in our big solution (problems with C++ projects, problems with switching build targets, and a few others), but I really like it so far. I especially like the way it tries to track changed files in the VS IDE and rebuilds only the affected projects.
Edit: .NET Demon has been retired as it should not be needed for VS 2015. It still works with previous versions.