The situation:
I'm working on a research project which, due to some constraints, has a C# user interface (used mostly for visualization) but does most of the processing with PInvoke and unmanaged C++ code. The unmanaged code has TONS of dependencies on various 3rdparty libraries: Boost, PCL, OpenCV, CGAL, VTK, Eigen, FLANN, OpenMesh, etc. (if you can name it, we probably depend on it!). The C# project interacts with a C++ project (which I simply refer to as "wrapper" from now on). Obviously, the wrapper is where all the 3rdparty dependencies are consumed and is where entry points for PInvokes are defined. The wrapper is compiled into a DLL and copied into the output directory of the C# project via a post-build event.
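For readers unfamiliar with this setup, the C#-to-wrapper interaction looks roughly like the sketch below. This is a minimal illustration only, not the actual project code; "Wrapper.dll" and ProcessPointCloud are hypothetical names.

// C# side: declare a P/Invoke entry point exported by the native wrapper DLL.
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Assumed native export on the C++ side:
    //   extern "C" __declspec(dllexport) int ProcessPointCloud(const float* points, int count);
    [DllImport("Wrapper.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int ProcessPointCloud(float[] points, int count);
}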
I am the sole developer of the project. My primary development platform is Windows 10 with Visual Studio 2015, and Git is my version control. I mostly develop on 3 different machines, but sometimes I need to develop on other machines which only have Visual Studio 2015 installed.
What I've done so far:
Obviously, managing all those 3rdparty dependencies is a hassle for one person, and I'd hate to have to install those libraries on every new development machine. What I've done is compile all those 3rdparty libraries from source into static lib files (except the header-only ones, obviously). All sources are built once for the Debug configuration and once for the Release configuration. I spent some time integrating them into my wrapper project (i.e. defining extra include directories, using lots of #pragma comment(lib, "blah.lib") directives which reference different builds depending on the build configuration, etc.). I also followed some of the advice in Microsoft's linker best practices to reduce link times. Specifically, I'm using incremental linking and have disabled /LTCG and /OPT.
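For the curious, the configuration-dependent #pragma trick looks roughly like this (a sketch with a made-up library name; _DEBUG is the standard MSVC macro that is defined only in Debug builds):

// In the wrapper project: link against the matching build of each 3rdparty lib.
#ifdef _DEBUG
#pragma comment(lib, "pcl_common-debug.lib")   // Debug build of the library
#else
#pragma comment(lib, "pcl_common-release.lib") // Release build of the library
#endif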
Now I have this gigantic "Dependencies" folder in my VS solution which is around 8GBs, and is version-controlled separately from the project (using a Git submodule). The wrapper project gets statically linked to all these, as a result and as mentioned above, only one DLL is produced after building the wrapper project. The upside of this approach is that on any new development machine, I clone the main repository, clone the Dependencies submodule and I'm ready to roll! But...
The worst part:
You've guessed it! Terrible link times. Even on a powerful computer, after I change a single line in the wrapper project, I would have to sit for a couple of minutes till the linker finishes. The thing I didn't see when I took the above approach was that I forgot how much I valued rapid prototyping: quick and dirty testing of some random feature in the wrapper project before exposing that feature to PInvoke. Ideally, I would like to be able to change something small in the wrapper project, quickly build, run and test that change and get on with exposing the feature to PInvoke.
My Question:
Clearly, I'm inexperienced in this! How should I have done things differently, specifically given the dependencies I mentioned above? Would building DLLs instead of static libraries have been better? But then I would've had to add the Dependencies to PATH every time the C# program started (as mentioned here). As a side question, how do you evaluate my current approach?
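(As an aside on the PATH concern: one alternative is to extend the process-local DLL search path at startup instead of touching the machine-wide PATH. A minimal C# sketch, assuming the native DLLs sit in a "Dependencies" subfolder next to the executable - the folder name here is just this project's convention:)

using System;
using System.IO;
using System.Runtime.InteropServices;

static class DllSearchPath
{
    // SetDllDirectory adds a directory to this process's native DLL search path.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool SetDllDirectory(string lpPathName);

    public static void Init()
    {
        // Call this before the first P/Invoke triggers a load of the wrapper DLL.
        string deps = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Dependencies");
        SetDllDirectory(deps);
    }
}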
Based on the comment by @silverscania, I decided to just take the DLL route. It was a bit of a pain rebuilding all the dependencies, but I'm now super happy with the results.
Now, building the whole solution from scratch takes 36 seconds! It used to be about 4 minutes before, so I have nothing to complain about. Also, modifying a single file in the wrapper project and building again takes 3 seconds, which is amazing! The fact that all the compiled dependencies are now about 1 GB (as opposed to ~8 GB with the static libraries) is a plus! I couldn't be happier.
A couple of notes:
On the main machine where I do most of my development, I have a SanDisk SSD. I noticed that, for some reason beyond my comprehension, building the project on that drive was way slower compared to a regular HDD. I'm looking into this issue, but haven't found a reason for it (TRIM is enabled and the drive is in AHCI mode).
I played around with the flags a bit more. I noticed that the compiler flag /GL (Whole program optimization) caused considerable slowdown during linking. I disabled that option too.
Related
I've got many .dll files for my project.
It is quite troublesome to move a lot of .dlls around for a project.
Is there any simple method to group many .dll files into one?
I heard of something called a DLL wrapper, but I cannot find any concrete method related to it.
Can anyone give me a hand please.
Thank you very much.
By the way, all my .dll files and project are written in C#.
You can use ILMerge utility
ILMerge is a utility for merging multiple .NET assemblies into a single .NET assembly.
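A typical invocation looks like this (a sketch only; the assembly names are placeholders, and /out is the one flag you always need):

ilmerge /out:MyApp.Merged.exe MyApp.exe ModuleA.dll ModuleB.dll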
It is quite troublesome to move a lot of .dlls around for a project.
Really? Define many. I have projects consolidating 50+ dlls and you know what - it is trivial to move them. Scripts, installers all do that automatically. Including configuring a dozen build server agents with the necessary copies etc.
Really, the only time I have to copy them around is when I deploy manually to another machine for hotfixing or manual testing. I do that quite a lot at the moment (develop locally, copy/paste the folder content to another machine to run tests - faster and closer to the database). Trivial. If it gets to be more work, I put in a little script. Trivial again.
Being a programmer is not just about knowing how to write some small classes; it also involves optimizing your environment a little. In times of CI (continuous integration) and pretty much mandatory installers, knowing more than just your programming language is a must. And then this is trivial.
You could unite your DLLs into a single multi-module assembly, or just create one giant C# project that includes all the DLL source files and compiles everything into a single DLL.
However, what's the problem with moving several DLLs around?
I'm working with Visual Studio 2010 and WinForms, .Net 4.0 (C#). I'm building an application with a lot of DLLs (150). When I provide the application to my client, it's:
The Executable (.exe)
Dll files (.dll)
Each Dll is related to a module of the application, for example :
Ado.dll (provides access to the database)
AccesManagement.dll (this module allows managing users in the application)
Import.dll (this module allows the user to import data into the application)
etc.
When my client finds a bug in the application, I correct it and provide him with the impacted DLLs (to avoid him having to retest the whole application). It can be, for example, the Import DLL.
The thing is, after some deliveries, we can have compatibility problems between DLLs (a method that doesn't exist anymore in a new DLL, for example). To avoid this problem, I would like to find a tool capable of checking compatibility between different DLLs.
I would like something like:
I specify the directory of the program to analyse (executable + DLLs)
I launch the analysis
The program tells me, for example: Error between Import.dll and Ado.dll, there is a class xxx in Import.dll expecting a method named xxx in the class xxx of Ado.dll
I've found some tools able to compare two versions of a DLL and report added and removed members (Libcheck, ApiChange), but it's too complicated for me to do that because there are too many changes.
I think you may have a configuration management problem here -- at least as much as you've got a "compatibility" problem.
I'd recommend you find a way to track what versions of which assemblies each of your customers is using so that (1) you know what they're using when you decide what to ship, and (2) when they report bugs, you can replicate their setup (and thus, replicate their bug). If that sounds like a lot of work, it is. This is why a lot of software development shops take steps to ensure that there's a limit to the variation in setups among customers. It's nearly certain that you'll end up with some variation from customer-to-customer, but anything you can do to manage that problem will be beneficial.
Beyond the process implications, if you really need to create a "pluggable" environment, you probably need to create some interfaces for your objects to control the points where they connect, and you should probably look at Microsoft's Managed Extensibility Framework (MEF). MEF can help you manage the way objects "demand" behaviors from other objects.
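To make the MEF suggestion concrete, here is a minimal hedged sketch (IImporter, CsvImporter and Host are invented names for illustration; the attributes and container types are MEF's own, from System.ComponentModel.Composition):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IImporter { void Run(); }

// This class could live in Import.dll: it advertises itself as an IImporter.
[Export(typeof(IImporter))]
public class CsvImporter : IImporter
{
    public void Run() { Console.WriteLine("importing..."); }
}

public class Host
{
    // MEF fills this property with whatever IImporter the catalog discovers.
    [Import]
    public IImporter Importer { get; set; }

    public void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // injects the [Import] property
        Importer.Run();
    }
}

The point is that Host only demands the IImporter contract; which concrete DLL supplies it is resolved at composition time, which is exactly the seam where version mismatches become manageable.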
I finally found a solution to my problem.
Since I'm:
Using SourceSafe and adding labels with the version of the application I'm building
Tagging each of my DLL with the version of the application
I built a program which is capable of:
Opening each DLL in a folder to read the version of the application in it
Getting from SourceSafe each project for the version specified in the DLL (with the "Get Label" functionality)
Then I just have to build the project. If there is any compilation error, there is a compatibility problem.
This solution can avoid big compatibility problems, but you can still have compatibility problems which can't be seen with a compilation...
Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code.
Rebuilding the whole solution takes several minutes. That's fine. If I want to rebuild the solution, I expect that it will really take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s when there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time.
That 35s delay is a pain in the ass. Yes, I can improve the time by not using build solution but only build project (Shift+F6). If I run build project on the particular test project I'm currently working on, it takes "only" 8+s. It requires me to run the build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually costs only the 8+s compilation. My current coding kata is: don't touch Ctrl+Shift+B.
The test project build takes 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" its dependencies = in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch the other projects.
Is there a way to simply tell VS to build only projects where some changes were made, plus the projects which depend on the changed ones (preferably this part as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way ...
I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute).
To make this even more frustrating, I'm working in a team where most of the developers worked on Java projects prior to joining this one. So you can imagine how pissed off they are when they must use VS, in contrast to the fully incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
By default, Visual Studio will always build every project in your solution when you run a single project, even if that project doesn't depend on every other project in your solution.
Go to Tools | Options | Projects and Solutions | Build and Run and check the box "Only build startup projects and dependencies on Run".
From now on, when you run your project (F5 key), Visual Studio will only build your startup project and those projects in your solution that it depends on.
Is there a way to simply tell VS to build only projects where some changes were done and projects which are dependent on changed ones (preferably this part as another build option)? I worry you will tell me that it is exactly what VS is doing but in MS way ...
Not really (you understand it already).
You are talking about a "build system". MSVS is not that. It is an IDE, which happens to permit you to organize your assets into projects-and-solutions, and yes, to "build". But, it is not a build system. It will never be a build system (long story, but a very different technology is required).
In contrast, MSVS is an IDE for accelerated iterative development, including the "debugging" cycle (e.g., "step-into" and "step-over" in the debugger during a system run). That's where MSVS "shines".
It does not, and will never, "shine" as a build system. That's not what it was created to do. And, this will likely never change (long story, even Microsoft will likely agree).
I'm not trying to be cute, and I sincerely apologize for delivering this news. This answer hurts me too.
I expect working incremental compilation of solutions. Especially in product like VS 2010 Ultimate which costs several thousands dollars.
MSVS is an IDE for interactive debugging/development, and not a build system (see above). So, you are measuring it in a product scenario for which it was not designed, and in which it will likely never function as you desire.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
Your expectations are reasonable. I want them too. However, MSVS is not a product that will ever deliver that.
Again, I'm not trying to be "cute". If you are willing to invest in a "build system", you may find value in using something like CMake to manage your configurations and export Makefiles (or something) to perform your "real" builds, but to also "export" *.vcproj and *.sln files for when you want to do work iteratively and interactively within the MSVS IDE.
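For example (a command-line sketch; "Visual Studio 10" was the CMake 2.8-era generator name matching VS 2010, so check your CMake version's generator list):

REM fast "real" builds from the command line:
cmake -G "NMake Makefiles" path\to\source
REM generate .sln/.vcxproj for interactive work in the IDE:
cmake -G "Visual Studio 10" path\to\source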
EDIT: Rather, what you want is an SSD (solid-state disk) for your build workspace to get a 10x improvement in speed, or a RAM disk for a 100x improvement in speed for builds (not kidding; 64GB of RAM on an LGA2011 socket gives you a 32GB RAM disk, which is what we use).
One thing you can do is break your app into small solutions, each one being a cohesive part. Build each solution separately. Have each solution use the outputs of the solutions it depends on, rather than using the source code.
This will allow for shorter feedback cycles for each component.
EDIT: Modified Solution
Additionally, you will create an integrative build that, rather than getting all of the sources, compiling and testing, gets the binary build products of the component CI builds. This integrative build should be triggered to run after every successful component build.
This build should be the binary equivalent of a complete build (which you should still run every night), but will take considerably less time to run, because it triggers after a component increment and doesn't need to compile or get any sources.
Moreover, if you use an enterprise grade build system that supports the concept of distributing your builds among multiple agents, you will be able to scale your efforts and shorten your complete CI cycle to the amount of time it takes to build the longest component, and test the integrative suite (at most).
Hope this helps.
Weighing in a bit late on this, but have you considered having different build configurations?
You can tell visual studio not to build certain projects depending on the build configuration.
The developer could simply select the configuration relevant for the project they're working on.
Pretty ancient thread, but I can say I was suffering from a smaller version of the same thing, and I upgraded to Visual Studio 2012 and the problems seem to have finally been fixed. The RedGate .NET Demon solution mentioned above also seems to work pretty well so far.
This is an old problem.
Use parallel builds and an SSD. See here (I think - quick google):
http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx
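For the plain command line, parallel project builds are just MSBuild's standard /m (multi-processor) switch, e.g.:

msbuild MySolution.sln /m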
I found a tool which does mostly what I want (and even more): RedGate .NET Demon. It is probably still a first version, because I encountered a few issues in our big solution (problems with C++ projects, problems with switching build targets, and a few others), but I really like it so far. I especially like the way it tries to track changed files in the VS IDE and rebuilds only the affected projects.
Edit: .NET Demon has been retired as it should not be needed for VS 2015. It still works with previous versions.
We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These have been an ongoing problem that I always thought were fixed, only to find out that they were still lurking.
The product itself is a collection of binaries that communicate with each other via .Net remoting, including a Windows Service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .Net remoting interfaces.
I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted) (edit: having trouble producing said documentation...), and having to work around Wow64 virtualization (x86 MSI can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system.
What I am looking for is tips, tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app.
I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure if a file needs to go to GAC, that it actually goes to the GAC and stays there until it is appropriate to remove it.
Thanks!
Tom
Start off by reading The Definitive Guide to Windows Installer.
Done? Great. Now think about scrapping your existing install, starting from scratch, and filing an application bug for everything you currently have to "work around" in setup. Nearly every single bit of "twisted logic" that you've been fighting with is there for a purpose: to make your installation more reliable and repairable.
If you don't want the reliability and resiliency of Windows Installer, or if you're trying to bypass it in any way, then use something else. Windows Installer does much, much more than just "place files and registry keys where I tell it to".
When you're writing an MSI package, you define how you want the target system to look. That's the way it looks after setup, the way it should automatically repair itself to if a file is deleted or a key data file is corrupted, and the way the system should roll back to if a user later cancels during an upgrade from 1.0 to 2.0. Windows Installer is 100% data driven. From a development point of view this is the hardest concept to understand: you can't just edit a config file or write some more data and expect it to persist. Once you get your head around this, Windows Installer becomes a real piece of cake, and you design your programs and features to work within its limitations, rather than trying to break free of them :)
A few other useful links...
Tao of the Windows Installer, Part 1
Understanding UAC in MSI
Assuming you're on at least WiX 3.0, you can make use of MakeSfxCA.exe to package dependencies in a single DLL. (This was an add-in from DTF -- Deployment Tools Foundation.) Basically, start off by making sure your project is copying the dependent DLLs. Make a CustomAction.config file. Test with a simple .bat file like:
REM MyMakeSfxCA.bat - Run under $(TargetDir); abs. paths reqd.
"%WIX%\SDK\MakeSfxCA" ^
%cd%\Managed_custom_action_pkg.dll ^
"%WIX%\SDK\x86\sfxca.dll" ^
%cd%\Managed_custom_action.dll ^
%cd%\Dependent1.dll ^
%cd%\Dependent2.dll ^
%cd%\Microsoft.Web.Administration.dll ^
%cd%\Microsoft.Deployment.WindowsInstaller.dll ^
%cd%\CustomAction.config
Once that works, convert into a Post-Build Event:
"$(WIX)\SDK\MakeSfxCA" ^
$(TargetDir)\Managed_custom_action_pkg.dll ^
"$(WIX)\SDK\x86\sfxca.dll" ^
$(TargetDir)\Managed_custom_action.dll ^
$(TargetDir)\Dependent1.dll ^
$(TargetDir)\Dependent2.dll ^
$(TargetDir)\Microsoft.Web.Administration.dll ^
$(TargetDir)\Microsoft.Deployment.WindowsInstaller.dll ^
$(TargetDir)\CustomAction.config
In your .wxs file, your Binary Key will look like:
<Binary Id="Managed_custom_action_CA_dll" SourceFile="$(var.Managed_custom_action.TargetDir)$(var.Managed_custom_action.TargetName)_pkg.dll" />
For the CustomAction.config, you can find examples online.
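A typical one (to the best of my knowledge this matches the stock config shipped with the DTF samples - verify against the runtimes you actually target) looks like:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0" />
        <supportedRuntime version="v2.0.50727" />
    </startup>
</configuration>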
This is the best way that I've found when working with managed code.
I'm kind of afraid to step into this one but the limitations are only there if you haven't followed best practices upstream.
Take the "delete files" "bug". I haven't seen this one in a while, but if I recall, this is typically caused by the following scenario:
build A:
file 1: 1.0.0.0
build B:
file 1: 1.0.0.0
Now you do a major upgrade where you have RemoveExistingProducts scheduled after CostFinalize. FindRelatedProducts detects the ProductCode to be removed, Costing says "oh, I see I have 1.0.0.0, but 1.0.0.0 is already installed, so nothing for me to do here", and then RemoveExistingProducts comes by and removes the old version, thereby deleting file 1.
I always worked around this one by scheduling RemoveExistingProducts way earlier than suggested by Microsoft. It's been working for me for years.
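In WiX 3 terms, that scheduling looks something like the following sketch (After="InstallInitialize" is one common "early" slot; pick the point that fits your rollback requirements):

<InstallExecuteSequence>
    <RemoveExistingProducts After="InstallInitialize" />
</InstallExecuteSequence>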
As for dependency hell, I manage a software product line that consists of hundreds of features expressed by hundreds of merge modules and thousands of files (15,000+), consumed by dozens of installers on dozens of various feature, integration, main, maintenance and maintenance_integ branches.
How do I do it? One fragment at a time, and lots of automation, SCM and virtual machines to make sure that everything actually works the way it's intended.
Being this close to your product shipping, the only "answer" I can truly offer you is to let you know that people like myself are always available for hire. You'd be surprised how fast some of us can turn projects around and get software shipping.
Anyone had experience of managing C# based projects with Maven?
If yes, please tell me a few words about it - how weird would it be to create such a setup?
Thanks
Maven is language agnostic and it should be possible to use it with other languages than Java, including C#. For example, the Maven Compiler Plugin can be configured to use the csharp compiler. There is also a .NET Maven plugin and there was a maven-csharp on javaforge.com (seems dead).
But Java gets most of the attention and manpower, and not much is done with other languages. So, while using Maven with C# is in theory possible, I wouldn't expect much support and feedback from the community (i.e. in case of problems, you'll be alone). I don't know if using Maven for C# would thus be a good idea. I wouldn't recommend it, actually (I may be wrong of course).
I work with a suite of C# and C++ components and applications that are dependency-managed via maven. The general rule of "If it can be done via command-line, it can be done in maven" holds, so we end up having a lot of .bat, .exe and powershell "glue" to get all the pieces playing together.
The biggest problem with using maven for a Microsoft stack is a complete lack of familiarity with the build/deployment/ALM cycle for ANY new developer. You can find many developers with MSBuild, TFSBuild, ANT, etc., experience, but it's a rare thing to find a C# or C++ dev who's worked with maven in a pure Microsoft shop. The rollout of maven for dependency management and build process is consequently extremely difficult, since you end up spending a LOT of time training developers (what's the difference between a snapshot and a release?), over-componentizing the product then scaling it back to get it right, etc.
I've also found that we've had to work around maven to do something resembling continuous integration and continuous delivery. About 70% of our technology stack is C# (the rest being C++), and we want to deploy most of that to QA servers every single night with the latest-and-greatest code by default. To balance the value of release builds vs. dev productivity via snapshots, we ended up constructing a build process where we create a release build of every component each night, followed by a snapshot build. This let the developers not have to worry about bumping POMs to consume snapshots in the morning. Overall, it's a royal pain, at least for someone coming from robust continuous integration, "build and deploy everything" environments.
Maven holds a lot of promise for dependency management and isolating breaking changes (particularly in interface components where the consumer and producer have to agree). Those problems have been solved other ways (svn externs, deployment builds, interface version management, etc.). But it is relatively nice to download any component, run "mvn compile", and see the code compile (assuming a basic level of build portability). For me, though, the overhead and the meta-conversations about getting the build right (as opposed to focusing on customer value) minimize the value of maven overall.
For .NET Core, you can use the dotnet-maven-plugin which drives the dotnet and nuget commands, as well as adds support for e.g. cleaning, releasing etc. in the "Maven way".
Here's an example plugin configuration:
<project>
[...]
<packaging>dotnet</packaging>
[...]
<build>
<plugins>
<plugin>
<groupId>org.eobjects.build</groupId>
<artifactId>dotnet-maven-plugin</artifactId>
<version>0.11</version>
<extensions>true</extensions>
</plugin>
</plugins>
</build>
[...]
</project>
(Notice the packaging type set to dotnet).
This will then read from the project.json file and run dotnet and nuget commands according to the maven lifecycle phases such as clean, compile, test, install etc.
You might also check out NPanday (it is a project I am involved in). While it still needs some work to more closely align to Maven's best practices, it is the most complete and active alternative available now. One feature that is unique to it is the existence of a Visual Studio Add-in for generating the correct pom.xml from the IDE.
There is a NMaven project at codeplex but it doesn't seem to be active or popular. See also these questions:
maven for .NET (DroidIn.net's link to his tutorial looks promising)
Why is there no need for maven in .NET
Is there a Maven Alternative or port for the .NET world?
maven-compiler-plugin with plexus-compiler-csharp works just fine with the following configuration. Of course you'll have to point to an actual C# compiler on your machine with the "executable" parameter.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.0</version>
<configuration>
<compilerId>csharp</compilerId>
<fork>true</fork>
<executable>C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe</executable>
<outputFileName>myDLL</outputFileName>
</configuration>
<dependencies>
<dependency>
<groupId>org.codehaus.plexus</groupId>
<artifactId>plexus-compiler-csharp</artifactId>
<version>2.2</version>
</dependency>
</dependencies>
</plugin>
Check this out: http://interfaceable.blogspot.com/2019/01/how-to-mavenize-visual-studio-project.html
At the time I developed those scripts/solutions I was unaware that such C# support existed in Maven, but I do recommend using Maven for the build, since it enables you to automate/orchestrate everything, such as IIS + ActiveMQ + MongoDB bring-up in the pre-integration-test phase; then we are able to run tests using vstest. Not to mention that you can integrate it with Jenkins and run your builds on a remote machine.
I personally recommend it, but bear in mind that you will be faced with some challenges sometimes.