Has anyone had experience managing C#-based projects with Maven?
If so, please tell me a few words about it: how awkward would it be to create such a setup?
Thanks
Maven is language agnostic, so it should be possible to use it with languages other than Java, including C#. For example, the Maven Compiler Plugin can be configured to use a C# compiler. There is also a .NET Maven plugin, and there was a maven-csharp project on javaforge.com (which seems dead).
But Java gets most of the attention and manpower, and not much is done with other languages. So while using Maven with C# is possible in theory, I wouldn't expect much support or feedback from the community (i.e., if you hit a problem, you'll be on your own). I don't know whether using Maven for C# would therefore be a good idea; I actually wouldn't recommend it (though I may be wrong, of course).
I work with a suite of C# and C++ components and applications that are dependency-managed via Maven. The general rule of "if it can be done via the command line, it can be done in Maven" holds, so we end up with a lot of .bat, .exe, and PowerShell "glue" to get all the pieces playing together.
The biggest problem with using Maven on a Microsoft stack is the complete lack of familiarity with the build/deployment/ALM cycle for ANY new developer. You can find many developers with MSBuild, TFSBuild, Ant, etc. experience, but it's a rare thing to find a C# or C++ dev in a pure Microsoft shop who has worked with Maven. The rollout of Maven for dependency management and the build process is consequently extremely difficult, since you end up spending a LOT of time training developers (what's the difference between a snapshot and a release?), over-componentizing the product and then scaling it back to get it right, etc.
I've also found that we've had to work around Maven to get something resembling continuous integration and continuous delivery. About 70% of our technology stack is C# (the rest being C++), and we want to deploy most of that to QA servers every single night with the latest-and-greatest code by default. To balance the value of release builds against dev productivity via snapshots, we ended up constructing a build process where we create a release build of every component each night, followed by a snapshot build. This spares developers from having to bump POMs to consume snapshots in the morning. Overall, it's a royal pain, at least for someone coming from robust continuous-integration, "build and deploy everything" environments.
Maven holds a lot of promise for dependency management and for isolating breaking changes (particularly in interface components where the consumer and producer have to agree). Those problems have been solved in other ways (svn externals, deployment builds, interface version management, etc.). But it is genuinely nice to download any component, run "mvn compile", and see the code compile (assuming a basic level of build portability). For me, though, the overhead and the meta-conversations about getting the build right (as opposed to focusing on customer value) diminish the value of Maven overall.
For .NET Core, you can use the dotnet-maven-plugin, which drives the dotnet and nuget commands and adds support for cleaning, releasing, etc. in the "Maven way".
Here's an example plugin configuration:
<project>
  [...]
  <packaging>dotnet</packaging>
  [...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.eobjects.build</groupId>
        <artifactId>dotnet-maven-plugin</artifactId>
        <version>0.11</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
  [...]
</project>
(Notice the packaging type set to dotnet).
This will then read the project.json file and run dotnet and nuget commands according to the Maven lifecycle phases such as clean, compile, test, install, etc.
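Once that is in place, a plain invocation such as mvn clean install should drive the corresponding dotnet restore/build/test and nuget steps for each phase; that's my reading of the plugin's behavior, so check its documentation for the exact phase bindings.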
You might also check out NPanday (a project I am involved in). While it still needs some work to align more closely with Maven's best practices, it is the most complete and active alternative currently available. One feature unique to it is a Visual Studio add-in for generating the correct pom.xml from the IDE.
There is an NMaven project on CodePlex, but it doesn't seem to be active or popular. See also these questions:
maven for .NET (DroidIn.net's link to his tutorial looks promising)
Why is there no need for maven in .NET?
Is there a Maven alternative or port for the .NET world?
maven-compiler-plugin with plexus-compiler-csharp works just fine with the following configuration. Of course, you'll have to point it to an actual C# compiler on your machine via the "executable" parameter.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.0</version>
  <configuration>
    <compilerId>csharp</compilerId>
    <fork>true</fork>
    <executable>C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe</executable>
    <outputFileName>myDLL</outputFileName>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-compiler-csharp</artifactId>
      <version>2.2</version>
    </dependency>
  </dependencies>
</plugin>
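With fork set to true and executable pointing at csc.exe, mvn compile hands compilation off to that compiler. The path shown targets the 64-bit .NET 4 framework compiler; adjust it to whatever version is installed on your machine.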
Check this out: http://interfaceable.blogspot.com/2019/01/how-to-mavenize-visual-studio-project.html
At the time I developed those scripts/solution I was unaware that such C# support existed in Maven, but I do recommend using Maven for the build, since it enables you to automate/orchestrate everything, such as IIS + ActiveMQ + MongoDB bring-up in the pre-integration-test phase, after which we run tests using vstest. Not to mention that you can integrate it with Jenkins and run your builds on a remote machine. A sketch of what that orchestration can look like follows.
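For instance, a service bring-up script could be bound to the pre-integration-test phase with the exec-maven-plugin (a minimal sketch; the script name and arguments here are hypothetical, not from my actual setup):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <id>bring-up-services</id>
      <!-- runs just before the integration tests execute -->
      <phase>pre-integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <!-- start-services.ps1 is a placeholder for your own bring-up script -->
        <executable>powershell</executable>
        <arguments>
          <argument>-File</argument>
          <argument>start-services.ps1</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>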
I personally recommend it, but bear in mind that you will face some challenges from time to time.
The situation:
I'm working on a research project which, due to some constraints, has a C# user interface (used mostly for visualization) but does most of the processing with PInvoke and unmanaged C++ code. The unmanaged code has TONS of dependencies on various third-party libraries: Boost, PCL, OpenCV, CGAL, VTK, Eigen, FLANN, OpenMesh, etc. (if you can name it, we probably depend on it!). The C# project interacts with a C++ project (which I simply refer to as the "wrapper" from now on). Obviously, the wrapper is where all the third-party dependencies are consumed and where the entry points for PInvoke are defined. The wrapper is compiled into a DLL and copied into the output directory of the C# project via a post-build event.
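To make that arrangement concrete, here is a minimal sketch of what the C# side of such a wrapper looks like (the DLL name and function names are made up for illustration, not my actual exports):

using System;
using System.Runtime.InteropServices;

// Minimal sketch: the C# UI consumes the native wrapper DLL via PInvoke.
// The wrapper DLL sits next to the executable (copied by a post-build
// event), so the default DLL search finds it.
internal static class NativeWrapper
{
    // Hypothetical entry point exposed by the C++ wrapper.
    [DllImport("Wrapper.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int ProcessPointCloud(
        float[] points, int pointCount, out IntPtr resultHandle);

    // Hypothetical cleanup function for memory allocated on the native side.
    [DllImport("Wrapper.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern void FreeResult(IntPtr resultHandle);
}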
I am the sole developer on the project. My primary development platform is Windows 10 with Visual Studio 2015, and Git is my version control. I mostly develop on 3 different machines, but sometimes I need to develop on other machines which only have Visual Studio 2015 installed.
What I've done so far:
Obviously, managing all those third-party dependencies is a hassle for one person, and I'd hate to have to install those libraries on new development machines. What I've done is compile all those third-party libraries from source into static lib files (except the header-only ones, obviously). All sources are built once for the Debug configuration and once for the Release configuration. I spent some time integrating them into my wrapper project (i.e., defining extra include directories, using lots of #pragma comment(lib, "blah.lib") directives which reference different builds depending on the build configuration, etc.). I also followed some of the advice in Microsoft's linker best practices to reduce link times. Specifically, I'm using the incremental linker and have disabled /LTCG and /OPT.
Now I have this gigantic "Dependencies" folder in my VS solution which is around 8 GB and is version-controlled separately from the project (using a Git submodule). The wrapper project is statically linked against all of these; as a result, and as mentioned above, only one DLL is produced when building the wrapper project. The upside of this approach is that on any new development machine, I clone the main repository, clone the Dependencies submodule, and I'm ready to roll! But...
The worst part:
You've guessed it! Terrible link times. Even on a powerful computer, after I change a single line in the wrapper project, I have to sit for a couple of minutes until the linker finishes. The thing I didn't see when I took the above approach was how much I value rapid prototyping: quick-and-dirty testing of some random feature in the wrapper project before exposing that feature to PInvoke. Ideally, I would like to be able to change something small in the wrapper project, quickly build, run and test that change, and get on with exposing the feature to PInvoke.
My Question:
Clearly, I'm inexperienced in this! How should I have done things differently, specifically given the dependencies I mentioned above? Would building DLLs instead of static libraries have been better? But then I would have had to add the Dependencies folder to PATH every time the C# program started (as mentioned here). As a side question, how do you evaluate my current approach?
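(For reference, the PATH change wouldn't have to be machine-wide; extending the process's own PATH at startup, before the first PInvoke call, should be enough. A sketch, with "Dependencies" as a hypothetical folder name under the application directory:)

using System;
using System.IO;

internal static class DllSearchPath
{
    // Sketch: prepend the native-DLL folder to this process's PATH so the
    // Windows loader can find the dependency DLLs when the wrapper loads.
    internal static void Extend()
    {
        string depsDir = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, "Dependencies");
        string path = Environment.GetEnvironmentVariable("PATH") ?? "";
        Environment.SetEnvironmentVariable(
            "PATH", depsDir + Path.PathSeparator + path);
    }
}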
Based on the comment by @silverscania, I decided to just take the DLL route. It was a bit of a pain rebuilding all the dependencies, but I'm now super happy with the results.
Now, building the whole solution from scratch takes 36 seconds! It used to be about 4 minutes, so I have nothing to complain about. Also, modifying a single file in the wrapper project and building again takes 3 seconds, which is amazing! The fact that all the compiled dependencies are now about 1 GB (as opposed to ~8 GB with the static libraries) is a plus! I couldn't be happier.
A couple of notes:
On the main machine where I do most of my development, I have a SanDisk SSD. I noticed that, for some reason beyond my comprehension, building the project on that drive was way slower compared to a regular HDD. I'm looking into this issue but haven't found a reason for it (TRIM is enabled and the drive is in AHCI mode).
I played around with the flags a bit more. I noticed that the compiler flag /GL (Whole program optimization) caused considerable slowdown during linking. I disabled that option too.
I'm building a REST API on ASP.NET Core 1.0. In production it would, IMHO, be very useful NOT to use the JIT, because the Docker containers with the app are scaling up and down and redeploying during CI over and over, so the just-in-time compilation in every deployed container causes terrible lags, LB health-check deaths, and other pains.
From what I've read, native compilation with the dotnet CLI has been discontinued.
I tried building with CoreRT but without luck (details on demand due to complexity).
Since this question is quite abstract, I'm not providing sample code or detailed info; for a start, here are a few questions instead:
Is my presumption correct: will ahead-of-time compilation solve the problem of the slow first execution of each code path, or is there some other solution anyway?
If so, is it currently possible to build a "native" app (ubuntu x64 target) from .NET Core?
If it is, what's the best practice and how can I do it? Does anyone have experience with that?
(The target platform would be the ubuntu-14.04-x64 Docker image, which is also the compilation platform. For development purposes it would also be nice to compile it on OS X.)
Thank you in advance.
Having full native ahead of time compilation isn't possible at this time. It's one of the goals of the CoreRT project linked above but isn't in any state I'd call production ready. The demo at Connect last year should be taken with a pretty big grain of salt. For example they still don't have a reflection subsystem. However, we have a couple of solutions that can greatly reduce the amount of code needing to be generated at JIT time. For .NET Core the tooling is called CrossGen and it's pretty baked these days.
While I have your attention, I'll also mention that we're working on an evolution of the NGEN/CrossGen format that alleviates a big chunk of the pain typically involved with ni files. That goes under the name ReadyToRun.
Hope that helps. Let me know if you have other questions.
Disclosure: I work on the .NET Native runtime and compiler team for UWP (a sister project to CoreRT, LLILC, etc.).
There is a guide for using CrossGen at https://github.com/dotnet/coreclr/blob/master/Documentation/building/crossgen.md. It's a little out of date - I'll see if I can get it updated sometime. The most important part of using CrossGen is to specify the -Platform_Assemblies_Paths switch on the command line, to tell CrossGen the location of all the dependencies that it needs (e.g., System.Private.CoreLib.dll).
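For a rough idea, the invocation looks something like crossgen -Platform_Assemblies_Paths <semicolon-separated directories of the dependencies> YourAssembly.dll; treat that as illustrative only, since the guide linked above has the authoritative syntax.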
Hope that helps. Please let me know if you run into any further issues.
Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code.
Rebuilding the whole solution takes several minutes. That's fine; if I want to rebuild the solution, I expect it to take some time. What is not fine is the time needed to build the solution right after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s when nothing has changed? In the output I see that it started "building" each project; it doesn't perform a real build, but it does something which consumes a significant amount of time.
That 35s delay is a pain in the ass. Yes, I can improve the time by not building the solution but only building a project (Shift+F6). If I run a project build on the particular test project I'm currently working on, it takes "only" 8+s. But it requires me to run the project build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually costs only the 8+s compilation. My current coding kata is: don't touch Ctrl+Shift+B.
The test project build takes 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" its dependencies; in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch the other projects.
Is there a way to simply tell VS to build only projects where some changes were made, plus the projects which depend on the changed ones (preferably with this second part as a separate build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way...
I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute).
To make this even more frustrating, I'm working in a team where most of the developers worked on Java projects prior to joining this one. So you can imagine how pissed off they are at having to use VS after the fully incremental compilation they had in Java. I don't require incremental compilation of classes; I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
By default, Visual Studio will always build every project in your solution when you run a single project, even if that project doesn't depend on every other project in your solution.
Go to Tools | Options | Projects and Solutions | Build and Run and check the box "Only build startup projects and dependencies on Run".
From now on, when you run your project (F5), Visual Studio will only build your startup project and those projects in your solution that it depends on.
Is there a way to simply tell VS to build only projects where some changes were made, plus the projects which depend on the changed ones (preferably with this second part as a separate build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way...
Not really (you understand it already).
You are talking about a "build system". MSVS is not that. It is an IDE, which happens to permit you to organize your assets into projects-and-solutions, and yes, to "build". But, it is not a build system. It will never be a build system (long story, but a very different technology is required).
In contrast, MSVS is an IDE for accelerated iterative development, including the "debugging" cycle (e.g., "step-into" and "step-over" in the debugger during a system run). That's where MSVS "shines".
It does not, and will never, "shine" as a build system. That's not what it was created to do. And, this will likely never change (long story, even Microsoft will likely agree).
I'm not trying to be cute, and I sincerely apologize for delivering this news. This answer hurts me too.
I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
MSVS is an IDE for interactive debugging/development, and not a build system (see above). So, you are measuring it in a product scenario for which it was not designed, and in which it will likely never function as you desire.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
Your expectations are reasonable. I want them too. However, MSVS is not a product that will ever deliver that.
Again, I'm not trying to be "cute". If you are willing to invest in a "build system", you may find value in using something like CMake to manage your configurations and export Makefiles (or something) to perform your "real" builds, but to also "export" *.vcproj and *.sln files for when you want to do work iteratively and interactively within the MSVS IDE.
EDIT: Rather, what you want is an SSD (solid-state disk) for your build workspace to get a 10x improvement in speed, or a RAM disk for a 100x improvement in speed for builds (not kidding: 64 GB of RAM on an LGA2011 socket gives you a 32 GB RAM disk, which is what we use).
One thing you can do is break your app into small solutions, each one a cohesive part, and build each solution separately. Have each solution use the outputs of the solutions it depends on, rather than their source code.
This will allow for shorter feedback cycles for each component.
EDIT: Modified Solution
Additionally, you can create an integrative build that, rather than getting all of the sources and compiling and testing them, gets the binary build products of the component CI builds. This integrative build should be triggered to run after every successful component build.
This build should be the binary equivalent of a complete build (which you should still run every night), but it will take considerably less time, because it triggers after a component increment and doesn't need to compile or fetch any sources.
Moreover, if you use an enterprise-grade build system that supports distributing your builds among multiple agents, you will be able to scale your efforts and shorten your complete CI cycle to (at most) the time it takes to build the longest component plus run the integrative test suite.
Hope this helps.
Weighing in a bit late on this, but have you considered having different build configurations?
You can tell Visual Studio not to build certain projects depending on the build configuration.
The developer could simply select the configuration relevant to the project they're working on.
Pretty ancient thread, but I can say I was suffering from a smaller version of the same thing, and after I upgraded to Visual Studio 2012 the problems seem to have finally been fixed. The RedGate .NET Demon solution mentioned above also seems to work pretty well so far.
This is an old problem.
Use parallel builds and an SSD. See here (I think; from a quick Google search):
http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx
I found a tool which does mostly what I want (and even more): RedGate .NET Demon. It is probably still a first version, because I encountered a few issues in our big solution (problems with C++ projects, problems with switching build targets, and a few others), but I really like it so far. I especially like the way it tries to track changed files in the VS IDE and rebuilds only the affected projects.
Edit: .NET Demon has been retired as it should not be needed for VS 2015. It still works with previous versions.
Here are the steps I take to create a package shipped to the end users:
Using Visual Studio 2005, build the project (a library DLL written in C#) in both debug and release mode.
I run doxygen and create the documentation
I create a folder structure where I put my DLL, the documentation, and some release notes
zip it
ship it
the directory tree structure looks like this:
--NetApi
  --Api
    --vs2005
      --release
        --dll
      --debug
        --dll
  --documentation
    --html files generated by doxygen
    --ReleaseNotes.html
  --Examples
I am thinking of rolling out a script to automate all that. But before I do, I would like to find out the common practices for packaging a library-API type of project, particularly the structure and the tools used. References and examples are highly appreciated.
Thanks
I am a big believer in continuous-integration and automated builds.
We have a rule in our shop that we never, ever, ever provide deliverables to a customer that were not produced by a fully automated, zero- or one-step build (meaning it takes no more than one mouse click by a human to baseline, build, package, and release the thing). These fully automated, one-step builds work by recognizing when a change is made to your source control system and automatically triggering the "build script".
For C#, I can recommend both CruiseControl.NET and Hudson.
I can also recommend the Pragmatic Project Automation series of books. Variants of this title should be available for both Java and .NET.
There are lots of prewritten build servers out there that can help you automate this.
For deployment I really like Inno Setup.
It is free, flexible, and can be easily customized to your tastes.
It would be nice if it offered both a list of methods to choose from and the list of potential input parameters. This was done for PowerShell, and I was curious whether any similar functionality has been implemented for Emacs or Vim?
Clarification:
A fellow developer I work with wants to use either Vim or Emacs for the low overhead, without running Visual Studio. In essence, he would like to be able to write tests and edit code in Emacs or Vim, then just run NAnt scripts to compile the code and run the tests. The only feature from Visual Studio he wants is code completion. The rest he can live without 98-99 percent of the time.
You can use a vim editor emulator for Visual Studio.
http://www.viemu.com/
I haven't come across an emacs mode that would offer code completion suggestions based on "knowledge" of the API(s) that the user's environment is offering. To a lot of people this is an issue which prevents them from attempting to use Emacs or VIM when working with rich/large/unwieldy (delete as applicable) APIs.
However I am wondering how much of a problem this would present during day-to-day work. I've been using Emacs with C#-mode to crank out quite a lot of C# code. I also tend to run dabbrev-mode or pabbrev-mode, which tends to take care of the more common function and variable names I tend to use. To my eternal shame I have to admit that I tend to have a browser open on the MSDN website to look up the rest - those APIs that I don't use often enough to remember. Another potential helper that your colleague might want to look into is icicles, which may also be a step in the right direction. Neither of these libraries however will offer the full breadth of completion support that something the like Visual Studio IDE will offer. I'd see this as part of the trade-off when using a more efficient editor.
As an aside, if your colleague is working in a team and the other members working on the same project are using Visual Studio, MSBuild might offer a better solution for building outside of VS than NAnt, as MSBuild reads the same solution and project files that VS uses (in fact, a lot of the build work in VS2008 is handled by MSBuild). The syntax isn't too far from NAnt's, and with the community tasks added (which give you NUnit integration, etc.) it'll ensure that everybody is using very similar mechanisms to build the executables.
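For example, running msbuild YourSolution.sln from a Visual Studio command prompt builds the same projects, with the same settings, as building the solution inside the IDE (YourSolution.sln standing in for your actual solution file).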
The furthest along completion I've seen for C# is at this blog, specifically at this post. (Blog link included for context and other Emacs posts.)
If you can live with dumb completion, you might be able to roll your own with tags and tag completion.
A previous Stack Overflow question on the same issue.
Your source code should be processed through the CEDET framework: http://cedet.sourceforge.net/
Then either use the example UIs bundled with CEDET, or try one of these two:
- company-mode: http://nschum.de/src/emacs/company-mode
- completion-ui: http://www.dr-qubit.org/emacs.php
Both support CEDET as a completion search backend.
For Emacs and C# you can look at this tool: http://code.google.com/p/idebridge/
OmniSharp provides contextual intellisense for C# in vim.
Some of the suggestions in Eclipse Style Function Completions in Emacs for C, C++ and JAVA? may be relevant for Emacs.
Not C#-specific, but still.
I have found http://code.google.com/p/csense, which is an Emacs C# intellisense/code-sense tool. I found it via this blog post: http://osdir.com/ml/emacs.sources/2007-11/msg00018.html. This may be close to the answer I was looking for.
After looking further, though, it has not been updated since November 2007, which looks stale to me.
For Vim, you can install insenvim. It supports C# code completion.
After downloading the plugin, you can either run the installer or install manually with the following steps:
Copy the file cs_vis.vim into your $VIM\vimfiles\ftplugin directory.
Copy the file csft.dll into your $VIM_INTELLISENSE directory.
Copy CSVimHelper.dll and reg.bat to your $VIM_INTELLISENSE directory.
Run reg.bat to register the DLLs. You need the directory containing gacutil.exe on your PATH, and you need the latest version of the .NET SDK.