I have a simple project that is used in lots of other solutions. Whenever I update this project I have to remember to go into the other solutions that use this project and recompile and deploy as well.
Is there a way to automate this?
If you use any sort of continuous integration tool like TeamCity, Jenkins, or Cruise Control, you could have your commits automatically cause the other solutions to be built.
I'm always uneasy about any solution that doesn't require a recompile when an API you depend on changes. Updating something that acts as a module without rebuilding makes sense, but if something you depend on changes you really want to make sure that doesn't break things elsewhere.
Using a CI server will allow you to run any sort of testing you want on each individual solution and notify you of a failure on one of them. You can also add steps for things like packaging a deployment or if you really enjoy playing with fire you could have the CI server do the deploy automagically.
Edit: Typically this is all done on an integration server, but there is no reason you couldn't set it up on your local machine.
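For instance, if Jenkins were the CI tool, a minimal pipeline sketch might chain the builds like this (the job names, solution path, and Windows agent are all assumptions for illustration; the downstream jobs are assumed to already exist):

    pipeline {
        agent any
        stages {
            stage('Build shared project') {
                steps {
                    bat 'msbuild SharedProject\\SharedProject.sln /p:Configuration=Release'
                }
            }
        }
        post {
            success {
                // kick off the solutions that consume the shared project
                build job: 'Solution-A-Build', wait: false
                build job: 'Solution-B-Build', wait: false
            }
        }
    }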
If you're using any form of continuous integration, e.g. CruiseControl, TeamCity, TFS, etc., then you can easily set up your CI to rebuild your dependent solutions.
Another, less elegant, option is to have a .sln file that contains all of your projects and work in that solution.
Alternatively, you could add a post-build event that builds the dependent solutions when you make a change.
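A minimal sketch of such a post-build event on the shared project (assuming msbuild.exe is on the PATH and that the dependent solution lives at a hypothetical sibling path):

    rem Post-build event command line on the shared project:
    rem rebuild a dependent solution with the same configuration
    msbuild.exe "$(SolutionDir)..\OtherSolution\OtherSolution.sln" /p:Configuration=$(ConfigurationName)

Be aware that this lengthens every build of the shared project by a full build of each dependent solution.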
Mark those projects as Build/Deploy in Configuration Manager.
You're probably missing that.
That sounds very scary, actually. You make a change to a single project, and then, without regression testing, all its dependent projects are automatically recompiled with the new version and deployed?
If you manage to find a way to do this, I predict great turmoil and calamity.
Is the reused project a class library? If so, what I usually do is add a reference to the DLL in the output bin of the class library. Whenever I recompile the DLL, the other projects detect it almost immediately (especially IntelliSense).
This doesn't work for dependent projects that are already deployed, though.
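For reference, a trimmed-down sketch of what the consuming project's .csproj ends up containing (the library name and path are hypothetical):

    <!-- Reference pointing at the class library's output bin -->
    <ItemGroup>
      <Reference Include="MyClassLibrary">
        <HintPath>..\MyClassLibrary\bin\Debug\MyClassLibrary.dll</HintPath>
      </Reference>
    </ItemGroup>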
Besides setting up a build server that would launch the build/publication of the other projects, I don't think so. If you want to check out a continuous build program, we used CruiseControl (http://cruisecontrol.sourceforge.net/) where I used to work, and it was a nice setup with a lot of customization possibilities.
I'm quite new to C# and Unity, so have mercy on me. I'm using Visual Studio.
I have what seems like a pretty common problem: I want to use functions I write across several projects in Unity. I don't want to have to go search for the code in some folder, copy-paste it into the new project, fiddle with symbolic links, or use DLLs. These are all not great solutions to the problem. Can't I just somehow create a class I can access across all my projects? Perhaps a custom namespace that is not project specific, which I can simply reference at the beginning of wherever I want to use my homemade scripts?
If you don't want to build a custom DLL and the headache that comes with maintaining its versioning alongside Unity releases, consider building an AssetPackage. You can right-click in one of your projects and export a bunch of scripts that you want to be re-used in other places together as a package. When you start a new project, just load that package into it by dragging and dropping it.
If you're using git for your projects, you could add the shared code into a separate repository and add them to your projects as a git submodule.
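A minimal sketch of that workflow (the repository URLs and the Assets/Shared path are hypothetical):

    # Add the shared-code repository as a submodule of the current project
    git submodule add https://example.com/you/shared-code.git Assets/Shared
    git commit -m "Add shared code as submodule"

    # Clone a project together with its submodules
    git clone --recurse-submodules https://example.com/you/my-unity-project.git

    # Later, pull in upstream changes to the shared code
    git submodule update --remote Assets/Shared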
You have 2-3 things to consider in this situation:
Ease of deployment
Whether or not you will update that code
Ease of update, if you will update it
If you just want to bring it in once, then Erik's answer should be simple enough.
If you want to keep things as an updated library though, you will need another method.
Symbolic links, as you mentioned, would be best, or DLLs. However, with these, you run the risk of breaking your other projects if you ever modify the common lib from inside one project.
Another option would be to have a separate VCS (git, svn, etc) inside your project for your common code. This way you can update if/when you want, you can roll-back if something breaks, and you can even fork your "common" code to make a project specific change.
Since OP mentioned Git: in this case, OP could specifically use Git submodules, or simply nest a second Git repository inside the project and add that sub-repository to the parent's .gitignore file.
I'm about to start developing a desktop application (WPF) based on a "plugin" architecture, and was going to use MEF (and its DirectoryCatalog) to discover and load plugin assemblies. We're going to be developing many plugins, so it seems sensible to keep them in separate VS solutions rather than bloat the "core" application solution, but having only ever worked on single, standalone solutions, I suspect this is going to make debugging a bit tricky. I'm using VS2013 if that makes a difference.
I'm assuming that I'll still be able to step into a plugin in scenarios where the "core" application calls a method in that plugin? And I'm guessing that once in there, I'll be able to set breakpoints in those source code files that have been "visited"? But what if I want to add a breakpoint to a different source code file - one that hasn't been visited while stepping-through? How can I open that file? In a single solution I could just open it via Solution Explorer, but not (I'm guessing) when it's in a separate assembly.
I'm trying to pre-empt any problems I might have with this multi-solution approach, and wondered if VS had any clever features to simplify some of this stuff. Having separate solutions also means first compiling the plugin solution(s) that I want to test, then compiling and running the "core" application solution. While it's only a couple of extra mouse clicks, are there (again) any VS features that could help here?
This is a common scenario and not tricky at all.
In the project properties of your plug-ins, go to Debug -> Start Action and set Start external program to the executable of your core application.
This way, you only have to compile your core application once (probably using a build script that just builds everything), and debugging a plug-in will start the core application with the debugger attached, so you can debug the plug-in (as soon as your core application loads the plug-in assembly).
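For reference, what that setting boils down to in the plug-in's .csproj.user file is roughly this (a sketch; the executable path is hypothetical):

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
        <StartAction>Program</StartAction>
        <StartProgram>C:\Dev\CoreApp\bin\Debug\CoreApp.exe</StartProgram>
      </PropertyGroup>
    </Project>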
Also keep in mind that you can detach the debugger from the running application, switch to another instance of Visual Studio with another solution opened, and attach again to your running application. This comes in handy if you, e.g., debug your plug-in and want to set or use existing breakpoints in your core application.
As long as Visual Studio is able to find the debugging symbols (the *.pdb files), stepping through the code of e.g. your core application while debugging your plug-in is also no problem.
I see two ways to do this.
The more comfortable option:
1. You can add the external solution to the core solution.
Walkthrough: Adding an existing Visual Studio solution to another solution
By doing this you can organize your solution to reference the code and still keep each plugin solution separate at the same time.
You just reference those plugin solutions from your core solution that you currently want to work on. Also, using this approach you can organize the other solutions just like you would with normal projects and move them between virtual solution folders to your liking until you have the most adequate folder structure.
Quote from the article:
The nice thing about this approach is that not only are all the projects now in one solution but at any time, you can open the separate solutions without impacting the "master" solution and vice versa.
The files in the referenced solutions can be opened and edited just like any other file from your "normal" projects, and of course, you can set breakpoints in them like in any other code file, too.
This way you can both edit your code and step through it, which I personally find much more convenient than switching between and attaching to multiple processes.
2. Add the PDB files.
Put the DLLs with their corresponding PDBs of those plugins you want to debug into a folder and configure your core application to use that folder for the DirectoryCatalog. This enables you to step into the plugin code, but you will not be able to edit them.
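For illustration, a minimal sketch of pointing MEF at such a folder (the folder path is hypothetical):

    using System.ComponentModel.Composition;          // for ComposeParts
    using System.ComponentModel.Composition.Hosting;  // for DirectoryCatalog, CompositionContainer

    // Point MEF at the folder that holds the plugin DLLs (drop the matching
    // .pdb files in the same folder so the debugger can step into them).
    var catalog = new DirectoryCatalog(@"C:\MyApp\Plugins");   // hypothetical path
    using (var container = new CompositionContainer(catalog))
    {
        container.ComposeParts(this);  // satisfies [Import] members from the plugin assemblies
    }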
@Andrew
Regarding debugging, it shouldn't be an issue as long as you drop the .pdb files along with the assemblies in the directory you are using for the DirectoryCatalog.
Regarding building the plugin solutions before Core: as you have one build file for each solution, you should check whether you can write msbuild commands in a .bat file to get them executed one after the other.
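Something along these lines should work (a sketch; the solution names and paths are hypothetical, and msbuild.exe is assumed to be on the PATH, e.g. in a Developer Command Prompt):

    @echo off
    rem Build the plugin solutions first, then the core application
    msbuild Plugins\PluginA.sln /p:Configuration=Debug || exit /b 1
    msbuild Plugins\PluginB.sln /p:Configuration=Debug || exit /b 1
    msbuild Core\CoreApp.sln /p:Configuration=Debug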
Besides all the above suggestions, another way to debug is to attach the debugger to the running core process from your addin solution; see "Attach to Running Processes with the Visual Studio Debugger".
The project that I'm currently working on is being developed by multiple teams where each team is responsible for different part of the project. They all have set up their own C# projects and solutions with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory.
The problem that I have encountered though, is that I have found only one way to make all projects build into the same output directory - I need to modify configurations for all of them. That is what we would like to avoid. We would prefer that all these projects had no knowledge about this "global" solution. Each team must retain possibility to work just with their own sub-solution.
One possible workaround is to create a special configuration for all projects just for this "global" solution, but that could create extra problems, since now you have to constantly sync these configuration settings with the regular ones used by that specific team. The last thing we want to do is spend hours trying to figure out why something doesn't work when building under the global solution, just because of some check box that developers have checked in their configuration but forgot to check in the global one.
So, to simplify, we need some sort of output directory setting or post build event that would only be present when building from that global, all-inclusive solution. Is there any way to achieve this without changing something in projects configurations?
Update 1
Some extra details I guess I need to mention:
We need this global solution to be as close as possible to what the end user gets when he installs our application, since we intend to use it for debugging of the entire application when we need to figure out which part of the application isn't working before sending this bug to the team working on that part.
This means that when building under global solution, the output directory hierarchy should be the same as it would be in Program Files after installation, so that if, for example, we have Program Files/MyApplication/Addins folder which contains all the addins developed by different teams, we need the global solution to copy the binaries from addins projects and place them in the output directory accordingly.
The thing is, the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.
The key here is that each team is responsible for a deliverable, and that deliverable is a collection of binaries. So the "global" solution, or the "product that uses the deliverables from teams", is interested in ensuring that all of the current deliverables work together; that is, that you have a deliverable from the collaborative effort.
So this begs a few questions. Does each team deliver what they consider to be a 'release'? This may be automatic in the build system: if it builds and all tests pass, then publish it.
What you are looking for is a team publishing or promoting a release. The source code is how you got there, the binaries are the result. Each team controls what binaries it considers to be a release (this may be automated by the build system).
Not exactly what you asked, but I hope it is the answer that leads to the right questions to give good results.
One very simple way would be to create the global solution, include all the projects, and add a project (or more) to handle the global build tasks. The projects added for the global build tasks should then reference the projects they need, and Visual Studio will handle how to get the binaries from each project: under normal circumstances they will be copied to the output folder of the referencing project. So the project added specifically for the global build tasks would end up with a copy of the binaries of all the referenced projects.
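A trimmed-down sketch of what that extra project's .csproj would contain (the project names and paths are hypothetical):

    <!-- Each team's project is referenced so its output lands in this
         project's bin folder when the global solution builds -->
    <ItemGroup>
      <ProjectReference Include="..\TeamA\AddinA\AddinA.csproj" />
      <ProjectReference Include="..\TeamB\AddinB\AddinB.csproj" />
    </ItemGroup>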
Another way would be to create a global MSBuild script that references the rest of the build scripts. Each project file is, on its own, an MSBuild script.
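A minimal sketch of such a global script (the solution paths are hypothetical):

    <!-- build-all.proj: builds every team's solution in one go -->
    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
      <ItemGroup>
        <Solutions Include="TeamA\TeamA.sln" />
        <Solutions Include="TeamB\TeamB.sln" />
      </ItemGroup>
      <Target Name="Build">
        <MSBuild Projects="@(Solutions)" Properties="Configuration=Release" />
      </Target>
    </Project>

Run it with "msbuild build-all.proj"; each team's own solution stays untouched.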
EDIT
From the comments it would seem that there are two categories of projects. One that needs building and one that does not.
For those that need building, reference them as projects in the aggregating project; for those that do not require building, add them either as references or add the DLLs as resources.
Using the latter, change the Build Action property to None and Copy to Output Directory to Copy if newer.
In both cases you now have all DLLs in the output directory. You can then have a post-build action on the aggregating project that moves the DLLs that should be in a specific folder (i.e. not in the output folder), as sketched below.
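A sketch of such a post-build event on the aggregating project (the addin names and the Addins folder are hypothetical and should mirror your installed layout):

    rem Post-build event: copy addin binaries into the folder hierarchy
    rem the installer would create under Program Files
    if not exist "$(TargetDir)Addins" mkdir "$(TargetDir)Addins"
    xcopy /y "$(TargetDir)AddinA.dll" "$(TargetDir)Addins\"
    xcopy /y "$(TargetDir)AddinB.dll" "$(TargetDir)Addins\"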
Have a look at the practice of Continuous Integration and the usage of a Build Server with scripted builds. This is an indispensable instrument when developing different parts of an application as a team, and your problems are a great illustration of the reason why.
You've not mentioned whether you use a version control system. I've found in practice that each developer maintains his/her/their team's configuration and builds locally on their own machine; since you don't check in *.suo or *.user files, most of the personal configuration only affects the individual team member.
On a completely separate machine, check out the same code from all repositories and compile the project on the build machine (this can be completely automated). This maintains your build server's independence.
Don't worry about it being a "Solution". You can easily build multiple solutions one after the other.
Since the output path is relative (and probably "bin\Debug"), it'll get built wherever you check it out to. If you want all the binaries in the same output folder, you could tweak the output path on every configuration to match, something like "..\..\bin\Debug" (obviously this affects where the projects get built on the local machines, but it might not matter). That way multiple projects would get built to the same target output.
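That is, a hedged sketch of the relevant .csproj fragment (Debug configuration shown; the two-levels-up layout is an assumption):

    <PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
      <!-- build into a bin folder shared by all projects -->
      <OutputPath>..\..\bin\Debug\</OutputPath>
    </PropertyGroup>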
You could also include a separate setup build on the build server, which isn't on each developer's local machine, to package up the final product.
Our product's solution has more than 100+ projects (500+ksloc of production code). Most of them are C# projects but we also have few using C++/CLI to bridge communication with native code.
Rebuilding the whole solution takes several minutes. That's fine; if I do a full rebuild I expect it to take some time. What is not fine is the time needed to build the solution right after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35 s when there was no change? In the output I see that it started "building" each project; it doesn't perform a real build, but it does something which consumes a significant amount of time.
That 35 s delay is a pain in the ass. Yes, I can improve the time by not building the solution but only the project (Shift+F6). If I run a project build on the particular test project I'm currently working on, it takes "only" 8+ s. It requires me to run the build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually costs only that 8+ s compilation. My current coding kata is: don't touch Ctrl+Shift+B.
The test project build takes 8 s even if I don't make any changes. The reason is that it also "builds" its dependencies; in my case it "builds" more than 20 projects even though I made changes only to the unit test or a single dependency! I don't want it to touch the other projects.
Is there a way to simply tell VS to build only projects where some changes were done and projects which are dependent on changed ones (preferably this part as another build option)? I worry you will tell me that it is exactly what VS is doing but in MS way ...
I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute).
To make this even more frustrating, I'm working in a team where most developers worked on Java projects prior to joining this one, so you can imagine how pissed off they are when they must use VS in contrast to the fully incremental compilation in Java. I don't require incremental compilation of classes; I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate, which costs several thousand dollars.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
By default, Visual Studio will always build every project in your solution when you run a single project, even if that project doesn't depend on every other project in the solution.
Go to Tools | Options | Projects and Solutions | Build and Run and check the box "Only build startup projects and dependencies on Run".
From now on, when you run your project (F5 key), Visual Studio will only build your startup project and those projects in your solution that it depends on.
Is there a way to simply tell VS to build only projects where some changes were done and projects which are dependent on changed ones (preferably this part as another build option)? I worry you will tell me that it is exactly what VS is doing but in MS way ...
Not really (you understand it already).
You are talking about a "build system". MSVS is not that. It is an IDE, which happens to permit you to organize your assets into projects-and-solutions, and yes, to "build". But, it is not a build system. It will never be a build system (long story, but a very different technology is required).
In contrast, MSVS is an IDE for accelerated iterative development, including the "debugging" cycle (e.g., "step-into" and "step-over" in the debugger during a system run). That's where MSVS "shines".
It does not, and will never, "shine" as a build system. That's not what it was created to do. And, this will likely never change (long story, even Microsoft will likely agree).
I'm not trying to be cute, and I sincerely apologize for delivering this news. This answer hurts me too.
I expect working incremental compilation of solutions. Especially in product like VS 2010 Ultimate which costs several thousands dollars.
MSVS is an IDE for interactive debugging/development, and not a build system (see above). So, you are measuring it in a product scenario for which it was not designed, and in which it will likely never function as you desire.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers. Those are not acceptable solutions. We're not paying for VS to make such compromises.
Your expectations are reasonable. I want them too. However, MSVS is not a product that will ever deliver that.
Again, I'm not trying to be "cute". If you are willing to invest in a "build system", you may find value in using something like CMake to manage your configurations and export Makefiles (or something) to perform your "real" builds, but to also "export" *.vcproj and *.sln files for when you want to do work iteratively and interactively within the MSVS IDE.
EDIT: Rather, what you want is an SSD (solid-state disk) for your build workspace to get a 10x improvement in speed, or a RAM disk for a 100x improvement for builds (not kidding: 64 GB of RAM on an LGA2011 socket gives you a 32 GB RAM disk, which is what we use).
One thing you can do is break your app into small solutions, each one being a cohesive part, and build each solution separately. Have each solution use the outputs of the solutions it depends on, rather than using the source code.
This will allow for shorter feedback cycles for each component.
EDIT: Modified Solution
Additionally, you will create an integrative build that, rather than getting all of the sources and compiling and testing them, gets the binary build products of the component CI builds. This integrative build should be triggered to run after every successful component build.
This build should be the binary equivalent of a complete build (which you should still run every night), but it will take considerably less time, because it triggers after a component increment and doesn't need to compile or fetch any sources.
Moreover, if you use an enterprise grade build system that supports the concept of distributing your builds among multiple agents, you will be able to scale your efforts and shorten your complete CI cycle to the amount of time it takes to build the longest component, and test the integrative suite (at most).
Hope this helps.
Weighing in a bit late on this, but have you considered having different build configurations?
You can tell Visual Studio not to build certain projects depending on the build configuration.
Each developer could simply select the configuration relevant to the project they're working on.
Pretty ancient thread, but I can say I was suffering from a smaller version of the same thing, and upgrading to Visual Studio 2012 seems to have finally fixed it. The RedGate .NET Demon solution mentioned above also seems to work pretty well so far.
This is an old problem.
Use parallel builds and an SSD. See here (I think, from a quick Google search):
http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx
I found a tool which does mostly what I want (and even more): RedGate .NET Demon. It is probably still a first version, because I encountered a few issues in our big solution (problems with C++ projects, problems with switching build targets, and a few others), but I really like it so far. I especially like the way it tries to track changed files in the VS IDE and rebuilds only the affected projects.
Edit: .NET Demon has been retired as it should not be needed for VS 2015. It still works with previous versions.
A lot of my projects contain the Castle/NHibernate/Rhino-Tools stack. What's confusing about this is that Castle depends on some NHibernate libraries, NHibernate depends on some Castle libraries, and Rhino-Tools depends on both.
I've built all three projects on my machine, but I feel that copying the NHibernate/Castle libraries is a bit redundant since I built Rhino-Tools using the resulting libraries from my NHibernate and Castle builds.
Right now, I include all projects in separate folders in my /thirdparty/libs folder in my project tree. Should I simply just have /thirdparty/libs/rhino-tools in my project and use the Castle/NHibernate libs from there? That would seem to make logical sense in not duplicating files, but I also like having each project in its own distinct folder.
What are your views on this?
This is one of the problems that we're trying to tackle in the Refix open source project on CodePlex.
The idea is that Refix will parse all the projects in your solution, and before your project compiles, copy the necessary binaries from a single local repository on your machine into a folder within the solution tree and point the projects at them. This way, there's no need to commit the binaries. Your local Refix repository will pull binaries from a remote one (we're setting one up at repo.refixcentral.com), and you can set up an intermediate one for your team/department/company that can hold any additional software not held centrally.
It will also try to resolve conflicting version numbers - Visual Studio can be too forgiving of mismatched component version numbers, leading to solutions that compile but fall over at run time when they fail to load a dependency because two different versions would be needed.
So to answer the question "how do you package external libraries in your .Net projects", our vision is that you don't - you just include a Refix step in your build script, and let it worry about it for you.
I use a folder for each, which seems to be the convention.
Does it really make a difference if you're copying them?
What if you want to switch one out? Let's say you go with a new O/R mapper. It will be much easier to just delete the NHibernate folder than to selectively delete DLLs in your Rhino-Tools folder.
Take this to its logical conclusion and you won't have any folder organization in your lib folder, since everything uses log4net :)
Add additional probing paths to your app.config files to locate the dependency DLLs. This way you can get away with having just one copy of everything you want, though there are some quirks to using this feature (you must create the folder structure in a certain way). Look here for more details on the <probing> tag.
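A minimal app.config sketch (the folder names are hypothetical; privatePath entries must be subdirectories of the application base directory):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <!-- probe these subfolders when resolving assemblies -->
          <probing privatePath="libs;libs\castle;libs\nhibernate" />
        </assemblyBinding>
      </runtime>
    </configuration>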
I will definitely recommend having a thirdparty or vendor folder in each of your project trees. If you find it annoying to have 32 copies of the rhino-tools package, you can have a single copy of it in your code repository and use external references to it in your project tree.
Let's say you are using SVN: you can make a repository called "thirdparty-libs" and keep versioned copies of the libs in it. You then set an svn:externals property on the "thirdparty" folder in your project tree, which will in turn automatically check out your centralized third-party libs. This way you only have to update in one place if a security fix or a bugfix comes out, but each project is still in command of choosing which third-party libs, and which versions, to use.
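A sketch of how that looks with the SVN 1.5+ externals syntax (the repository URL is hypothetical):

    # Run inside your project's working copy: map ./thirdparty to the shared repo
    svn propset svn:externals "thirdparty https://svn.example.com/repos/thirdparty-libs/trunk" .
    svn commit -m "Pull shared third-party libs via svn:externals"
    svn update    # checks the external out into ./thirdparty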
As for the dependencies within the third-party libs themselves, I wouldn't worry about those. The first time you compile your project, if some of the libs aren't copied to your bin folder because of implicit dependencies, you can add an externals property on your bin folder too, which will then automatically check out the missing libs. That way you still only have to update your third-party libs in one place.