Management and structure of a growing .NET project - C#

We're building a .NET software platform for test automation, for in-house use at our company.
The application is composed of a GUI (WinForms) component and various "Actions" that are dynamically loaded into it for execution.
There are already around 100 Action projects, and the number keeps growing.
Some of these projects depend on other Action projects, and so on.
All loaded Actions must reference our "SDK" DLL for various activities (reporting results to the main app, logging, etc.).
With this relatively simple design, we are facing some management decisions that we'd like to get right:
Should Actions ("plugins") reference our SDK project's output, or some known stable version of it?
For example, when developing a large application (MS Office, just as an example), not all teams naturally work with the source code of every component.
What is the best solution for something like this, and why?
How do we properly verify that all needed dependencies (3rd-party libraries, for example) are indeed taken from the correct location?
What are common practices for managing a multitude of interlinked projects? Are there any tips for doing so?

This is a problem that doesn't have a clear answer, but...
There are two paths you can take. A strongly coupled system or a loosely coupled system.
For a strongly coupled system I can suggest two directories for binaries: a 3rd-party directory and a directory that houses the DLLs your company builds for other developers to reference. The 3rd-party DLLs (from outside your company) should be kept in source control, so that all developers reference the same versions from the same location; this avoids inconsistencies between developer machines and the problem of having to install 3rd-party software on every machine. The in-house DLLs should not be kept in source control; they should be built on each developer's machine via an automated build batch file or similar. In a post-build step you can copy them all to the same directory, and as long as developers get the latest source and build, everyone has the same in-house DLLs.
For example: get latest, build (using a batch file that builds all the needed projects), and then, as a post-build step, copy the output to a common directory. Now all of your other projects can reference the common company DLLs and the 3rd-party DLLs from the same location, and everyone is consistent.
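As a minimal sketch (the Common folder location is illustrative, not from the answer), the post-build event of each in-house project could be something like:

xcopy /Y "$(TargetPath)" "$(SolutionDir)Common\"

Every other project then references the DLLs in Common instead of the individual project outputs.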
The problem is that the references are strongly coupled, so changes can sometimes be problematic if not communicated properly.
A loosely coupled system uses a framework such as MEF (Managed Extensibility Framework): your components reference "contract DLLs", which define the interfaces for your components. Projects reference the interface (contract) DLLs and don't really care about the implementation, and MEF manages the plugins for you.
In this case, you reference the interface DLL but not the actual DLL that implements it.
For example, say I have an interface called ILog with a method called LogMessage:
private ILog _logger;    // typed against the contract only
_logger.LogMessage();    // the implementation is supplied elsewhere (e.g., by MEF)
So, in a strongly coupled case: Action.DLL references Logger.DLL directly.
In a loosely coupled case, Action.DLL references ILog.DLL (just the interface). Logger.DLL implements ILog.DLL. But Action.DLL has no reference to Logger.DLL directly.
Now I can have any number of DLLs that implement the ILog interface, but Action.DLL does not reference any of them directly. That's pretty cool, and it's one of the more exciting features of MEF and of loose coupling in general: the ability to avoid hard dependencies.
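To make that concrete, here is a minimal sketch of the MEF wiring (the FileLog type, SomeAction, and the Plugins folder are illustrative, not from the original answer):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// In the contract DLL (ILog.DLL): nothing but interfaces.
public interface ILog
{
    void LogMessage();
}

// In Logger.DLL: references only the contract DLL.
[Export(typeof(ILog))]
public class FileLog : ILog
{
    public void LogMessage() { /* write to a file, the event log, ... */ }
}

// In Action.DLL: also references only the contract DLL.
public class SomeAction
{
    [Import]
    private ILog _logger;    // MEF fills this in during composition

    public void Run()
    {
        _logger.LogMessage();
    }
}

// In the host: discover implementations in a plugin folder at run time.
// var container = new CompositionContainer(new DirectoryCatalog(@".\Plugins"));
// var action = new SomeAction();
// container.ComposeParts(action);   // satisfies the [Import]s
// action.Run();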
Either way you choose is acceptable; I think the loosely coupled idea fits your scenario best, as teams would only have to know the contracts rather than the actual implementations.
I wouldn't have one massive contract DLL; I would try to break the interfaces into logical groupings. For example, logging seems like a utility kind of interface, so I would create a Utility contract DLL with an ILog interface. How it is split up depends on what you are trying to do. Each interface could even be its own contract DLL, but maybe that is a little extreme.

This is a somewhat complex topic, especially in .NET land. I don't know about the "best" solution, but I'll explain how we manage it; perhaps you will find it useful.
This approach allows you to build large systems with lots of linked projects, but it incurs a lot of complexity. As, I think, any solution of this kind would.
First: physical structure (we use SVN).
There is a source control root for each project
Each project has its own trunk, branches and tags
The trunk folder has versioned \src and \build folders, and an unversioned \lib folder
The \lib folder contains binaries to reference.
Those binaries could be 3rd party libraries or other projects that you need to link to (e.g., your SDK). All binaries under \lib come from an enterprise ivy repository (see http://ant.apache.org/ivy/). There is a lot of movement these days in .NET land concerning NuGet, so you could check that out too.
Your versioned \build folder contains build scripts, for example to get binaries from ivy, publish your project to ivy, or compile each project. They will also come in handy when you want to point a Continuous Integration server at each of your projects.
Second: where dependencies come from.
Answer: they come from your Ivy repository (which could be as simple as a network file share).
You created the repository, so you control its contents.
Be very careful with 3rd-party binaries that get installed in the GAC; Visual Studio is a pain in the a^^ to deal with in this situation.
Specifically:
"How to properly verify that all needed dependencies (3rd party libraries for example) are indeed taken from the correct location?"
Ivy gives you great flexibility with dependencies, and it also resolves transitive dependencies; e.g., you can depend on SDK rev="1.+" status="latest.release", which would mean "the latest stable 1.x release of the SDK", or on SDK rev="2.+" status="latest.integration", which would mean "the latest available binary of the 2.x SDK" (probably as produced by a continuous integration build).
So you will always depend on compiled binaries, never on project output. And you can control which version of the binaries to get. 3rd-party dependencies will probably be brought in transitively via your SDK.
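As a rough sketch of what that looks like on disk (organisation and module names are made up; check the Ivy documentation for the exact attributes your repository layout needs), an Action project's ivy.xml might declare:

<ivy-module version="2.0">
  <info organisation="mycompany" module="SomeAction"/>
  <dependencies>
    <dependency org="mycompany" name="SDK" rev="1.+"/>
  </dependencies>
</ivy-module>

Resolving this pulls the latest matching SDK binary, plus the SDK's own declared dependencies, into the unversioned \lib folder.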
This also means that the amount of code in each of your projects stays small enough to keep the Visual Studio solutions workable. It also means that refactoring tools like ReSharper will be a lot less useful. And there will be a certain amount of complexity in your build scripts and your branching strategy; that depends a lot on the logic of your components.
This is a brief overview; if you think this is the sort of thing you want, I can expand the answer. Good luck; the .NET ecosystem, and Visual Studio in particular, isn't really designed to work like this.


Testing GAC Style Plugin Architecture?

Background
I have a deployed system that consists of:
A few dozen "core" DLLs
Hundreds of "plugins" that make use of the "core" DLLs
The core and each plugin are independently deployable, but a plugin can only execute correctly if it's deployed alongside the correct version of the core.
The Problem
What if the core gets modified in a way that breaks a plugin? The actual APIs/interfaces might remain the same, but a logic change within one of those methods could (in theory) cause negative downstream impacts. How can we prevent those?
This architecture is old and "GAC style" and isn't really what I'd suggest today. But it's what I have. I've started moving the "core" DLLs into individual NuGet packages to try to move the system to a "bring your own" model. In the interim, I am trying to figure out how to execute the existing (~5000) integration tests on the core+plugins, but I'm having a hard time coming up with a good way to achieve this.
What I Have Tried
You can't just compile the tests in the plugin's solution, because the plugins won't necessarily be compiled against the latest core DLL set; they'll be at whatever version they consumed when they were last modified. Similarly, you can't just attach the *.csproj files for those plugins to the core's solution, because each plugin's xUnit test project will execute against the (outdated) NuGet packages for that plugin.
The best I could come up with was:
Publish each plugin with its test DLLs
When building the core, consume all the plugins + test DLLs
This will be the "latest" core, so will mimic what will happen if the core updates
I don't like this solution. It's awfully kludgy. Will it work? Yes, at least well enough to tell you that plugin X was broken and needs attention. Do I like it? Nope.
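For what it's worth, the kludge can be mechanized with a small harness along these lines (a sketch only; the folder, the naming convention, and the runner hand-off are all illustrative):

using System;
using System.IO;
using System.Reflection;

class PluginTestHarness
{
    static void Main()
    {
        // If a plugin test DLL asks for an older core assembly,
        // hand it the freshly built one sitting in this directory instead.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            var local = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, requested.Name + ".dll");
            return File.Exists(local) ? Assembly.LoadFrom(local) : null;
        };

        // Hand every published plugin test DLL to the test runner of choice.
        foreach (var testDll in Directory.GetFiles(@".\PluginTests", "*.Tests.dll"))
        {
            Console.WriteLine("run tests in: " + testDll);
        }
    }
}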
Does anyone have a better idea?

Dealing with References & NuGet Packages

I am having a bit of trouble. I have no idea what question title is appropriate in this case; feel free to edit it if you have something better in mind.
So, basically this is the current situation:
Currently I have four projects (not the real names, but the architecture is identical):
Client (the main client logic)
Server.Main (the main server logic)
Server.Extensions (some functions for the server, e.g. helpers etc.; can be used standalone and shouldn't rely on anything from Server.Main)
Shared (shared code between client & server)
For each of the projects I create a NuGet package and upload it to my online repository. This repository is private for now and only for development purposes.
Here is a summary of which project uses which NuGet packages:
Client uses the NuGet package of Shared.
Server.Main uses the NuGet packages of Shared & Server.Extensions.
Server.Extensions uses the NuGet package of Shared.
This works fine for me at the moment... I can easily update my repository for testing purposes and use the freshly updated version of my package.
But here comes the problem:
I would like to share my project with other people now (e.g. the GitHub community). But when they get the projects, they won't have access to my private repository, and the NuGet package manager will not find the packages.
Furthermore, there is another problem with my architecture: when they fix something, e.g. in Shared, they wouldn't be able to test the changes, because Client & Server would always use the NuGet package from the repository and not the fixed/changed local code.
I also thought about referencing the Shared project directly in the other three projects. Would that mean that whenever I update Shared, I have to update all three other projects as well?
I think my whole NuGet architecture is wrong. But I don't know how to do it correctly or in any better way. Does anyone have a better approach for me?
I wouldn't say this is necessarily wrong. If someone is trying to consume and work on your solution, P2P (project-to-project) references are probably best, since there is minimal overhead and a higher probability of catching issues early during builds and subsequent debugging sessions.
You can still easily create NuGet packages for all three during the build and consume them in, let's say, an integration test, by either packing them in a post-build step or using tools like NuProj.
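As a sketch (project and folder names are illustrative), the post-build step could be as small as:

nuget pack Shared.csproj -OutputDirectory ..\LocalPackages

NuGet accepts a plain folder as a package source, so contributors can point the package manager at LocalPackages while developing, and at your online repository otherwise.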

Multiple solutions working with shared library project in Visual Studio

I have an independent solution with multiple projects including class libraries and control libraries. This solution and all its projects are under TFS source control.
I reference the output of one or more of these libraries in all new projects I develop. References are currently binary rather than project references.
The new projects are also always under source control and now I need to add debugging support for the libraries.
If I reference the library projects from them, the project file is modified and no longer works with the original library solution, since the source control providers for the library and the referencing project may be different.
Is there an easy way to accommodate this?
You should package the shared binaries, along with indexed PDBs, into a NuGet package. NuGet was specifically designed to solve these problems.
You can index your PDBs by running an indexing tool; TF Build can automatically index your PDBs for you.
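For example (a hedged sketch; the .nuspec name is illustrative), the -Symbols switch produces a symbols package alongside the regular one:

nuget pack SharedLibraries.nuspec -Symbols

That yields a *.symbols.nupkg containing the PDBs, which debuggers can consume once a symbol source is configured.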
Nope.
There are some strategies you can use, however. The easiest (possibly, but not in all cases) is to build the project you wish to debug, drop the binaries on top of the application that hosts them, and attach your debugger to the running application. This makes sure you have the correct version of the assembly under debug, but you might have to do unwanted things, such as making sure you're not targeting a specific version of the assembly, which may be bad news for an assembly under development. It also requires a lot of handiwork; depending on where your application runs, you may need remote debugging, have to deal with issues transmitting DLLs across untrusted networks, etc.

Best way to keep COM-exposed assemblies in sync with .NET-referenced assemblies

I have a C#, COM-exposed .NET assembly, which I use heavily as a library for VB6 clients (Office VBA). I am extremely happy with it.
That same COM-exposed library is useful to me in some newer .NET clients I have written as well. From Googling, the consensus is that the only way to do this is to reference the .NET libraries themselves (link 1, link 2), which I have done.
When these .NET apps are deployed, VS naturally wants to bring my COM .NET assemblies with them. But I now have several, independent copies of my COM assembly floating around with my .NET apps-- in addition to it being registered as a COM object on the machines in question.
This means that every time I fix a bug or add functionality to my COM assembly, I need to update these "floating" copies as well, which makes maintenance annoying at best. I want to expose the needed functionality once, for all apps that use it (isn't that the purpose of COM?!)
I tried activating the COM using late binding, hoping I could get around the problem-- but I got different behavior on two different machines, so I decided to ditch that idea.
Is there an elegant way to handle this? I thought perhaps it would make sense to register the COM assembly in the GAC upon installation, but it just seems like the wrong thing to do since it's already registered as COM (plus, it seems like registering in the GAC is not considered good practice).
I believe the easiest way to manage this scenario is to distribute the COM assemblies as a separate deployment that installs the assemblies to the GAC. When adding the assembly reference to the .NET projects, make sure the Copy Local property on the reference is set to False. This tells the build not to include a copy of the assembly in the deployment, which ensures that the deployed .NET apps and the VB6 apps are all referencing the same version (the one installed in the GAC and registered with COM services).
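As a sketch (the assembly name is illustrative), that separate deployment essentially needs to run the standard framework tools on each target machine:

gacutil /i MyComLibrary.dll
regasm MyComLibrary.dll /tlb:MyComLibrary.tlb

gacutil installs the assembly into the GAC; regasm registers it for COM and exports the type library the VB6/VBA clients bind to.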

Best approach to managing software versions via code branches?

I would like to customize an off-the-shelf software product that has a Lite Edition and an Enterprise Edition. The features are almost the same, so my extended customizations can work for both, but I have to recompile for each edition because they have differently versioned assemblies.
Can someone advise me on how to maintain this? I am using Visual Studio 2008 and VisualSVN. Should I create two completely different solutions, one solution with duplicate projects, or branches? Branches seem like the elegant route, but what is the idea? Create a "Lite Version" and an "Enterprise Version" from the trunk, with the trunk being the "Lite Version"?
It depends on how much your code differs between the two. In the best case, if it's simply a matter of linking to different assembly versions, use NAnt or similar and simply create a build target for each one.
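For instance (a hedged sketch; the symbol name is illustrative), each build target could define a compilation symbol so the few edition-specific spots stay in one code base:

// Build each edition with a different defined constant, e.g.:
//   msbuild MyAddOn.csproj /p:DefineConstants=ENTERPRISE
public static class Edition
{
#if ENTERPRISE
    public const string Name = "Enterprise";
#else
    public const string Name = "Lite";
#endif
}

Each target would also reference the matching vendor assembly version, so a single trunk serves both editions.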
If life isn't quite that utopian, I'd create three projects on one branch: one class library containing all the common code, and one class library per edition containing only the unshared code.
If the shared code has dependencies on those multi-version assemblies, though, you're more or less stuck doing things manually, as far as I can tell. That means maintaining a branch per target and doing regular merges between them to keep the shared pieces in sync. Using a distributed version control system would ease the pain of merging, and building a battery of unit tests will help reduce the errors these cross-branch merges introduce.
