Say you are writing an app like Photoshop where you have effects (filters) and so on. Should one make each of these filters a separate assembly, built from a separate project?
The main idea is to have each of these filters as nodes, so think of it like:
sourceImage -> Sharpen -> Darken -> Contrast -> Blur ...
It seems to me that it would make sense to have dll files like:
[Filters folder]
Sharpen.dll
Darken.dll
Contrast.dll
Blur.dll
But it would be hard to manage them like that, and it would prevent me from using the internal keyword for class members, right?
So right now I only have 1 dll for all filters.
What are the best practices for organizing assemblies?
I wouldn't restrict yourself to one filter per assembly. You may well want to group filters which implement similar functionality into one assembly - e.g. colour/contrast together, while keeping them separate from very different kinds of filters (e.g. edge enhancement).
Just one bit of anecdotal evidence: I've often seen applications be difficult to manage due to having too many assemblies. I can't remember ever seeing one which had problems because it hadn't split the assemblies up enough. I'm not saying it can't happen - just that I haven't seen it.
Patrick Smacchia, author of the NDepend tool, suggests keeping the number of assemblies low. Look here. To a certain extent this also implies using NDepend to manage dependencies between namespaces.
Also, compilation is faster with fewer assemblies, and deployment is easier.
I would second Reed Copsey: a DI solution (like StructureMap) could provide you with extensibility and testability, if that's what you are after.
Reasons to have separate assemblies:
You can deploy them individually and load them at runtime using reflection (i.e., plugins) - see the sketch after this list. But make sure this level of added complexity is worth it.
You can separate your business logic from your UI and theoretically (occasionally in practice) have a separate UI.
You have a library of utility classes that you might want to use in both a Windows Forms and an ASP.NET application.
You can put your business logic in a DLL and then have a DLL of unit tests to exercise that code.
Along with the last point, you can configure some assemblies to only build in Debug or Release mode. So you might build your unit test assembly only in Debug mode and not ship it in Release mode. Or you might have an additional helper program (maybe for installation) that only builds for Release.
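For the plugin point above, here is a minimal sketch of what runtime loading via reflection could look like. The IFilter interface, the folder convention, and all names are invented purely for illustration:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;

    // Hypothetical contract that every filter plugin assembly implements.
    public interface IFilter
    {
        string Name { get; }
    }

    public static class FilterLoader
    {
        // Scan a folder of plugin DLLs and instantiate every concrete IFilter found.
        public static List<IFilter> LoadFilters(string folder)
        {
            var filters = new List<IFilter>();
            foreach (var path in Directory.GetFiles(folder, "*.dll"))
            {
                var assembly = Assembly.LoadFrom(path);
                foreach (var type in assembly.GetTypes())
                {
                    if (typeof(IFilter).IsAssignableFrom(type) && !type.IsAbstract)
                        filters.Add((IFilter)Activator.CreateInstance(type));
                }
            }
            return filters;
        }
    }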
Reasons to avoid separate assemblies:
It adds complexity. Don't organize your code into multiple assemblies just based on a theoretical "modeling" of your program. Make sure the additional complexity actually buys you some greater value.
It slows down compiles/builds.
You can't have circular references between assemblies, so you have to jump through a lot of hoops and "plumbing abstractions" if you discover that your assembly needs to access classes and methods from a higher-level assembly. For instance, if your Windows Forms UI (.exe) assembly calls into a business logic DLL, the DLL can't reference the UI classes without a lot of messing around with interfaces and passing references across the layers.
I think this is something that comes with experience. My own experience has been that as I've matured as a .NET developer, I've become less inclined to create more assemblies unless there's a very compelling reason.
I think a better way may be to have a Filters assembly with Sharpen, Darken, Contrast, and Blur namespaces for the types in that assembly. It really all depends on how modular you need each bit of functionality to be.
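As a rough illustration of that layout (all names invented), the single Filters assembly could be organised with one namespace per filter family:

    namespace MyApp.Filters.Sharpen
    {
        public class SharpenFilter { /* ... */ }
    }

    namespace MyApp.Filters.Contrast
    {
        public class ContrastFilter { /* ... */ }
    }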
Having separate assemblies makes it much easier to add more without recompilation. In particular, if you use some type of DI solution (like MEF), you can load these assemblies at runtime and inject them into your program.
However, this will require that your main program exposes an appropriate, public interface, so the other assemblies can work with your types. Internal modifiers become trickier, since you'd have to apply the InternalsVisibleTo attribute for each assembly that you want to have access to your internals.
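For reference, the attribute being described is InternalsVisibleTo, declared in the assembly that owns the internals; the friend assembly name below is hypothetical:

    using System.Runtime.CompilerServices;

    // Grants the named assembly access to this assembly's internal types and members.
    [assembly: InternalsVisibleTo("MyApp.Filters.Blur")]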
Related
Question
Is there a mechanism in the .NET Framework to hide one custom Type from another without using separate projects/assemblies? I'm not talking about access modifiers to hide members of a Type from another type - I mean to hide the Type itself.
Background
I'm working in an ASP.NET Website project and the team has decided not to use separate project assemblies for different software layers. Therefore I'm looking for a way to have, for example, a DataAccess/ folder whose classes are disallowed from accessing other Types in the same ASP.NET Website project. In other words, I want to fake the layers and have some kind of security mechanism around each layer to prevent it from accessing another.
More Info and Details ...
Obviously there's no way to enforce this restriction using language-specific OO keywords, so I am looking for something else: maybe a permission framework or code access mechanism, maybe something that uses metadata like Attributes, or even something that restricts one namespace from accessing another. I'm unsure of the final form it might take.
If this were C++ I'd likely be using friend as a solution; that doesn't translate to C#'s internal in this case, although the two are often compared.
I don't really care whether the solution actually hides Types from each other or just makes them inaccessible; however, I don't want to lock down one Type from all others, which is another reason access modifiers are not a solution. A runtime or design-time answer will suffice. I'm looking for something easy to implement, otherwise it's not worth the effort ...
You could use NDepend to do this:
http://www.ndepend.com/
NDepend could allow you to enforce "layering" rules by specifying that certain namespaces should not reference each other. You then plug NDepend and the ruleset into your automated build, and it will fail the build (with a full report) if there are any misdemeanours.
In this way you can enforce logical software layering concepts within an assembly without having to use project structures to do it physically.
Update
I answered the question late last night, and rather literally, i.e. how you can directly solve the stated problem. Although a tool can be used to solve the issue, developing in one project across the whole team is more than likely going to be a pretty miserable experience as the project grows:
Unless people are incredibly disciplined, the build will keep breaking on layering violations.
There will be source control merge thrashing on the VS project file - not pleasant.
Your unit of re-use is very large and undefined if you want to share assemblies with other applications/projects you are developing. This could lead to very undesirable coupling.
Although I do not advocate having lots of tiny assemblies, a sensible number defined around core concepts is very workable and desirable e.g. "UI", "data access", "business logic", "common library" and "shared types".
Nothing out of the box; there may be some 3rd-party tools that you can use to kludge some rules together, based perhaps on namespaces etc. Something like a custom FxCop rule...
I put different components in different DLLs, but ended up finding that I have too many DLLs. Should I put some of them into one DLL and use namespaces to separate them?
You don't have to build one assembly per namespace. Maybe you could use nested namespaces within one assembly. Just try to avoid splitting a namespace across several assemblies; it's harder to understand when you come to the project later.
Yes. Personally, when building a library, I tend to put all related functionality within a single assembly. The basic rule of thumb I go by is: if a DLL depends on another DLL that is related in any way, I will generally combine them into one DLL. That assumes, of course, that both projects are being developed concurrently.
You can also use ILMerge in your build process so that you can combine many projects into a single DLL.
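As a hedged example of what that merge step might look like on the command line (file names are illustrative; check the ILMerge documentation for the exact options your build needs):

    ilmerge /target:library /out:MyApp.Filters.dll Filters.Color.dll Filters.Edges.dll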
I usually use DLLs to enforce architectural constraints. For instance, I don't want my data layer to know anything about my business layer. When this results in too many assemblies (and long compile times), merging some of them might be wise. In that situation you could use a tool like NDepend to check your architectural constraints.
We currently have a rapidly growing C# codebase. Currently we have about 10 projects, split up in the usual categories, common/util stuff, network layer, database, ui components/controls etc.
We run into the occasional circular dependency where project x depends on something in y and vice-versa. We are looking at maybe collapsing the projects down to one and just managing the structure using folders/namespaces. We have a Java project which of course organises just using folders/packages, so we're not sure what, if any, benefit having multiple projects brings. None of our projects require special project properties, except the main run project, which we may keep separate (and very thin).
Does anyone have any prior experience of why one project is better/worse than multiple projects, and could you suggest the best approach? Also, any notes on circular-dependency issues in either approach would be useful.
Any input appreciated.
In my experience, separating code which builds a single executable into multiple projects can be useful if you want to
use different programming languages in different parts,
develop libraries that are also used by other applications, or
conceptually separate multiple layers (i.e., let Visual Studio ensure that there are no direct references from project Lib to project App).
Personally, I base most of my decisions on the second point. Do I think that part of the application could be a more general library that I am likely to need in another application? Put it in a separate project. Otherwise, as you point out, having a single project usually makes development easier.
About the circular dependencies: The recommended way to solve this is to put interfaces of the referenced stuff into a third project. For example, if you have two applications both sharing some objects through remoting, you put interfaces of the shared objects in a library project to ensure that they are available to both applications.
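A minimal sketch of that idea, with every name invented for illustration: the contract lives in a third "Shared" project, so the library never has to reference the application.

    // Shared project - referenced by both sides.
    public interface INotificationSink
    {
        void Notify(string message);
    }

    // Library project - references Shared only, never the application.
    public class Worker
    {
        private readonly INotificationSink _sink;
        public Worker(INotificationSink sink) { _sink = sink; }
        public void Run() { _sink.Notify("work finished"); }
    }

    // Application project - references Shared and the library, and supplies the implementation.
    public class StatusBarSink : INotificationSink
    {
        public void Notify(string message) { /* update the UI */ }
    }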
Without knowing the exact design of your application, it's difficult to give more concrete advice.
If you've got projects with circular dependencies, that indicates a problem with the design of the code, not with the solution/project model.
When making dependencies between projects, it helps to always think of one as "Lower" and the other as "Higher"
A higher level project (such as a web interface) should only depend on lower projects. A lower project (such as a utility) should never depend on something higher, such as a web interface. If this happens, it either means your higher level project has something that really should be in the lower project, or vice versa.
Generally speaking, having multiple VS projects (within a VS solution) just makes sense in these cases:
You can potentially reuse the produced DLL in another project (a class library)
You want to separate things, as in a layered architecture, where you may drop the DAO DLL and exchange it for another
There are different front-end projects (e.g. ASP.NET MVC apps) which need to be deployed to different physical locations but use the same BL and DAL.
If you're saying you have a problem with circular dependencies, then you have a problem in your code design. You could probably put the logic that is used by multiple projects inside a class library designed to be reused across projects.
Generally I'd say you shouldn't add more projects if you don't really need it. Splitting up into projects means adding more complexity, so when you're doing so, you should gain a reasonable benefit from it.
We've noticed that Visual Studio's performance degrades significantly as the number of projects grows. Something as simple as switching from 'Debug' to 'Release' configurations can take upwards of 15 seconds for solutions with around a dozen C# projects in them.
Also, as a counter point to Reed's comment about build times, I've seen build times grow because Visual Studio seems to be spending a lot of time on the project overhead. The actual compile times seem fast, but the total time from hitting build to being able to run is significant.
My advice would be keep the number of projects to the minimum you can get away with. If you need multiple projects for good reasons then use them as necessary, but prefer to keep things together. You can also refactor to split a project into two if necessary.
Multiple projects allows better reuse of specific types within multiple applications. It can also improve build time, since certain projects will not need to be rebuilt for all code changes.
A single project makes life easier, since you don't have to worry about dependencies. Just realize that the ease comes at a cost - it also makes it easier to let poor design decisions creep into the code base. Circular dependencies, whether in one project or multiple, are typically a design flaw, not a requirement.
There are several reasons for separating a solution into different projects (and thus assemblies), and it mainly comes down to re-usability and separation of responsibilities.
Now, your goal should be to ensure that an assembly (aka project) has the minimum number of dependencies on other assemblies in your solution; otherwise you may as well have everything in fewer assemblies. If, for example, your UI components have a strong dependency on your data access code, then there is probably something wrong.
Really, this comes down to programming against common interfaces.
Note, however:
When I say "otherwise you may as well have everything in fewer assemblies", I wasn't necessarily suggesting this is the wrong thing to do. In order to achieve true separation of concerns, you're going to be writing a lot more code and having to think about your design a lot more. All this extra work may not be very beneficial to you, so think about it carefully.
You might find the following Martin article worthwhile: Design Principles and Design Patterns (PDF)(Java).
A revised version in C# specifically is available in Agile Principles, Patterns, and Practices in C# also by Martin.
Both express different guidelines that will help you decide what belongs where. As pointed out, however, cyclic dependencies indicate that there are either problems with design or that something is in a component that belongs in a different one.
Where I work, we opted for an approach where the aim is to have a single project per solution. All code library projects also have a test harness application and/or a unit test app.
As long as the code libraries pass testing, the release versions (with Xml Documentation file of course) get transferred into a “Live” folder.
Any project that requires functionality from these other projects has to reference them from the “Live” folder.
The advantages are pretty clear. Any project always accesses known working code. There is never a chance of referencing a work in progress assembly. Code gets tested per assembly, making it far easier to understand where a bug originates. Smaller solutions are easier to manage.
Hope this helps!
Shad
Start with a single project. The only benefit in splitting your codebase into more projects is simply to improve build time.
When I have some reusable functionality that I really want to isolate from main project, I'll just start brand new solution for it.
In my team we have hundreds of shared DLLs, many of which also reference other DLLs that themselves reference other DLLs, and so on. We have started to use a 'Shared' directory for all the DLLs that we feel are generic enough to use in other projects, such as a database comms DLL.
The problem is that if one of the dlls all the way down the tree is changed, then everything that references it needs to be recompiled to avoid versioning issues (which occur at runtime).
To avoid this, there is now talk of adding all our 'shared' dlls into one big assembly, and anyone creating new apps simply reference that, and that alone.
This will obviously get bigger and bigger, and I'm not sure if this is the best way or not. Any thoughts please?
What we do is treat the maintenance of the shared DLLs as a project in itself, with its own source-control and everything. Then about twice a year, we do a 'release' of the shared DLLs to the public, with its own version number and everything. As long as you always use the DLLs as a 'set' (meaning all the ones you reference are from the same release) you're guaranteed not to have any dependency issues.
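One small illustration of the "use the DLLs as a set" idea (the version number below is arbitrary): stamping every shared project with the same assembly version at each release makes it obvious which set a consumer was built against.

    using System.Reflection;

    // Applied in each shared project and bumped together at every release of the set.
    [assembly: AssemblyVersion("3.1.0.0")]
    [assembly: AssemblyFileVersion("3.1.0.0")]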
It's most definitely not the best way to do it. I have a few "shared" DLLs at my job that are kind of like that. They get unwieldy and difficult (read: impossible) to make meaningful changes to because it becomes too difficult to ensure that changes don't break apps downstream, which seems like the exact opposite of what you're trying to do.
It sounds like what you really need to do is separate your concerns a little bit better. If all of these DLLs are referencing each other, they're probably too tightly coupled. A true "shared" DLL should be able to stand on its own, or as part of a packet of three or four that travel as a group. If your dependencies are actually preventing you from making changes, then your coupling strategy has gone horribly wrong.
Putting everything in one large DLL certainly isn't going to make anything better. In fact, probably the opposite. Once you've got everything in one DLL, the temptation will be there to couple everything within it even more tightly together, which will make it impossible to pull things apart later.
You can make one solution that includes all the connected projects, and when you need to release, just build that solution.
Update
As you say, the solution can't hold that many DLLs. On the other hand, you can write an external MSBuild script, or use CruiseControl.NET, which can handle such complicated tasks.
To quote from the GoF book, "Program to an interface, not an implementation." This could apply here to some of your libraries. You are already aware of how brittle your development becomes when you have tight coupling. Now what needs to be addressed is how to give yourself breathing room.
You can create an interface. This will provide a contract that any application can use to specify that a minimum set of functionality is available.
You can create a Service that implements an interface. This will allow you to provide what would be thought of as an addon or a plugin. This allows you to design towards a contract version with expectations that your tools will adhere to.
You can create a Service that only uses an interface. This will allow your application to send in any concrete implementation that adheres to a contract of design.
Products like development editors and web browsers use this approach to make some code reuse possible. Thank you. Good day.
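A compact sketch of the three options above, with all names made up for illustration:

    // 1. The interface: a contract guaranteeing a minimum set of functionality.
    public interface ISpellChecker
    {
        bool IsCorrect(string word);
    }

    // 2. A service that implements the interface - effectively an addon or plugin.
    public class EnglishSpellChecker : ISpellChecker
    {
        public bool IsCorrect(string word) { return word.Length > 0; /* real lookup goes here */ }
    }

    // 3. A service that only uses the interface, so any conforming implementation can be supplied.
    public class Editor
    {
        private readonly ISpellChecker _checker;
        public Editor(ISpellChecker checker) { _checker = checker; }
        public bool Check(string word) { return _checker.IsCorrect(word); }
    }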
Design Principles from Design Patterns
Plugin
...if there is such a thing. Here's an image of two approaches for structuring DLLs/references in a .NET application: http://www.experts-exchange.com/images/t80668/compArch.png. The app can be a website (it is in this case) or a winforms app. Each box represents a DLL. For the winforms app, just replace "webcontrols" with "winformcomponents".
The first (top) image is what I like. You might want to extend "some" of the base web controls and directly use others. The 2nd image makes you extend any web controls via interface. To me that seems overkill since you may want to simply use what is already there without modification. Which is better and what are the advantages/disadvantages?
The first image puts the lowest common constructs (exceptions, file IO, constants, etc.) into a common.dll. The 2nd image puts app business logic and common code into one DLL. Which is better, and what are the advantages/disadvantages of each approach?
Having lots of references is usually bad because loading DLLs has a non-negligible cost. It's not as elegant perhaps, but having fewer modules improves your performance. As so often in our craft, you have to find the balance between the elegance of total modularization and the harsh reality of performance. And as usual in our craft, you won't know where that balance lies until you start profiling to measure the performance of your application.
I think this is a really down to programmer preference.
It all boils down to dependencies, really. More things in one DLL means that DLL will naturally attract many more dependents.
I personally tend to follow along similar lines to the MS structure, for these reasons:
It makes it easier for newcomers to the custom "framework" to find what they want (e.g. CompName.Web.UI and CompName.Data).
It helps reduce the dependencies to "obvious" choices. I am not too keen on CompName.Common type DLL's because it does not clearly indicate possible dependents, whereas CompName.Web.UI suggests that it is likely to be used by any web apps.
Obvious size reduction, since DLL content will be more "relevant".
DLLs for tiers within an app make sense; the types within should only be those required by the business model, and other objects (such as utility, data access, etc.) should be in their own libraries.