I have a project called Data which is a data layer. In this project, all files just sit in the top-level folder: enumerations, POCOs, repositories, partial classes and so on.
If I want to move those files into subfolders, what would be the preferred name for each folder? Is there any convention?
The "Repositories" folder is pretty obvious, but where should I keep POCOs and enumerations?
Thanks
I currently tend to use the following approach (it changes based on the project) when naming assemblies/projects/namespaces in a SaaS/web-style project:
CompanyName.
  ProductName.
    Data.
    Business. (references Data)
    Model. (POCOs and interfaces - referenced by all)
    Services. (WCF service layer)
    ServiceClient. (referenced by web clients)
    Web. (web client business layer)
      ViewModel. (view-specific model)
      {client-facing product segment} [Commerce, CMS, CRM, Reporting, etc.]
To explain the Services/ServiceClient: I use an IoC container (currently StructureMap) that lets my web client either speak directly to the Business layer or be redirected through the ServiceClient and Services to the Business layer. This gives me the flexibility either to deploy my app layer alongside my web applications or to distribute pieces of my business layer (app layer) to different servers by way of WCF/SOA principles.
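A minimal sketch of that redirection. The `IOrderService` names are hypothetical, and the registration call uses StructureMap 2.x-style syntax, which varies between versions:

```csharp
using StructureMap;

public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// In-process implementation from the Business layer.
public class OrderService : IOrderService
{
    public string GetOrderStatus(int orderId) { return "pending"; }
}

// WCF proxy from the ServiceClient layer (stubbed here).
public class OrderServiceClient : IOrderService
{
    public string GetOrderStatus(int orderId) { return "pending"; }
}

public static class Bootstrapper
{
    public static void Configure(bool useRemoteServices)
    {
        // One flag at composition time decides whether the web client
        // talks to the Business layer directly or goes through WCF.
        ObjectFactory.Initialize(x =>
        {
            if (useRemoteServices)
                x.For<IOrderService>().Use<OrderServiceClient>();
            else
                x.For<IOrderService>().Use<OrderService>();
        });
    }
}
```

The web client only ever resolves `IOrderService`, so the deployment choice never touches consuming code.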
A good practice is to name the folder after the name of the project.
Design Guidelines for Developing Class Libraries has a set of Guidelines for Names
The last item should be of particular interest to you:
Names of Assemblies and DLLs
Names of Namespaces
Types and Namespaces
I tend to use project folders as a way of separating out sub-namespaces. So in your case, perhaps a folder called Repositories, containing classes in the Data.Repositories namespace. Note that for partial classes, each file needs to declare the same namespace.
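To illustrate the partial-class point (file and class names here are made up):

```csharp
// File: Data/Repositories/PersonRepository.cs
namespace Data.Repositories
{
    public partial class PersonRepository
    {
        public void Save() { /* ... */ }
    }
}

// File: Data/Repositories/PersonRepository.Queries.cs
// Both halves of the partial class must declare the same namespace,
// even if the files sit in different folders; otherwise the compiler
// treats them as two unrelated classes.
namespace Data.Repositories
{
    public partial class PersonRepository
    {
        public void Load() { /* ... */ }
    }
}
```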
Best practice is to divide entities into folders by object-model meaning, not by type.
If it is not clear how to group the classes by usage or object-model meaning, just leave them all in one folder. Subfolders add no value if they don't organise the classes in a meaningful way.
Dividing folders by type, e.g. enumerations, POCOs, repositories, partial classes etc., is not likely to be useful.
You may wish to use a subfolder for generated code that should not be edited.
Also remember you can have folders within Solution Explorer that are not part of the file system. Given how costly (in time) it is in some source control systems to move files between directories, I would consider starting off with solution folders until you are clear on the structure you want.
There is no need to put each enumeration in its own file; if an enumeration is only used by one class, it is valid to put it in the same file as that class. E.g. the PersonSex enumeration can go in Person.cs. Likewise, if you have a lot of small and closely related classes, consider putting them in the same file.
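For example, a single Person.cs might look like this:

```csharp
// Person.cs -- the enumeration lives next to the only class that uses it.
public enum PersonSex
{
    Unknown,
    Male,
    Female
}

public class Person
{
    public string Name { get; set; }
    public PersonSex Sex { get; set; }
}
```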
I will try to explain in as much detail as possible. There may be similar questions here on SO, and I've gone through all of those, but none of them have what I need.
So, I'm starting out with a large-scale C# MVC5-based web project and I want to organize everything in as decoupled a way as possible. For the database part I'm going to use the Data Access ORM from Telerik (previously known as OpenAccess), because I will be using MySQL for my project.
So far I have organized everything as below. I have defined solution level folders to divide the projects because I think there may be a possibility to have more projects in one layer in future.
**Solution**: td
- Business (folder)
  - td.core (project) (contains services and ViewModels)
  - td.interfaces (project)
- Data (folder)
  - td.data (project) (contains database models, i.e. Telerik, repository, context factory and unit-of-work classes)
- Presentation (folder)
  - td.ui (project) (MVC5 project; IoC implemented here)
- Shared (folder)
  - td.common (project)
Generally, when you bind models in your MVC project, if you have just one project in your solution, it works pretty easily without an issue.
i.e. in an MVC controller
var obj = new TempClass();
return View(obj.getAllUsers());
and then in the corresponding view you use this at the top
@model (model type here)
When I separate all these layers into their own projects as mentioned above, the data layer is the one communicating directly with the database. Hence I will generate the Telerik Data Access rlinq schema in my Data node, where it will also generate the classes for the tables in my database (default config).
Now, with the setup above, the controller is supposed to call the business layer to fetch the data, and the business layer in turn communicates with the Data node.
The question is that in the controller and in the view I will need the data types / references of the model I'm binding to. So should I keep my automatically generated classes in the Data node, or can I move ONLY the generated classes to the Shared node and then use those for binding in the controller/view? Which is the better practice? I don't want to reference the Data node directly from the controller; otherwise there is no point in separating everything as above.
Another quick question: I will be integrating many third-party APIs via REST/SOAP. In which layer do these best fit?
If anyone has any other Architectural suggestion or something that I'm missing here, please do suggest.
Thanks in advance everyone.
UPDATE!!!
Please see my updated architecture above.
Here's what I did so far.
I have added Repositories, Services and IoC.
In my Global.asax, I'm initializing the IoC which configures the Services etc for me.
My controller has an overloaded constructor now having the service from the business layer as the parameter.
Controller calls the service to get the data and the service calls the repository for it.
I have followed the generic-repository approach instead of creating repositories manually for each type.
For 3rd-party APIs, I will use the data layer; the business layer won't know where the data came from. It just asks for what it needs.
All this was made easier with the help of a dedicated Interfaces project, which is referenced from both the Business and Data layers as needed. Since both want to implement the same interface, I cannot declare it in either the Business or the Data layer: that would require a circular reference, which prevents the two projects from referencing each other.
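A condensed sketch of that arrangement. `User`, `IUserRepository` and the class names are placeholders, not the asker's actual types:

```csharp
using System.Collections.Generic;

// td.common (Shared): a plain model type every layer can see.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// td.interfaces: referenced by both td.core and td.data.
public interface IUserRepository
{
    IEnumerable<User> GetAllUsers();
}

// td.data: implements the interface and knows about the database/ORM.
public class UserRepository : IUserRepository
{
    public IEnumerable<User> GetAllUsers()
    {
        // query via the Telerik context here
        return new List<User>();
    }
}

// td.core: depends only on the interface; the IoC container supplies
// the td.data implementation at runtime, so td.core never references td.data.
public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<User> GetAllUsers()
    {
        return _repository.GetAllUsers();
    }
}
```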
So, with the help of the above changes, I can easily do what I want, and everything is working exactly as intended. Now the last question I have is:
Is there any flaw in this architecture?
For a domain-centric architecture where it's easy to add another type of UI or change persistence technology, and where business classes are easily testable in isolation, here's what I'd do:
Business layer with no dependencies. Defines business types and operations.
Data layer with data access objects/repositories that map from database to business types. You can also put your third party API accessors and adapters here. Depends on Business layer where repository interfaces are declared.
No Shared layer. Business types are the basic "currency" that flows through your system.
UI layer depending on the data access interfaces declared in the Business, but not on the Data layer. To decouple UI further, you can introduce an additional UI-agnostic Application layer.
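The key inversion in the list above is that the Business layer owns the repository interface while the Data layer implements it. A minimal sketch, with hypothetical names:

```csharp
// Business layer: owns both the business type and the port (interface).
public class Project
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProjectRepository
{
    Project GetById(int id);
}

// Data layer: references the Business layer and implements the port,
// mapping from the database (or a third-party API) to business types.
public class SqlProjectRepository : IProjectRepository
{
    public Project GetById(int id)
    {
        // map a database row to the business type here
        return new Project { Id = id, Name = "example" };
    }
}
```

The UI resolves `IProjectRepository` through the container, so it depends on the Business layer only; the Data layer can be swapped without touching either.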
You can read more about this at Onion Architecture or Hexagonal Architecture.
As it is, your architecture is pretty much data-driven (or Telerik-Data-driven), since the business and UI layers are tightly coupled to the Telerik schema. There's nothing wrong with that, but as I said in my comment, it favors quick development from an existing database schema over full domain decoupling, framework agnosticism and testability.
Whether your Telerik generated model lives in the Data or Shared module makes little difference in that scenario IMO. It is still the reference model throughout your application and your controllers will be coupled to it anyway. The only thing I would advise against is moving the generated files manually - if it can be automated all the way, definitely do it or don't move the files at all.
I'm neither an expert in your particular technologies, nor would I regard this as the ultimate answer, but I'll give you some hints about the possibilities you may have (depending on your technologies):
Business should have exclusive access to data
Currently I don't really see why your controller and view need access to any database-related stuff at all. Shouldn't your business layer handle all of that and hide it from the controller and view? But let's assume it's necessary for some reason.
Ways to split the data layer
You shouldn't move generated classes manually. You could change your generation settings to generate some of them elsewhere. But manually cherry-picking and moving them results in an architecture that is hard to maintain.
The cleaner solution would be to change the visibility of your classes. Can you generate classes with project or folder visibility instead? Or can you export only selected packages or classes in the project settings?
A workaround that requires more maintenance is a local extension: you could create new classes in your shared folder which derive from the data-layer classes.
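The local-extension idea in code. `GeneratedUser` is a stand-in for whatever class the ORM generates; only the shared project needs to reference the data layer:

```csharp
// Stand-in for a class generated by the ORM in the data layer.
public class GeneratedUser
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Local extension in the shared folder: derives from the generated class,
// so controllers and views bind to this type instead of referencing the
// generated one (and the data project) directly.
public class User : GeneratedUser
{
    // add view-friendly members here without touching generated code
}
```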
Structuring external APIs
Give them one or more projects of their own, so they are easier to change later. I know approaches where you have one main folder per API. This makes each of them easy to change, but clutters your workspace; the projects that actually matter may be only a handful among many. I normally prefer one folder containing all APIs. The APIs are then slightly harder to change, but your workspace stays clean. Your decision depends on two things: how often you change, add, remove or just study the APIs, and whether your IDE provides a way to "hide" folders/projects from your workspace.
Hope this helps a little :)
I am trying to apply the onion architecture by J. Palermo, but I have a few things I am struggling with.
I have a few parts and I don't know exactly where to put them.
I have a plugin engine which reads a directory and determines what to load and what to do.
I have some resource files with translations which are used in several projects. Where should I put these files?
I have some attributes which are used throughout the system. Where to put these?
I also have two 'base' controllers, some default results and views. Where should I put these?
All those items are used in several projects so I want to put the items at a central point.
My current solution structure looks like this:
Project.Core (contains the domain objects and interfaces of the repositories)
Project.Infrastructure (is the implementation of the core)
I am using MVC2.
I don't think it's something that the Onion architecture would solve by itself.
What I would do is put all these items in one or several projects within another solution, and build NuGet packages allowing me to deploy them wherever I need them.
This way I would have deployed items like your base controllers in your MVC project and plugin/translation stuff in your Infrastructure project.
That way, whenever you need those elements in a newly created project, you'll just have to deploy the package again.
Those items become independent, stored in a central point (a new sln), and have their own release cycle!
Assuming that I'm using no ORM, and following DDD, consider the following case:
A Project has a set of Files.
I've created Project and ProjectRepository classes, and File and FileRepository classes.
My original idea was to have all the File entities for a given Project passed to it in its constructor. This Project instance would, of course, be created through the ProjectRepository.
The problem is that if I have a million files (and although I won't have a million files, I'll have enough ones to make this take a while), I'll have to load them all, even when I don't really need them.
What's the standard approach to this? I can't think of anything better than to pass a FileRepository to each Project instance.
Since you mention DDD: if there are two repositories it indicates that there are two Aggregate Roots. The whole point of the Aggregate Root concept is that each root is responsible for its entire object graph.
If you try to mix Files into a Project object graph, then the ownership of the Files is ambiguous. In other words, don't do this:
project
- file
- file
- file
Either treat them as two (associated) object graphs, or remodel the API so that there's only a single Aggregate Root (repository).
There is no standard way. This is domain driven design, so it depends on the domain, if you ask me.
Maybe you could add some more domain to your design.
You only have two concepts: a Project and a File. But you say you don't want to load the files (assuming that a File always loads the content of the file).
So maybe you should think about a FileReference, which is a lightweight representation of a file (Name, Path, Size?).
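A sketch of that idea. The names and members are illustrative, not a prescribed design:

```csharp
using System.Collections.Generic;

// Lightweight representation of a file: metadata only, no content loaded.
public class FileReference
{
    public string Name { get; private set; }
    public string Path { get; private set; }
    public long Size { get; private set; }

    public FileReference(string name, string path, long size)
    {
        Name = name;
        Path = path;
        Size = size;
    }
}

// The Project holds cheap references; actual file content is fetched
// on demand through a repository or service when it is really needed.
public class Project
{
    private readonly List<FileReference> _files = new List<FileReference>();

    public Project(IEnumerable<FileReference> files)
    {
        _files.AddRange(files);
    }

    public IEnumerable<FileReference> Files
    {
        get { return _files; }
    }
}
```

Constructing a Project with a million `FileReference` objects is far cheaper than loading a million file contents up front.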
For me it sounds like your problem is the handling of a large set of files, not OOP.
You could implement a service layer which your clients interact with, and which coordinates the repositories and returns the domain entities. This would provide better separation of concerns; I personally don't think your clients should have access to your repositories.
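A minimal sketch of such a service layer, using stubbed-out types in place of the asker's actual Project/File classes and repositories:

```csharp
using System.Collections.Generic;

public class Project { public int Id { get; set; } }
public class File { public string Name { get; set; } }

public interface IProjectRepository { Project GetById(int id); }
public interface IFileRepository { IEnumerable<File> GetByProjectId(int projectId); }

// Clients talk only to this service; the repositories stay hidden behind it.
public class ProjectService
{
    private readonly IProjectRepository _projects;
    private readonly IFileRepository _files;

    public ProjectService(IProjectRepository projects, IFileRepository files)
    {
        _projects = projects;
        _files = files;
    }

    // Returns the project without touching its (potentially huge) file set.
    public Project GetProject(int id)
    {
        return _projects.GetById(id);
    }

    // Files are loaded only when a caller explicitly asks for them.
    public IEnumerable<File> GetProjectFiles(int id)
    {
        return _files.GetByProjectId(id);
    }
}
```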
I've a WCF Service with BLL, DLL and BE (Business Entities) on separate class libraries.
I would like to use the above BLL, DLL and BE for other project types such as console applications, web applications, Azure worker roles etc. The reason is that all these applications use the same data source and some of the same BEs.
Could anyone please suggest whether the above approach is the best pattern to use, or should I create a separate BLL and DLL for each project type?
Thankyou heaps.
There are a couple of ways of sharing the logic:
Reference the DLL in other projects directly. For shared code, adjust the output path to a shared directory. Compile the shared logic first, then in each project that needs it, just add a DLL reference.
Link the source files. Visual Studio allows you to link source files into other projects. I've done this a few times, but it can get a little confusing because the file is linked rather than copied: to make changes, update the original source file, and all the projects linking to it will pick up the change.
Implement contracts via interfaces. Instead of referencing code directly, each of the BLL, DLL and BE exposes an interface via a contracts DLL. The consuming project then references the contracts DLL (not the implementation DLL) and works against the interface. This is a loosely coupled model; you can use Unity, MEF or any other framework that binds the loosely coupled components together. The nice thing about this is that your code shares only the interface, not the implementation, so the implementation can change rather easily in the future.
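The contracts idea in miniature, with hypothetical names standing in for real BLL types:

```csharp
// Contracts assembly -- the only thing consumers reference.
public interface ICustomerLogic
{
    string GetCustomerName(int customerId);
}

// BLL assembly -- free to change as long as the contract holds.
public class CustomerLogic : ICustomerLogic
{
    public string GetCustomerName(int customerId)
    {
        return "Customer " + customerId;
    }
}

// Consumer (console app, web app, worker role, ...) -- wired up by
// Unity/MEF/another container at runtime; it never references the
// BLL assembly directly, only the contracts assembly.
public class ReportGenerator
{
    private readonly ICustomerLogic _logic;

    public ReportGenerator(ICustomerLogic logic)
    {
        _logic = logic;
    }

    public string Header(int customerId)
    {
        return "Report for " + _logic.GetCustomerName(customerId);
    }
}
```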
My advice: if your implementation changes frequently, go with the loosely coupled approach. If the shared logic will not change, go with the more tightly coupled first two options.
I have a project where some business logic is separated into a DLL project; this DLL contains the business logic of the software for a specific customer.
Now I have a problem: another client with different rules wants to use the software, so I need some way for the application to load the appropriate DLL according to the client using it, considering that these DLLs contain the same function names but different bodies.
I'm using C# (.NET 3.5); is there a way to do so?
Yes, you certainly can. You can branch the project, alter the implementation of the classes, keep the signatures of all the classes and class members the same, recompile, and your business logic will behave as you wish.
But, this is not good. You will have two different branches, with different implementations, for which you will have to keep the signatures in synch forever. And then you'll have another client, and another. This will be a nightmare that never ends.
Is it possible that the differing functionality can be separated out? You can:
put configuration in the database or in configuration files (probably XML). A lot of your app should work off tables or config files for exactly this reason.
implement plug-ins and providers for the places where the code needs to differ.
kind of old-school, but you can implement plug-and-play functionality using the part of CodeDom that compiles code (ignore the part about graphing code). You can then put functionality in easily edited text files.
take a look at the Managed Extensibility Framework, built for just this type of thing.
Code the business logic against an interface, e.g. IBusinessLogic.
You can keep both business logics in the same assembly, and use config based dependency injection to specify which business logic is used during the deployment to the customer.
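A sketch of config-based selection. The interface, implementations and the `"BusinessLogicType"` app-setting key are all hypothetical; only the deployed config file differs per customer:

```csharp
using System;
using System.Configuration;

public interface IBusinessLogic
{
    decimal CalculateDiscount(decimal amount);
}

// Both implementations ship in the same assembly.
public class ClientALogic : IBusinessLogic
{
    public decimal CalculateDiscount(decimal amount) { return amount * 0.10m; }
}

public class ClientBLogic : IBusinessLogic
{
    public decimal CalculateDiscount(decimal amount) { return amount * 0.15m; }
}

public static class LogicFactory
{
    public static IBusinessLogic Create()
    {
        // App.config chooses the implementation at deployment time, e.g.:
        // <add key="BusinessLogicType" value="ClientALogic" />
        string typeName = ConfigurationManager.AppSettings["BusinessLogicType"];
        Type type = Type.GetType(typeName);
        return (IBusinessLogic)Activator.CreateInstance(type);
    }
}
```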
If I understood your problem correctly, you are looking for business-logic customization. You can achieve it in several ways; I'll describe one of them here.
Create a folder in your application directory for customization DLLs. Create all your business objects through a wrapper, which first checks the customization DLLs for an appropriate class (using reflection) before creating the business object from the regular class. Hope this helps.
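One possible shape for such a wrapper. The folder name and factory API are assumptions, not a fixed convention:

```csharp
using System;
using System.IO;
using System.Reflection;

// Hypothetical factory: scans a "Customizations" folder for a DLL that
// provides an implementation of TInterface; falls back to the built-in
// default class when no customization is found.
public static class BusinessObjectFactory
{
    public static TInterface Create<TInterface, TDefault>()
        where TDefault : TInterface, new()
    {
        string dir = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, "Customizations");

        if (Directory.Exists(dir))
        {
            foreach (string dll in Directory.GetFiles(dir, "*.dll"))
            {
                Assembly assembly = Assembly.LoadFrom(dll);
                foreach (Type type in assembly.GetTypes())
                {
                    // First concrete type implementing the interface wins.
                    if (typeof(TInterface).IsAssignableFrom(type)
                        && !type.IsAbstract && !type.IsInterface)
                    {
                        return (TInterface)Activator.CreateInstance(type);
                    }
                }
            }
        }

        return new TDefault(); // regular, built-in class
    }
}
```

Deploying a different customer's DLL into the Customizations folder then changes behavior without recompiling the application.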