I am handling a project that requires restructuring a set of projects belonging to a website. It has tightly coupled dependencies between the web application and the referenced projects. I would appreciate ideas and tools for refactoring these calls into more maintainable code.
The main issue is that calls to apply promotions (the promotion class exposes more than 4 different methods) are consumed from various functions and cannot easily be streamlined.
Please help me with best practices here.
Sorry, I could not share much code due to restrictions, but I hope the following helps.
My project uses NHibernate for data access.
Project A - web project: aspx and ascx with code-behind
Project B - contains class definitions consumed by Project C (data operation classes)
Project C - business logic with save-to-database methods (customer, order, promotion, etc.)
The problem is with Project C. I am not sure whether it does too many things or needs to be broken down, but there are already many other sub-projects.
Project C supports things like saving details to the DB based on parameters. Some of its class methods call the promotion logic based on certain conditions. I would like to make things more robust; sample code below.
Project C
Class: OrderLogic

public void UpdateOrderItem(....)
{
    ....
    Order order = itm.Order;
    promoOrderSyncher.AddOrderItemToRawOrderAndRunPromosOnOrder(itm, ref order);
    orderItemRepository.SaveOrUpdate(itm);
}
Just like in the class above, the promotion logic is called from many other places. I would like to streamline these calls to the promotion class, so I am looking for suitable concepts.
Most important in any project, especially web projects that often need to communicate with a persistence layer, is to leverage dependency injection.
But before you do that, you need to make sure that the classes that provide services to communicate with the database all have an interface. Typically these classes are called data access objects (DAO). So you'd have something like:
public interface IUserDao
{
    User GetUserById(int id);
}

public class UserDao : IUserDao
{
    public User GetUserById(int id)
    {
        ...
    }
}
As a rule of thumb, if these data access objects contain conditional logic, you should probably refactor that out into a more business-oriented service (class). It's best that your interface to the database contains as little logic as possible. It has to be thin, because this layer is hard to unit test due to its dependency on the database.
Once you've done this, use a dependency injection container and register IUserDao with its implementation.
Moving forward, you'll be able to create unit tests that completely mock the database by mocking the UserDao implementation.
May I suggest:
Microsoft Unity for dependency injection
FakeItEasy for unit testing (mocking framework)
Other fine ones:
Castle Windsor (DI)
Ninject (DI)
RhinoMocks (unit testing / mocking framework)
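As a rough sketch of how the wiring could look, assuming the IUserDao/UserDao types above (UserService is a hypothetical consumer, and the Unity namespace varies between versions):

using Microsoft.Practices.Unity; // Unity 2-4; newer releases use the "Unity" namespace
using FakeItEasy;

public static class CompositionRoot
{
    // Register the interface against its implementation once, at startup.
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();
        container.RegisterType<IUserDao, UserDao>();
        return container;
    }
}

public class UserServiceTests
{
    // In a unit test, fake the DAO so no database is ever touched.
    public void GetUserById_ReturnsUser()
    {
        IUserDao fakeDao = A.Fake<IUserDao>();
        A.CallTo(() => fakeDao.GetUserById(42)).Returns(new User());

        var service = new UserService(fakeDao); // hypothetical class under test
        service.GetProfile(42);

        A.CallTo(() => fakeDao.GetUserById(42)).MustHaveHappened();
    }
}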
Good luck!
Hope it helps.
I strongly suggest not starting to restructure your application without solid knowledge of the SOLID principles and dependency injection. I made this mistake, and now I have an application full of service locator (anti)pattern implementations that are not making my life any simpler than before.
I suggest you read at least the following books before starting:
http://www.amazon.com/Agile-Principles-Patterns-Practices-C/dp/0131857258 (for SOLID principles)
http://www.manning.com/seemann/ (for .NET dependency injection)
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052 (for working with legacy code)
A possible strategy is not to refactor just for the sake of it, but to consider refactoring only the parts that are touched more than others. If something works and nobody is going to change it, there's no need to refactor it; that can be a waste of time.
Good luck!
I am trying to implement Clean Architecture, and my current understanding is that it is meant to increase loose coupling and database independence, mostly through dependency injection and dependency inversion. I am currently using EF Core in the infrastructure layer, with MassTransit (mediator & messaging) in the application layer. I use a generic repository that sits in the infrastructure layer, where EF-related methods like "ToListAsync" and "FindAsync" are expressed, and I access them from the application layer through an interface. My LINQ specification code also sits in the application layer.
This all made sense as long as I assumed that the reason to move the EF dependency to the infrastructure layer was to make my code framework- and database-independent, and that LINQ would work with other databases or with another DAL library. It just so happens that I recently decided to replace EF with Dapper, and all this talk about database independence and loose coupling is starting to make little sense to me. I still have to rewrite everything from the ground up, since LINQ queries can't be used with Dapper, which means the extension methods and other abstractions I built are now useless. Besides, there are many other very relevant (NoSQL) databases that have no LINQ mapping.
Now here's my question: how do I ensure that my Core project (domain and application layers) stays agnostic of anything that relates to the persistence layer? That includes not only EF but also LINQ queries.
Why on earth do we need to make the application layer seemingly "independent" (but not really) of EF Core when it makes no difference at the end of the day? It comes with no added value at all. The reliance of the application code on the database and data access libraries is still there.
You almost got it, but instead of concluding that you did something wrong, you concluded that there's something wrong with Clean Architecture. But, as you say, why would you make it seemingly independent but not really? Well, you don't! You have to really make it independent.
There are two important things to note from the description of your implementation:
Using (EF Core) LINQ in the application layer. Querying a DB using LINQ is a very specific EF thing. The fact that you somehow managed to hide part of the expression (ToListAsync) in the infrastructure layer doesn't mean that you have abstracted anything: your application code is custom-made for EF and EF only.
You are using a generic repository. A generic repository, even behind a (single) interface, is not Clean Architecture friendly. In Clean Architecture it's the Core, or business logic code, which defines a very concrete interface for each specific scenario. As all scenarios are different, you can't create a (single) generic repository interface which covers them all without forcing some scenarios to depend on functionality that they don't need. This is not only un-SOLID, it can also complicate your life a lot.
For example, as Clean Architecture promises, you should be able to replace your DB, but not necessarily as a big-bang change. You should be able to move your Products (for example) to MongoDB while leaving the rest of the application in SQL Server. If all your data access is behind a generic repository, you are forced to change everything at once. Instead, your business logic code should define an interface for every use case (IProductsRepository, ICustomersRepository, etc.). Each interface will have only the concrete methods required in each case, no more. Note that, if you wanted, you could still implement all interfaces with a single class, or with a lot of shared code in a base class in the infrastructure layer; but you can always move one interface to a completely different implementation. Of course, the interface has to abstract the whole data access implementation, not only the ToListAsync part.
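To make that concrete, here is a minimal sketch of such per-use-case interfaces; every name (Product, Customer, the method sets) is illustrative, not prescriptive:

using System.Collections.Generic;
using System.Threading.Tasks;

// Defined in Core; no EF, no IQueryable, no LINQ provider anywhere in sight.
public interface IProductsRepository
{
    Task<Product> GetByIdAsync(int id);
    Task<IReadOnlyList<Product>> GetDiscontinuedAsync();
    Task AddAsync(Product product);
}

public interface ICustomersRepository
{
    Task<Customer> GetByIdAsync(int id);
}

public class Product { public int Id { get; set; } }
public class Customer { public int Id { get; set; } }

The infrastructure layer can implement both with EF today, and later move IProductsRepository alone to a MongoDB-backed implementation without touching ICustomersRepository.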
Maximizing business separation from storage layer technologies has always been one of my concerns. Just try to change how you define your repositories. Take a look at a very simple library I use: https://github.com/mzand111/MZBase. MZBase.Infrastructure (also available as a NuGet package) is a .NET Standard library which contains some base interfaces and classes.
Also, to handle paging, sorting and filtering using only basic data types, I have developed another library, https://github.com/mzand111/MZSimpleDynamicLinq, which provides two main classes to use in your repositories: LinqDataRequest and LinqDataResult. If you take a look at ILDRCompatibleRepositoryAsync, it has minimal technology dependence, and implementing this interface is possible in most ORM technologies.
The idea of Clean Architecture is to separate your business logic so thoroughly from any external service, DB or IO that you do NOT have to rewrite anything in your business logic when you replace one technology with another.
If you still have to rewrite parts of your business logic, then it is obviously not separated properly. If your LINQ statements only work when the implementation is EF, then the interface adapter is not really an adapter, and the business logic is making an assumption about the implementation of the DAL.
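As an illustration of that hidden assumption (hypothetical names): an interface exposing IQueryable silently requires a LINQ provider such as EF Core, whereas an intent-revealing interface can be implemented with Dapper or a NoSQL driver just as well:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Leaky: callers compose LINQ against it, so only LINQ-capable ORMs can implement it.
public interface ILeakyRepository<T>
{
    IQueryable<T> Query();
}

// Abstracted: the whole data access sits behind the method, not just ToListAsync.
public interface IOrderRepository
{
    Task<Order> GetByIdAsync(int id);
    Task<IReadOnlyList<Order>> GetUnshippedAsync();
}

public class Order { public int Id { get; set; } }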
Additionally this thread might be interesting in this context: How can Clean Architecture's interface adapters adapt interfaces if they cannot know the details of the infrastructure they are adapting?
I'm working on a WPF desktop application in the MVVM pattern. At the moment the app is fairly simple (I can't really explain the nature of the product), but eventually it is expected to grow into a more complex app.
The WPF app has a local database and also connects to a REST service.
Development time is not really the top concern; maintainability and testability are.
Will use an IoC container and DI.
Planning on one ViewModel per View.
I don't want to use any WPF/MVVM frameworks, as this is my first WPF-MVVM app (just like coding in bare DOM JavaScript first, even when jQuery exists).
I decided to use multiple projects, and here's what I came up with so far:
Product.Windows.Common (Utils, Logging, Helpers, etc.)
Product.Windows.Entities (Database and REST entities)
Product.Windows.Contracts (All Interfaces will reside in this namespace/project)
Product.Windows.Data (for local Database)
Product.Windows.ServiceClients (for REST client)
Product.Windows.App (the main WPF project, contains the Views/XAML)
Product.Windows.Models (INotifyPropertyChanged)
Product.Windows.ViewModels (INotifyPropertyChanged and ICommands)
Product.Windows.Tests (unit tests)
I just want to ask:
Is this architecture a bit overkill?
Do I need to create a Product.Windows.Business for the business logic? Or should I just put business logic in the ViewModels?
Thank you in advance :)
I'm currently working on an app with a similar structure, and the project structure looks OK. In my project I did things a little differently, though.
The Data and ServiceClients assemblies might represent your DAL. It's good that these are separated into different assemblies: in the Data assembly you'll have the repositories, and in ServiceClients you'll have the service agents. The Entities and Contracts assemblies might represent your BL. Here, I think you could have used a single assembly, and this assembly should be referenced by both DAL assemblies.
It's good that logging is implemented separately, and if you have security, it should also be implemented in Common. From what I've read recently in a great book, Dependency Injection in .NET, utils & helpers are a result of poor/incomplete design; these classes usually contain static methods. But I don't think this is relevant to the discussion.
On my projects I usually implement the VMs in the same assembly as the views. This includes the RelayCommand (the ICommand implementation) and the ViewModelBase that implements INPC, sketched below.
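For reference, here is a minimal sketch of what those two types usually look like; the exact details (nullability annotations, CommandManager integration) vary from project to project:

using System;
using System.ComponentModel;
using System.Runtime.CompilerServices;
using System.Windows.Input;

// Base class raising INotifyPropertyChanged for all view models.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

// Straightforward ICommand implementation delegating to view model code.
public sealed class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        _execute = execute ?? throw new ArgumentNullException(nameof(execute));
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) => _canExecute == null || _canExecute(parameter);
    public void Execute(object parameter) => _execute(parameter);
    public void RaiseCanExecuteChanged() => CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}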
I recently viewed a presentation by Robert Martin. From what I can remember, he said that an application's architecture should scream what the application does. Classes should not be grouped into projects or folders named after a pattern (MVC or MVVM); that tells us nothing about what the app does. Classes should be grouped by what they do, by the features they implement. I'm not at that phase yet; I'm still grouping things like you :).
I see that you only have a single test project. This might be fine if you add directories in this project for all the assemblies you are planning to test; otherwise it will be a little hard to find the tests for a particular assembly. Alternatively, you might want to add a test project for every assembly you plan to test.
You can organize your components as you want, but I prefer the following structure:
Create two class libraries (DLLs) for each screen in your project: one holds the views + view models for that screen, and the other holds its business logic. That way you can reuse a view and view model with different business logic, you can change or update each screen's business/view separately, and an update takes effect simply by replacing a DLL.
Use all of your components except:
Product.Windows.ViewModels
Product.Windows.Models
It's a bit of overkill, but I think only you can vouch for your own program. I would put Contracts inside Common and Entities (depending on functionality). Also, I don't think you need to completely separate the View and the ViewModel; the changing/debugging process is also easier if they are in the same project.
If your program is client-side only, you can have the BL in the ViewModel (at least if it's not TOO complicated to follow). If you have a main server and multiple clients, then you should not implement ANY logic (except cosmetics) in your ViewModel; and yes, create a new project.
I am in the very early phases of a WinForms product rewrite, and I am trying to determine the "best" strategy for implementing a new solution structure. The current solution contains 50+ projects, and (for the most part) it contains all of the logic required to run the application. Several of the projects do have dependencies to projects that exist in a separate "Framework" solution, but that is being replaced / modified as well.
As I said, the current solution produces a WinForms product; furthermore everything is tightly coupled together, front to back. Additionally, we want to start offering a Web / Mobile solution in addition to/alongside our WinForms product. Because of the desired changes, I am considering breaking this out into several separate solutions. Details follow.
Product.Framework Solution becomes Product.Core - A shared set of assemblies containing common interfaces, enums, structs, "helpers", etc.
Product.Windows - MVC pattern. Contains all views and business logic necessary to run the WinForms product.
Product.Web - MVC Pattern. Contains all views and business logic necessary to run the Web product.
Product.Services - Hostable WCF services. Contains the public service layer that Web/Win/Mobile call into with the underlying DAL.
This is where I am looking for a sanity check: I am planning on implementing DI/IoC in both the WinForms and Web projects (I am not so much worried about injecting into the WCF services). In my mind, it makes sense to have interfaces for all the concrete entities (representations of database tables) and services in the Product.Core solution. The only reference I would possibly need to Product.Services in the Web and WinForms solutions would be to register the concrete types with the container.
Does this make sense? Is there something glaring that I have overlooked? Thank you for any and all feedback!
The way I think about solutions is "all of the things necessary to run my program". In your case, your WinForms application is the final step; the goal is to be able to run the output executable from that project. The solution, then, should consist of every project necessary to build that executable from scratch. The last thing you want is a new developer checking out your source code from version control and then needing tribal knowledge to figure out which solutions must be built in which order and how to tie them all together.
Now you mentioned that you may be adding some more final step applications such as a web application. Assuming that the dependencies for your WinForms application are similar to your web application, I am of the opinion that you should just add the web application to the same solution as your WinForms application. However, sometimes it makes sense to have a different solution for each, and then have each solution reference a similar set of projects.
One of the key things to remember is that when a project dependency is introduced, you will need to update all of your solutions to have that new dependency. This is the primary reason why I tend to have a single solution for most things.
Don't forget, in Visual Studio you can have solution folders to help you visually manage the solution as a whole. Also, you can utilize the build configurations and dependency tree such that building doesn't require compiling everything when you only need one final project built. Finally, you can utilize the Set Startup Project option to switch between which final output you want to work with.
Remember, any given project can very easily be part of multiple solutions. If you have a core set of frameworks that are used across an array of different products you can include the framework projects in each "Product" solution. If the framework is not primarily worked on by the same team that uses it you may want to consider splitting the framework into a separate repository and only distribute the output assemblies (which would be committed into other repositories and referenced in your other solutions).
In general, my opinion is to have a single solution for everything and utilize various features of Visual Studio so managing such a large and complex solution isn't very painful. The one thing I would advise against is having the build of one solution depend on the build output of another solution. If you are doing this, the two projects should reside in separate repositories and the build output should be copied and committed as needed (basically treat the output as a 3rd party library).
There is no "best" answer here. Here is an observation from your question:
Why do you need an interface for all concrete entities?
It appears that these are just data model classes. An interface is only warranted if you are looking to mock these classes, or to write generic methods (or classes) that operate on a whole category of classes, such as all data model classes implementing an IEntity interface, so that you can constrain your generic method/class by the data model type.
Example:
public void MyGenericMethod<T>(T t) where T : IEntity { /* do something */ }
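A self-contained version of that idea (IEntity, Order and the helper class are all hypothetical):

using System;

// Hypothetical marker interface implemented by all data model classes.
public interface IEntity
{
    int Id { get; }
}

public class Order : IEntity
{
    public int Id { get; set; }
}

public static class EntityHelpers
{
    // The constraint restricts T to data model types.
    public static void MyGenericMethod<T>(T entity) where T : IEntity
    {
        Console.WriteLine("Processing entity " + entity.Id);
    }
}

// Usage: EntityHelpers.MyGenericMethod(new Order { Id = 1 });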
It sounds like the refactoring/restructuring you are doing will have a major impact on the business you are working for, and the design/architectural decisions made now will need to be sustainable for the business as it evolves. I would highly suggest involving an architect in this process who can understand the needs and nature of the business and come up with a game plan accordingly.
While optimizing the architecture of the applications on our website, I came across a problem that I don't know the best solution for.
At the moment we have a small DLL based on this structure:
Database <-> DAL <-> BLL
The DAL uses business objects, which it passes to the BLL, which in turn passes them to the applications that use this DLL.
Only the BLL is public, so any application that includes this DLL can see the BLL.
In the beginning, this was a good solution for our company.
But as we add more and more applications on top of that DLL, the BLL keeps getting bigger. Now we don't want some applications to be able to see BLL logic belonging to other applications.
I don't know what the best solution for that is.
The first thing I thought of was to move and separate the BLL into other DLLs which I can include in my applications. But then the DAL must be public so the other DLLs can get to the data... and that does not seem like a good solution to me.
My other solution is just to separate the BLL into different namespaces and include in each application only the namespaces it needs. But with this solution you can still directly access the other BLLs if you want to.
So I'm asking for your opinions.
You should have a distinct BLL and DAL for each "segment" of business... for example:
MyCompany.HumanResources.BLL
MyCompany.Insurance.BLL
MyCompany.Accounting.BLL
I agree with @MikeC: separate the BLL into namespaces for each segment. Also separate the DAL the same way:
MyCompany.HumanResources.DAL
MyCompany.Insurance.DAL
Another thing to do is to separate the DLLs. That way, you don't need to make the DAL public: a service layer (WCF or a web service) would be responsible for the BLL and DAL of each system, making support and maintenance easier. I don't know if it's the most affordable approach for your company right now (in terms of complexity), but it's a better approach for design purposes.
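A minimal sketch of one such service contract, assuming WCF (all names are illustrative):

using System.Runtime.Serialization;
using System.ServiceModel;

// One public contract per segment; clients reference this, never the BLL/DAL assemblies.
[ServiceContract]
public interface IHumanResourcesService
{
    [OperationContract]
    Employee GetEmployee(int id);
}

[DataContract]
public class Employee
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}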
In the past, the applications developed here at the company used a component architecture, sharing components across applications. We realized that it wasn't the best design, yet today many systems (in the production environment) still use that approach.
Furthermore, if you want more complexity, you could also build a generic dbHelper component responsible for data access, including operations that control connections, commands and transactions, thereby preventing rewritten code. That assembly could make use of the Enterprise Library or other components. An example operation could be:
public DbCommand CreateCommand()
{
    DbCommand command = this._baseConnection.CreateCommand();
    if (this._baseCommand.Transaction != null)
    {
        command.Transaction = this._baseCommand.Transaction;
    }
    return command;
}
You could make it virtual, implement a SqlCommand-based CreateCommand, and so on.
Remember: the generic dbHelper idea I described is just an idea!
I suggest you separate your business logic into different DLLs according to its purpose (in line with the previous post). These classes will implement specific interfaces, and those interfaces will be declared on the business logic consumer's side. Then I suggest you implement containers (see Inversion of Control) to resolve the DLL implementations. This will allow you to separate the business logic's implementation from its consumption, and you will be able to replace one implementation with another without difficulty.
I favor using a provider with IoC over consuming business manager classes directly (think about the references, which can turn into a nightmare). This solution resolves the problem of DLL isolation and their optimized consumption.
It sounds like you have common business logic that applies to your organization in general, plus more specific logic per section or department. You could set up your code so that each department only depends on its specific logic, which behind the scenes uses the generic functionality in the "base" logic. To that end, you could set up the following projects:
Business.BLL
Business.Finance.BLL
Business.IT.BLL
(etc, ad infinitum, and so on...)
Note that each of these can be a separate project which compiles to its own assembly; a department would only need to use its own assembly.
As far as data access goes, you can keep generic data access routines in your base BLL. Your specific BLLs can have their own specialized queries that are funnelled to the base BLL, which in turn uses the generic DAL and returns results back up the chain, as sketched below.
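A rough sketch of that funnelling (all names hypothetical):

using System.Collections.Generic;

// Base BLL: generic data access routines shared by every department.
public abstract class BaseLogic
{
    // In a real system this would delegate to the generic DAL.
    protected IList<T> Query<T>(string criteria)
    {
        return new List<T>();
    }
}

// Department-specific BLL: specialized queries funnelled through the base.
public class FinanceLogic : BaseLogic
{
    public IList<Invoice> GetOverdueInvoices()
    {
        return Query<Invoice>("DueDate < GETDATE() AND Paid = 0");
    }
}

public class Invoice
{
    public int Id { get; set; }
}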
We are working on a middle-sized project (3 developers, over more than 6 months) and need to make the following decision: we'd like to have interfaces separated from their concrete implementations. The first step is to store each interface in a separate file.
We'd like to go further and separate things even more: one project (CSPROJ) with the interface in one .CS file, plus another .CS file with helper classes (public classes used with this interface, some enums, etc.). Then another project (CSPROJ) with a factory, the concrete interface implementation and other "worker" classes.
Any class that wants to create an object implementing this interface must reference the first project, which contains the interfaces and public classes, not the implementation itself.
This solution has one big disadvantage: it doubles the number of assemblies, because for every "normal" project you would have one project with the interfaces and one with the implementation.
What would you recommend? Do you think it's a good idea to place all interfaces in one separate project, rather than giving each interface its own project?
I would distinguish between interfaces like this:
Standalone interfaces whose purpose you can describe without talking about the rest of your project. Put these in a single dedicated "interface assembly", which is probably referenced by all other assemblies in your project. Typical examples: ILogger, IFileSystem, IServiceLocator.
Class coupled interfaces which really only make sense in the context of your project's classes. Put these in the same assembly as the classes they are coupled to.
An example: suppose your domain model has a Banana class. If you retrieve bananas through an IBananaRepository interface, then that interface is tightly coupled to bananas. It is impossible to implement or use the interface without knowing something about bananas. Therefore it is only logical that the interface resides in the same assembly as Banana.
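A sketch of that example (illustrative):

// The interface cannot even be declared without knowing Banana,
// so both naturally live in the same (domain) assembly.
public class Banana
{
    public int Id { get; set; }
}

public interface IBananaRepository
{
    Banana GetById(int id);
    void Add(Banana banana);
}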
The previous example has a technical coupling, but the coupling might be merely a logical one. For example, an IFecesThrowingTarget interface may only make sense as a collaborator of the Monkey class, even if the interface declaration has no technical link to Monkey.
My answer does depend on the notion that it's okay to have some coupling to classes. Hiding everything behind an interface would be a mistake. Sometimes it's okay to just "new up" a class, instead of injecting it or creating it via a factory.
Yes, I think this is a good idea. Actually, we do it here all the time, and we eventually have to do it because of a simple reason:
We use Remoting to access server functionality. So the Remote Objects on the server need to implement the interfaces and the client code has to have access to the interfaces to use the remote objects.
In general, I think you are more loosely coupled when you put the interfaces in a separate project, so just go along and do it. It isn't really a problem to have 2 assemblies, is it?
ADDITION:
Just crossed my mind: By putting the interfaces in a separate assembly, you additionally get the benefit of being able to reuse the interfaces if a few of them are general enough.
I think you should first consider whether ALL interfaces belong to the 'public interface' of your project.
If they are to be shared by multiple projects, executables and/or services, I think it's fair to put them into a separate assembly.
However, if they are for internal use only and exist merely for your convenience, you could choose to keep them in the same assembly as the implementation, keeping the overall number of assemblies relatively low.
I wouldn't do it unless it offers a proven benefit for your application's architecture.
It's good to keep an eye on the number of assemblies you're creating. Even if an interface and its implementation are in the same assembly, you can still achieve the decoupling you rightly seek with a little discipline.
If an implementation of an interface ends up having a lot of dependencies (on other assemblies, etc.), then having the interface in an isolated assembly can simplify life for higher-level consumers: they can reference the interface without inadvertently becoming dependent on the specific implementation's dependencies.
We used to have quite a number of separate assemblies in our shared code. Over time, we found that we almost invariably referenced these in groups. This made more work for the developers, and we had to hunt to find what assembly a class or interface was in. We ended up combining some of these assemblies based on usage patterns. Life got easier.
There are a lot of considerations here - are you writing a library for developers, are you deploying the DLLs to offsite customers, are you using remoting (thanks, Maximilian Mayerl) or writing WCF services, etc. There is no one right answer - it depends.
In general I agree with Jeff Sternal - don't break up the assemblies unless it offers a proven benefit.
There are pros and cons to the approach, and you will also need to temper the decision with how it fits into your overall architecture.
On the "pro" side, you can achieve a level of separation that helps enforce correct implementations of the interfaces. Consider that if you have junior- or mid-level developers working on implementations, the interfaces themselves can be defined in a project to which they only have read access. Perhaps a senior developer, team lead, or architect is responsible for the design and maintenance of the interfaces. If the interfaces are used on multiple projects, this helps mitigate the risk of unintentional breaking changes rippling into other projects while working in just one. Also, if you work with third-party vendors to whom you distribute an API, packaging the interfaces separately is a very good thing to do.
Obviously, there are some downsides. The interface assembly does not contain executable code, and some shops I have worked at frowned upon assemblies without functionality, regardless of the reason. There is definitely additional overhead. And depending on how you set up your physical file and namespace structure, you might end up with multiple assemblies doing the same thing (although this is not required).
On a semi-random note, make sure to document your interfaces well. Documentation inheritance from interfaces using GhostDoc is a beautiful thing.
This is a good idea, and I appreciate some of the distinctions in the accepted answer. Since both enumerations and especially interfaces are by their very nature dependency-less, they have special properties: they are immune to circular dependencies, and even to the merely complex dependency graphs that make a system "brittle". A co-worker of mine once called a similar technique the "memento pattern" and never failed to point out a useful application of it.
Put an interface into a project that already has many dependencies, and that interface, at least with respect to that project, comes with all of the project's dependencies. Do this often and you're more likely to face circular dependencies; the temptation is then to compensate with patches that wouldn't otherwise be needed.
It's as if coupling interfaces to projects with many dependencies contaminates them. The design intent of interfaces is to decouple, so in most cases it makes little sense to couple them to classes.