I have seen the builder pattern used somewhere to create DML SQL statements, and I was wondering which pattern(s) would be more appropriate for building SQL DDL statements.
I am thinking about a simple program (a DB tool, solely for self-education purposes) that would dynamically create simple SQL DDL statements, and I am not sure which patterns I should consider.
The factory pattern lets me decouple client code from the concrete database provider class library, so I suppose it's a clear choice here (please correct me if I am wrong). The decorator pattern was my first choice for building SQL statements, but after coding some examples and then reading this answer I am almost sure I shouldn't be using a decorator, as I am building objects rather than decorating already created objects.
So, which patterns should I consider, and why are they good or better in this case?
Updated for clarification.
The "core" of your DB tool needs to represent the knowledge it possesses in a DB-independent form, and when the time comes a database-specific code takes over and translates that knowledge into DB-specific DDL.
You'll probably need some sort of dependency injection to accomplish that. The basic idea is this: the core of your application works with interfaces only and never knows anything beyond what is declared in these interfaces. At run-time, DB-specific objects implementing these interfaces are instantiated and "injected" into the core. Create than blindly "calls" them as it would any other set of objects implementing the same set of interfaces.
If another DB needs to be supported, juts make another set of classes that implement these interfaces and instantiate them in run-time.
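A minimal sketch of the idea (the names IDdlGenerator, TableDefinition, ColumnDefinition and SqlServerDdlGenerator are all made up for illustration):

    using System.Collections.Generic;
    using System.Linq;

    // The core only ever sees this interface; it knows nothing about any concrete database.
    public interface IDdlGenerator
    {
        string CreateTable(TableDefinition table);
    }

    // A database-independent description of what the core knows.
    public class TableDefinition
    {
        public string Name { get; set; }
        public List<ColumnDefinition> Columns { get; set; }
    }

    public class ColumnDefinition
    {
        public string Name { get; set; }
        public string LogicalType { get; set; } // e.g. "int", "string"
    }

    // One DB-specific implementation; supporting another DB means writing another class like this.
    public class SqlServerDdlGenerator : IDdlGenerator
    {
        public string CreateTable(TableDefinition table)
        {
            var columns = table.Columns
                .Select(c => "[" + c.Name + "] " + MapType(c.LogicalType))
                .ToArray();
            return "CREATE TABLE [" + table.Name + "] (" + string.Join(", ", columns) + ");";
        }

        // Naive type mapping, purely for illustration.
        private static string MapType(string logicalType)
        {
            return logicalType == "int" ? "INT" : "NVARCHAR(50)";
        }
    }

    // At startup, a concrete generator is chosen and "injected" into the core.
    public class DbToolCore
    {
        private readonly IDdlGenerator _ddl;
        public DbToolCore(IDdlGenerator ddl) { _ddl = ddl; }

        public string BuildCreateScript(TableDefinition table)
        {
            return _ddl.CreateTable(table);
        }
    }

A PostgreSQL or Oracle version would then just be another implementation of IDdlGenerator wired in at run time.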
Of course this is all just a theory. You'll find that there are many nuances between different database systems and I suspect it will be hard for you to represent all that in a completely generalized way...
There is this generic repository implementation
http://www.itworld.com/development/409087/generic-repository-net-entity-framework-6-async-operations
By the looks of it, it seems that I can just have a single generic repository for my whole project, and for almost all of the entities in the database it will work fine. For the ones where it doesn't, I can create a more specific repository, e.g. a MembershipRepository which derives from the base repository and overrides methods as needed, such as Find.
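To make the idea concrete, here is roughly what I have in mind (EF 6 style; the Membership entity and its properties are just placeholders):

    using System.Data.Entity;
    using System.Linq;

    // One generic repository that covers the common cases.
    public class Repository<T> where T : class
    {
        protected readonly DbContext Context;
        public Repository(DbContext context) { Context = context; }

        public virtual T Find(int id) { return Context.Set<T>().Find(id); }
        public virtual IQueryable<T> GetAll() { return Context.Set<T>(); }
        public virtual void Add(T entity) { Context.Set<T>().Add(entity); }
    }

    // Only the entities that need special behaviour get their own repository.
    public class MembershipRepository : Repository<Membership>
    {
        public MembershipRepository(DbContext context) : base(context) { }

        // Override Find to eagerly load the related user, for example.
        public override Membership Find(int id)
        {
            return Context.Set<Membership>()
                          .Include(m => m.User)
                          .FirstOrDefault(m => m.Id == id);
        }
    }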
Now one could also write a generic service class too, similar to the above, and then create only a few more specific services.
That will drastically reduce the project size. No need to write redundant repositories per entity, and a much smaller number of service layer classes.
Surely it can't be that simple. Is there a catch to this? Let's ignore for a moment that Entity Framework has the repository + unit of work pattern built in, so the repository pattern isn't strictly needed.
We do.
I am torn about it, honestly. For smaller domains it's perfectly fine and works a treat. For larger ones (like the one I am working on currently), your repository can never really be generic enough to warrant a single one.
For example, the generic repository in the code base that I currently work with is now littered with all sorts of very specific methods for things like eager fetching, paging, etc. It's much more than what it started out as. Looking back at the revision history, it once had only GetAll, GetById, Create and Update methods. Now it has things like GetAllEagerFetch with overloads for various JOIN types, GetAllPaged, GetAllPagedEagerFetch, DeleteById, ExecuteStoredProcedure, ExecuteSql (yuck), etc. There is a lot more.
One way around this is to perhaps follow the Interface Segregation Principle so that your repository can be huge and generic but consumers only care about what they need to care about. I don't particularly like that though.
That being said, we have moved away from a Repository-style setup in more recent projects. We prefer a CQRS setup now, with Command and Query objects that have a specific purpose. This leans more towards the Single Responsibility Principle (it doesn't follow it to the "Uncle Bob degree", but the classes have well-defined responsibilities).
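For illustration, a query object and a command object in that style might look something like this (the Customer entity, CustomerSummary and the field names are invented for the example):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    // One query, one well-defined responsibility: fetch a page of active customers.
    public class GetActiveCustomersPagedQuery
    {
        private readonly DbContext _context;
        public GetActiveCustomersPagedQuery(DbContext context) { _context = context; }

        public List<CustomerSummary> Execute(int page, int pageSize)
        {
            return _context.Set<Customer>()
                           .Where(c => c.IsActive)
                           .OrderBy(c => c.Name)
                           .Skip((page - 1) * pageSize)
                           .Take(pageSize)
                           .Select(c => new CustomerSummary { Id = c.Id, Name = c.Name })
                           .ToList();
        }
    }

    // A command is the same idea on the write side.
    public class DeactivateCustomerCommand
    {
        private readonly DbContext _context;
        public DeactivateCustomerCommand(DbContext context) { _context = context; }

        public void Execute(int customerId)
        {
            var customer = _context.Set<Customer>().Find(customerId);
            customer.IsActive = false;
            _context.SaveChanges();
        }
    }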
As of now, my project relies heavily on WCF which is linked to a database.
We use the classes generated from the database (an ORM, if you will) to do processing in our system.
I know that using DataSvcUtil we can easily extract all the classes and compile them into a DLL to be shared across our other systems.
But in our current project, we create another DLL which mirrors the WCF generated table class rather than using those classes directly.
So my questions are:
Is there a best practice for this sort of thing?
What are the pros and cons of these two methods?
Are there other methods?
Thanks
Updates:
It seems the consensus is to create your own custom classes rather than relying on those generated by WCF.
I am currently following this approach, and as of now I am just using extension methods: one to convert to the model and another to convert back to the service type (sketched below).
And having your own simpler class is good for extensibility and other stuff :)
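For what it's worth, the extension methods look roughly like this (UserDto stands for the WCF-generated type and User for our own model class; both names and the properties are just examples):

    public static class UserMappingExtensions
    {
        // WCF-generated type -> our own model class.
        public static User ToModel(this UserDto dto)
        {
            return new User { Id = dto.Id, Name = dto.Name, Email = dto.Email };
        }

        // Our own model class -> the type the service expects.
        public static UserDto ToDto(this User user)
        {
            return new UserDto { Id = user.Id, Name = user.Name, Email = user.Email };
        }
    }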
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference. This way you can keep your interface consistent, even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference
You will be safe from people having the wrong service reference. When generating a service reference, some properties can be changed, so users can end up with a potentially broken service reference
You will be protected from other IDEs generating slightly different references
It's a bit easier to be backwards compatible and to pinpoint problems, as you will be 100% sure that the way the client is used is the same across users.
Cons of using DLL:
You will have an additional reference
I'm not that familiar with WCF, but I use LINQ to SQL, which I assume generates the same types of classes (as does any ORM tool). I always create my own POCO classes which describe my domain model. I know there is a bit more work involved, and you are then tasked with mapping your POCO classes to your generated classes, but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns which will be used to populate them. I like the generated classes because they make it easier for me to interact with the database, but I always like the separation of having the simple domain classes; it also gives me the flexibility to swap out database implementations.
It is better to have a separate DLL as you do in your current project; decoupling is a best practice. However, generating the WCF DataContracts from the database is almost certainly not a good idea. It can be used for a first pass, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer. If you were to distribute a DLL compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling enables your ORM / database to be tweaked as necessary without all your clients having to recompile.
On the con side, decoupling like this is a bit slower to implement up front, so for a very small project it can be overkill; but if you are working across teams or in any way distributed, then it is essential.
I'm implementing a DAL using Entity Framework. In our application we have three layers (DAL, business layer and presentation). This is a web app. When we began implementing the DAL, our team thought that the DAL should have classes whose methods receive an ObjectContext given by services on the business layer and operate over it. The rationale behind this decision is that different ObjectContexts see different DB states, so some operations can be rejected due to foreign key mismatches and other inconsistencies.
We noticed that generating and propagating an object context from the services layer creates high coupling between layers. Therefore we decided to use DTOs mapped by AutoMapper (rather than unmanaged entities or self-tracking entities, which we rejected on the grounds of high coupling, exposing entities to upper layers, and low efficiency) together with Unit of Work. So, here are my questions:
Is this the correct approach to design a web application's DAL? Why?
If you answered "yes" to 1., how is this to be reconciled the concept of DTO with the UnitOfWork patterns?
If you answered "no" to 1., which could be a correct approach to design a DAL for a Web application?
Please, if possible give bibliography supporting your answer.
About the current design:
The application has been planned to be developed in three layers: presentation, business and DAL. The business layer has both facades and services.
There is an interface called ITransaction (with only two methods, to dispose and save changes) that is visible only to the services. To manage a transaction, there is a class Transaction extending ObjectContext and implementing ITransaction. We designed it this way because at the business layer we do not want other ObjectContext methods to be accessible.
On the DAL, we created an abstract repository using two generic types (one for the entity and the other for its associated DTO). This repository has CRUD methods implemented in a generic way and two generic methods to map the DTOs and entities of the generic repository with AutoMapper. The abstract repository constructor takes an ITransaction as an argument and expects it to actually be an ObjectContext, so that it can be assigned to its protected ObjectContext property.
The concrete repositories should only receive and return .NET types and DTOs.
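Roughly, the pieces described above look like this (a simplified sketch; names and signatures are illustrative rather than our exact code):

    using System;
    using System.Data.Objects;
    using AutoMapper;

    // Only Dispose and SaveChanges are visible to the services layer.
    public interface ITransaction : IDisposable
    {
        void SaveChanges();
    }

    // The concrete transaction is an ObjectContext that also implements ITransaction.
    public class Transaction : ObjectContext, ITransaction
    {
        public Transaction(string connectionString) : base(connectionString) { }

        void ITransaction.SaveChanges()
        {
            base.SaveChanges();
        }
    }

    // TEntity is the EF entity, TDto the DTO handed to the upper layers.
    public abstract class Repository<TEntity, TDto> where TEntity : class
    {
        protected ObjectContext Context { get; private set; }

        protected Repository(ITransaction transaction)
        {
            // The constructor expects the ITransaction to really be an ObjectContext.
            Context = (ObjectContext)transaction;
        }

        protected TDto ToDto(TEntity entity) { return Mapper.Map<TEntity, TDto>(entity); }
        protected TEntity ToEntity(TDto dto) { return Mapper.Map<TDto, TEntity>(dto); }

        public virtual void Create(TDto dto)
        {
            // Assumes the entity set is named after the entity type, which is a simplification.
            Context.AddObject(typeof(TEntity).Name, ToEntity(dto));
            // No id is available until SaveChanges() is called; see the problem below.
        }
    }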
We are now facing this problem: the generic Create method does not generate a temporary or persistent id for the attached entities until we call SaveChanges() (which would break the transactionality we want); this implies that service methods cannot use it to associate DTOs in the BL.
There are a number of things going on here... The assumption I'll make is that you're using a 3-Tier architecture. That said, I'm unclear on a few design decisions you've made and what the motivations were behind making them.
In general, I would say that your ObjectContext should not be passed around in your classes. There should be some sort of manager or repository class which handles the connection management. This solves your DB state management issue. I find that a Repository pattern works really well here. From there, you should be able to implement the unit of work pattern fairly easily, since your connection management will be handled in one place.
Given what I know about your architecture, I would say that you should be using a POCO strategy. Using POCOs does not tightly couple you to any ORM provider. The advantage is that your POCOs will be able to interact with your ObjectContext (probably via a Repository of some sort) and this will give you visibility into change tracking. Again, from there you will be able to implement the Unit of Work (transaction) pattern to give you full control over how your business transaction should behave.
I find this to be an incredibly useful article for explaining how all this fits together. The code is buggy but accurately illustrates best practices for the type of architecture you're describing: Repository, Specification and Unit of Work Implementation
The short version of my answer to question number 1 is "no". The above link provides what I believe to be a better approach for you.
I have always believed that, for programmers, code can explain things better than words, and this is especially true for this topic. That's why I suggest you look at a great sample application in which all the concepts you are expecting are implemented.
Project is called Sharp Architecture, it is centered around MVC and NHibernate, but you can use the same approaches just replacing NHibernate parts with EF ones when you need them. The purpose of this project is to provide an application template with all community best practices for building web applications.
It covers all the common and most of the uncommon topics: using ORMs, managing transactions, managing dependencies with IoC containers, use of DTOs, etc.
And here is a sample application.
I really recommend reading and trying this; it will be a real treasure for you, as it was for me.
You should take a look at what dependency injection, and inversion of control in general, mean. That would give you the ability to control the life cycle of the ObjectContext "from outside". You could ensure that only one instance of the object context is used per HTTP request. To avoid managing dependencies manually, I would recommend using StructureMap as a container.
Another useful (but quite tricky and hard to get right) technique is abstraction of persistence. Instead of using the ObjectContext directly, you would use a so-called Repository, which is responsible for providing a collection-like API for your data store. This provides a useful seam which you can use to switch the underlying data storage mechanism, or to mock out persistence completely for tests.
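A rough sketch of that seam (the Customer entity, its IsActive flag and the method names are invented; the EF-backed version is written against ObjectContext since that is what you are using):

    using System.Collections.Generic;
    using System.Data.Objects;
    using System.Linq;

    // The rest of the application depends only on this collection-like abstraction.
    public interface ICustomerRepository
    {
        void Add(Customer customer);
        void Remove(Customer customer);
        IList<Customer> FindActive();
    }

    // Entity Framework-backed implementation used in production...
    public class EfCustomerRepository : ICustomerRepository
    {
        private readonly ObjectContext _context;
        public EfCustomerRepository(ObjectContext context) { _context = context; }

        public void Add(Customer customer) { _context.CreateObjectSet<Customer>().AddObject(customer); }
        public void Remove(Customer customer) { _context.CreateObjectSet<Customer>().DeleteObject(customer); }
        public IList<Customer> FindActive()
        {
            return _context.CreateObjectSet<Customer>().Where(c => c.IsActive).ToList();
        }
    }

    // ...and an in-memory one for tests, so persistence can be mocked out completely.
    public class InMemoryCustomerRepository : ICustomerRepository
    {
        private readonly List<Customer> _customers = new List<Customer>();
        public void Add(Customer customer) { _customers.Add(customer); }
        public void Remove(Customer customer) { _customers.Remove(customer); }
        public IList<Customer> FindActive()
        {
            return _customers.Where(c => c.IsActive).ToList();
        }
    }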
As Jason already suggested, you should also use POCOs (plain old CLR objects). Although there would still be some implicit coupling with Entity Framework that you should be aware of, it's much better than using generated classes.
Things you might not find elsewhere quickly:
Try to avoid using unit of work; your model should define transactional boundaries.
Try to avoid using generic repositories (note the point about IQueryable too).
It's not mandatory to spam your code with the repository pattern's name.
Also, you might enjoy reading about domain-driven design. It helps to deal with complex business logic and gives great guidelines for making code less procedural and more object-oriented.
I'll focus on your current issues: to be honest, I don't think you should be passing around your ObjectContext. I think that is going to lead to problems. I'm assuming that a controller or a business service will be passing the ObjectContext/ITransaction to the Repository. How will you ensure that your ObjectContext is disposed of properly downstream? What happens when you use nested transactions? What manages the rollbacks for transactions downstream?
I think your best bet lies in putting some more definition around how you expect to manage transactions in your architecture. Using TransactionScope in your controller/service is a good start, since the ObjectContext respects it. Of course you may need to take into account that controllers/services may make calls to other controllers/services which have transactions in them. In order to allow for scenarios where you want full control over your business transactions and the subsequent database calls, you'll need to create some sort of TransactionManager class which enlists and generally manages transactions up and down your stack. I've found that NCommon does an extraordinary job at both abstracting and managing transactions. Take a look at the UnitOfWorkScope and TransactionManager classes in there. Although I disagree with NCommon's approach of forcing the Repository to rely on the UnitOfWork, that could easily be refactored out if you wanted.
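As a rough illustration of the TransactionScope approach (the service, repository and DTO names here are invented):

    using System.Transactions;

    public class OrderService
    {
        private readonly OrderRepository _orders;
        private readonly CustomerService _customers;

        public OrderService(OrderRepository orders, CustomerService customers)
        {
            _orders = orders;
            _customers = customers;
        }

        public void PlaceOrder(OrderDto dto)
        {
            // ObjectContext.SaveChanges() enlists in the ambient transaction, so this call
            // and anything the nested service call does commit or roll back together.
            using (var scope = new TransactionScope())
            {
                _orders.Create(dto);
                _customers.UpdateLastOrderDate(dto.CustomerId); // nested service call, same transaction
                scope.Complete();
            }
            // If Complete() is never reached, disposing the scope rolls everything back.
        }
    }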
As far as your persistent ID issue goes, check this out
Platform: C# 2.0
Using: Castle.DynamicProxy2
I have been struggling for about a week now trying to find a good strategy to rewrite my DAL. I tried NHibernate and, unfortunately, it was not a good fit for my project. So, I have come up with this interaction thus far:
I first start with registering my DTO's and my data mappers:
MetaDataMapper.RegisterTable(typeof(User));
MapperLocator.RegisterMapper(typeof(User), typeof(UserMapper));
This maps each DTO as it is registered, essentially using custom attributes on the properties of the DTO:
[Column(Name = "UserName")]
I then have a Mapper that belongs to each DTO, so for this type it would be UserMapper. This data mapper handles calling my ADO.NET wrapper and then mapping the result to the DTO. I am, however, in the process of enabling deep loading (and subsequently lazy loading), and this is where I am stuck. Basically my User DTO may have an Address object (FK) which requires another mapper to populate it, but I have to determine at run time that the AddressMapper should be used.
My problem is handling the types without having to explicitly go through a list of them (not to mention the headache of always having to keep that list updated) each time I need to determine which mapper to return. So my solution was a MapperLocator class that I register with (as above) and that returns an IDataMapper interface which all of my data mappers implement. Then I can just cast it to UserMapper if I am dealing with User objects. This, however, is not so easy when I am trying to determine the type of data mapper to return at run time. Since generics have to know what they are at compile time, using AOP and then passing in the type at run time is not an option without using reflection. I am already doing a fair bit of reflection when I am mapping the DTOs to the table, reading attributes and such, and my MapperFactory uses reflection to instantiate the proper data mapper. So I am trying to do this without reflection to keep those expensive calls down as much as possible.
I thought the solution could be found in passing around an interface, but I have yet to be able to implement that idea. So then I thought the solution would possibly be in using delegates, but I have no idea how to implement that idea either. So...frankly...I am lost, can you help, please?
I will suggest a couple of things.
1) Don't prematurely optimize. If you need to use reflection to instantiate your *Mappers, do it with reflection. Look at the headache you're causing yourself by not doing it that way. If you have problems later, then profile them to see if there are faster ways of doing it.
2) My question to you would be: why are you trying to implement your own DAL framework? You say that NHibernate isn't a good fit, but you don't elaborate on that. Have you tried any of the dozens of other ORMs? What are your criteria? Your posted code looks remarkably like Linq2Sql's mapping.
Lightspeed and SubSonic are both great lightweight ORM packages. Linq2Sql is an easy-to-use mapper, and of course there's Microsoft's Entity Framework, which has a whole team at Microsoft dealing with the problems you're describing.
You might save yourself a lot of time, especially maintenance wise, by looking at these rather than implementing it yourself. I would highly recommend any of those that I mentioned.
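That said, if you do stick with your own mappers and want to avoid reflection when resolving them, the delegate idea from your question can be as simple as a registry of factory delegates. A rough, C# 2.0-compatible sketch (note this changes the RegisterMapper signature; User, UserMapper, Address, AddressMapper and IDataMapper come from your question, the rest is illustrative):

    using System;
    using System.Collections.Generic;

    // A factory delegate creates a mapper with no reflection at resolve time.
    public delegate IDataMapper MapperFactory();

    public static class MapperLocator
    {
        private static readonly Dictionary<Type, MapperFactory> factories =
            new Dictionary<Type, MapperFactory>();

        // Each DTO type registers a factory delegate once, at start-up.
        public static void RegisterMapper(Type dtoType, MapperFactory factory)
        {
            factories[dtoType] = factory;
        }

        public static IDataMapper GetMapper(Type dtoType)
        {
            MapperFactory factory;
            if (!factories.TryGetValue(dtoType, out factory))
                throw new InvalidOperationException("No mapper registered for " + dtoType.Name);
            return factory();
        }
    }

    // Registration, using C# 2.0 anonymous delegates:
    //   MapperLocator.RegisterMapper(typeof(User), delegate { return new UserMapper(); });
    //   MapperLocator.RegisterMapper(typeof(Address), delegate { return new AddressMapper(); });
    //
    // At run time the UserMapper can obtain the AddressMapper without Activator.CreateInstance:
    //   IDataMapper addressMapper = MapperLocator.GetMapper(typeof(Address));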
I'm puzzling over where to place some code. I have a listbox, and the list items are stored in the database. The thing is, where should I place the code that retrieves the data from the database? I already have a database class with a method ExecuteSelectQuery(). Should I create a utility class with a method public DataTable GetGroupData() ? /* group is the listbox */ This method would then call ExecuteSelectQuery() from the database class.
What should I do?
There are many data access patterns you could look at implementing. A Repository pattern might get you where you need to go.
The idea is to have a GroupRepository class that fetches Group data. It doesn't have to be overly complicated. A simple method like GetAllGroups could return a collection you can use for your ListBox items.
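A minimal sketch, assuming your ExecuteSelectQuery() takes the SQL text and returns a DataTable (the Group class and column names are invented):

    using System.Collections.Generic;
    using System.Data;

    public class Group
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class GroupRepository
    {
        private readonly Database _database; // your existing database class

        public GroupRepository(Database database)
        {
            _database = database;
        }

        // Returns the groups ready to bind to the ListBox.
        public List<Group> GetAllGroups()
        {
            DataTable table = _database.ExecuteSelectQuery("SELECT Id, Name FROM Groups");
            var groups = new List<Group>();
            foreach (DataRow row in table.Rows)
            {
                groups.Add(new Group { Id = (int)row["Id"], Name = (string)row["Name"] });
            }
            return groups;
        }
    }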
You could simply abstract the database code into a utility class, as you suggest, and this wouldn't be a terrible solution. (You probably can't get much better with WebForms.) Yet if the system is going to end up quite complicated, then you might want to pick a more formal architecture...
Probably the best option for ASP.NET is the ASP.NET MVC Framework, which fully replaces WebForms. It's built around the Model-View-Controller architecture pattern, which is designed specifically to clearly separate code for the user interface and backend logic, which seems to be exactly what you want. (Indeed, it makes the design of websites a lot more structured in many ways.)
Also, if you want to create a more structured model for your database system, consider using an ORM such as the ADO.NET Entity Framework. However, this may be overkill if your database isn't very complicated. Using the MVC pattern is probably going to make a lot more difference.
Consider adding a layer between your UI and data access layers. There you can implement some kind of caching if the data is not changed frequently; this way you will avoid retrieving the same data from the DB multiple times.
I would recommend the Application Architecture Guide book.
Pavel
It looks like you already have the data access layer implemented via your database class. I guess the confusion right now is in implementing the presentation and business layers. To me, GetGroupData is more a part of the business layer, and it is the right thing to implement. It will allow you to make changes in the future with minimal or no impact on the presentation layer. Therefore, the flow that you suggested looks right: LoadListBox, followed by GetGroupData, followed by ExecuteSelectQuery.
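That flow could be sketched like this (WebForms-style; the service name, SQL and column names are only placeholders, and ExecuteSelectQuery is assumed to take the SQL text and return a DataTable):

    using System.Data;
    using System.Web.UI.WebControls;

    // Business layer: hides how the data is obtained from the presentation layer.
    public class GroupService
    {
        private readonly Database _database;
        public GroupService(Database database) { _database = database; }

        public DataTable GetGroupData()
        {
            // Data access layer: the existing database class does the actual query.
            return _database.ExecuteSelectQuery("SELECT Id, Name FROM Groups");
        }
    }

    // Presentation layer: the code-behind only talks to the business layer.
    public partial class GroupsPage : System.Web.UI.Page
    {
        protected ListBox groupListBox; // normally wired up by the designer
        private readonly GroupService _groupService = new GroupService(new Database()); // assumes a parameterless Database constructor

        private void LoadListBox()
        {
            groupListBox.DataSource = _groupService.GetGroupData();
            groupListBox.DataTextField = "Name";
            groupListBox.DataValueField = "Id";
            groupListBox.DataBind();
        }
    }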