Is it good practice to reference my web application's domain-layer class library from a WCF service application?
Doing so gives me easy access to the existing classes in my domain model, so I won't need to re-define similar classes for the WCF service.
On the other hand, I don't like the coupling it creates between the application and the service, and I'm curious whether it could create difficulties for me in the long run.
I also think having dedicated classes for my WCF app would be more efficient, since those classes would contain only the members the service actually uses. If I use the classes from my domain layer, they will carry many fields the service never touches, causing unnecessary data transfer.
I would appreciate your thoughts from your experience.
No, it's not. Entities are all about behaviour; data contracts are all about... data. Plus, as you mentioned, you wouldn't want to couple them together, because it will cripple your ability to react to change very soon.
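To make the distinction concrete, here is a minimal sketch (the Order/OrderDto names are hypothetical, not from the original post): the entity owns behaviour and guards its invariants, while the data contract is nothing but serializable data.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Serialization;

    // Domain entity: encapsulates behaviour and guards its invariants.
    public class Order
    {
        private readonly List<decimal> linePrices = new List<decimal>();

        public void AddLine(decimal price)
        {
            if (price < 0) throw new ArgumentException("price must be non-negative");
            linePrices.Add(price);
        }

        public decimal Total()
        {
            return linePrices.Sum();
        }
    }

    // Data contract: just data, shaped for the wire, no behaviour at all.
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }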
For those still coming across this post, like I did:
Check out this site. It's a good explanation of the topic.
Conclusion: go through the effort of keeping the boundaries of your architecture clear and clean. You will get some credit for it some day ;)
I personally frown on passing domain objects directly through WCF. As Krzysztof said, it's a contract about data, not about the behaviour of the thing you are passing over the wire.
I typically do this:
Define the data contracts in their own assembly
The service has a reference to both the data contracts assembly and the business entity assemblies.
Create extension methods in the service namespace that map the entities to their corresponding data contracts and vice versa.
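As a rough sketch of that last step (Employee and EmployeeContract are hypothetical names), the extension methods copy only the members the service actually exposes:

    using System.Runtime.Serialization;

    // Data contracts assembly.
    [DataContract]
    public class EmployeeContract
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Business entity assembly.
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string InternalNotes { get; set; }   // never leaves the service
    }

    // Service namespace: mapping in both directions.
    public static class EmployeeMappingExtensions
    {
        public static EmployeeContract ToContract(this Employee entity)
        {
            return new EmployeeContract { Id = entity.Id, Name = entity.Name };
        }

        public static Employee ToEntity(this EmployeeContract contract)
        {
            return new Employee { Id = contract.Id, Name = contract.Name };
        }
    }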
Putting the conceptual purity of what a "Data Contract" is aside, if you begin to pass entities around, you are setting your shared entity up to be pulled in different design directions by each side of the WCF boundary. Inevitably you'll end up with behaviours that belong to only one side, or even worse, you'll have to expose methods that conceptually do the same thing but in a different way for each side of the boundary. It can get very messy over the long term.
I've been in the process of re-writing the back-end for a web site and have been moving it towards a three-tiered architecture.
My intention is to structure it so:
Web site <--> WCF Service (1) <--> Business Layer (2) <--> Data Layer (3)
My issue is with the placement of the DTOs within this structure. I'll need DTOs to move data between the business layer and the WCF service, and from the WCF service to the consuming web site.
During my research on here I've read some excellent discussions, though I've been left scratching my head a bit:
Davide Piras makes some great points in this post; if I were to follow this design, I'd declare interfaces for the POCOs in a separate project, which would then be implemented by tiers (1) and (2). While I like the use of interfaces, it does seem like I'd be making more work for myself by declaring POCOs in (1) and (2) and then copying their data back and forth using something like AutoMapper.
This post uses a system where a business objects project is created and referenced by all tiers. This seems simpler than the other solution and leads me to something like:
Web site <--> WCF Service (1) <--> Business Layer (2) <--> Data Layer (3)
                    ^                      ^                      ^
                    |                      |                      |
                    [---- Business Objects referenced here ------]
My question is this: is there a code smell from sharing the business objects across three layers of the solution or are the two methods I've listed above just two different ways of cracking the same nut?
Let's think of your UI (website) and service (WCF + BL + DAL) as two distinct entities.
If you have 100% control over both, you should choose approach #2, since it avoids translation between WCF proxy objects and your business objects in the UI layer.
Otherwise, you are better off with approach #1, since one of the entities is a kind of 'black box' and is subject to change by external stakeholders, so it's safer to maintain an internal set of business objects. This will require translation between your business objects and the WCF proxy objects (through extension methods or a translator framework).
Now, I'm not exactly sure of your UI layer's complexity or its implementation (MVC, WebForms, etc.), so you may or may not need view-specific objects (lighter for data binding, faster to serialize to JSON, etc.); let's call this object the Model.
If you don't need a distinct UI-specific Model, I suggest marking your business objects as DataContracts (in the context of WCF) and using them across layers. Don't forget to explicitly mark entities as Serializable if you are invoking WCF from the web ($.ajax).
Otherwise, use DataContracts in the service and a translator to convert each DataContract to a Model in the UI layer. A Service Adapter is a good fit here: it sits in the UI layer and is responsible for consuming the WCF service and translating between DataContract and Model. You may also use a Service Proxy in the UI layer, a wrapper over your WCF service that is consumable over the web.
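A minimal sketch of that second option (all names here are hypothetical): the adapter sits in the UI layer, calls the service, and translates the DataContract into the UI Model.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerDto GetCustomer(int id);
    }

    // UI-specific Model: only what the view binds to.
    public class CustomerModel
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
    }

    // Service Adapter: consumes the service and translates DataContract -> Model.
    public class CustomerServiceAdapter
    {
        private readonly ICustomerService service;

        public CustomerServiceAdapter(ICustomerService service)
        {
            this.service = service;
        }

        public CustomerModel GetCustomer(int id)
        {
            CustomerDto dto = service.GetCustomer(id);
            return new CustomerModel { Id = dto.Id, DisplayName = dto.Name };
        }
    }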
Lastly, aren't you missing a reference to the Business Objects in your Data Layer? I believe you will populate your Business Objects from the data store in the Data Layer itself.
I would say it often depends on the complexity of the project you're building. For smaller projects I've built, I shared my 'entities' (just simple serializable DTOs, data buckets with getters and setters) across the layers and didn't worry too much about it. One of the biggest downsides was that the logic got scattered across the project (not only in the Business Layer) and took on a procedural style throughout. This really resembles the Anemic Domain Model, but the complexity never crept in because the project didn't grow too much. Entity Framework has templates you can build your entities from like this (if I'm not mistaken, they're called self-tracking entities?).
For medium/bigger projects I wouldn't use this approach; I would keep entities and DTOs separate, as they serve different roles. The DTOs might have a totally different shape than your entities, and trying to share them between tiers/layers would often get smelly.
As of now, my project relies heavily on WCF, which is linked to a database.
We use the classes generated from the database (ORM classes, if you will) to do processing in our system.
I know that using DataSvcUtil we can easily extract all the classes and compile them into a DLL to be shared across our other systems.
But in our current project, we create another DLL that mirrors the WCF-generated table classes rather than using those classes directly.
So my questions are: is there a best practice for this sort of thing? What are the pros and cons of these two methods? Are there other methods?
Thanks.
Updates:
It seems like the consensus is to create your own custom classes rather than relying on those created by WCF.
I am currently following this method, using an extension method to convert the generated type to my model and another to convert it back.
And having your own simpler class is good for extensibility and other things :)
I would suggest still using WCF, but using a compiled DLL as the client instead of a service reference. This way you can keep your interface consistent even if you decide to change the database in the future. The pros of using a DLL:
As your service grows, users may occasionally start getting timeouts when trying to generate a service reference
You will be safe from people having the wrong service reference. When a service reference is generated, some properties can be changed, so users can end up with a potentially dead service reference
You will be protected from other IDEs generating slightly different references
It's a bit easier to stay backwards compatible and to pinpoint problems, as you will be 100% sure that the client is used the same way by everyone
Cons of using a DLL:
You will have an additional reference to distribute
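To illustrate what "DLL as client" can look like, here is a minimal sketch assuming a hypothetical IEmployeeService contract: the contract lives in the shared assembly, and the client builds a channel from it instead of generating a service reference.

    using System;
    using System.ServiceModel;

    // Lives in the shared contracts DLL, referenced by service and clients alike.
    [ServiceContract]
    public interface IEmployeeService
    {
        [OperationContract]
        string GetEmployeeName(int id);
    }

    class Client
    {
        static void Main()
        {
            // No "Add Service Reference": the channel is created from the shared contract.
            var factory = new ChannelFactory<IEmployeeService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://example.com/EmployeeService.svc"));

            IEmployeeService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetEmployeeName(42));

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }

Because every consumer compiles against the same assembly, the references cannot drift apart the way generated ones can.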
I'm not that familiar with WCF, but I use LINQ to SQL, which I'm assuming generates the same types of classes (as does any ORM tool). I always create my own POCO classes that describe my domain model. I know there's a bit more work involved, and you are then tasked with mapping your POCO classes to the generated classes, but I find it the best way to keep my domain classes pure. The generated classes can be somewhat complex, with attributes describing the tables and columns used to populate them. I like the generated classes because they make it easier to interact with the database, but I also like the separation that simple domain classes provide; it gives me the flexibility to swap out database implementations.
It is better to have a separate DLL, as you do in your current project; decoupling is a best practice. Generating the WCF DataContracts from the database is almost certainly not a good idea, however: it can be used for a first shot, but subsequent changes to your database should not be directly reflected in the web service.
One of the advantages of using WCF is that you can easily achieve decoupling through a service layer. If you were to distribute a DLL compiled in the way you describe, you would essentially be coupling all clients to your database representation.
Decoupling lets your ORM/database be tweaked as necessary without all your clients having to recompile.
On the con side, decoupling like this is a bit slower to implement up front, so for a very small project it can be overkill; but if you are working cross-team or in any way distributed, it is essential.
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com), so that customers can work with their data the same way we do: everything they can do through our provided web interface, they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
The problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of them and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning, backwards compatibility, or other headaches?
Edit: after the feedback, what may make more sense is:
Data/Service Layer: a DLL used directly by the public web interface as well as by the Web Services API
Web Services API: almost an exact replica of the Service Layer methods/objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface intended for the API. That way lies potential performance issues for your main website, never mind that as soon as you deploy a breaking API change you would have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly with the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than that. In other words, it should simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. But it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect the project to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project and you expect it to exist, work well, and keep being updated for the next five years...
In my current project (which is big), I also started with entities shared across all layers; then I discovered that I needed separate entities for presentation, and now (six months in) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or following some rules; I simply couldn't make everything work with a single set of classes across layers. Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
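For example, a minimal AutoMapper sketch (Person/PersonDto are hypothetical names; this uses the MapperConfiguration API of recent AutoMapper versions, while older versions used static Mapper.CreateMap calls):

    using AutoMapper;

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class PersonDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Declare the mappings once...
            var config = new MapperConfiguration(cfg =>
            {
                cfg.CreateMap<Person, PersonDto>();
                cfg.CreateMap<PersonDto, Person>();
            });
            IMapper mapper = config.CreateMapper();

            // ...then convert by matching property names, no hand-written copying.
            PersonDto dto = mapper.Map<PersonDto>(new Person { Id = 1, Name = "Ann" });
        }
    }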
I would just buck up and create an API aimed specifically at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed to make the app work, and nothing else.
I have a quick question that I hope is fairly simple to answer. I am attempting to develop a shared Employee object library for my company. The idea is to create a centralized database containing information about our employees (reporting hierarchy, office locations, general info, etc.) and then create a shared object library for this database.
My question is: what is the best way to create this library so it can be shared among applications?
Do I create a self-contained library that stores the database connection (I can see concurrency issues here, and it doesn't feel right)?
Client -> Server, and then deploy the "client library" for use by any application?
Or would a Web/WCF service be more ideally suited to this situation?
There are many options, because the question can be interpreted quite broadly; I suggest taking all the answers to heart. Having said that, here's my spin on it...
I used to view software layers as vertical because of n-tier training, and I had a hard time breaking away from those notions toward something conceptually broader and less restrictive. I strive to view .NET assemblies as just pieces of a puzzle.
You're right to separate the connection string from code; that's easily supported by a .NET .config file or application settings.
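For instance, a small sketch of reading it back out (the "MainDb" name is hypothetical):

    using System.Configuration;   // reference System.Configuration.dll

    static class ConnectionSettings
    {
        public static string GetConnectionString()
        {
            // Reads <connectionStrings><add name="MainDb" .../> from app.config/web.config.
            return ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        }
    }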
I often prefer a small core library containing the business logic, concepts, and flows, although each of those can be broken out further. Within that concept you can still separate business from data access into different assemblies, so you can swap in a new kind of data access while sticking with the core module (a kind of "business kernel" or "engine", if you will).
You can express your "business kernel" through many presentation types, for example:
textual/console I/O
GUI: WinForms, WPF, Silverlight, ASP.NET, LED/pixelboard, etc.
as cmdlets for PowerShell interactions
web service expressions
kinds of mobile apps
etc.
You can accelerate development by using patterns to bend software to your will, along with related implementations like the Microsoft Enterprise Library; you can also loosen coupling with dependency injection, e.g. Ninject (one of many), or other inversion-of-control techniques.
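As a small illustration of the dependency-injection point, a sketch with Ninject (the repository names are hypothetical):

    using Ninject;

    public interface IEmployeeRepository
    {
        string GetName(int id);
    }

    public class SqlEmployeeRepository : IEmployeeRepository
    {
        public string GetName(int id) { return "..."; }   // real data access goes here
    }

    class Program
    {
        static void Main()
        {
            // Bind the abstraction to an implementation; callers never change
            // when you swap SqlEmployeeRepository for something else.
            var kernel = new StandardKernel();
            kernel.Bind<IEmployeeRepository>().To<SqlEmployeeRepository>();

            IEmployeeRepository repository = kernel.Get<IEmployeeRepository>();
        }
    }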
I usually prefer to have a middle-tier layer (some sort of Web/WCF service between the client and the database). This way you separate the clients from the database, so you can control the number of connections or change the schema of the database in a way that is transparent to the clients.
Depending on your situation, you can either have the clients connect to the WCF service directly (preferred in most cases) or create a DLL that wraps the connection to the service and performs some additional processing on the client side.
It depends how deeply you need to integrate your library into the main application. If you want to extend the application domain with custom entities, you have the following options:
Build persistence into the library. You will need to pass a connection string to the repository class, and the database must also include the hardcoded schema for your library. If you use LINQ to SQL as the data access library, you can mark up your entities with mapping attributes (see http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.aspx and the sketch after this list)
Provide the domain library only, and implement persistence outside it, if your data layer supports POCO mapping (EF 4 does)
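A minimal sketch of the first option's attribute mapping (the table and column names are hypothetical):

    using System.Data.Linq.Mapping;

    // Entity shipped in the library, mapped onto the host database schema.
    [Table(Name = "dbo.Employees")]
    public class Employee
    {
        [Column(IsPrimaryKey = true)]
        public int Id { get; set; }

        [Column(Name = "FullName")]
        public string Name { get; set; }
    }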
Usually, putting the domain model into a separate assembly causes a few problems:
Integration into the application. The application itself usually provides a few services: data access, security, logging, web services, etc. If your application has an ideal design with layers fully decoupled from each other, there is no problem adding new entities; but usually the data access layer requires inheritance from a base class, the logger is a singleton, security checks are hardcoded into business logic methods, and so on. Such applications must be refactored: services must be extracted into interfaces, and those interfaces passed to the components in the separate assembly.
Entity references. If you use a rich domain model, you probably want to reference entities declared in another assembly. This problem can be partially solved with generics, but you need a special design for your data access layer that lets you fetch lists of generic entities, get an entity by id, and so on.
Database integration. It may be hard to maintain database changes if some entities are developed separately from the others, especially by another team.
Just be sure to keep your connection method separate from your data access layer; then you can change the connection method later if requirements change. If you have a simple DLL holding your real logic, adding a communication layer on top should be easy. This also lets you use all three methods you mentioned while keeping all your actual logic in a single DLL shared among them.
As described above, I'm implementing a multi-tier architecture with WCF and Entity Framework 4 (with POCO). Since I already have persistence ignorance with POCO, do I need to implement DTOs, or can I use WCF in its pure way?
The main question is: do I need DTOs to pass lightweight objects over the network, or can I use my POCO entities?
What do you guys recommend?
It's hard to answer unless you define what the "pure way" is. Are we talking SOA-pure or WCF-pure?
WCF proxies already are DTOs of a sort, because they do not bring any business logic across your service contract. Creating another layer of DTOs on top of the proxy classes generated by WCF seems redundant.
The biggest question you want to answer is "how SOA is this solution?" You cannot share your POCO entities across service boundaries if you want to be SOA-compliant; SOA is all about disparate contracts.
If you go fully SOA-based, you lose a lot of functionality, because the classes your web tier works with most of the time will be dumb proxies. You'll have to repeat a lot of logic, and you lose much of the "metadata, convention over configuration" functionality that MVC 2 provides.
If you throw the SOA buzzword into the shredder, which you should ( http://soafacts.com/ ), you'll have a much easier time sharing business logic and metadata across tiers. If the only consumer of your web service is yourself, this method is probably your best choice.
This is where you could use DTOs to send across the wire instead of your POCO entities. The only downside is, again, repeated logic and lots of boilerplate, ceremonial code that does nothing. It really depends on the size of your project: if it's small, forget about DTOs; but if you have 20 developers working with 200,000 LoC, DTOs are probably worth creating.
As jfar said, it depends on whether you are going to be the only one consuming the service, or whether the presentation tier is going to be yours alone.
If it's the latter, and only you will be using your service, then you can serialize your POCOs across the WCF service boundaries. This is something I did recently, and I wrote a blog post about getting it to work. It allows you to use the same entities in the app tier as well as the presentation tier.
Hope it helps.
The strongest reason to recommend a DTO when using WCF with EF is that EF database-first classes drag implementation dependencies into your proxy classes. If you are using code-first with POCO classes, there should be no implementation dependencies.
Try returning just your POCO classes, then take a close look at the generated proxy classes. Make sure there is nothing in them that is part of the EF infrastructure. If the proxy classes are clean, you should be all set.
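As a rough sketch of what "clean" means here (the names are hypothetical): a code-first POCO with no EF base class or attributes, returned directly from an operation; such types serialize via the DataContractSerializer by default.

    using System.ServiceModel;

    // Code-first POCO: no EF infrastructure leaks into the contract.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        Customer GetCustomer(int id);
    }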