The reason I need a loosely-coupled WCF service is that Entity Framework is tightly-coupled. When I say loosely-coupled, I mean there's no need to instantiate the database context or add a service reference for WCF. It just relies on web configuration or some .ini file, so developers can change servers, IP addresses or service URLs without recompiling.
Instead, the MVC application (say, a controller) will just send a request message and then get the response data from the WCF service. But we still can't do without Models based on the database (since we need them for IntelliSense in the view markup), which is where the WCF service gets its data. Let's say we already have those database object classes; we then create a repository that binds the WCF data to the MVC Models.
What I mean by a WCF web service is one that ONLY contains messages, with no more passing of object references, because that's the newer SOA definition. It makes more sense to pass messages instead of objects.
Is this a better approach in terms of scalability and performance? I don't mean to offend the Entity Framework fans.
It is an entirely valid approach to define a WCF web service in terms of message schemas which just use basic types, so that clients need know nothing about WCF in order to use the service. WCF would be useless for interop with other platforms (e.g. Java) otherwise.
Understand that WCF is a general and powerful framework for implementing communication over a variety of transport protocols. It can be equally effectively used for raw XML messaging as for programming in terms of objects. Object serialisation and deserialisation is an optional extra of the framework, not a requirement. (There is really no such thing as "passing of object reference" - ultimately it is an XML infoset which travels across the communication channel. Also, Entity Framework is not part of WCF - it is a distinct ORM Framework which you can use with WCF if you want, but that's your choice.)
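As a rough illustration of that message-first style (the type and member names below are made up), a contract can be defined purely in terms of request and response messages built from basic types:

```csharp
using System.ServiceModel;

// A minimal sketch of a message-style contract: callers only need the XML schema of
// these messages, not any shared object types or an ORM.
[MessageContract]
public class GetCustomerRequest
{
    [MessageBodyMember]
    public int CustomerId { get; set; }
}

[MessageContract]
public class GetCustomerResponse
{
    [MessageBodyMember]
    public string Name { get; set; }

    [MessageBodyMember]
    public string Email { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    GetCustomerResponse GetCustomer(GetCustomerRequest request);
}
```

A client on any platform can build the same messages from the published schema (or hand-craft the XML) without referencing any of the service's assemblies.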
Scalability and performance are entirely orthogonal to the design of the service in terms of its data and operation contracts. You should feel free to adopt whatever approach to defining your services is best for your application. If that's XML messages, that's fine - don't let anyone tell you otherwise.
I'm working on a project that is our company's first foray into Domain-Driven Design.
Our Web API originally simply provided CRUD operations and the project exposed OData controllers, but I'm not sure if that is still a good idea.
Is OData a good way to expose non-CRUD APIs?
More info:
Initially our web api basically exposed CRUD functions. To create a new User you would simply create one and post it to the service. To change, for example, an address you would get a copy of the user entity, make changes, then perform an update operation. Basic OData stuff.
Beyond providing query support, OData also exposed the service in a readily consumable way, so it could be added to other projects as a service reference and accessed with a proxy.
Since we have moved over to a DDD approach, things have changed significantly. Our Web API is now simply a gateway to a number of independent sub-domain services. We no longer provide CRUD operations or direct access to entities, instead making service calls to manipulate entities. Instead of creating a User entity and sending it to the User service via a PUT request, a consumer must generate a CreateUserBindingModel and send it to the User/Create service and let the service generate the entity. Changing an address is done through the ChangeAddress(ChangeAddressBindingModel model) method, rather than just updating the whole object. Queries are much more targeted and rarely if ever return entire domain objects.
Is it a bad idea to keep using OData as a basis for our Web API, when we no longer provide CRUD operations? Is there another way to expose the details of our service the way you can with OData? I know WCF services provide similar functionality, but I was under the impression it was even more tied to CRUD than OData.
OData is a data-oriented API spec; it's anti-DDD. Although it can satisfy all your requirements for implementing REST APIs, it's best suited to data-processing APIs. I guess you already know that using OData feels like operating on the database via HTTP. If you are using DDD you should forget OData totally.
In OData, actions and functions are a way to add server-side behaviors that are not easily defined as CRUD operations on entities (see the sketch after the links below):
https://learn.microsoft.com/en-us/aspnet/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/odata-actions-and-functions
https://blogs.msdn.microsoft.com/alexj/2012/02/03/cqrs-with-odata-and-actions/
https://github.com/OData/ODataSamples/blob/master/WebApiCore/ODataActionSample/ODataActionSample/
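To make that concrete, here is a rough sketch of exposing a task-style operation such as ChangeAddress as an OData action rather than as a CRUD update. The User type and parameter names are invented, and the exact namespaces depend on which OData package version you use (System.Web.OData.* for ASP.NET Web API, Microsoft.AspNet.OData.* in later packages):

```csharp
using System.Web.Http;
using System.Web.OData;
using System.Web.OData.Builder;
using Microsoft.OData.Edm;

public static class ODataConfig
{
    public static IEdmModel BuildModel()
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<User>("Users");

        // Bind a task-style "ChangeAddress" action to the User entity type.
        var changeAddress = builder.EntityType<User>().Action("ChangeAddress");
        changeAddress.Parameter<string>("Street");
        changeAddress.Parameter<string>("City");

        return builder.GetEdmModel();
    }
}

// Invoked with POST /Users(42)/ChangeAddress (namespace-qualified by default in OData v4).
public class UsersController : ODataController
{
    [HttpPost]
    public IHttpActionResult ChangeAddress([FromODataUri] int key, ODataActionParameters parameters)
    {
        if (!ModelState.IsValid) return BadRequest();

        var street = (string)parameters["Street"];
        var city = (string)parameters["City"];

        // Hand off to the sub-domain service rather than updating the entity directly.
        return Ok();
    }
}
```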
We currently have an application (Client/Server) that communicates through WCF. We would like to move away from the WCF approach and use a REST approach instead.
There are a few reasons for this, such as overhead (in terms of size) and the possibility to use the same access method for both our Windows client (currently a WinForm client) and mobile devices.
We are also sometimes running the server on the Mono framework, and even though we have it up and running, we have seen some differences in how WCF works on the Mono stack compared to the .NET Framework (so I would not like to use the WebHttpBinding in WCF to handle REST).
The service also needs to be self-hosted (i.e. not in IIS).
The problem when shifting from WCF to other alternatives is related to contracts. I would like to make it possible to unit test the REST calls, and I would like a contract to be involved, enabling the clients to use proxy classes that they do not have to create by themselves - pretty much like WSDL.
The main idea for handing out proxy classes to developers is that the clients should be able to rely on the service provider to get the correct proxy classes and that they should not need to care about the URLs used.
Is there any way this could be done automatically, and if so - using what framework or method?
Having looked briefly at Web API, I came across an example of generating a proxy (http://www.codeproject.com/Tips/535260/Proxy-Object-Generation-for-MVC-and-WebAPI-Control). This would simplify things for the developers, but would mean that I manually need to create the proxy for the developers to use.
Any suggestions would be appreciated :)
For client-side unit tests, you should create mocks for your REST service responses.
Otherwise, you can create a static mock page with all of your service responses.
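As a rough sketch of the first option (class names and the JSON payload are invented), you can stub HttpMessageHandler so the client code under test receives canned responses without touching a real service:

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Returns the same canned response for every request the client sends.
public class CannedResponseHandler : HttpMessageHandler
{
    private readonly HttpResponseMessage _response;

    public CannedResponseHandler(HttpResponseMessage response)
    {
        _response = response;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(_response);
    }
}

public static class FakeRestService
{
    // In a unit test: give the proxy/client under test an HttpClient that never hits the network.
    public static HttpClient CreateFakeClient()
    {
        var canned = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"id\":1,\"name\":\"Alice\"}", Encoding.UTF8, "application/json")
        };
        return new HttpClient(new CannedResponseHandler(canned));
    }
}
```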
I have a desktop C# app that I want to split into two parts - server part and client part. My app is already split into two very independent parts that communicate by exchanging some (complex!) objects.
If I want to put one part of my app on some web server, what kind of technology should I use for passing those custom complex objects between the server part and the client part? I was thinking about WCF, but...I'm not sure that WCF can easily handle (send/receive) custom objects (composed of many other custom objects). I don't need WCF because I'm not planning to offer my service to any third party, and I'm not planning to port my client app to other OSes...
That's why I'm confused and need your help: what kind of remoting technology should I use in my case?
WCF stands for Windows Communication Foundation. In other words, it's about general cross-process/machine communication and is not limited to heterogeneous systems.
One thing to remember about WCF is that, despite appearances, you are not actually passing objects at all - the objects are used by a serializer to generate messages. At the other end it will deserialize into an independent copy. You don't, unlike COM, get a reference back to an object on the sender.
The reason this is important is that if the complex objects have non-serializable state, such as a socket connection, then this won't make it to the receiver side.
Also, with the DataContractSerializer (which is the default), unless your objects are annotated with the [Serializable] attribute or you annotate the classes with [DataContract] and [DataMember], you will only be sending state that is exposed publicly (via a public field or a property).
This isn't purely a problem for WCF; Remoting requires objects to derive from MarshalByRefObject or be annotated with the [Serializable] attribute. Building distributed systems is quite different from building systems that all share the same memory address space. You have to think carefully about how you define the boundary between the distributed pieces because, for example, lots of small calls will kill your performance compared to a few data-rich calls (although from your description this might not be an issue that affects you).
So WCF can handle arbitrarily complex object graphs - just remember the above points about serialization.
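As a quick illustration of those points (the types here are invented), marking up a complex graph with [DataContract]/[DataMember] makes it explicit which state travels in the message and which never leaves the sender:

```csharp
using System.Collections.Generic;
using System.Net.Sockets;
using System.Runtime.Serialization;

[DataContract]
public class Order
{
    [DataMember]
    public int Id { get; set; }

    // Private state crosses the wire only because it is explicitly marked [DataMember].
    [DataMember]
    private List<OrderLine> _lines = new List<OrderLine>();

    // Not annotated and not serializable: the receiver's copy will never have this connection.
    public Socket Connection { get; set; }
}

[DataContract]
public class OrderLine
{
    [DataMember]
    public string Product { get; set; }

    [DataMember]
    public decimal Price { get; set; }
}
```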
Well, DataContracts in WCF support complex objects, so I don't see a problem with that (how complex are your objects?); however, you should probably use the technology that is sufficient for your case. You could use Remoting, hell, even Sockets; but in almost all cases that is overkill and going too low in the .NET stack for nothing; you will just be wasting your time on implementation.
If you have no reason against WCF, I would go that way, because it is very simple and powerful. There are also standard ASP.NET ASMX web services if you'd like.
One thing to note: whichever technology you choose, you should structure your code with a distribution layer that exposes coarse-grained methods.
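A minimal sketch of what that might look like (the names are illustrative): a single coarse-grained operation returns a DTO carrying everything the client needs, instead of several chatty calls:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderSummaryDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string CustomerName { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // One data-rich call instead of GetOrder + GetCustomer + GetTotals round trips.
    [OperationContract]
    OrderSummaryDto GetOrderSummary(int orderId);
}
```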
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do / everything they can do through our provided web interface they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way leads to potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do that.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than that. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years....
In my current project (which is big), I first started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation) - and that's not because I'm paranoid or was following some rules; I just couldn't make everything work with a single set of classes across layers... Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
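For example, with AutoMapper (the API details differ between versions, and the User/UserDto types here are hypothetical), copying an EF entity into an API-facing model is a couple of lines instead of hand-written proxy code:

```csharp
using AutoMapper;

public static class MappingExample
{
    public static UserDto ToDto(User userEntity)
    {
        // One-time configuration in real code; shown inline here for brevity.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<User, UserDto>());
        var mapper = config.CreateMapper();

        // At the service boundary: return the DTO, never the tracked EF entity.
        return mapper.Map<UserDto>(userEntity);
    }
}
```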
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
The objective is to build a service that I will then consume via jQuery and a standards based web front-end, mobile device "fat-clients," and very likely a WPF desktop application.
It seems like WCF would be a good option, but I've never built a RESTful service with WCF, so I'm not sure where to even begin on that approach.
The other option I'm thinking about is using ASP.NET MVC, adding some custom routes, adding a few controller actions and using different views to push out JSON, XML, and other return types.
This project is mostly a learning exercise for myself, and I'd like to spend some extra time and do it "right" so I have a better understanding of how the pieces fit together.
So my question is this, which approach should I use to build this RESTful service, and what are some advantages of doing it that way?
Normally, I would say WCF for any kind of hosted service, but in the specific case of RESTful services using JSON as a serialization mechanism, I prefer ASP.NET MVC (which I will refer to as ASP.NET for the remainder of this answer).
One of the first reasons is because of the routing mechanism. In WCF, you have to define it on the contract, which is all well and good, but if you have to make quick changes to your routing, from my point of view, it's much easier to do them using the routing mechanism in ASP.NET.
Also, to the point above, if you have multiple services exposed over multiple interfaces in WCF, it's hard to get a complete image of your URL structure (which is important), whereas in ASP.NET you (typically) have all of the route assignments in one place.
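To illustrate the difference (the names here are hypothetical): in WCF the URL shape is attached to the contract via UriTemplate, while in ASP.NET MVC every route is registered in one central route table:

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Web.Mvc;
using System.Web.Routing;

// WCF: the route lives on the operation contract itself.
[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [WebGet(UriTemplate = "products/{id}", ResponseFormat = WebMessageFormat.Json)]
    Product GetProduct(string id);
}

// ASP.NET MVC: all routes are declared together, typically at application start.
public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute(
            name: "Products",
            url: "products/{id}",
            defaults: new { controller = "Products", action = "Get" });
    }
}
```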
The second thing about ASP.NET is that you are going to have access to all of the intrinsic objects that ASP.NET is known for (Request, Response, Server, etc, etc), which is essential when exposing an HTTP-specific endpoint (which is what you are creating). Granted, you can use many of these same things in WCF, but you have to specifically tell WCF that you are doing so, and then design your services with that in mind.
Finally, through personal experience, I've found that the DataContractJsonSerializer doesn't handle DateTimeOffset values too well, and it is the type that you should use over DateTime when working with a service (over any endpoint) which can be called by people over multiple timezones. In ASP.NET, there is a different serializer that you can use, or if you want, you can create your own ActionResult which uses a custom serializer for you. I personally prefer the JSON.Net serializer.
One of the nice things about the JSON.Net serializer in ASP.NET is that you can use anonymous types with it, if you are smart. If you create a static generic method on a non-generic type which then delegates to an internal generic type, you can use type inference to easily utilize anonymous types for your serialized return values (assuming they are one-offs, of course; if you have a structure that is returned consistently, you should define it and use that).
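A rough sketch of that pattern (class names invented, serialization via Json.NET): the non-generic JsonNet class exposes a static generic factory method, so type inference lets controller actions return anonymous objects without ever naming a type:

```csharp
using System.Web.Mvc;
using Newtonsoft.Json;

public static class JsonNet
{
    // Type inference on T means callers can pass anonymous types.
    public static JsonNetResult<T> Result<T>(T data)
    {
        return new JsonNetResult<T>(data);
    }
}

public class JsonNetResult<T> : ActionResult
{
    private readonly T _data;

    public JsonNetResult(T data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";
        response.Write(JsonConvert.SerializeObject(_data));
    }
}

// In a controller action:
//   return JsonNet.Result(new { id = user.Id, name = user.Name });
```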
It should also be mentioned that you don't have to completely discount WCF when developing a RESTful service. If you are pushing an Atom or RSS feed out from your service, then the classes in the System.ServiceModel.Syndication namespace are of massive help in the construction and serialization of those feeds. Creating a simple subclass of the ActionResult class to take an instance of SyndicationFeed and then serialize it to the output stream when the ActionResult is executed is quite simple.
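For instance, a small ActionResult subclass along these lines (the class name is invented) can write a SyndicationFeed as Atom when executed:

```csharp
using System.ServiceModel.Syndication;
using System.Web.Mvc;
using System.Xml;

public class AtomFeedResult : ActionResult
{
    private readonly SyndicationFeed _feed;

    public AtomFeedResult(SyndicationFeed feed)
    {
        _feed = feed;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/atom+xml";

        // Atom10FeedFormatter handles serialization of the feed and its items.
        using (var writer = XmlWriter.Create(response.Output))
        {
            new Atom10FeedFormatter(_feed).WriteTo(writer);
        }
    }
}
```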
Here is a thought that may help you make the decision between ASP.NET MVC and WCF. In the scenarios you describe, do you expect to need to use a protocol other than HTTP?
WCF is designed to be transport-protocol agnostic, so it is very different from ASP.NET. It has channels and bindings, messages, service contracts, data contracts and behaviours. It provides very little in the way of guidance when it comes to building distributed applications. What it gives you is a clean slate to build on.
ASP.NET MVC is naturally an HTTP-based framework. It deals with HTTP verbs, media types, URLs, response headers and request headers.
The question is which model is closer to what you are trying to build?
Now, you mentioned REST. If you really do want to build your distributed applications following the REST constraints, then you would do better to start with OpenRasta. It will guide you down that path.
You can do REST in ASP.NET MVC and you can do it in WCF, but with those solutions you will not fall into the pit of success ;-)
Personally, I am not crazy about implementing REST services in WCF. I find the ASP.NET MVC framework a more natural programming model for this.
The implementor of http://atomsite.net/ originally implemented the AtomPub specification in WCF and then rewrote the entire service using ASP.NET MVC. His experience echoed my comment above: for a pure REST service, ASP.NET MVC is the way to go.
The only exception would be if I wanted to potentially expose a service in both a RESTful and a non-RESTful way, or if I was exposing an existing WCF service via REST.