We currently have an application (Client/Server) that communicates through WCF. We would like to move away from the WCF approach and use a REST approach instead.
There are a few reasons for this, such as overhead (in terms of size) and the possibility of using the same access method for both our Windows client (currently a WinForms client) and mobile devices.
We are also sometimes running the server on the Mono framework, and even though we have it up and running, we have seen some differences in how WCF works on the Mono stack compared to the .NET Framework (so I would not like to use WebHttpBinding in WCF to handle REST).
The service also needs to be self-hosted (i.e. not in IIS).
The problem when shifting from WCF to other alternatives is related to contracts. I would like to make it possible to unit test the REST calls, and I would like a contract to be involved, enabling the clients to use proxy classes that they do not have to create by themselves - pretty much like WSDL.
The main idea for handing out proxy classes to developers is that the clients should be able to rely on the service provider to get the correct proxy classes and that they should not need to care about the URLs used.
Is there any way this could be done automatically, and if so - using what framework or method?
Having looked briefly at Web API, I came across an example of generating a proxy (http://www.codeproject.com/Tips/535260/Proxy-Object-Generation-for-MVC-and-WebAPI-Control). This would simplify things for the developers, but it would mean that I need to create the proxy for them manually.
Any suggestions would be appreciated :)
For client-side unit tests, you should create mocks for your REST service responses.
Alternatively, you can create a static mock page serving all of your service responses.
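For example, here is a minimal sketch of the first approach (the handler class name, the URL and the canned JSON are made up for illustration): a stub HttpMessageHandler lets HttpClient-based client code run in unit tests without ever touching the network.

    using System.Net;
    using System.Net.Http;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    // Stub handler that short-circuits HttpClient with a canned response.
    public class StubHttpMessageHandler : HttpMessageHandler
    {
        private readonly string _json;

        public StubHttpMessageHandler(string json) { _json = json; }

        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(_json, Encoding.UTF8, "application/json")
            };
            return Task.FromResult(response);
        }
    }

    // In a test: no network call is ever made.
    // var client = new HttpClient(new StubHttpMessageHandler("{\"id\":1}"));
    // string body = await client.GetStringAsync("http://localhost/api/things/1");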
I need advice on creating an architecture where I want an API layer between the UI layer and the business layer. The UI layer should only consume REST services for displaying data.
The reason for doing this is that we need to expose the same services to other clients like iPad, Android, etc.
Now my questions are:
1) Do we need dependency injection in this case? (I don't think so, because we are not going to use any references at the UI layer. The only thing we do is manipulate the JSON returned by the service.)
2) Will it hurt performance?
3) Is this the right approach?
Any help will be appreciated. Thanks
We're doing roughly the same thing now.
1) No. You can't, really - the UI layer holds no references to inject.
2) No. Twitter is API-first and they seem to be doing OK. Technically the extra hop will add some overhead, but it also means you can scale horizontally, so that overhead can easily be counteracted.
3) You have multiple UI clients, so it seems like a decent, viable solution.
Security: Basic Authentication
It's the easiest to set up, but be aware that the credentials are only Base64-encoded (i.e. trivially reversible), so use HTTPS to encrypt the communication.
The HTTP Authorization header containing the username and password is sent with every request to the API.
You could use sessions instead, but that requires a bit more setup.
There are plenty of how-tos on setting up Basic Authentication in C# and Web API.
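For example, in a self-hosted Web API a DelegatingHandler can check the header on every request. A minimal sketch (the ValidateUser logic is a placeholder you would replace with a real user store):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    public class BasicAuthHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            AuthenticationHeaderValue auth = request.Headers.Authorization;
            if (auth != null && auth.Scheme == "Basic")
            {
                // Credentials arrive as Base64("username:password") -
                // reversible, which is why HTTPS is a must.
                string[] parts = Encoding.UTF8
                    .GetString(Convert.FromBase64String(auth.Parameter))
                    .Split(new[] { ':' }, 2);
                if (parts.Length == 2 && ValidateUser(parts[0], parts[1]))
                    return await base.SendAsync(request, cancellationToken);
            }
            var response = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            response.Headers.WwwAuthenticate.Add(
                new AuthenticationHeaderValue("Basic", "realm=\"api\""));
            return response;
        }

        private static bool ValidateUser(string user, string password)
        {
            return user == "demo" && password == "demo"; // placeholder only
        }
    }

    // Self-host registration: config.MessageHandlers.Add(new BasicAuthHandler());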
The way I created an API was:
Project 1 : WebAPI serving as a portal to fetch data
Project 2 : Class Library, providing services to the WebAPI layer.
Project 3 : Class Library, providing data to my services layer using EF.
Now, different controllers in the Web API project require different service objects (from Project 2) to work with. I had to provide constructors for those controllers using DI; for this I used Autofac.
For you, your business layer would be Project 2.
Data flowing through one more project layer might take some time, and you will need to set up exception handling and logging again in the API layer, but I don't think performance should be a big problem here.
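For illustration, a sketch of that Autofac wiring (IOrderService/OrderService and IOrderRepository/OrderRepository are hypothetical stand-ins for Project 2 and Project 3 types):

    using System.Reflection;
    using System.Web.Http;
    using Autofac;
    using Autofac.Integration.WebApi;

    public interface IOrderRepository { }                 // Project 3
    public class OrderRepository : IOrderRepository { }

    public interface IOrderService { }                    // Project 2
    public class OrderService : IOrderService
    {
        public OrderService(IOrderRepository repo) { }    // injected by Autofac
    }

    public static class IocConfig
    {
        public static void Register(HttpConfiguration config)
        {
            var builder = new ContainerBuilder();
            // Controllers from the Web API project (Project 1).
            builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
            builder.RegisterType<OrderService>().As<IOrderService>();
            builder.RegisterType<OrderRepository>().As<IOrderRepository>();
            config.DependencyResolver =
                new AutofacWebApiDependencyResolver(builder.Build());
        }
    }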
In my experience I've seen such a platform-oriented approach - providing mSOA to N clients. The architectural solution was a Facade that hid all the complex business-layer requests while at the same time providing UI-agnostic processing.
Will it hurt performance?
Not necessarily - the Facade knows how to handle all the required sub-system requests. The clients just know that they need a single JSON contract to get the job done, not which services to call or how many. By doing so we get much better and simpler communication. Take a look at the Mediator (intra-communication) pattern.
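A rough sketch of the idea (all type names are hypothetical): the facade exposes one coarse-grained operation and decides internally which sub-systems to call.

    public interface IInventoryService { int GetStock(string sku); }
    public interface IPricingService { decimal GetPrice(string sku); }

    // The single DTO the client sees - one JSON contract for the whole job.
    public class ProductDetails
    {
        public string Sku { get; set; }
        public int Stock { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductFacade
    {
        private readonly IInventoryService _inventory;
        private readonly IPricingService _pricing;

        public ProductFacade(IInventoryService inventory, IPricingService pricing)
        {
            _inventory = inventory;
            _pricing = pricing;
        }

        // The client never learns which (or how many) sub-systems were involved.
        public ProductDetails GetProductDetails(string sku)
        {
            return new ProductDetails
            {
                Sku = sku,
                Stock = _inventory.GetStock(sku),
                Price = _pricing.GetPrice(sku)
            };
        }
    }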
I've been handed a project that needs some work doing, and the original team that created it have all since left the company. It has sat "on the shelf" for 4 years and everyone but our client had forgotten about it. They want it delivered now, and it doesn't work.
The system is a relatively simple ASP Web Forms application for submitting data to another service via 2 WSDL interfaces, logging that request in a SQL database and submitting the response to another service via OPC.
I can set up all of those interfaces for testing except the WSDL ones - for those I just have the software here to run. Is there any way I can easily create a service to simulate the final one so I can test my software? I only have the 2 WSDL files to go on. These aren't complicated services; I'm only using 4 methods in total.
I've been led to believe that the original creator of this system did something similar but I can't find what he used or any documentation about it. I expect it was run on his laptop and was lost when he left the company.
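One common way to do this is to generate the contract interface from the WSDL with svcutil.exe and self-host a canned implementation. A minimal sketch - IDataService and its single method are stand-ins for whatever the real WSDLs define:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService              // normally generated by svcutil
    {
        [OperationContract]
        string SubmitData(string payload);
    }

    public class FakeDataService : IDataService
    {
        public string SubmitData(string payload)
        {
            Console.WriteLine("Received: " + payload);
            return "OK";                       // canned test response
        }
    }

    public static class FakeServiceHost
    {
        public static void Main()
        {
            using (var host = new ServiceHost(typeof(FakeDataService),
                new Uri("http://localhost:8080/fake")))
            {
                host.AddServiceEndpoint(typeof(IDataService),
                    new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Fake service running; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

Point the application's endpoint configuration at localhost and it will happily talk to the stub.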
The WCF service client should be wrapped and exposed via an interface to your software. That way, you can mock the interface and test how your software responds to various inputs/outputs from the mocked service client. You control all aspects of what is returned, including potentially throwing exceptions as the real WCF service client would.
This is basically why SOLID pushes you to depend on interfaces - substituting one implementation for another behind an interface is simple to do.
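A sketch of that arrangement (all names are hypothetical; in production, ISubmissionService would be implemented by a thin wrapper around the generated WCF client):

    using System;

    public interface ISubmissionService
    {
        string Submit(string payload);
    }

    // The class under test depends only on the interface, never on WCF types.
    public class SubmissionProcessor
    {
        private readonly ISubmissionService _service;

        public SubmissionProcessor(ISubmissionService service) { _service = service; }

        public bool Process(string payload)
        {
            try { return _service.Submit(payload) == "OK"; }
            catch (TimeoutException) { return false; }  // behaviour under test
        }
    }

    // Hand-rolled mock for tests: canned answers, optional exceptions.
    public class MockSubmissionService : ISubmissionService
    {
        public string Response = "OK";
        public Exception ToThrow;

        public string Submit(string payload)
        {
            if (ToThrow != null) throw ToThrow;
            return Response;
        }
    }

    // e.g. var mock = new MockSubmissionService { ToThrow = new TimeoutException() };
    //      bool ok = new SubmissionProcessor(mock).Process("data");  // false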
The reason why I need loosely-coupled WCF is that Entity Framework is tightly coupled. When I say loosely coupled, I mean there's no need to instantiate the database context or add a service reference for WCF. It just relies on the web configuration or some .ini file, so nothing needs recompiling when developers need to change servers, IP addresses or service URLs.
Instead, the MVC side (say, a controller) will just send a request message and then get the response data from the WCF service. But we still cannot do without models based on the database (since we need them for IntelliSense in the view markup), which is where the WCF service gets its data. Let's say we already have those database object classes; we then create a repository that binds the WCF data to the MVC models.
What I mean by a WCF web service is one that ONLY contains messages - no more passing of object references - because that's the newer SOA definition. It makes more sense to pass messages instead of objects.
Is this a better approach in terms of scalability and performance? I don't mean to offend the Entity Framework fans.
It is an entirely valid approach to define a WCF web service in terms of message schemas which just use basic types, so that clients need know nothing about WCF in order to use the service. WCF would be useless for interop with other platforms (e.g. Java) otherwise.
Understand that WCF is a general and powerful framework for implementing communication over a variety of transport protocols. It can be used just as effectively for raw XML messaging as for programming in terms of objects. Object serialisation and deserialisation is an optional extra of the framework, not a requirement. (There is really no such thing as "passing an object reference" - ultimately it is an XML infoset which travels across the communication channel. Also, Entity Framework is not part of WCF - it is a distinct ORM framework which you can use with WCF if you want, but that's your choice.)
Scalability and performance is entirely orthogonal to the design of the service in terms of its data and operation contracts. You should feel free to adopt whatever approach to defining your services is best for your application. If that's XML messages, that's fine - don't let anyone tell you otherwise.
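For example, a message-centric contract using only basic types might look like this (names are illustrative); a Java or mobile client can consume it from the WSDL alone:

    using System.ServiceModel;

    [MessageContract]
    public class GetCustomerRequest
    {
        [MessageBodyMember]
        public int CustomerId { get; set; }
    }

    [MessageContract]
    public class GetCustomerResponse
    {
        [MessageBodyMember]
        public string Name { get; set; }

        [MessageBodyMember]
        public string Email { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        // One message in, one message out - no shared object model required.
        [OperationContract]
        GetCustomerResponse GetCustomer(GetCustomerRequest request);
    }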
I'm about to design my web service API; most of the functions of my API are basically very similar to those of my web application.
Now the question is: should I create one single method and reuse it for both the web application and the web service API? (This seems to be the logical solution; however, it's very complicated. It's much easier to duplicate the method used by the web application and keep both separate, i.e. one method for the web application and one for the web service.)
How do you guys do it?
1) REUSE: one main method, reused by both the web application and the web service (I like this but it's complicated)

    WebAppMethodX --uses--> CommonFunctionMethodX
    APIMethodX   ---uses--> CommonFunctionMethodX

i.e. CommonFunctionMethodX contains the reusable set of common features
PRO: less code, less maintenance, fewer bugs.
CON: very complicated
2) DUPLICATE: two methods, one method for the web application and one method for the web service.
WebAppMethodX
APIMethodX
PRO: simple
CON: duplication = more code, more maintenance, more bugs!
Your use case will very likely be different for your public web service API than for your internal application API. Create a common service project/tier and use that same tier from both your web app and your public-facing web service API. Create a separate HTTP-invokable method for each of your web app and your web service.
It comes down to there being:
1) Different security concerns. For instance, it is nice (often required) to provide a sample client application making use of your public API so that others can easily get up to speed with what you've provided. That client API may need to pass object constructs that you provide them which have been stripped of internal, secure logic/content. (Remember that compiled C# might as well be clear text with Reflector!)
2) Different needs and constraints. For instance, for an internal application call you're sometimes going to enforce different business rules vs. your public-facing web service API (with the latter often being much more constrained in scope).
If you design your business logic into your service layer and invoke those classes/methods from your web project and your web service project respectively, you're going to get a lot of code reuse anyway, without overcomplicating things by mixing use cases.
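A sketch of that shape (all names hypothetical): the business rules live once in the common tier, and each entry point stays thin and applies its own constraints.

    // Common service tier - the business rules live here, once.
    public class QuoteService
    {
        public decimal GetQuote(string symbol)
        {
            return symbol == "MSFT" ? 42m : 0m;   // stand-in business logic
        }
    }

    // Internal web app entry point: calls the tier directly.
    public class QuotePage
    {
        private readonly QuoteService _service = new QuoteService();

        public string RenderQuote(string symbol)
        {
            return _service.GetQuote(symbol).ToString("C");
        }
    }

    // Public-facing Web API entry point: same tier, stricter constraints.
    public class QuotesController : System.Web.Http.ApiController
    {
        private readonly QuoteService _service = new QuoteService();

        public decimal Get(string symbol)
        {
            if (string.IsNullOrEmpty(symbol))
                throw new System.ArgumentException("symbol is required");
            return _service.GetQuote(symbol);
        }
    }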
One method. Otherwise, when you find a bug and fix it in one, then forget to fix it in the other... you will cry.
One method, in the web service, and have your web application call it.
I don't understand what "one main method" for both means. Web applications don't have a main method; they're deployed to an app server.
One other point to note: you should write your service in terms of a POCO interface. Once you do that, deployment becomes a choice you make.
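In other words (a sketch with illustrative names): the contract is plain .NET, so the same implementation can run in-process today and behind a WCF endpoint tomorrow.

    // Plain .NET contract - no WCF attributes, no framework dependency.
    public interface IOrderLookup
    {
        string GetStatus(int orderId);
    }

    public class OrderLookup : IOrderLookup
    {
        public string GetStatus(int orderId)
        {
            return orderId > 0 ? "Shipped" : "Unknown";   // stand-in logic
        }
    }

    // In-process deployment: new OrderLookup() injected directly into callers.
    // Remote deployment: wrap the same interface in a WCF service contract and
    // have the web app consume it over the wire - the callers don't change.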
It depends...
Normally, I would separate them. This way you remove the interdependency between two high-level processes. Code reuse is good within a process, but sometimes you want to be able to use a different app on the same service.
If the two are highly dependent on each other, however, you will want to reuse the same functions so that changing them in one place changes them in the other, thus avoiding more potential issues in the development process.
Do you use auto-generated WCF service references in line of business applications? Or do you roll your own? And why?
EDIT
For anyone looking to roll their own, I found this article which may prove useful: Understanding WCF Services in Silverlight 2. There's another article on the site for Silverlight 3 which may be a useful addition: Understanding WCF Faults in Silverlight 3.
I typically roll my own, or tweak the ones generated by the wizard.
I have two scenarios, most of the time:
I control both ends of the wire - in that case, I share the assembly with the service and data contracts between the service and the client, and I write my own clients from scratch, as ClientBase<T> descendants or using a ChannelFactory<T>. Unfortunately, this is not an option with a Silverlight client, as far as I know :-(
I get WSDL+XSD from a third party - in that case, I typically use svcutil.exe to generate a first version of the client proxy, and then I tweak that to suit my needs (especially the configs generated by svcutil or VS "Add Service Reference", which are horrendously bad).
I just like to have that extra control of doing it myself and totally knowing what's going on.
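As an illustration of the first scenario (the contract and address here are made up), a hand-rolled client via ChannelFactory<T> is only a few lines:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStockService        // lives in the shared contract assembly
    {
        [OperationContract]
        decimal GetPrice(string ticker);
    }

    public static class HandRolledClient
    {
        public static void Main()
        {
            var factory = new ChannelFactory<IStockService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost:8080/stocks"));

            IStockService channel = factory.CreateChannel();
            try
            {
                Console.WriteLine(channel.GetPrice("MSFT"));
            }
            finally
            {
                ((IClientChannel)channel).Close();  // or Abort() on fault
                factory.Close();
            }
        }
    }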
I haven't had to use Silverlight to access a service I didn't control, but in accessing a WCF service that I do control, yeah, I use the standard auto-generated WCF references. Rolling my own would just be too painful when the service is changing regularly.
If you control both ends of the service, you should also strongly investigate RIA Services, which implements a much more elegant way of keeping your Silverlight client in sync with your WCF service than manually regenerating your service references each time the interface changes.