I hate MSDN's documentation for WCF RIA Services. It never says what it is; it only says what it does. It says what it can achieve, but not why I need it.
For example:
"A common problem when developing an
n-tier RIA solution is coordinating
application logic between the middle
tier and the presentation tier".
Well, it does not mean much to me.
"RIA Services solves this problem by
providing framework components, tools,
and services that make the application
logic on the server available to the
RIA client without requiring you to
manually duplicate that programming
logic. You can create a RIA client
that is aware of business rules and
know that the client is automatically
updated with latest middle tier logic
every time that the solution is
re-compiled."
So does it download DLLs from the server? Is it metadata describing the rules for the data?
So what is it? Is it just a VS 2010 add-on for RAD? Is it a technology on top of WCF, underneath it, or something else? Where does it live? With the data, on the server, where?
I'd appreciate it if you could summarise this for me, please.
RIA Services is a server-side technology that automatically generates client-side (Silverlight) objects that take care of the communication with the server for you and provide client-side validation.
The main object inside a RIA service is a DomainService, usually a LinqToEntitiesDomainService that is connected to a LinqToEntities model.
The key thing to remember about RIA Services is that it's mainly a sophisticated build trick. When you create a domain service and compile your solution, a client-side representation of your domain service is generated. This client-side representation has the same interface. Suppose you create a server-side domain service CustomerService with a method IQueryable<Customer> GetCustomersByCountry. When you build your solution, a class called CustomerContext is generated inside your Silverlight project with a method GetCustomersByCountryQuery. You can now use this method on the client as if you were calling it on the server.
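To make that concrete, here is a minimal sketch of both halves. The namespaces are from WCF RIA Services V1.0; NorthwindEntities and the property names are illustrative assumptions, not part of the question.

    // Server side: a domain service over an assumed LINQ to Entities model.
    using System.Linq;
    using System.ServiceModel.DomainServices.EntityFramework;
    using System.ServiceModel.DomainServices.Hosting;

    [EnableClientAccess]
    public class CustomerService : LinqToEntitiesDomainService<NorthwindEntities>
    {
        public IQueryable<Customer> GetCustomersByCountry(string country)
        {
            return this.ObjectContext.Customers.Where(c => c.Country == country);
        }
    }

    // Client side (Silverlight): the generated CustomerContext exposes a
    // matching query method; Load executes it asynchronously.
    var context = new CustomerContext();
    context.Load(context.GetCustomersByCountryQuery("Germany"),
        loadOp => { /* loadOp.Entities now holds the customers */ },
        null);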
Updates, inserts and deletes follow a different pattern. When you create a domain service, you can indicate whether you want to enable editing. The corresponding methods for update/insert/delete are then generated in the server-side domain service. However, the client-side part doesn't have these methods; what you have on your CustomerContext is a single method called SubmitChanges (see the sketch after this list). So how does this work?
For updates, you simply update properties of existing customers (that you retrieved via GetCustomersByCountryQuery).
For inserts, you use CustomerContext.Customers.Add(new Customer(...) {...}).
For deletes, you use CustomerContext.Customers.Remove(someCustomer).
When you're done editing, you call CustomerContext.SubmitChanges().
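Put together, the client-side edit pattern looks roughly like this; someCustomer stands for an entity previously loaded via GetCustomersByCountryQuery, and the property names are assumptions for the sketch.

    void EditCustomers(CustomerContext context, Customer someCustomer)
    {
        // Update: just change a property on a tracked entity.
        someCustomer.City = "Berlin";

        // Insert: add a new entity to the client-side entity set.
        context.Customers.Add(new Customer { CompanyName = "Contoso", Country = "Germany" });

        // Delete: remove an existing entity.
        context.Customers.Remove(someCustomer);

        // One asynchronous round trip sends all pending changes to the server.
        context.SubmitChanges(op =>
        {
            if (op.HasError)
            {
                // Inspect op.Error and op.EntitiesInError here.
                op.MarkErrorAsHandled();
            }
        }, null);
    }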
As for validation, you can decorate your server-side objects with validation attributes from the System.ComponentModel.DataAnnotations namespace. Again, when you build your project, validation code is now automatically generated for the corresponding client-side objects.
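For example, a server-side entity might be decorated like this (property names are illustrative; with a generated EF model you would typically put these attributes on a metadata "buddy" class instead). After a build, the generated client-side Customer enforces the same rules.

    using System.ComponentModel.DataAnnotations;

    public partial class Customer
    {
        [Required]
        [StringLength(40, ErrorMessage = "Company name is too long.")]
        public string CompanyName { get; set; }

        [Range(1, 120)]
        public int Age { get; set; }
    }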
I hope this explanation helps you a little further.
The latest news: WCF RIA Services is dead:
http://blogs.msmvps.com/deborahk/who-moved-my-cheese-ria-services/
If you want to use RIA Services, they have been open sourced:
http://www.openriaservices.net/blog/posts/
I'm working on a project that is our company's first foray into Domain-Driven Design.
Our Web API originally provided simple CRUD operations, and the project exposed OData controllers, but I'm not sure that is still a good idea.
Is OData a good way to expose non-CRUD APIs?
More info:
Initially, our Web API basically exposed CRUD functions. To create a new User, you would simply create one and POST it to the service. To change, for example, an address, you would get a copy of the user entity, make your changes, then perform an update operation. Basic OData stuff.
Beyond providing query support, OData also exposed the service in a readily consumable way, so it could be added to other projects as a service reference and accessed with a proxy.
Since we have moved over to a DDD approach, things have changed significantly. Our Web API is now simply a gateway to a number of independent sub-domain services. We no longer provide CRUD operations or direct access to entities; instead, we make service calls to manipulate entities. Instead of creating a User entity and sending it to the User service via a PUT request, a consumer must build a CreateUserBindingModel, send it to the User/Create service, and let the service generate the entity. Changing an address is done through the ChangeAddress(ChangeAddressBindingModel model) method rather than by updating the whole object. Queries are much more targeted and rarely, if ever, return entire domain objects.
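To illustrate, our command-style endpoints look roughly like this in ASP.NET Web API (a simplified sketch; IUserService and the return values are assumptions):

    using System.Web.Http;

    public class UserController : ApiController
    {
        private readonly IUserService _userService;

        public UserController(IUserService userService)
        {
            _userService = userService;
        }

        [HttpPost]
        public IHttpActionResult Create(CreateUserBindingModel model)
        {
            // The service, not the consumer, constructs the User entity.
            int id = _userService.CreateUser(model);
            return Ok(id);
        }

        [HttpPost]
        public IHttpActionResult ChangeAddress(ChangeAddressBindingModel model)
        {
            // A targeted operation instead of a whole-entity update.
            _userService.ChangeAddress(model);
            return Ok();
        }
    }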
Is it a bad idea to keep using OData as a basis for our Web API, when we no longer provide CRUD operations? Is there another way to expose the details of our service the way you can with OData? I know WCF services provide similar functionality, but I was under the impression it was even more tied to CRUD than OData.
OData is a data-oriented API spec, and it's anti-DDD. Although it can satisfy all your requirements for implementing REST APIs, it's best suited to data-processing APIs. I guess you already know that using OData feels like operating on the database over HTTP. If you are using DDD, you should forget OData entirely.
In OData, actions and functions are a way to add server-side behaviors that are not easily defined as CRUD operations on entities:
https://learn.microsoft.com/en-us/aspnet/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/odata-actions-and-functions
https://blogs.msdn.microsoft.com/alexj/2012/02/03/cqrs-with-odata-and-actions/
https://github.com/OData/ODataSamples/blob/master/WebApiCore/ODataActionSample/ODataActionSample/
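As a rough sketch, a ChangeAddress operation like the one in the question could be exposed as a bound OData action in ASP.NET Web API's OData support; the entity, parameter, and controller names here are illustrative assumptions.

    // EDM model configuration: declare the action on the Users entity set.
    var builder = new ODataConventionModelBuilder();
    var users = builder.EntitySet<User>("Users");
    var changeAddress = users.EntityType.Action("ChangeAddress");
    changeAddress.Parameter<string>("Street");
    changeAddress.Parameter<string>("City");

    // Controller: handles POST /Users({key})/Default.ChangeAddress
    public class UsersController : ODataController
    {
        [HttpPost]
        public IHttpActionResult ChangeAddress([FromODataUri] int key, ODataActionParameters parameters)
        {
            if (!ModelState.IsValid)
                return BadRequest();

            var street = (string)parameters["Street"];
            var city = (string)parameters["City"];
            // ...delegate to the domain service here...
            return Ok();
        }
    }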
The reason I need loosely-coupled WCF is that Entity Framework is tightly coupled. By loosely coupled, I mean there is no need to instantiate the database context or add a service reference for WCF. It just relies on web configuration or some .ini file, so no recompilation is required when developers need to change servers, IP addresses, or service URLs.
Instead, the MVC application (say, a controller) will just send a request message and then get the response data from the WCF service. But we still cannot do without Models based on the database (since we need them for IntelliSense in the view markup), which is where the WCF service gets its data. Let's say we already have those database object classes; we then create a repository that binds the WCF data to the MVC Models.
What I mean by a WCF web service is one that ONLY contains messages, with no more passing of object references, because that's the newer SOA definition. It makes more sense to pass messages instead of objects.
Is this a better approach in terms of scalability and performance? I don't mean to offend the Entity Framework fans.
It is an entirely valid approach to define a WCF web service in terms of message schemas which just use basic types, so that clients need know nothing about WCF in order to use the service. WCF would be useless for interop with other platforms (e.g. Java) otherwise.
Understand that WCF is a general and powerful framework for implementing communication over a variety of transport protocols. It can be equally effectively used for raw XML messaging as for programming in terms of objects. Object serialisation and deserialisation is an optional extra of the framework, not a requirement. (There is really no such thing as "passing of object reference" - ultimately it is an XML infoset which travels across the communication channel. Also, Entity Framework is not part of WCF - it is a distinct ORM Framework which you can use with WCF if you want, but that's your choice.)
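For instance, a message-centric contract built purely from basic types might look like this (all names are illustrative); nothing about it requires the client to share your domain assemblies.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class GetCustomerRequest
    {
        [DataMember] public int CustomerId { get; set; }
    }

    [DataContract]
    public class GetCustomerResponse
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string City { get; set; }
    }

    [ServiceContract]
    public interface ICustomerQueryService
    {
        // The wire format is just the XML serialisation of these messages,
        // so a Java or PHP client can call it from the WSDL alone.
        [OperationContract]
        GetCustomerResponse GetCustomer(GetCustomerRequest request);
    }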
Scalability and performance is entirely orthogonal to the design of the service in terms of its data and operation contracts. You should feel free to adopt whatever approach to defining your services is best for your application. If that's XML messages, that's fine - don't let anyone tell you otherwise.
I have a Windows Forms application (C#) and an ASP.NET web application, both of which access a SQL Server database. I want to centralize the database access. Which methodologies should I follow? What is the common approach to this issue?
Writing DAL and Model libraries and using them in both applications?
Writing a WCF service that includes the DAL and model, and using this service from both applications?
None of the above?
Can you give me any ideas?
Thank you.
I would go with the WCF approach. Keep in mind that when (not if, when) you have to make changes that pertain to one app, but not the other (yet), you will have to account for that in the common layer, so using interfaces may make your life a little easier.
The cleanest way is to wrap the DB with a WCF service.
If you don't write large amounts of data in one go you can use a WCF Data Service; this directly wraps an Entity Framework model and you can configure access to tables and methods in various ways.
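A minimal sketch, assuming an EF ObjectContext called MyEntities; the access rules let you expose, say, a read-only table without writing any operation code.

    using System.Data.Services;
    using System.Data.Services.Common;

    public class CustomerDataService : DataService<MyEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose Customers read-only; every other entity set stays hidden.
            config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }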
What you want is one place where the DB is accessed, so that if there is an issue, you can fix it in one location, for instance.
Furthermore, if you want to log all calls to a particular table, the only way to guarantee that is by centralizing all calls to the DB this way and not allowing anybody direct access to the DB.
Wrap the service, then keep the connection string secret.
I think using the SOA approach is really better (WCF or web services with a DAL layer) because this way you don't need to ship your DAL DLL with the Windows Forms exe. Then, all changes to your data model will automatically propagate to both of your UI clients.
Remember that this can cause its own problems:
Security: make sure your services cannot be accessed directly by URL, which would allow someone to run your methods.
Maintenance: changes in the data layer that need to affect only one interface will be harder to control and need to be planned more carefully (with the creation of new methods specific to a certain interface).
Performance: HTTP access is always more costly than direct communication with a DLL.
Connectivity: loss of communication with the server is something ASP.NET is expected to handle, but the Windows Forms client needs additional care to behave properly in these cases.
Option 1 seems simpler and I would do the same.
Option 2 with WCF will add additional code to your product and hence more maintenance. It would also mean an additional layer.
Corporate programmers like the second option (WCF service including DAL).
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do: everything they can do through our provided web interface, they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services API. That way lies potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change, you would have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do that.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than that. In other words, it should simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (six months in) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rule; I just couldn't make everything work with a single set of classes across the layers. Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
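As a quick sketch with AutoMapper (the MapperConfiguration API from version 4.2 onward; Customer and CustomerDto are illustrative types): properties with matching names are mapped by convention, so most of the hand-written proxy code disappears.

    using AutoMapper;

    public class Customer    { public int Id { get; set; } public string Name { get; set; } }
    public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

    class Program
    {
        static void Main()
        {
            var config = new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>());
            var mapper = config.CreateMapper();

            // Copies matching properties from the entity to the DTO.
            CustomerDto dto = mapper.Map<CustomerDto>(new Customer { Id = 1, Name = "Contoso" });
        }
    }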
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
Do you use auto-generated WCF service references in line of business applications? Or do you roll your own? And why?
EDIT
For anyone looking to roll their own, I found this article which may prove useful: Understanding WCF Services in Silverlight 2. There's another article on the site for Silverlight 3 which may be a useful addition: Understanding WCF Faults in Silverlight 3.
I typically roll my own, or tweak the proxies generated by the Add Service Reference wizard.
I have two scenarios, most of the time:
I control both ends of the wire: in that case, I share the assembly containing the service and data contracts between the service and the client, and I write my own clients from scratch, as ClientBase<T> descendants or using a ChannelFactory<T> (sketched below). Unfortunately, this is not an option with a Silverlight client, as far as I know :-(
I get WSDL+XSD from a third party: in that case, I typically use svcutil.exe to generate a first version of the client proxy, and then I tweak that to suit my needs (especially since the configs generated by svcutil or VS's "Add Service Reference" are horrendously bad).
I just like to have that extra control of doing it myself and totally knowing what's going on.
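For the first scenario, the hand-rolled client can be as small as this; ICustomerService, its operation, and the endpoint address are assumptions for the sketch.

    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        string GetCustomerName(int id);
    }

    // Build a proxy straight from the shared contract assembly.
    var factory = new ChannelFactory<ICustomerService>(
        new BasicHttpBinding(),
        new EndpointAddress("http://localhost:8080/CustomerService"));

    ICustomerService proxy = factory.CreateChannel();
    try
    {
        string name = proxy.GetCustomerName(42);
        ((IClientChannel)proxy).Close();
    }
    catch
    {
        // Never Close a faulted channel; Abort it instead.
        ((IClientChannel)proxy).Abort();
        throw;
    }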
I haven't had to use Silverlight to access a service I didn't control, but in accessing a WCF service that I do control, yeah, I use the standard auto-generated WCF references. Rolling my own would just be too painful when the service is changing regularly.
If you control both ends of the service, you should also strongly investigate RIA Services, which implements a much more elegant way of keeping your Silverlight client in sync with your WCF service than manually regenerating your service references each time the interface changes.