I'm planning a "universal" client-server architecture. The current structure looks like this:
Server
ASP.NET web app (hosted locally or in Azure)
Entity Framework Core data layer
Web API with a controller for each EF model, sending flat DTOs
Client
API client, receiving DTOs
View models for each view (MVVM), working with several different DTOs. Relationships between entities need to be manually reconnected via entity IDs.
As many tutorials show, that's the way to go. But I'm unsure about some things.
When calling my API from my view model, I need multiple calls (one for each entity I need). Wouldn't it be faster to just create an API controller for my view model that gives me everything I need in one call? That's against common patterns, I guess, but what speaks against it? The whole client-side logic is executed on the server, and the client stays clean and dumb.
The comfortable LINQ syntax of Entity Framework is not usable on the client side. Is there a comparable thing wrapping all the "receive-dto-and-create-relations" stuff?
I think you can treat the architecture ideas you read on the internet as abstractions; a good architecture is the one that actually solves your problems in the most effective way.
My advice is not to stick rigidly to those architectures: open your mind and think outside the box. You could create an API controller, or you could just create a Proxy class (http://www.dofactory.com/net/proxy-design-pattern) to abstract those API calls.
Another way: you could just join the data on the API side. It is not obligatory for the API to return data related to a single entity. You could just create another class and model the data the way you need it.
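To make that concrete, here is a minimal sketch of such a use-case-shaped endpoint, assuming an ASP.NET Core Web API with EF Core; all entity, DTO, and context names (ShopContext, OrderDetailsDto, ...) are illustrative:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Hypothetical EF Core entities.
public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
    public List<OrderLine> Lines { get; set; }
}
public class Customer  { public int Id { get; set; } public string Name { get; set; } }
public class OrderLine { public int Id { get; set; } public string Description { get; set; } }

public class ShopContext : DbContext
{
    public ShopContext(DbContextOptions<ShopContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

// One DTO shaped for one view, instead of one DTO per entity.
public class OrderDetailsDto
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public List<string> LineDescriptions { get; set; }
}

[ApiController]
[Route("api/order-details")]
public class OrderDetailsController : ControllerBase
{
    private readonly ShopContext _db;
    public OrderDetailsController(ShopContext db) => _db = db;

    // One call returns everything the view needs; the joins run on the
    // server, so the client never stitches entities together by ID.
    [HttpGet("{id}")]
    public async Task<OrderDetailsDto> Get(int id) =>
        await _db.Orders
            .Where(o => o.Id == id)
            .Select(o => new OrderDetailsDto
            {
                OrderId = o.Id,
                CustomerName = o.Customer.Name,
                LineDescriptions = o.Lines.Select(l => l.Description).ToList()
            })
            .SingleAsync();
}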
The comfortable LINQ syntax of Entity Framework is not usable on the client side. Is there a comparable thing wrapping all the "receive-dto-and-create-relations" stuff?
Well, you can always use lambda expressions to query your data. When you are working with a .NET collection, it is very much like Entity Framework's LINQ; the only difference is that you are not manipulating data directly from the database, only in-memory data.
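For example, a plain LINQ to Objects join can rebuild the relationships between flat DTOs in memory; the DTO shapes here are made up:

using System.Collections.Generic;
using System.Linq;

// Hypothetical flat DTOs, exactly as they arrive from the API.
public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }
public class OrderDto    { public int Id { get; set; } public int CustomerId { get; set; } }

public static class DtoJoins
{
    // LINQ to Objects rebuilds the relationship in memory by joining on the
    // foreign-key ID, just like a database join but over plain collections.
    public static ILookup<CustomerDto, OrderDto> OrdersByCustomer(
        IEnumerable<CustomerDto> customers, IEnumerable<OrderDto> orders)
    {
        return orders
            .Join(customers,
                  o => o.CustomerId,
                  c => c.Id,
                  (o, c) => new { Customer = c, Order = o })
            .ToLookup(x => x.Customer, x => x.Order);
    }
}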
When calling my API from my view model, I need multiple calls (one for each entity I need). Wouldn't it be faster to just create an API controller for my view model that gives me everything I need in one call? That's against common patterns, I guess, but what speaks against it? The whole client-side logic is executed on the server, and the client stays clean and dumb.
That is NOT against common patterns. Create your API methods according to your use cases and return all the data you need. It looks like you are trying to create a RESTful API, but a REST API doesn't have to be a plain CRUD API. Anything can be a resource.
The comfortable LINQ syntax of Entity Framework is not usable on the client side. Is there a comparable thing wrapping all the "receive-dto-and-create-relations" stuff?
A DTO is not necessarily flat. It might have other objects inside it. If you use a JSON serializer to pass data to the client, your DTO should be deserialized as a JavaScript object with all relations preserved. Also check out the Underscore.js library. It has many helpful functions for working with JavaScript objects in a LINQ-like way.
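As a tiny illustration (type names invented), a nested DTO like this round-trips through JSON with its relations intact:

// A DTO does not have to be flat: nested objects and collections
// survive JSON serialization as nested JavaScript objects.
public class ProjectDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public OwnerDto Owner { get; set; }                                  // nested relation
    public System.Collections.Generic.List<TaskDto> Tasks { get; set; } // nested collection
}
public class OwnerDto { public int Id { get; set; } public string Name { get; set; } }
public class TaskDto  { public int Id { get; set; } public string Title { get; set; } }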
Related
I have an application with several Web API controllers, and I now have a requirement to filter GET results by object properties. I've been looking at using OData, but I'm not sure if it's a good fit, for a couple of reasons:
The Web API controller does not have direct access to the DataContext; instead it gets data from our database through our "domain" layer, so it has no visibility into our Entity Framework models.
Tying into the first item, the Web API deals with lightweight DTO model objects which are produced in the domain layer. This is effectively what hides the EF models. The issue here is that I want these queries to be executed in our database, but by the time the Web API method gets a collection from the domain layer, all of the objects in the collection have been mapped to these DTO objects, so I don't see how the OData filter could possibly do its job when the objects are once-removed from EF in this way.
This item may be the most important one: We don't really want to allow arbitrary querying against our Web API/Database. We just sort of want to leverage this OData library to avoid writing our own filters, and filter parsers/builders for every type of object that could be returned by one of our Web API endpoints.
Am I on the wrong track based on #3? If not, would we be able to use this OData library without significant refactoring to how our Web API and our EF interact?
I haven't had experience with OData, but from what I can see it's designed to be fed a Context and manages the interaction and returning of those models. I am definitely not a fan of returning Entities in any form to a client.
It's an ugly situation to be in, but when faced with this, my first course of action is to push back to the clients to justify their searching needs. The default request is almost always "Well, it would be nice to be able to search against everything." My answer to that is that I don't want to know what you want, I want to know what you need because I don't want to give you a loaded gun to shoot your own foot off with and then have you blame me because the system came grinding to a halt. Searching is a huge performance killer if it's too open-ended. It's hard to test for accuracy/relevance, and efficiently index for 100% of possible search cases when users only need 25% of those scenarios. If the client cannot tell you what searching they will need, and just want everything because they might need it, then they don't need it yet.
Personally I stick to specific search DTOs and translate those into LINQ expressions.
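Roughly like this - a hypothetical search DTO whose properties are the only filters the API supports, translated into LINQ where-clauses:

using System.Linq;

// Hypothetical entity being searched.
public class Product
{
    public string Name { get; set; }
    public int CategoryId { get; set; }
    public decimal Price { get; set; }
}

// Each property is an explicitly supported filter - nothing else can reach the DB.
public class ProductSearchDto
{
    public string NameContains { get; set; }
    public int? CategoryId { get; set; }
    public decimal? MaxPrice { get; set; }
}

public static class ProductSearch
{
    // Translate the DTO into composable LINQ expressions; against an
    // IQueryable from EF these become SQL WHERE clauses.
    public static IQueryable<Product> Apply(IQueryable<Product> source, ProductSearchDto search)
    {
        if (!string.IsNullOrEmpty(search.NameContains))
            source = source.Where(p => p.Name.Contains(search.NameContains));
        if (search.CategoryId.HasValue)
            source = source.Where(p => p.CategoryId == search.CategoryId.Value);
        if (search.MaxPrice.HasValue)
            source = source.Where(p => p.Price <= search.MaxPrice.Value);
        return source;
    }
}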
If I was faced with a hard requirement to implement something like that, I would:
Try to push for these searches/reports to be done off a reporting replica that is synchronized with the live database. (To minimize the bleeding when some idiot managers fire up some wacky non-indexed search criteria so that it doesn't tie up the production DB where people are trying to do work.)
Create a new bounded DbContext specific to searching, with separate entity definitions that only expose the minimum number of properties needed to represent search criteria and IDs. (A sketch follows below.)
Hook this bounded context into the API and OData. It will return "search results". When a user selects a search result, use the ID(s) against the API to load the applicable domain, or initiate an action, etc.
No. 1 is optional, a nice-to-have, provided they can live with searches not "seeing" updated criteria until it has replicated (i.e. a few seconds to minutes, depending on replication strategy/size). Normally these searches are used for reporting-type queries, so I'd push to keep them separate from the normal day-to-day search options that users use (i.e. an advanced search option or the like).
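Here is a rough sketch of the bounded search context from step 2, written against EF Core; the table, entity, and property names are invented:

using Microsoft.EntityFrameworkCore;

// Search-only entity: just the criteria columns plus the ID needed to
// load the full domain object through the regular API afterwards.
public class ProductSearchResult
{
    public int ProductId { get; set; }
    public string Name { get; set; }
}

// Bounded context that exposes nothing but search projections.
public class SearchContext : DbContext
{
    public SearchContext(DbContextOptions<SearchContext> options) : base(options) { }

    public DbSet<ProductSearchResult> ProductSearchResults => Set<ProductSearchResult>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Maps onto the same table (or a reporting view/replica) that the
        // main context uses, without exposing the rest of its columns.
        var result = modelBuilder.Entity<ProductSearchResult>();
        result.ToTable("Products");
        result.HasKey(r => r.ProductId);
    }
}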
I'm about to start a project which will require a web site, connected to a web service. The web service will retrieve data from a database, and return it to the website.
My question is about how to properly separate business concerns with data access.
The service can be separated by utilizing the repository pattern. Then, in the service call implementations, I can get the required data from the repository in the form of entities and return it over the wire.
I can do the same on the website: use the repository to hide the implementation details of getting the data from the service and deserializing it into an entity or entities.
However, my main issue with this approach is that the service and the website will both have definitions for their entities. Is there a pattern I can use that will allow me to define these entities once, or is this architecture way off from what is common / best practice?
I should mention that the technologies I'm using are ASP.NET with C#, and I'm not using Entity Framework.
Create a WCF Data Service and a client for it in the very same solution. Visual Studio will let you use on the client side the very same classes and model that you define on the service side.
Bonus: if you use the concept right, the IQueryable (not the result) can be marshalled to the client side, so you can even compose ad-hoc queries on the client (supposing your repository's methods return IQueryable); only the result will travel over the wire. This is important for paging scenarios too.
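For illustration, a client-side query might look like this; NorthwindEntities stands in for the typed context that "Add Service Reference" generates, and the entity names are assumed:

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        // NorthwindEntities stands in for the typed DataServiceContext that
        // Visual Studio generates from the service reference.
        var context = new NorthwindEntities(new Uri("http://localhost/Northwind.svc/"));

        // This LINQ query is composed on the client but translated into an
        // OData URL (.../Products?$filter=UnitPrice gt 20&$top=10), so only
        // the ten matching rows travel over the wire - handy for paging.
        var expensive = context.Products
            .Where(p => p.UnitPrice > 20)
            .Take(10)
            .ToList();

        foreach (var product in expensive)
            Console.WriteLine(product.ProductName);
    }
}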
Start reading here
Sounds daft, I know, but I want to do something a bit out of the ordinary ...
Essentially I'm looking to build a solution that has a WCF Data Service at the back end (or something of that ilk, at least) that allows me to query my database using simple URL syntax.
The problem I have is that when my DB schema changes I have to recompile the entire back end, and that's not good, because the solution I'm building allows the definition of "entities", so to speak.
Essentially what I want to do is have the model update every time the DB updates ... as a sort of triggered event.
I'm thinking that EF won't do this, which leads me to my actual question ...
How would you solve this problem?
I need exactly what a WCF Data Service offers out of the box ... just with a more dynamic data model beneath it.
You need to change the O/RM to something more dynamic ... something like Massive could be used instead of EF.
Someone looks to be doing something similar with the WCF Web API ... Massive with WCF Web Api to return dynamic types/Expandos?
If you use Data Services then you'd need to figure out some way to represent Massive as a 'DataContext'. The WCF Web API, on the other hand, would serialise dynamic objects as a lump of JSON or XML where required.
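For reference, a minimal Massive model looks something like this (the connection string, table, and column names are placeholders); because rows come back as dynamic ExpandoObjects, new columns appear without recompiling a model:

using System;
using Massive; // single-file O/RM: https://github.com/robconery/massive

// No compiled model: each row comes back as a dynamic ExpandoObject whose
// properties mirror whatever columns the table has right now.
public class Products : DynamicModel
{
    // "northwind" is a connection string name; table/PK names are placeholders.
    public Products() : base("northwind", "Products", "ProductID") { }
}

class Demo
{
    static void Main()
    {
        var table = new Products();

        // Columns the DBA adds later show up on the dynamic rows automatically.
        foreach (var row in table.All(where: "WHERE CategoryID = @0", args: new object[] { 5 }))
            Console.WriteLine(row.ProductName);
    }
}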
The problem with your proposed approach is that the web service contract becomes dynamic and unversioned. This means that if you delete/rename/change a field, you have essentially changed the 'contract' that clients use to consume the web service, which can break a client unless it is updated at the same time.
If you are looking for a low-friction way of managing model changes and updating the database, I have found that EF Code First 4.2 and EF Migrations work pretty well for me. 0.7.0.1 is reasonably stable, and it's all available from NuGet.
We are developing a 3-tier application with a WPF client, which communicates through WCF with the BLL. We use EF to access our database.
We have been using the default EntityObject code generator of EF, but had lots of problems and serialization issues when sending those objects over the wire, and when processing and reattaching them in the BLL.
We are about to switch to the POCO template, and rewrite the data access and the communication parts of our app (we are hoping to have a cleaner architecture and less "magic code" that way).
My question is whether it is a good idea to reuse the POCO classes on the client side? Or should we create separate DTO classes? Even if they would be identical to the POCO entity classes? What are the pros/cons of the two approaches?
Definitely use DTOs + AutoMapper. Otherwise you'll have tons of problems with the DataContractSerializer when using WCF, due to circular dependencies (especially problematic with navigation properties). Even if you omit DTOs initially, you'll be forced to use them later on because of the problems mentioned above, so I would advise using proper DTOs for each tier.
Also, your tier-specific models will carry different attributes. You may also need to modify (i.e. specialize) the data that you carry up in each tier. So if your project is large enough (or has the prospect of becoming so), use DTOs with proper naming and place them in proper locations (i.e. not all in the same assembly).
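A minimal sketch of the DTO + AutoMapper approach, with invented types; note how the flattened DTO breaks the circular Customer/Order reference that trips up the DataContractSerializer:

using System.Collections.Generic;
using AutoMapper;

// Hypothetical EF POCOs with the kind of circular navigation properties
// that give the DataContractSerializer trouble over WCF.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Order> Orders { get; set; }   // Customer -> Orders -> Customer cycle
}
public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
}

// Flattened DTO: no cycles, only what the client tier actually needs.
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

class MappingDemo
{
    static void Main()
    {
        // AutoMapper flattens Order.Customer.Name into CustomerName by convention.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
        var mapper = config.CreateMapper();

        var order = new Order { Id = 1, Customer = new Customer { Id = 2, Name = "Acme" } };
        OrderDto dto = mapper.Map<OrderDto>(order);
        System.Console.WriteLine(dto.CustomerName);   // "Acme"
    }
}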
I'm working on a similar issue now. Here is what I have done:
Created the following assemblies:
SF.Contracts - just defines the ServiceContracts and DataContracts. Obviously, all DataContracts may be used like POCO classes in EF (but I don't use T4 or any other generator - all the POCO classes and the DataContext are written manually, because I have to work against a very bad database).
SF.DataAccessObjects - in this assembly I implement my EDMX and DataContext.
SF.Services - the implementation of the WCF services.
So a large number of the select-style WCF methods have the following signature and implementation:
public List<Vulner> VulnerSelect(int[] idList = null, string[] navigationPropertiesList = null)
{
    // Start from the full set and narrow it down.
    IQueryable<Vulner> query = _businessModel.DataModel.VulnerSet;

    // Apply eager loading for each requested navigation property.
    // (A foreach is needed here: a lazy Select() would never execute.)
    if (navigationPropertiesList != null)
        foreach (var property in navigationPropertiesList)
            query = ((ObjectQuery<Vulner>)query).Include(property);

    if (idList != null)
        query = query.Where(p => idList.Contains(p.Id));

    return query.ToList();
}
and you can use this method like this:
WCFproxy.VulnerSelect(new[]{ 1, 2, 3 }, new[]{ "SecurityObjects", "SecurityObjects.Problem" });
so you have no problems with serialization, navigation properties, etc., and you can clearly indicate which navigation properties must be loaded.
P.S.: sorry for my bad English :)
I'd say use DTOs.
Circular dependencies and large object graphs can be a serious problem, causing either errors or far too much serialised traffic. There's just way too much noise on an ORM-controlled object to send it down the line.
I use a service layer to access my domain objects and use LINQ extensively, but I always return DTO objects back to the client.
I am starting a new ASP.NET MVC project to learn with, and am wondering what's the optimal way to set up the project(s) to connect to a SQL Server for the data. For example, let's pretend we have a Product table and a Product object I want to use to populate data in my view.
I know somewhere in here I should have an interface that gets implemented, etc but I can't wrap my mind around it today :-(
EDIT: Right now (i.e. in the current, poorly coded version of this app) I am just using plain old SQL Server (2000, even) with only stored procedures for data access, but I would not be averse to adding in an extra layer of flexibility for using LINQ to SQL or something.
EDIT #2: One thing I wanted to add: I will be writing this against a V1 of the database, and I will need to be able to let our DBA rework the database and give me a V2 later. So it would be nice to only have to change a few small things that are not provided by the database now but will be later, rather than having to write a whole new DAL.
It really depends on which data access technology you're using. If you're using LINQ to SQL, you might want to abstract away the data access behind some sort of "repository" interface, such as an IProductRepository. The main appeal of this is that you can swap out the specific data access implementation at any time (such as when writing unit tests).
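Something like this, for instance - the interface is the only thing the rest of the application sees, and the in-memory implementation here (names invented) is the kind you'd swap in for unit tests:

using System.Collections.Generic;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The rest of the application only ever sees this abstraction.
public interface IProductRepository
{
    IEnumerable<Product> GetAll();
    Product GetById(int id);
}

// One implementation per data access technology. This in-memory fake is the
// kind you'd inject in unit tests; a LINQ to SQL or EF backed class would
// satisfy the same interface in production.
public class InMemoryProductRepository : IProductRepository
{
    private readonly List<Product> _products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" },
    };

    public IEnumerable<Product> GetAll() => _products;
    public Product GetById(int id) => _products.Single(p => p.Id == id);
}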
I've tried to cover some of this here:
I would check out Rob Conery's videos on his creation of an MVC store front. The series can be found here: MVC Store Front Series
This series dives into all sorts of design-related subjects as well as coding/testing practices to use with MVC and other projects.
In my site's solution, I have the MVC web application project and a "common" project that contains my POCOs (plain ol' C# objects), business managers and data access layers.
The DAL classes are tied to SQL Server (I didn't abstract them out) and return POCOs to the business managers that I call from my controllers in the MVC project.
I think that Billy McCafferty's S#arp Architecture is a quite nice example of using ASP.NET MVC with a data access layer (using NHibernate as default), dependency injection (Ninject atm, but there are plans to support the CommonServiceLocator) and test-driven development. The framework is still in development, but I consider it quite good and stable. As of the current release, there should be few breaking changes until there is a final release, so coding against it should be okay.
I have done a few MVC applications and I have found a structure that works very nicely for me. It is based upon Rob Conery's MVC Storefront Series that JPrescottSanders mentioned (although the link he posted is wrong).
So here goes - I usually try to restrict my controllers to only contain view logic. This includes retrieving data to pass on to the views and mapping from data passed back from the view to the domain model. The key is to try and keep business logic out of this layer.
To this end I usually end up with 3 layers in my application. The first is the presentation layer - the controllers. The second is the service layer - this layer is responsible for executing complex queries as well as things like validation. The third layer is the repository layer - this layer is responsible for all access to the database.
So in your products example, this would mean that you would have a ProductRepository with methods such as GetProducts() and SaveProduct(Product product). You would also have a ProductService (which depends on the ProductRepository) with methods such as GetProductsForUser(User user), GetProductsWithCategory(Category category) and SaveProduct(Product product). Things like validation would also happen here. Finally your controller would depend on your service layer for retrieving and storing products.
You can get away with skipping the service layer but you will usually find that your controllers get very fat and tend to do too much. I have tried this architecture quite a few times and it tends to work quite nicely, especially since it supports TDD and automated testing very well.
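To sketch those three layers with the method names from above (domain types invented, and the controller reduced to its dependency):

using System.Collections.Generic;
using System.Linq;

// Hypothetical domain types.
public class Product  { public int Id { get; set; } public string Name { get; set; } public int CategoryId { get; set; } }
public class Category { public int Id { get; set; } public string Name { get; set; } }

// Layer 3 - repository: all database access hides behind this interface.
public interface IProductRepository
{
    IEnumerable<Product> GetProducts();
    void SaveProduct(Product product);
}

// Layer 2 - service: complex queries and validation, built on the repository.
public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository) { _repository = repository; }

    public IEnumerable<Product> GetProductsWithCategory(Category category) =>
        _repository.GetProducts().Where(p => p.CategoryId == category.Id);

    public void SaveProduct(Product product)
    {
        if (string.IsNullOrEmpty(product.Name))
            throw new System.ArgumentException("A product needs a name.");
        _repository.SaveProduct(product);
    }
}

// Layer 1 - presentation: the controller depends on the service and only
// maps between views and domain objects (MVC base class omitted here).
public class ProductsController
{
    private readonly ProductService _service;
    public ProductsController(ProductService service) { _service = service; }
}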
For our application I plan on using LINQ to Entities, but as it's new to me there is the possibility that I will want to replace it in the future if it doesn't perform as I would like, and use something else like LINQ to SQL or NHibernate, so I'll be abstracting the data access objects into an abstract factory so that the implementation is hidden from the application.
How you do it is up to you, as long as you choose a proven and well know design pattern for implementation I think your final product will be well supported and robust.
Use LINQ. Create a LINQ to SQL file and drag and drop all the tables and views you need. Then when you call your model all of your CRUD level stuff is created for you automagically.
LINQ is the best thing I have seen in a long long time. Here are some simple samples for grabbing data from Scott Gu's blog.
LINQ Tutorial
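For example, a typical LINQ to SQL query along the lines of Scott Gu's samples; NorthwindDataContext stands in for whatever the .dbml designer generates for you:

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        // NorthwindDataContext stands in for whatever class the .dbml
        // designer generates after you drag your tables onto it.
        using (var db = new NorthwindDataContext())
        {
            var beverages = from p in db.Products
                            where p.Category.CategoryName == "Beverages"
                            orderby p.ProductName
                            select p;

            foreach (var product in beverages)
                Console.WriteLine("{0}: {1:C}", product.ProductName, product.UnitPrice);
        }
    }
}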
I just did my first MVC project and I used a Service-Repository design pattern. There is a good bit of information about it on the net right now. It made my transition from LINQ to SQL to Entity Framework effortless. If you think you're going to be changing things a lot, put in the little extra effort to code against interfaces.
I recommend Entity Framework for your DAL/Repository.
Check out the Code Camp Server for a good reference application that does this very thing, and as #haacked stated, abstract that goo away to keep them separated.
I think you need an ORM - for example, Entity Framework (Code First).
You can create some model classes, use those models for your logic and views, and map them to DB (v1).
When the DBA gives you the new DB (v2), you only change the mapping configuration - provided v1 and v2 are both relational databases (SQL Server, MySQL, Oracle, ...). If DB (v1) is relational and DB (v2) is NoSQL (Mongo, Redis, Couchbase, ...), that won't work.
You may also need to do some find-and-replace.
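A small sketch of that idea with EF Code First (table and column names invented): the model class stays the same, and only the mapping in OnModelCreating changes between DB v1 and v2:

using System.Data.Entity; // EF Code First

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // v1 mapping. When the DBA ships v2 with renamed tables or columns,
        // only these lines change - the model class and the rest of the
        // application keep compiling untouched.
        var product = modelBuilder.Entity<Product>();
        product.ToTable("tblProducts");                             // assumed v1 table name
        product.Property(p => p.Name).HasColumnName("ProductName"); // assumed v1 column name
    }
}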