How many entities should a RIA domain service include? (C#)

I was wondering how exactly to implement a domain service in RIA. Is it common to include all entities in the entire domain model in a single domain service, thus making the service responsible for the entire database? Is this the way it's normally done? I really have no reason to separate data access into different services, but I was wondering if this is considered a good practice, and what the pros and cons of such an approach would be.
Also, is it considered good or bad practice to register the domain context as a singleton with IoC, so that the entire application works with the same set of data, thus avoiding concurrency issues and similar problems?
Thoughts?
Thank you

We have two separate services in our app: one for the data model and one strictly used for authentication. We took this design from MS's business sample app structure.
We considered breaking up our data domain service into smaller components but decided against it because it didn't seem to add any advantage (other than reducing service class size). If you have distinct data models that are completely independent from each other then going that route might make sense. Intuitively, the domain service should represent the entire domain. If your domains are independent (with the occasional need for crossover) then it makes logical sense to segregate them that way.
Regarding using the context as a Singleton: I tried that and ended up creating class-scope instances instead. We haven't experienced any issues doing it this way as they all use the same underlying data connection. I don't know what the "official" best practice is, but this is the way I've seen it done in numerous RIA apps.
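For illustration, here is a minimal sketch of that two-service split in WCF RIA Services, roughly following the Business Application template; the entity model (AppEntities) and the entity types are placeholders, not taken from the original app:

```csharp
using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server.ApplicationServices;

// Data-model service: a single service covering the whole domain model.
[EnableClientAccess]
public class AppDomainService : LinqToEntitiesDomainService<AppEntities>
{
    public IQueryable<Order> GetOrders()
    {
        return this.ObjectContext.Orders;
    }

    public IQueryable<Customer> GetCustomers()
    {
        return this.ObjectContext.Customers;
    }
}

// Separate service used strictly for authentication, as in the sample app.
[EnableClientAccess]
public class AuthenticationService : AuthenticationBase<User> { }

public class User : UserBase
{
    // Profile properties (roles, settings, etc.) would go here.
}
```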

Thanks Nick. I actually did the same thing as you, I built two services, one for authentication and one for data access. That seems most logical to me.
As for making datacontext a singleton, I've tried that as well and it works nicely. No need to constantly reload and refresh data and worry about concurrency issues in other classes :)
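For reference, a minimal sketch of that kind of singleton registration, assuming Unity as the IoC container and a generated AppDomainContext (both are illustrative choices, not something prescribed by RIA Services):

```csharp
using Microsoft.Practices.Unity;

public static class Bootstrapper
{
    public static IUnityContainer Configure()
    {
        var container = new UnityContainer();

        // Register one DomainContext instance for the whole application;
        // every resolve returns this same instance.
        container.RegisterInstance(new AppDomainContext());

        return container;
    }
}

public class OrdersViewModel
{
    private readonly AppDomainContext _context;

    // The shared context is injected instead of being newed up per class,
    // so all ViewModels see the same loaded entities.
    public OrdersViewModel(AppDomainContext context)
    {
        _context = context;
    }
}
```

The trade-off discussed above still applies: a single shared context means shared change tracking, so one screen's pending edits are visible to every other screen that uses the context.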


Implementing caching of services in Domain project being used by a Web API

My question is: how do I implement caching in my domain project, which is working like a normal stack with the repository pattern.
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project, for instance, has:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods would have to call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? That seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could make some cache layer, but I guess that would mean my WebAPI wouldn't call my OrderRepository anymore, but my CacheOrderRepository-something?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point you have at least two likely reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
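As a minimal sketch of option #1, assuming MemoryCache and an IOrderRepository trimmed down to just the members used here (the key format is illustrative, not a recommendation):

```csharp
using System;
using System.Runtime.Caching;

public class Order
{
    public int Id { get; set; }
}

// Interface reduced to the members needed for the sketch.
public interface IOrderRepository
{
    Order Get(int id);
    void Update(Order order);
}

// Decorator that adds caching on top of an existing repository while
// exposing the same interface (option #1 above).
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;
    private readonly ObjectCache _cache = MemoryCache.Default;

    public CachedOrderRepository(IOrderRepository inner)
    {
        _inner = inner;
    }

    public Order Get(int id)
    {
        string key = "Order_" + id;
        var cached = (Order)_cache.Get(key);
        if (cached != null)
            return cached;

        var order = _inner.Get(id);
        if (order != null)
            _cache.Set(key, order, DateTimeOffset.Now.AddMinutes(10));
        return order;
    }

    public void Update(Order order)
    {
        _inner.Update(order);
        // Simplest possible invalidation: drop the stale entry.
        _cache.Remove("Order_" + order.Id);
    }
}
```

Since the question mentions Windsor, one way to wire this up is to register CachedOrderRepository for IOrderRepository ahead of the concrete OrderRepository; Windsor then resolves the decorator first and satisfies its constructor dependency with the next registration for the same service.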
There are probably a million different ways to do this, but it seems to me (given that the intent of caching is to improve performance) that you could implement the cache along the lines of the repository pattern itself: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, since any data-structure-related change may need to be implemented in multiple places, and concurrency issues enter the fray. Just some thoughts...
You could also look at SqlCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx . This gives you caching that is invalidated when other systems apply updates as well.
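A rough sketch of that idea, assuming SQL Server query notifications are available and SqlDependency.Start(connectionString) has been called at application startup; the table, columns and cache key are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class OrderCache
{
    public static DataTable GetOrders(string connectionString)
    {
        var cached = (DataTable)HttpRuntime.Cache["AllOrders"];
        if (cached != null)
            return cached;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT OrderId, Total FROM dbo.Orders", connection))
        {
            // The dependency must be attached before the command executes.
            var dependency = new SqlCacheDependency(command);

            var table = new DataTable();
            connection.Open();
            table.Load(command.ExecuteReader());

            // The entry is evicted automatically when the Orders table changes.
            HttpRuntime.Cache.Insert("AllOrders", table, dependency);
            return table;
        }
    }
}
```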
There are multiple levels of caching depending on the situation. However, if you are looking for generic, centralized caching with a low number of changes, I think you will want EF second-level caching; for more details check the following: http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also cache at the Web API level.
Also consider the network traffic between MVC and Web API if they are hosted in two different data centers.
For a portal with heavy read access you might consider Redis: http://Redis.io
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcache. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer because the MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly. This prevents you from having to wait for data to be assembled from a cache that mirrors your data layer. For example, requesting Order123 would only require a single cache read rather than several reads for any FK data. Your caching layer would of course need to contain the logic to invalidate the cache on UPDATEs you perform. A recommended way would be to retrieve the cached order object and modify its properties directly, then persist to the DB asynchronously.
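A minimal sketch of such a caching layer built on MemoryCache, with no dependency on System.Web; the helper and key format are illustrative, not a prescribed API:

```csharp
using System;
using System.Runtime.Caching;

public class DtoCache
{
    private readonly ObjectCache _cache = MemoryCache.Default;

    // Return the cached value if present; otherwise build it, cache it, and return it.
    public T GetOrAdd<T>(string key, Func<T> factory, TimeSpan timeToLive) where T : class
    {
        var existing = _cache.Get(key) as T;
        if (existing != null)
            return existing;

        var value = factory();
        _cache.Set(key, value, DateTimeOffset.Now.Add(timeToLive));
        return value;
    }

    // Called on UPDATEs so the next read rebuilds the DTO.
    public void Invalidate(string key)
    {
        _cache.Remove(key);
    }
}
```

Usage would look something like cache.GetOrAdd("Order_123", () => BuildOrderDto(123), TimeSpan.FromMinutes(5)), where BuildOrderDto is whatever code assembles the DTO from your repositories.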

ObservableCollection in the service layer of the WPF MVVM application

Examples of WPF MVVM apps I've seen on the Internet treat the VM as a layer which interacts with a service layer that either uses "old" events from an external library, or talks to the web over HTTP or whatever. But what if I build all the M, V, VM, service and other parts myself? How do I properly build the interaction between the service layer and the viewmodel layer? Can I just put an ObservableCollection<OrderModel> into the service and return it as is from the viewmodel for the view, or is that considered a bad approach and there are better alternatives?
You can do this - of course you can. The primary reason to do such a thing would be to reduce duplication across multiple WPF applications.
However, a challenge you might have in some scenarios, depending on your service layer/data layer implementation, is long-running services that in turn use database connections. ObservableCollections are enticing from the point of view of having the service layer automatically synchronising changes made by an application to a data store; however it gets complicated when you want to communicate changes that originate from the data itself (i.e. in response to some other process that creates/modifies data).
The service layer can't really replace the instance (i.e. in the case of large-scale changes), since it is no longer the sole owner of the reference - but even if it could, replacing the instance would pretty much break any binding the UI has to the collection.
So you stick to trying to keep the one instance up to date. If your services are bound to a database, then unless you code up some form of long-running monitoring process within your service, the only simple way to keep an ObservableCollection up to date after it's been dished out would be to hold database connections/contexts (in the case of LINQ to SQL or EF) open - because otherwise related objects etc. are not going to be retrievable (unless you force all objects to be read in one go - which is not scalable).
Okay, so it's possible to write some form of management layer which can manage the connections for you - but in addition to the inevitable polling, or perhaps SQL Server notifications that you might use, I believe the code might get quite complicated.
That said, it really does depend - that particular issue is one to look out for, but it might be that you have an architecture and environment in which such things simply don't matter.
My advice, if you want to try it - go ahead. For me? I've thought about it - and beyond adding INotifyPropertyChanged to some domain models, I stick to the idea that an application has its own VM. Multiple applications might share the same VM - but that won't be internal to the service layer itself.
A service layer provides access to data and business logic in a typically one-shot way. Classes in the VM pattern are intended to have a much longer lifespan - and trying to code a long-running service layer is notoriously very hard to do - especially if you want it to try and solve all the problems that all future applications might present. Inevitably you will end up coding services or VM types within the service layer for a single application only - in which case it might as well have gone in that App's codebase.
I'd be tempted to use an ObservableCollection only from the point at which the "observable" aspect is relevant, which is generally the VM exposing something to the V. Further down the stack (i.e. the M) I'd be tempted to stick with more generic things like lists and collections (unless you specifically need for things to be otherwise). It's easy enough for the VM to create an ObservableCollection based on any old IEnumerable in any case.
A reasonable question though, especially as ObservableCollection's placement in the System.Collections.ObjectModel namespace would seem to suggest that Microsoft don't particularly think of it as a specialized class (and certainly not WPF-specific).
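To illustrate that last point, a small sketch (all type names are placeholders): the service returns a plain list, and the VM wraps it in an ObservableCollection only where the view binds to it:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class OrderModel { }

// Service layer: one-shot call, no long-lived observable state.
public class OrderService
{
    public IList<OrderModel> GetOrders()
    {
        return new List<OrderModel>();
    }
}

// ViewModel: the "observable" aspect appears only where the V binds to the VM.
public class OrderListViewModel
{
    public ObservableCollection<OrderModel> Orders { get; private set; }

    public OrderListViewModel(OrderService service)
    {
        Orders = new ObservableCollection<OrderModel>(service.GetOrders());
    }
}
```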
I wouldn't do that for a number of reasons. They're documented here: Common mistakes with an observable collection
The author goes through several mistakes people make with them, including using them in the service layer.

Database access from multiple applications

I have a Windows Forms application (C#) and an ASP.NET web application which both access a SQL Server database. I want to centralize the database access. Which methodologies should I follow? What is the common approach to this issue?
Writing DAL and model libraries and using them in both applications?
Writing a WCF service that includes the DAL/model and using this service with both applications?
None of the above?
Can you give me any ideas?
Thank you.
I would go with the WCF approach. Keep in mind that when (not if, when) you have to make changes that pertain to one app, but not the other (yet), you will have to account for that in the common layer, so using interfaces may make your life a little easier.
The cleanest way is to wrap the DB with a WCF service.
If you don't write large amounts of data in one go you can use a WCF Data Service; this directly wraps an Entity Framework model and you can configure access to tables and methods in various ways.
What you want is to have one place where the DB is accessed, so that if there is an issue, you can fix it in one location, for instance.
Furthermore, if you want to log all calls to a particular table, for instance, the only way to make sure that will be done is by centralizing all calls to the DB this way and not allow anybody direct access to the DB.
Wrap the DB in the service, and keep the connection string secret.
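For what option 2 might look like, a minimal sketch of a WCF service sitting in front of the DAL, consumed by both the Windows Forms client and the ASP.NET site; the contract, DTO and DAL abstraction are all placeholder names:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);

    [OperationContract]
    void UpdateOrder(OrderDto order);
}

[DataContract]
public class OrderDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Minimal DAL abstraction so the sketch is self-contained; the real one
// would wrap Entity Framework or plain ADO.NET.
public interface IOrderDal
{
    OrderDto GetOrder(int orderId);
    void UpdateOrder(OrderDto order);
}

public class OrderService : IOrderService
{
    private readonly IOrderDal _dal;

    public OrderService(IOrderDal dal)
    {
        _dal = dal;
    }

    public OrderDto GetOrder(int orderId)
    {
        // Single place to add logging, validation and access checks;
        // the connection string never leaves the server.
        return _dal.GetOrder(orderId);
    }

    public void UpdateOrder(OrderDto order)
    {
        _dal.UpdateOrder(order);
    }
}
```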
I think using the SOA approach is really better (WCF or web services with a DAL layer) because this way you don't need to publish your DAL DLL with the Windows Forms exe. Then all changes to your data model will automatically reach both of your UI clients.
Remember that this can cause its own problems:
Security concerns, so that your services cannot be accessed directly by URL, allowing someone to run your methods.
Maintenance concerns, because changes in the data layer that need to affect only one interface will be more difficult to control and need to be planned more carefully (with the creation of new methods specific to a certain interface).
A decrease in performance, because HTTP access is always more costly than direct communication with a DLL.
The risk of losing communication with the server, something that is expected in ASP.NET but requires additional handling in the Windows Forms client so that it behaves properly in these cases.
Option 1 seems simpler and I would do the same.
Option 2 with WCF will add additional code to your product, and hence maintenance; it would also mean an additional layer.
Corporate programmers like the second option (WCF service including DAL).

Designing an API: Use the Data Layer objects or copy/duplicate?

Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do: everything they can do through our provided web interface they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way leads to potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly to the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I first started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (6 months later) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or was following some rules; I just couldn't make everything work with a single set of classes across layers... Draw your own conclusions.
P.S. There are tools that can help you convert your objects, like AutoMapper and ValueInjecter.
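For example, a small sketch using AutoMapper (the configuration API differs between versions; this uses the MapperConfiguration style); Order stands in for the Entity Framework object and OrderDto for the API-facing model:

```csharp
using AutoMapper;

// Placeholder persistence entity and API model.
public class Order    { public int Id { get; set; } public decimal Total { get; set; } }
public class OrderDto { public int Id { get; set; } public decimal Total { get; set; } }

public static class MappingConfig
{
    public static IMapper CreateMapper()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Order, OrderDto>();
            cfg.CreateMap<OrderDto, Order>();
        });
        return config.CreateMapper();
    }
}

// Usage: var dto = MappingConfig.CreateMapper().Map<OrderDto>(orderEntity);
```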
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.

Sharing domain model with WCF service

Is it good practice to reference my web application's domain layer class library from a WCF service application?
Doing that gives me easy access to the already existing classes in my domain model, so I will not need to re-define similar classes for the WCF service.
On the other hand, I don't like the coupling that it creates between the application and the service, and I am curious whether it could create difficulties for me in the long run.
I also think having dedicated classes for my WCF app would be more efficient, since those classes would only contain the members that are used by the service and nothing else. If I use the classes from my domain layer, there will be many fields in those classes that are not used by the service, which will cause unnecessary data transfer.
I would appreciate it if you could give me your thoughts from your experience.
No, it's not. Entities are all about behaviour; a data contract is all about... data. Plus, as you mentioned, you wouldn't want to couple them together, because it would cripple your ability to react to change very soon.
For those still coming across this post, like I did...
Check out this site. It's a good explanation of the topic.
Conclusion: Go through the effort of keeping the boundaries of your architecture clear and clean. You will get some credit for it some day ;)
I personally frown on passing domain objects directly through WCF. As Krzysztof said, it's about a data contract, not a contract about the behavior of the thing you are passing over the wire.
I typically do this:
Define the data contracts in their own assembly
The service has a reference to both the data contracts assembly and the business entity assemblies.
Create extension methods in the service namespace that map the entities to their corresponding data contracts and vice versa.
Setting aside the conceptual purity of what a "Data Contract" is, if you begin to pass entities around you are setting up your shared entity to be pulled in different design directions by each side of the WCF boundary. Inevitably you'll end up with behaviors that only belong to one side, or even worse - have to expose methods that conceptually do the same thing but in a different way for each side of the WCF boundary. It can potentially get very messy over the long term.
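A small sketch of the extension-method mapping described in the list above; the types and members are illustrative only:

```csharp
using System.Runtime.Serialization;

// Data contract, defined in its own assembly.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Domain entity; behaviour omitted for brevity.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Extension methods in the service namespace translate in both directions,
// keeping domain behaviour out of the WCF boundary.
public static class CustomerMappingExtensions
{
    public static CustomerDto ToDataContract(this Customer entity)
    {
        return new CustomerDto { Id = entity.Id, Name = entity.Name };
    }

    public static Customer ToEntity(this CustomerDto contract)
    {
        return new Customer { Id = contract.Id, Name = contract.Name };
    }
}
```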
