I was asked to build several WCF services, each doing different work against SQL.
We have 5 databases. All databases and their connection strings are listed in one XML file (a plain file-system file).
The services are hosted under WAS on IIS 7.5.
Since each service reads from the database, each service references a DAL DLL.
So here are our components:
I would like to read the XML data into a CACHE (on the first request) and from then on read from the cache (reading the file on each request is out of the question).
Idea #1: the DLL, in its ctor, on the first request, reads the XML file and loads it into its cache.
So the DAL will look like this:
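Something along these lines. This is only a hedged sketch, since the original snippet isn't shown; ConfigCache, the file path, and the XML element names are all illustrative:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class ConfigCache
{
    private static readonly object Sync = new object();
    private static Dictionary<string, string> connectionStrings;

    // Loaded lazily on the first request, served from memory afterwards.
    public static Dictionary<string, string> ConnectionStrings
    {
        get
        {
            if (connectionStrings == null)
            {
                lock (Sync)
                {
                    if (connectionStrings == null)
                        connectionStrings = LoadFromXml(@"D:\config\connections.xml");
                }
            }
            return connectionStrings;
        }
    }

    private static Dictionary<string, string> LoadFromXml(string path)
    {
        // Expects elements like <db name="Db1" connectionString="..." />
        return XDocument.Load(path)
                        .Descendants("db")
                        .ToDictionary(db => (string)db.Attribute("name"),
                                      db => (string)db.Attribute("connectionString"));
    }
}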
So now each service can access the DLL's cached object via a property. (One advantage: the cache dependency is on a single file, so when it changes we only have to reload in one place.)
Idea #2: when each service starts up, it loads the XML into its own cache.
So each service (Service #1, Service #2, and so on) will look like this:
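Each one would contain something along these lines. Again a hedged sketch: Service1/IService1 are placeholder names, HttpRuntime.Cache and the cache key are assumptions, and LoadFromXml is the same parsing helper as in the sketch above:

using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class Service1 : IService1
{
    private const string CacheKey = "ConnectionStrings";
    private const string XmlPath = @"D:\config\connections.xml";

    private Dictionary<string, string> GetConnectionStrings()
    {
        var cached = (Dictionary<string, string>)HttpRuntime.Cache[CacheKey];
        if (cached == null)
        {
            cached = LoadFromXml(XmlPath);
            // Every service sets up its own dependency on the same file.
            HttpRuntime.Cache.Insert(CacheKey, cached, new CacheDependency(XmlPath));
        }
        return cached;
    }

    // ... service operations use GetConnectionStrings() ...
}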
The downside is many cache dependencies on the same file.
Question:
From best-practice experience and from a design-pattern point of view, which is the preferred way?
P.S. The XML file changes roughly once a month.
First of all, when it comes to the file system on Windows Server, there's a built-in cache layer above the disk, so you probably won't feel much difference in disk reads. Of course, parsing the same input again and again is not good practice, so the parsed (tokenized) XML should be cached.
The design needs more clarification:
Is there only a single instance of the DAL class, shared among the 5 services? Or is the property described in idea #1 static?
In idea #2: when the file changes and, say, connection string 4 changes (while everything else stays the same), should only service 4 be reloaded?
If a specific service is reloaded, does that cause some kind of inconsistency with the other (stale) services?
Update:
I'm still not sure I fully understand the scenario, but here's what I'd do as far as I understand:
The DAL should expose an interface for all data-related operations. Let's say it's IDataGateway.
Now, each service should have a reference to an instance that implements IDataGateway. The service should not be aware of the caching mechanism at all; it just consumes data from the interface.
So all of the caching is done outside the service, in terms of classes and code organization.
Now, the caching layer, in turn, implements IDataGateway, and also consumes a non-cached instance of IDataGateway. That's called the Decorator pattern. The non-cached instance is injected via the constructor.
Now, I suggest each service has its own instance of a cached IDataGateway. It's simpler than a singleton (to me, at least). And since data is not shared between services, we're fine. If, however, data is shared between the services, then a single instance should be used.
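A rough sketch of that shape. The GetOrder member and the Order type are placeholders for whatever data operations the DAL really exposes:

using System.Collections.Generic;

public interface IDataGateway
{
    Order GetOrder(int id);
}

// Decorator: implements the same interface and wraps a non-cached instance.
public class CachedDataGateway : IDataGateway
{
    private readonly IDataGateway inner;
    private readonly Dictionary<int, Order> cache = new Dictionary<int, Order>();

    public CachedDataGateway(IDataGateway inner)
    {
        this.inner = inner;
    }

    public Order GetOrder(int id)
    {
        Order order;
        if (!cache.TryGetValue(id, out order))
        {
            order = inner.GetOrder(id); // cache miss: delegate to the real DAL
            cache[id] = order;
        }
        return order;
    }
}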
Back to those 5 instances, and to the XML file.
We want to be notified when this file changes, right? We could easily write our own file monitor, use the one that comes with the framework, or look at the source code of the CacheDependency class.
The simplest way to do it is to have 5 monitors watching the same file. That's not much of a performance penalty, since timers are quite "cheap".
If, however, you'd like to reduce the resources used by your system, you could use a single monitor that raises a FileChanged event or something like that. Each of the 5 cached implementations (those 5 instances) of IDataGateway should have this monitor injected into its constructor and wire up its own listener to the FileChanged event.
Once this event is triggered, all of the 5 cached instances of IDataGateway would invalidate their inner cache, thus they should clear their in-memory entries.
On the next call, the cached implementation of IDataGateway would look for the data in its in-memory cache, find nothing there, and go on to execute the same method on the non-cached implementation of IDataGateway, repopulating its cache.
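A sketch of that wiring, using FileSystemWatcher as the framework-supplied monitor (one option among several; the class names are made up):

using System;
using System.IO;

public class XmlFileMonitor
{
    public event EventHandler FileChanged;

    public XmlFileMonitor(string path)
    {
        var watcher = new FileSystemWatcher(Path.GetDirectoryName(path),
                                            Path.GetFileName(path));
        watcher.Changed += (s, e) =>
        {
            var handler = FileChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        };
        watcher.EnableRaisingEvents = true;
    }
}

// Alternative constructor for the CachedDataGateway sketched above:
// subscribe once, clear the in-memory entries on every file change.
public CachedDataGateway(IDataGateway inner, XmlFileMonitor monitor)
{
    this.inner = inner;
    monitor.FileChanged += (s, e) => cache.Clear(); // invalidate inner cache
}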
That's my design, HTH...
For me the question comes down to who really needs to know about connection strings: the DAL or the Service? Obviously it's the DAL. The service doesn't (or shouldn't) care what kind of data store the DAL is using - could be a bunch of CSV's on the disk (yikes!) for all it cares. So, it wouldn't make sense to put the connection strings in the services. The DAL needs the connection info, so the DAL should take care of finding it and caching it.
I have researched a lot, and found a few different options and opinions, but I'm not sure how to proceed.
I'm working on a project where there are classes that almost never change (e.g. Users) and highly variable classes (e.g. Meetings). I have a repository for each class which gets data directly from the database, but I would like to implement a service layer for the rarely-changing classes. That way I could load them just once, at application startup.
What can I do to load Users at startup and keep them in RAM, so I don't have to hit the database every time I need a specific User?
Is the singleton pattern a good option? I've been avoiding it because people say it's an anti-pattern.
I'm working on adding push notifications to my ASP.NET Core 2.0 web app. I want a notification service with a badgeCount member that I would update when I send out notifications or mark something as read.
I wanted to make this a singleton, but it seems like I can't use dependency injection with singletons, and I need access to my DbContext and maybe some other Identity or Entity services later.
Would it make sense to make my notification service a scoped service instead of a singleton so that I can use DI, and then have a notificationBadge singleton that I inject into my scoped service to maintain the count?
I'm doing this so that I don't have to calculate the badge count each time (it involves running queries).
EDIT: Actually, after writing this I realized that singletons are only instantiated once at server startup, not per user, so my initial approach wouldn't work even if I could use DI. I'd probably have to add a field to my user class that extends IdentityUser then, right? Or is there a way around this so that I don't have to update/save this to any DB record?
Understanding DI
So, to try to cover your question: DI is certainly what you want for most things inside your application and website. It can do singletons, as well as scoped and transient services (a new copy every time).
In order to really understand DI, and specifically the .NET Core implementation, I actually make use of the DI from .NET Core in a stand-alone .NET Standard open-source library, so you can see how it is done.
Video explaining DI, in which I build and use the DI container outside of an ASP.NET Core setting: https://www.youtube.com/watch?v=PrCoBaQH_aI
Source code: https://github.com/angelsix/dna-framework
This should answer your question regarding how to access the DbContext if you do not understand it already from the video above: https://www.youtube.com/watch?v=JrmtZeJyLgg
Scoped/Transient vs Singleton
What you have to remember when deciding whether to use a singleton is that singletons are always in-memory, so you should try to make things scoped or transient to save memory whenever creating the service is not intensive or slow. It is basically a trade-off between RAM usage and speed, in general terms.
If you then have specific types of service the decision becomes a different one. For example for DbContext objects you can think of them like a "live, in-memory database query/proxy" and so just like SQL queries you want to create them, execute them and be done with them. That is why they are made scoped, so that when a controller is created (per request) a new DbContext is created, injected, used by an action and then destroyed.
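In ASP.NET Core terms, the lifetime is just a choice you make at registration time; for example (all the interface/class names below are placeholders):

public void ConfigureServices(IServiceCollection services)
{
    // Singleton: one instance for the whole application lifetime, always in memory.
    services.AddSingleton<IBadgeCounter, BadgeCounter>();

    // Scoped: one instance per HTTP request; this is how AddDbContext registers it.
    services.AddScoped<INotificationService, NotificationService>();
    services.AddDbContext<AppDbContext>();

    // Transient: a new copy every time it is resolved.
    services.AddTransient<IEmailBuilder, EmailBuilder>();
}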
I guess the simple answer is that it usually doesn't matter too much, and most applications won't have any major concerns or issues, but you do have to remember that singletons stay in-memory for the lifetime of your application (or of the app domain, if you are in a rare multi-domain setup).
Notification Count
So the main question is really about badges. There are many things involved in this process and setup, so I will limit my answer to the presumption that you are talking about a client logged into a website, that you provide the website UI and want to show the badge count in it, and that you are not talking about, for example, an Android/iOS app or a desktop application.
In terms of generating the badge count, it would be a combination of all unread messages or items in your database for the user. I would do this calculation on request, when the user visits a page that needs the information (so in an action, returned to the view via Razor or the ViewBag, for example), or via Ajax if you are using a more responsive/Ajax-style site.
Again, I presume that is not an issue; I state it just for completeness.
So the issue you are asking about is basically this: every time the page changes or the badge count is re-requested, you are concerned about the time it takes to get that information from the database, correct?
Personally, I would not bother trying to "cache" this outside of the database. It is a fast-changing value, and you will likely spend more effort keeping the cache in sync than you'd save over just calling the database.
Instead, if you are concerned that the query to work out the badge count will be intensive, I would, on every addition of an unread/new item to the database and on every marking of an item as read, do a "SetUnreadCount" call that calculates that value and writes it to the database as a single integer. Your call to get the unread count is then a scalar call to the database and SUPER quick.
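A sketch of that idea, assuming an EF Core context, a precomputed UnreadCount column on the user, and a Notifications table (all names are illustrative):

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class NotificationService
{
    private readonly AppDbContext db;

    public NotificationService(AppDbContext db) { this.db = db; }

    // Call this whenever an item is added or marked as read.
    public async Task SetUnreadCountAsync(string userId)
    {
        var user = await db.Users.FindAsync(userId);
        user.UnreadCount = await db.Notifications
            .CountAsync(n => n.UserId == userId && !n.IsRead);
        await db.SaveChangesAsync();
    }

    // Reading the badge is now a single scalar lookup.
    public async Task<int> GetUnreadCountAsync(string userId)
    {
        var user = await db.Users.FindAsync(userId);
        return user.UnreadCount;
    }
}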
My question is: how do I implement caching in my domain project, which works as a normal stack with the repository pattern?
I have a setup that looks like the following:
ASP.NET MVC website
Web API
Domain project (using IoC, with Windsor)
My domain project for instance have:
IOrderRepository.cs
OrderRepository.cs
Order.cs
My ASP.NET MVC website calls the Web API and gets back some DTO classes. My Web API then maps these objects to business objects in my domain project, and makes the application work.
Nowhere in my application have I implemented caching.
Where should the caching be implemented?
I thought about doing it inside the methods in the OrderRepository, so my Get, GetBySpecification and Update methods would call some generic cache handler injected into the OrderRepository.
This obviously gives some very ugly code, and isn't very generic.
How to maintain the cache?
Let's say we have a cache key like "OrderRepository_123". When I call the Update method, should I call cacheHandler.Delete("OrderRepository_123")? That seems very ugly as well.
My own thoughts...
I can't really see a decent way to do it besides some of the messy methods I have described. Maybe I could build a cache layer, but I guess that would mean my Web API wouldn't call my OrderRepository anymore, but some CachedOrderRepository instead?
Personally, I am not a fan of including caching directly in repository classes. A class should have a single reason to change, and adding caching often adds a second reason. Given your starting point, you have at least two reasonable options:
Create a new class that adds caching to the repository and exposes the same interface
Create a new service interface that uses one or more repositories and adds caching
In my experience #2 is often more valuable, since the objects you'd like to cache as a single unit may cross repositories. Of course, this depends on how you have scoped your repositories. A lot may depend on whether your repositories are based on aggregate roots (ala DDD), tables, or something else.
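Option #1 could look roughly like this; MemoryCache, the key format, and the ten-minute expiry are assumptions, not a prescription:

using System;
using System.Runtime.Caching;

// Same interface as the real repository, so the Web API keeps
// talking to IOrderRepository and never knows caching is happening.
public class CachedOrderRepository : IOrderRepository
{
    private readonly IOrderRepository inner;
    private readonly ObjectCache cache = MemoryCache.Default;

    public CachedOrderRepository(IOrderRepository inner) { this.inner = inner; }

    public Order Get(int id)
    {
        string key = "Order_" + id;
        var order = (Order)cache.Get(key);
        if (order == null)
        {
            order = inner.Get(id);
            cache.Set(key, order, DateTimeOffset.Now.AddMinutes(10));
        }
        return order;
    }

    public void Update(Order order)
    {
        inner.Update(order);
        cache.Remove("Order_" + order.Id); // invalidate on write
    }
}

Since you are using Windsor, you can register this as a decorator over the real OrderRepository, so nothing else in the stack has to change.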
There are probably a million different ways to do this, but it seems to me (given that the intent of caching is to improve performance) you could implement the cache similarly to a repository pattern: the domain objects interact with the cache instead of the database, a background thread keeps the database and cache in sync, and the initial startup of the app pool fills the cache (assuming eager loading is desired). A whole raft of technical issues then starts to crop up, such as what to do if the cache is modified in a way that violates a database constraint. Code maintenance becomes a concern, where any data-structure-related change may need to be implemented in multiple places. Concurrency issues enter the fray. Just some thoughts...
SqlCacheDependency with System.Web.Caching.Cache: http://weblogs.asp.net/andrewrea/archive/2008/07/13/sqlcachedependency-i-think-it-is-absolutely-brilliant.aspx. This will get you caching that is invalidated when other systems apply updates, too.
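For a rough idea of its shape (this assumes the database and table have already been enabled for notifications via aspnet_regsql and that a matching <sqlCacheDependency> entry exists in web.config; "MyDatabase", "Orders", and the variables are illustrative):

using System.Web;
using System.Web.Caching;

// The cache entry is evicted automatically when the Orders table changes.
var dependency = new SqlCacheDependency("MyDatabase", "Orders");
HttpContext.Current.Cache.Insert("AllOrders", orders, dependency);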
There are multiple levels of caching, depending on the situation. However, if you are looking for generic, centralized caching with a low number of changes, I think you are looking for EF second-level caching; for more details, see http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
You can also cache at the Web API level.
Also consider the network traffic between MVC and the Web API if they are hosted in two different data centers.
And for a portal with huge read access you might consider Redis: http://Redis.io
It sounds like you want to use a .NET caching mechanism rather than a distributed cache like Redis or Memcache. I would recommend using the System.Runtime.Caching.MemoryCache class instead of the traditional System.Web.Caching.Cache class. Doing this allows you to create your caching layer independent of your MVC/API layer because the MemoryCache has no dependencies on System.Web.
Caching your DTO objects would speed up your application greatly, because it saves you from waiting for data to be assembled out of a cache that merely mirrors your data layer. For example, requesting Order123 would require a single cache read rather than several reads for the FK data. Your caching layer would of course need to contain the logic to invalidate the cache on the UPDATEs you perform. A recommended way would be to retrieve the cached order object and modify its properties directly, then persist it to the DB asynchronously.
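A sketch of both halves of that suggestion (the key format, the 30-minute expiry, and helpers like AssembleOrderDto and repository are all made up):

using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

private static readonly ObjectCache Cache = MemoryCache.Default;

public OrderDto GetOrder(int orderId)
{
    string key = "Order_" + orderId;
    var dto = (OrderDto)Cache.Get(key);
    if (dto == null)
    {
        dto = AssembleOrderDto(orderId); // one assembled object, FK data included
        Cache.Add(key, dto, DateTimeOffset.Now.AddMinutes(30));
    }
    return dto;
}

public void UpdateStatus(int orderId, string newStatus)
{
    var dto = GetOrder(orderId);
    dto.Status = newStatus;                 // modify the cached object directly
    Task.Run(() => repository.Update(dto)); // persist to the DB asynchronously
}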
I am having trouble deciding between two possible design choices. I have a web site with a pretty extensive business layer and DAL (the website, BLL, and DAL are all in separate DLLs). I need to design a Windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd-party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy but the downside is every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The windows service wouldn't have to use the business layer and as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's xml into.
The Windows service will have to poll the business layer/DAL or the web service every 10-20 minutes or so. The Windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2, but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some function?
I'm not sure what the criterion is for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad-hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse"-type abstraction rather than having a mining process pull it out.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
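For example, assuming an Order domain object and a recent AutoMapper version (the contract below is purely illustrative):

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using AutoMapper;

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderExportService
{
    [OperationContract]
    List<OrderDto> GetOrdersSince(DateTime since);
}

// Inside the service implementation: let AutoMapper do the boring conversion.
var config = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
IMapper mapper = config.CreateMapper();
List<OrderDto> dtos = mapper.Map<List<OrderDto>>(orders);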
From your point two I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any change (extra service, web API, DLL). With option one you would only have to update two parts (extra service, DLL), so I would go with option one.
BUT if you are aiming for a general web API that you will maintain anyway, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading, and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows them to be updated dynamically without recompiling the service. Maybe put the assemblies in the machine's Global Assembly Cache so they can be shared across various other projects, assemblies, and apps.
I know it seems like throwing out jargon for the sake of it but that's how I would start to think.
Edit:
Loading types dynamically is actually amazingly easy. Here is quick C# pseudo-code for one way to do it; even without testing, it might actually be right.
// Get a System.Type from its string representation.
// (Type.GetType returns null if the type cannot be resolved,
// so a null check is advisable in real code.)
Type t = Type.GetType("type name");
// Create an instance of the type.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.
I'm new to the DDD thing. I have a PROFILE class and a PROFILE REPOSITORY class.
The PROFILE class contains the following fields: Id, Description, ImageFilePath.
So when I add a new Profile, I upload the image to the server and store the path to it in my DB.
When I delete the profile, the image should be removed from my file system as well.
My Question:
Where do I add the logic for this? My profile repository has a Delete method; should I add this logic there? Or should I add a service to encapsulate both actions?
Any comment would be appreciated...
Thanks
You have two different "actions" related to the images. You have a "physical" process and a "logical" process. The logical process is persisting the information about the image into the domain repository, since it is part of the domain. The physical process of add (and delete) are a prerequisite to the logical process.
Taking a step back, the physical process is completely independent of the logical process, but the opposite is not true. You obviously do not want to persist meta-information about the image (in the domain) if the image was not saved. Also, you don't want to remove the information from the domain if you cannot remove the physical file.
The domain should contain the information required to remove the logical instance of the image from the datasource. Think of the domain as a physically separate application. In this case, the domain has no actual knowledge that the data it is persisting has anything to do with a physical file. Make sure to keep it this way.
Generally, I have my entities in one assembly, and my repositories and domain services in another. The application services live outside the domain model, but leverage it to do their work. So application services use one or more domain services or other application services, and domain services can use one or more repositories.
Keeping this in mind, you have two places for the actual deletion logic, and a third place to coordinate them. Here is how it would work if I were doing it. The domain service leverages the repository for the logical delete from the underlying datasource (as well as for retrieval, which you will also need). It is not aware of anything other than working with the domain object instance. I would also have an application service (outside of the domain) which specifically deals with removing the physical instance. For argument's sake, I will assume you have an "ImageRepository" class and an "ImageServices" class, which contain your domain repository and your domain services, respectively. Your ImageServices needs a Delete() method, as well as whatever Find() methods you are using. I usually name the find methods explicitly as FindBy...() (i.e., FindByKey(), FindByName(), etc.).
You don't want to remove the logical instance if you haven't been able to remove the physical instance, so make sure you have a means of measuring success of the removal operation for the physical image. I would probably go with some sort of a custom exception in this case (since I would consider deleting a file to be a standard operation that should not commonly fail). This usually falls in the realm of "management". So usually I have an application service named something like "ImageManagementService". For simplicity sake, this service (since it is part of the application and not the domain) can have a private method to do the physical delete. Let's call it "DeleteImageFile()".
The third place is a coordination of these two operations, also as an application service. I would just make this the public method in the "ImageManagementService". We can call this one "RemoveImage". This application service will do the following:
Retrieve the instance information from the domain services (a passthrough call to your repository).
Use the instance information to locate the physical file and remove it (the first application service mentioned, again).
If the physical removal is successful, delete the instance (back to the domain service, facading the repository again).
So, what happens is the application itself calls the RemoveImage() method on the ImageManagementService instance. Internally, RemoveImage() first calls FindBy...() on the domain's ImageServices to get an instance from the domain. The file path from that instance is used to call the private DeleteImageFile() method in the ImageManagementService instance. Upon success, it then calls the Delete() method on the domain's ImageServices, which is acting as a facade to your repository.
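Putting that flow into code, it might look roughly like this (the types, Image.FilePath, and the custom ImageDeleteException are the ones assumed in the description above, not a prescription):

using System.IO;

public class ImageManagementService
{
    private readonly ImageServices imageServices; // domain service, facading the repository

    public ImageManagementService(ImageServices imageServices)
    {
        this.imageServices = imageServices;
    }

    public void RemoveImage(int imageId)
    {
        // 1. Retrieve the instance information from the domain service.
        var image = imageServices.FindByKey(imageId);

        // 2. Physical removal first; it throws on failure, so we never
        //    delete the logical instance without the physical file.
        DeleteImageFile(image.FilePath);

        // 3. Logical removal, back through the domain service.
        imageServices.Delete(image);
    }

    private void DeleteImageFile(string path)
    {
        try
        {
            File.Delete(path);
        }
        catch (IOException ex)
        {
            // Custom exception, as described above.
            throw new ImageDeleteException(path, ex);
        }
    }
}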
I think it is very important to focus on the separation of concerns in this case, because if you have an explicit separation (which you can achieve with different assemblies) you will become comfortable with knowing which kind of logic goes in which place. I highly recommend Evans' book. Also, for a quick hit on the SoC concept as it relates to DDD, I recommend taking a look at Jeffrey Palermo's three-part series on the "Onion Architecture".
Just a couple of notes as to why you would use a domain service instead of calling the repository directly from the application service. Primarily, the repository has more complicated instancing than the domain service. Remember, it is mostly a facade, but it might have additional logic that does not fit anywhere else in the domain. A good example might be enforcing a unique filename. The domain object itself has no direct knowledge of other domain objects in other aggregates, so the domain service might check for an existing instance with the same name prior to a save operation. Very handy, indeed! Also, a domain service is not limited to a single repository. You can have a domain service coordinate efforts between multiple repositories. If you have overlapping aggregates, you might need to work with two related aggregate roots at the same time. You can do this in the domain service, keeping that sort of logic in the domain and not letting it bleed into the application.
Hope this helps. I am sure that there are other ways to do this, but this is the way that I have found success in my own applications with similar scenarios.
#joseph.ferris: "Generally, I have my entities in an assembly, then my repositories and domain services in another. "
Personally, I prefer to see assemblies as a unit of deployment, not a separation of concerns design tool. For that, I'd rather use namespaces.
Ensuring no cyclic-dependencies (between those namespaces) that way is harder, but tools like NDepend can help out.
On a first approach, I think I would opt for the most simple approach, and delete the physical image from disk inside the ImageRepository.
It is maybe not the most 'correct' or 'pure' solution, but it is the simplest one, and this conforms to the 'choose the simplest solution that works' adage.
When, in a later phase of the project, you feel that this solution is not good, and you feel you need a more complex (and maybe more pure) solution like the one proposed by joseph.ferris, then you can always refactor it.
It is easier to refactor a simple solution than to refactor a complex solution. :)