Cache infrastructure with WCF services - C#

I have some WCF services (let's call them X) which have a cache service client in them, so that the end user who calls my WCF service does not know about the cache and should not have to care about it.
My cache service is also a WCF service, but one which is not publicly available; only X can call it. As you know, it is possible to put any kind of object in a cache (let's assume the cache is HttpRuntime.Cache), but when it comes to WCF, serving the cached values from a WCF service, any kind of object could be a problem because of unknown data types.
My question is: how can I serve my cache values from WCF as generically as possible?

I know this isn't going to solve your issue if you're stuck with this architecture, but personally I'd avoid this set-up completely.
I'd use a dedicated data cache of some sort with a dedicated client that talks to the cache in an efficient way.
If you're not going out-of-process with your caching, then you could use an in-memory cache; otherwise, if you're going cross-process or over the network, you'd be better off using a dedicated data cache like AppFabric/Velocity or Memcached.
You'd get so many other benefits out-of-the-box too, like distributed caching, redundancy and automatic fail-over. I doubt WCF is going to be a winning solution for data caching unknown objects.
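If you are stuck with the WCF-fronted cache, one common way around the unknown-types problem is to keep the service contract non-generic and move serialization to the caller: values cross the wire as byte[]. A minimal sketch, with hypothetical contract and helper names:

    using System.IO;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The cache service itself never sees concrete types, only opaque blobs.
    [ServiceContract]
    public interface ICacheService
    {
        [OperationContract]
        void Put(string key, byte[] value);

        [OperationContract]
        byte[] Get(string key);   // null on a cache miss
    }

    // Client-side helpers used inside X, so X's callers stay unaware of the cache.
    public static class CacheClientExtensions
    {
        public static void Put<T>(this ICacheService cache, string key, T value)
        {
            using (var ms = new MemoryStream())
            {
                new DataContractSerializer(typeof(T)).WriteObject(ms, value);
                cache.Put(key, ms.ToArray());
            }
        }

        public static T Get<T>(this ICacheService cache, string key)
        {
            byte[] raw = cache.Get(key);
            if (raw == null) return default(T);
            using (var ms = new MemoryStream(raw))
                return (T)new DataContractSerializer(typeof(T)).ReadObject(ms);
        }
    }

The trade-off is an extra serialization hop on every call, which is part of why a dedicated cache product is usually the better fit.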

Related

WCF - Sharing/caching of data between calls

I am new to WCF and service development, and I have the following question.
I want to write a service which relies on some data (from a database, for example) in order to process client requests and reply.
I do not want to hit the database for every single call. Is there any technique or way to load such data upfront, or just once, so that the service need not fetch it for every request?
I read that setting InstanceContextMode to Single can be a bad idea (I'm not exactly sure why). Can somebody explain the best way to deal with such a situation?
Thanks
The BCL has a Lazy<T> class made for exactly this purpose. Unfortunately, in its default thread-safety modes it caches exceptions: if the factory fails once with a transient error (network issue, timeout, ...), that exception is stored forever, which means your service is down forever if that happens. That's unacceptable, so the stock Lazy<T> is unusable here. Microsoft has declared that they are unwilling to fix this. (PublicationOnly mode does not cache exceptions, but it may run the factory on several threads at once.)
The best way to deal with this is to write your own lazy or use something equivalent.
You can also use LazyInitializer; see its documentation.
I don't know how instance mode Single behaves in case of an exception. In any case, it is architecturally unwise to put lazy resources into the service class: if you want to share those resources between multiple services, that's a problem, and it's not the service class's responsibility anyway.
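A minimal sketch of the "write your own lazy" suggestion (the class name RetryLazy<T> is hypothetical, not part of the BCL): unlike Lazy<T>, a factory exception leaves the instance uninitialized, so the next call simply retries.

    using System;

    // Lazy initialization that does NOT cache exceptions: if the factory
    // throws, _created stays false and a later caller runs the factory again.
    public sealed class RetryLazy<T>
    {
        private readonly Func<T> _factory;
        private readonly object _gate = new object();
        private bool _created;
        private T _value;

        public RetryLazy(Func<T> factory) { _factory = factory; }

        public T Value
        {
            get
            {
                lock (_gate)
                {
                    if (!_created)
                    {
                        _value = _factory();   // may throw; state is unchanged then
                        _created = true;
                    }
                    return _value;
                }
            }
        }
    }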
It all depends on the amount of data to load and the pattern of data usage.
Assuming that your service calls are independent and may require different portions of the data, you may implement some caching (using Lazy<T> or similar techniques). But this solution has one important caveat: once data is loaded into the cache, it will stay there forever unless you define some expiration strategy (time-based, flush-on-write, or something else). If you have no cache-entry expiration strategy, your service will consume more and more memory over time.
This may not be too important a problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
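As a sketch of a time-based strategy (class name is hypothetical), a small wrapper can bound both staleness and memory by reloading the value at most once per TTL:

    using System;

    // Reload-on-expiry wrapper: the value is fetched lazily and refreshed
    // when older than the TTL, so stale entries cannot accumulate.
    public sealed class ExpiringValue<T>
    {
        private readonly Func<T> _load;        // e.g. a database query
        private readonly TimeSpan _ttl;
        private readonly object _gate = new object();
        private T _value;
        private DateTime _loadedAtUtc = DateTime.MinValue;

        public ExpiringValue(Func<T> load, TimeSpan ttl)
        {
            _load = load;
            _ttl = ttl;
        }

        public T Value
        {
            get
            {
                lock (_gate)
                {
                    if (DateTime.UtcNow - _loadedAtUtc > _ttl)
                    {
                        _value = _load();
                        _loadedAtUtc = DateTime.UtcNow;
                    }
                    return _value;
                }
            }
        }
    }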
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures that a service object is created for the lifetime of a session (which stays alive while a particular WCF client is connected), and all calls from that client are dispatched to the same service object. That may or may not be appropriate from a business-domain point of view. If it is, you can load your data from the database on the first call, and subsequent calls within the same session can reuse it. A new session (another client, or the same client after a reconnect) will have to load the data again.

Multi-level cache - AppFabric with MemoryCache

In my current setup I have a dedicated AppFabric server. Most of the objects stored there are reference objects, which means most of the operations are 'Get' operations. Therefore I've considered using LocalCache.
Unfortunately, recently I experienced problems with the availability of the cache server resulting from various network issues. The application server continues to work directly with the DB in these cases thanks to a provider I've written. However, it has a very large impact on performance as expected.
I want to be able to use some kind of a local cache for the highly referenced objects, even when the cache server is down. For this purpose I've considered using the MemoryCache of .Net 4. I don't really care about the objects being stale and I rely on a timeout eviction policy, therefore I don't worry about synchronization between the application servers.
I wanted to hear what you think about this solution.
- Are there any other points I should consider?
- Is there a better solution to provide fast access to highly referenced objects even when the cache server is down?
AppFabric's LocalCache is a client cache, local and in-proc to the client application, which stores references to frequently used data so the application does not need to deserialize the same object again. However, since LocalCache works together with the cache server, it will not work if the cache server is down.
One possible solution to your problem is, as you mentioned, an independent client cache: even if the cache server goes down, the client cache will still be available.
When relying on an in-proc cache, keep in mind that in-process caches store references to the cached objects. If your application modifies an object after getting it from the cache, it will be modified in the cache as well. And if multiple threads may end up modifying the same item in the cache, you will need thread synchronization for such objects.
However, even with an independent client cache, your application may end up hitting the database frequently, since data in the client cache of one application server will not be accessible to the other servers.
A better solution might be replicated cache servers, where each server holds all the cached data. This will not only improve Get performance for referential data but will also eliminate the single point of failure you have in your current setup.
If AppFabric is not a hard requirement for the application, you may look into NCache for better scalability and high availability.
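A minimal sketch of the fallback idea, assuming an AppFabric DataCache client and .NET 4's MemoryCache as the independent local cache (error handling is simplified; in practice you would inspect the DataCacheException more carefully):

    using System;
    using System.Runtime.Caching;                // MemoryCache (.NET 4)
    using Microsoft.ApplicationServer.Caching;   // AppFabric client (DataCache)

    // Read-through cache that prefers the AppFabric cluster but falls back
    // to an in-proc MemoryCache with a short TTL when the cluster is down.
    public class FallbackCache
    {
        private readonly DataCache _remote;
        private readonly MemoryCache _local = MemoryCache.Default;
        private readonly TimeSpan _localTtl = TimeSpan.FromMinutes(5);

        public FallbackCache(DataCache remote) { _remote = remote; }

        public object Get(string key, Func<object> loadFromDb)
        {
            try
            {
                object value = _remote.Get(key);
                if (value == null)
                {
                    value = loadFromDb();
                    _remote.Put(key, value);
                }
                return value;
            }
            catch (DataCacheException)           // cache server unreachable
            {
                object value = _local.Get(key);
                if (value == null)
                {
                    value = loadFromDb();        // stale-tolerant, per the question
                    _local.Set(key, value, DateTimeOffset.UtcNow.Add(_localTtl));
                }
                return value;
            }
        }
    }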
Did you consider AppFabric's local cache feature? Or is it not suitable for you?

WCF Per Call Services with Shared Business objects

I would need some help to point me in the right direction.
We want to expose service functionality (which consists of reading and updating a SQL Server database) via WebHttp endpoints as per-call services to users.
We don't want to use SOAP if avoidable, as we have had trouble making it interoperate with other platforms.
This must be scalable to 1000+ users, which, in this scenario, are unlikely to submit many concurrent requests. It is estimated that at any given time there should be max 25 concurrent requests.
(That's why per-session services were ruled out, since that would mean keeping 1000+ sessions open while only 25 actions are performed.)
From experience with a test service, we find, however, that pure per-call WCF services over HTTP perform poorly, with the largest time lapse being the initialization of the SQL Server connection.
It's sort of a similar scenario to what a web server normally would encounter.
Therefore it appeared sensible to use a similar approach as web servers do - for performance reasons they keep a pool of HTTP engines active, and incoming requests are being assigned one of the engines in the pool.
So we want to keep a pool of 25-30 "Business Logic Objects" (i.e. classes with the actual service logic decoupled from mere service interfaces) open which should be instantiated when the service host starts.
It seems that WCF does not support this scenario out of the box.
How would I go about it?
When I am self-hosting, I can derive a custom class from ServiceHost and add a Dictionary holding the business objects. This would incur threading issues, I guess, which I would have to handle with manual synchronization, correct?
If we decide to host in IIS, how would I do it then? IIS automatically takes care of creating an instance of the ServiceHost class, so I don't have much of a chance to slot my own custom host in between, do I?
Or is this a bad approach altogether? Any other ideas appreciated.
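For reference, IIS does provide a hook for a custom host: a ServiceHostFactory named in the .svc file. A minimal sketch of that mechanism (the pooled host's contents are hypothetical, and the answer below questions whether a pool is a good idea at all):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;

    // Custom host that could carry shared state such as a pool (hypothetical).
    public class PooledServiceHost : ServiceHost
    {
        public PooledServiceHost(Type serviceType, params Uri[] baseAddresses)
            : base(serviceType, baseAddresses)
        {
            // initialize shared state here, before the host opens
        }
    }

    // Referenced from the .svc file so IIS creates the derived host:
    // <%@ ServiceHost Service="MyService" Factory="MyNs.PooledServiceHostFactory" %>
    public class PooledServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            return new PooledServiceHost(serviceType, baseAddresses);
        }
    }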
Is there actually a bottleneck with the stateless, session-free approach?
The pool of "business logic objects" doesn't look like a good idea to me. You'll face hard-to-debug concurrency issues.
Have you actually tested the following pattern?
one business logic object per request, with as short a lifetime as possible
one SQL connection per business logic object
stateless services
"From experience with a test service, we find, however, that pure per-call WCF services over HTTP perform poorly, with the largest time lapse being the initialization of the SQL Server connection."
Really, the SQL Server connection shouldn't be a bottleneck, thanks to SQL Server connection pooling.
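A sketch of the stateless per-call pattern under that assumption (service, table, and connection-string names are hypothetical): the connection is opened late and disposed immediately, and ADO.NET's pool makes the physical connect cost a one-time expense per pooled connection.

    using System.Data.SqlClient;
    using System.ServiceModel;

    [ServiceContract]
    public interface IUserService
    {
        [OperationContract]
        string GetUserName(int id);
    }

    // Per-call, stateless service: no shared business objects, no pool of
    // them; pooling happens at the ADO.NET connection level instead.
    public class UserService : IUserService
    {
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // hypothetical

        public string GetUserName(int id)
        {
            // Dispose returns the physical connection to the pool rather
            // than closing it, so subsequent calls skip the handshake.
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }
    }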
I don't think there would be much cost associated with instantiating a business logic object. You may enable pooling on the SQL connection object, as pointed out by Ken. It's better to cache business objects than to pool business logic objects.

Architecture help (WCF or not)

I need to process thousands of user details from different (client) web applications. I have finished a console app that does the actual processing. I have also decided to use MSMQ (the console app will get the user details from a queue).
I need help deciding how the client web applications will pass data to the Queue. I am thinking I can add a WCF service that will receive data from the client apps and pass it on to the Queue.
Would this be the best way to go? Or is there a better way(s)?
If the whole architecture is Microsoft-based, I suggest pushing messages to MSMQ using an in-proc DLL, which is much faster than going via WCF (WCF adds one more layer to the architecture and slows down the process, since it needs to serialize/deserialize the objects). If you design this component properly (SOLID principles) and keep it decoupled from the rest of the code, you can easily switch to WCF later, if you need to, by adding a data contract and an endpoint to expose your component as a service (at the end of the day, WCF exposes an interface).
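A minimal sketch of that in-proc component using System.Messaging (the queue path, interface, and UserDetails type are hypothetical); the interface is what lets you later substitute a WCF-backed implementation:

    using System.Messaging;

    // Hypothetical payload type.
    public class UserDetails
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IUserQueueWriter
    {
        void Enqueue(UserDetails user);
    }

    // In-proc MSMQ writer referenced directly by the web applications.
    public class MsmqUserQueueWriter : IUserQueueWriter
    {
        private const string QueuePath = @".\private$\userdetails"; // hypothetical

        public void Enqueue(UserDetails user)
        {
            using (var queue = new MessageQueue(QueuePath))
            {
                // XML-serialize the body so the console app can read it back.
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(UserDetails) });
                queue.Send(user);
            }
        }
    }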
Yes, it would be the best option, in that it's what WCF is for; since it's configuration-driven, you'll be able to use different binding types to suit the environment you're in (for sending the data across).
The assumption is that the web clients are all (mostly) out on the public internet; being on a private network would give you more options.
WCF can use a queue as a binding type (netMsmqBinding), though I'm not sure that gives you any advantage since you're going to put the messages into a queue anyway. A synchronous WCF call using an HTTP binding will be fine performance-wise, as the act of handing the message to your MSMQ should be pretty quick.
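For completeness, a sketch of the queued-binding variant (contract and service names are hypothetical): the queue itself becomes the transport, so each one-way call is delivered as an MSMQ message.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IUserProcessing
    {
        [OperationContract(IsOneWay = true)]   // queued calls must be one-way
        void Process(string payload);
    }

    public class UserProcessingService : IUserProcessing
    {
        public void Process(string payload) { /* handle the message */ }
    }

    public static class QueuedHost
    {
        public static void Main()
        {
            // The console app hosts this; client calls land in the queue and
            // are dispatched to the service as they are read back out.
            var host = new ServiceHost(typeof(UserProcessingService));
            host.AddServiceEndpoint(
                typeof(IUserProcessing),
                new NetMsmqBinding(NetMsmqSecurityMode.None),
                "net.msmq://localhost/private/userdetails");
            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }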
Take a look at NServiceBus

WCF for a shared data access

I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved:
A web service needs to be accessible from multiple clients simultaneously, and the service needs to return results from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are speaking of a couple of thousand or more queries per minute.
My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) that holds a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data -> multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right".
What I really, really want is one Windows service running an instance of the IP-list-holding container class, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely with minimal blocking. But using another WCF channel between them would not really take me far from the initial draft implementation, would it?
What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question.
All ideas are appreciated. Thanks!
UPDATE: The data set will change dynamically. The web service will have a separate method to add an IP or IP range, and on top of that a scheduled task will trigger a data cleanup every 10-15 minutes according to some rules.
UPDATE 2: A separate benchmark project will be kicked off that should use MySQL as the data backend (instead of an in-memory list).
It depends how far it has to scale. If a single server will suffice, then fine: keep it conveniently in memory (as long as you can recreate the data if the server gets restarted). If the data volume is low, then simple blocking (lock) should work fine to synchronize the data; for higher read throughput, use a ReaderWriterLockSlim. I would probably not store it directly in the WCF class instance, though.
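A sketch of that advice (class name is hypothetical; range handling is omitted for brevity): the shared data lives in its own store class, guarded by a ReaderWriterLockSlim so the frequent validation reads can run in parallel.

    using System.Collections.Generic;
    using System.Threading;

    // Shared store, deliberately separate from the WCF service class; the
    // singleton service holds a reference to one instance of this.
    public class IpListStore
    {
        private readonly HashSet<string> _ips = new HashSet<string>();
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

        public bool Contains(string ip)
        {
            _lock.EnterReadLock();          // many readers may proceed at once
            try { return _ips.Contains(ip); }
            finally { _lock.ExitReadLock(); }
        }

        public void Add(string ip)
        {
            _lock.EnterWriteLock();         // writers (adds, cleanup) are exclusive
            try { _ips.Add(ip); }
            finally { _lock.ExitWriteLock(); }
        }
    }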
I would avoid anything involving sessions (if/when this ties into the WCF life-cycle); this is rarely helpful to simple services.
For distributed load (over multiple servers) I would give consideration to a separate dedicated backend. A database or memcached / AppFabric / etc would be worth consideration.
