WCF - Sharing/caching of data between calls - c#

I am new to WCF and service development and have the following question.
I want to write a service which relies on some data (from a database, for example) in order to process client requests and reply back.
I do not want to look in the database for every single call. My question is: is there any technique or way to load such data either upfront or just once, so that the service need not fetch it for every request?
I read that setting InstanceContextMode to Single can be a bad idea (not exactly sure why). Can somebody explain the best way to deal with such a situation?
Thanks

The BCL has a Lazy&lt;T&gt; class that is made for this purpose. Unfortunately, in its default mode (LazyThreadSafetyMode.ExecutionAndPublication) it caches a factory exception forever, so a transient failure (network issue, timeout, ...) during initialization leaves your service broken until it is restarted. That's unacceptable, and it makes Lazy&lt;T&gt; unsuitable for caching the result of a fallible operation. Microsoft considers this exception-caching behavior by design, so don't expect it to change. (LazyThreadSafetyMode.PublicationOnly does not cache exceptions, but it may invoke the factory more than once.)
The best way to deal with this is to write your own exception-resilient lazy or use something equivalent.
You can also use LazyInitializer; see its documentation.
I don't know how instance mode Single behaves in case of an exception. In any case it is architecturally unwise to put lazy resources into the service class. If you want to share those resources with multiple services that's a problem. It's also not the responsibility of the service class to do that.
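An exception-resilient lazy of the kind suggested above can be sketched like this (a minimal illustration, not a BCL type; the class name and single-lock design are assumptions):

```csharp
using System;

// Unlike Lazy<T> in its default mode, a failed factory call is NOT cached:
// _created stays false on exception, so the next caller simply retries.
public sealed class ResilientLazy<T>
{
    private readonly Func<T> _factory;
    private readonly object _gate = new object();
    private bool _created;
    private T _value;

    public ResilientLazy(Func<T> factory)
    {
        if (factory == null) throw new ArgumentNullException("factory");
        _factory = factory;
    }

    public T Value
    {
        get
        {
            lock (_gate)
            {
                if (!_created)
                {
                    _value = _factory();   // if this throws, _created stays false
                    _created = true;       // so a later call retries the factory
                }
                return _value;
            }
        }
    }
}
```

Keep such an instance in a shared infrastructure/repository class rather than in the service class itself, so multiple services can use the same cached resource.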

It all depends on amount of data to load and the pattern of data usage.
Assuming that your service calls are independent and may require different portions of data, you may implement some caching (using Lazy&lt;T&gt; or similar techniques). But this solution has one important caveat: once data is loaded into the cache, it will stay there forever unless you define some expiration strategy (time-based, flush-on-write, or something else). Without a cache-entry expiration strategy, your service will consume more and more memory over time.
This may not be too important a problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
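A time-based expiration strategy like the one described can be sketched with System.Runtime.Caching.MemoryCache (illustrative only; the helper name, the loader delegate, and the 10-minute lifetime are assumptions, not from the original answer):

```csharp
using System;
using System.Runtime.Caching;

// Entries are evicted after an absolute lifetime, so the cache does not
// grow without bound the way a naive dictionary/Lazy<T> cache would.
public static class ReferenceDataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static TValue GetOrLoad<TValue>(string key, Func<TValue> loadFromDatabase)
    {
        object cached = Cache.Get(key);
        if (cached != null)
            return (TValue)cached;

        TValue value = loadFromDatabase();   // cache miss: hit the database once
        Cache.Set(key, value, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10)
        });
        return value;
    }
}
```

Note there is a benign race here: two concurrent callers can both load on a cold key. For most read-mostly caches that is acceptable; if not, wrap the value in a lazy and use AddOrGetExisting.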
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures that a service object is created for the lifetime of a session (which stays alive while a particular WCF client is connected), and all calls from that client are dispatched to the same service object. This may or may not be appropriate from a business-domain point of view. If it is appropriate, you can load your data from the database on the first call, and subsequent calls within the same session can reuse it. A new session (another client, or the same client after a reconnect) will have to load the data again.

Related

Pattern or library for caching data from web service calls and update in the background

I'm working on a web application that uses a number of external data sources for data that we need to display on the front end. Some of the external calls are expensive, and some also come with a monetary cost, so we need a way to persist the results of these external requests so they survive, e.g., an app restart.
I've started with a proof of concept, and my current solution is a combination of a persistent cache/storage (which stores serialized JSON in files on disk) and a runtime cache. When the app starts, it populates the runtime cache from the persistent cache; if the persistent cache is empty, it goes ahead and calls the web services. The next time the app restarts, we load from the persistent cache, avoiding the calls to the external sources.
After the first population, we want the cache to be updated in the background by some kind of update process on a given schedule. We also want this update process to be smart enough to only update the cache if the request to the web service was successful, and otherwise keep the old version. There's also a twist here: some web services might return a complete collection, while others require one call per entity, so the update process might differ depending on the concrete web service.
I'm thinking this scenario can't be totally unique, so I've looked around and done a fair bit of Googling, but I haven't found any patterns or libraries that deal with something like this.
So what I'm looking for is any patterns that might be useful for us, and any C# libraries or articles on the subject as well. I don't want to "reinvent the wheel". If anyone has solved similar problems, I would love to hear more about how you approached them.
Thank you so much!
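The "only replace the cache on success" refresh described in the question can be sketched roughly like this (an illustrative skeleton, not from the original post; the fetch delegate and interval are assumptions):

```csharp
using System;
using System.Threading;

// Refreshes a cached value on a timer; a failed fetch keeps the old value.
public sealed class RefreshingCache<T> : IDisposable where T : class
{
    private readonly Func<T> _fetch;
    private readonly Timer _timer;
    private T _current;

    public RefreshingCache(Func<T> fetchFromWebService, T initialValue, TimeSpan interval)
    {
        _fetch = fetchFromWebService;
        _current = initialValue;          // e.g. loaded from the persistent cache
        _timer = new Timer(_ => Refresh(), null, interval, interval);
    }

    public T Current { get { return Volatile.Read(ref _current); } }

    private void Refresh()
    {
        try
        {
            T fresh = _fetch();                       // call the web service
            Interlocked.Exchange(ref _current, fresh); // atomic swap on success
        }
        catch
        {
            // request failed: keep the old version and try again on the next tick
        }
    }

    public void Dispose() { _timer.Dispose(); }
}
```

The per-source differences (whole collection vs. one call per entity) can live inside the fetch delegate, so the cache itself stays generic.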

Right way of using WCF service client [duplicate]

This question already has answers here:
Best Practice for WCF Service Proxy lifetime?
(4 answers)
Reuse of WCF service clients
(2 answers)
Closed 9 years ago.
I have a UI application in which I consume a WCF service like this
public MainWindow()
{
....
mServiceClient = new ServiceClient("EndPointTCP");
}
Should I create the client as a member of the class and close it when my application exits, or should I create a new client whenever it's required in a method and close it right there?
It depends solely on what you want to achieve. There is no single "best way to do it", since both ways are possible, good, and have different trade-offs.
Holding on to the client object simply wastes resources. It may also leak context data between calls: you might have a bug that causes mClient.Buy(100) and mClient.Sell(100) to work properly when used alone, but fail when used together as mClient.Buy(100); mClient.Sell(100). Dropping and re-creating a fresh instance each time could save you from that one bug, but obviously that's not a good argument for it.
Recreating the client each time a call is to be made has, however, the vague benefit of... having a fresh client every time. If your app can dynamically change the endpoint during its runtime, then your client will automatically always use the newest addresses/logins/passwords/etc.
On the other hand, not recreating the client object every time is simply faster. Mind that this is the WCF layer, so the actual underlying connection can be anything. If it's a protocol with a heavy setup involving key exchange, encryption, etc., you may find that creating a new client every time opens a new connection every time and slows everything down, while keeping the instance works blazingly fast, since the connection may be kept open and reused. You would typically keep the connection when you perform many frequent calls to the service, like monitoring a remote value twice a second, 24h/day, for breaching safe limits.
On yet another hand, you might not want the connection to linger. Your remote service may have thousands of clients and limited resources, so you might want to close the connection ASAP so others may connect. You'd typically do that when calls to the service are made really rarely, e.g. when the user clicks something after returning from a coffee break.
Please don't get me wrong: all of the above is just conjuring some vague "facts" from a void. I do not know your app, nor your service, nor your bindings (other than "EndPointTCP"). The most important factors are all on your side, in the actual way your app and that remote service work and interoperate. If you care about what you ask, you must first research the topic on your side. Best of all, simply try both ways and check how each works and performs. The difference would be something like 2-6 lines of code, so that's rather quick.
There are already some similar questions:
Reuse of WCF service clients
Reusing a WCF service client or creating one each time?
In my opinion it depends on your application type (scalability, performance requirements, ...), but usually I think it's safer to recreate the ServiceClient each time. That way you don't need special code to handle connection problems between requests, and with the latest version of WCF there doesn't seem to be a big performance impact.
See http://msdn.microsoft.com/en-us/library/aa738757.aspx.
Consider also that ServiceClient is not thread safe (at least according to MSDN).
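If you do recreate the client per call, note that a plain `using` block is unsafe, because `Close()` throws on a faulted channel. A common per-call wrapper looks like this (a sketch; `IMyService` and its operation are illustrative stand-ins for your actual contract, reusing the "EndPointTCP" endpoint name from the question):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    int DoWork(int input);
}

public static class ServiceClientHelper
{
    public static TResult Call<TResult>(Func<IMyService, TResult> action)
    {
        var factory = new ChannelFactory<IMyService>("EndPointTCP");
        IMyService channel = factory.CreateChannel();
        var client = (IClientChannel)channel;
        try
        {
            TResult result = action(channel);
            client.Close();     // graceful close on success
            factory.Close();
            return result;
        }
        catch
        {
            client.Abort();     // never Close() a faulted channel
            factory.Abort();
            throw;
        }
    }
}
```

Usage would be something like `ServiceClientHelper.Call(s => s.DoWork(42));`, keeping all lifetime handling in one place.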

CACHE infrastructure with WCF services

I have some WCF services (let's call them X) which have a cache service client in them, so the end user who calls my WCF service does not know about the cache and should not have to care about it.
My cache service is also a WCF service, which is not publicly available; only X can call it. As you know, it is possible to put any kind of object in a cache (let's assume the cache is HttpRuntime.Cache), but when it comes to WCF, presenting the cached values from a WCF service, any kind of object can be a problem because of unknown data types.
My question is: how can I serve my cached values from WCF in a way that is as generic as possible?
I know this isn't going to solve your issue if you're stuck with this architecture, but personally I'd avoid this set-up completely.
I'd use a dedicated data cache of some sort with a dedicated client that talks to the cache in an efficient way.
If you're not going out-of-process with your caching, then you could use an in-memory cache, otherwise if you're going cross-process, or over the network, you'd be better off using a dedicated data cache like AppFabric/Velocity or Memcached.
You'd get so many other benefits out-of-the-box too, like distributed caching, redundancy and automatic fail-over. I doubt WCF is going to be a winning solution for data caching unknown objects.

Entity objects and NHibernate sessions

We have our first NHibernate project going on pretty well. However, I still have not grasped the complete picture how to manage the sessions and objects in our scenario.
So, we are configuring a system structure in a persistent object model, stored in a database with NHibernate.
The system consists of physical devices, which the application is monitoring in a service process. So at service startup, we instantiate Device objects in the service and update their status according to data read from the device interface. The object model stays alive during the lifetime of the service.
The service is also serving Silverlight clients, which display object data and may also manipulate some objects. But they must access the same objects that the service is using for monitoring, for example, because the objects also have in-memory data as well, which is not persisted. (Yes, we are using DTO objects to actually transfer the data to the clients.)
Since the service is a multithreaded system, the question is how the NHibernate sessions should be managed.
I am now considering an approach that we would just have a background thread that would take care of object persistence in the background and the other threads would just place "SaveRequests" to our Repository, instead of directly accessing the NHibernate sessions. By this means, I can use a single session for the service and manage the NHibernate layer completely separate from the service and clients that access the objects.
I have not found any documentation for such a setup, since everyone is suggesting a session-per-request model or some variation. But if I get it right, if I instantiate an object in one session and save it in another one, it is not the same object - and it also seems that NHibernate will create a new entry in the database.
I've also tried to figure out the role of IoC containers in this kind of context, but I have not found any useful examples showing that they could really help me.
Am I on the right track, or how should I proceed?
Consider ISession a unit of work. You will want to define within the context of your application, what constitutes a unit of work. A unit of work is a boundary around a series of smaller operations which constitute a complete, functional task (complete and functional is defined by you, in the design of your application). Is it when your service responds to a Silverlight client request, or other external request? Is it when the service wakes up to do some work on a timer? All of the above?
You want the session to be created for that unit of work, and disposed when it completes. It is not recommended that you use long-running ISession instances, where operations lazily use whatever ambient ISession they can find.
The idea is generally described as this:
I need to do some work (because I'm responding to an event, whether it be an incoming request, a job on a timer, it doesn't matter).
Therefore, I need to begin a new unit of work (which helps me keep track of all the operations I need to do while performing this work).
The unit of work begins a new ISession to keep track of my work.
I do my work.
If I was able to do my job successfully, all my changes should be flushed and committed.
If not, all my changes should be rolled back.
Clean up after myself (dispose ISession, etc.).
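The steps above can be sketched as a unit-of-work helper (a minimal example, assuming the application holds a single ISessionFactory; the helper name is illustrative):

```csharp
using System;
using NHibernate;

// One ISession per unit of work: committed on success, rolled back on
// failure, and always disposed when the work completes.
public static class UnitOfWork
{
    public static void Execute(ISessionFactory sessionFactory, Action<ISession> work)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            try
            {
                work(session);   // do my work
                tx.Commit();     // flush and commit all changes
            }
            catch
            {
                tx.Rollback();   // roll all my changes back
                throw;
            }
        }
    }
}
```

Each event (incoming request, timer job) would call `UnitOfWork.Execute(factory, session => ...)`, so no thread ever shares a long-running ambient session.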

WCF for a shared data access

I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved:
A web service needs to be accessible from multiple clients simultaneously, and the service needs to return results from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we're talking about a couple of thousand or more queries per minute.
My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) that has a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data -> multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right".
What I really want is a Windows service running an instance of an IP-list-holding container class object, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely, with minimal blocking. Using another WCF channel would not really take me far from the initial draft implementation, or would it?
What approach would you take? Project is still in a very early stage so complete design re-do is not out of question.
All ideas are appreciated. Thanks!
UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers a data cleanup every 10-15 minutes according to some rules.
UPDATE 2: a separate benchmark project will be kicked off that should use MySQL as a data backend (instead of an in-memory list).
It depends how far it has to scale. If a single server will suffice, then fine; keep it conveniently in memory (as long as you can recreate the data if the server gets restarted). If the data-volume is low, then simple blocking (lock) should work fine to synchronize the data, or for higher throughput a ReaderWriterLockSlim. I would probably not store it directly in the WCF class instance, though.
I would avoid anything involving sessions (if/when this ties into the WCF life-cycle); this is rarely helpful to simple services.
For distributed load (over multiple servers) I would give consideration to a separate dedicated backend. A database or memcached / AppFabric / etc would be worth consideration.
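The ReaderWriterLockSlim approach suggested above could look roughly like this (a sketch; the IP-range model is simplified to a set of strings for illustration, and the class lives outside the WCF service class as recommended):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Many concurrent validation reads, infrequent writes from the
// add-IP method and the scheduled cleanup task.
public sealed class IpListStore
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly HashSet<string> _addresses = new HashSet<string>();

    public bool Contains(string ip)
    {
        _lock.EnterReadLock();            // many readers may hold this at once
        try { return _addresses.Contains(ip); }
        finally { _lock.ExitReadLock(); }
    }

    public void Add(string ip)
    {
        _lock.EnterWriteLock();           // exclusive while the set changes
        try { _addresses.Add(ip); }
        finally { _lock.ExitWriteLock(); }
    }

    public void RemoveWhere(Predicate<string> staleRule)
    {
        _lock.EnterWriteLock();           // e.g. the 10-15 minute cleanup pass
        try { _addresses.RemoveWhere(staleRule); }
        finally { _lock.ExitWriteLock(); }
    }
}
```

The WCF singleton then holds only a reference to this store, keeping the data layer separable if you later move it behind MySQL or a dedicated cache.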
