We have our first NHibernate project going pretty well. However, I still have not grasped the complete picture of how to manage the sessions and objects in our scenario.
So, we are modeling the system structure as a persistent object model, stored in a database with NHibernate.
The system consists of physical devices, which the application is monitoring in a service process. So at service startup, we instantiate Device objects in the service and update their status according to data read from the device interface. The object model stays alive during the lifetime of the service.
The service also serves Silverlight clients, which display object data and may also manipulate some objects. But they must access the same objects that the service is using for monitoring, because the objects also hold in-memory state that is not persisted. (Yes, we are using DTOs to actually transfer the data to the clients.)
Since the service is a multithreaded system, the question is how the NHibernate sessions should be managed.
I am now considering an approach where we would have a background thread that takes care of object persistence, and the other threads would just place "SaveRequests" with our Repository instead of accessing the NHibernate sessions directly. This way I can use a single session for the service and manage the NHibernate layer completely separately from the service and the clients that access the objects.
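Roughly, the queue I have in mind would look something like this (just a sketch; names like PersistenceQueue are mine, not an existing API):

using System.Collections.Concurrent;
using NHibernate;

public class PersistenceQueue
{
    private readonly BlockingCollection<object> _saveRequests =
        new BlockingCollection<object>();

    // Called from any worker thread.
    public void RequestSave(object entity)
    {
        _saveRequests.Add(entity);
    }

    // Runs on the single background persistence thread, which owns the session.
    public void ProcessLoop(ISession session)
    {
        foreach (object entity in _saveRequests.GetConsumingEnumerable())
        {
            session.SaveOrUpdate(entity);  // insert or update depending on state
            session.Flush();               // push the change to the database
        }
    }
}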
I have not found any documentation for such a setup, since everyone is suggesting a session-per-request model or some variation. But if I get it right, if I instantiate an object in one session and save it in another one, it is not the same object - and it also seems that NHibernate will create a new entry in the database.
I've also tried to figure out the role of IoC containers in this kind of context, but I have not found any useful examples showing that they could really help me.
Am I on the right track, or how should I proceed?
Consider ISession a unit of work. You will want to define, within the context of your application, what constitutes a unit of work. A unit of work is a boundary around a series of smaller operations which constitute a complete, functional task (complete and functional is defined by you, in the design of your application). Is it when your service responds to a Silverlight client request, or other external request? Is it when the service wakes up to do some work on a timer? All of the above?
You want the session to be created for that unit of work, and disposed when it completes. It is not recommended that you use long-running ISession instances, where operations lazily use whatever ambient ISession they can find.
The idea is generally described like this (a code sketch follows the list):
I need to do some work (because I'm responding to an event, whether it be an incoming request, a job on a timer, it doesn't matter).
Therefore, I need to begin a new unit of work (which helps me keep track of all the operations I need to do while performing this work).
The unit of work begins a new ISession to keep track of my work.
I do my work.
If I was able to do my job successfully, all my changes should be flushed and committed.
If not, roll all my changes back.
Clean up after myself (dispose ISession, etc.).
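In NHibernate terms, that pattern boils down to something like the following (a minimal sketch; the session factory and the work delegate stand in for your own types and operations):

using System;
using NHibernate;

public static class UnitOfWork
{
    public static void Execute(ISessionFactory sessionFactory, Action<ISession> doMyWork)
    {
        // One ISession per unit of work, created at the start and disposed at the end.
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            try
            {
                doMyWork(session);  // all the smaller operations of this task
                tx.Commit();        // success: flush and commit all changes
            }
            catch
            {
                tx.Rollback();      // failure: roll all changes back
                throw;
            }
        }
    }
}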
EDIT: Realized that I might have to rethink the design of the system.
I have a WCF service that accesses an MS SQL database. In the database a number of "objects" are saved. I need to be able to access the database from multiple threads, but each object must be accessed in a thread-safe way - and I need to do this in the WCF service.
I know it is possible to do this with isolation levels in the database, but I want to manage as much as possible in the application. The reason is to keep all business logic in the application, so it doesn't really matter that it's a database - it could be a static collection of some sort.
E.g. a collection of these objects:
Object1
Object2
Object3
Multiple threads should be able to access the collection at all times, but only one thread at a time should access Object1. A thread won't access multiple objects, so there shouldn't be any risk of deadlocking.
I could do some workaround managing a lot of singletons, keyed by some kind of id, but I think that would add a lot of overhead.
I also thought of adding a bit in the database to flag whether an object is being accessed, but then I would have to implement some waiting mechanism in the application, and if something went wrong there is a risk the thread would hang indefinitely.
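To illustrate, the per-id locking I am picturing would be something like this (a sketch only; all names are mine):

using System;
using System.Collections.Concurrent;

public class PerObjectLocker
{
    // One gate object per "object" id; created lazily and shared by all threads.
    private readonly ConcurrentDictionary<int, object> _gates =
        new ConcurrentDictionary<int, object>();

    public void RunExclusively(int objectId, Action work)
    {
        object gate = _gates.GetOrAdd(objectId, _ => new object());
        lock (gate)   // blocks only threads contending for the same object
        {
            work();   // e.g. load Object1, mutate it, save it
        }
    }
}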
I have a web application that fires off many AJAX requests from different users to carry out actions. One of these requests fires off some database updates. If such an update is already in progress, I want to make sure requests for this action from other sessions are simply ignored. Is it safe to implement a static variable that I can lock so the action can be ignored by other requests while one is in progress, or would this just be bad design?
Update
After digging more I came across optimistic concurrency. I'm using EF6, so to handle this it sounds like all I need to do is set the Concurrency Mode to Fixed?
A solution based on static variables may look attractive, because it is easy to implement. However, it quickly becomes a maintenance liability, particularly in a web application environment.
The problem with static variables in web environments, such as IIS, is that they are not shared globally across your entire application: if you configure your app pool to have several worker processes, each process would have its own copy of all static variables. Moreover, if you configure your infrastructure for load balancing, each process on each server would have its own copy, with no control on the part of your application. In your situation this would mean a possibility of multiple updates happening at the same time.
That is why I would avoid using a static variable in situations when it is absolutely critical that at most a single request be in progress at any given time.
In your situation, the persistence layer should be in charge of not corrupting the data no matter how many updates are firing at the same time. The persistence layer needs to decide which requests to execute and which to throw away. One approach to solving this problem is optimistic locking. See this Q&A for general information on how it could be implemented.
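Since you mention EF6: yes, the usual way in is to mark a row-version column as a concurrency token. A rough sketch, with hypothetical entity and context names:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class ActionRecord
{
    public int Id { get; set; }
    public string Status { get; set; }

    [Timestamp]   // maps to a SQL Server rowversion column used as the concurrency token
    public byte[] RowVersion { get; set; }
}

public class AppDb : DbContext
{
    public DbSet<ActionRecord> Actions { get; set; }
}

public static class UpdateRunner
{
    // Returns false when another request updated the row first.
    public static bool TryUpdate(int id, string newStatus)
    {
        using (var db = new AppDb())
        {
            ActionRecord record = db.Actions.Find(id);
            if (record == null) return false;

            record.Status = newStatus;
            try
            {
                db.SaveChanges();   // UPDATE ... WHERE Id = @id AND RowVersion = @original
                return true;
            }
            catch (DbUpdateConcurrencyException)
            {
                return false;       // a concurrent update won the race; ignore this one
            }
        }
    }
}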
I am new to WCF and service development and have the following question.
I want to write a service which relies on some data (from a database, for example) in order to process client requests and reply.
I do not want to query the database for every single call. My question is: is there any technique or way to load such data upfront or just once, so that the service does not need to fetch it for every request?
I read that setting InstanceContextMode to Single can be a bad idea (I'm not exactly sure why). Can somebody explain the best way to deal with such a situation?
Thanks
The BCL has a Lazy<T> class that is made for this purpose. Unfortunately, in its default thread-safety mode it stores any exception from the factory (network issue, timeout, ...) forever and rethrows it on every access. This means that your service is down forever if a transient error happens during initialization. That's unacceptable. The default Lazy<T> is therefore unusable here. Microsoft has declared that they are unwilling to fix this.
The best way to deal with this is to write your own lazy or use something equivalent.
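A minimal sketch of such a hand-rolled lazy (my own illustration, not a library type): if initialization throws, nothing is cached, and the next caller simply retries.

using System;

public sealed class RetryLazy<T>
{
    private readonly Func<T> _factory;
    private readonly object _gate = new object();
    private volatile bool _created;
    private T _value;

    public RetryLazy(Func<T> factory)
    {
        _factory = factory;
    }

    public T Value
    {
        get
        {
            if (_created) return _value;      // fast path after successful init
            lock (_gate)
            {
                if (!_created)
                {
                    _value = _factory();      // if this throws, nothing is cached
                    _created = true;          // publish only after success
                }
                return _value;
            }
        }
    }
}

Alternatively, Lazy<T> constructed with LazyThreadSafetyMode.PublicationOnly does not cache exceptions either, at the cost of potentially running the factory on several threads at once.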
You can also use LazyInitializer; see the documentation.
I don't know how instance mode Single behaves in case of an exception. In any case, it is architecturally unwise to put lazily initialized resources into the service class. If you want to share those resources with multiple services, that's a problem. It's also not the responsibility of the service class to do that.
It all depends on the amount of data to load and the pattern of data usage.
Assuming that your service calls are independent and may require different portions of data, you may implement some caching (using Lazy<T> or similar techniques). But this solution has one important caveat: once data is loaded into the cache, it will be there forever unless you define some expiration strategy (time-based, flush-on-write, or something else). Without a cache-entry expiration strategy your service will consume more and more memory over time.
This may not be too important a problem, though, if the amount of data you load from the database is small, or if the majority of calls access the same data again and again.
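For the time-based variant, MemoryCache from System.Runtime.Caching can do the expiration bookkeeping for you. A sketch, with the loader delegate as a placeholder:

using System;
using System.Runtime.Caching;

public static class ReferenceDataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrLoad<T>(string key, Func<T> loadFromDatabase) where T : class
    {
        T data = Cache.Get(key) as T;
        if (data == null)
        {
            data = loadFromDatabase();   // hit the database only on a cache miss
            Cache.Set(key, data, DateTimeOffset.Now.AddMinutes(10));   // expire after 10 minutes
        }
        return data;
    }
}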
Another approach is to use WCF sessions (set InstanceContextMode to PerSession). This ensures that a service object is created for the lifetime of a session (which stays alive while the particular WCF client is connected), and all calls from that client are dispatched to the same service object. This may or may not be appropriate from a business-domain point of view. If it is, you can load your data from the database on the first call, and subsequent calls within the same session can reuse it. A new session (another client, or the same client after a reconnect) will have to load the data again.
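In outline (a sketch with made-up contract names, not your actual service):

using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IDataService
{
    [OperationContract]
    string GetData();
}

// One service object per client session; data loaded on the first call is
// reused by later calls in the same session.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class DataService : IDataService
{
    private string _data;   // lives as long as the session does

    public string GetData()
    {
        if (_data == null)
        {
            _data = LoadFromDatabase();   // stand-in for the real query
        }
        return _data;
    }

    private static string LoadFromDatabase()
    {
        return "...";
    }
}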
Objective: To have one DbContext instance per Workflow instance, for isolation of changes to prevent concurrency failures and to allow Transactions.
Environment:
I use Unity in an N-tier (Services, Repository, EF5) application and re-use all the same interfaces/classes across the website and WF (Unity.WF, with a custom Factory to add Unity RegisterTypes). The website's EF object runs under the PerRequestLifetimeManager, which works well; I'm trying to do the same for my WF instances.
I have one server running WF services (WF 4.5 with a WCF endpoint); soon this will need to be two (if that makes any difference to anyone's answers). I have multiple WF definitions (xamlx workflows), one of which will soon be called thousands of times a day to process files uploaded by clients. Processing time can be anywhere between under a minute and an hour, depending on the amount of data uploaded, so I have set the service to persist incoming requests immediately and resume them when the server is free.
Problem:
The documented LifetimeManager for EF with Unity.WF is the HierarchicalLifetimeManager, but this seems to use the same EF instance across all running WF instances of the same WF definition.
I have tried several WCF Unity lifetime managers, but they all rely on OperationContext.Current, which is only available if the WF does not persist - so that is not going to work for my situation.
So I tried the Microsoft WF Security Pack, which claims to be able to create/resurrect the OperationContext with an activity; this also does not work.
Surely this is a common problem that others have faced?
My next move would be to create my own LifetimeManager that somehow knows the workflow instance id and returns the correct EF object from a Dictionary - but how would I get that id when the code is trying to resolve a dependency two levels deep, in a repository constructor?
Thanks
Update:
Firstly, I could use a NativeActivity container and set the DataContext in the execution properties, as in http://blogs.msdn.com/b/tilovell/archive/2009/12/20/workflow-scopes-and-execution-properties.aspx. This lets my child activities share the same DataContext.
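In outline, the scope activity from that post looks something like this (a rough sketch from memory; disposal and error handling are omitted, and MyDbContext is a placeholder for the real context):

using System.Activities;
using System.Data.Entity;

public class MyDbContext : DbContext { }   // placeholder for the real EF context

public sealed class DataContextScope : NativeActivity
{
    public Activity Body { get; set; }

    protected override void Execute(NativeActivityContext context)
    {
        // Child activities can fetch this with context.Properties.Find("MyDbContext").
        context.Properties.Add("MyDbContext", new MyDbContext());

        if (Body != null)
        {
            context.ScheduleActivity(Body);
        }
    }
}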
My problem did not go away; it always failed in the same activity, and I suspect lazy loading is the reason. So I finally used a lock in the Execute function to ensure that only one WF can run this activity at once, together with a volatile static bool to check against, so that other WFs can Delay instead of blocking the maxConcurrentInstances.
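Stripped down, my guard looks roughly like this (a sketch; the part that schedules the Delay when busy is left out):

using System.Activities;

public sealed class ExclusiveFileProcessing : CodeActivity
{
    private static readonly object Gate = new object();
    private static volatile bool _busy;

    // Other workflow instances check this and schedule a Delay instead of blocking.
    public static bool IsBusy
    {
        get { return _busy; }
    }

    protected override void Execute(CodeActivityContext context)
    {
        lock (Gate)   // only one workflow runs this section at a time
        {
            _busy = true;
            try
            {
                // ... the EF work that was failing under concurrency ...
            }
            finally
            {
                _busy = false;
            }
        }
    }
}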
I hope this helps someone else.
I have a Windows service that runs every ten seconds, on different threads.
This service performs various CRUD operations on a SQL Server 2008 database on the same machine.
For each CRUD operation, I use a "using" block like in this example:
public object InsertClient(clsClient c)
{
    using (ClientEntities e = new ClientEntities())
    {
        e.Clients.AddObject(c);
        e.SaveChanges();   // persist the insert before the context is disposed
        return c;
    }
}
I'm concerned about the efficiency of these operations if another thread is already interacting with the same table. Is this the right way to do it?
Furthermore, is there any risk of cross-thread exceptions with this approach?
Thanks for your help.
No, it's not wrong to have multiple context instances, as long as you create and dispose each one right away.
Here is the general recommendation from MSDN.
When working with long-running object context consider the following:
As you load more objects and their references into memory, the object context may grow quickly in memory consumption. This may cause performance issues.
Remember to dispose of the context when it is no longer required.
If an exception caused the object context to be in an unrecoverable state, the whole application may terminate.
The chances of running into concurrency-related issues increase as the gap between the time when the data is queried and updated grows.
When working with Web applications, use an object context instance per request. If you want to track changes in your objects between the tiers, use the self-tracking entities. For more information, see Working with Self-Tracking Entities and Building N-Tier Applications.
When working with Windows Presentation Foundation (WPF) or Windows Forms, use an object context instance per form. This lets you use change tracking functionality that object context provides.
If you are worried about the cost of creating a connection for every new context: EF relies on the underlying data provider, and if the provider is ADO.NET, connection pooling is enabled by default unless you disable it in the connection string.
Also, the metadata is cached globally per application domain, so every new context simply reuses the metadata from the global cache.
And since EF is not thread-safe, it's recommended to use a separate context instance per thread.
Like much of .NET, the Entity Framework is not thread-safe. This means that to use the Entity Framework in multithreaded environments, you need to either explicitly keep individual ObjectContexts in separate threads, or be very conscientious about locking threads so that you don't get collisions. - MSDN