How to have one Unity DbContext per WF instance? - c#

Objective: To have one DbContext instance per Workflow instance, for isolation of changes to prevent concurrency failures and to allow Transactions.
Environment:
I use Unity in an N-tier (Services, Repository, EF5) application and re-use all the same interfaces/classes across the website and WF (Unity.WF, with a custom Factory to add Unity RegisterTypes). I have the website's EF object running under the PerRequestLifetimeManager, which works well, and I'm trying to do the same for my WF instances.
I have one server running WF services (WF 4.5 with a WCF endpoint); soon this will need to be two (if that makes any difference to anyone's answers). I have multiple WF definitions (xamlx workflows), one of which in particular will soon be called 1000s of times a day to process files uploaded by clients. The processing time can be anywhere between <1 min and 1 hour depending on the amount of data uploaded; because of this I have set the incoming requests to persist immediately and then resume when the server is free.
Problem:
The documented LifetimeManager for EF with Unity.WF is the HierarchicalLifetimeManager, but this seems to use the same EF instance across all running WF instances of the same WF definition.
I have tried several WCF Unity Lifetime Managers, but they all rely on OperationContext.Current, which is only available if the WF does not persist, so this is not going to work for my situation.
So I tried the Microsoft WF Security Pack which claimed to be able to create/resurrect the OperationContext with an activity, this also does not work.
Surely this is a common problem that others have faced?
My next move would be to create my own LifetimeManager and somehow know about the workflow instance id and return the correct EF object from a Dictionary, but how would I get that id when the code is trying to resolve a dependency 2 levels deep in a repository constructor?
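A minimal sketch of that "next move" might look like the following. It assumes Unity 2/3's LifetimeManager base class and, crucially, that some ambient holder (here a hypothetical static, populated by a root activity at the start of each episode) exposes the current workflow instance id to the resolve pipeline; getting that id flowed reliably across WF's thread hops is exactly the hard part the question raises, so treat this as illustrative only:

```csharp
using System;
using System.Collections.Concurrent;
using Microsoft.Practices.Unity;

// Hypothetical ambient holder; a root activity would set this at the
// start of each workflow episode, before any dependencies resolve.
public static class WorkflowInstanceContext
{
    [ThreadStatic]
    public static Guid CurrentInstanceId;
}

// Returns one cached instance (e.g. a DbContext) per workflow instance id.
public class PerWorkflowInstanceLifetimeManager : LifetimeManager
{
    private static readonly ConcurrentDictionary<Guid, object> Values =
        new ConcurrentDictionary<Guid, object>();

    public override object GetValue()
    {
        object value;
        Values.TryGetValue(WorkflowInstanceContext.CurrentInstanceId, out value);
        return value; // null tells Unity to build (and then SetValue) a new one
    }

    public override void SetValue(object newValue)
    {
        Values[WorkflowInstanceContext.CurrentInstanceId] = newValue;
    }

    public override void RemoveValue()
    {
        object removed;
        Values.TryRemove(WorkflowInstanceContext.CurrentInstanceId, out removed);
    }
}
```

Note the [ThreadStatic] id will not survive persistence or thread switches by itself, which is why the execution-properties approach in the update below ended up being the workable direction.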
Thanks

Update:
Firstly, I could use a NativeActivity container and set the DataContext in the properties as in http://blogs.msdn.com/b/tilovell/archive/2009/12/20/workflow-scopes-and-execution-properties.aspx. This does let my child Activities share the same DataContext.
My problem did not go away, though: it always failed in the same Activity, and I suspect Lazy Loading is the cause. So I finally used a lock in the Execute function to ensure that only one WF can run this activity at a time, with a volatile static bool to check against so that the other WFs can Delay rather than block the maxConcurrentInstances.
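For readers who don't want to dig through the linked post, a sketch of that scope-activity shape is below. The names (ContextScope, the "MyApp.ClientEntities" property key) are illustrative, not from the original code, and disposal here only covers normal completion, not abort/persist paths:

```csharp
using System;
using System.Activities;

// A NativeActivity that creates one EF context per episode and publishes
// it as an execution property so all child activities can share it.
public sealed class ContextScope : NativeActivity
{
    public Activity Body { get; set; }

    protected override void Execute(NativeActivityContext context)
    {
        var db = new ClientEntities(); // illustrative EF context type
        context.Properties.Add("MyApp.ClientEntities", db);
        context.ScheduleActivity(Body, OnBodyCompleted);
    }

    private void OnBodyCompleted(NativeActivityContext context, ActivityInstance instance)
    {
        // children looked the context up via context.Properties.Find(...)
        var db = (ClientEntities)context.Properties.Find("MyApp.ClientEntities");
        db.Dispose();
    }
}
```

Since a DbContext is not serializable, a scope like this would normally also need to prevent persistence while it is open (e.g. by wrapping the body in a no-persist zone).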
I hope this helps someone else.

Related

ASP.NET Core Return HTTP Response And Continue Background Worker With Same Context

Sorry ahead of time, this is a bit of a lengthy setup/question. I am currently working on an API using C# ASP.NET Core 2.1. I have a POST endpoint which takes about 5-10 seconds to execute (which is fine). I need to add functionality which could take a considerable amount of time to execute: under my current load testing it takes an additional 3 minutes, and production could take longer still, because I can't really get a good answer as to how many of these things we can expect to process. From a UX perspective it is not acceptable for the front end to wait that long on the existing POST request, so I need to send the response back sooner in order to maintain an acceptable UX.
All services are set up as transient using the default ASP.NET Core DI container. This application is using EF Core and is set up in the same fashion as the services (sorry I am not at work right now and forgot the exact verbiage within the Setup file).
I first tried to just create a background worker, but after the response was sent to the client, internal objects would start to be disposed (i.e. the entity db context) and it would eventually throw errors when continuing to try executing code using said context (which makes sense since they were being disposed).
I was able to get a background worker mostly working by using the injected IServiceScopeFactory (default ASP.NET Core implementation). All my code executes successfully until I try saving to the DB. We have overridden the SaveChangesAsync() method so that it will automatically update the properties CreatedByName, CreatedTimestamp, UpdatedByName, and UpdatedTimestamp to the currently tracked entities respectively. Since this logic is used by an object created from the IServiceScopeFactory, it seems like it does not share the same HttpContext and therefore, does not update the CreatedByName and UpdatedByName correctly (tries to set these to null but the DB column does not accept null).
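One common way around the missing-HttpContext problem is to capture the identity values on the request thread, before the response goes out, and hand them to the scoped job explicitly. A minimal sketch, with illustrative type names (AppDbContext, DoLongWorkAsync) and without the durability a real job queue would give you:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

[HttpPost]
public IActionResult Process([FromServices] IServiceScopeFactory scopeFactory)
{
    // capture while HttpContext is still alive
    var userName = User.Identity?.Name;

    _ = Task.Run(async () =>
    {
        // a fresh scope so the job's DbContext outlives the request
        using (var scope = scopeFactory.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            // pass userName down so the SaveChangesAsync override can stamp
            // CreatedByName/UpdatedByName without reading HttpContext
            await DoLongWorkAsync(db, userName);
        }
    });

    return Accepted(); // respond immediately
}
```

The key design point is that nothing inside the background scope touches HttpContext; everything it needs is passed in as plain values.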
Right before I left work, I created something that seemed to work, but it feels very dirty. Instead of using the IServiceScopeFactory within my background worker to create a new scope, I created an impersonated request using the WebClient object, pointed at an endpoint within the same API that was currently being executed. This did allow the response to be sent back to the client in a timely manner, and it did continue executing the new functionality on the server (updating my entities correctly).
I apologize, I am not currently at work and cannot provide code examples at this moment, but if it is required in order to fully answer this post, I will put some on later.
Ideally, I would like to be able to start my request, process the logic within the existing POST, send the response back to the client, and continue executing the new functionality using the same context (including the HttpContext which contains identity information). My question is, can this be done without creating an impersonated request? Can this be accomplished with a background worker using the same context as the original thread (I know that sounds a bit weird)? Is there another approach that I am completely missing? Thanks ahead of time.
Look into Hangfire; it's a pretty easy-to-use library for background tasks.
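To illustrate the suggestion: with Hangfire the slow work becomes a persisted job, and because job arguments are serialized you pass the user name as a plain value rather than relying on HttpContext. FileProcessor and ProcessAsync are illustrative names:

```csharp
using Hangfire;
using Microsoft.AspNetCore.Mvc;

[HttpPost]
public IActionResult Process()
{
    // capture identity on the request thread; Hangfire serializes arguments
    var userName = User.Identity?.Name;

    // enqueue and return immediately; Hangfire retries the job if it fails
    BackgroundJob.Enqueue<FileProcessor>(p => p.ProcessAsync(userName));

    return Accepted();
}
```

Unlike a bare Task.Run, the job survives an app-pool recycle because Hangfire stores it in its backing database.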

MemoryCache over Threads

I'm currently investigating some code which has a cache layer which at the bottom level uses the MemoryCache class. This is a C# Windows Service app, so not web/IIS. There is a section of the code which spawns off a number of threads, in which it creates and executes some code in a POC to perform some calculations. The results are then stored in the above-mentioned cache layer. What is being seen is that cached values seem to be getting stored per thread and not at application level. I thought that MemoryCache was a singleton that would sit outside of the individual threads.
Can anybody confirm this behaviour would be expected?
Many thanks for any comments.
A MemoryCache is thread-safe, but there is no reason to assume it's a singleton. If you want different threads to access the same MemoryCache instance, you need to give them all a reference to that one instance: either as a singleton (really bad), as a static (still bad), or through argument passing / dependency injection (good).
The simple way to do it (which does use global state) is to access the default memory cache:
var cache = MemoryCache.Default; // not really a good idea and harder to test, works
You can find the specific docs here. Make sure to configure it in your app/web.config file.
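A small sketch of the "pass the same reference" option, with a hypothetical Compute method standing in for the POC's calculation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;
using System.Threading;

// one cache instance, created once and handed to every worker
var shared = new MemoryCache("calculations");

List<Thread> threads = Enumerable.Range(0, 4)
    .Select(i => new Thread(() =>
    {
        // all threads write to the same instance; MemoryCache itself is
        // thread-safe, so Get/Set need no extra locking
        shared.Set("result-" + i, Compute(i), DateTimeOffset.Now.AddMinutes(10));
    }))
    .ToList();

threads.ForEach(t => t.Start());
threads.ForEach(t => t.Join());

// after Join, all four entries are visible from the main thread too
```

If each thread instead did `new MemoryCache(...)` itself, you would see exactly the per-thread behaviour described in the question.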

Is it wrong to call my ObjectEntities many times in EntityFramework?

I have a Windows Service running every ten seconds, in different threads.
This service makes various CRUD operations on a SQL Server 2008 Database onto the same machine.
For each CRUD operation, I put a "using" bracket like this example :
public object InsertClient(clsClient c)
{
    using (ClientEntities e = new ClientEntities())
    {
        e.Clients.AddObject(c);
        e.SaveChanges(); // without this the insert is never sent to the database
        return c;
    }
}
I'm concerned about the efficiency of these operations if there is already another thread interacting with the same table. Is this the right way to do it?
Furthermore, is there any risk of interthread exception with this method ?
Thanks for your help.
No, it's not wrong to have multiple object entities as long as you create and dispose them right away.
Here is the general recommendation from MSDN.
When working with a long-running object context, consider the following:
- As you load more objects and their references into memory, the object context may grow quickly in memory consumption. This may cause performance issues.
- Remember to dispose of the context when it is no longer required.
- If an exception causes the object context to be in an unrecoverable state, the whole application may terminate.
- The chances of running into concurrency-related issues increase as the gap between the time when the data is queried and updated grows.
- When working with Web applications, use an object context instance per request. If you want to track changes in your objects between the tiers, use the self-tracking entities. For more information, see Working with Self-Tracking Entities and Building N-Tier Applications.
- When working with Windows Presentation Foundation (WPF) or Windows Forms, use an object context instance per form. This lets you use the change tracking functionality that the object context provides.
If you are worried about the cost of creating a connection for every new object entities instance: EF relies on the data provider, and if the provider is ADO.NET, connection pooling is enabled by default unless you disable it in the connection string.
Also, the metadata is cached globally per application domain, so every new object entities instance will simply copy the metadata from the global cache.
And since EF is not thread-safe, it's recommended to have a separate object entities instance per thread.
Like much of .NET, the Entity Framework is not thread-safe. This means that to use the Entity Framework in multithreaded environments, you need to either explicitly keep individual ObjectContexts in separate threads, or be very conscientious about locking threads so that you don't get collisions. - MSDN
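Concretely, "one context per thread" just means each worker creates, uses, and disposes its own context as a short-lived unit of work, much like the question's snippet. A sketch, with newClient standing in for whatever data the worker is handling:

```csharp
using System.Threading;

// each queued work item owns its own ClientEntities; nothing is shared
ThreadPool.QueueUserWorkItem(_ =>
{
    using (var e = new ClientEntities())
    {
        e.Clients.AddObject(newClient);
        e.SaveChanges(); // short-lived unit of work, then dispose
    }
});
```

The database handles the cross-thread coordination; the contexts themselves never need to be shared or locked.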

Bookmark with timeout in WF 4.0 when persisted

I have been looking around for a while now and I want to create a timeout property on a bookmark in WF 4.0.
I can make it work with using a Picker with two different branches (and have a timer in one of them and my bookmark in the other).
However, this does not work if my workflow is persisted to the database (which it will be, since the timeout will be several days): the timer will not trigger until I next load the workflow, which can also be several days away.
Does anyone know if there is any other way to solve this in the WF 4.0? Or have you done a great workaround?
Okay, so what you're going to want to do is build a Workflow Service; you will not be able to do this nearly as easily in a workflow that is not hosted via the Workflow Service Host (WSH). To tell you it can't be done would be incorrect, but I can tell you that you don't want to.
That service will be available via a WCF endpoint and can do exactly what you're needing. You would be able to build a workflow that had a pick branch that had two things in it, the first is a Receive activity that could be called into by the user if they responded in time. The second would be a durable timer that ticked at a specified interval and would allow you to branch down another path. Now this same service can have more than one Receive activity and thus exposing more than one endpoint so if your workflow has any other branches just like this you can handle all of those in one atomic workflow.
Does this make sense?

Entity objects and NHibernate sessions

We have our first NHibernate project going on pretty well. However, I still have not grasped the complete picture how to manage the sessions and objects in our scenario.
So, we are configuring a system structure in a persistent object model, stored in a database with NHibernate.
The system consists of physical devices, which the application is monitoring in a service process. So at service startup, we instantiate Device objects in the service and update their status according to data read from the device interface. The object model stays alive during the lifetime of the service.
The service is also serving Silverlight clients, which display object data and may also manipulate some objects. But they must access the same objects that the service is using for monitoring, for example, because the objects also have in-memory data as well, which is not persisted. (Yes, we are using DTO objects to actually transfer the data to the clients.)
Since the service is a multithreaded system, the question is how the NHibernate sessions should be managed.
I am now considering an approach that we would just have a background thread that would take care of object persistence in the background and the other threads would just place "SaveRequests" to our Repository, instead of directly accessing the NHibernate sessions. By this means, I can use a single session for the service and manage the NHibernate layer completely separate from the service and clients that access the objects.
I have not found any documentation for such a setup, since everyone is suggesting a session-per-request model or some variation. But if I get it right, if I instantiate an object in one session and save it in another one, it is not the same object - and it also seems that NHibernate will create a new entry in the database.
I've also tried to figure out the role of IoC containers in this kind of context, but I have not found any useful examples that would show how they could really help me.
Am I on a right track or how should I proceed?
Consider ISession a unit of work. You will want to define within the context of your application, what constitutes a unit of work. A unit of work is a boundary around a series of smaller operations which constitute a complete, functional task (complete and functional is defined by you, in the design of your application). Is it when your service responds to a Silverlight client request, or other external request? Is it when the service wakes up to do some work on a timer? All of the above?
You want the session to be created for that unit of work, and disposed when it completes. It is not recommended that you use long-running ISession instances, where operations lazily use whatever ambient ISession they can find.
The idea is generally described as this:
1. I need to do some work (because I'm responding to an event, whether it be an incoming request or a job on a timer; it doesn't matter).
2. Therefore, I need to begin a new unit of work (which helps me keep track of all the operations I need to do while performing this work).
3. The unit of work begins a new ISession to keep track of my work.
4. I do my work.
5. If I was able to do my job successfully, all my changes should be flushed and committed.
6. If not, roll all my changes back.
7. Clean up after myself (dispose the ISession, etc.).
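That sequence maps almost line for line onto NHibernate's API. A minimal sketch, assuming a singleton ISessionFactory built once at service startup:

```csharp
using NHibernate;

// sessionFactory is the application-wide singleton; sessions are cheap
// and short-lived, one per unit of work
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    try
    {
        // ... do my work: load, modify, and save entities via this session ...
        tx.Commit();   // flush and commit everything on success
    }
    catch
    {
        tx.Rollback(); // roll all my changes back
        throw;
    }
}
```

The using blocks take care of the "clean up after myself" step regardless of which path was taken.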