What is the Lifecycle of a static field - c#

What is the lifecycle of a static field in C# MVC:
private static InventoryMgmtContext _dbContext = new InventoryMgmtContext();

public ManageWorkOrdersAppServ()
    : base(new WorkOrderHeaderRepository(_dbContext))
{
    _workOrderHeaderRepository = new WorkOrderHeaderRepository(_dbContext);
    _workOrderDetailRepository = new WorkOrderDetailRepository(_dbContext);
}
In this case when does the _dbContext die?
This is a follow up to my other question that I haven't been able to get clarification on.

Static fields live for as long as the AppDomain in which the type is loaded lives. That's true regardless of environment.
Now in a web environment, IIS will recycle the AppDomain in some situations - so you shouldn't rely on it being the same forever.
If that's really a database context though, I don't think it should be in a static field at all. Typically you create a database context for a single "unit of work".
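A minimal sketch of that unit-of-work shape, reusing the context and repository names from the question (the method itself and its body are hypothetical):

```csharp
// Hypothetical service method: the context is created for one unit of work
// and disposed when that work completes, instead of living in a static field.
public async Task CompleteWorkOrderAsync(int workOrderId)
{
    using (var dbContext = new InventoryMgmtContext())
    {
        var headerRepository = new WorkOrderHeaderRepository(dbContext);
        var detailRepository = new WorkOrderDetailRepository(dbContext);

        // ... load and mutate entities through the repositories ...

        await dbContext.SaveChangesAsync();
    } // the context (and its connection) is disposed here, at the end of the unit of work
}
```

Because nothing outlives the `using` block, an IIS AppDomain recycle can never catch the context in a half-used state.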

Related

Should EF6 DbContext be injected as Scoped or Transient if transactions need to be executed both in batches and asynchronously individually?

About 2 years ago, we made the change from ADO.NET over to Entity Framework 6. Initially, we simply instantiated our DbContexts where we needed them. However, at some point we started down the path of prepping for implementing Dependency Injection in the solution. As such, our DbContexts were injected into our MVC controller constructors, and then the necessary logic classes were instantiated directly using the DbContexts. For a while, this worked great, as we had certain IRepository implementations that allowed us to manipulate dozens of entities across multiple repositories and save them all with a single SaveChanges call.
However, over time, we've started to adopt a more purist DI approach where all our new classes are being injected (rather than instantiated). As a side-effect, we've started moving away from repositories and towards using EF as just a core repository across our solution. This has led to us building modules in our application that perform their unit of work and save their changes. So rather than having dozens of repositories being used and accessed to perform an operation, we simply use the DbContext.
Initially, this worked out alright as we were injecting our DbContexts as scoped, and the functionality was unchanged. However, with the move towards more self-contained, self-saving modules, we've encountered concurrency errors with our new functionality. We managed to solve the concurrency issues by switching the DI configuration for our DbContexts over to transient. This presented each self-contained module with a new DbContext and they were able to execute and save without caring what the other modules were doing.
However, switching the DbContexts over to transient had the unfortunate side-effect of making it impossible to switch our legacy modules over to our DI container as they relied on a singular shared DbContext across all of their injected dependencies.
So my main conundrum is whether we should make our DbContexts Scoped or Transient. And if we do settle on scoped, how do we write our new modules so that they can execute in a parallel way? And if we settle on transient, how can we preserve the functionality in our dozens of legacy classes that are still developed and used?
Scoped
Pros
Single DbContext per request. No worries about entities being tracked in different contexts, and saves can be done wholesale.
Legacy Code does not need any major changes to be switched to DI.
Cons
Unrelated tasks can't execute concurrently using the same context.
Developers must constantly be aware of the state of the current context. They need to be wary of any side-effects from other classes utilizing the same context.
System.NotSupportedException: 'A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.' thrown during concurrent operations.
Transient
Pros
New DbContext per class. No worries about locking context while performing most operations on the context.
Modules become self-contained and you don't need to worry about side-effects from other classes.
Cons
Receiving an entity from one context and attempting to use it in a different context instance can cause errors.
No ability to perform batch operations across multiple different classes sharing the same context.
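The first con can be sketched roughly like this (hypothetical code, reusing the EMR entity from the demos in this question; the exact failure mode depends on the EF version and on what each context is already tracking):

```csharp
// contextA and contextB are two different transient DbContext instances.
var emr = contextA.EMRs.First();   // emr is tracked by contextA
emr.Name = "updated";

// Handing the instance to a second context is where things can go wrong:
// depending on its state, contextB may treat it as a brand-new row and
// re-insert it, or throw an InvalidOperationException about an instance
// with the same key already being tracked once contextB has loaded its
// own copy of that row.
contextB.EMRs.Attach(emr);
```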
Here is a demo algorithm to force a concurrency error for a scoped context. It presents a possible use-case for the transient injection.
// Logic Class
public class DemoEmrSaver
{
    private readonly DbContext _dbContext;

    public DemoEmrSaver(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Task CreateEmrs(int number)
    {
        Contract.Assert(number > 0);
        for (var i = 0; i < number; i++)
            CreateEmr();
        return _dbContext.SaveChangesAsync();
    }

    private void CreateEmr()
    {
        var emr = new EMR
        {
            Name = Guid.NewGuid().ToString()
        };
        _dbContext.EMRs.Add(emr);
    }
}
// In a controller
public async Task<IActionResult> TestAsync()
{
    // in reality, this would be two different services.
    var emrSaver1 = new DemoEmrSaver(_dbContext);
    var emrSaver2 = new DemoEmrSaver(_dbContext);
    await Task.WhenAll(emrSaver1.CreateEmrs(5), emrSaver2.CreateEmrs(5));
    return Json(true);
}
And here is a demo of how the older services often functioned
public class DemoEmrSaver
{
    private readonly DbContext _dbContext;

    public DemoEmrSaver(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void CreateEmrs(int number)
    {
        Contract.Assert(number > 0);
        for (var i = 0; i < number; i++)
            CreateEmr();
    }

    private void CreateEmr()
    {
        var emr = new EMR
        {
            Name = Guid.NewGuid().ToString()
        };
        _dbContext.EMRs.Add(emr);
    }
}
// controller action
public async Task<IActionResult> TestAsync()
{
    var emrSaver1 = new DemoEmrSaver(_dbContext);
    var emrSaver2 = new DemoEmrSaver(_dbContext);
    emrSaver1.CreateEmrs(5);
    emrSaver2.CreateEmrs(5);
    await _dbContext.SaveChangesAsync();
    return Json(true);
}
Is there some sort of middle ground that won't require massive overhauls to the old code, but that still enables my new modules to be defined and utilized in a simple way (e.g. avoiding having to pass a Func of some sort into each constructor to get a new instance, and avoiding having to specifically request a fresh DbContext everywhere I need one)?
Also probably important, I'm using the .Net Core DI Container from the Microsoft.Extensions.DependencyInjection namespace.
Why not use an artificial scope where you have these difficulties?
For example, we have some background services in our codebase. When they are used inside a normal ASP.NET Core web app, as you say, the contexts are bound to the requests; but our console apps have no built-in concept of a scope, so we have to define it ourselves.
To create an artificial scope, simply inject an IServiceScopeFactory; everything resolved inside the scope will use a new, separate context.
public class SchedulerService
{
    private readonly IServiceScopeFactory _scopeService;

    public SchedulerService(IServiceScopeFactory scopeService)
    {
        _scopeService = scopeService;
    }

    public void EnqueueOrder(Guid? recurrentId)
    {
        // Everything resolved here is created as if in a new scope,
        // like a request in an ASP.NET Core web app
        using (var scope = _scopeService.CreateScope())
        {
            var recurrencyService = scope.ServiceProvider.GetRequiredService<IRecurrencyService>();
            // This service, and its injected services (like the context),
            // are all created within that same scope
            recurrencyService.ProcessScheduledOrder(recurrentId);
        }
    }
}
This way you can control the lifetime of the scoped services, helping you to share the same context inside that block.
I would recommend creating just one service this way, and then programming everything inside that service as normal; this way your code will be kept clean and easier to read. So, do like the example:
using (var scope = _scopeService.CreateScope())
{
    var recurrencyService = scope.ServiceProvider.GetRequiredService<IRecurrencyService>();
    // In this service you can do everything, and it is
    // all contained in the same scope
    recurrencyService.ProcessScheduledOrder(recurrentId);
}
Please do not add complex code inside the using, something like
using (var scope = _scopeService.CreateScope())
{
    var recurrencyService = scope.ServiceProvider.GetRequiredService<IRecurrencyService>();
    var otherService = scope.ServiceProvider.GetRequiredService<OtherService>();
    var moreServices = scope.ServiceProvider.GetRequiredService<MoreServices>();
    var something = recurrencyService.SomeCall();
    var pleaseDoNotMakeComplexLogicInsideTheUsing = otherService.OtherMethod(something);
    ...
}
EDIT
My fear with this approach is that it's applying a Service Locator
pattern, and I've often seen that dismissed as an anti-pattern where
DI is concerned
An anti-pattern would be to use this everywhere as the normal way of working, but I am suggesting introducing it in just one place; there are limits and constraints to what DI can do to help with your problems.
For example, property injection (as opposed to constructor injection) is also a code smell, but it is not banned or removed from the framework, because in some cases it is the only solution, or the simplest one, and keeping things simple is more important than keeping all the good practices (even best practices are not black and white; sometimes you will have to make trade-offs between following one principle or another).
My solution should live in one part of your program, not everywhere; that is why I recommend creating just one service and resolving everything else from there. You cannot use constructor injection to break out of the scoped lifecycle, which is exactly why IServiceScopeFactory exists.
And sure, it is not for general use, but it helps with lifecycle problems like yours.
If you are worried about calling GetService<SomeClass> you can create an abstraction to keep your code clean, for example, I created this general service:
public class ScopedExecutor
{
    private readonly IServiceScopeFactory _serviceScopeFactory;
    private readonly ILogger<ScopedExecutor> _logger;

    public ScopedExecutor(
        IServiceScopeFactory serviceScopeFactory,
        ILogger<ScopedExecutor> logger)
    {
        _serviceScopeFactory = serviceScopeFactory;
        _logger = logger;
    }

    public async Task<T> ScopedAction<T>(Func<IServiceProvider, Task<T>> action)
    {
        using (var scope = _serviceScopeFactory.CreateScope())
        {
            return await action(scope.ServiceProvider);
        }
    }

    public async Task ScopedAction(Func<IServiceProvider, Task> action)
    {
        using (var scope = _serviceScopeFactory.CreateScope())
        {
            await action(scope.ServiceProvider);
        }
    }
}
Then I have this extra layer (you could put this in the same class as the previous one):
public class ScopedExecutorService<TService>
{
    private readonly ScopedExecutor _scopedExecutor;

    public ScopedExecutorService(
        ScopedExecutor scopedExecutor)
    {
        _scopedExecutor = scopedExecutor;
    }

    public Task<T> ScopedActionService<T>(Func<TService, Task<T>> action)
    {
        return _scopedExecutor.ScopedAction(serviceProvider =>
            action(
                serviceProvider
                    .GetRequiredService<TService>()
            )
        );
    }
}
Now, wherever you need your services to run in a separate context, you can use it like this:
public class IvrRetrieveBillHistoryListFinancingGrpcImpl : IvrRetrieveBillHistoryListFinancingService.IvrRetrieveBillHistoryListFinancingServiceBase
{
    private readonly GrpcExecutorService<IvrRetrieveBillHistoryListFinancingHttpClient> _grpcExecutorService;

    public IvrRetrieveBillHistoryListFinancingGrpcImpl(GrpcExecutorService<IvrRetrieveBillHistoryListFinancingHttpClient> grpcExecutorService)
    {
        _grpcExecutorService = grpcExecutorService;
    }

    public override async Task<RetrieveBillHistoryListFinancingResponse> RetrieveBillHistoryListFinancing(RetrieveBillHistoryListFinancingRequest retrieveBillHistoryListFinancingRequest, ServerCallContext context)
    {
        return await _grpcExecutorService
            .ScopedLoggingExceptionHttpActionService(async ivrRetrieveBillHistoryListFinancingHttpClient =>
                await ivrRetrieveBillHistoryListFinancingHttpClient
                    .RetrieveBillHistoryListFinancing(retrieveBillHistoryListFinancingRequest)
            );
    }
}
As you can see, GetService is never called in the business code; it happens in just one place, in our toolkit.

static variable persistence in IIS

I have used a static variable which I hoped would persist in IIS. But sometimes it is cleared. Is it possible that IIS will clear the static variable?
public partial class Main : CustomPage
{
    public static bool cachedCurrentYearDataInFile = false;
Static variables live through the application lifecycle. If the application ends (check application pool settings like idle timeout and recycling), a new instance is generated and you lose all static information of the now non-existent one. If you want persistence, you should consider actual persistence, like a file or database.
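A hedged sketch of that suggestion, replacing the static bool with a file-backed flag (the class name and file location are made up for illustration):

```csharp
using System;
using System.IO;

// Hypothetical replacement for the static bool in the question: the flag
// lives on disk, so it survives application-pool recycles.
public static class CachedFlag
{
    private static readonly string FlagPath =
        Path.Combine(Path.GetTempPath(), "cachedCurrentYearData.flag");

    public static bool IsSet => File.Exists(FlagPath);

    // Record when the flag was set; any content (or an empty file) would do.
    public static void Set() => File.WriteAllText(FlagPath, DateTime.UtcNow.ToString("o"));

    public static void Clear() => File.Delete(FlagPath);
}
```

The trade-off is a file-system hit per check; a database column or a distributed cache works the same way when multiple servers are involved.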

Enum stored correctly in DB but not being deserialized into type correctly in C#

I have an enum:
public enum EnumSample
{
    Setup = 1,
    Pending = 7
}
I have an entity (I'm using EF) stored in the database with its EnumSample property set to EnumSample.Pending. The database reflects this value as 7 when I query. However, when I retrieve the same entity from within C# (using EF), I get an incorrect type mapping (I'm getting Setup when I should be getting Pending). All IDs are validated. What could be causing this?
UPDATE
Restarting the app gets me what I expect - the issue is that the instance of dbContext is sticking around between requests. I can do using var db = new AppContext and it works fine.
I have the following configured on my UnityContainer:
container.RegisterType<AppContext>(new ContainerControlledLifetimeManager());
I've tried various lifetime managers; PerHttpRequestLifetimeManager doesn't resolve, despite other guidance I've seen suggesting it.
An example of a service layer method:
public class EventService : BaseService
{
    public readonly AppContext _db;
    private readonly NotificationService _notificationService;

    public EventService(AppContext db, NotificationService notificationService)
    {
        _db = db;
        _notificationService = notificationService;
    }
The behavior that I need is that EventService and NotificationService have the same instance of AppContext, but that between requests, the context is disposed and I get fresh state / new instance.
PerThreadLifetimeManager doesn't guarantee that each request will be served by a different thread. Unity doesn't have a per-request lifetime manager by default; you need to install the NuGet package Unity.Mvc, which provides a built-in lifetime manager for this: PerRequestLifetimeManager.
@SB2055 See this answer for more details.
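For illustration, the registration from the question would then change roughly like this (assuming the Unity.Mvc package, which supplies PerRequestLifetimeManager and the HTTP module it depends on):

```csharp
// Replaces: container.RegisterType<AppContext>(new ContainerControlledLifetimeManager());
container.RegisterType<AppContext>(new PerRequestLifetimeManager());

// Within a single HTTP request, EventService and NotificationService now
// resolve to the same AppContext instance; at the end of the request it is
// disposed, so the next request starts with fresh state and no stale
// tracked entities.
```

Note that PerRequestLifetimeManager only works when the UnityPerRequestHttpModule is registered, which the Unity.Mvc bootstrapper does for you.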

Making access to db static = bad?

In a couple of .NET C# web service projects that I have done, I made access to the db static with the help of the singleton pattern. Then the other day my friend told me that this is a bad thing to do, because if a lot of requests are made for the same db entity, the db would be locked because of the static instance. Is my friend's assumption right? I thought that every new request would make a new instance of the class?
The implementation of the singleton class looks like this:
public class WebService
{
    private readonly IFactory _factory;

    public WebService(IFactory factory)
    {
        _factory = factory;
    }

    public IDataRepository Data
    {
        get
        {
            return _factory.GetDatabase();
        }
    }
}

public static class WebServiceImpl
{
    private static readonly WebService _webService = new WebService(new WebserviceFactoryImpl());

    public static WebService webService { get { return _webService; } }
}
_factory.GetDatabase() returns a new instance of the Database class.
Looking at WebServiceImpl, all calls will be sharing a single WebService instance. Now, this isn't necessarily a problem, depending on how that is implemented; for example, if _factory.GetDatabase(); ends up getting called per-request, then it might be that you are getting away with it. Depending further on what GetDatabase() does - i.e. does it get a new instance per call? or does it give you the same instance every time? Simply: we don't have enough information there to answer fully. But:
sharing a single database connection between requests is dangerous; either you need to lock / synchronize, or you risk lots of errors (database connections are not usually written to be thread-safe)
sharing an ORM between requests is even worse: in addition to everything above, you also get issues with data accumulating in the identity / object cache; ORM instances (data-context, etc) are intended to be short-lived and then discarded (and sometimes: disposed)
Having static access to the database is not necessarily a problem; it all comes down to how that is implemented - for example, a static-based API could still create (and dispose) a connection on every call.
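As a sketch of that last point, here is a static API that still creates and disposes a connection on every call (the class, query, and connection string are hypothetical, assuming ADO.NET's SqlConnection):

```csharp
using System.Data.SqlClient;

// A static entry point is safe here because nothing stateful is shared:
// each call opens its own connection and disposes it before returning,
// so concurrent requests never contend for the same connection object.
public static class OrderData
{
    private static readonly string ConnectionString = "...";  // loaded from configuration

    public static int CountOrders()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```

The static part is only the entry point; the expensive, non-thread-safe resources remain per-call, which is what distinguishes this from the shared-singleton problems above.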

Singleton's running on Asp.Net web applications

I have a question about Singletons running within IIS (6,7,7.5) and an ASP.NET 4.0 Web Application (MVC3 app to be specific).
I have a singleton object in my project that is accessed and used in the global.ascx, on the application_start, as well as a few other places within the application.
My concern is, this singleton needs to be accessible on a per-instance basis. However, since IIS is essentially the hosting process, is the singleton going to be the same object across all instances of the application?
If I use the [ThreadStatic] attribute, does it separate at the Application Pool level?
Finally, is there a way I can ensure a singleton is only a singleton per instance of my application? I.e. if I run my application on 1 website but inside 5 virtual directories, there are 5 instances of the singleton; or if I run my website on 5 different websites within the same application pool.
Hopefully that's clear enough; in case you wanted to see the singleton object, I pasted the general idea of it below.
public sealed class Singleton : IDisposable
{
    [ThreadStatic]
    private static volatile Singleton _instance;

    [ThreadStatic]
    private static readonly object _syncRoot = new object();

    public bool IsReleased { get; private set; }

    public Singleton()
    {
        IsReleased = false;
    }

    public static Singleton Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_syncRoot)
                {
                    if (_instance == null)
                        _instance = new Singleton();
                }
            }
            return _instance;
        }
    }

    public void Dispose()
    {
        IsReleased = true;
        Singleton._instance = null;
    }
}
A static value should be static across a particular instance of your web application, so each instance of your application will have its own instance that will be shared across all threads on that instance.
For further reading, see http://msdn.microsoft.com/en-us/library/2bh4z9hs(v=vs.71).aspx
Oh, and the ThreadStatic attribute will cause the static value to only be static across a particular thread, so every request would have its own version of that field. It doesn't sound like this is what you're going for.
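A minimal, self-contained demo (the class is hypothetical, not from the question) of what [ThreadStatic] actually does: each thread gets its own copy of the field, so it cannot act as a single app-wide slot:

```csharp
using System;
using System.Threading;

// Each thread sees its own copy of a [ThreadStatic] field.
public static class ThreadStaticDemo
{
    [ThreadStatic]
    private static int _counter;

    public static int IncrementAndGet() => ++_counter;

    public static (int main, int other) Run()
    {
        IncrementAndGet();
        IncrementAndGet();               // this thread's copy is now 2

        int otherValue = 0;
        var thread = new Thread(() => otherValue = IncrementAndGet());
        thread.Start();
        thread.Join();                   // the new thread's copy started at 0, so it sees 1

        return (_counter, otherValue);   // (2, 1)
    }
}
```

This is also why the [ThreadStatic] fields in the Singleton above are suspect: each request thread could end up lazily constructing its own "singleton".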
With IIS, you have no control over the thread that your request runs on. If you really need this kind of application instance level locking, you may want to look into the heavier locking objects (Mutex, Monitor, etc) and create one for each application.
If you absolutely want to ensure that they are separate, you could run each one in its own Application Pool. That way you'd get a worker process for each virtual directory.
