This is a small question, just to make sure I'm understanding Unity correctly.
I'm using Unity in an ASP.NET MVC application, and I have registered a type as follows:
container.RegisterType<IPizzaService, PizzaService>();
And I'm using it in a controller like:
public class PizzaController : Controller
{
private IPizzaService _pizzaService;
public PizzaController(IPizzaService pizzaService)
{
_pizzaService = pizzaService;
}
[HttpGet]
public ActionResult Index()
{
var pizzasModel = _pizzaService.FindAllPizzas();
...
}
}
Every time a page request comes in, a new instance of IPizzaService is injected and used. So this all works fine.
My question: do I have to do anything special to dispose of this instance? I assume that, once the request has ended, the controller is disposed and the PizzaService instance eventually gets garbage collected.
If I need deterministic disposal of an instance, because it uses an Entity Framework context or an unmanaged resource for example, I have to override Dispose on the controller and there make sure I call Dispose on the instances myself.
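For example, what I have in mind is something like this (a sketch; it assumes the injected service is the only disposable dependency):

```csharp
public class PizzaController : Controller
{
    private readonly IPizzaService _pizzaService;

    public PizzaController(IPizzaService pizzaService)
    {
        _pizzaService = pizzaService;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Dispose the injected service ourselves, since the container
            // won't do it for a plain transient registration.
            (_pizzaService as IDisposable)?.Dispose();
        }
        base.Dispose(disposing);
    }
}
```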
Right? If not, please explain why :)
Thanks!
IMO, whatever creates a disposable object is responsible for disposing it. When the container injects a disposable object registered via RegisterType<I, T>(), I want a guarantee that the object is ready to be used. Note that registering an instance via RegisterInstance<I>(obj) is different: there the container takes ownership and does dispose of your object automatically (when the container itself is disposed).
This can be difficult with an IoC container, and is impossible with Unity out of the box. However, there is some really nifty code out there that I use all the time:
http://thorarin.net/blog/post/2013/02/12/Unity-IoC-lifetime-management-IDisposable-part1.aspx
The blog has code for a DisposingTransientLifetimeManager and DisposingSharedLifetimeManager. Using the extensions, the container calls Dispose() on your disposable objects.
One caveat is that you'll need to reference the proper (older) version of Microsoft.Practices.Unity.Configuration.dll & Microsoft.Practices.Unity.dll.
ContainerControlledTransientManager was added to Unity on Jan 11, 2018:
Add container 'owned' transient lifetime manager ContainerControlledTransientManager #37
So ContainerControlledTransientManager is what you need. This lifetime manager is the same as TransientLifetimeManager, except that if the created object implements IDisposable, the container keeps a strong reference to it and disposes it when the container is disposed.
If the created object is not disposable, the container does not maintain any reference to it, so once the object is released, the GC will collect it immediately.
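Assuming a Unity Container version recent enough to include it, usage might look like this sketch (reusing the IPizzaService names from the question for illustration):

```csharp
var container = new UnityContainer();

// Transient resolution, but the container tracks disposable instances
// and disposes them when the container itself is disposed.
container.RegisterType<IPizzaService, PizzaService>(
    new ContainerControlledTransientManager());

var a = container.Resolve<IPizzaService>();
var b = container.Resolve<IPizzaService>(); // a distinct instance each time

container.Dispose(); // disposes both instances if they implement IDisposable
```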
Related
I'm using EF Core together with Postgres (probably doesn't matter) inside a .NET Core 3.1 console application.
The program is using a shared project (among other components of the solution) with all business logic implemented using a simple CQRS type pattern with Mediator.
At one place I'm retrieving large objects (10-100 MB in size) from the database. This is not very frequent, so by itself it is not an issue; it takes a fraction of a second on modern hardware.
The problem is that for some reason those objects get cached in the data context between command executions, as if the DbContext doesn't get disposed.
I don't understand why, because I registered the DbContext inside the DI container (the standard built-in one) as transient. As I understand it, that should create a new instance every time one is requested, and the garbage collector should take care of the rest.
The registration code is something like this:
static IServiceProvider ConfigureServiceProvider()
{
IServiceCollection services = new ServiceCollection();
DbContextOptions<MyAppDbContext> dbContextOptions = new DbContextOptionsBuilder<MyAppDbContext>()
.UseNpgsql(Configuration.GetConnectionString("MyApp.Db"))
.Options;
services.AddSingleton(dbContextOptions);
services.AddDbContext<MyAppDbContext>(
    options => options.UseNpgsql(
        Configuration.GetConnectionString("MyApp.Db"),
        npgsqlOptions => npgsqlOptions.EnableRetryOnFailure()),
    ServiceLifetime.Transient);
services.AddTransient<IMyAppDbContext>(s => s.GetService<MyAppDbContext>());
// (...)
}
Then the command is using it in this way:
public class RecalculateSomething : IRequest
{
public Guid SomeId { get; set; }
public class Handler : IRequestHandler<RecalculateSomething>
{
private readonly IMyAppDbContext context;
private readonly IMediator mediator;
public Handler(IMyAppDbContext context, IMediator mediator)
{
this.context = context ?? throw new ArgumentNullException(nameof(context));
this.mediator = mediator ?? throw new ArgumentNullException(nameof(mediator));
}
public async Task<Unit> Handle(RecalculateSomething request, CancellationToken ct)
{
// (...)
}
}
}
Does anyone know what the problem is? Is it something I'm doing wrong configuring the DI container? Or something else, like a reference I'm holding onto somewhere (couldn't find it). What would be the best way to approach debugging an issue like this?
BTW I have "fixed it" by just forcing it to create a new DbContext each time from DbContextOptions, but that's more of a workaround. Would like to know what the core issue is.
You almost never want to register a DbContext. It is better to register a factory, which you use to create a new DbContext on every request. While that seems inefficient, that pattern has been optimized over the years, and allows you to deterministically reclaim resources on each request.
From the docs:
The lifetime of a DbContext begins when the instance is created and ends when the instance is disposed. A DbContext instance is designed to be used for a single unit-of-work. This means that the lifetime of a DbContext instance is usually very short.
The documentation goes on to explain that you can inject it, which you are doing, but the problem is that this lacks deterministic reclamation of resources. It's better to keep this under your control using a factory, which is explained later in the same document:
Some application types (e.g. ASP.NET Core Blazor) use dependency injection but do not create a service scope that aligns with the desired DbContext lifetime. Even where such an alignment does exist, the application may need to perform multiple units-of-work within this scope. For example, multiple units-of-work within a single HTTP request.
In these cases, AddDbContextFactory can be used to register a factory for creation of DbContext instances.
I'm willing to bet that it's the combination of this non-determinism, the large size of the queries, and the frequency of the queries that is causing some memory pressure in your application.
You can see if adding the factory (AddDbContextFactory) and using it to create a context helps; again, refer to the section referenced above in the document for the actual code.
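A minimal sketch of the factory approach, assuming the same MyAppDbContext and connection string as above (note that AddDbContextFactory requires EF Core 5.0 or later):

```csharp
// Registration:
services.AddDbContextFactory<MyAppDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("MyApp.Db")));

// Consumption, e.g. in a handler:
public class SomeHandler
{
    private readonly IDbContextFactory<MyAppDbContext> contextFactory;

    public SomeHandler(IDbContextFactory<MyAppDbContext> contextFactory)
    {
        this.contextFactory = contextFactory;
    }

    public async Task DoWork(CancellationToken ct)
    {
        // A fresh context per unit of work, deterministically disposed
        // as soon as this scope exits.
        using var context = contextFactory.CreateDbContext();
        // ... query and save ...
    }
}
```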
If you're wondering why a DbContext that is transient is non-deterministic, you'd probably need to dive into the .NET Core codebase to see when transient resources are reclaimed. That's probably not right after you exit your handler, but somewhere after rolling back up the (potentially deep) call stack.
(If that's the case think of it this way -- many calls trying to complete as many are unwinding their stack; this transient situation is pressure that can build over some period).
Your DI is not wrong per se, but it does not seem optimal for what you're trying to accomplish and the context in which your code is operating.
I'm currently reading the book Dependency Injection in .NET by Mark Seeman. In this book he recommends the Register, Resolve, Release pattern and also recommends that each of these operations should appear only once in your application's code.
My situation is the following: I'm creating an application that communicates with a PLC (a kind of industrial embedded computer) using a proprietary communication protocol for which the PLC manufacturer provides an library. The library's documentation recommends creating a connection to the PLC and maintaining it open; then using a timer or a while loop, a request should be periodically sent to read the contents of the PLC's memory, which changes over time.
The values read from the PLC's memory should be used to operate on a database, for which I intend to use Entity Framework. As I understand it, the best option is to create a new DbContext on every execution of the loop in order to avoid a stale cache or concurrency problems (the loop could potentially be executing every few milliseconds for a long time, while the connection is kept open all the time).
My first option was calling Resolve on application construction to create a long-lived object that would be injected with the PLC communication object and would handle loop execution and keep the connection alive. Then, at the beginning of every loop execution I intended to call Resolve again to create a short-lived object that would be injected with a new dbContext and which would perform the operations on the database. However, after reading the advice on that book I'm doubting whether I'm on the right track.
My first idea was to pass a delegate to the long-lived object upon its construction that would allow it to build new instances of the short-lived object (I believe it is the factory pattern), thus removing the dependency on the DI container from my long-lived object. However, this construct still violates the aforementioned pattern.
Which is the right way of handling Dependency Injection in this situation?
My first attempt without DI:
class NaiveAttempt
{
private PlcCommunicationObject plcCommunicationObject;
private Timer repeatedExecutionTimer;
public NaiveAttempt()
{
plcCommunicationObject = new PlcCommunicationObject("192.168.0.10");
plcCommunicationObject.Connect();
repeatedExecutionTimer = new Timer(100); //Read values from PLC every 100ms
repeatedExecutionTimer.Elapsed += (_, __) =>
{
var memoryContents = plcCommunicationObject.ReadMemoryContents();
using (var ctx = new DbContext())
{
// Operate upon database
ctx.SaveChanges();
}
}
}
}
Second attempt, using Poor Man's DI:
class OneLoopObject
{
private readonly PlcCommunicationObject plcCommunicationObject;
private readonly DbContext dbContext;
public OneLoopObject(PlcCommunicationObject plcCommunicationObject, DbContext dbContext)
{
this.plcCommunicationObject = plcCommunicationObject;
this.dbContext = dbContext;
}
public void Execute()
{
var memoryContents = plcCommunicationObject.ReadMemoryContents();
// Operate upon database
}
}
class LongLivedObject
{
private readonly PlcCommunicationObject plcCommunicationObject;
private readonly Timer repeatedExecutionTimer;
private readonly Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory;
public LongLivedObject(PlcCommunicationObject plcCommunicationObject, Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory)
{
this.plcCommunicationObject = plcCommunicationObject;
this.oneLoopObjectFactory = oneLoopObjectFactory;
this.repeatedExecutionTimer = new Timer(100);
this.repeatedExecutionTimer.Elapsed += (_, __) =>
{
var loopObject = this.oneLoopObjectFactory(this.plcCommunicationObject);
loopObject.Execute();
};
}
}
static class Program
{
static void Main()
{
Func<PlcCommunicationObject, OneLoopObject> oneLoopObjectFactory = plc => new OneLoopObject(plc, new DbContext());
var myObject = new LongLivedObject(new PlcCommunicationObject("192.168.1.1"), oneLoopObjectFactory);
Console.ReadLine();
}
}
The first edition states (chapter 3, page 82):
In its pure form, the Register Resolve Release pattern states that you should only make a single method call in each phase [...] an application should only contain a single call to the Resolve method.
This description stems from the idea that your application only contains either one root object (typically when writing a simple console application), or one single logical group of root types, e.g. MVC controllers. With MVC controllers, for instance, you would have a custom Controller Factory, which is provided by the MVC framework with a controller type to build. That factory will, in that case, only have a single call to Resolve while supplying the type.
There are cases, however, where your application has multiple groups of root types. For instance, a web application could have a mix of API Controllers, MVC Controllers and View Components. For each logical group you would likely have a single call to Resolve, and thus multiple calls to Resolve (typically because each root type gets its own factory) in your application.
There are other valid reasons for calling back into the container. For instance, you might want to defer building part of the object graph, to combat the issue of Captive Dependencies. This seems to be your case. Another reason for having an extra Resolve is when you use the Mediator pattern to dispatch messages to a certain implementation (or implementations) that can handle that message. In that case your Mediator implementation would typically wrap the container and call Resolve. The Mediator's abstraction would likely be defined in your Domain library, while the Mediator's implementation, with its knowledge of the container, should be defined inside the Composition Root.
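For illustration, a Mediator implementation living inside the Composition Root might wrap the container like this sketch (IMediator and IHandler<T> are hypothetical abstractions defined in the domain library; Unity's IUnityContainer is used here, but any container works the same way):

```csharp
public class ContainerMediator : IMediator
{
    private readonly IUnityContainer container;

    public ContainerMediator(IUnityContainer container)
    {
        this.container = container;
    }

    public void Send<TMessage>(TMessage message)
    {
        // The one extra Resolve call, encapsulated here in the
        // Composition Root rather than scattered through application code.
        var handler = this.container.Resolve<IHandler<TMessage>>();
        handler.Handle(message);
    }
}
```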
The advice of having a single call to Resolve should, therefore, not be taken literally. The actual goal here is to build a single object graph as much as possible in one call, compared to letting classes themselves call back into the container to resolve their dependencies (i.e. the Service Locator anti-pattern).
The other important point that (the second edition of) the book makes is
Querying for Dependencies, even if through a DI Container, becomes a Service Locator if used incorrectly. When application code (as opposed to infrastructure code) actively queries a service in order to be provided with required Dependencies, then it has become a Service Locator.
A DI Container encapsulated in a Composition Root isn't a Service Locator—it's an infrastructure component.
(note: this quote is from the second edition; Although the first edition contains this information as well, it might be formulated differently).
So the goal of the RRR pattern is to promote encapsulation of the DI Container within the Composition Root, which is why it insists on having a single call to Resolve.
Do note that while writing the second edition, Mark and I wanted to rewrite the discussion of the RRR pattern. Main reason for this was that we found the text to be confusing (as your question indicates). However, we eventually ran out of time so we decided to simply remove that elaborate discussion. We felt that the most important points were already made.
Combining factories with DI is a common solution. There is absolutely nothing wrong with creating and disposing objects dynamically in your program (it's much more difficult and limiting to try to account for every bit of memory you'll need up front).
I found a post by Mark Seeman about the Register, Resolve, Release Pattern (RRR) here: http://blog.ploeh.dk/2010/09/29/TheRegisterResolveReleasepattern/
He states that...
The names originate with Castle Windsor terminology, where we:
Register components with the container
Resolve root components
Release components from the container
So the RRR pattern is limited to the DI Container. You do indeed Register and Release components with the container one time in your application. This says nothing about objects not injected through DI, i.e. those objects created dynamically during the normal execution of your program.
I have seen various articles use distinct terminology for the two different kinds of things you create in your program in relation to DI. There are Service Objects, i.e. those global objects injected via DI into your application. Then there are Data or Value Objects. These are created by your program dynamically as needed and are generally limited to some local scope. Both are perfectly valid.
It sounds like you want to be able to both resolve objects from the container and then release them, all without directly referencing the container.
You can do that by having both a Create and a Release method in your factory interface.
public interface IFooFactory
{
Foo Create();
void Release(Foo created);
}
This allows you to hide references to the container within the implementation of IFooFactory.
You can create your own factory implementation, but for convenience some containers, like Windsor, will create the factory implementation for you.
var container = new WindsorContainer();
container.AddFacility<TypedFactoryFacility>();
container.Register(Component.For<Foo>());
container.Register(
Component.For<IFooFactory>()
.AsFactory()
);
You can inject the factory, call Create to obtain an instance of whatever the factory creates, and when you're done with it, pass that instance to the Release method.
Windsor does this by convention. The method names don't matter. If you call a method of the interface that returns something, it attempts to resolve it. If a method returns void and takes an argument then it tries to release the argument from the container.
Behind the scenes it's roughly the same as if you wrote this:
public class WindsorFooFactory : IFooFactory
{
private readonly IWindsorContainer _container;
public WindsorFooFactory(IWindsorContainer container)
{
_container = container;
}
public Foo Create()
{
return _container.Resolve<Foo>();
}
public void Release(Foo created)
{
_container.Release(created);
}
}
The factory implementation "knows" about the container, but that's okay. Its job is to create objects. The factory interface doesn't mention the container, so classes that depend on the interface aren't coupled to the container. You could create an entirely different implementation of the factory that doesn't use a container. If the object didn't need to be released you could have a Release method that does nothing.
So, in a nutshell, the factory interface is what enables you to follow the resolve/release part of the pattern without directly depending on the container.
Here's another example that shows a little bit more of what you can do with these abstract factories.
Autofac uses Func<> as the factory pattern so you could always do the same:
public class Foo
{
private readonly Func<Bar> _barFactory;
public Foo(Func<Bar> barFactory)
{
_barFactory = barFactory;
}
}
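Autofac provides the Func<B> relationship type automatically whenever B is registered, so no explicit factory registration is needed; a sketch:

```csharp
var builder = new ContainerBuilder();
builder.RegisterType<Bar>();
builder.RegisterType<Foo>();
var container = builder.Build();

// Autofac supplies the Func<Bar> automatically; each invocation of
// _barFactory() inside Foo resolves a fresh Bar from the container.
var foo = container.Resolve<Foo>();
```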
Adding factory interfaces for factories is not something I think anyone should need to do most of the time; it's extra work for little to no reward.
Then you simply need to keep track of which objects are externally owned and which are DI-owned for your release (Dispose in C#).
We have created a singleton object (SsoSettingsProvider) into which we inject an object with the PerWebRequest lifestyle (IReservationService; in our example it is a WCF client). In the constructor we use this object to get some data, which we place in a private field.
public class SsoSettingsProvider : ISsoSettingsProvider
{
readonly LogonSettings _logonSettings;
public SsoSettingsProvider(IReservationService reservationService)
{
_logonSettings = reservationService.GetSSOSettings();
}
}
If we look at possible lifestyle mismatches in Castle Windsor it says:
"Component 'SsoSettingsProvider / ISsoSettingsProvider' with lifestyle
Singleton depends on 'late bound IReservationService' with lifestyle
PerWebRequest This kind of dependency is usually not desired and may
lead to various kinds of bugs."
This warning says there is only a possibility of a problem, and in this case I think it is not a problem, because the injected object is not referenced in a field, so it can be garbage collected. Am I right?
in this case i think it is not a problem because injected object is not referenced in a field so it can be garbage collected. am i right?
Castle Windsor is warning about Captive Dependencies. The main problem is not so much that instances aren't garbage collected, but that a class will reuse an instance that is not intended for reuse.
A simple example is injecting a DbContext into a class that is configured as a singleton. This will result in the DbContext being kept alive until its singleton consumer goes out of scope (which is typically when the application ends). A DbContext, however, should not be reused across multiple requests. For one, it simply isn't thread-safe. On top of that, it gets stale very quickly, which causes it to return cached data instead of re-querying the database.
For this reason we typically register DbContext as Scoped. This does mean, however, that all its consumers should live at most as long as the DbContext, to prevent it from breaking the application. This is what Castle is warning about.
In your case, however, you don't store the IReservationService in a private field of SsoSettingsProvider. This is still a problem, though, because it is reasonable to expect that the objects IReservationService returns do not outlive the IReservationService itself (otherwise IReservationService would be registered as a singleton). Since, from the perspective of SsoSettingsProvider, there is no way to know whether it is safe to store LogonSettings, it is much better not to store it at all.
On top of that, as expressed here, injection constructors should not use their dependencies at all. This leads to slow and unreliable object composition.
So even though you might have analyzed your design and know for sure that this works in your particular case, I would suggest doing one of the following things:
Store IReservationService in a private field of SsoSettingsProvider, call GetSSOSettings only when one of SsoSettingsProvider's members is called, and don't store LogonSettings at all. This forces you to make either SsoSettingsProvider scoped or IReservationService a singleton. Whether or not IReservationService can be a singleton is something only you can find out.
In case SsoSettingsProvider is only interested in LogonSettings, and LogonSettings is a constant value that won't change after the application has started, you should inject LogonSettings directly into SsoSettingsProvider's constructor. This simplifies SsoSettingsProvider and pushes loading the LogonSettings into the Composition Root.
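A sketch of the first option (the member name GetLogonSettings is illustrative, not from the original code):

```csharp
public class SsoSettingsProvider : ISsoSettingsProvider
{
    private readonly IReservationService reservationService;

    public SsoSettingsProvider(IReservationService reservationService)
    {
        // Only store the dependency; don't use it during construction.
        this.reservationService = reservationService;
    }

    public LogonSettings GetLogonSettings()
    {
        // Query lazily, on every call; nothing is cached in a field.
        return this.reservationService.GetSSOSettings();
    }
}
```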
Given an assembly where I'd have a SomeContext class derived from DbContext and implementing interface ISomeContext, and a SomeService class implementing ISomeService interface, I'd bind the rest of the app's dependencies like this:
kernel.Bind(t => t.FromThisAssembly()
.SelectAllClasses()
.Where(c => !c.Name.EndsWith("Context") && !c.Name.EndsWith("Service"))
.BindAllInterfaces());
Then, given that SomeService has a constructor-injected ISomeContext dependency, with Ninject.Extensions.NamedScope I can define a named scope like this:
kernel.Bind<ISomeService>().To<SomeService>().DefinesNamedScope("ServiceScope");
And then when I say SomeContext lives in the named scope I've just created, like this:
kernel.Bind<ISomeContext>().To<SomeContext>().InNamedScope("ServiceScope");
My understanding is that by doing that, whenever an instance of SomeService gets injected, the SomeContext instance that it received in its constructor will only live for as long as the SomeService instance exists - that is, when SomeService gets garbage collected, SomeContext gets disposed and dies gracefully.
...I have a few questions:
Is this the proper way of scoping a class that implements IDisposable?
If not, then what would be a proper way of scoping a class that is disposable?
If SomeService is injected in another class (turns out it actually is!), doesn't that other class somewhat creates a scope the context lives and dies in? If so, then what's the use of declaring a "named scope" if all it does is give a name to what gets disposed at garbage collection time?
Shortly put, how exactly is the above code ultimately different from not specifying a scope at all?
Note: InRequestScope is irrelevant here, I'm not talking about a Web app. The application is in fact a class library that gets composed when a client VB6 library calls into it; the C# code lives as a global instance in the VB6 library, and the entire C# app gets composed at once. If the context/disposables live for as long as the C# app's global VB6 instance exists, there's something I'm doing wrong - I'd like my connections to be as short-lived as possible, so I believe I can't be injecting contexts just like this, I should instead be injecting factories that spit out a context that only lives for as long as it is needed, and that would be the scope of whoever gets that factory injected... I think I've just answered part of my question here... have I?
Without a scope, Ninject will not call IDisposable.Dispose() on the context.
.InParentScope() on the context binding makes sure the context is disposed when the object it gets injected into (SomeService) is garbage collected.
--> When SomeService implements INotifyWhenDisposed, the context will be disposed immediately after SomeService is disposed.
.InNamedScope() is good for injecting the same instance of the context into multiple objects of an object tree.
Also see http://www.planetgeek.ch/2010/12/08/how-to-use-the-additional-ninject-scopes-of-namedscope/
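For reference, the .InParentScope() binding described above would look something like this:

```csharp
// Dispose SomeContext when the SomeService it was injected into
// is disposed (or garbage collected):
kernel.Bind<ISomeContext>().To<SomeContext>().InParentScope();
```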
I know that a similar question has been asked several times (for example: here, here, here and here), but that was for previous versions of Unity, where the answer depended on the LifetimeManager class used.
Documentation says:
Unity uses specific types that inherit from the LifetimeManager base class (collectively referred to as lifetime managers) to control how it stores references to object instances and how the container disposes of these instances.
Ok, sounds good, so I decided to check the implementation of the built-in lifetime managers. My conclusions:
TransientLifetimeManager - no handling of disposing. The container only resolves the instance and does not track it. The calling code is responsible for disposing the instance.
ContainerControlledLifetimeManager - disposes the instance when the lifetime manager is disposed (= when the container is disposed). Provides a singleton instance shared among all containers in the hierarchy.
HierarchicalLifetimeManager - derives its behavior from ContainerControlledLifetimeManager. It provides a "singleton" instance per container in the hierarchy (subcontainers).
ExternallyControlledLifetimeManager - no handling of disposing. Correct behavior, because the container is not the owner of the instance.
PerResolveLifetimeManager - no handling of disposing. It is generally the same as TransientLifetimeManager, but it allows reusing an instance for dependency injection when resolving a whole object graph.
PerThreadLifetimeManager - no handling of disposing, as also described on MSDN. Who is responsible for disposing?
The implementation of the built-in PerThreadLifetimeManager is:
public class PerThreadLifetimeManager : LifetimeManager
{
private readonly Guid key = Guid.NewGuid();
[ThreadStatic]
private static Dictionary<Guid, object> values;
private static void EnsureValues()
{
if (values == null)
{
values = new Dictionary<Guid, object>();
}
}
public override object GetValue()
{
object result;
EnsureValues();
values.TryGetValue(this.key, out result);
return result;
}
public override void RemoveValue()
{ }
public override void SetValue(object newValue)
{
EnsureValues();
values[this.key] = newValue;
}
}
So disposing the container does not dispose disposable instances created with this lifetime manager. Thread completion will also not dispose those instances. So who is responsible for releasing them?
I tried to manually dispose a resolved instance in code and found another problem: I can't tear down the instance. RemoveValue of the lifetime manager is empty - once the instance is created, it is not possible to remove it from the thread-static dictionary (I'm also suspicious that the TearDown method does nothing). So if you call Resolve after disposing the instance, you will get the disposed instance back. I think this can be quite a big problem when using this lifetime manager with threads from a thread pool.
How to correctly use this lifetime manager?
Moreover, this implementation is often reused in custom lifetime managers like PerCallContext, PerHttpRequest, PerAspNetSession, PerWcfCall, etc. Only the thread-static dictionary is replaced with some other construct.
Also, do I understand correctly that the handling of disposable objects depends on the lifetime manager? That would make application code dependent on the lifetime manager used.
I read that in other IoC containers, dealing with temporary disposable objects is handled by subcontainers, but I didn't find an example for Unity - it could probably be handled with a locally scoped subcontainer and HierarchicalLifetimeManager, but I'm not sure how to do it.
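The subcontainer approach hinted at above might look like this sketch, using HierarchicalLifetimeManager (IMyService/MyService are placeholder names):

```csharp
container.RegisterType<IMyService, MyService>(
    new HierarchicalLifetimeManager());

using (var child = container.CreateChildContainer())
{
    // The child container gets its own "singleton" instance.
    var service = child.Resolve<IMyService>();
    // ... use service ...
} // disposing the child container disposes the instances it created
```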
There are only a few circumstances where Unity will dispose an instance. It is really unsupported. My solution was a custom extension to achieve this - http://www.neovolve.com/2010/06/18/unity-extension-for-disposing-build-trees-on-teardown/
Looking at the Unity 2.0 source code, it smells like the LifetimeManagers are used to keep objects in scope in different ways so the garbage collector doesn't get rid of them. For example, with the PerThreadLifetimeManager, it will use the ThreadStatic to hold a reference on each object with that particular thread's lifetime. However, it won't call Dispose until the container is Disposed.
There is a LifetimeContainer object that is used to hold onto all the instances that are created, then is Disposed when the UnityContainer is Disposed (which, in turn, Disposes all the IDisposables in there in reverse chronological order).
EDIT: upon closer inspection, the LifetimeContainer only contains LifetimeManagers (hence the name "Lifetime"Container). So when it is disposed, it only disposes the lifetime managers (and we face the problem already discussed).
I came across this issue recently myself as I was instrumenting Unity into my application. The solutions I found here on Stack Overflow and elsewhere online didn't seem to address the issue in a satisfactory way, in my opinion.
When not using Unity, IDisposable instances have a well-understood usage pattern:
Within a scope smaller than a function, put them in a using block to get disposal "for free".
When created for an instance member of a class, implement IDisposable in the class and put clean-up in Dispose().
When passed into a class's constructor, do nothing as the IDisposable instance is owned somewhere else.
Unity confuses things because when dependency injection is done properly, case #2 above goes away. All dependencies should be injected, which means essentially no classes will have ownership of the IDisposable instances being created. However, neither does it provide a way to "get at" the IDisposables that were created during a Resolve() call, so it seems that using blocks can't be used. What option is left?
My conclusion is that the Resolve() interface is essentially wrong. Returning only the requested type and leaking objects that need special handling like IDisposable can't be correct.
In response, I wrote the IDisposableTrackingExtension extension for Unity, which tracks IDisposable instances created during a type resolution, and returns a disposable wrapper object containing an instance of the requested type and all of the IDisposable dependencies from the object graph.
With this extension, type resolution looks like this (shown here using a factory, as your business classes should never take IUnityContainer as a dependency):
public class SomeTypeFactory
{
// ... take IUnityContainer as a dependency and save it
public IDependencyDisposer<SomeType> Create()
{
return this.unity.ResolveForDisposal<SomeType>();
}
}
public class BusinessClass
{
// ... take SomeTypeFactory as a dependency and save it
public void AfunctionThatCreatesSomeTypeDynamically()
{
using ( var wrapper = this.someTypeFactory.Create() )
{
SomeType subject = wrapper.Subject;
// ... do stuff
}
}
}
This reconciles IDisposable usage patterns #1 and #3 from above. Normal classes use dependency injection; they don't own injected IDisposables, so they don't dispose of them. Classes that perform type resolution (through factories) because they need dynamically created objects, those classes are the owners, and this extension provides the facility for managing disposal scopes.
Would it be a viable solution to use the HttpContext.Current.ApplicationInstance.EndRequest event to hook into the end of the request and then dispose of the object stored in this lifetime manager? Like so:
public HttpContextLifetimeManager()
{
HttpContext.Current.ApplicationInstance.EndRequest += (sender, e) => {
Dispose();
};
}
public override void RemoveValue()
{
var value = GetValue();
IDisposable disposableValue = value as IDisposable;
if (disposableValue != null) {
disposableValue.Dispose();
}
HttpContext.Current.Items.Remove(ItemName);
}
public void Dispose()
{
RemoveValue();
}
This way you don't have to use a child container like in the other solution, and the code used to dispose the objects is still in the lifetime manager, as it should be.