I'm trying to use Ninject (version 3.0.1) in a WinForms application. I have several (currently) self-bound service classes, which I construct using Ninject. Some service classes need other service classes (sub-services). Most of these service classes need a repository to interact with the database, for which I have an abstract IRepository interface. I need the same repository instance across the whole service hierarchy of a service class, so I'm using the InCallScope() scope when binding IRepository. Currently I'm using XPO as the ORM tool, so I have an XpoRepository implementation, which I'm binding to. See my other question about this scenario.
My binding looks like this:
Bind<IRepository>().To<XpoRepository>().InCallScope();
I don't have explicit ToSelf() bindings for the service classes, so I assume that when I get them from Ninject, they have the transient scope, which I interpret as meaning I have to dispose them manually.
Assume I have service classes Services1 and Services2, both taking a constructor parameter of type IRepository. Now assume Services1 would like to use some methods of Services2, so I add another constructor parameter of type Services2 to Services1. Without Ninject, I would do:
var repo = new MyRepository(); // implementing IRepository
var service1 = new Services1(repo, new Services2(repo));
I'm using one of the services in a background thread (using TPL), in a loop like this:
while (true)
{
    if (CancellationPending())
        break;

    using (var service = _kernel.Get<Services1>())
    {
        // do some stuff using the service class
    }

    Thread.Sleep(20 * 1000);
}
I had the same structure before introducing Ninject, so I have (I think) properly implemented disposal of every object, including the repositories, in the correct places. However, I've noticed that since I started using Ninject for this, I have a big memory leak in my application, and it crashes every 2-3 hours with an OutOfMemoryException. I put a breakpoint inside the loop and noticed that the Ninject cache has thousands of entries full of disposed XpoRepository objects. I guess they were disposed by me, but I'm not sure who called their Dispose method.
Why is Ninject holding on to these disposed objects? I would expect that when I dispose the main service at the end of the using block (which is the scope of the IRepository objects due to InCallScope()), every object in its scope would be disposed and released by Ninject.
EDIT: Before any comment or answer about why this pattern is not good: I know it could be better. I know I could extract service interfaces to actually make use of DI and improve testability, and I also know I should probably use a Func<IRepository> as a constructor parameter and inject into it, so that every service would be responsible for disposing its own repository. I simply have no time for such refactorings at the moment.
Ninject will release the repository only if all of the following are true:

- No one is holding a reference to service1.
- service1 itself has been GC'd. (Since you have a thread sleep of 20 seconds, there is a high chance it has been promoted to Gen 2, and Gen 2 objects are collected very rarely.)
- Cache pruning was executed after service1 was GC'd. The cache pruning interval defaults to 30 seconds; you may want to try a shorter interval.
- Alternatively to the previous point, you can force immediate release by implementing Ninject.Infrastructure.Disposal.INotifyWhenDisposed in service1, as sketched below.
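Here is a minimal sketch of that last option (assuming Ninject 3.x and the Services1/Services2/IRepository types from the question). Raising the Disposed event lets Ninject evict the instance, and everything scoped to it, immediately instead of waiting for GC plus the next pruning pass:

using System;
using Ninject.Infrastructure.Disposal;

public class Services1 : INotifyWhenDisposed
{
    private readonly IRepository _repository;
    private readonly Services2 _services2;

    public Services1(IRepository repository, Services2 services2)
    {
        _repository = repository;
        _services2 = services2;
    }

    public bool IsDisposed { get; private set; }

    public event EventHandler Disposed;

    public void Dispose()
    {
        if (IsDisposed) return;
        IsDisposed = true;

        // Notifying Ninject here is what makes the release immediate.
        var handler = Disposed;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}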
In the video for the course CQRS in Practice, the Startup.cs code contains the following:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddScoped<UnitOfWork>();
}
However, shouldn't the code be services.AddTransient<UnitOfWork>(), given that UnitOfWork has no Dispose method? Why would UnitOfWork.Dispose() be required for AddScoped?
The lifetime of an object (scoped, transient, singleton) is a wholly separate issue from whether or not the object implements IDisposable.
It is sometimes the case that objects that implement IDisposable are used in Dependency Injection (often because they're external dependencies that have unmanaged resources), but it's not always that way.
AddScoped, in the context of ASP.NET Core, means that for the lifetime of an ASP.NET request, that same object will be used.
AddTransient, in the context of ASP.NET Core, means that every resolution, even within the same HTTP request, will produce a new instance of that object.
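To make the difference concrete, here is a small sketch using the built-in container (UnitOfWork stands in for the class from the question):

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddScoped<UnitOfWork>();
var provider = services.BuildServiceProvider();

using (var scope = provider.CreateScope())
{
    var first = scope.ServiceProvider.GetRequiredService<UnitOfWork>();
    var second = scope.ServiceProvider.GetRequiredService<UnitOfWork>();

    // True with AddScoped (one instance per scope/request);
    // it would be False with AddTransient (new instance per resolution).
    Console.WriteLine(ReferenceEquals(first, second));
}

public class UnitOfWork { }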
For your particular problem, the unit of work issue: before switching to transient, make sure whatever database you're using is OK with multiple readers and writers. With AddTransient, if you make multiple calls to the database, you're going to open new transactions and (possibly) connections for each call, and some databases do not like this very much (Postgres being a shining example).
The lingo we use to talk about that is the Multiple Active Result Sets issue, and each database handles it differently.
Maybe this question already has an explanation somewhere, but I could not find a good solution to the following:
I was reading this blog post from Mark Seemann about captive dependencies, and as far as I understand, at the end of the post he comes to the conclusion that you should never use, or at least should try to avoid, captive dependencies; otherwise there will be trouble (so far, OK). Here is another post from the Autofac documentation.
They suggest using captive dependencies only on purpose (when you know what you are doing!). This made me think about a situation on my website. I have about 10 services, all of which rely on a DbContext for database operations. I think they could easily be registered as InstancePerLifetimeScope if I solve the problem of the DbContext being held in memory forever, attached to my services (I am using Autofac in my case). So I thought a good starting point would be to register all of these services as per-lifetime-scope instances and the DbContext as instance-per-request. Then in my services I would use something like this:
public class MyService
{
    private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();

    private MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}
And then in my startup class I have:
builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerRequest();
builder.RegisterType<MyService>().As<IMyService>().InstancePerLifetimeScope();
Does this pattern work correctly? I mean, does it avoid keeping the dbContext attached to a service forever, so that it is disposed at the end of the request? And if it works, is there any performance penalty in this line:

private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();

compared to constructor injection? (The services make many calls to the database through the dbContext, so I am afraid that resolving IDbContext every time I want to use it might be resource consuming.)
The reason I want dbContext to be instance per request and not instance per dependency is that I have implemented the unit of work pattern on top of the dbContext object.
A normal method in my controller would look like:
public ActionResult DoSth()
{
    using (var unitOfWork = UnitOfWorkManager.NewUnitOfWork())
    {
        // do stuff
        try
        {
            unitOfWork.Commit();
            return View();
        }
        catch (Exception e)
        {
            unitOfWork.RollBack();
            LoggerService.Log(e);
            return View();
        }
    }
}
If this works fine, there is another issue I am concerned about. If I can make my services instances per lifetime scope (except the DbContext), is there any issue with applying async-await on every method inside the services to make them non-blocking? In particular, is there any issue using async-await against the dbContext instance? For example, I would have something like this:
public async Task<MyModel> GetMyModel()
{
    var result = // await a task which will use the dbContext instance here
    return Mapper.Map<MyModel>(result);
}
Any advice or suggestion is much appreciated!
I'd approach the issue from a distance.
There are some architectural choices that can make your life easier. In web development it's practical to design your application to have a stateless service layer (all state is persisted in the DB) and to fit the one HTTP request, one business operation principle (in other words, one service method per controller action).
I don't know what your architecture looks like (there's not enough info in your post to determine it), but chances are it meets the criteria described above.
In this case it's easy to decide which component lifetime to choose: DbContext and the service classes can be transient (InstancePerDependency in Autofac terminology) or per request (InstancePerRequest) - it doesn't really matter. The point is that they have the same lifetime, so the problem of captive dependencies doesn't arise at all.
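For illustration, a registration sketch under that assumption (type names taken from the question):

using Autofac;

var builder = new ContainerBuilder();

// Option A: both per request.
builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerRequest();
builder.RegisterType<MyService>().As<IMyService>().InstancePerRequest();

// Option B: both transient (InstancePerDependency is Autofac's default).
// builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerDependency();
// builder.RegisterType<MyService>().As<IMyService>().InstancePerDependency();

Either option keeps the lifetimes aligned, which is what rules out captive dependencies.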
Further implications of the above:
You can just use ctor injection in your service classes without worry. (Anyway, the service locator pattern would be the last resort, after investigating lifetime control options like lifetime scopes and Owned<T>.)
EF itself implements the unit of work pattern via SaveChanges, which is suitable in most cases. Practically, you only need to implement a UoW over EF if its transaction handling doesn't meet your needs for some reason; these are rather special cases.
[...] is there any issue to apply async-await on every method inside of the services to make them non-blocking methods.
If you apply the async-await pattern consistently (I mean all async operations are awaited) straight up to your controller actions (returning Task<ActionResult> instead of ActionResult), there'll be no issues. (However, keep in mind that in ASP.NET MVC 5 async support is not complete - async child actions are not supported.)
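To make that concrete, the controller action from the question might look like this when made async end to end (UnitOfWorkManager and LoggerService come from the question; _myService and GetMyModelAsync are assumed names):

public async Task<ActionResult> DoSth()
{
    using (var unitOfWork = UnitOfWorkManager.NewUnitOfWork())
    {
        try
        {
            // Awaited, not blocked on, so the request thread is freed up.
            var model = await _myService.GetMyModelAsync();
            unitOfWork.Commit();
            return View(model);
        }
        catch (Exception e)
        {
            unitOfWork.RollBack();
            LoggerService.Log(e);
            return View();
        }
    }
}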
The answer, as always, is it depends... This configuration can work if:
- Your scopes are created within the request boundary. Is your unit of work creating a scope?
- You don't resolve any of your InstancePerLifetimeScope services before creating your scope. Otherwise they potentially live longer than they should, if you create multiple scopes within the request.
I personally would just recommend making anything that depends on DbContext (directly or indirectly) InstancePerRequest; transient would work as well. You definitely want everything within one unit of work to use the same DbContext. Otherwise, because of Entity Framework's first-level cache, different services may retrieve the same database record but operate on different in-memory copies if they're not using the same DbContext. The last update would win in that case.
I would not reference your container in MyService; just constructor inject the context instead. Container references in your domain or business logic should be used sparingly and only as a last resort.
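A sketch of what that could look like, reusing the names from the question:

public class MyService : IMyService
{
    private readonly IDbContext _dbContext;

    // Autofac supplies the request-scoped IDbContext here; no container
    // reference is needed inside the service.
    public MyService(IDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}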
I have an existing C# ASP.NET application with a user interface and various buttons to initiate actions. The actions make synchronous method calls on a singleton class, which I'll call ServiceLayer. This layer also initializes a data model.
I want to schedule some of the actions from the UI to occur at certain times of day. I believe Quartz.NET provides all the necessary features I need to do this. I can successfully call methods on the singleton class ServiceLayer from the Execute(IJobExecutionContext context) method of each job class (i.e. classes which implement the IJob interface). However, I don't like this approach for a few reasons:
- It is difficult to unit test (e.g. I have to ensure the singleton is initialized before I can do anything).
- It scales poorly if many jobs are called.
- There are thread-safety issues associated with calling multiple methods on the singleton class at the same time.
My question is: what is the best design pattern to handle this case instead of calling methods on a singleton directly? I believe I need to make use of the JobDataMap somehow, but I'm not sure how. Should I be looking at a producer-consumer or a queuing approach?
You might want to consider implementing a custom job factory that injects your service layer object into the job. There are already implementations of custom factories for the most popular DI containers out there, so you could go with one of those or build your own. This would allow you to pass a reference to your service layer object each time you create a job and should help with unit testing. It would also resolve the singleton issue.
As far as scaling up is concerned, you could use the JobDataMap to pass in things like connection strings or server names, allowing you to load balance or distribute work across your servers.
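One hand-rolled sketch of such a factory (assuming Quartz.NET 2.x; IServiceLayer is an assumed interface extracted from the ServiceLayer singleton, and each job type is assumed to expose a constructor taking it):

using System;
using Quartz;
using Quartz.Spi;

public class ServiceLayerJobFactory : IJobFactory
{
    private readonly IServiceLayer _serviceLayer;

    public ServiceLayerJobFactory(IServiceLayer serviceLayer)
    {
        _serviceLayer = serviceLayer;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // Hand the service layer to the job instead of letting the job
        // reach out to a singleton.
        return (IJob)Activator.CreateInstance(bundle.JobDetail.JobType, _serviceLayer);
    }

    public void ReturnJob(IJob job)
    {
        var disposable = job as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    }
}

// Wire-up at startup:
// scheduler.JobFactory = new ServiceLayerJobFactory(serviceLayer);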
Here are some posts describing the custom job factory approach if you end up going down that path.
After doing some research on MEF I came across CreationPolicy.Shared, which according to MSDN:
Specifies that a single shared instance of the associated ComposablePart will be created by the CompositionContainer and shared by all requestors.
Sounds good as long as I always ensure that one and only one container ever accesses the class that I export with this policy. So how do I go about ensuring that only one container ever accesses my exported type? Here is my scenario:
I have a Windows service that needs to tap into a singleton-like class for some in-memory data. The data is non-persistent, so I want it to be created fresh whenever the service starts up, but it serves no purpose once the service is stopped. Multiple threads in my service will need to read and write to this object in a thread-safe fashion, so my initial plan was to inherit from ConcurrentDictionary to ensure thread-safe operations against it.
The threads that will be tapping into this class all inherit from a single abstract base class, so is there a way to have this class (and only this class) import it from MEF and have this work the way I want?
Thanks for any tips you may have. I'm newish to MEF, so I'm still learning the ins and outs.
If it absolutely must be a singleton amongst different containers, you could use a private constructor and expose a static Instance property, as if it were a "classic" non-container-managed singleton. Then in the composition root, use ComposeExportedValue to register it with the container:
container.ComposeExportedValue(MySingleton.Instance);
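Pulled together, a sketch could look like this (MySingleton is an illustrative name; catalog stands for whatever catalog the service already composes from):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public sealed class MySingleton
{
    private static readonly MySingleton _instance = new MySingleton();

    // The private ctor prevents both containers and user code from
    // creating additional instances.
    private MySingleton() { }

    public static MySingleton Instance
    {
        get { return _instance; }
    }
}

// Composition root:
// var container = new CompositionContainer(catalog);
// container.ComposeExportedValue(MySingleton.Instance);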
You could always use the Lazy<T> type, since it blocks other threads, as described in this blog post: http://geekswithblogs.net/BlackRabbitCoder/archive/2010/05/19/c-system.lazylttgt-and-the-singleton-design-pattern.aspx
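A minimal sketch of that variant (the default LazyThreadSafetyMode.ExecutionAndPublication is what makes other threads block until the single instance exists):

using System;

public sealed class MySingleton
{
    private static readonly Lazy<MySingleton> _instance =
        new Lazy<MySingleton>(() => new MySingleton());

    private MySingleton() { }

    public static MySingleton Instance
    {
        get { return _instance.Value; }
    }
}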
I'm working on a web application that uses a couple of services to synchronize data with external resources. The application and the services share the same data layer and use Castle Windsor to implement IoC.
In the web application there is the PerWebRequest lifestyle, which limits the lifetime of an instance to the lifetime of a request. I want to use something similar in the services.
The services are triggered every once in a while to do the synchronization. I want the services and repositories in the data layer to be singletons within a single iteration of the service, similar to the PerWebRequest lifestyle in the web application.
What I've come up with is the concept of a Run. A run is a single invocation of the synchronization code within the service. That looks like this:
using (_runManager.Run())
{
    var sync = _usageRepoFactory.CreateInstance();
    sync.SynchronizeUsage();
}
When it is disposed, at the end of the using block, the IRun implementation releases all instances resolved with the PerRunLifestyle since its creation.
This code looks quite clean, but I wonder if there is a better way of doing this. I have tried using child containers but found these rather 'heavy' after profiling the solution.
Any feedback is welcome. If needed I can post the IRun implementation as well.
Update
Based on the comments, I've cleaned up the code a bit. I've introduced a new service, IRunManager, which is basically a factory for IRun. I've also started using a factory to get rid of the ServiceLocator invocation.
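For reference, the hypothetical shape of these abstractions (the names follow the post; the members are only illustrative):

using System;

public interface IRun : IDisposable
{
    // Disposing the run releases every instance that was resolved with
    // the PerRunLifestyle since the run began.
}

public interface IRunManager
{
    // Starts a new run; dispose the returned IRun to end it.
    IRun Run();
}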
Take a look at this contextual lifestyle