Reducing dependencies with SignalR, NHibernate and Ninject - c#

An architectural question. I've got a nicely de-coupled MVC3 solution with a few projects that's working rather well.
Proj.Core - interfaces for data classes and services
Proj.Services - interfaces for model services
Proj.Data.NH - implementations of the data interfaces
Proj.Services.NH - implementations of the data / model services
Proj.Infrastructure - setting up Ninject and NHibernate
Proj.Tests - unit tests
Proj.Web - the MVC project
I've set up NHibernate to be session per request in the infrastructure project, so Proj.Web doesn't need to reference NHibernate (or Ninject, for that matter). I'm now introducing SignalR, which works really nicely for a quick chat app. I've put a SignalR hub in the web project. I now want to persist the chat messages in the database, which has confused me somewhat.
I want to use a service (let's call it PostService), so the SignalR hub doesn't have a dependency on NHibernate. My other services are injected into the controllers' constructors, and the session is injected into the services' constructors.
As the SignalR hub hangs around (unlike controllers), the PostService (injected into the hub's constructor as an IPostService) can't have a session injected into its own constructor: there won't be a request-bound session at that point, and if there were, it would live as long as the hub does, which is far too long for a transaction.
I could inject the session factory into the PostService, and each method could use a transaction, e.g.
private void WithTransaction(Action<ISession> action)
{
    using (var session = _sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        action(session);
        tx.Commit();
    }
}

public IPost Post(string message)
{
    IPost post = null;
    WithTransaction(session =>
    {
        post = new Post(message);
        session.Save(post);
    });
    return post;
}
The hub will then call _postService.Post(message);
However, once the Post method does more things, I'll want to use some of my existing services to do that work, as they've already been written and unit tested. Because the session is created inside the method, I can't have those services injected into the PostService constructor, as they take a session in their constructors.
So, I guess I have the following options, but I'm not sure if a) this is a complete list, or b) which is the best option:
I. Inject an IDependencyResolver into the PostService constructor, and create the services I need in the Post method, passing the session into their constructors. There's an IDependencyResolver in System.Web.Mvc and in SignalR, so this would (depending on which project PostService resides in) introduce a dependency on one of those libraries.
II. Modify the services so each method that uses a session has one passed in as a parameter, and keep an overload without the session parameter that calls the new one. The MVC code would use the overload without the session parameter, and the PostService would use the one that takes a session, e.g.
public void SaveUser(IUser user)
{
    SaveUser(_session, user);
}

public void SaveUser(ISession session, IUser user)
{
    session.Save(user);
}
III. Don't use the existing services. Have the PostService do its own thing, even if there is a bit of duplication (e.g. getting user details, etc.).
IV. Remove the ISession from the services' constructors and pass it into each method (and deal with the controllers accordingly).
V. Something else.
I guess I'm leaning towards the first one, but I'm not sure where PostService would live. If it goes in Proj.Services.NH, then I'd have to introduce a dependency on System.Web.Mvc or SignalR, which I don't want to do. If it lives in Proj.Web, then I'd have to introduce a dependency on NHibernate, which I also don't want to do. It doesn't belong in Proj.Infrastructure, as it is application code. Should it have its own project with dependencies on everything, or is there a better way?

I would use some sort of auto factory for the additional services you need. So you would write your PostService constructor as:
public PostService(Func<INeededService1> factory1, Func<INeededService2> factory2, ...)
{
    ...
}
and then use this extension to have these factories resolved automatically (by querying the container).
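A rough sketch of what that could look like, assuming the Ninject factory extension (or an equivalent Func<T> binding) resolves the factories from the kernel; INeededService1/2 and DoSomethingWith are placeholders, not types from the question:

using System;

// Sketch only: the Func<T> parameters are resolved from the Ninject kernel
// (e.g. via Ninject.Extensions.Factory). These interfaces are placeholders.
public interface INeededService1 { void DoSomethingWith(string message); }
public interface INeededService2 { }

public class PostService
{
    private readonly Func<INeededService1> _factory1;
    private readonly Func<INeededService2> _factory2;

    public PostService(Func<INeededService1> factory1, Func<INeededService2> factory2)
    {
        _factory1 = factory1;
        _factory2 = factory2;
    }

    public void Post(string message)
    {
        // Resolve collaborators per call instead of capturing them for the
        // lifetime of the hub; how ISession reaches them is up to the bindings.
        var service1 = _factory1();
        service1.DoSomethingWith(message);
    }
}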

Related

How to pass container (IServiceProvider) to be available in business classes?

I have a WebAPI (.NET 5) project that hosts services to be called from other (web) applications.
I would like to use Dependency Injection to get some service classes inside my business objects.
Naturally, all calls into my project go through controllers, which can receive an IServiceProvider that lets me call GetRequiredService().
All is good inside the controller instance. But some/many of my business objects are not created by the Dependency Injection engine, so they don't have access to the service provider, and the only way to get services from DI is to pass the IServiceProvider instance manually to the business objects that need it. Obviously, that adds excessive code and reduces readability.
What can I do to avoid passing the instance of IServiceProvider to my business objects and still be able to access it?
One solution I see is to create a static global object (_globalServiceProvider), but I'm afraid this will ruin the whole idea.
Can anyone recommend a better solution?

DI in Service Fabric Service Remoting

I have a Service Fabric application with one service which is exposed to the Internet (GatewayService) through an ASP.NET Web API and a couple of internal services not exposed to the Internet (let's call one of them InternalService). So far, InternalService is also an ASP.NET Web API, so InternalService.cs has a CreateServiceInstanceListeners() method which looks like this:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener(serviceContext =>
            new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
                WebHost.CreateDefaultBuilder()
                    .UseStartup<Startup>()
                    .ConfigureServices((context, services) => { services.AddSingleton(serviceContext); })
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    .UseUrls(url)
                    .Build()))
    };
}
The Startup class (in Startup.cs) for InternalService configures some services, such as adding a SQL DbContext to the Dependency Injection system, and of course setting up ASP.NET with AddMvc() etc. I have a couple of ApiControllers which expose the API.
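For reference, a minimal sketch of the kind of Startup described here (the context type, connection string and MVC setup are illustrative, not the asker's actual code):

using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Minimal sketch; InternalDbContext and the connection string are placeholders.
public class InternalDbContext : DbContext
{
    public InternalDbContext(DbContextOptions<InternalDbContext> options) : base(options) { }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<InternalDbContext>(options =>
            options.UseSqlServer("<connection string>"));
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}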
This works, BUT I don't get any real type safety with this, and it generally makes development a bit cumbersome, needing to deserialize the result manually in my GatewayService before manipulating it. So I decided to go with SF's Service Remoting instead, resulting in a CreateServiceInstanceListeners() method which looks like this:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return this.CreateServiceRemotingInstanceListeners();
}
Then I copied all the logic from the controllers into InternalService.cs too, but this led to an issue: I don't have access to my DbContext anymore, because it was injected into the constructor of the ApiController, instantiated by ASP.NET according to the rules set in the Startup class, which isn't used anymore.
Is there a way for me to use Startup in the same way when using Service Remoting?
Can I separate the API into multiple classes, in the same way as ApiControllers are separated into multiple classes? I feel like having all the exposed methods in the same class will be quite a hassle.
I know this already has an accepted answer, but I want to add my two cents.
As you have realized, remoting has two major differences compared to WebApi:
Given a remoting interface, you have a single implementation class
The remoting implementation class is a singleton, so, even if you use DI as explained in the accepted answer, you still can't inject a DbContext per request.
I can give you some solutions to these problems:
This one is simple: create more interfaces. You can add as many remoting interfaces as you want in a single service fabric service. So, you should split your remoting API into smaller interfaces with groups that make sense (interface segregation). But, I don't think you should have many, because that would probably mean that your microservice has too many responsibilities.
A naive approach to having dependencies per request is to inject factories into the remoting class, so you can resolve and dispose dependencies in every method instead of by constructor injection. But I found a much better approach using Mediatr, which might not seem trivial, but once set up it's very easy to use. The way it works is you create a little helper class that gets an ILifetimeScope (as you use Autofac) in the constructor and it exposes an Execute method. This method will create a child LifetimeScope, resolve Mediatr and send a WrapperRequest<TRequest> (the wrapper is a trick so that the remoting input and output objects don't have to depend on Mediatr). This will allow you to implement a Handler class for each remoting operation, which will be resolved per request so that you can inject the dependencies in the constructor as you do with a WebApi controller.
It might sound confusing if you are not familiar with Mediatr and Autofac. If I have time I'll write a blog post about it.
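For what it's worth, a rough sketch of such a helper (names are illustrative, the WrapperRequest trick is omitted for brevity, and it assumes Mediatr and the handlers are already registered with Autofac):

using System.Threading.Tasks;
using Autofac;
using MediatR;

// Sketch of the per-call scope helper described above; names are illustrative.
public class RemotingRequestExecutor
{
    private readonly ILifetimeScope _rootScope;

    public RemotingRequestExecutor(ILifetimeScope rootScope)
    {
        _rootScope = rootScope;
    }

    public async Task<TResponse> Execute<TResponse>(IRequest<TResponse> request)
    {
        // One child lifetime scope per remoting call, so each handler (and its
        // DbContext) is resolved and disposed per request, as in a WebApi controller.
        using (var scope = _rootScope.BeginLifetimeScope())
        {
            var mediator = scope.Resolve<IMediator>();
            return await mediator.Send(request);
        }
    }
}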
You can use Autofac; there's an entire page that explains how to set it up:
Add the Autofac.ServiceFabric NuGet package
Configure DI:
// Start with the trusty old container builder.
var builder = new ContainerBuilder();

// Register any regular dependencies.
builder.RegisterModule(new LoggerModule(ServiceEventSource.Current.Message));

// Register the Autofac magic for Service Fabric support.
builder.RegisterServiceFabricSupport();

// Register a stateless service...
builder.RegisterStatelessService<DemoStatelessService>("DemoStatelessServiceType");

// ...and/or register a stateful service.
// builder.RegisterStatefulService<DemoStatefulService>("DemoStatefulServiceType");

using (builder.Build())
{
    ServiceEventSource.Current.ServiceTypeRegistered(
        Process.GetCurrentProcess().Id,
        typeof(DemoStatelessService).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
Check out the demo project.

ASP.NET Core - Repository dependency injection fails on Singleton injection

I am using SoapCore to create a web service for my ASP.NET Core MVC application.
I am using Entity Framework Core and a simple Repository pattern to get my DB data.
I am injecting my repository classes via .AddSingleton() in my Startup.cs:
services.AddSingleton<IImportRepository, ImportRepository>();
services.AddSingleton<IWebService, WebService>();
Since the EF DbContext is scoped I get an error when calling my web service:
Cannot consume scoped service 'App.Data.ApplicationDbContext' from singleton 'App._Repository.IImportRepository'.
When I use .AddScoped() instead, it works fine.
I've read that injecting scoped dependencies via a controller's/class's constructor is bad practice, since it "falls back" to being a singleton or behaves like one.
I wonder if there is another way to make it work with singletons, or whether using scoped injection in my controllers via the ctor has any major drawbacks in the long term (about 100-200 users will use the site)?
Simply put, your go-to lifetime should be "scoped". You should only use a singleton or transient lifetime if you have a good reason to do so. For a singleton, that's stuff like managing locks or holding data that needs to persist for the lifetime of the application, neither of which applies to the concept of a repository. Repositories should be entirely disposable. The point is to persist to the database or to some other store, so they should not contain any data in their own right that needs to be persisted.
Long and short, your best bet here is to simply make your repo(s) scoped, so you can directly inject the context. As far as constructor injection goes, I'm not sure where you got the idea that that's a bad practice. It's in fact how dependency injection works in most cases, so you can't really have one without the other.
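For example, the registrations from the question could simply become scoped (sketch):

// Sketch: the registrations from the question, switched to scoped lifetimes
// so the repositories can safely consume the scoped ApplicationDbContext.
services.AddScoped<IImportRepository, ImportRepository>();
services.AddScoped<IWebService, WebService>();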
If you absolutely need to have a singleton, then your only option is the service locator antipattern. For that, you will inject IServiceProvider:
public class MyRepo
{
    private readonly IServiceProvider _serviceProvider;

    public MyRepo(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    ...
}
Then, each time you need the context (that's important), you'll need to do:
using (var scope = _serviceProvider.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<MyContext>();
    // do something with context
}
Scoped objects cannot be injected into singleton objects. It's as simple as this: singletons are created only once, when the app is starting, and are used by all subsequent requests, while scoped objects are created during each request and disposed at the end of it. So there is no way for the previously created singleton to know about the scoped objects created during each request, which means using scoped objects inside singletons is not possible. The other way around is fine.
injecting scoped dependencies via a controllers/classes constructor
I don't think it's a bad practice at all. If it were, how would you plan to do unit testing?
Also, it's not right to make the DB context a singleton. You will face caching issues and data anomalies across parallel and subsequent requests. In my opinion, the DB context has to be scoped, and all the objects that use it have to be scoped all the way up.
So in your case, make ImportRepository, WebService and the DB context all scoped.
Cheers,

Is it bad to have a captive dependency on DbContext instance?

Maybe this question has been explained somewhere already, but I could not find the best solution to this:
I was reading this blog post from Mark Seemann about captive dependencies, and as far as I understand, at the end of the post he comes to the conclusion to never use, or at least try to avoid, captive dependencies, otherwise there will be trouble (so far so good). Here is another post from the Autofac documentation.
They suggest using captive dependencies only on purpose (when you know what you are doing!). This made me think about a situation I have on my website. I have around 10 services, all of which rely on a DbContext for database operations. I think they could easily be registered as InstancePerLifetimeScope if I fix the problem of the DbContext being held in memory forever, attached to my services (I am using Autofac). So I thought a good starting point would be to register all of these as per-lifetime-scope instances and the DbContext as instance per request. Then, in my services, I would use something like this:
public class MyService
{
    private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();

    private MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}
And then in my startup class I have:
builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerRequest();
builder.RegisterType<MyService>().As<IMyService>().InstancePerLifetimeScope();
Does this pattern work correctly, i.e. not keeping the dbContext attached to any service forever, so that it is disposed at the end of the request? And if it works, is there any performance issue with this line:
private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();
compared to constructor injection? (There are many calls from the dbContext to the database, so I am afraid that resolving IDbContext every time I want to use it might be resource-consuming.)
The reason I want dbContext to be instance per request and not instance per dependency is that I have implemented the unit of work pattern on top of the dbContext object.
A normal method in my controller would look like:
public ActionResult DoSth()
{
    using (var unitOfWork = UnitOfWorkManager.NewUnitOfWork())
    {
        // do stuff
        try
        {
            unitOfWork.Commit();
            return View();
        }
        catch (Exception e)
        {
            unitOfWork.RollBack();
            LoggerService.Log(e);
            return View();
        }
    }
}
If this works fine, then there is another issue I am concerned about. If I can make my services per-lifetime-scope instances (except the DbContext), is there any issue with applying async-await to every method inside the services to make them non-blocking? I am asking in case there is any issue using async-await with the dbContext instance; for example, I would have something like this:
public async Task<MyModel> GetMyModel()
{
    var result = // await on a new task which will use the dbcontext instance here
    return Mapper.Map<MyModel>(result);
}
Any advice or suggestion is much appreciated!
I'd approach the issue from a distance.
There are some architectural choices which can make your life easier. In web development it's practical to design your application to have a stateless service layer (all state is persisted in the DB) and to follow the "one HTTP request, one business operation" principle (in other words, one service method per controller action).
I don't know what your architecture looks like (there's not enough info in your post to tell), but chances are it meets the criteria described above.
In this case it's easy to decide which component lifetime to choose: DbContext and the service classes can be transient (InstancePerDependency in Autofac terminology) or per request (InstancePerRequest) - it doesn't really matter. The point is that they have the same lifetime, so the problem of captive dependencies doesn't arise at all.
Further implications of the above:
You can just use ctor injection in your service classes without worries. (Anyway, service locator pattern would be the last option after investigating lifetime control possibilities like lifetime scopes and IOwned<T>.)
EF itself implements the unit of work pattern via SaveChanges, which is suitable in most cases. In practice, you only need to implement a UoW over EF if its transaction handling doesn't meet your needs for some reason; these are rather special cases.
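For illustration, a sketch of that built-in unit of work, using the question's own _dbContext and MyTable (the Name property, and SaveChanges being exposed on IDbContext, are assumptions):

// Sketch: the context tracks changes and SaveChanges persists them together,
// which is the built-in unit-of-work behaviour referred to above.
// Name is a placeholder property on the MyTable entity.
var entity = _dbContext.MyTable.FirstOrDefault();
if (entity != null)
{
    entity.Name = "updated";
}
_dbContext.SaveChanges(); // commits all tracked changes in one transaction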
[...] is there any issue with applying async-await to every method inside the services to make them non-blocking?
If you apply the async-await pattern consistently (I mean all async operations are awaited) straight up to your controller actions (returning Task<ActionResult> instead of ActionResult), there'll be no issues. (However, keep in mind that in ASP.NET MVC 5 async support is not complete - async child actions are not supported.)
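For example, a consistently async action could look like this (GetMyModelAsync is a hypothetical async variant of the question's service method):

// Sketch: await all the way up to the action; GetMyModelAsync is hypothetical.
public async Task<ActionResult> DoSth()
{
    var model = await _myService.GetMyModelAsync();
    return View(model);
}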
The answer, as always, is it depends... This configuration can work if:
Your scopes are created within the request boundary. Is your unit of work creating a scope?
You don't resolve any of your InstancePerLifetimeScope services before creating your scope. Otherwise they potentially live longer than they should if you create multiple scopes within the request.
I personally would just recommend making anything that depends on DbContext (either directly or indirectly) InstancePerRequest. Transient would work as well. You definitely want everything within one unit of work to be using the same DbContext. Otherwise, with Entity Framework's first level cache, you may have different services retrieving the same database record, but operating on different in-memory copies if they're not using the same DbContext. Last update would win in that case.
I would not reference your container in MyService, just constructor inject it. Container references in your domain or business logic should be used sparingly and only as a last resort.
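Putting those two points together, a sketch of MyService with plain constructor injection and matching per-request registrations:

// Sketch: constructor-inject the context instead of calling DependencyResolver,
// and give both components the same (per-request) lifetime.
public class MyService : IMyService
{
    private readonly IDbContext _dbContext;

    public MyService(IDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}

// In the Autofac configuration:
// builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerRequest();
// builder.RegisterType<MyService>().As<IMyService>().InstancePerRequest();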

SimpleInjector: End-to-end testing of controller's methods on a test database

I have a web app with several REST API controllers. These controllers get repositories injected as per this tutorial using SimpleInjector. I'd like to add some end-to-end testing to my project to make sure the controllers' method calls affect the database in a predictable manner (I'm using EF6, MySQL, code first). I was going to use this plan to test my app. I like the overall approach, but it seems like the author is feeding the DB context directly into the controller. In my case I have a controller that gets a repository injected through its constructor, and the repository in turn gets a DbContext injected. Obviously I could hardcode the chain of creating a DbContext, instantiating a repository and then instantiating a controller, but that kind of defeats the purpose of using SimpleInjector, doesn't it? I think there should be a way to do it in a more transparent manner.
Basically I would like to inject a separate database into my tests: when the server is running it uses one database, and when the tests are running they use another, ad-hoc database.
I have my test classes in a separate project, so I will need a way to instantiate my controllers and repositories from the main project, and I'm not sure how to do that either. Is it a good idea to expose my SimpleInjector.Container from another project somehow?
Additional info: I'm using .NET Framework (non-Core), and I would like to manage without mocking for now unless it's required.
You can abstract the DbContext behind an interface and use SimpleInjector's option to override registrations for your tests. That will allow you to register a different implementation of your context for testing. Then in your test setup code, call your standard registrations, assuming they're all in your composition root and/or bootstrapping project. Then flip the override switch and register the test context.
Override Registrations - For testing only
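A rough sketch of that test setup, assuming a hypothetical Bootstrapper class holding the standard registrations, an IDbContext abstraction and a TestDbContext pointing at the test database:

// Sketch only: Bootstrapper.RegisterServices, IDbContext and TestDbContext
// are placeholders for your own composition root and test types.
var container = new Container();
Bootstrapper.RegisterServices(container); // the main project's standard registrations

// Flip the switch and swap in a context that points at the test database.
container.Options.AllowOverridingRegistrations = true;
container.Register<IDbContext>(() => new TestDbContext("TestDbConnection"));
container.Options.AllowOverridingRegistrations = false;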
In your case, I would expect you don't have to do anything in particular. Your end-to-end tests would call into a test version of the web application over HTTP, and this test application is configured with a connection string that points at a test database. This way you can use the exact same DI configuration without having to make any changes. You certainly don't want to inject a different DbContext during testing.
Another option is to test in-memory, which means you don't call the web application over HTTP, but instead request a controller directly from Simple Injector and call its methods. Here the same holds: the only thing you want to change is your connection string, which is something that should already be configurable.
