AutoFac - Initialize heavy-weight singletons on app_start - c#

Our configuration is, MVC5 C# app, using AutoFac.
We have a number of singletons which, if they're initialized on the first request, cause a bad experience for the user because their initialization takes around 3-4 seconds in total. We're using AutoFac for dependency injection. I'm wondering if there's any way of making sure the singletons (or these specific ones) are built on App_Start so we don't lose time when the user sends the first request? If not, what's the best way of solving this problem?

The general solution to this type of problem is to hide such heavyweight objects behind a proxy implementation. This way you can trigger the initialization process directly at application startup, while the operation runs in the background without blocking requests (unless a request needs the data before initialization has finished).
In case your code looks like this:
// The abstraction in question
public interface IMyService
{
    ServiceData GetData();
}

// The heavy implementation
public class HeavyInitializationService : IMyService
{
    public HeavyInitializationService()
    {
        // Load data here
        Thread.Sleep(3000);
    }

    public ServiceData GetData() => ...
}
A proxy can be created as follows:
public class LazyMyServiceProxy : IMyService
{
    private readonly Lazy<IMyService> lazyService;

    public LazyMyServiceProxy(Lazy<IMyService> lazyService)
    {
        this.lazyService = lazyService;
    }

    public ServiceData GetData() => this.lazyService.Value.GetData();
}
You can use this proxy as follows:
Lazy<IMyService> lazyService = new Lazy<IMyService>(() =>
    new HeavyInitializationService());

// Autofac registration on the ContainerBuilder:
builder.Register(c => new LazyMyServiceProxy(lazyService))
    .As<IMyService>()
    .SingleInstance();

// Trigger the creation of the heavy data on a background thread:
Task.Factory.StartNew(() =>
{
    // Triggers the creation of HeavyInitializationService on a background thread.
    var v = lazyService.Value;
});
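To answer the App_Start part directly: in an MVC5 application the warm-up can be kicked off from Global.asax right after the container is built. Here is a minimal sketch, assuming the Autofac MVC integration (AutofacDependencyResolver from Autofac.Integration.Mvc) and the types from the example above; adapt the names to however your container is currently configured.

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        var builder = new ContainerBuilder();
        // ...your existing registrations stay here...

        // The Lazy<T> is shared between the proxy registration and the warm-up task.
        var lazyService = new Lazy<IMyService>(
            () => new HeavyInitializationService(), isThreadSafe: true);

        builder.Register(c => new LazyMyServiceProxy(lazyService))
               .As<IMyService>()
               .SingleInstance();

        var container = builder.Build();
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));

        // Start the heavy initialization now, without blocking application startup
        // or the first request.
        Task.Run(() => { var _ = lazyService.Value; });
    }
}

The first request that actually needs the data will still wait for whatever initialization time remains, but any request arriving after the background task has finished pays nothing.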


Share my dbContext with all my repository/service class?

I'm working on a classic .Net Framework Web API solution.
I have 3 layers. Let's call them
MVC - with POST, GET, UPDATE, DELETE controllers.
BIZZ - for business, with my service classes. My service classes are kind of repositories with CREATE, READ, UPDATE, DELETE and specific methods.
DATA - with POCO and definition of DB context.
I won't go into the EF layer; it is a classic Entity Framework project with POCOs. Here is a sample of a service, together with the base Service class:
public abstract class Service : IDisposable
{
    protected DbContext dbContext = new DbContext();

    public void Dispose()
    {
        dbContext.Dispose();
    }
}
Then I have a cart service and an order service. They are similar in structure, so I will only write the code useful for this example.
public class CartService : Service
{
    public Cart Create(Cart cart)
    {
        // Create the cart
    }

    public Cart Read(Guid id)
    {
        // Read
    }

    public Cart Update(Cart cart)
    {
        // I do some check first then
    }

    public void Delete(Cart cart)
    {
        // Delete
    }

    public void Checkout(Cart cart)
    {
        // Validation of cart removed in this example
        dbContext.Cart.Attach(cart);
        cart.DateCheckout = DateTime.UtcNow;
        dbContext.Entry(cart).State = EntityState.Modified; // I think this line can be removed
        dbContext.SaveChanges();

        using (var orderService = new OrderService())
        {
            foreach (var order in cart.Orders)
            {
                order.DateCheckout = cart.DateCheckout;
                order.Status = OrderStatus.PD; // pending
                orderService.Update(order);
            }
        }
    }
}
public class OrderService : Service
{
    public Order Create(Order order)
    {
        // Create the order
    }

    public Order Read(Guid id)
    {
        // Read
    }

    public Order Update(Order order)
    {
        dbContext.Entry(order).State = EntityState.Modified;
        dbContext.SaveChanges();
        // More process here...
        return order;
    }

    public void Delete(Order order)
    {
        // Delete
    }
}
So, I have a service, CartService, that calls another service, OrderService. I must work like this because I cannot simply accept the cart and all the orders in it as they are. When I save a new order or update an existing order, I must create records in some other tables in other databases (that code is not in my example). So, I repeat: I have a service that calls another service, and so I end up with 2 DbContexts. At best this just creates 2 contexts in memory; at worst it throws exceptions, like "you cannot attach an entity to 2 contexts" or "this entity is not in the context".
Well, I would like all my services to use the same context. I suppose you will all tell me to use dependency injection. Yes, well, ok, but I don't want to have to pass the context every time I create a new service. I don't want to have to do this:
public void Checkout(Cart cart)
{
    // ...
    using (var orderService = new OrderService(dbContext))
    {
        // ...
    }
}
I would like to do something that impacts my base Service only, if possible. A singleton maybe... At this point I can see your face. Yes, I know singletons are so bad. Yes, but I'm doing an IIS Web API; each request is a new instance, so I don't care about the impact of the singleton. And I can point to a different database by changing the connection string in the config file, so that benefit of DI is there already. Well, I also know it is possible to have a singleton with DI, I just don't know how.
So, what can I do to be sure I share my dbContext with all my services?
Disclaimer: This example is not intended to be a "good" one and certainly does not follow best practices, but, faced with an existing legacy code base which, judging from your example, already suffers from a number of questionable practices, this should get you past the multiple-context issues.
Essentially, if you're not already using an IoC container to perform dependency injection, then what you need is to introduce a unit of work to manage the scope of a DbContext, where your base Service class exposes a DbContext provided by the unit of work (essentially a DbContext registry).
For the unit of work, and assuming EF6, I would recommend Mehdime's DbContextScope, which is available as a NuGet package. Alternatively you can find the source code on GitHub and implement something similar without too much trouble. I like this pattern because it leverages the CallContext to serve as the communication layer between the DbContextScope (unit of work) created by the DbContextScopeFactory and the AmbientDbContextLocator. This will probably take a little time to get your head around, but it fits very nicely into legacy applications where you want the benefits of a unit of work but don't have dependency injection.
What it would look like:
In your Service class you would introduce the AmbientDbContextLocator to resolve your DbContext:
private readonly IAmbientDbContextLocator _contextLocator = new AmbientDbContextLocator();

protected DbContext DbContext
{
    get { return _contextLocator.Get<DbContext>(); }
}
And that's it. Later as you refactor to accommodate Dependency injection, just inject the AmbientDbContextLocator instead of 'new'ing it up.
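For example, the eventual constructor-injected version of the base class might look roughly like this (a sketch only; each concrete service would pass the locator up through its own constructor):

public abstract class Service : IDisposable
{
    private readonly IAmbientDbContextLocator _contextLocator;

    protected Service(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    // The ambient DbContext is owned by the surrounding DbContextScope,
    // so there is nothing for the service itself to dispose.
    protected DbContext DbContext => _contextLocator.Get<DbContext>();

    public void Dispose() { }
}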
Then, in your Web API controllers where you are using your services (not in the services themselves), you need to add a DbContextScopeFactory instance:
private readonly IDbContextScopeFactory _contextScopeFactory = new DbContextScopeFactory();
Lastly, in your API methods, when you want to call your services, you need to simply use the ContextScopeFactory to create a context scope. The AmbientDbContextLocators will retrieve the DbContext from this context scope. The context scope you create with the factory will be done in a using block to ensure your contexts are disposed. So, using your Checkout method as an example, it would look like:
In your Web API [HttpPost] Checkout() method:
using (var contextScope = _contextScopeFactory.Create())
{
    using (var service = new CartService())
    {
        service.Checkout(cart);
    }
    contextScope.SaveChanges();
}
Your cart service Checkout method would remain relatively unchanged, only instead of accessing dbContext as a variable (new DbContext()) it will access the DbContext property which gets the context through the context locator.
The Services can continue to call DbContext.SaveChanges(), but this isn't necessary and the changes will not be committed to the DB until the contextScope.SaveChanges() is called. Each service will have its own instance of the Context Locator rather than the DbContext and these will be dependent on you defining a ContextScope to function. If you call a Service method that tries to access the DbContext without being within a using (var contextScope = _contextScopeFactory.Create()) block you will receive an error. This way all of your service calls, even nested service calls (CartService calls OrderService) will be interacting with the same DbContext instance.
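To make that concrete, here is roughly what the refactored Checkout from the question could look like under this pattern (a sketch that simply mirrors the question's code, swapping the dbContext field for the DbContext property):

public void Checkout(Cart cart)
{
    // Validation of cart removed in this example
    DbContext.Cart.Attach(cart);
    cart.DateCheckout = DateTime.UtcNow;
    DbContext.Entry(cart).State = EntityState.Modified;
    DbContext.SaveChanges(); // optional; the commit happens at contextScope.SaveChanges()

    using (var orderService = new OrderService())
    {
        foreach (var order in cart.Orders)
        {
            order.DateCheckout = cart.DateCheckout;
            order.Status = OrderStatus.PD; // pending
            // OrderService's own locator resolves the same ambient DbContext,
            // so everything participates in the one unit of work.
            orderService.Update(order);
        }
    }
}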
Even if you just want to read data, you can leverage a slightly faster DbContext using _contextScopeFactory.CreateReadOnly() which will help guard against unexpected/disallowed calls to SaveChanges().
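For instance, a read-only GET endpoint might look like this sketch (method and service names taken from the question):

[HttpGet]
public Cart GetCart(Guid id)
{
    using (_contextScopeFactory.CreateReadOnly())
    {
        using (var service = new CartService())
        {
            return service.Read(id);
        }
    }
}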
When using the ASP.NET Core stack, the tutorial for using EF with it defaults to using DI to provide your DB context, just not with a service layer. That said, it actually does the right thing for this out of the box. I'll give a brief rundown of the bare minimum necessary for this to work, using whatever the latest versions of ASP.NET Core Web API and EF Core were on NuGet at the time of writing.
First, let's get the boilerplate out of the way, starting with the model:
Models.cs
public class ShopContext : DbContext
{
    public ShopContext(DbContextOptions options) : base(options) {}

    // We add a GUID here so we're able to tell it's the same object later.
    public string Id { get; } = Guid.NewGuid().ToString();

    public DbSet<Cart> Carts { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public class Cart
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public string Id { get; set; }
    public string Name { get; set; }
}
Then some bare-bones services:
Services.cs
public class CartService
{
    ShopContext _ctx;

    public CartService(ShopContext ctx)
    {
        _ctx = ctx;
        Console.WriteLine($"Context in CartService: {ctx.Id}");
    }

    public async Task<List<Cart>> List() => await _ctx.Carts.ToListAsync();

    public async Task<Cart> Create(string name)
    {
        return (await _ctx.Carts.AddAsync(new Cart {Name = name})).Entity;
    }
}

public class OrderService
{
    ShopContext _ctx;

    public OrderService(ShopContext ctx)
    {
        _ctx = ctx;
        Console.WriteLine($"Context in OrderService: {ctx.Id}");
    }

    public async Task<List<Order>> List() => await _ctx.Orders.ToListAsync();

    public async Task<Order> Create(string name)
    {
        return (await _ctx.Orders.AddAsync(new Order {Name = name})).Entity;
    }
}
The only notable things here are: the context comes in as a constructor parameter as God intended, and we log the ID of the context to verify when it gets created with what.
Then our controller:
ShopController.cs
[ApiController]
[Route("[controller]")]
public class ShopController : ControllerBase
{
    ShopContext _ctx;
    CartService _cart;
    OrderService _order;

    public ShopController(ShopContext ctx, CartService cart, OrderService order)
    {
        Console.WriteLine($"Context in ShopController: {ctx.Id}");
        _ctx = ctx;
        _cart = cart;
        _order = order;
    }

    [HttpGet]
    public async Task<IEnumerable<string>> Get()
    {
        var carts = await _cart.List();
        var orders = await _order.List();
        return (from c in carts select c.Name).Concat(from o in orders select o.Name);
    }

    [HttpPost]
    public async Task Post(string name)
    {
        await _cart.Create(name);
        await _order.Create(name);
        await _ctx.SaveChangesAsync();
    }
}
As above, we take the context as a constructor parameter to triple-check it's what it should be; we also need it to call SaveChanges at the end of an operation. (You can refactor this out of controllers if you want to, but they'll work just fine as units of work for now.)
The part that ties this together is the DI configuration:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Use whichever provider you have here; this is where you grab a connection string from the app configuration.
    services.AddDbContext<ShopContext>(options =>
        options.UseInMemoryDatabase("Initrode"));

    services.AddScoped<CartService>();
    services.AddScoped<OrderService>();
}
AddDbContext() defaults to registering a DbContext to be created per-request by the container. Web API provides the AddControllers method that puts those into the DI container, and we also register our services manually.
The rest of Startup.cs I've left as-is.
Starting this up and opening https://localhost:5001/shop should log something like:
Context in CartService: b213966e-35f2-4cc9-83d1-98a5614742a3
Context in OrderService: b213966e-35f2-4cc9-83d1-98a5614742a3
Context in ShopController: b213966e-35f2-4cc9-83d1-98a5614742a3
with the same GUID for all three lines in a request, but a different GUID between requests.
A little additional explanation of what goes on above:
Registering a component in a container (using Add() and such above) means telling the container those components exist and that it should create them for you when asked, as well as what identifiers they're available under and how to create them. The defaults for this are more or less "make the component available as its class, and create it by calling its one public constructor, passing other registered components into it" - the container looks at the constructor signature to figure this out.
"Scoped" in an ASP.NET Core app means "per-request." I think in this case one could also use services with a transient lifetime - a new one created every time it's needed, but they'll still get the same DbContext as long as they're created while handling the same request. Which one to do is a design consideration; the main constraint is that you can't inject shorter-lived components into longer-lived components without having to use more complex techniques, which is why I favour having all components as short-lived as possible. In other words, I only make things longer-lived when they actually hold some state that needs to live for that time, while also doing that as sparingly as possible because state bad. (Just recently I had to refactor an unfortunate design where my services were singletons, but I wanted my repositories to be per-request so as to be able to inject the currently logged in user's information into the repository to be able to automatically add the "created by" and "updated by" fields.)
You'll note that with support for doing things this way being built-in to both ASP.NET Core and EF Core, there's actually very little extra code involved. Also, the only thing needed to go from "injecting a context into your controllers" (as the tutorial does) to "injecting a context into services that you use from your controllers" is adding the services into DI - since the controller and context are already under DI, anything new you add can be injected into them and vice versa.
This should give you a quick introduction into how to make things "just work" and shows you the basic use case of a DI container: you declaratively tell it or it infers "this is an X", "this is an Y", "this is a Z and it needs to be created using an X and a Y"; then when you ask the container to give you a Z, it will automagically first create an X and Y, then create Z with them. They also manage the scope and lifetime of these objects, i.e. only create one of a type for an API request. Beyond that it's a question of experience with them and familiarity with a given container - say Ninject and Autofac are much more powerful than the built-in one - but it's variations on the same idea of declaratively describing how to create an object possibly using other objects (its dependencies) and having the container "figure out" how to wire things together.
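For comparison, and since the original question mentions Autofac, the equivalent registrations in Autofac's own ContainerBuilder API would look roughly like this (illustrative only; hooking Autofac into ASP.NET Core additionally requires its hosting integration package):

var builder = new ContainerBuilder();

// "InstancePerLifetimeScope" is Autofac's counterpart to "scoped":
// one instance per request when used with the ASP.NET Core integration.
builder.RegisterType<CartService>().InstancePerLifetimeScope();
builder.RegisterType<OrderService>().InstancePerLifetimeScope();

var container = builder.Build();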

Change dependency resolution for specific scope only

I have one dependency registered as follows:
interface IDependency { }
class DependencyImpl : IDependency { }
Startup:
services.AddScoped<IDependency, DependencyImpl>();
This works as intended, as I do want to reuse the same instance within the scope of my Web API requests.
However, in one background service, I'd like to tell which instance it will resolve to:
class MyBackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory; // set in ctor

    public void DoStuff()
    {
        var itens = GetItens();
        var dependencyInstance = new DependencyImpl();

        Parallel.ForEach(itens, (item) =>
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                scope.SwapDependencyForThisScopeOnly<IDependency>(() => dependencyInstance); // something like this

                // resolve subsequent services with the provided dependencyInstance
                var someOtherService = scope.ServiceProvider.GetRequiredService<ItemService>();
                someOtherService.Process(item);
            }
        });
    }
}
I can't reuse the same scope because ItemService (and/or its dependencies) uses other scoped services that can't be shared. Nor do I want to replace the dependency resolution for the entire application.
Is it possible to do what I want here? Does it make sense?
I'm using .NET Core 2.2 with the default IoC container, for what it matters.
Edit in reply to @Steven: DependencyImpl contains configuration for how an item will be processed, and reading that configuration involves a relatively expensive query. DependencyImpl is also injected more than once in the graph. So, currently, it reads the configuration once, caches it in private properties, and uses the cached version on subsequent reads. Because I know I'll be reusing the same configuration for all items here, I'd like to avoid reading the configuration again for each parallel execution.
My real-world dependency is more similar to this:
interface IDependency
{
    Task<Configuration> GetConfigurationAsync();
}

class DependencyImpl : IDependency
{
    private Configuration _configuration;
    private readonly DbContext _dbContext;

    public DependencyImpl(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task<Configuration> GetConfigurationAsync()
    {
        if (_configuration is null)
        {
            // read configurations
        }
        return _configuration;
    }
}
I understand that, as is, my class is not thread-safe. I'd have to force a read at the start and/or add some thread safety here.
Also, this processing used to happen during the lifetime of a web request; the background service is the new part. I'd prefer to change as little of the existing code as possible, because there are few tests in place and, of course, there are time constraints from the powers that be.
In general, it is not a good idea to change the structure of registered object graphs while the application is running. Not only is this hard to achieve with most containers, it is prone to subtle problems that are hard to detect. I therefore suggest a small change to your design that circumvents the problem you are facing.
Instead of trying to swap the dependency as a whole, pre-populate an existing dependency with the data loaded on a different thread.
This can be done using the following abstraction/implementation pair:
public interface IConfigurationProvider
{
    Task<Configuration> GetConfigurationAsync();
}

public sealed class DatabaseConfigurationProvider : IConfigurationProvider
{
    private readonly DbContext _dbContext;

    public DatabaseConfigurationProvider(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Configuration Configuration { get; set; }

    public async Task<Configuration> GetConfigurationAsync()
    {
        if (Configuration is null)
        {
            Configuration = await ... // read configurations from _dbContext
        }
        return Configuration;
    }
}
Notice the public Configuration on the DatabaseConfigurationProvider implementation, which is not on the IConfigurationProvider interface.
This is the core of the solution I'm presenting. Allow your Composition Root to set the value, without polluting your application abstractions, as application code doesn't need to overwrite the Configuration object; only the Composition Root needs to.
With this abstraction/implementation pair, the background service can look like this:
class MyBackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory; // set in ctor

    public async Task DoStuff()
    {
        var itens = GetItens();

        // Create a scope for the root operation.
        using (var scope = _scopeFactory.CreateScope())
        {
            // Resolve the IConfigurationProvider first to load
            // the configuration once eagerly.
            var provider = scope.ServiceProvider
                .GetRequiredService<IConfigurationProvider>();

            var configuration = await provider.GetConfigurationAsync();

            Parallel.ForEach(itens, (item) => Process(configuration, item));
        }
    }

    private void Process(Configuration configuration, Item item)
    {
        // Create a new scope per thread
        using (var scope = _scopeFactory.CreateScope())
        {
            // Request the configuration implementation that allows
            // setting the configuration.
            var provider = scope.ServiceProvider
                .GetRequiredService<DatabaseConfigurationProvider>();

            // Set the configuration object for the duration of the scope
            provider.Configuration = configuration;

            // Resolve an object graph that depends on IConfigurationProvider.
            var service = scope.ServiceProvider.GetRequiredService<ItemService>();
            service.Process(item);
        }
    }
}
To pull this off, you need the following DI configuration:
services.AddScoped<DatabaseConfigurationProvider>();
services.AddScoped<IConfigurationProvider>(
p => p.GetRequiredService<DatabaseConfigurationProvider>());
This previous configuration registers DatabaseConfigurationProvider twice: once for its concrete type, once for its interface. The interface registration forwards the call and resolves the concrete type directly. This is a special 'trick' you have to apply when working with the MS.DI container, to prevent getting two separate DatabaseConfigurationProvider instances inside a single scope. That would completely defeat the correctness of this implementation.
Make an interface that extends IDependency and only applies to the faster implementation that you need to request, e.g., IFasterDependency. Then make a registration for IFasterDependency. That way your faster class is still an IDependency object and you won't disrupt too much existing code, but you can now request it freely.
public interface IDependency
{
// Actual, useful interface definition
}
public interface IFasterDependency : IDependency
{
// You don't actually have to define anything here
}
public class SlowClass : IDependency
{
}
// FasterClass is still an IDependency, but has its own interface
// so you can register it in your dependency injection
public class FasterClass : IFasterDependency
{
}
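The registrations could then look something like the following sketch (lifetimes are whatever your existing setup uses):

// Existing consumers keep resolving IDependency and get SlowClass as before.
services.AddScoped<IDependency, SlowClass>();

// The background service explicitly asks for IFasterDependency instead.
services.AddScoped<IFasterDependency, FasterClass>();

In the background service you would then resolve IFasterDependency (via scope.ServiceProvider.GetRequiredService<IFasterDependency>() or a constructor parameter) wherever you currently resolve IDependency.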

Using a Scoped service in a Singleton in an Asp.Net Core app

In my Asp.Net Core App I need a singleton service that I can reuse for the lifetime of the application. To construct it, I need a DbContext (from the EF Core), but it is a scoped service and not thread safe.
Therefore I am using the following pattern to construct my singleton service. It looks kinda hacky, therefore I was wondering whether this is an acceptable approach and won't lead to any problems?
services.AddScoped<IPersistedConfigurationDbContext, PersistedConfigurationDbContext>();

services.AddSingleton<IPersistedConfigurationService>(s =>
{
    ConfigModel currentConfig;
    using (var scope = s.CreateScope())
    {
        var dbContext = scope.ServiceProvider.GetRequiredService<IPersistedConfigurationDbContext>();
        currentConfig = dbContext.retrieveConfig();
    }
    return new PersistedConfigurationService(currentConfig);
});
...
public class ConfigModel
{
string configParam { get; set; }
}
What you're doing is not good and can definitely lead to issues. Since this is being done in the service registration, the scoped service is going to be retrieved once, when your singleton is first injected. In other words, this code here is only going to run once for the lifetime of the service you're registering, which, since it's a singleton, means it's only going to happen once, period. Additionally, the context you're injecting here only exists within the scope you've created, which goes away as soon as the using statement closes. As such, by the time you actually try to use the context in your singleton, it will have been disposed, and you'll get an ObjectDisposedException.
If you need to use a scoped service inside a singleton, then you need to inject IServiceProvider into the singleton. Then, you need to create a scope and pull out your context when you need to use it, and this will need to be done every time you need to use it. For example:
public class PersistedConfigurationService : IPersistedConfigurationService
{
    private readonly IServiceProvider _serviceProvider;

    public PersistedConfigurationService(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task Foo()
    {
        using (var scope = _serviceProvider.CreateScope())
        {
            var context = scope.ServiceProvider.GetRequiredService<IPersistedConfigurationDbContext>();
            // do something with context
        }
    }
}
Just to emphasize, again, you will need to do this in each method that needs to utilize the scoped service (your context). You cannot persist this to an ivar or something. If you're put off by the code, you should be, as this is an antipattern. If you must get a scoped service in a singleton, you have no choice, but more often than not, this is a sign of bad design. If a service needs to use scoped services, it should almost invariably be scoped itself, not singleton. There's only a few cases where you truly need a singleton lifetime, and those mostly revolve around dealing with semaphores or other state that needs to be persisted throughout the life of the application. Unless there's a very good reason to make your service a singleton, you should opt for scoped in all cases; scoped should be the default lifetime unless you have a reason to do otherwise.
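For completeness, the registrations for that pattern are just a sketch like the following (type names taken from the question; IServiceProvider itself is supplied by the container):

services.AddScoped<IPersistedConfigurationDbContext, PersistedConfigurationDbContext>();

// The singleton receives IServiceProvider and creates its own scopes internally.
services.AddSingleton<IPersistedConfigurationService, PersistedConfigurationService>();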
Although Dependency injection: Service lifetimes documentation in ASP.NET Core says:
It's dangerous to resolve a scoped service from a singleton. It may cause the service to have incorrect state when processing subsequent requests.
But in your case this is not the issue. Actually, you are not resolving the scoped service from the singleton; you are just getting an instance of the scoped service from within the singleton whenever it requires one. So your code should work properly without any disposed-context error!
Another potential solution can be to use IHostedService. Here are the details about it:
Consuming a scoped service in a background task (IHostedService)
Looking at the name of this service, I think what you need is a custom configuration provider that loads configuration from the database at startup (once only). Why don't you do something like the following instead? It is a better design, a more framework-compliant approach, and also something that you can build as a shared library that other people can benefit from (or that you can reuse in multiple projects).
public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .ConfigureAppConfiguration((context, config) =>
            {
                var builtConfig = config.Build();
                var persistentConfigBuilder = new ConfigurationBuilder();
                var connectionString = builtConfig["ConnectionString"];
                persistentConfigBuilder.AddPersistentConfig(connectionString);
                var persistentConfig = persistentConfigBuilder.Build();
                config.AddConfiguration(persistentConfig);
            });
}
Here - AddPersistentConfig is an extension method built as a library that looks like this.
public static class ConfigurationBuilderExtensions
{
    public static IConfigurationBuilder AddPersistentConfig(this IConfigurationBuilder configurationBuilder, string connectionString)
    {
        return configurationBuilder.Add(new PersistentConfigurationSource(connectionString));
    }
}

class PersistentConfigurationSource : IConfigurationSource
{
    public string ConnectionString { get; set; }

    public PersistentConfigurationSource(string connectionString)
    {
        ConnectionString = connectionString;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new PersistentConfigurationProvider(new DbContext(ConnectionString));
    }
}

class PersistentConfigurationProvider : ConfigurationProvider
{
    private readonly DbContext _context;

    public PersistentConfigurationProvider(DbContext context)
    {
        _context = context;
    }

    public override void Load()
    {
        // Using _context
        // Load Configuration as valuesFromDb
        // Set Data
        // Data = valuesFromDb.ToDictionary<string, string>...
    }
}
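Once the provider is wired in, the values loaded from the database are just regular configuration entries, so the service from the question could consume them through IConfiguration, roughly like this sketch (the "ConfigParam" key is assumed for illustration):

public class PersistedConfigurationService
{
    private readonly string _configParam;

    public PersistedConfigurationService(IConfiguration configuration)
    {
        // Reads the value that PersistentConfigurationProvider loaded at startup;
        // "ConfigParam" is an assumed key name.
        _configParam = configuration["ConfigParam"];
    }
}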

Use AsyncLocal to store request information?

We are starting with ASP.NET Core 2. We need a way for each element that is involved in a request to write a message to a message handler.
Some limitations:
We won't use HttpContext.Items (HttpContext is not available in the class that we are using inside the Controller, and we don't like to forward the whole context there).
We tried to use it without dependency injection because if we have multiple different services, we will have too many parameters in the constructors.
Must also work with async/await.
We tried an approach using AsyncLocal<T>.
For that we created a class:
public class NotificationExecutionContext
{
    private static readonly AsyncLocal<NotificationHandler> NotificationHandler =
        new AsyncLocal<NotificationHandler>();

    public static NotificationHandler Instance =>
        NotificationHandler.Value ?? (NotificationHandler.Value = new NotificationHandler());
}
There will be a NotificationHandler created, which should live per-request. The NotificationHandler is a simple class where you can add/get messages to/from a collection:
public class NotificationHandler : INotificationHandler
{
    public List<NotificationBase> Notifications { get; } = new List<NotificationBase>();

    public void AddNotification(NotificationBase notification)
    {
        Notifications.Add(notification);
    }

    public void AddNotificationRange(List<NotificationBase> notifications)
    {
        Notifications.AddRange(notifications);
    }
}
With this solution, I can easily get the NotificationHandler for this context and add a notification.
NotificationExecutionContext.Instance.AddNotification(new NotificationBase(){..})
Inside a middleware, we wait for the Response.OnStarting() event and then take all messages from the NotificationHandler and add them to the response header:
public async Task Invoke(HttpContext context)
{
    var e = NotificationExecutionContext.Instance; // Required so that notification handler will be created in this context

    context.Response.OnStarting((state) =>
    {
        List<NotificationBase> notifications = NotificationExecutionContext.Instance.Notifications;
        if (notifications.Count > 0)
        {
            string messageString = JsonConvert.SerializeObject(notifications, Formatting.None);
            context.Response.Headers.Add("NotificationHeader", messageString);
        }
        return Task.FromResult(0);
    }, null);

    await Next(context);
}
This code works, but are there pitfalls that we do not know? Or are there better solutions?
You should not use static singletons like that. Having static dependencies like that inside your code defeats the whole purpose of dependency injection. You should just embrace dependency injection here, which would make this super simple:
/* in Startup.ConfigureServices */

// register the notification handler as a scoped dependency; this automatically makes the
// instance shared per request but not outside of it
services.AddScoped<INotificationHandler, NotificationHandler>();

/* in Startup.Configure */

// register your custom middleware
app.UseMiddleware<NotificationHandlerMiddleware>();

public class NotificationHandlerMiddleware
{
    private readonly RequestDelegate _next;

    public NotificationHandlerMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Scoped services are injected into Invoke rather than the constructor,
    // because conventional middleware is constructed only once per application.
    public async Task Invoke(HttpContext context, INotificationHandler notificationHandler)
    {
        // do whatever with notificationHandler
        await _next(context);
    }
}
And that's all. There is no need to introduce statics; using full dependency injection makes your code completely testable and keeps all dependencies explicit.
We tried to use it without dependency injection because if we have multiple different services we will have too many parameters in the constructors.
Too many constructor parameters is a clear sign of a violation of the single responsibility principle. If you find that your services take many dependencies, you should consider splitting them up. You may also want to consider refactoring to facade services, roughly as sketched below.
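A facade service is just an ordinary class, registered in DI, that groups several closely related dependencies behind one constructor parameter (all names below are invented for illustration, except INotificationHandler from above):

public interface IPaymentService { /* ... */ }
public interface IShippingService { /* ... */ }

// The controller (or middleware) now depends on one facade instead of three services.
public class CheckoutFacade
{
    private readonly IPaymentService _payments;
    private readonly IShippingService _shipping;
    private readonly INotificationHandler _notifications;

    public CheckoutFacade(
        IPaymentService payments,
        IShippingService shipping,
        INotificationHandler notifications)
    {
        _payments = payments;
        _shipping = shipping;
        _notifications = notifications;
    }
}

// Registered like any other scoped service:
// services.AddScoped<CheckoutFacade>();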

Async WCF: wait for another call

We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more of this control can appear in the full system with different functionality).
So we made a plan, where everything looks great, except one thing. Here is a short summary:
Silverlight UserControl can be extended and manipulated via ContentPresenter at the UI and ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
This is going to be okay, I think. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back-end you can skip some actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. Ok, you add new properties to the inherited ViewModel, then you can bind to them; you have the additional data. When you save the base data and know it succeeded, you can start saving your additional data (the additional data can be anything, in a different table at the back-end for example). Fine, we've extended our UserControl and the back-end logic, and the original UserControl still doesn't know anything about our extension.
But we lost the transaction. For example, we can save the base data, but saving the additional data throws an exception; we then have the updated base data but nothing in the additional table. We really don't want this possibility, so I came up with this idea:
One WCF call should wait for the other at the back-end, and once both arrive, we can begin cross-thread communication between them and, of course, handle the base and the additional data in the same transaction, while the base component still doesn't know anything about the other (it just provides a hook to do something with it, but it doesn't know who is going to do it).
I made a very simplified proof of concept solution, this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
namespace MyService
{
    [ServiceContract]
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1
    {
        protected bool _sameArrived;
        protected Piece _same;

        [OperationContract]
        public Piece SendPiece(Piece piece)
        {
            _sameArrived = false;
            Mediator.Instance.WaitFor(piece, sameArrived);
            while (!_sameArrived)
            {
                Thread.Sleep(100);
            }
            return _same;
        }

        protected void sameArrived(Piece piece)
        {
            _same = piece;
            _sameArrived = true;
        }
    }
}
Piece (entity)
namespace MyService
{
    [DataContract]
    public class Piece
    {
        [DataMember]
        public long ID { get; set; }

        [DataMember]
        public string SameIdentifier { get; set; }
    }
}
Mediator
namespace MyService
{
    public sealed class Mediator
    {
        private static Mediator _instance;
        private static object syncRoot = new Object();

        private List<Tuple<Piece, Action<Piece>>> _waitsFor;

        private Mediator()
        {
            _waitsFor = new List<Tuple<Piece, Action<Piece>>>();
        }

        public static Mediator Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (syncRoot)
                    {
                        // Double-checked locking: re-test inside the lock so only
                        // one instance is ever created.
                        if (_instance == null)
                        {
                            _instance = new Mediator();
                        }
                    }
                }
                return _instance;
            }
        }

        public void WaitFor(Piece piece, Action<Piece> callback)
        {
            lock (_waitsFor)
            {
                var waiter = _waitsFor.Where(i => i.Item1.SameIdentifier == piece.SameIdentifier).FirstOrDefault();
                if (waiter != null)
                {
                    _waitsFor.Remove(waiter);
                    waiter.Item2(piece);
                    callback(waiter.Item1);
                }
                else
                {
                    _waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
                }
            }
        }
    }
}
And the client side code
namespace MyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Client c1 = new Client(new Piece()
            {
                ID = 1,
                SameIdentifier = "customIdentifier"
            });
            Client c2 = new Client(new Piece()
            {
                ID = 2,
                SameIdentifier = "customIdentifier"
            });

            c1.SendPiece();
            Console.WriteLine("Press return to send the second piece");
            Console.ReadLine();

            c2.SendPiece();
            Console.ReadLine();
        }
    }

    class Client
    {
        protected Piece _piece;
        protected Service1Client _service;

        public Client(Piece piece)
        {
            _piece = piece;
            _service = new Service1Client();
        }

        public void SendPiece()
        {
            Console.WriteLine("{0} send begins", _piece.ID);
            _service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
        }

        protected void sendPieceCallback(IAsyncResult result)
        {
            Piece returnedPiece = _service.EndSendPiece(result);
            Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
        }
    }
}
So is it a good idea to wait for another WCF call (which may or may not be invoked, so in a real example it would be more complex) and process them together with cross-thread communication? Or not, and should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, that is, the Managed Extensibility Framework.
For using MEF with silverlight see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
I would not wait for 2 WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge, that two services should be called together, in the client
I would call a single service that aggregates the two services.
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.
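As an illustration of that idea, one could define a combined contract where the base data and any extension payloads travel in a single call, so the server can persist both inside one transaction without any cross-call coordination (all names below are invented for illustration):

[DataContract]
public class CompositeSaveRequest
{
    [DataMember]
    public Piece BaseData { get; set; }

    // Loosely typed extension payloads, keyed by the extension that produced them;
    // the server routes each entry to the module that knows how to handle it.
    [DataMember]
    public Dictionary<string, string> ExtensionPayloads { get; set; }
}

[ServiceContract]
public interface ICompositeSaveService
{
    [OperationContract]
    void Save(CompositeSaveRequest request);
}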
