Autofac: resolve dependent services by name - C#

Is it possible to register a single service that has dependencies that can change depending on a setting?
For instance:
A DbExecutor requires a different DbConnection object depending on which geographical region it is running under.
I've tried something like
builder.RegisterType<DbConnection>().Named<IDbConnection>("US")
builder.RegisterType<DbConnection>().Named<IDbConnection>("AU")
builder.RegisterType<SqlExecutor>().As<IDbExecutor>();
and I'd like to resolve the service with something like
var au = container.ResolveNamed<IDbExecutor>("AU");
var us = container.ResolveNamed<IDbExecutor>("US");
However, this doesn't work because the IDbExecutor itself hasn't been registered with a key, and if I try a normal Resolve it won't work as it cannot create the dependent services.
Basically I just want an instance of IDbExecutor with a DbConnection based upon a certain parameter.
I'm trying to do this in a more general sense so I'm trying to avoid any specific code where I can.
The current generic code I have that doesn't use keyed services looks like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType);
where JobType is a class Type and depending on if this is possible the final version would look something like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType, bundle.JobDetail.JobDataMap["Region"]);
where bundle.JobDetail.JobDataMap["Region"] would return either "AU" or "US"

You won't be able to rig it to resolve a named IDbExecutor because you didn't register it as named. It's also probably not the best idea since it implies that IDbExecutor somehow "knows" about its dependencies, which it shouldn't - the implementation knows, but the interface/service doesn't - and shouldn't.
You can get something close to what you want by updating the SqlExecutor to use the IIndex<K, V> relationship in Autofac. Instead of taking just an IDbConnection in your constructor, take an IIndex<string, IDbConnection>.
When you need to get the connection, look it up from the indexed dictionary using the job type:
public class SqlExecutor
{
    private readonly IIndex<string, IDbConnection> _connections;

    public SqlExecutor(IIndex<string, IDbConnection> connections)
    {
        this._connections = connections;
    }

    public void DoWork(string jobType)
    {
        var connection = this._connections[jobType];
        // do something with the connection
    }
}
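For reference, a set of registrations that fits the above could look roughly like this (a sketch only: the SqlConnection instances and connection strings are stand-ins, and IDbExecutor is assumed to expose the DoWork method shown above; any Named/Keyed IDbConnection registration is picked up by IIndex<string, IDbConnection> automatically):
var builder = new ContainerBuilder();

// Named connections, one per region (placeholder connection strings).
builder.Register(c => new SqlConnection("<US connection string>")).Named<IDbConnection>("US");
builder.Register(c => new SqlConnection("<AU connection string>")).Named<IDbConnection>("AU");
builder.RegisterType<SqlExecutor>().As<IDbExecutor>();

var container = builder.Build();

// A single executor is resolved; the region is chosen at call time.
var executor = container.Resolve<IDbExecutor>();
executor.DoWork("AU");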
Another way to do it would be to create a delegate factory for the SqlExecutor that takes in the job type and automatically picks the right named service. That's a bit more involved so check out the documentation for an example.

Related

Hangfire - Configure AutomaticRetry for specific RecurringJob at runtime

I'm using Hangfire v1.7.9 and I'm trying to configure a series of recurring background jobs within my MVC 5 application to automate the retrieval of external reference data into the application. I've tested this with one task and this works great, but I'd like administrators within the system to be able to configure the Attempts and DelayInSeconds attribute parameters associated with the method that is called in these background jobs.
The AutomaticRetryAttribute states that you have to use...
...a constant expression, typeof expression or an array creation expression of an attribute parameter type
... which from what I've read is typical of all Attributes. However, this means that I can't achieve my goal by setting a property value elsewhere and then referencing that in the class that contains the method I want to run.
Additionally, it doesn't look like there is any way to configure the automatic retry properties in the BackgroundJob.Enqueue or RecurringJob.AddOrUpdate methods. Lastly, I looked at whether you could utilise a specific retry count for each named Queue, but alas the only thing about Hangfire queues you can configure is their names, via the BackgroundJobServerOptions class when the Hangfire server is initialised.
Have I exhausted every avenue here? The only other thing I can think of is to create my own implementation of the AutomaticRetryAttribute and set the values at compile time by using an int enum, though that in itself would create an issue in the sense that I would need to provide a defined list of each of the values that a user would need to select. Since I want the retry delay to be configurable from 5 minutes all the way up to 1440 minutes (24 hours), I really don't want a huge, lumbering enum : int with every available value. Has anyone ever encountered this issue, or is this something I should submit as a request on the Hangfire GitHub?
I would take the approach of making a custom attribute that decorates AutomaticRetryAttribute:
public class MyCustomRetryAttribute : JobFilterAttribute, IElectStateFilter, IApplyStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        GetAutomaticRetryAttribute().OnStateElection(context);
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateApplied(context, transaction);
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateUnapplied(context, transaction);
    }

    private AutomaticRetryAttribute GetAutomaticRetryAttribute()
    {
        // Somehow instantiate AutomaticRetryAttribute with a dynamically fetched/set `Attempts` value
        return new AutomaticRetryAttribute { Attempts = /**/ };
    }
}
Edit: To clarify, this method allows you to reuse AutomaticRetryAttribute's logic without duplicating it. However, if you need to change more aspects on a per-job basis, you may need to duplicate the logic inside your own attribute.
Also, you can use context.GetJobParameter<T> to store arbitrary data on a per-job basis.
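For example, the Attempts value could be read from a job parameter inside the filter (a sketch only; "RetryAttempts" is a made-up parameter name, and the fallback value is arbitrary):
public void OnStateElection(ElectStateContext context)
{
    // "RetryAttempts" is a hypothetical job parameter set elsewhere, e.g. via
    // context.SetJobParameter when the job is created, or from your own settings store.
    var attempts = context.GetJobParameter<int?>("RetryAttempts");
    new AutomaticRetryAttribute { Attempts = attempts ?? 5 }.OnStateElection(context);
}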

Injecting a factory with multiple constructor parameters

Initially I needed only one queue to be created by the MessageQueueFactory:
container.RegisterSingleton<IMessageQueueFactory>(() => {
    var uploadedWaybillsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];

    return new MessageQueueFactory(uploadedWaybillsQueuePath);
});
Now that requirements have changed there's a need to support several queues.
The simplest thing I can do here is to add other paths (stored in app.config) to the factory's constructor and provide methods for each queue:
container.RegisterSingleton<IMessageQueueFactory>(() => {
    var uploadedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];
    var requestedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:RequestedDocumentsQueuePath"];

    return new MessageQueueFactory(
        uploadedDocsQueuePath,
        requestedDocsQueuePath
    );
});

interface IMessageQueueFactory {
    MessageQueue CreateUploadedDocsQueue();
    MessageQueue CreateRequestedDocsQueue();
}
Is it a poor design? How can it be refactored?
I wouldn't consider this bad design. You need to provide the queue name, and having it as an appSetting makes it easier to update if you need to.
It also feels like the path of least friction, which is always good. However, I don't quite like it, because every time you add a new queue you have to change the interface, and that's not so nice.
I found this post with some answers that might interest you:
IoC - Multiple implementations support for a single interface
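One way to avoid touching the interface for every new queue is to key the factory by queue name and keep the path lookup inside it. A rough sketch under that assumption (the appSetting naming convention below is made up, and the registration mirrors the Simple Injector-style snippets above):
interface IMessageQueueFactory {
    MessageQueue Create(string queueName);
}

class MessageQueueFactory : IMessageQueueFactory {
    public MessageQueue Create(string queueName) {
        // Hypothetical convention: "UploadedDocuments" -> appSetting "msmq:UploadedDocumentsQueuePath".
        var path = ConfigurationManager.AppSettings["msmq:" + queueName + "QueuePath"];
        return new MessageQueue(path);
    }
}

container.RegisterSingleton<IMessageQueueFactory>(() => new MessageQueueFactory());
Adding a queue then only means adding an app.config entry, at the cost of losing a compile-time checked method per queue.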

Multiple class instantiation or change public property in ASP.NET MVC

I need to access my business layer object 4 times with different constructor arguments.
Specifically, I need to access 4 different back-end systems through my separate Data Access Layer.
What should I do:
1) Instantiate 4 separate objects with different constructor arguments?
2) Instantiate one object and change the public property every time?
As it is now, in my HomeController I have the following:
var obj = new BarcodeBLL(new ERPConfig
{
    AS400ControlLibrary = ConfigurationManager.AppSettings["ControlLibrary"],
    AS400Library = ConfigurationManager.AppSettings["DataLibrary"],
    ConnectionString = ConfigurationManager.ConnectionStrings["AS400"].ConnectionString
});
To me it would seem obvious to follow #2, but I would like to know if I am correct and why.
If you have 4 identical systems, it would seem logical to have a single class representing such systems. When you need access to one of these systems, you instantiate this type, passing the correct connection string to the constructor.
You may want to hide the details of which connection string is actually being used behind a factory or in the configuration of a DI container.
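A rough sketch of that idea (the BackendSystem enum and the per-system setting names are invented for illustration; BarcodeBLL and ERPConfig come from the question):
public enum BackendSystem { Erp1, Erp2, Erp3, Erp4 }

public class BarcodeBLLFactory
{
    public BarcodeBLL Create(BackendSystem system)
    {
        // Hypothetical convention: settings are prefixed with the system name,
        // e.g. "Erp1:ControlLibrary", "Erp1:DataLibrary", and a connection string named "Erp1".
        var prefix = system.ToString();
        return new BarcodeBLL(new ERPConfig
        {
            AS400ControlLibrary = ConfigurationManager.AppSettings[prefix + ":ControlLibrary"],
            AS400Library = ConfigurationManager.AppSettings[prefix + ":DataLibrary"],
            ConnectionString = ConfigurationManager.ConnectionStrings[prefix].ConnectionString
        });
    }
}
The controller then asks the factory for the system it needs instead of newing up BarcodeBLL with raw configuration values.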

Passing a Data Object Between Dependent Factories

I'm currently using an IoC container, Unity, for my program.
I have multiple chained factories, one calling the next to create an object it needs for populating a property. All the factories use the same raw data object in order to build their respective objects. The raw data object describes how to create all the various objects. Currently each factory has a Create method that takes in a couple of parameters to state what location the object represents.
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
Injecting the object into the Create() methods seems to be more procedural than object-oriented. However, if I inject the object into each factory's constructor, then how would I resolve each factory correctly? Not to mention that these factories need to be able to work on different raw data objects. Maybe there is a better architecture overall?
Below represents the type of structure I have, minus passing the raw object anywhere.
class PhysicalObjectFactory
{
    private readonly StructureAFactory _structureAFactory;
    private readonly Parser _parser;

    public PhysicalObjectFactory(StructureAFactory structureAFactory, Parser parser)
    {
        _structureAFactory = structureAFactory;
        _parser = parser;
    }

    public PhysicalObject CreatePhysicalObject()
    {
        RawDataObject rawDataObject = _parser.GetFromFile("foo.txt");
        // do stuff
        PhysicalObject physicalObject = new PhysicalObject();
        physicalObject.StructureA = _structureAFactory.Create(num1, num2);
        // do more stuff
        return physicalObject;
    }
}

class StructureAFactory
{
    private readonly StructureBFactory _structureBFactory;

    public StructureAFactory(StructureBFactory structureBFactory)
    {
        _structureBFactory = structureBFactory;
    }

    public StructureA Create(int a, int b)
    {
        // do stuff
        StructureA structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(num76, num33);
        // do more stuff
        return structureA;
    }
}

class StructureBFactory
{
    public StructureBFactory() { }

    public StructureB Create(int a, int b)
    {
        StructureB structureB = new StructureB();
        // do stuff
        return structureB;
    }
}
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
In general you should pass in runtime data through methods and compile-time/design-time/configuration data through constructor injection.
Your services are composed at a different moment in time than when they are used. Those services can live for a long time, which means they can be used many times with different runtime values. If you make this distinction between runtime data and data that doesn't change throughout the lifetime of the service, your options become much clearer.
So the question is whether this raw data you're passing in changes on each call or is fixed. Perhaps it is partially fixed. In that case you should separate the data: only pass the runtime data on through the Create methods. Since the factories are chained, the data they need to create their part of the object is passed on to them through their Create method, as sketched below.
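Applied to the question's factories, that could look roughly like this (the FactorySettings type is made up to stand in for whatever fixed configuration a factory needs; the raw data travels through Create, and StructureBFactory.Create is assumed to be changed the same way):
class StructureAFactory
{
    private readonly StructureBFactory _structureBFactory;
    private readonly FactorySettings _settings; // fixed at composition time (hypothetical type)

    public StructureAFactory(StructureBFactory structureBFactory, FactorySettings settings)
    {
        _structureBFactory = structureBFactory;
        _settings = settings;
    }

    // Runtime data travels through the call, not the constructor.
    public StructureA Create(RawDataObject rawData, int a, int b)
    {
        var structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(rawData, a, b);
        return structureA;
    }
}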
Sometimes, however, you've got some data that's in between: data that will change during the lifetime of the application, but that you don't want to pass on through method calls, because it's not up to the caller to determine what those values are. This is contextual information. A clear example is information about the logged-in user executing the request. You don't want the caller (for instance your presentation layer) to pass that information on, since this is extra work and a potential security risk if the presentation layer forgets to pass this information on, or accidentally passes on some invalid value.
In that case the most common solution is to inject a service that provides consumers with this information. In the case of the user information you would inject an IUserContext service that contains a UserName or UserId property, perhaps an IsInRole(string) method or something similar. The trick here is that it is not the user information itself that is injected into a consumer, but a service that allows access to this information. In other words, the retrieval of the user information is deferred. This allows the composed object graph to stay independent of that contextual information, which makes it easier to compose and validate the object graph.
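A minimal sketch of such a context service (the names are illustrative; this particular implementation just reads the current HttpContext):
public interface IUserContext
{
    string UserName { get; }
    bool IsInRole(string role);
}

// An ASP.NET-flavoured implementation; swap in whatever supplies the current user.
public class HttpUserContext : IUserContext
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }

    public bool IsInRole(string role)
    {
        return HttpContext.Current.User.IsInRole(role);
    }
}
A consumer takes IUserContext in its constructor and queries it at call time, so the object graph itself never has to carry request-specific state.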

In domain-driven design, would it be a violation of DDD to put calls to other objects' repositories in a domain object?

I'm currently refactoring some code on a project that is wrapping up, and I ended up putting a lot of business logic into service classes rather than in the domain objects. At this point most of the domain objects are data containers only. I had decided to write most of the business logic in service objects and refactor everything afterwards into better, more reusable, and more readable shapes. That way I could decide what code should be placed into domain objects, which code should be spun off into new objects of their own, and what code should be left in a service class. So I have some code:
public decimal CalculateBatchTotal(VendorApplicationBatch batch)
{
    IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;

    return total;
}
This code seems like it would make a good addition to a domain object, because its only input parameter is the domain object itself. It seems like a perfect candidate for some refactoring. The only problem is that this code calls another object's repository, which makes me want to leave it in the service class.
My questions are thus:
Where would you put this code?
Would you break this function up?
Where would someone who's following strict Domain-Driven design put it?
Why?
Thanks for your time.
Edit Note: Can't use an ORM on this one, so I can't use a lazy loading solution.
Edit Note2: I can't alter the constructor to take in parameters, because of how the would-be data layer instantiates the domain objects using reflection (not my idea).
Edit Note3: I don't believe that a batch object should be able to total just any list of applications, it seems like it should only be able to total applications that are in that particular batch. Otherwise, it makes more sense to me to leave the function in the service class.
You shouldn't even have access to the repositories from the domain object.
What you can do is either let the service give the domain object the appropriate info or have a delegate in the domain object which is set by a service or in the constructor.
public DomainObject(Func<int, IList<VendorApplication>> getApplicationsByBatchId)
{
    ...
}
I'm no expert on DDD, but I remember an article from the great Jeremy Miller that answered this very question for me. You would typically want logic related to your domain objects inside those objects, but your service class would execute the methods that contain this logic. This helped me push domain-specific logic into the entity classes and keep my service classes less bulky (as I found myself putting too much logic inside the service classes, like you mentioned).
Edit: Example
I use the Enterprise Library for simple validation, so in the entity class I will set an attribute like so:
[StringLengthValidator(1, 100)]
public string Username {
    get { return mUsername; }
    set { mUsername = value; }
}
The entity inherits from a base class that has the following "IsValid" method, which ensures each object meets the validation criteria:
public bool IsValid()
{
    mResults = new ValidationResults();
    Validate(mResults);
    return mResults.IsValid();
}

[SelfValidation()]
public virtual void Validate(ValidationResults results)
{
    if (!object.ReferenceEquals(this.GetType(), typeof(BusinessBase<T>))) {
        Validator validator = ValidationFactory.CreateValidator(this.GetType());
        results.AddAllResults(validator.Validate(this));
    }

    // before we return the bool value, if we have any validation results map them into the
    // broken rules property so the parent class can display them to the end user
    if (!results.IsValid()) {
        mBrokenRules = new List<BrokenRule>();
        foreach (Microsoft.Practices.EnterpriseLibrary.Validation.ValidationResult result in results) {
            mRule = new BrokenRule();
            mRule.Message = result.Message;
            mRule.PropertyName = result.Key.ToString();
            mBrokenRules.Add(mRule);
        }
    }
}
Next we need to execute this "IsValid" method in the service class save method, like so:
public void SaveUser(User UserObject)
{
    if (UserObject.IsValid()) {
        mRepository.SaveUser(UserObject);
    }
}
A more complex example might be a bank account. The deposit logic will live inside the account object, but the service class will call this method.
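As a rough sketch (the Account entity and the IAccountRepository with GetById/Save are invented here for illustration):
public class Account
{
    public decimal Balance { get; private set; }

    // Domain logic lives on the entity.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit amount must be positive.", "amount");
        Balance += amount;
    }
}

public class AccountService
{
    private readonly IAccountRepository mRepository;

    public AccountService(IAccountRepository repository)
    {
        mRepository = repository;
    }

    // The service orchestrates: load, call the entity's behaviour, persist.
    public void Deposit(int accountId, decimal amount)
    {
        Account account = mRepository.GetById(accountId);
        account.Deposit(amount);
        mRepository.Save(account);
    }
}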
Why not pass in an IList<VendorApplication> as the parameter instead of a VendorApplicationBatch? The calling code for this presumably would come from a service which would have access to the AppRepo. That way your repository access will be up where it belongs while your domain function can remain blissfully ignorant of where that data came from.
As I understand it (not enough info to know if this is the right design) VendorApplicationBatch should contain a lazy loaded IList inside the domain object, and the logic should stay in the domain.
For Example (air code):
public class VendorApplicationBatch {
    private IList<VendorApplication> Applications { get; set; }

    public decimal CalculateBatchTotal()
    {
        if (Applications == null || Applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in Applications)
            total += app.Amount;

        return total;
    }
}
This is easily done with an ORM like NHibernate and I think it would be the best solution.
It seems to me that your CalculateBatchTotal is a service for collections of VendorApplications, and that returning the collection of VendorApplications for a Batch fits naturally as a property of the Batch class. So some other service/controller/whatever would retrieve the appropriate collection of VendorApplications from a batch and pass them to the VendorApplicationTotalCalculator service (or something similar). But that may break some DDD aggregate root service rules or some such thing I'm ignorant of (DDD novice).
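In code, that might look roughly like this (the class and method names are invented; the calling service would fetch the applications itself, e.g. via AppRepo.GetByBatchId, before handing them over):
public class VendorApplicationTotalCalculator
{
    public decimal CalculateTotal(IList<VendorApplication> applications)
    {
        if (applications == null || applications.Count == 0)
            throw new ArgumentException("There were no applications to total.");

        decimal total = 0m;
        foreach (VendorApplication app in applications)
            total += app.Amount;

        return total;
    }
}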
