Initially I needed only one queue to be created by the MessageQueueFactory:
container.RegisterSingleton<IMessageQueueFactory>(() => {
    var uploadedWaybillsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];
    return new MessageQueueFactory(uploadedWaybillsQueuePath);
});
Now that requirements have changed there's a need to support several queues.
The simplest thing I can do here is to add other paths (stored in app.config) to the factory's constructor and provide methods for each queue:
container.RegisterSingleton<IMessageQueueFactory>(() => {
    var uploadedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];
    var requestedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:RequestedDocumentsQueuePath"];
    return new MessageQueueFactory(
        uploadedDocsQueuePath,
        requestedDocsQueuePath
    );
});
interface IMessageQueueFactory {
    MessageQueue CreateUploadedDocsQueue();
    MessageQueue CreateRequestedDocsQueue();
}
Is it a poor design? How can it be refactored?
I wouldn't consider this bad design. You need to provide the queue names, and keeping them in appSettings makes them easy to update when you need to.
It also feels like the path of least friction, which is always good. However, I don't quite like it, because every time you add a new queue you have to change the interface, and that's not so nice.
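For what it's worth, one way around that is to key the factory by queue name instead of by method. A minimal sketch, assuming a hypothetical CreateQueue method and a "msmq:<name>QueuePath" appSettings convention (neither is in the original code):
interface IMessageQueueFactory {
    MessageQueue CreateQueue(string name);
}

class MessageQueueFactory : IMessageQueueFactory {
    public MessageQueue CreateQueue(string name) {
        // Hypothetical convention: each queue's path lives under
        // an appSettings key of the form "msmq:<name>QueuePath".
        var path = ConfigurationManager
            .AppSettings["msmq:" + name + "QueuePath"];
        return new MessageQueue(path);
    }
}
With that shape, callers write factory.CreateQueue("UploadedDocuments"), and adding a queue means adding a config entry rather than changing the interface.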
I found this post with some answers that might interest you:
IoC - Multiple implementations support for a single interface
I am still new to DI and unit tests. I have been tasked with adding unit tests to some of our legacy code. I am working on a WCF web service. A lot of refactoring has had to be done: monster classes split into separate classes that make sense, monster methods split into multiple methods, and lastly, interfaces created for external dependencies. This was done initially to facilitate mocking those dependencies for unit tests.
As I have gone about this, the list of dependencies keeps growing and growing. I have other web services to call, SQL Server and DB2 databases to interact with, a config file to read, a log to write to, and data to read from SharePoint. So far I have 10 dependencies, and every time I add one it breaks all my unit tests, since there is a new parameter in the constructor.
If it helps, I am using .NET 4.5, Castle Windsor as my IoC container, MSTest, and Moq for testing.
I have looked at How to avoid Dependency Injection constructor madness? but it doesn't provide any real solution beyond saying "your class may be doing too much." I looked into Facade and Aggregate Services, but that seemed to just move where things were.
So I need some help on how to make this class do "less" but still provide the same output.
public AccountServices(ISomeWebServiceProvider someWebServiceProvider,
                       ISomeOtherWebProvider someOtherWebProvider,
                       IConfigurationSettings configurationSettings,
                       IDB2Connect dB2Connect,
                       IDB2SomeOtherData dB2SomeOtherData,
                       IDB2DatabaseData dB2DatabaseData,
                       ISharepointServiceProvider sharepointServiceProvider,
                       ILoggingProvider loggingProvider,
                       IAnotherProvider AnotherProvider,
                       ISQLConnect SQLConnect)
{
    _configurationSettings = configurationSettings;
    _someWebServiceProvider = someWebServiceProvider;
    _someOtherWebProvider = someOtherWebProvider;
    _dB2Connect = dB2Connect;
    _dB2SomeOtherData = dB2SomeOtherData;
    _dB2DatabaseData = dB2DatabaseData;
    _sharepointServiceProvider = sharepointServiceProvider;
    _loggingProvider = loggingProvider;
    _AnotherProvider = AnotherProvider;
    _SQLConnect = SQLConnect;
}
Almost all of these live in other components, but I need to be able to use them in the main application and mock them in unit tests.
Here is how one of the methods is laid out.
public ExpectedResponse GetAccountData(string accountNumber)
{
    // Get needed config settings
    ...
    // Do some data validation before processing data
    ...
    // Try to retrieve data from DB2
    ...
    // Try to retrieve data from SharePoint
    ...
    // Map data to response
    ...
    // If error, handle it and write error to log
}
Other methods are very similar but they may be reaching out to SQL Server or one or more web services.
Ideally, what I would like is an example of an application that has many dependencies and unit tests, yet avoids having to add a new dependency to the constructor and update all the unit tests just to add the new parameter.
Thanks
Not sure if this helps, but I came up with a pattern I called GetTester that wraps the constructor and makes handling the parameters a little easier. Here's an example:
private SmartCache GetTester(out Mock<IMemoryCache> memory, out Mock<IRedisCache> redis)
{
memory = new Mock<IMemoryCache>();
redis = new Mock<IRedisCache>();
return new SmartCache(memory.Object, redis.Object);
}
Callers look like this if they need all the mocks:
SmartCache cache = GetTester(out Mock<IMemoryCache> memory, out Mock<IRedisCache> redis);
Or like this if they don't:
SmartCache cache = GetTester(out _, out _);
These still break if you have constructor changes, but you can create overloads to minimize the changes to tests. It's a hassle but easier than it would otherwise be.
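For example, if a hypothetical third dependency (INotifier here, purely illustrative) is added to SmartCache, the old two-parameter overload can delegate to the new one, so existing tests keep compiling:
private SmartCache GetTester(out Mock<IMemoryCache> memory, out Mock<IRedisCache> redis)
{
    // Old signature preserved; the new mock is created and discarded.
    return GetTester(out memory, out redis, out _);
}

private SmartCache GetTester(out Mock<IMemoryCache> memory, out Mock<IRedisCache> redis,
                             out Mock<INotifier> notifier)
{
    memory = new Mock<IMemoryCache>();
    redis = new Mock<IRedisCache>();
    notifier = new Mock<INotifier>();
    return new SmartCache(memory.Object, redis.Object, notifier.Object);
}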
So possibly your classes might be doing too much. If you find that you're constantly increasing the work a class does, and as a result need to hand it additional objects for those additional tasks, then that is probably the issue, and you should consider breaking up the work.
However, if this isn't the case, another option is to have the class take a reference to a single dependency object that provides access to the instantiated concrete objects implementing your various interfaces, or to factory objects that can construct them. Then, instead of constantly adding parameters, you pass that one object and pull or create what you need from it.
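A rough sketch of that idea against the question's constructor (the aggregate's name and shape are hypothetical): the dependency object exposes the existing interfaces as read-only properties and becomes the only constructor parameter:
public interface IAccountServiceDependencies
{
    IConfigurationSettings Configuration { get; }
    IDB2Connect DB2Connect { get; }
    ILoggingProvider Logging { get; }
    // ... and so on for the remaining providers
}

public class AccountServices
{
    private readonly IAccountServiceDependencies _deps;

    public AccountServices(IAccountServiceDependencies deps)
    {
        _deps = deps;
    }
}
An eleventh dependency then becomes a new property on the aggregate rather than a new constructor parameter, so tests that mock IAccountServiceDependencies don't all break at once.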
I am writing a service in .NET/C# that receives updates on items.
When an item is updated, I want to perform several actions, and more actions will come in the future. I want to decouple the actions from the event through some common pattern. My brain says IoC, but I am having a hard time finding what I seek.
Basically, what I am thinking is having the controller that receives the updated item go through some config for actions that subscribe to the event, and call them all. Surely there exists some lightweight, fast, and easy-to-use framework for this that's still up to date.
What would you recommend, and why? (Emphasis on easy to use and fast.)
There is a pattern that uses a DI container (IoC, yes). There are lots of DI containers; I personally use Castle Windsor, but any container should do the trick.
For example, you can create an interface like this:
public interface ISomeAction
{
    void Execute(SomeArg arg);
}
And, when you need to call all the actions, you can use it like this:
var actions = container.ResolveAll<ISomeAction>();
foreach (var action in actions)
{
    action.Execute(arg);
}
To "register" an action, you need to do something like this:
container.Register(Component.For<ISomeAction>().ImplementedBy<ConcreteAction>());
You can use XML for this, if you want.
It's a very flexible solution: you can use it in any part of your app, and you can put it in place "for the future" even before anything needs to subscribe. If someone adds a lib to your project, they will be able to subscribe to your event by simply registering their implementation.
If you want to use .NET events, then Castle Windsor has an interesting facility:
https://github.com/castleproject/Windsor/blob/master/docs/event-wiring-facility.md
In case anyone needs it, here's how to do it in StructureMap:
var container = new Container(_ =>
{
    _.Scan(x =>
    {
        x.TheCallingAssembly();
        x.WithDefaultConventions();
        x.AddAllTypesOf<IUpdateAction>();
    });
});
var actions = container.GetAllInstances<IUpdateAction>();
foreach (IUpdateAction action in actions)
{
    action.Execute(updatedItem);
}
Is it possible to register a single service whose dependencies can change depending on a setting?
For instance:
A DbExecutor requires a different DbConnection object depending on which geographical region it is running under.
I've tried something like
builder.RegisterType<DbConnection>().Named<IDbConnection>("US")
builder.RegisterType<DbConnection>().Named<IDbConnection>("AU")
builder.RegisterType<SqlExecutor>().As<IDbExecutor>();
and I'd like to resolve the service with something like
var au = container.ResolveNamed<IDbExecutor>("AU");
var us = container.ResolveNamed<IDbExecutor>("US");
However, this doesn't work because IDbExecutor itself hasn't been registered with a key, and if I try a normal Resolve it won't work, as it cannot create the dependent services.
Basically, I just want an instance of IDbExecutor with a DbConnection chosen by a certain parameter.
I'm trying to do this in a more general sense so I'm trying to avoid any specific code where I can.
The current generic code I have that doesn't use keyed services looks like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType);
where JobType is a class Type and, depending on whether this is possible, the final version would look something like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType, bundle.JobDetail.JobDataMap["Region"]);
where bundle.JobDetail.JobDataMap["Region"] would return either "AU" or "US"
You won't be able to rig it to resolve a named IDbExecutor because you didn't register it as named. It's also probably not the best idea since it implies that IDbExecutor somehow "knows" about its dependencies, which it shouldn't - the implementation knows, but the interface/service doesn't - and shouldn't.
You can get something close to what you want by updating the SqlExecutor to use the IIndex<X,B> relationship in Autofac. Instead of taking just an IDbConnection in your constructor, take an IIndex<string,IDbConnection>.
When you need to get the connection, look it up from the indexed dictionary using the job type:
public class SqlExecutor
{
    private IIndex<string, IDbConnection> _connections;

    public SqlExecutor(IIndex<string, IDbConnection> connections)
    {
        this._connections = connections;
    }

    public void DoWork(string jobType)
    {
        var connection = this._connections[jobType];
        // do something with the connection
    }
}
Another way to do it would be to create a delegate factory for the SqlExecutor that takes in the job type and automatically picks the right named service. That's a bit more involved so check out the documentation for an example.
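For instance, here is a hedged sketch of that direction using a hand-registered Func rather than Autofac's generated delegate factories, and assuming a SqlExecutor constructor that takes an IDbConnection directly (unlike the IIndex version above):
builder.Register<Func<string, IDbExecutor>>(c =>
{
    // Capture a context that outlives this registration lambda.
    var context = c.Resolve<IComponentContext>();
    return region => new SqlExecutor(context.ResolveNamed<IDbConnection>(region));
});

// Usage: pass "AU" or "US" at resolve time.
var createExecutor = container.Resolve<Func<string, IDbExecutor>>();
var au = createExecutor("AU");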
I'm using Quartz.Net (version 2) for running a method in a class every day at 8:00 and 20:00 (IntervalInHours = 12)
Everything is OK since I used the same job and triggers as in the Quartz.Net tutorials, but I need to pass some arguments into the class and run the method based on those arguments.
Can anyone help me with how to use arguments with Quartz.Net?
You can use JobDataMap
jobDetail.JobDataMap["jobSays"] = "Hello World!";
jobDetail.JobDataMap["myFloatValue"] = 3.141f;
jobDetail.JobDataMap["myStateData"] = new ArrayList();
public class DumbJob : IJob
{
    // Quartz.NET 2.x signature: the context parameter is IJobExecutionContext,
    // and the job's name/group live on JobDetail.Key.
    public void Execute(IJobExecutionContext context)
    {
        string instName = context.JobDetail.Key.Name;
        string instGroup = context.JobDetail.Key.Group;
        JobDataMap dataMap = context.JobDetail.JobDataMap;

        string jobSays = dataMap.GetString("jobSays");
        float myFloatValue = dataMap.GetFloat("myFloatValue");
        ArrayList state = (ArrayList) dataMap["myStateData"];
        state.Add(DateTime.UtcNow);

        Console.WriteLine("Instance {0} of DumbJob says: {1}", instName, jobSays);
    }
}
To expand on @ArsenMkrt's answer, if you're doing the 2.x-style fluent job config, you could load up the JobDataMap like this:
var job = JobBuilder.Create<MyJob>()
    .WithIdentity("job name")
    .UsingJobData("x", x)
    .UsingJobData("y", y)
    .Build();
Abstract
Let me extend @arsen-mkrtchyan's post a bit with a significant note that might help avoid painful maintenance of Quartz code in production:
Problem (for a persistent JobStore)
Please remember about JobDataMap versioning if you're using a persistent JobStore, e.g. AdoJobStore.
Summary (TL;DR)
Think carefully about constructing/editing your JobData; otherwise it will lead to issues when triggering future jobs.
Enable the “quartz.jobStore.useProperties” config parameter, as the official documentation recommends, to minimize versioning problems, and use JobDataMap.PutAsString() later.
Details
It's also stated in the documentation, though not very highlighted, and it might lead to a big maintenance problem if, e.g., you remove some parameter in the next version of your app:
If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the object in it will be serialized, and they therefore become prone to class-versioning problems.
There is also a related note about configuring the JobStore in the relevant document:
The “quartz.jobStore.useProperties” config parameter can be set to “true” (defaults to false) in order to instruct AdoJobStore that all values in JobDataMaps will be strings, and therefore can be stored as name-value pairs, rather than storing more complex objects in their serialized form in the BLOB column. This is much safer in the long term, as you avoid the class versioning issues that there are with serializing your non-String classes into a BLOB.
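Putting both recommendations together, a minimal sketch (assuming Quartz.NET 2.x; NameValueCollection comes from System.Collections.Specialized):
// Instruct AdoJobStore to store JobDataMap values as name-value pairs.
var properties = new NameValueCollection();
properties["quartz.jobStore.useProperties"] = "true";
var schedulerFactory = new StdSchedulerFactory(properties);

// Store job data in string form so nothing is serialized into a BLOB.
jobDetail.JobDataMap.PutAsString("myFloatValue", 3.141f);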
In a previous question, one of the comments from Dr. Herbie on the accepted answer was that my method was performing two responsibilities: changing data and saving data.
What I'm trying to figure out is the best way to separate these concerns in my situation.
Carrying on with my example of having a Policy object which is retrieved via NHibernate....
The way I'm currently setting the policy to inactive is as follows:
Policy policy = new Policy();
policy.Status = Active;
policyManager.Inactivate(policy);
//method in PolicyManager which has data access and update responsibility
public void Inactivate(Policy policy)
{
    policy.Status = Inactive;
    Update(policy);
}
If I were to separate the responsibility of data access and data update what would be the best way to go about it?
Is it better to have the PolicyManager (which acts as the gateway to the dao) manage the state of the Policy object:
Policy policy = new Policy();
policy.Status = Active;
policyManager.Inactivate(policy);
policyManager.Update(policy);
//method in PolicyManager
public void Inactivate(Policy policy)
{
    policy.Status = Inactive;
}
Or to have the Policy object maintain its own state and then use the manager class to save the information to the database:
Policy policy = new Policy();
policy.Status = Active;
policy.Inactivate();
policyManager.Update(policy);
//method in Policy
public void Inactivate()
{
    this.Status = Inactive;
}
What I would do:
Create a repository which saves and retrieves Policies. (PolicyRepository)
If you have complex logic that must be performed to activate / deactivate a policy, you could create a Service for that. If that service needs access to the database, then you can pass a PolicyRepository to it, if necessary.
If no complex logic is involved, and activating/deactivating a policy is just a matter of setting a flag to true or false, or if only members of the Policy class are involved, then why is 'Activated' not a simple property of the Policy class which you can set to false/true?
I would only create a service, if other objects are involved, or if DB access is required to activate or deactivate a policy.
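A rough sketch of that layering, with hypothetical names: the repository only persists, while a simple state change stays on the Policy itself:
public interface IPolicyRepository
{
    Policy GetById(int id);
    void Save(Policy policy);
}

// Caller: state change on the entity, persistence on the repository.
var policy = repository.GetById(42);
policy.Inactivate();
repository.Save(policy);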
As a continuation of my original comment :) ...
Currently your best bet is the third option, but if things get more complex you could go with the second, while adding facade methods to perform pre-specified sequences:
Policy policy = new Policy();
policy.Status = Active;
policyManager.InactivateAndUpdate(policy);
//methods in PolicyManager
public void Inactivate(Policy policy)
{
    // possibly complex checks and validations might be put there in the future? ...
    policy.Status = Inactive;
}

public void InactivateAndUpdate(Policy policy)
{
    Inactivate(policy);
    Update(policy);
}
The InactivateAndUpdate is a kind of facade method, which is just there to make the calling code a little neater, while still allowing the methods doing the actual work to be separate concerns (kind of breaks single responsibility for methods, but sometimes you just have to be pragmatic!). I deliberately name such methods in the style XandY to make them stand out as doing two things.
The InactivateAndUpdate method then frees you up to start implementing strategy patterns or splitting out the actual implementation methods as command objects for dynamic processing or whatever other architecture might become feasible in the future.
I would definitely go with the 3rd option for the reasons you mentioned:
the Policy object maintain its own state and then use the manager class to save the information to the database
Also take a look at the Repository Pattern; it might substitute for your PolicyManager.
If the status is part of the state of the Policy class, then the Policy should also have the Inactivate method -- that's just basic encapsulation. Entangling multiple classes in a single responsibility is at least as bad as giving a single class multiple responsibilities.
Alternatively, the status could be considered metadata about the Policy, belonging not to the Policy but to the PolicyManager. In that case, though, the Policy shouldn't know its own status at all.
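A minimal sketch of that alternative, with hypothetical names: the Policy carries no status member at all, and the manager tracks it externally:
public enum PolicyStatus { Active, Inactive }

public class PolicyManager
{
    // Status is metadata held outside the Policy itself.
    private readonly Dictionary<Policy, PolicyStatus> _statuses =
        new Dictionary<Policy, PolicyStatus>();

    public void Inactivate(Policy policy)
    {
        _statuses[policy] = PolicyStatus.Inactive;
    }

    public PolicyStatus GetStatus(Policy policy)
    {
        return _statuses[policy];
    }
}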