Hangfire - Configure AutomaticRetry for specific RecurringJob at runtime - c#

I'm using Hangfire v1.7.9 and I'm trying to configure a series of recurring background jobs within my MVC 5 application to automate the retrieval of external reference data into the application. I've tested this with one task and it works great, but I'd like administrators within the system to be able to configure the Attempts and DelaysInSeconds attribute parameters associated with the method that is called in these background jobs.
The AutomaticRetryAttribute states that you have to use...
...a constant expression, typeof expression or an array creation expression of an attribute parameter type
... which from what I've read is typical of all Attributes. However, this means that I can't achieve my goal by setting a property value elsewhere and then referencing that in the class that contains the method I want to run.
Additionally, it doesn't look like there is any way to configure the automatic retry properties in the BackgroundJob.Enqueue or RecurringJob.AddOrUpdate methods. Lastly, I looked at whether you could utilise a specific retry count for each named Queue but alas the only properties about Hangfire queues you can set is their names in the BackgroundJobServerOptions class when the Hangfire server is initialised.
Have I exhausted every avenue here? The only other thing I can think of is to create my own implementation of the AutomaticRetryAttribute and set the values at compile time using an int enum, though that in itself would create an issue: I would need to provide a defined list of every value a user could select. Since I want the retry delay to be configurable from 5 minutes all the way up to 1440 minutes (24 hours), I really don't want a huge, lumbering enum : int with every available value. Has anyone ever encountered this issue, or is this something I should submit as a request on the Hangfire GitHub?

I would take the approach of making a custom attribute that decorates AutomaticRetryAttribute:
public class MyCustomRetryAttribute : JobFilterAttribute, IElectStateFilter, IApplyStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        GetAutomaticRetryAttribute().OnStateElection(context);
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateApplied(context, transaction);
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        GetAutomaticRetryAttribute().OnStateUnapplied(context, transaction);
    }

    private AutomaticRetryAttribute GetAutomaticRetryAttribute()
    {
        // Somehow instantiate AutomaticRetryAttribute with a dynamically fetched/set `Attempts` value
        return new AutomaticRetryAttribute { Attempts = /**/ };
    }
}
Edit: To clarify, this method allows you to reuse AutomaticRetryAttribute's logic without duplicating it. However, if you need to change more aspects on a per-job basis, you may need to duplicate the logic inside your own attribute.
Also, you can use context.GetJobParameter<T> / context.SetJobParameter<T> to retrieve and store arbitrary data on a per-job basis.
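For example, here is a minimal sketch of how GetAutomaticRetryAttribute could pick up runtime values; RetrySettingsStore is a hypothetical stand-in for wherever administrators persist their settings, and the context parameter is added so the per-job parameter can be read:

private AutomaticRetryAttribute GetAutomaticRetryAttribute(ElectStateContext context)
{
    // A per-job override stored as a job parameter wins; otherwise fall back
    // to an admin-configured default (RetrySettingsStore is hypothetical).
    var attempts = context.GetJobParameter<int?>("RetryAttempts")
        ?? RetrySettingsStore.DefaultAttempts;
    return new AutomaticRetryAttribute
    {
        Attempts = attempts,
        DelaysInSeconds = new[] { RetrySettingsStore.DefaultDelayInSeconds }
    };
}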

Related

Alternative for initializing properties in the constructor in (Dynamics) CRM

I'm currently working on a custom CRM-style solution (EF/Winforms/OData WebApi) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined in the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible, and even to set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
    public Task(Project p) {
        this.Responsible = p.DefaultTaskResponsible;
        // ...
    }
}
But how should I implement something like this in a CRM-World with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field. It does not make sense to use a custom Task constructor.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (not sure)?! But how should I deal with the WebApi/OData Client?
If I receive a POST to the Task endpoint without a Responsible, I would like to use the DefaultTaskResponsible, e.g.
POST [Organization URI]/api/data/tasks
{
    "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)"
}
No Responsible was sent (maybe because it is an older client), so the default one should be used. But if a Responsible is set, the passed value should be used instead, e.g.
POST [Organization URI]/api/data/tasks
{
    "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)",
    "responsible@odata.bind": null
}
In my TaskController I only see the Task model with the Responsible being null, but I don't know if it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?
This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
    this.Responsible = parent.DefaultResponsible; // a property of ITaskEntity
}
This logic should be enforced at the database "pre operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation, then persist the task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
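As a rough sketch of that idea (the step interface, OperationContext, and repository below are hypothetical, not part of Dynamics CRM or any specific framework):

// Hypothetical pre-operation step; none of these types come from Dynamics CRM.
public class DefaultTaskResponsibleStep : IPreOperationStep<Task>
{
    private readonly IProjectRepository _projects;

    public DefaultTaskResponsibleStep(IProjectRepository projects)
    {
        _projects = projects;
    }

    public void Execute(Task task, OperationContext context)
    {
        // Only default the Responsible when the client did not send one at all;
        // an explicit null in the request is respected.
        if (task.Responsible == null && !context.PropertyWasSupplied("Responsible"))
        {
            var project = _projects.GetById(task.ProjectId);
            task.Responsible = project.DefaultTaskResponsible;
        }
    }
}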
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.

Proper permission management when using CQRS

I use Command Query Separation in my system.
To describe the problem, let's start with an example. Say we have code as follows:
public class TenancyController : ControllerBase
{
    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery

        // Example of how to run a command in the system:
        var command = new UploadTemplatePackageCommand
        {
            Comment = package.Comment,
            Data = Request.Body,
            TemplatePackageId = id
        };
        await _commandDispatcher.DispatchAsync(command);
        return Ok();
    }
}
The CreateTenancy has a very complex implementation and runs many different queries and commands.
Each command or query can be reused in other places of the system.
Each Command has a CommandHandler
Each Query has a QueryHandler
Example:
public class UploadTemplatePackageCommandHandler : PermissionedCommandHandler<UploadTemplatePackageCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(UploadTemplatePackageCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(UploadTemplatePackageCommand command)
    {
        // some business logic
    }
}
Every time you try to run a command or query there is a permission check. The problem, which appears in CreateTenancy, is when you run, let's say, 10 commands.
There can be a case where you have permissions for the first 9 commands but are missing a permission for the last one. In such a situation you can make complex modifications to the system by running those 9 commands, and at the end you are unable to finish the whole transaction because you cannot run the last command. In that case a complex rollback is needed.
I believe that in the above example the permission check should be done only once at the very beginning of the whole transaction but I'm not sure what is the best way to achieve this.
My first idea is to create a command called, let's say, CreateTenancyCommand and place the whole logic from CreateTenancy(CreateTenancyRto rto) in its HandleCommandAsync.
So it would look like:
public class CreateTenancyCommandHandler : PermissionedCommandHandler<CreateTenancyCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(CreateTenancyCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(CreateTenancyCommand command)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery
    }
}
I'm not sure it's a good approach to invoke a command inside the command handler of another command; I think each command handler should be independent.
Am I right that the permission check should happen only once?
If yes, how do you do the permission check in the case where you want to run a command to modify the database and then return some data to the client? In such a case you would need to do 2 permission checks...
There can be a theoretical case where you modify the database by running the command and then cannot run a query which only reads the database, because you are missing some of the permissions. It can be very hard for the developer to detect such a situation if the system is big and there are hundreds of different permissions; even good unit test coverage can miss it.
My second idea is to create some kind of wrapper or extra layer above the commands and queries and do the permission check there, but I'm not sure how to implement it.
What is the proper way to do the permission check in the described CreateTenancy transaction, which is implemented in the controller action in the example above?
Where you have some sort of process which requires multiple commands / service calls to carry it out, that process is an ideal candidate for a DomainService.
A DomainService is by definition one which has some Domain Knowledge, and is used to facilitate a process which interacts with multiple Aggregates / services.
In this instance I would have your Controller Action call a CQRS Command/CommandHandler. That CommandHandler takes the domain service as a single dependency. The CommandHandler then has the single responsibility of calling the Domain Service method.
This means your CreateTenancy process is contained in one place: the DomainService.
I typically have my CommandHandlers simply call into service methods. Therefore a DomainService can call into multiple services to perform its function, rather than calling into multiple CommandHandlers. I treat the Command Handlers as a facade through which my Controllers can access the Domain.
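A minimal sketch of that arrangement (the ICommandHandler and ITenancyService shapes here are illustrative assumptions, not a prescribed API):

// Illustrative sketch: the handler's only job is to delegate to the domain service.
public class CreateTenancyCommandHandler : ICommandHandler<CreateTenancyCommand>
{
    private readonly ITenancyService _tenancyService;

    public CreateTenancyCommandHandler(ITenancyService tenancyService)
    {
        _tenancyService = tenancyService;
    }

    public Task HandleAsync(CreateTenancyCommand command)
    {
        // The whole multi-step CreateTenancy process lives in the domain service.
        return _tenancyService.CreateTenancyAsync(command);
    }
}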
When it comes to permissions, I typically first decide whether the user's authorisation to carry out a process is a Domain issue. If so, I will typically create an Interface to describe the user's permissions, specific to the Bounded Context I am working in. So in this case you may have something like:
public interface ITenancyUserPermissions
{
    bool CanCreateTenancy(string userId);
}
I would then have the ITenancyUserPermissions interface be a dependency of my CommandValidator:
public class CommandValidator : AbstractValidator<Command>
{
    private readonly ITenancyUserPermissions _permissions;

    public CommandValidator(ITenancyUserPermissions permissions)
    {
        _permissions = permissions;
        RuleFor(r => r).Must(HavePermissionToCreateTenancy)
            .WithMessage("You do not have permission to create a tenancy.");
    }

    public bool HavePermissionToCreateTenancy(Command command)
    {
        return _permissions.CanCreateTenancy(command.UserId);
    }
}
You said that the permission to create a Tenancy is dependent on the permission to perform the other tasks / commands. Those other commands would have their own set of Permission Interfaces. And then ultimately within your application you would have an implementation of these interfaces such as:
public class UserPermissions : ITenancyUserPermissions, IBlah1Permissions, IBlah2Permissions
{
    private readonly IAuthService _authService; // injected authorisation service

    public bool CanCreateTenancy(string userId)
    {
        return CanBlah1(userId) && CanBlah2(userId);
    }

    public bool CanBlah1(string userId)
    {
        return _authService.Can("Blah1", userId);
    }

    public bool CanBlah2(string userId)
    {
        return _authService.Can("Blah2", userId);
    }
}
In my case I use an ABAC system, with the policy stored and processed as an XACML file.
Using the above method may mean you have slightly more code, and several Permissions interfaces, but it does mean that any permissions you define are specific to the Bounded Context you are working within. I feel this is better than having a Domain-Model-wide IUserPermissions interface, which may define methods that are of no relevance, and/or confusing, in your Tenancy bounded context.
This means you can check user permissions in your QueryValidator or CommandValidator instances. And of course you can use the implementation of your IPermission interfaces at the UI level to control which buttons / functions etc are shown to the user.
There is no "The Proper Way", but I'd suggest that you could approach the solution from the following angle.
Usage of the word Controller in your names, and returning Ok(), tells me that you are handling an HTTP request. But what is happening inside is part of a business use case that has nothing to do with HTTP. So you'd better go a bit Onion-ish and introduce a (business) application layer.
This way, your HTTP controller would be responsible for: 1) parsing the create tenancy HTTP request into a create tenancy business request, i.e. a request object model in terms of the domain language, void of any infrastructure terms; 2) formatting the business response into an HTTP response, including translating business errors into HTTP errors.
So, what enters the application layer is a business create tenancy request. But it's not a command yet. I can't remember the source, but someone once said that a command should be internal to a domain; it cannot come from outside. You may consider a command to be a comprehensive object model necessary to make a decision about whether to change the application's state. So, my suggestion is that in your business application layer you build a command not only from the business request, but also from the results of all those queries, including queries to the necessary permission read models.
Next, you may have a separate decision-making business core of a system that takes a command (a value object) with all the comprehensive data, applies a pure decision-making function and returns a decision, also a value object (event or rejection), containing, again, all necessary data calculated from the command.
Then, when your business application layer gets back a decision, it can execute it, writing to event stores or repositories, logging, firing events and ultimately producing a business response to the controller.
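A minimal sketch of that decision-making shape (all type names here are illustrative assumptions):

// Illustrative sketch of a pure decision-making core: no I/O, just data in, decision out.
public static class CreateTenancyDecider
{
    public static ITenancyDecision Decide(CreateTenancyCommand command)
    {
        // Permission read-model results were loaded into the command by the
        // application layer, so they can be checked before anything is written.
        if (!command.Permissions.CanCreateTenancy)
            return new TenancyRejected("Insufficient permissions");

        return new TenancyCreated(command.TenancyDetails);
    }
}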
In most cases you'll be ok with this single-step decision-making process. If it needs more than a single step - maybe it's a hint to reconsider the business flow, because it gets too complex for a single http request processing.
This way you'll get all the permissions before handling a command. So, your business core will be able to make a decision whether those permissions are sufficient to proceed. It also may make a decision-making logic much more testable and, therefore, reliable. Because it is the main part that should be tested in any calculation flow branch.
Keep in mind that this approach leans toward eventual consistency, which you have anyway in a distributed system. Though, if you are interacting with a single database, you may run the application-layer code in a single transaction.
Hope this helps.

Autofac resolve dependant services by name

Is it possible to register a single service that has dependencies that can change depending on a setting?
For instance
A DbExecutor requires a different DbConnection object depending on which geographical region it is running under.
I've tried something like
builder.RegisterType<DbConnection>().Named<IDbConnection>("US")
builder.RegisterType<DbConnection>().Named<IDbConnection>("AU")
builder.RegisterType<SqlExecutor>().As<IDbExecutor>();
and I'd like to resolve the service with something like
var au = container.ResolveNamed<IDbExecutor>("AU");
var us = container.ResolveNamed<IDbExecutor>("US");
However this doesn't work, because the IDbExecutor itself hasn't been registered with a key, and if I try a normal Resolve it won't work as it cannot create the dependent services.
Basically I just want an instance of IDbExecutor with a DbConnection based upon a certain parameter.
I'm trying to do this in a more general sense so I'm trying to avoid any specific code where I can.
The current generic code I have that doesn't use keyed services looks like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType);
where JobType is a class Type, and depending on whether this is possible the final version would look something like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType, bundle.JobDetail.JobDataMap["Region"]);
where bundle.JobDetail.JobDataMap["Region"] would return either "AU" or "US"
You won't be able to rig it to resolve a named IDbExecutor because you didn't register it as named. It's also probably not the best idea since it implies that IDbExecutor somehow "knows" about its dependencies, which it shouldn't - the implementation knows, but the interface/service doesn't - and shouldn't.
You can get something close to what you want by updating the SqlExecutor to use the IIndex<X,B> relationship in Autofac. Instead of taking just an IDbConnection in your constructor, take an IIndex<string,IDbConnection>.
When you need to get the connection, look it up from the indexed dictionary using the job type:
public class SqlExecutor
{
    private readonly IIndex<string, IDbConnection> _connections;

    public SqlExecutor(IIndex<string, IDbConnection> connections)
    {
        this._connections = connections;
    }

    public void DoWork(string jobType)
    {
        var connection = this._connections[jobType];
        // do something with the connection
    }
}
Another way to do it would be to create a delegate factory for the SqlExecutor that takes in the job type and automatically picks the right named service. That's a bit more involved so check out the documentation for an example.
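For reference, a rough sketch of the delegate factory variant (this assumes SqlExecutor takes the region in its constructor; Autofac matches the delegate's parameter names to constructor parameter names):

// Rough sketch of an Autofac delegate factory; SqlExecutor's constructor
// signature here is an assumption.
public class SqlExecutor : IDbExecutor
{
    public delegate SqlExecutor Factory(string region);

    private readonly IDbConnection _connection;

    public SqlExecutor(string region, IIndex<string, IDbConnection> connections)
    {
        // Pick the named connection for the requested region up front.
        _connection = connections[region];
    }
}

// Usage: resolve the delegate and let Autofac supply the remaining dependencies.
var factory = scope.Resolve<SqlExecutor.Factory>();
var auExecutor = factory("AU");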

Allow or disallow method run in .NET

I need to implement some simple security in a class that depends on the value of an enum.
All I can figure out is to use an attribute on a method, run a check, and throw an exception if the check fails.
Sample:
[ModulePermission(PermissionFlags.Create)]
public void CreateNew()
{
    CheckPermission();
    System.Windows.Forms.MessageBox.Show("Created!");
}

protected void CheckPermission()
{
    // Walk up the stack to the calling method and check its attribute's
    // flags against the currently granted flags.
    var method = new System.Diagnostics.StackTrace().GetFrame(1).GetMethod();
    if (!flags.HasFlag(method.GetCustomAttributes(true).Cast<ModulePermissionAttribute>().First().Flags))
    {
        throw new ApplicationException("Access denied");
    }
}
Is there a more elegant or simpler way to do this, like triggering an event when the method runs?
Why not just use standard Code Access Security instead of reimplementing the attribute handling and stack walking?
I think that if you read through the linked documentation, you'll see that what you have is nowhere close to what is needed to achieve actual security. Thankfully, this hard problem has already been solved...
Not with an enum, but with strings - voila (enforced by the runtime, even in full-trust):
public static class PermissionFlags
{
    public const string Create = "Create";
}

[PrincipalPermission(SecurityAction.Demand, Role = PermissionFlags.Create)]
public void CreateNew()
{
    System.Windows.Forms.MessageBox.Show("Created!");
}
All you need to do now is to represent the user as a principal. This is done for you in ASP.NET, and there is a winform plugin (in VS2008 etc) to use ASP.NET for membership. It can be configured for vanilla winforms and WCF, too; at the most basic level, GenericPrincipal / GenericIdentity:
// during login...
string[] roles = { PermissionFlags.Create /* etc */ };
Thread.CurrentPrincipal = new GenericPrincipal(
    new GenericIdentity("Fred"), // user
    roles);
But you can write your own principal / identity models easily enough (deferred / cached access checks, for example).
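For instance, a rough sketch of a principal that caches the user's roles once at login (where the roles come from is up to your login code):

// Rough sketch of a custom principal with cached role checks.
public class CachedPrincipal : IPrincipal
{
    private readonly HashSet<string> _roles;

    public CachedPrincipal(IIdentity identity, IEnumerable<string> roles)
    {
        Identity = identity;
        _roles = new HashSet<string>(roles); // cached once at login
    }

    public IIdentity Identity { get; }

    public bool IsInRole(string role) => _roles.Contains(role);
}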
You might want to look at doing this with something like PostSharp, which will give you a framework for applying the attributes so that you don't have to run the check in your method. This may, however, increase the complexity depending on how the currently active flags are accessed. You'd probably need some class to cache the current permissions for the current user.
You could take a look at Aspect Oriented Programming.
Check out Postsharp for instance, which will enable you to 'weave' some additional logic at compile time in the methods that you've decorated with your ModulePermission attribute.
By doing so, you will not have to call the 'CheckPermission' method inside the 'secured' method anymore, since that logic can be woven in by PostSharp.
(A while ago, I've been playing around with Postsharp: http://fgheysels.blogspot.com/2008/08/locking-system-with-aspect-oriented.html )
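For illustration, a rough sketch of such an aspect (OnMethodBoundaryAspect and MethodExecutionArgs are PostSharp's API, but the way the current user's flags are obtained here is an assumption):

// Rough sketch of a PostSharp aspect; CurrentUser.Flags is a hypothetical
// stand-in for however the active permission flags are accessed.
[Serializable]
public class ModulePermissionAttribute : OnMethodBoundaryAspect
{
    private readonly PermissionFlags _required;

    public ModulePermissionAttribute(PermissionFlags required)
    {
        _required = required;
    }

    public override void OnEntry(MethodExecutionArgs args)
    {
        // Runs before the decorated method body, so no CheckPermission() call is needed there.
        if (!CurrentUser.Flags.HasFlag(_required))
            throw new ApplicationException("Access denied");
    }
}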

What's the best way to separate concerns for this code?

In a previous question, one of the comments from Dr. Herbie on the accepted answer was that my method was performing two responsibilities: changing data and saving data.
What I'm trying to figure out is the best way to separate these concerns in my situation.
Carrying on with my example of having a Policy object which is retrieved via NHibernate....
The way I'm currently setting the policy to inactive is as follows:
Policy policy = new Policy();
policy.Status = Active;
policyManager.Inactivate(policy);

// method in PolicyManager, which has both data access and update responsibility
public void Inactivate(Policy policy)
{
    policy.Status = Inactive;
    Update(policy);
}
If I were to separate the responsibility of data access and data update what would be the best way to go about it?
Is it better to have the PolicyManager (which acts as the gateway to the dao) manage the state of the Policy object:
Policy policy = new Policy();
policy.Status = Active;
policyManager.Inactivate(policy);
policyManager.Update(policy);

// method in PolicyManager
public void Inactivate(Policy policy)
{
    policy.Status = Inactive;
}
Or to have the Policy object maintain its own state and then use the manager class to save the information to the database:
Policy policy = new Policy();
policy.Status = Active;
policy.Inactivate();
policyManager.Update(policy);

// method in Policy
public void Inactivate()
{
    this.Status = Inactive;
}
What I would do:
Create a repository which saves and retrieves Policies. (PolicyRepository)
If you have complex logic that must be performed to activate / deactivate a policy, you could create a Service for that. If that service needs access to the database, then you can pass a PolicyRepository to it, if necessary.
If no complex logic is involved, and activating / deactivating a policy is just a matter of setting a flag to false or true, or if only members of the Policy class are involved, then why is 'Activated' not a simple property of the Policy class which you can set to false / true?
I would only create a service, if other objects are involved, or if DB access is required to activate or deactivate a policy.
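A minimal sketch of that split (the member list is illustrative):

// Illustrative sketch: the Policy owns its state change, the repository owns persistence.
public interface IPolicyRepository
{
    Policy GetById(int id);
    void Save(Policy policy);
}

// Usage:
var policy = repository.GetById(42);
policy.Inactivate();
repository.Save(policy);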
As a continuation of my original comment :) ...
Currently your best bet is the third option, but if things get more complex you could go with the second, while adding facade methods to perform pre-specified sequences:
Policy policy = new Policy();
policy.Status = Active;
policyManager.InactivateAndUpdate(policy);

// methods in PolicyManager
public void Inactivate(Policy policy)
{
    // possibly complex checks and validations might be put here in the future...
    policy.Status = Inactive;
}

public void InactivateAndUpdate(Policy policy)
{
    Inactivate(policy);
    Update(policy);
}
The InactivateAndUpdate method is a kind of facade: it is just there to make the calling code a little neater, while still allowing the methods doing the actual work to remain separate concerns (it kind of breaks single responsibility for methods, but sometimes you just have to be pragmatic!). I deliberately name such methods in the style XAndY to make them stand out as doing two things.
The InactivateAndUpdate method then frees you up to start implementing strategy patterns or splitting out the actual implementation methods as command objects for dynamic processing or whatever other architecture might become feasible in the future.
I would definitely go with the 3rd option for the reasons you mentioned:
the Policy object maintain its own state and then use the manager class to save the information to the database
Also take a look at the Repository Pattern; it could substitute for your PolicyManager.
If the status is part of the state of the Policy class, then the Policy should also have the Inactivate method -- that's just basic encapsulation. Entangling multiple classes in a single responsibility is at least as bad as giving a single class multiple responsibilities.
Alternatively, the status could be considered metadata about the Policy, belonging not to the Policy but to the PolicyManager. In that case, though, the Policy shouldn't know its own status at all.
