I'm working on a multi-tenant web application. At the beginning of each request, the web application will use either the URL or a cookie to determine the tenant ID of the current request. I want to make this available to my data layer so that it can automatically include the ID in SQL queries without having to explicitly pass it down through the layers for each query. However, I don't want my data layer to be dependent on System.Web or to only be usable by a web application.
So I've created an interface in the data layer ITenantIDProvider
public interface ITenantIDProvider
{
    Guid GetTenantID();
}
And it could be implemented something like this:
public class TenantIDProvider : ITenantIDProvider
{
    public Guid GetTenantID()
    {
        return (Guid)HttpContext.Current.Items["TenantID"];
    }
}
In reality it will be something more complex than just retrieving from Items, but the key is that it uses the current request. There is also a business logic layer in between, but it just forwards the ID along, so I left that code out for simplicity.
When the web application starts, it passes a new instance of TenantIDProvider to the data layer, which passes it into an instance of a custom Entity Framework interceptor (an IDbInterceptor implementation):
// Static method in data layer called at application startup
public static void SetTenantIDProvider(ITenantIDProvider tenantIDProvider)
{
    System.Data.Entity.Infrastructure.Interception.DbInterception.Add(new MyContextInterceptor(tenantIDProvider));
}
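For context, MyContextInterceptor looks roughly like this (a trimmed sketch; I'm assuming EF6's IDbCommandInterceptor here, and the actual query rewriting is omitted):
public class MyContextInterceptor : IDbCommandInterceptor
{
    private readonly ITenantIDProvider _tenantIDProvider;

    public MyContextInterceptor(ITenantIDProvider tenantIDProvider)
    {
        _tenantIDProvider = tenantIDProvider;
    }

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        // Resolved per call and kept in a local, never in a field, so the
        // shared instance carries no per-request state.
        Guid tenantID = _tenantIDProvider.GetTenantID();
        // ... apply tenantID to the command ...
    }

    // The remaining IDbCommandInterceptor members follow the same pattern.
    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}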
DbInterception.Add only gets called once at application startup, so I believe it stores the given interceptor in a static collection, which means my interceptor instance is shared by all requests. However, since my tenantIDProvider just wraps access to the current request (and, importantly, sets no instance variables), I shouldn't need to worry about a race condition, right? I've attempted to debug through two threads, freezing them at different times to test it, and it seems to work as intended. I just feel a little uneasy about an instance that is shared by all threads/requests accessing request-specific data.
I'm currently working on a custom CRM-style solution (EF/WinForms/OData WebApi) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined in the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible and even set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
    public Task(Project p) {
        this.Responsible = p.DefaultTaskResponsible;
        // ...
    }
}
But how should I implement something like this in a CRM world with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field, and it does not make sense to use a custom Task constructor there.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (not sure)? But how should I deal with the WebApi/OData client?
If I receive a POST to the Task endpoint without a Responsible, I would like to use the DefaultTaskResponsible, e.g.:
POST [Organization URI]/api/data/tasks
{
    "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)"
}
No Responsible was sent (maybe because it is an older client), so the default one should be used. But if a Responsible is set, the passed value should be used instead, e.g.:
POST [Organization URI]/api/data/tasks
{
    "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)",
    "responsible@odata.bind": null
}
In my TaskController I only see the Task model with the Responsible being null, but I don't know if it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?
This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
    this.Responsible = parent.DefaultResponsible; // a property of ITaskEntity
}
This logic should be enforced at the database "pre operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation and then persist the Task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
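As a sketch, the pre-operation step could look something like this (the names are illustrative, not actual Dynamics CRM APIs):
public class TaskPreOperation
{
    // Runs just before the Task is persisted. The responsibleWasProvided
    // flag must come from the deserialization layer, which is the only
    // place that knows whether the field appeared in the request at all;
    // that is how an explicit null is distinguished from an omitted field.
    public void Execute(Task task, ITaskEntity parent, bool responsibleWasProvided)
    {
        if (!responsibleWasProvided)
        {
            task.Responsible = parent.DefaultResponsible;
        }
    }
}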
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.
I use Command Query Separation in my system.
To describe the problem, let's start with an example. Let's say we have code as follows:
public class TenancyController : ControllerBase
{
    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery

        // example of how to run a command in the system:
        var command = new UploadTemplatePackageCommand
        {
            Comment = package.Comment,
            Data = Request.Body,
            TemplatePackageId = id
        };
        await _commandDispatcher.DispatchAsync(command);
        return Ok();
    }
}
The CreateTenancy has a very complex implementation and runs many different queries and commands.
Each command or query can be reused in other places of the system.
Each Command has a CommandHandler
Each Query has a QueryHandler
Example:
public class UploadTemplatePackageCommandHandler : PermissionedCommandHandler<UploadTemplatePackageCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(UploadTemplatePackageCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(UploadTemplatePackageCommand command)
    {
        // some business logic
    }
}
Every time you run a command or query there is a permission check. The problem appears in CreateTenancy when you run, let's say, 10 commands.
There can be a case where you have permission for the first 9 commands but are missing a permission for the last one. In that situation the first 9 commands may have made complex modifications to the system, and at the end you cannot finish the whole transaction because you are not allowed to run the last command. In such a case a complex rollback is needed.
I believe that in the above example the permission check should be done only once, at the very beginning of the whole transaction, but I'm not sure of the best way to achieve this.
My first idea is to create a command called, let's say, CreateTenancyCommand, and to place the whole logic from CreateTenancy(CreateTenancyRto rto) in its HandleCommandAsync. So it would look like:
public class CreateTenancyCommandHandler : PermissionedCommandHandler<CreateTenancyCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(CreateTenancyCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(CreateTenancyCommand command)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery
    }
}
I'm not sure it's a good approach to invoke a command inside the command handler of another command; I think each command handler should be independent.
Am I right that the permission check should happen only once?
If yes, how do you do the permission check in the case where you want to run a command that modifies the database and then return some data to the client? In such a case you would need to do 2 permission checks...
There can be a theoretical case where you modify the database by running the command and then cannot run a query that only reads the database, because you are missing some of the permissions. Such a situation can be very hard for a developer to detect when the system is big and there are hundreds of different permissions; even good unit test coverage can fail to catch it.
My second idea is to create some kind of wrapper or extra layer above the commands and queries and do the permission check there, but I'm not sure how to implement it.
What is the proper way to do the permission check in the described CreateTenancy transaction, which is implemented in the controller action in the above example?
In a situation where you have some sort of process which requires multiple commands / service calls to carry it out, that process is an ideal candidate for a DomainService.
A DomainService is by definition one which has some Domain Knowledge, and is used to facilitate a process which interacts with multiple Aggregates / services.
In this instance I would look to have your Controller Action call a CQRS Command/CommandHandler. That CommandHandler will take the domain service as a single dependency. The CommandHandler then has the single responsibility of calling the Domain Service method.
This then means your CreateTenancy process is contained in one place, The DomainService.
I typically have my CommandHandlers simply call into service methods. Therefore a DomainService can call into multiple services to perform its function, rather than calling into multiple CommandHandlers. I treat the Command Handlers as a facade through which my Controllers can access the Domain.
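Roughly, as a sketch (ITenancyDomainService and the member names are illustrative):
public class CreateTenancyCommandHandler
{
    private readonly ITenancyDomainService _tenancyService;

    public CreateTenancyCommandHandler(ITenancyDomainService tenancyService)
    {
        _tenancyService = tenancyService;
    }

    public Task HandleAsync(CreateTenancyCommand command)
    {
        // The handler's single responsibility: hand the whole process over
        // to the domain service, which orchestrates the services involved.
        return _tenancyService.CreateTenancyAsync(command);
    }
}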
When it comes to permissions, I typically first decide whether the user's authorisation to carry out a process is a Domain issue. If so, I will typically create an Interface to describe the user's permissions, specific to the Bounded Context I am working in. So in this case you may have something like:
public interface ITenancyUserPermissions
{
    bool CanCreateTenancy(string userId);
}
I would then have the ITenancyUserPermissions interface be a dependency of my CommandValidator:
public class CommandValidator : AbstractValidator<Command>
{
    private readonly ITenancyUserPermissions _permissions;

    public CommandValidator(ITenancyUserPermissions permissions)
    {
        _permissions = permissions;
        RuleFor(r => r).Must(HavePermissionToCreateTenancy).WithMessage("You do not have permission to create a tenancy.");
    }

    public bool HavePermissionToCreateTenancy(Command command)
    {
        return _permissions.CanCreateTenancy(command.UserId);
    }
}
You said that the permission to create a Tenancy depends on the permissions to perform the other tasks / commands. Those other commands would have their own Permission interfaces, and ultimately within your application you would have an implementation of these interfaces, such as:
public class UserPermissions : ITenancyUserPermissions, IBlah1Permissions, IBlah2Permissions
{
    private readonly IAuthService _authService; // underlying authorisation service (assumed)

    public bool CanCreateTenancy(string userId)
    {
        return CanBlah1(userId) && CanBlah2(userId);
    }

    public bool CanBlah1(string userId)
    {
        return _authService.Can("Blah1", userId);
    }

    public bool CanBlah2(string userId)
    {
        return _authService.Can("Blah2", userId);
    }
}
In my case I use an ABAC system, with the policy stored and processed as an XACML file.
Using the above method may mean you have slightly more code, and several Permissions interfaces, but it does mean that any permissions you define are specific to the Bounded Context you are working within. I feel this is better than having a Domain-Model-wide IUserPermissions interface, which may define methods that are of no relevance to, or are confusing in, your Tenancy bounded context.
This means you can check user permissions in your QueryValidator or CommandValidator instances. And of course you can use the implementation of your IPermission interfaces at the UI level to control which buttons / functions etc are shown to the user.
There is no "The Proper Way", but I'd suggest that you could approach the solution from the following angle.
The word Controller in your names, and the fact that you return Ok(), tell me that you are handling an HTTP request. But what happens inside is part of a business use case that has nothing to do with HTTP. So you'd better get a bit Onion-ish and introduce a (business) application layer.
This way, your HTTP controller would be responsible for: 1) parsing the create-tenancy HTTP request into a create-tenancy business request, i.e. a request object model in terms of the domain language, void of any infrastructure terms; and 2) formatting the business response into an HTTP response, including translating business errors into HTTP errors.
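In sketch form (the mapping helpers and service name are mine, purely illustrative):
public class TenancyController : ControllerBase
{
    private readonly ICreateTenancyService _createTenancyService; // application layer

    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1) HTTP -> business: no infrastructure terms past this point.
        var businessRequest = MapToBusinessRequest(rto);

        var businessResponse = await _createTenancyService.HandleAsync(businessRequest);

        // 2) business -> HTTP: translate business errors into HTTP errors.
        return MapToHttpResponse(businessResponse);
    }
}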
So what enters the application layer is a business create-tenancy request. But it's not a command yet. I can't remember the source, but someone once said that a command should be internal to a domain; it cannot come from outside. You may consider a command to be a comprehensive object model holding everything necessary to decide whether to change the application's state. So my suggestion is that in your business application layer you build the command not only from the business request, but also from the results of all those queries, including queries to the necessary permission read models.
Next, you may have a separate decision-making business core that takes a command (a value object) with all this comprehensive data, applies a pure decision-making function, and returns a decision, also a value object (an event or a rejection), containing, again, all the necessary data calculated from the command.
Then, when your business application layer gets the decision back, it can execute it: writing to event stores or repositories, logging, firing events, and ultimately producing a business response for the controller.
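A sketch of that decision-making core (all the types here are illustrative):
// Built by the application layer from the business request plus the
// query results, including the permission read models.
public sealed class CreateTenancyCommand
{
    public bool CallerCanCreateTenancy { get; set; }
    // ... all the other data needed to make the decision ...
}

public interface IDecision { }
public sealed class TenancyCreated : IDecision { /* data needed to persist and publish */ }
public sealed class TenancyRejected : IDecision
{
    public string Reason { get; }
    public TenancyRejected(string reason) { Reason = reason; }
}

public static class CreateTenancyDecider
{
    // A pure function: no I/O, so every branch is trivially testable.
    public static IDecision Decide(CreateTenancyCommand command)
    {
        if (!command.CallerCanCreateTenancy)
            return new TenancyRejected("Missing permission to create a tenancy.");
        return new TenancyCreated();
    }
}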
In most cases you'll be OK with this single-step decision-making process. If it needs more than a single step, maybe that's a hint to reconsider the business flow, because it is getting too complex for the processing of a single HTTP request.
This way you'll have gathered all the permissions before handling a command, so your business core will be able to decide whether those permissions are sufficient to proceed. It also makes the decision-making logic much more testable and, therefore, reliable, because that logic is the main thing to test in every branch of the calculation flow.
Keep in mind that this approach leans toward eventual consistency, which you have anyway in a distributed system. If you interact with a single database, though, you may run the application-layer code in a single transaction.
Hope this helps.
Trying to apply DDD principles to a small project... I have a PlayerProfile aggregate root, which consists of a Club entity and a collection of Rating value objects. Periodically I have to sync all the PlayerProfile entities from an external portal by parsing the raw HTML.
For now I have come up with a solution that wraps the code which renews the PlayerProfiles in a simple PlayerProfileRepository, something like this:
public interface IPlayerProfileRepository
{
    Task<IReadOnlyCollection<PlayerProfile>> SyncPlayersProfilesFromPortal(string sourceUrl);
    // other methods, which work with the data storage
}
First, I don't really like the idea of mixing methods that work with the data storage with methods that work with an external resource (HTML pages) to periodically create PlayerProfiles. To me that sounds more like a PlayerProfileFactory responsibility.
The actual implementation of IPlayerProfileRepository delegates the parsing of the actual pages to three IPageParsers, which live in the same layer as my repositories do. Something like this:
public PlayerProfileRepository(
    IPageParser<ParseClubDto> clubPageParser,
    IPageParser<ParsePlayerProfileDto> playerProfilePageParser,
    IPageParser<ParseRatingDto> ratingPageParser)
{
    _clubPageParser = clubPageParser;
    _playerProfilePageParser = playerProfilePageParser;
    _ratingPageParser = ratingPageParser; // was missing: the rating parser is used below
}
I am not quite sure these DTOs really are DTOs, since they are used only by the IPageParsers to hold intermediate data while parsing the pages. I would like to keep them close to the IPageParser implementations in the data-service layer rather than share them in a separate DTOs project, perhaps under different names.
After the ParseClubDto, ParsePlayerProfileDto and ParseRatingDto instances are parsed, I pass them to the PlayerProfileFactory.Create factory method, something like this:
var playerProfiles = new List<PlayerProfile>();
var clubs = await _clubPageParser.ParseAsync(sourceUrl);
foreach (var club in clubs)
{
    var clubPlayers = await _playerProfilePageParser.ParseAsync(club.PlayersPageUrl);
    foreach (var clubPlayer in clubPlayers)
    {
        var ratings = await _ratingPageParser.ParseAsync(clubPlayer.RatingsPageUrl);
        playerProfiles.Add(PlayerProfileFactory.Create(club, clubPlayer, ratings));
    }
}
return playerProfiles;
After this is done I have to perform the actual sync with the existing aggregate roots in the DB, which I do simply by calling ResyncFrom(PlayerProfile profile) on the aggregate root. Or should that be more like a separate PlayerProfile domain service?
In general I have a feeling that I am doing something wrong here, so any comments are welcome.
I think your example is a case of integration between two BCs using the anti-corruption layer pattern.
I would have a port (interface in the domain) with a method contract that returns a list of player profile aggregates.
In the infrastructure layer I would have an adapter that implements the port by reading the HTML data from the remote portal (for example using a REST API) and constructing the aggregates from that data.
In the application layer I would have an application service in which you inject both the port and the player profile aggregate repository that deals with the local db. The application service calls the port to construct the aggregates, and then calls the repository to store them.
I would run this application service periodically.
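In sketch form (all the names are illustrative):
// Port, defined in the domain:
public interface IPlayerProfilePortal
{
    Task<IReadOnlyCollection<PlayerProfile>> FetchPlayerProfilesAsync(string sourceUrl);
}

// Application service: orchestrates the port (remote portal) and the
// repository (local db). An adapter in the infrastructure layer would
// implement IPlayerProfilePortal by parsing the portal's HTML.
public class SyncPlayerProfilesService
{
    private readonly IPlayerProfilePortal _portal;
    private readonly IPlayerProfileRepository _repository;

    public SyncPlayerProfilesService(IPlayerProfilePortal portal, IPlayerProfileRepository repository)
    {
        _portal = portal;
        _repository = repository;
    }

    public async Task SyncAsync(string sourceUrl)
    {
        var profiles = await _portal.FetchPlayerProfilesAsync(sourceUrl);
        foreach (var profile in profiles)
        {
            await _repository.SaveAsync(profile); // assumed repository method
        }
    }
}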
This would be an async integration without events, but you could implement it with events if the remote portal fires events.
The IPlayerProfileRepository interface is usually defined in the Domain and describes to the outside world how the Aggregate Root should be retrieved, usually by its Id. So the method SyncPlayersProfilesFromPortal should certainly not be part of this interface.
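In other words, the contract would reduce to something like this (a sketch; the exact members depend on your needs):
public interface IPlayerProfileRepository
{
    Task<PlayerProfile> GetByIdAsync(Guid id);
    Task SaveAsync(PlayerProfile playerProfile);
    // no SyncPlayersProfilesFromPortal here -- syncing is not a storage concern
}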
Syncing data is an infrastructure concern and can be done asynchronously in the background as already suggested in the previous answer.
We have two apps: a desktop client and an MVC backend. Both apps have printing functionality, and it's quite obvious that we're repeating ourselves with that. Let me explain. The routine looks as follows:
User enters his ID / sends his data to the MVC endpoint;
We check the DB to verify that all the necessary data is valid;
We compose a viewmodel (MVC) / DTO (desktop) object;
As a requirement, there are two types of documents to be printed;
Then we make identical calls to a PDF-rendering API (we use PdfSharp), composing the two documents.
I think it would be better if we had that PDF-composing logic in a separate assembly; then it could be reused. The problem with this is that the documents use slightly different properties (data). As a solution we could use one shared DTO with all the necessary properties:
public IEnumerable<string> Render(DocumentDto document) {
    // strategies are injected via the IoC container
    foreach (var strategy in this.strategies) {
        if (strategy.CanRender(document)) {
            yield return strategy.Render(document);
        }
    }
}
We could also inject a DbContext object into our strategies; each strategy would then request the desired properties on its own:
public class StrategyA {
    // ctor omitted
    private DbContext db;

    public string Render() {
        // make db calls
        // render the document
    }
}
But I don't think this is a good solution either, since it introduces a DB dependency.
Can we design it so that each strategy only uses its own set of properties?
I'm currently using an IoC container, Unity, for my program.
I have multiple chained factories, one calling the next to create an object it needs for populating a property. All the factories use the same raw data object to build their respective objects; the raw data object describes how to create all the various objects. Currently each factory has a Create method that takes a couple of parameters stating which location the object represents.
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
Injecting the object into the Create() methods seems more procedural than object-oriented. However, if I inject the object into each factory's constructor, then how would I resolve each factory correctly? Not to mention that these factories need to be able to work on different raw data objects. Maybe there is a better architecture overall?
Below is the type of structure I have, minus passing the raw object anywhere.
class PhysicalObjectFactory
{
    private readonly StructureAFactory _structureAFactory;
    private readonly Parser _parser;

    public PhysicalObjectFactory(StructureAFactory structureAFactory, Parser parser)
    {
        _structureAFactory = structureAFactory;
        _parser = parser;
    }

    public PhysicalObject CreatePhysicalObject()
    {
        RawDataObject rawDataObject = _parser.GetFromFile("foo.txt");
        // do stuff
        PhysicalObject physicalObject = new PhysicalObject();
        physicalObject.StructureA = _structureAFactory.Create(num1, num2);
        // do more stuff
        return physicalObject;
    }
}
class StructureAFactory
{
    private readonly StructureBFactory _structureBFactory;

    public StructureAFactory(StructureBFactory structureBFactory)
    {
        _structureBFactory = structureBFactory;
    }

    public StructureA Create(int a, int b)
    {
        // do stuff
        StructureA structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(num76, num33);
        // do more stuff
        return structureA;
    }
}
class StructureBFactory
{
    public StructureBFactory() { }

    public StructureB Create(int a, int b)
    {
        StructureB structureB = new StructureB();
        // do stuff
        return structureB;
    }
}
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
In general you should pass in runtime data through methods and compile-time/design-time/configuration data through constructor injection.
Your services are composed at a different moment in time than when they are used. Those services can live for a long time, which means they can be used many times with different runtime values. If you make this distinction between runtime data and data that doesn't change throughout the lifetime of the service, your options become much clearer.
So the question is whether the raw data you're passing in changes on each call or is fixed. Perhaps it is partially fixed; in that case you should separate the data and only pass the runtime part through the Create methods. It seems obvious that, since the factories are chained, the data each factory needs to create its part of the object is passed to it through its Create method.
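Applied to the factories from the question, that split might look like this (a sketch, treating RawDataObject as pure runtime data):
class StructureAFactory
{
    // Design-time dependency: fixed for the lifetime of the factory.
    private readonly StructureBFactory _structureBFactory;

    public StructureAFactory(StructureBFactory structureBFactory)
    {
        _structureBFactory = structureBFactory;
    }

    // Runtime data arrives per call and is forwarded down the chain.
    public StructureA Create(RawDataObject rawDataObject, int a, int b)
    {
        StructureA structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(rawDataObject, a, b);
        return structureA;
    }
}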
Sometimes, however, you've got data that's in between. It changes during the lifetime of the application, but you don't want to pass it on through method calls, because it's not up to the caller to determine what those values are. This is contextual information. A clear example is information about the logged-in user who is executing the request. You don't want the caller (for instance, your presentation layer) to pass that information on, since this is extra work and a potential security risk if the presentation layer forgets to pass it on or accidentally passes on some invalid value.
In that case the most common solution is to inject a service that provides consumers with this information. In the case of the user information, you would inject an IUserContext service that contains a UserName or UserId property, perhaps an IsInRole(string) method, or something similar. The trick here is that it is not the user information itself that is injected into a consumer, but a service that allows access to this information. In other words, the retrieval of the user information is deferred. This allows the composed object graph to stay independent of such contextual information, which makes the object graph easier to compose and validate.
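A minimal sketch of that idea (assuming a System.Web host for the implementation; the names are illustrative):
public interface IUserContext
{
    string UserName { get; }
    bool IsInRole(string role);
}

// Web implementation: defers to the current request and holds no state
// of its own, so one instance can safely be shared across requests.
public class AspNetUserContext : IUserContext
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }

    public bool IsInRole(string role)
    {
        return HttpContext.Current.User.IsInRole(role);
    }
}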