Multiple class instantiation or changing a public property in ASP.NET MVC - C#

I need to access my business layer object 4 times with different constructor arguments.
Specifically, I need to access 4 different back-end systems through my separate Data Access Layer.
What should I do:
1) Instantiate 4 separate objects, each with a different constructor call?
2) Instantiate one object and change its public property every time?
Right now in my HomeController I have the following:
var obj = new BarcodeBLL(new ERPConfig
{
    AS400ControlLibrary = ConfigurationManager.AppSettings["ControlLibrary"],
    AS400Library = ConfigurationManager.AppSettings["DataLibrary"],
    ConnectionString = ConfigurationManager.ConnectionStrings["AS400"].ConnectionString
});
To me it would seem obvious to follow #2, but I would like to know whether I am correct and why.

If you have 4 identical systems, it seems logical to have a single class representing such a system. When you need access to one of these systems, you instantiate this type, passing the correct connection string or configuration to the constructor.
You may want to hide the details of which connection string is actually being used behind a factory or in the configuration of a DI container.
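As a rough sketch of that idea (the ErpSystem values, the config key prefixes, and the ErpConfigFactory type are assumptions for illustration, not part of your code), a small factory could map each back-end system to its own ERPConfig:
public enum ErpSystem { SystemA, SystemB, SystemC, SystemD }

public class ErpConfigFactory
{
    // Builds the ERPConfig for the requested back-end system, assuming one set of
    // appSettings keys per system (e.g. "SystemA.ControlLibrary") and one named
    // connection string per system in the config file.
    public ERPConfig CreateConfig(ErpSystem system)
    {
        string prefix = system.ToString();
        return new ERPConfig
        {
            AS400ControlLibrary = ConfigurationManager.AppSettings[prefix + ".ControlLibrary"],
            AS400Library = ConfigurationManager.AppSettings[prefix + ".DataLibrary"],
            ConnectionString = ConfigurationManager.ConnectionStrings[prefix].ConnectionString
        };
    }

    public BarcodeBLL CreateBarcodeBLL(ErpSystem system)
    {
        return new BarcodeBLL(CreateConfig(system));
    }
}
The controller then asks the factory for the instance it needs (var obj = factory.CreateBarcodeBLL(ErpSystem.SystemA);) rather than mutating a shared public property, which lines up with option #1 (separate instances) hidden behind a factory rather than option #2.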

Related

How to use two separate MongoDB databases in .NET Dependency Injection

I have 2 MongoDB databases and want to use them in a Worker Service class:
services.Configure<DbConnectSetting>(
        hostContext.Configuration.GetSection("fwkConfiguration:DataBase1Settings"))
    .AddTransient<IDbConnectSetting>(s =>
        s.GetRequiredService<IOptions<DbConnectSetting>>().Value)
    .AddTransient(typeof(IRepository<>), typeof(Repository<>))
    .AddTransient(typeof(IDB1Repository), typeof(DB1Repository));
services.Configure<DbConnectSetting>(
        hostContext.Configuration.GetSection("fwkConfiguration:DataBase2Setting"))
    .AddTransient<IDbConnectSetting>(s =>
        s.GetRequiredService<IOptions<DbConnectSetting>>().Value)
    .AddTransient(typeof(IRepository<>), typeof(Repository<>))
    .AddTransient(typeof(IDB2Repository), typeof(DB2Repository));
The problem is that it always takes the last-registered DB2 value for both object instances in the worker class. Is there any way to resolve them so they get separate values?
Try giving the options a name. Check the docs on named options.
Something like:
services.Configure<DbConnectSetting>("DB1", ...);
services.Configure<DbConnectSetting>("DB2", ...);

// Usage
public Foo(IOptionsSnapshot<DbConnectSetting> options)
{
    var db1Settings = options.Get("DB1");
}
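A slightly fuller sketch, reusing the section names and repository types from your snippet (here the settings are read through IOptionsMonitor, which can also be injected into singletons such as a BackgroundService; IOptionsSnapshot works the same way for scoped or transient consumers):
// Bind each named options instance to its own configuration section.
services.Configure<DbConnectSetting>("DB1",
    hostContext.Configuration.GetSection("fwkConfiguration:DataBase1Settings"));
services.Configure<DbConnectSetting>("DB2",
    hostContext.Configuration.GetSection("fwkConfiguration:DataBase2Setting"));

services.AddTransient(typeof(IRepository<>), typeof(Repository<>));
services.AddTransient<IDB1Repository, DB1Repository>();
services.AddTransient<IDB2Repository, DB2Repository>();

// Each consumer pulls out the named instance it needs.
public class DB1Repository : IDB1Repository
{
    private readonly DbConnectSetting _settings;

    public DB1Repository(IOptionsMonitor<DbConnectSetting> options)
    {
        _settings = options.Get("DB1"); // must match the name passed to Configure
    }

    // remaining repository members omitted from this sketch
}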

Autofac resolve dependent services by name

Is it possible to register a single service that has dependencies that can change depending on a setting?
For instance:
A DBExecutor requires a different DbConnection object depending on which geographical region it is running under.
I've tried something like
builder.RegisterType<DbConnection>().Named<IDbConnection>("US");
builder.RegisterType<DbConnection>().Named<IDbConnection>("AU");
builder.RegisterType<SqlExecutor>().As<IDbExecutor>();
and I'd like to resolve the service with something like
var au = container.ResolveNamed<IDbExecutor>("AU");
var us = container.ResolveNamed<IDbExecutor>("US");
However this doesn't work because the IDbExecutor itself hasn't been registered with a key, and if I try a normal Resolve it won't work as it cannot create the dependent services.
Basically I just want an instance of IDbExecutor with a DbConnection chosen based upon a certain parameter.
I'm trying to do this in a more general sense so I'm trying to avoid any specific code where I can.
The current generic code I have that doesn't use keyed services looks like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType);
where JobType is a class Type and depending on if this is possible the final version would look something like
var job = (IJob) lifetimeScope.Resolve(bundle.JobDetail.JobType, bundle.JobDetail.JobDataMap["Region"]);
where bundle.JobDetail.JobDataMap["Region"] would return either "AU" or "US"
You won't be able to rig it to resolve a named IDbExecutor because you didn't register it as named. It's also probably not the best idea since it implies that IDbExecutor somehow "knows" about its dependencies, which it shouldn't - the implementation knows, but the interface/service doesn't - and shouldn't.
You can get something close to what you want by updating the SqlExecutor to use the IIndex<X,B> relationship in Autofac. Instead of taking just an IDbConnection in your constructor, take an IIndex<string,IDbConnection>.
When you need to get the connection, look it up from the indexed dictionary using the job type:
public class SqlExecutor
{
    private IIndex<string, IDbConnection> _connections;

    public SqlExecutor(IIndex<string, IDbConnection> connections)
    {
        this._connections = connections;
    }

    public void DoWork(string jobType)
    {
        var connection = this._connections[jobType];
        // do something with the connection
    }
}
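For reference, a minimal sketch of the registration side (SqlConnection from System.Data.SqlClient and the literal connection strings are placeholders, and DoWork(string) is assumed to be declared on IDbExecutor). Autofac builds the IIndex<string, IDbConnection> for SqlExecutor automatically from the named registrations, so nothing extra needs to be registered for it:
var builder = new ContainerBuilder();

builder.Register(c => new SqlConnection("<US connection string>"))
       .Named<IDbConnection>("US");
builder.Register(c => new SqlConnection("<AU connection string>"))
       .Named<IDbConnection>("AU");

builder.RegisterType<SqlExecutor>().As<IDbExecutor>();

var container = builder.Build();

// Resolve the executor once and pass the region per call.
var executor = container.Resolve<IDbExecutor>();
executor.DoWork("AU");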
Another way to do it would be to create a delegate factory for the SqlExecutor that takes in the job type and automatically picks the right named service. That's a bit more involved so check out the documentation for an example.

Passing a Data Object Between Dependent Factories

I'm currently using an IoC container, Unity, for my program.
I have multiple chained factories, one calling the next to create an object it needs for populating a property. All the factories use the same raw data object in order to build their respective objects. The raw data object describes how to create all the various objects. Currently each factory has a Create method that takes a couple of parameters to state what location the object represents.
My problem is: how/where do I pass the raw data object to each factory in order for them to do their jobs?
Injecting the object into the Create() methods seems more procedural than object-oriented. However, if I inject the object into each factory's constructor, then how would I resolve each factory correctly? Not to mention that these factories need to be able to work on different raw data objects. Maybe there is a better architecture overall?
Below represents the type of structure I have, minus passing the raw object anywhere.
class PhysicalObjectFactory
{
    private readonly StructureAFactory _structureAFactory;
    private readonly Parser _parser;

    public PhysicalObjectFactory(StructureAFactory structureAFactory, Parser parser)
    {
        _structureAFactory = structureAFactory;
        _parser = parser;
    }

    public PhysicalObject CreatePhysicalObject()
    {
        RawDataObject rawDataObject = _parser.GetFromFile("foo.txt");
        // do stuff
        PhysicalObject physicalObject = new PhysicalObject();
        physicalObject.StructureA = _structureAFactory.Create(num1, num2);
        // do more stuff
        return physicalObject;
    }
}

class StructureAFactory
{
    private readonly StructureBFactory _structureBFactory;

    public StructureAFactory(StructureBFactory structureBFactory)
    {
        _structureBFactory = structureBFactory;
    }

    public StructureA Create(int a, int b)
    {
        // do stuff
        StructureA structureA = new StructureA();
        structureA.StructureB = _structureBFactory.Create(num76, num33);
        // do more stuff
        return structureA;
    }
}

class StructureBFactory
{
    public StructureBFactory() { }

    public StructureB Create(int a, int b)
    {
        StructureB structureB = new StructureB();
        // do stuff
        return structureB;
    }
}
My problem is how/where do I pass in the raw data object to each factory in order for them to do their jobs?
In general you should pass in runtime data through methods and compile-time/design-time/configuration data through constructor injection.
Your services are composed at a different moment in time than when they are used. Those services can live for a long time, which means they can be used many times with different runtime values. If you make this distinction between runtime data and data that doesn't change throughout the lifetime of the service, your options become much clearer.
So the question is whether this raw data you're passing in changes on each call or is fixed. Perhaps it is partially fixed. In that case you should separate the data; only pass the runtime data on through the Create methods. It seems obvious that since the factories are chained, the data they need to create their part of the object is passed on to them through their Create method.
Sometimes, however, you've got some data that's in between. It is data that will change during the lifetime of the application, but you don't want to pass it on through method calls, because it's not up to the caller to determine what those values are. This is contextual information. A clear example of this is information about the logged-in user that is executing the request. You don't want the caller (for instance your presentation layer) to pass that information on, since this is extra work and a potential security risk if the presentation layer forgets to pass this information on, or accidentally passes on some invalid value.
In that case the most common solution is to inject a service that provides consumers with this information. In the case of the user information you would inject an IUserContext service that contains a UserName or UserId property, and perhaps an IsInRole(string) method or something similar. The trick here is that it is not the user information itself that is injected into the consumer, but a service that allows access to this information. In other words, the retrieval of the user information is deferred. This allows the composed object graph to stay independent of that contextual information, which makes it easier to compose and validate the object graph.
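As a minimal sketch of that idea (the member names follow the description above; the HttpContext-based class is just one possible implementation for an ASP.NET application):
public interface IUserContext
{
    string UserName { get; }
    bool IsInRole(string role);
}

// Consumers depend only on IUserContext; where the information
// actually comes from is an implementation detail.
public class AspNetUserContext : IUserContext
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }

    public bool IsInRole(string role)
    {
        return HttpContext.Current.User.IsInRole(role);
    }
}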

Returning object obtained from a WCF web service

I have a function that returns an entity obtained from a WCF web service. How should I return this entity? I don't think I can return the original object (from the web service), because that would mean that the function's caller (from another assembly) would be forced to have a service reference to this web service (because the class is defined in the service reference), and I think I want to avoid that. And I don't think I can use an interface either, since I can't modify the WCF entity to implement my interface.
On the other hand, I need to return precisely all the properties that the original entity has, i.e. all properties need to be there, and no conversion or adjustment is needed to any value or any property name or type.
Is it better to create a new class that duplicates the same properties as the original WCF class? How should I implement it: is it better to create a new object that copies all values from the original object, e.g.
return new Foo() { Id = original.Id, Name = original.Name, ... etc. }?
or just wrap it with get/set accessors like:
public int Id
{
    get { return _original.Id; }
    set { _original.Id = value; }
}
And any idea how to name the new class to avoid ambiguity with the original class name from the WCF reference?
As you have figured out, it is not a good idea to force the client to use the same types as the server. This would unnecessarily expose the server application architecture to the client. The best option is to use Data Transfer Objects (DTOs).
You may have a DTO for each entity you wish to expose to the client, and the DTO will have properties exposing all the required fields of the entity. There are libraries such as ValueInjecter (valueinjecter.codeplex.com) or AutoMapper, as suggested by @stephenl, to help you copy the values from one object to another.
Place the DTOs in a separate namespace and assembly for the best physical decoupling. You can use YourCompany.YourProduct.Data.Entities as the namespace for entities and YourCompany.YourProduct.Data.DTO for the DTOs.
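As a small sketch (FooDto, FooMapper, and the ServiceReference.Foo proxy type are illustrative names, not something generated for you):
// DTO in YourCompany.YourProduct.Data.DTO
public class FooDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Manual mapping; AutoMapper or ValueInjecter can replace this boilerplate.
public static class FooMapper
{
    public static FooDto ToDto(ServiceReference.Foo original)
    {
        return new FooDto
        {
            Id = original.Id,
            Name = original.Name
            // ... copy the remaining properties
        };
    }
}
A Dto suffix, together with the separate namespace, is usually enough to avoid ambiguity with the class generated by the service reference.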
Actually, it depends on whether you are the consumer. If you are the consumer, reusing the type assembly is fine. However, if you are not in control of the consuming services, it is better to use DTO objects with [DataContract] attributes.

In domain-driven design, would it be a violation of DDD to put calls to other objects' repositories in a domain object?

I'm currently refactoring some code on a project that is wrapping up, and I ended up putting a lot of business logic into service classes rather than in the domain objects. At this point most of the domain objects are data containers only. I had decided to write most of the business logic in service objects and refactor everything afterwards into better, more reusable, and more readable shapes. That way I could decide which code should be placed into domain objects, which code should be spun off into new objects of its own, and which code should be left in a service class. So I have some code:
public decimal CalculateBatchTotal(VendorApplicationBatch batch)
{
    IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);

    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;

    return total;
}
This code seems like it would make a good addition to a domain object, because its only input parameter is the domain object itself. Seems like a perfect candidate for some refactoring. The only problem is that this object calls another object's repository, which makes me want to leave it in the service class.
My questions are thus:
Where would you put this code?
Would you break this function up?
Where would someone who's following strict Domain-Driven design put it?
Why?
Thanks for your time.
Edit Note: Can't use an ORM on this one, so I can't use a lazy loading solution.
Edit Note2: I can't alter the constructor to take in parameters, because of how the would-be data layer instantiates the domain objects using reflection (not my idea).
Edit Note3: I don't believe that a batch object should be able to total just any list of applications, it seems like it should only be able to total applications that are in that particular batch. Otherwise, it makes more sense to me to leave the function in the service class.
You shouldn't even have access to the repositories from the domain object.
What you can do is either let the service give the domain object the appropriate info or have a delegate in the domain object which is set by a service or in the constructor.
public DomainObject(Func<int, IList<VendorApplication>> getApplicationsByBatchId)
{
    ...
}
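As a hedged sketch of the delegate option that also respects Edit Note2 (the property-set delegate and its signature are assumptions about how you could wire this up from the service):
public class VendorApplicationBatch
{
    public int Id { get; set; }

    // Set by the service after the data layer has created the object
    // via reflection, so no constructor parameter is required.
    public Func<int, IList<VendorApplication>> GetApplicationsByBatchId { get; set; }

    public decimal CalculateBatchTotal()
    {
        IList<VendorApplication> applications = GetApplicationsByBatchId(Id);

        if (applications == null || applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in applications)
            total += app.Amount;
        return total;
    }
}

// In the service:
// batch.GetApplicationsByBatchId = id => AppRepo.GetByBatchId(id);
// decimal total = batch.CalculateBatchTotal();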
I'm no expert on DDD, but I remember an article from the great Jeremy Miller that answered this very question for me. You would typically want logic related to your domain objects inside those objects, but your service class would execute the methods that contain this logic. This helped me push domain-specific logic into the entity classes and keep my service classes less bulky (I found myself putting too much logic inside the service classes, like you mentioned).
Edit: Example
I use the Enterprise Library for simple validation, so in the entity class I will set an attribute like so:
[StringLengthValidator(1, 100)]
public string Username
{
    get { return mUsername; }
    set { mUsername = value; }
}
The entity inherits from a base class with the following "IsValid" method, which ensures each object meets the validation criteria:
public bool IsValid()
{
    mResults = new ValidationResults();
    Validate(mResults);
    return mResults.IsValid();
}

[SelfValidation()]
public virtual void Validate(ValidationResults results)
{
    if (!object.ReferenceEquals(this.GetType(), typeof(BusinessBase<T>)))
    {
        Validator validator = ValidationFactory.CreateValidator(this.GetType());
        results.AddAllResults(validator.Validate(this));
    }

    // Before we return the bool value, if we have any validation results map them into the
    // broken rules property so the parent class can display them to the end user.
    if (!results.IsValid())
    {
        mBrokenRules = new List<BrokenRule>();
        foreach (Microsoft.Practices.EnterpriseLibrary.Validation.ValidationResult result in results)
        {
            mRule = new BrokenRule();
            mRule.Message = result.Message;
            mRule.PropertyName = result.Key.ToString();
            mBrokenRules.Add(mRule);
        }
    }
}
Next we need to execute this "IsValid" method in the service class save method, like so:
public void SaveUser(User UserObject)
{
    if (UserObject.IsValid())
    {
        mRepository.SaveUser(UserObject);
    }
}
A more complex example might be a bank account. The deposit logic will live inside the account object, but the service class will call this method.
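For instance, a hedged sketch of that bank account example (all of the names here are illustrative):
public class Account
{
    public decimal Balance { get; private set; }

    // The domain logic lives on the entity.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit amount must be positive.");
        Balance += amount;
    }
}

public class AccountService
{
    private readonly IAccountRepository _repository;

    public AccountService(IAccountRepository repository)
    {
        _repository = repository;
    }

    // The service orchestrates: load, call the domain logic, persist.
    public void Deposit(int accountId, decimal amount)
    {
        Account account = _repository.GetById(accountId);
        account.Deposit(amount);
        _repository.Save(account);
    }
}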
Why not pass in an IList<VendorApplication> as the parameter instead of a VendorApplicationBatch? The calling code for this presumably would come from a service which would have access to the AppRepo. That way your repository access will be up where it belongs while your domain function can remain blissfully ignorant of where that data came from.
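In sketch form (same summing logic as above, just a different parameter):
// Domain side; knows nothing about repositories.
public decimal CalculateBatchTotal(IList<VendorApplication> applications)
{
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;
    return total;
}

// Calling code (e.g. a service) keeps the repository access:
// decimal total = CalculateBatchTotal(AppRepo.GetByBatchId(batch.Id));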
As I understand it (there is not enough info to know if this is the right design), VendorApplicationBatch should contain a lazy-loaded IList inside the domain object, and the logic should stay in the domain.
For example (air code):
public class VendorApplicationBatch
{
    private IList<VendorApplication> Applications { get; set; }

    public decimal CalculateBatchTotal()
    {
        if (Applications == null || Applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in Applications)
            total += app.Amount;
        return total;
    }
}
This is easily done with an ORM like NHibernate and I think it would be the best solution.
It seems to me that your CalculateBatchTotal is a service for collections of VendorApplications, and that returning the collection of VendorApplications for a batch fits naturally as a property of the Batch class. So some other service/controller/whatever would retrieve the appropriate collection of VendorApplications from a batch and pass them to the VendorApplicationTotalCalculator service (or something similar). But that may break some DDD aggregate-root service rules or some such thing I'm ignorant of (DDD novice).
