C# MongoDB: How to correctly map a domain object?

I recently started reading Evans' Domain-Driven Design book and started a small sample project to get some experience in DDD. At the same time I wanted to learn more about MongoDB, so I started to replace my SQL EF4 repositories with MongoDB and the latest official C# driver.
Now this question is about MongoDB mapping. I see that it is pretty easy to map simple objects with public getters and setters - no pain there. But I have difficulties mapping domain entities without public setters. As I learnt, the only really clean approach to construct a valid entity is to pass the required parameters into the constructor. Consider the following example:
public class Transport : IEntity<Transport>
{
private readonly TransportID transportID;
private readonly PersonCapacity personCapacity;
public Transport(TransportID transportID,PersonCapacity personCapacity)
{
Validate.NotNull(personCapacity, "personCapacity is required");
Validate.NotNull(transportID, "transportID is required");
this.transportID = transportID;
this.personCapacity = personCapacity;
}
public virtual PersonCapacity PersonCapacity
{
get { return personCapacity; }
}
public virtual TransportID TransportID
{
get { return transportID; }
}
}
public class TransportID:IValueObject<TransportID>
{
private readonly string number;
#region Constr
public TransportID(string number)
{
Validate.NotNull(number);
this.number = number;
}
#endregion
public string IdString
{
get { return number; }
}
}
public class PersonCapacity:IValueObject<PersonCapacity>
{
private readonly int numberOfSeats;
#region Constr
public PersonCapacity(int numberOfSeats)
{
Validate.NotNull(numberOfSeats);
this.numberOfSeats = numberOfSeats;
}
#endregion
public int NumberOfSeats
{
get { return numberOfSeats; }
}
}
Obviously automapping does not work here. Now I can map those three classes by hand via BsonClassMaps and they will be stored just fine. The problem is, when I want to load them from the DB I have to load them as BsonDocuments, and parse them into my domain object. I tried lots of things but ultimately failed to get a clean solution. Do I really have to produce DTOs with public getters/setters for MongoDB and map those over to my domain objects? Maybe someone can give me some advice on this.

It is possible to serialize/deserialize classes where the properties are read-only. If you are trying to keep your domain objects persistence ignorant, you won't want to use BsonAttributes to guide the serialization, and as you pointed out AutoMapping requires read/write properties, so you would have to register the class maps yourself. For example, the class:
public class C {
private ObjectId id;
private int x;
public C(ObjectId id, int x) {
this.id = id;
this.x = x;
}
public ObjectId Id { get { return id; } }
public int X { get { return x; } }
}
Can be mapped using the following initialization code:
BsonClassMap.RegisterClassMap<C>(cm => {
cm.MapIdField("id");
cm.MapField("x");
});
Note that the private fields cannot be readonly. Note also that deserialization bypasses your constructor and directly initializes the private fields (.NET serialization works this way also).
Here's a full sample program that tests this:
http://www.pastie.org/1822994
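For illustration, a minimal round-trip sketch using that class map (the connection string, database and collection names are assumptions, and it uses the newer MongoClient API rather than the driver version this answer was originally written against):
using MongoDB.Bson;
using MongoDB.Bson.Serialization;
using MongoDB.Driver;
public static class Program
{
    public static void Main()
    {
        BsonClassMap.RegisterClassMap<C>(cm => {
            cm.MapIdField("id");
            cm.MapField("x");
        });
        var collection = new MongoClient("mongodb://localhost")
            .GetDatabase("test")
            .GetCollection<C>("c");
        collection.InsertOne(new C(ObjectId.GenerateNewId(), 42));
        // Deserialization writes straight into the private fields;
        // the constructor is not invoked.
        var loaded = collection.Find(d => d.X == 42).FirstOrDefault();
        System.Console.WriteLine("{0} {1}", loaded.Id, loaded.X);
    }
}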

I'd go with parsing the BSON documents and moving the parsing logic to a factory.
First define a factory base class, which contains a builder class. The builder class will act as the DTO, but with additional validation of the values before constructing the domain object.
public abstract class TransportFactory<TSource>
{
public Transport Create(TSource source)
{
return Create(source, new TransportBuilder());
}
protected abstract Transport Create(TSource source, TransportBuilder builder);
protected class TransportBuilder
{
private TransportId transportId;
private PersonCapacity personCapacity;
internal TransportBuilder()
{
}
public TransportBuilder WithTransportId(TransportId value)
{
this.transportId = value;
return this;
}
public TransportBuilder WithPersonCapacity(PersonCapacity value)
{
this.personCapacity = value;
return this;
}
public Transport Build()
{
// TODO: Validate the builder's fields before constructing.
return new Transport(this.transportId, this.personCapacity);
}
}
}
Now, create a factory subclass in your repository. This factory will construct domain objects from the BSON documents.
public class TransportRepository
{
public Transport GetMostPopularTransport()
{
// Query MongoDB for the BSON document.
BsonDocument transportDocument = mongo.Query(...);
return TransportFactory.Instance.Create(transportDocument);
}
private class TransportFactory : TransportFactory<BsonDocument>
{
public static readonly TransportFactory Instance = new TransportFactory();
protected override Transport Create(BsonDocument source, TransportBuilder builder)
{
return builder
.WithTransportId(new TransportId(source["transportId"].AsString))
.WithPersonCapacity(new PersonCapacity(source["personCapacity"].AsInt32))
.Build();
}
}
}
The advantages of this approach:
The builder is responsible for building the domain object. This allows you to move some trivial validation out of the domain object, especially if the domain object doesn't expose any public constructors.
The factory is responsible for parsing the source data.
The domain object can focus on business rules. It's not bothered with parsing or trivial validation.
The abstract factory class defines a generic contract, which can be implemented for each type of source data you need. For example, if you need to interface with a web service that returns XML, you just create a new factory subclass:
public class TransportWebServiceWrapper
{
private class TransportFactory : TransportFactory<XDocument>
{
protected override Transport Create(XDocument source, TransportBuilder builder)
{
// Construct domain object from XML.
}
}
}
The parsing logic of the source data is close to where the data originates, i.e. the parsing of BSON documents is in the repository, the parsing of XML is in the web service wrapper. This keeps related logic grouped together.
Some disadvantages:
I haven't tried this approach in large and complex projects yet, only in small-scale projects. There may be some difficulties in some scenarios I haven't encountered yet.
It's quite some code for something seemingly simple. Especially the builders can grow quite large. You can reduce the amount of code in the builders by converting all the WithXxx() methods to simple properties.
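For instance, the property-based variant of the builder could be as small as this (a sketch; it keeps the same Build() validation hook):
protected class TransportBuilder
{
    public TransportId TransportId { get; set; }
    public PersonCapacity PersonCapacity { get; set; }
    public Transport Build()
    {
        // TODO: Validate the properties before constructing.
        return new Transport(this.TransportId, this.PersonCapacity);
    }
}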

A better approach to handling this now is using MapCreator (which was possibly added after most of these answers were written).
e.g. I have a class called Time with three readonly properties: Hour, Minute and Second. Here's how I get it to store those three values in the database and to construct new Time objects during deserialization.
BsonClassMap.RegisterClassMap<Time>(cm =>
{
cm.AutoMap();
cm.MapCreator(p => new Time(p.Hour, p.Minute, p.Second));
cm.MapProperty(p => p.Hour);
cm.MapProperty(p => p.Minute);
cm.MapProperty(p => p.Second);
});
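For comparison, the same technique applied to the Transport and TransportID types from the original question might look roughly like this (an untested sketch; PersonCapacity would get the same treatment as TransportID):
BsonClassMap.RegisterClassMap<TransportID>(cm =>
{
    cm.MapCreator(v => new TransportID(v.IdString));
    cm.MapProperty(v => v.IdString);
});
BsonClassMap.RegisterClassMap<Transport>(cm =>
{
    cm.MapCreator(t => new Transport(t.TransportID, t.PersonCapacity));
    cm.MapProperty(t => t.TransportID);
    cm.MapProperty(t => t.PersonCapacity);
});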

Niels has an interesting solution but I propose a much different approach:
Simplify your data model.
I say this because you are trying to convert RDBMS-style entities to MongoDB, and they don't map over very well, as you have found.
One of the most important things to think about when using any NoSQL solution is your data model. You need to free your mind of much of what you know about SQL and relationships and think more about embedded documents.
And remember, MongoDB is not the right answer for every problem, so try not to force it to be. The examples you are following may work great with standard SQL servers, but don't kill yourself trying to figure out how to make them work with MongoDB - they probably don't. Instead, I think a good exercise would be figuring out the correct way to model the example data with MongoDB.
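As a concrete illustration, the Transport aggregate from the question might be persisted as a small, flat document rather than a graph of mapped value objects (a sketch; the class and property names are illustrative):
// One possible document shape for a transport: the value objects collapse
// into plain fields, and anything owned by the aggregate is embedded in the
// same document.
public class TransportDocument
{
    public string Id { get; set; }            // the transport number
    public int PersonCapacity { get; set; }   // number of seats
}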

Consider NoRM, an open-source ORM for MongoDB in C#.
Here are some links:
http://www.codevoyeur.com/Articles/20/A-NoRM-MongoDB-Repository-Base-Class.aspx
http://lukencode.com/2010/07/09/getting-started-with-mongodb-and-norm/
https://github.com/atheken/NoRM (download)


Refactoring to make code open for extensions but closed for modifications

For my project I need to send metrics to AWS.
I have a main class called SendingMetrics.
private CPUMetric _cpuMetric;
private RAMMetric _ramMetric;
private HDDMetric _hddMetric;
private CloudWatchClient _cloudwatchClient; //AWS client whose Send() method sends metrics to AWS
public SendingMetrics()
{
_cpuMetric = new CPUMetric();
_ramMetric = new RAMMetric();
_hddMetric = new HDDMetric();
_cloudwatchClient = new CloudWatchClient();
InitializeTimer();
}
private void InitializeTimer()
{
//here I initialize a Timer object which will call SendMetrics() every 60 seconds.
}
private void SendMetrics()
{
SendCPUMetric();
SendRAMMetric();
SendHDDMetric();
}
private void SendCPUMetric()
{
_cloudwatchClient.Send("CPU_Metric", _cpuMetric.GetValue());
}
private void SendRAMMetric()
{
_cloudwatchClient.Send("RAM_Metric", _ramMetric.GetValue());
}
private void SendHDDMetric()
{
_cloudwatchClient.Send("HDD_Metric", _hddMetric.GetValue());
}
Also I have CPUMetric, RAMMetric and HDDMetric classes that look pretty much the same, so I will just show the code of one class.
internal sealed class CPUMetric
{
private int _cpuThreshold;
public CPUMetric()
{
_cpuThreshold = 95;
}
public int GetValue()
{
var currentCpuLoad = ... //logic for getting machine CPU load
if(currentCpuLoad > _cpuThreshold)
{
return 1;
}
else
{
return 0;
}
}
}
So the problem is that my example does not satisfy clean coding. I have 3 metrics to send, and if I need to introduce a new metric I have to create a new class, initialize it in the SendingMetrics class and modify that class, which is not what I want. I want to satisfy the Open/Closed principle, so it is open for extension but closed for modification.
What is the right way to do it? I would move those send methods (SendCPUMetric, SendRAMMetric, SendHDDMetric) to the corresponding classes (the SendCPUMetric method to the CPUMetric class, SendRAMMetric to RAMMetric, etc.), but how do I modify the SendingMetrics class so it is closed for modification and does not have to change when I add a new metric?
In object-oriented languages like C# the Open/Closed Principle (OCP) is usually achieved through polymorphism: objects of the same kind react differently to one and the same message. Looking at your class SendingMetrics, it's obvious that the class works with different types of metrics. The good thing is that your SendingMetrics class talks to all types of metrics in the same way, by sending the same message (calling GetValue()). Hence you can introduce a new abstraction by creating an interface IMetric that is implemented by the concrete metric types. That way you decouple your SendingMetrics class from the concrete metric types, which means the class does not know about the specific metric types. It only knows IMetric and treats them all in the same way, which makes it possible to add any new collaborator (type of metric) that implements the IMetric interface (open for extension) without the need to change the SendingMetrics class (closed for modification). This also requires that the objects of the different metric types are not created within the SendingMetrics class but, for example, by a factory or outside of the class, and injected as IMetric instances.
In addition to using polymorphism to achieve OCP by introducing the IMetric interface, you can also use inheritance to remove redundancy: introduce an abstract base class for all metric types that implements the behaviour common to all of them.
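A minimal sketch of that shape (the Name property and the collection injection are illustrative assumptions, not taken from the question):
using System.Collections.Generic;
internal interface IMetric
{
    string Name { get; }
    int GetValue();
}
internal sealed class SendingMetrics
{
    private readonly IReadOnlyCollection<IMetric> _metrics;
    private readonly CloudWatchClient _cloudwatchClient;
    // The concrete metrics are created outside this class (by a factory or
    // the composition root) and injected, so adding a new metric type
    // requires no change here.
    public SendingMetrics(IReadOnlyCollection<IMetric> metrics, CloudWatchClient client)
    {
        _metrics = metrics;
        _cloudwatchClient = client;
        InitializeTimer();
    }
    private void InitializeTimer()
    {
        // Timer calling SendMetrics() every 60 seconds, as in the question.
    }
    private void SendMetrics()
    {
        foreach (var metric in _metrics)
            _cloudwatchClient.Send(metric.Name, metric.GetValue());
    }
}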
Your design is almost correct. You have 3 data retrievers and 1 data sender, so it's easy to add more metrics (more retrievers) without affecting current metrics (open for extension, closed for modification); you just need a bit more refactoring to reduce duplicated code.
Instead of having 3 metric classes that look very similar, where only the line below differs:
var currentCpuLoad = ... //logic for getting machine CPU load
you can create a generic metric like this:
internal interface IGetMetric
{
int GetData();
}
internal sealed class Metric
{
private int _threshold;
private IGetMetric _getDataService;
public Metric(IGetMetric getDataService)
{
_threshold = 95;
_getDataService = getDataService;
}
public int GetValue()
{
var currentValue = _getDataService.GetData();
if(currentValue > _threshold)
{
return 1;
}
else
{
return 0;
}
}
}
Then just create 3 classes that implement that interface. This is just one way to reduce the code duplication; you could also use inheritance (though I don't like inheritance), or a Func parameter (see the sketch after the updated code below).
UPDATED: added class to get CPU metric
internal class CPUMetricService : IGetMetric
{
public int GetData() { return ....; }
}
internal class RAMMetricService : IGetMetric
{
public int GetData() { return ....; }
}
public class AllMetrics
{
private List<Metric> _metrics = new List<Metric>()
{
new Metric(new CPUMetricService()),
new Metric(new RAMMetricService())
};
public void SendMetrics()
{
_metrics.ForEach(m => ....);
}
}
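And a sketch of the Func-based alternative mentioned above (ReadCpuLoad is a hypothetical helper):
using System;
internal sealed class DelegateMetric
{
    private readonly int _threshold;
    private readonly Func<int> _readValue;
    // The reading logic is passed in as a delegate instead of a separate
    // IGetMetric implementation per metric.
    public DelegateMetric(Func<int> readValue, int threshold = 95)
    {
        _readValue = readValue;
        _threshold = threshold;
    }
    public int GetValue()
    {
        return _readValue() > _threshold ? 1 : 0;
    }
}
// Usage: var cpuMetric = new DelegateMetric(() => ReadCpuLoad());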

C# / DDD: How to model entities with internal state objects not instantiable by the domain layer when using onion architecture?

I am in the process of migrating a "big ball of mud" (BBOM)-like system towards a system based on the ideas of domain driven design.
After various iterations of refactoring, domain aggregates/entities are currently modelled using inner state objects, as described by Vaughn Vernon in this article, for example: https://vaughnvernon.co/?p=879#comment-1896
So basically, an entity might look like this:
public class Customer
{
private readonly CustomerState state;
public Customer(CustomerState state)
{
this.state = state;
}
public Customer()
{
this.state = new CustomerState();
}
public string CustomerName => this.state.CustomerName;
[...]
}
As of today, the state object in this system is always a database table wrapper coming from the currently used proprietary data access framework of the application, which resembles an Active Record pattern. All the state objects therefore inherit from a base class that is part of the data access framework. At this time, it is not possible to use POCOs as state objects, Entity Framework or any of that.
The application currently uses a classic layer architecture in which the infrastructure (including the mentioned table wrappers / state objects) is at the bottom, followed by the domain. The domain knows the infrastructure and the repositories are implemented in the domain, using the infrastructure. As you can see above, most entities contain a public constructor for conveniently creating new instances inside the domain, which internally just creates a new state object (because the domain knows it).
Now, we would like to further evolve this and gradually turn the architecture around, resulting more in an "onion" kind of architecture. In that architecture, the domain would only contain repository interfaces, and the actual implementations would be provided by the infrastructure layer sitting on top of it. In this case, the domain could no longer know the actual state objects / database table wrappers.
One idea to solve this would be to have the state objects implement interfaces defined by the domain, and this actually seems like a good solution for now. It is also technically possible because, even though the state objects must inherit from a special data access base class, they are free to implement interfaces.
So the above example would change to something like:
public class Customer
{
private readonly ICustomerState state;
public Customer(ICustomerState state)
{
this.state = state;
}
public Customer()
{
this.state= <<<-- what to do here??;
}
[...]
}
So when the repository (now implemented in the infrastructure) instantiates a new Customer, it can easily pass in the database wrapper object which implements ICustomerState. So far, so good.
However, when creating new entities in the domain, it is no longer possible to also create the inner state object as we no longer know the actual implementation of it.
There are several possible solutions to this, but none of them seem really attractive:
We could always use abstract factories for creating new entities, and those factories would then be implemented by the infrastructure. While there are certain cases where a domain factory is appropriate due to the complexity of the entity, I would not want to have to use one in every case as they lead to a lot of clutter in the domain and to yet another dependency being passed around.
Instead of directly using the database table wrappers as state objects, we could use another class (POCO) which just holds the values and then gets translated from/to database wrappers by the infrastructure. This might work but it would end up in a lot of additional mapping code and result in 3 or more classes per database table (DB wrapper, state object, domain entity) complicating maintenance. We would like to avoid this, if possible.
To avoid passing around factories, the constructor inside the entity could call some magic, singleton-like StateFactory.Instance.Create<TState>() method for creating the inner state object. It would then be the infrastructure's responsibility to register an appropriate implementation for it. A similar approach would be to somehow get the DI container and resolve the factory from there. I personally don't really like this sort of Service Locator approach but it might be acceptable in this special case.
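To make that last option a bit more concrete, it could look roughly like this (just a sketch; all names are hypothetical, and it is shown as a static class rather than a singleton instance):
using System;
using System.Collections.Generic;
// The domain owns the factory; the infrastructure registers the concrete
// state implementations at startup.
public static class StateFactory
{
    private static readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();
    // Called by the infrastructure layer during bootstrapping.
    public static void Register<TState>(Func<TState> factory) where TState : class
    {
        factories[typeof(TState)] = () => factory();
    }
    public static TState Create<TState>() where TState : class
    {
        return (TState)factories[typeof(TState)]();
    }
}
// Infrastructure bootstrap:   StateFactory.Register<ICustomerState>(() => new CustomerTableWrapper());
// Domain entity constructor:  this.state = StateFactory.Create<ICustomerState>();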
Are there any better options that I'm missing?
Domain-driven design is not a good fit for big balls of mud. Trying to apply DDD in big systems is not as effective as plain object-oriented design. Try to think in terms of objects that collaborate and hide the complexity of their data, and start thinking in methods/behavior that manipulate the object internals.
In order to achieve an onion architecture I would suggest the following rules:
Try to avoid ORMs (EF, Hibernate, etc.) in your business rules, because they pull database complexity (DataContext, DataSet, getters, setters, anemic models, code smells, etc.) into business code.
In business rules use composition; the key is to inject the collaborating objects (the actors in the system) through constructors, and try to keep business rules pure.
Ask the object to do something with the data.
Invest time in the design of the object API.
Leave implementation details (database, cloud, Mongo, etc.) to the end. Implement the details inside the class and don't let their complexity spread outside it.
Don't try to force design patterns into your code; use them only when needed.
Here is how I would design business rules with objects for readability and maintainability:
public interface IProductBacklog
{
KeyValuePair<bool, int> TryAddProductBacklogItem(string description);
bool ExistProductBacklogItem(string description);
bool ExistProductBacklogItem(int backlogItemId);
bool TryDeleteProductBacklogItem(int backlogItemId);
}
public sealed class AddProductBacklogItemBusinessRule
{
private readonly IProductBacklog productBacklog;
public AddProductBacklogItemBusinessRule(IProductBacklog productBacklog)
{
this.productBacklog = productBacklog ?? throw new ArgumentNullException(nameof(productBacklog));
}
public int Execute(string productBacklogItemDescription)
{
if (productBacklog.ExistProductBacklogItem(productBacklogItemDescription))
throw new InvalidOperationException("Duplicate");
KeyValuePair<bool, int> result = productBacklog.TryAddProductBacklogItem(productBacklogItemDescription);
if (!result.Key)
throw new InvalidOperationException("Error adding productBacklogItem");
return result.Value;
}
}
public sealed class DeleteProductBacklogItemBusinessRule
{
private readonly IProductBacklog productBacklog;
public DeleteProductBacklogItemBusinessRule(IProductBacklog productBacklog)
{
this.productBacklog = productBacklog ?? throw new ArgumentNullException(nameof(productBacklog));
}
public void Execute(int productBacklogItemId)
{
if (!productBacklog.ExistProductBacklogItem(productBacklogItemId))
throw new InvalidOperationException("Not exists");
if(!productBacklog.TryDeleteProductBacklogItem(productBacklogItemId))
throw new InvalidOperationException("Error deleting productBacklogItem");
}
}
public sealed class SqlProductBacklog : IProductBacklog
{
//High performance, not loading unnecessary data
public bool ExistProductBacklogItem(string description)
{
//Sql implementation
throw new NotImplementedException();
}
public bool ExistProductBacklogItem(int backlogItemId)
{
//Sql implementation
throw new NotImplementedException();
}
public KeyValuePair<bool, int> TryAddProductBacklogItem(string description)
{
//Sql implementation
throw new NotImplementedException();
}
public bool TryDeleteProductBacklogItem(int backlogItemId)
{
//Sql implementation
throw new NotImplementedException();
}
}
public sealed class EntityFrameworkProductBacklog : IProductBacklog
{
//Use EF here
public bool ExistProductBacklogItem(string description)
{
//EF implementation
throw new NotImplementedException();
}
public bool ExistProductBacklogItem(int backlogItemId)
{
//EF implementation
throw new NotImplementedException();
}
public KeyValuePair<bool, int> TryAddProductBacklogItem(string description)
{
//EF implementation
throw new NotImplementedException();
}
public bool TryDeleteProductBacklogItem(int backlogItemId)
{
//EF implementation
throw new NotImplementedException();
}
}
public class ControllerClientCode
{
private readonly IProductBacklog productBacklog;
//Inject from Services, IoC, etc to unit test
public ControllerClientCode(IProductBacklog productBacklog)
{
this.productBacklog = productBacklog;
}
public void AddProductBacklogItem(string description)
{
var businessRule = new AddProductBacklogItemBusinessRule(productBacklog);
var generatedId = businessRule.Execute(description);
//Do something with the generated backlog item id
}
public void DeleteProductBacklogItem(int productBacklogId)
{
var businessRule = new DeleteProductBacklogItemBusinessRule(productBacklog);
businessRule.Execute(productBacklogId);
}
}

3 Tier application with singleton Pattern

I am just creating a 3-tier WinForms application with the following pattern.
-- MY BASE CLASS : DAL Class
public class Domain
{
public string CommandName = string.Empty;
public List<Object> Parameters = new List<Object>();
public void Save()
{
List<Object> Params = this.SaveEntity();
this.ExecuteNonQuery(CommandName, Params.ToArray());
}
public void Delete()
{
List<Object> Params = this.DeleteEntity();
this.ExecuteNonQuery(CommandName, Params.ToArray());
}
public void Update()
{
List<Object> Params = this.UpdateEntity();
this.ExecuteNonQuery(CommandName, Params.ToArray());
}
protected virtual List<Object> SaveEntity()
{
return null;
}
protected virtual List<Object> UpdateEntity()
{
return null;
}
protected virtual List<Object> DeleteEntity()
{
return null;
}
public int ExecuteNonQuery(string SqlText, params object[] Params)
{
/*
* Code block for executing Sql
*/
return 0;
}
}
My business layer class, which is going to inherit the DAL class
-- MY Children CLASS : BLL CLASS
public class Person : Domain
{
public string name
{
get;
set;
}
public string number
{
get;
set;
}
protected override List<object> SaveEntity()
{
this.Parameters.Add(name);
this.Parameters.Add(number);
return this.Parameters;
}
}
-- USE
This is the way to use my base class
void Main()
{
Person p = new Person();
p.name = "Vijay";
p.number = "23";
p.Save();
}
Questions
Is this the right architecture I am following, and is there any chance to create the base class as a singleton?
Is there any better architecture?
Is there any pattern I can follow to extend my functionality?
Kindly suggest.
Let's see; I will try to give my input.
What I see here is that you are trying to do ORM, so please change the name of the base class from Domain to something else.
Is this the right architecture I am following, and is there any chance to create the base class as a singleton?
Why do you need your base class to be a singleton? You would be inheriting your base class and creating instances of the child classes; you would almost never create an instance of the base itself (99% of the time :) ).
Is there any better architecture?
Understand this: to do a certain thing there can be multiple ways. It's just a matter of which one suits you the most.
Is there any pattern I can follow to extend my functionality?
Always remember the SOLID principles, which give you loose coupling and allow easy extensibility.
SOLID
There are a couple of changes that I would suggest. Instead of a base class, start with an interface and then inherit it to make an abstract class.
Also make sure your base class can do all the CRUD functionality; I do not see any retrieval functionality here. How are you planning to do it? You probably need a repository class that returns the entities of your application, so when you need a person you would just ask the repository to return all the Person records.
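For example, a rough sketch of that suggestion (the names are illustrative, not a prescription):
using System.Collections.Generic;
// Contract first, then an abstract base class that keeps the shared plumbing.
public interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Save(T entity);
    void Delete(T entity);
}
public abstract class RepositoryBase<T> : IRepository<T>
{
    public abstract T GetById(int id);
    public abstract IEnumerable<T> GetAll();
    public abstract void Save(T entity);
    public abstract void Delete(T entity);
    // Shared command/connection plumbing lives in protected helpers here.
    protected int ExecuteNonQuery(string commandName, params object[] parameters)
    {
        // database access code
        return 0;
    }
}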
All said and done, there are lots of ORM tools that provide this kind of functionality and save developer time. It's better to learn those technologies, for example LINQ to SQL.
Is this the right architecture I am following
There is no architecture which is optimal for any problem without context. That said, there are things that you can do to make your life more difficult. Singleton is not your problem in your implementation.
Is there any better architecture?
Probably, yes. Just glimpsing at the code, I see quite a lot of stuff that is going to hurt you in the near and not so near future.
First, a piece of advice: get the basics right, don't run before you can walk. This may be the cause for the downvotes.
Some random issues:
You are talking about 3-Tier architecture, but there are technically no tiers there, not even layers. Person doesn't look like business logic to me: if I understood correctly, it also must supply the string for the commands to execute, so it has to know SQL.
Empty virtual methods should be abstract. If you want to be able to execute arbitrary SQL, move this outside the class.
As #Anand pointed out, there are no methods to query
CommandName and Parameters are exposed as fields instead of properties
CommandName is not a name; Domain doesn't look like a fitting name for that class.
It looks like an awkward solution to a well-known problem (ORM). You say that you want to be able to execute custom SQL but any decent ORM should be able to let you do that.
Suggested reads: Code Complete for the basic stuff and Architecting Applications for the Enterprise for some clarity on the architectural patterns you could need.
As suggested by Anand, I removed all SQL-related functions from my base class and put them in another class, Sql.
Following that, I made the Sql class a singleton, and I stored the Sql instance in BaseDAL so it is accessible in all DAL classes.
My code looks something like this
public class BaseDAL
{
// Singleton Instance
protected Sql _dal = Sql.Instance;
public string CommandName = string.Empty;
public List<Object> Parameters = new List<Object>();
public void Save()
{
List<Object> Params = this.SaveEntity();
_dal.ExecuteNonQuery(CommandName, Params.ToArray());
}
public void Delete()
{
List<Object> Params = this.DeleteEntity();
_dal.ExecuteNonQuery(CommandName, Params.ToArray());
}
public void Update()
{
List<Object> Params = this.UpdateEntity();
_dal.ExecuteNonQuery(CommandName, Params.ToArray());
}
protected virtual List<Object> SaveEntity()
{
return null;
}
protected virtual List<Object> UpdateEntity()
{
return null;
}
protected virtual List<Object> DeleteEntity()
{
return null;
}
// Other functions, like DataTable and DataSet querying
}
And the new SQL class is
public class Sql
{
// All other functions are also present in this class, for DataTable, DataSet and many others,
// so this class is more than enough for me.
public int ExecuteNonQuery(string SqlText, params object[] Params)
{
// Code block for executing SQL
return 0;
}
}
Regarding CommandName and Parameters being exposed as fields instead of properties: in my actual solution they are properties. Also, I have a method in BaseDAL to query data, to help with implementing the Person class.
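Roughly, that query helper inside BaseDAL looks like this (a sketch; ExecuteDataTable on the Sql class is an assumption):
// Returns a DataTable so the BLL classes can hydrate themselves from it.
protected DataTable ExecuteQuery(string commandName, params object[] parameters)
{
    return _dal.ExecuteDataTable(commandName, parameters);
}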

Can I use more generic interfaces to simplify my classes to use a command pattern?

I'm trying to make an app I'm designing more generic and implement the command pattern into it to use manager classes to invoke methods exposed by interfaces.
I have several classes with the GetItem() and GetList() methods in them, some are overloaded. They accept different parameters as I was trying to use dependency injection, and they return different types. Here are a couple of examples:
class DatastoreHelper
{
public Datastore GetItem(string DatastoreName)
{
// return new Datastore(); from somewhere
}
public Datastore GetItem(int DatastoreID)
{
// return new Datastore(); from somewhere
}
public List<Datastore> GetList()
{
// return List<Datastore>(); from somewhere
}
public List<Datastore> GetList(HostSystem myHostSystem)
{
// return List<Datastore>(); from somewhere
}
}
class HostSystemHelper
{
public HostSystem GetItem(int HostSystemID)
{
// return new HostSystem(); from somewhere
}
public List<HostSystem> GetList(string ClusterName)
{
//return new List<HostSystem>(); from somewhere
}
}
I'm trying to figure out if I could use a generic interface for these two methods, and a manager class which would effectively be the controller. Doing this would increase the reusability of my manager class.
interface IGetObjects
{
object GetItem();
object GetList();
}
class GetObjectsManager
{
private IGetObjects mGetObject;
public GetObjectsManager(IGetObjects GetObject)
{
this.mGetObject = GetObject;
}
public object GetItem()
{
return this.mGetObject.GetItem();
}
public object GetList()
{
return this.mGetObject.GetList();
}
}
I know I'd have to ditch passing in the parameters to the methods themselves and use class properties instead, but I'd lose the dependency injection. I know I'd have to cast the return objects at the calling code into what they're supposed to be. So my helper classes would then look like this:
class DatastoreHelper
{
public string DatastoreName { get; set; }
public string DatastoreID { get; set; }
public object GetItem()
{
// return new Datastore(); from somewhere
}
public List<object> GetList()
{
// return List<Datastore>(); from somewhere
}
}
class HostSystemHelper
{
public int HostSystemID { get; set; }
public string ClusterName {get; set;}
public object GetItem()
{
// return new HostSystem(); from somewhere
}
public List<object> GetList()
{
//return new List<HostSystem>(); from somewhere
}
}
But is the above a good idea or am I trying to fit a pattern in somewhere it doesn't belong?
EDIT: I've added some more overloaded methods to illustrate that my classes are complex and contain many methods, some overloaded many times according to different input params.
If I understand the concept correctly, a design like this is a really bad idea:
class DatastoreHelper
{
public string DatastoreName { get; set; }
public string DatastoreID { get; set; }
public object GetItem()
{
// return new Datastore(); from somewhere
}
public List<object> GetList()
{
// return List<Datastore>(); from somewhere
}
}
The reason is that getting results would now be a two-step process: first setting properties, then calling a method. This presents a whole array of problems:
Unintuitive (everyone is used to providing parameters as part of the method call)
Moves the parameter binding away from the call site (granted, this would probably mean "moves them to the previous LOC", but still)
It's no longer obvious which method uses which property values
Take an instance of this object and just add a few threads for instant fun
Suggestions:
Make both IGetObjects and GetObjectsManager generic so that you don't lose type safety. This loses you the ability to treat different managers polymorphically, but what is the point in that? Each manager will be in the end specialized for a specific type of object, and unless you know what that type is then you cannot really use the return value of the getter methods. So what do you stand to gain by being able to treat managers as "manager of unknown"?
Look into rewriting your GetX methods to accept an Expression<Func<T, bool>> instead of bare values. This way you can use lambda predicates which will make your code massively more flexible without really losing anything. For example:
helper.GetItem(i => i.DataStoreID == 42);
helper.GetList(i => i.DataStoreName.Contains("Foo"));
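Putting those two suggestions together, the contract and the manager could look roughly like this (a sketch, not a drop-in implementation):
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
public interface IGetObjects<T>
{
    T GetItem(Expression<Func<T, bool>> predicate);
    List<T> GetList(Expression<Func<T, bool>> predicate);
}
public class GetObjectsManager<T>
{
    private readonly IGetObjects<T> source;
    public GetObjectsManager(IGetObjects<T> source)
    {
        this.source = source;
    }
    // The manager stays generic and strongly typed; each helper
    // (DatastoreHelper, HostSystemHelper, ...) implements IGetObjects<T> for its own type.
    public T GetItem(Expression<Func<T, bool>> predicate)
    {
        return this.source.GetItem(predicate);
    }
    public List<T> GetList(Expression<Func<T, bool>> predicate)
    {
        return this.source.GetList(predicate);
    }
}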
The first code samples look quite similar to the Repository pattern, and I think this is what you are trying to apply. The last sample is not good, and Jon told you why. However, instead of reinventing the wheel, read a bit about the Repository pattern (lots of questions about it on SO) because, if I understood correctly, this is what you really want.
About reuse: not many things are truly reusable, and persistence interfaces especially are not. There is the generic repository pattern (I consider it an anti-pattern) which tries to accomplish that, but really, does every application need the same persistence interface?
As a general guideline, when you design an object, design it to fulfil the specific application's needs; if it happens to be reused, that's a bonus, but reuse is not the primary purpose of an object.
It is not a good idea. Based on these examples you would be better off with a generic interface for the varying return types and parameters of GetItem/GetList. Though honestly, the prevalence of managers, the use of something as vague as GetItem in multiple places, and trying to fit your solution into design patterns (rather than defining the solution in terms of the patterns) are huge code smells to me for the wider solution.

How to make a Generic Repository?

I am wondering if anyone has any good tutorials (or maybe even a library that is already made and well documented) on making a generic repository.
I am currently using LINQ to SQL, but that might change, so I don't know if you can make a generic repository that would need little to no change if I were to, say, switch to Entity Framework.
Thanks
I think I should also add why I want a generic repository. The reason is that in my database I have corporate tables (users whose subscriptions are paid by someone else) and individual tables (people who find my site through Google or whatever and pay for their own subscription).
So I have two very similar tables. For instance, I have two settings tables: one for corporate users and one for individuals.
Now, since they are two different tables, I need two different insert methods, and at this time only one field differs (the PK).
So now I need all these duplicate methods, and I don't want that. Maybe what I have in my database could be considered a design flaw (and maybe it is), but one reason behind it was that, if needed, I can break the database up into two separate databases very easily, and I am not going to change my design anytime soon.
Here is my answer to another question of the same type. Hope it helps:
Advantage of creating a generic repository vs. specific repository for each object?
Edit:
It sounds like you want to treat two concrete types as one logical type. To do that, first define the logical type:
public interface ISubscription
{
// ...
}
Then, define the concrete types as part of your data model (interfaces would be implemented in another partial class):
[Table("CorporateSubscription")]
public partial class CorporateSubscription : ISubscription
{
}
[Table("IndividualSubscription")]
public partial class IndividualSubscription : ISubscription
{
}
Next, define the repository which operates on the logical type:
public interface ISubscriptionRepository
{
CorporateSubscription GetCorporate(string key);
IndividualSubscription GetIndividual(int userId);
IEnumerable<ISubscription> ListAll();
IEnumerable<CorporateSubscription> ListCorporate();
IEnumerable<IndividualSubscription> ListIndividual();
void Insert(ISubscription subscription);
}
Finally, implement the interface by using both tables:
public class SubscriptionRepository : ISubscriptionRepository
{
private readonly YourDataContext _dataContext;
public SubscriptionRepository(YourDataContext dataContext)
{
_dataContext = dataContext;
}
#region ISubscriptionRepository
public CorporateSubscription GetCorporate(string key)
{
return _dataContext.CorporateSubscriptions.Where(c => c.Key == key).FirstOrDefault();
}
public IndividualSubscription GetIndividual(int userId)
{
return _dataContext.IndividualSubscriptions.Where(i => i.UserId == userId).FirstOrDefault();
}
public IEnumerable<ISubscription> ListAll()
{
return ListCorporate()
.Cast<ISubscription>()
.Concat(ListIndividual().Cast<ISubscription>());
}
public IEnumerable<CorporateSubscription> ListCorporate()
{
return _dataContext.CorporateSubscriptions;
}
public IEnumerable<IndividualSubscription> ListIndividual()
{
return _dataContext.IndividualSubscriptions;
}
public void Insert(ISubscription subscription)
{
if(subscription is CorporateSubscription)
{
_dataContext.CorporateSubscriptions.InsertOnCommit((CorporateSubscription) subscription);
}
else if(subscription is IndividualSubscription)
{
_dataContext.IndividualSubscriptions.InsertOnCommit((IndividualSubscription) subscription);
}
else
{
// Forgive me, Liskov
throw new ArgumentException(
"Only corporate and individual subscriptions are supported",
"subscription");
}
}
#endregion
}
Here is an example of an insert. Don't get too wrapped up in the presenter class; I just needed a situation in which subscriptions would be created based on a flag:
public class CreateSubscriptionPresenter
{
private readonly ICreateSubscriptionView _view;
private readonly ISubscriptionRepository _subscriptions;
public CreateSubscriptionPresenter(
ICreateSubscriptionView view,
ISubscriptionRepository subscriptions)
{
_view = view;
_subscriptions = subscriptions;
}
public void Submit()
{
ISubscription subscription;
if(_view.IsCorporate)
{
subscription = new CorporateSubscription();
}
else
{
subscription = new IndividualSubscription();
}
subscription.Notes = _view.Notes;
_subscriptions.Insert(subscription);
}
}
Great Linq to Sql resources:
A T4 template that generates exactly what is created by default, but can be fully customised.
http://l2st4.codeplex.com/
Using LINQ to SQL for a multi-tier application. It has a GenericObjectDataSource which I have found very handy.
http://multitierlinqtosql.codeplex.com
Search all properties of an IQueryable with one single search
http://naspinski.codeplex.com/
