Consider the following interface:
public interface SomeRepo
{
    IEnumerable<IThings> GetThingsByParameters(DateTime startDate,
                                               DateTime endDate,
                                               IEnumerable<int> categoryIds,
                                               IEnumerable<int> userIds,
                                               IEnumerable<int> typeIds,
                                               string someStringToFilterBy);
}
Is there any benefit in doing this instead?
public IEnumerable<IThings> GetThingsByParameters(IParameter parameter);
Where IParameter is an object defined as such:
public interface IParameter
{
    DateTime StartDate { get; }
    DateTime EndDate { get; }
    IEnumerable<int> CategoryIds { get; }
    IEnumerable<int> UserIds { get; }
    IEnumerable<int> TypeIds { get; }
    string SomeStringToFilterBy { get; }
}
I don't see any benefit in the IParameter version other than making the call a bit more readable, and the extra layer of complexity doesn't seem worth it.
Is there anything I may be missing? Thanks.
If it's just for that single place, it may not be worth all that much.
Creating a class on its own does have some possible benefits, but they're quite dependent on exactly that; whether you would be able to reuse it.
You could add some sort of early data validation to your IParameter implementation (e.g. endDate can't be earlier than startDate; that's common sense, and you don't need to be a repository object to know it).
If some values are optional and some are not, a Parameters class gives you an opportunity to clearly distinguish these two categories.
It's much easier to find all usages of Parameters in your code than all the occurrences of raw "start date / end date / ids" packs.
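To make the validation point concrete, here is a minimal sketch of what such a parameter object could look like; the names loosely follow the question's IParameter, but the class itself is illustrative, not from the original post:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical parameter object; validation runs once at construction,
// before any repository is involved.
public sealed class ThingsQuery
{
    public DateTime StartDate { get; }
    public DateTime EndDate { get; }
    public IReadOnlyCollection<int> CategoryIds { get; }

    public ThingsQuery(DateTime startDate, DateTime endDate,
                       IEnumerable<int> categoryIds = null)
    {
        if (endDate < startDate)
            throw new ArgumentException("endDate cannot be earlier than startDate.");

        StartDate = startDate;
        EndDate = endDate;
        // Optional filter: an empty collection means "don't filter by category".
        CategoryIds = new List<int>(categoryIds ?? new int[0]);
    }
}
```

Constructing the object up front means the repository can assume its inputs are already sane.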
That being said, readability isn't a minor concern. I feel that six parameters per method is twice too many, and based on experience, I wouldn't bet it will stop at six.
Clean Code (Robert C. Martin) argues that using many parameters in a method is a bad idea (the book recommends at most three). If you have a method that requires this many parameters, you should rethink your design; it often means your model needs one more class.
The extreme of that is developing your own expression system, where IParameter has a string operator ("Equals", "LessThanOrEqualTo", "Plus", etc.) and an array IParameter[] called Children or something. Of course, if you're going to do that, why not use something built-in like LINQ or C# Expression trees? If this isn't backed by a database and you need string filtering, that's a good option (or use a DataTable's built-in expression filtering/parsing if you don't care about performance).
If this is backed by a database, it's usually a bad idea to expose arbitrary querying on a repository that ties to, say, a SQL database, because the end developer may not know which columns are indexed and may write ill-performing queries (particularly if they don't have easy access to production-scale data). It's better to expose specific query methods that take specific arguments, each mapping to essentially one SQL SELECT that you can fine-tune.
This is also more performant, because by exposing a method that takes explicit arguments you control exactly which indexes the end developer can query against.
This also makes unit testing dependencies of your repository much easier, because a strongly typed method like that is easy to mock. If you instead allow your services to define their own queries, you end up building a fake in-memory abstraction of the database using LINQ-to-Objects, and that can sometimes give false positives.
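As an illustration of both points, here is a sketch of an explicit query method plus an in-memory fake for it. All names and shapes here are assumptions for illustration, not from the original post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative entity.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public DateTime PlacedOn { get; set; }
}

public interface IOrderRepository
{
    // Maps to a single tuned SELECT, e.g. backed by a known index
    // on (CustomerId, PlacedOn). Callers cannot write arbitrary queries.
    IReadOnlyList<Order> GetByCustomerAndDateRange(int customerId, DateTime from, DateTime to);
}

// An in-memory stand-in, showing why such methods are easy to fake in tests.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly List<Order> _orders = new List<Order>();

    public void Add(Order o) => _orders.Add(o);

    public IReadOnlyList<Order> GetByCustomerAndDateRange(int customerId, DateTime from, DateTime to)
        => _orders.Where(o => o.CustomerId == customerId
                           && o.PlacedOn >= from
                           && o.PlacedOn <= to)
                  .ToList();
}
```

The fake only has to reproduce one well-defined query, not an arbitrary predicate language.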
There's nothing inherently wrong with the idea; I just see the typical use case as either being very explicit if this is backed by a database, or leveraging an already-existing filtering/expression system if it's all in-memory.
Related
I have been carving out a section of code for a reporting app. I have derived many interfaces, for example
public interface IDateRangeSearchable
{
DateTime StartDate { get; }
DateTime EndDate { get; }
}
for which I have created helper methods that build and combine the corresponding expressions, so the same logic isn't rewritten over and over and consistency and business logic stay in one place:
public static Expression<Func<Thingy, bool>> AddDateRangeFilterExpr<T>(this T model, Expression<Func<Thingy,bool>> webOrderItemExpr)
where T : IDateRangeSearchable
{
return webOrderItemExpr.AndAlso(thing => thing.Date >= model.StartDate && thing.Date <= model.EndDate);
}
AndAlso is a custom function that essentially combines expressions so as to avoid using multiple Where statements.
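For readers who haven't seen one, a common way to implement such an AndAlso combiner is to rebind the second lambda's parameter to the first's and then AND the two bodies together. This is a generic sketch of that technique, not necessarily the asker's exact implementation:

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionExtensions
{
    // Combine two predicates into one, so a single Where clause suffices.
    public static Expression<Func<T, bool>> AndAlso<T>(
        this Expression<Func<T, bool>> left,
        Expression<Func<T, bool>> right)
    {
        var parameter = left.Parameters[0];
        // Rewrite the right body to use the left lambda's parameter.
        var rightBody = new ReplaceParameterVisitor(right.Parameters[0], parameter)
            .Visit(right.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.AndAlso(left.Body, rightBody), parameter);
    }

    private sealed class ReplaceParameterVisitor : ExpressionVisitor
    {
        private readonly ParameterExpression _from;
        private readonly ParameterExpression _to;

        public ReplaceParameterVisitor(ParameterExpression from, ParameterExpression to)
        {
            _from = from;
            _to = to;
        }

        protected override Expression VisitParameter(ParameterExpression node)
            => node == _from ? _to : base.VisitParameter(node);
    }
}
```

The parameter rewrite is the important step; simply invoking the second expression inline is not translatable by some LINQ providers.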
But here is the problem I see developing from this pattern:
Option 1: I write a custom implementation of AddDateRangeFilterExpr for entity objects Thingy1, Thingy2, ad infinitum.
Problem: this is not DRY. It's probably what I will do starting out, since I am mainly concerned with 3 or 4 entity objects and prefer duplication to the wrong abstraction. But I am looking for the right abstraction here, since more entities may be added.
Option 2: I add an interface to the entity object exposing a "Date" field, and rewrite the signature.
Problem: the date fields I deal with vary: nullable, not nullable, and named "Date", "DateAdded", "thingDate", etc. That means multiple interfaces AND implementations; clunky, and probably even less DRY.
Option 3:???
Call me crazy, but I am not an expression wizard yet, though I am very interested in them. I want to know whether it is possible to transform an expression from:
Expression<Func<Thingy, DateTime?>> dateExpr = t => t.Date;
into
Expression<Func<Thingy, bool>> thingExpr = t => t.Date >= someDate;
which would allow me to just pass in the expression which would then perform that date filter on the column specified.
thingExpr = model.AddDateRangeFilterExpr(thingExpr, dateExpr);
Then I would only need an implementation for DateTime and DateTime?, and for entity objects with multiple date columns I could choose different columns depending on what was needed.
In other words, can you somehow transform a selector (correct term?) for a date field into a boolean predicate built from that date column?
Sorry, I am really at the border of my knowledge with expressions here, so my language gets less precise as I tread into what I don't fully understand. I mainly want to find out whether my wishful thinking could bear fruit in this direction. I'm open to criticism of the whole approach as well, or to resources for learning more about expressions in this context.
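For what it's worth, the transformation asked about is possible with the Expression API: take the selector's body as the column reference and wrap it in comparisons against the range bounds. A hedged sketch, with illustrative names (the Thingy entity here is a stand-in):

```csharp
using System;
using System.Linq.Expressions;

// Example entity (illustrative).
public class Thingy
{
    public DateTime? Date { get; set; }
}

public static class DateFilterBuilder
{
    // Turn a date-column selector (t => t.Date) into a range predicate
    // (t => t.Date >= start && t.Date <= end).
    public static Expression<Func<T, bool>> BuildRangeFilter<T>(
        Expression<Func<T, DateTime?>> dateSelector,
        DateTime start, DateTime end)
    {
        var param = dateSelector.Parameters[0];
        var column = dateSelector.Body; // e.g. t.Date

        // Lift the constants to DateTime? so the comparisons type-check
        // against the nullable column; null values compare as false.
        var startConst = Expression.Constant(start, typeof(DateTime?));
        var endConst = Expression.Constant(end, typeof(DateTime?));

        var body = Expression.AndAlso(
            Expression.GreaterThanOrEqual(column, startConst),
            Expression.LessThanOrEqual(column, endConst));

        return Expression.Lambda<Func<T, bool>>(body, param);
    }
}
```

Because the selector's own parameter is reused, the resulting lambda remains translatable by LINQ providers; a non-nullable DateTime overload would look the same minus the lifting.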
I'm trying to use DDD while developing a new system. In this system I have places, and I need to grant access to places based on which AD groups you're a member of. I also need to get a list of allowed places from a list of AD groups.
I've come up with the following:
interface IPlaceRepository
{
    Place[] GetPlacesForGroups(AdGroup[] adGroups);
}
class AdGroup
{
    string Name { get; private set; }
}
class Place
{
string Name { get; private set; }
}
Now I need to add a function that grants a group access to a particular place. According to DDD, which is the right way to do it? I have two suggestions.
I assume that AD groups can be considered value objects.
Add a function to Place.
void GiveAccessTo(AdGroup adGroup) { ... }
and add a function to IPlaceRepository.
void AddGroupToPlace(Place p, AdGroup g) { ... }
Then I need to inject the IPlaceRepository into Place for use inside GiveAccessTo.
Another option may be to create an ISecurityService. I can think of a method like this on that service:
void GiveAccessToPlace(AdGroup g, Place p)
In the same way as option 1 I need to implement a method on IPlaceRepository and inject the repository into the service.
Which is the DDD way to do this?
Repositories persist complete aggregates, normally you wouldn't have an AddGroupToPlace method.
Since AdGroup is a value object, you can use the GiveAccessTo method to add the groups to the Place aggregate. After doing that, you use the repository to persist the complete Place aggregate.
Services are mostly used when an operation spans multiple aggregates. This can often be avoided using events though.
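A minimal sketch of that aggregate-centric flow, with illustrative names (the Save method is an assumption about the repository's shape):

```csharp
using System.Collections.Generic;

// Value object: identified only by its name.
public class AdGroup
{
    public string Name { get; }
    public AdGroup(string name) { Name = name; }
}

// Aggregate root: owns its own access list.
public class Place
{
    private readonly List<AdGroup> _allowedGroups = new List<AdGroup>();

    public string Name { get; }
    public IReadOnlyList<AdGroup> AllowedGroups => _allowedGroups;

    public Place(string name) { Name = name; }

    public void GiveAccessTo(AdGroup group) => _allowedGroups.Add(group);
}

public interface IPlaceRepository
{
    // Persists the complete aggregate; no AddGroupToPlace method needed.
    void Save(Place place);
}

// Usage: place.GiveAccessTo(group); repository.Save(place);
```

Note that the repository is never injected into the entity; the caller mutates the aggregate, then hands the whole thing to the repository.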
(indirect answer)
Not sure if DDD has some rules specifically for your case. I would follow these steps:
Draw the aggregates on a paper pad, paying attention to the aggregate root (which entity contains which)
Draw your queries
List several approaches to store the list of allowed items
Keep in mind how you would store this, in a document-oriented database or elsewhere (this is where a materialized view may complicate things)
Several approaches can be valid, return to your queries and pick the best approach (memory consumption, speed, least number of items required to be queried)
Separate security from business API, use Authorization pattern from the framework you use
Only use white-listing (list all allowed resources, deny all by default)
I'm currently trying to pick a C# ORM to use with my PostgreSQL database, and I'm interested in the micro-ORMs, since they let me better utilize the power of Postgres (and since full-blown ORMs are hard to configure; while Dapper simply works, trying to deal with NHibernate has left a forehead-shaped dent in my screen...).
Anyway, PetaPoco currently has the lead, but there is one feature I need and can't figure out whether it has (to be fair, I couldn't find it in the other ORMs either): mapping of custom types.
My PostgreSQL database uses the hstore and PostGIS extensions, which define custom types. I don't expect any ORM to support those types out of the box (it's hard enough to find one that supports PostgreSQL!), but I want to be able to provide my own mappers for them, so that when I read them as columns or send them as parameters, PetaPoco automatically uses my mappers.
Is this even possible? The closest I could find is IDbParameter support, but those cover built-in types, and I need to write mappers for extension types that are not on that list...
Based on Schotime's comment, I came up with half a solution: how to parse the hstore from query results into the object. I'm leaving this question open in case someone wants to supply the other half.
I need to define my own mapper. Obviously I want to keep PetaPoco's default mapping for regular types, so it's only natural to inherit PetaPoco.StandardMapper - but that won't work, because StandardMapper implements PetaPoco.IMapper's members without the virtual modifier, so I can't override them (I can only shadow them, which doesn't really help).
What I did instead was to implement IMapper directly, and delegate regular types to an instance of PetaPoco.IMapper:
public class MyMapper : PetaPoco.IMapper
{
    private PetaPoco.StandardMapper standardMapper = new PetaPoco.StandardMapper();

    public PetaPoco.TableInfo GetTableInfo(Type pocoType)
    {
        return standardMapper.GetTableInfo(pocoType);
    }

    public PetaPoco.ColumnInfo GetColumnInfo(PropertyInfo pocoProperty)
    {
        return standardMapper.GetColumnInfo(pocoProperty);
    }

    public Func<object, object> GetFromDbConverter(PropertyInfo targetProperty, Type sourceType)
    {
        if (targetProperty.PropertyType == typeof(HStore))
        {
            return x => HStore.Create((string)x);
        }
        return standardMapper.GetFromDbConverter(targetProperty, sourceType);
    }

    public Func<object, object> GetToDbConverter(PropertyInfo sourceProperty)
    {
        if (sourceProperty.PropertyType == typeof(HStore))
        {
            return x => ((HStore)x).ToSqlString();
        }
        return standardMapper.GetToDbConverter(sourceProperty);
    }
}
The HStore object is constructed similarly to the one in Schotime's gist.
I also need to register the mapper:
PetaPoco.Mappers.Register(Assembly.GetAssembly(typeof(MainClass)),new MyMapper());
PetaPoco.Mappers.Register(typeof(HStore),new MyMapper());
Now, all of this works perfectly when I read query results - but not when I write query parameters (even though I defined GetToDbConverter). It seems my mapper simply isn't called when writing query parameters. Any idea how to make that work?
I've built an open source application, and I'd be curious to know how others are handling customer-specific requests. It's important to me to keep the app simple; I'm not trying to make it all things for all people. Apps can get bloated, complex, and just about unusable that way. However, there are some customer-specific options that would be nice (it just wouldn't apply to all customers). For example...
Say we have a domain entity called Server. In the UI, we let a customer pick from a list of servers. For one company, it's helpful to filter the servers by location (US, Germany, France, etc...). It would be easy enough to add a server property like this:
public class Server
{
public Location Location { get; set; }
// other properties here
}
My concern is that Server could become bloated with properties over time. And even if I only add location, not all customers would care about that property.
One option is to allow for user-defined fields:
public class Server
{
public string UserField1 { get; set; }
public string UserField2 { get; set; }
public string UserField3 { get; set; }
// etc...
// other properties here
}
Is that the best way to handle this? I don't like the fact that type safety is gone by making everything a string. Are there other/better ways that people are handling issues like this? Is there even a design pattern for something like this?
In my opinion, a good design pattern for something like this is to use schemas at the database level and then basic inheritance at the class level.
CREATE TABLE dbo.A (
    ColumnA INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
    ColumnB VARCHAR(50),
    ColumnC INT
    -- etc.
)
And now we have a client who needs some specific functionality, so let's create an extension to this table in a different schema:
CREATE TABLE CustomerA.A (
ColumnA INT NOT NULL PRIMARY KEY,
Location VARCHAR(50)
)
But now we have another client who needs to extend it differently:
CREATE TABLE CustomerB.A (
ColumnA INT NOT NULL PRIMARY KEY,
DataCenterID INT
)
Though the fields may not be relevant, you get the idea. Now we build the customer-specific domain models:
public abstract class A
{
public int ColumnA { get; set; }
public string ColumnB { get; set; }
public int ColumnC { get; set; }
}
public class CustomerA_A : A
{
public string Location { get; set; }
}
public class CustomerB_A : A
{
public int DataCenterID { get; set; }
}
And so now when we need to build something for Customer A, we'll build their subclass, and for Customer B theirs, and so on.
Now, FYI, this is the beginning of a very dynamic system. I say that because the piece that's missing, that's not yet dynamic, is the user interface. There are many ways that can be accomplished, but they're way outside the scope of this question. It is something you'll have to consider, though, because the way you manage the interface will determine how you even know which subclass to build.
I hope this has helped.
The usual approach early on is to use the config XML files for this sort of thing. But programming for client-specific needs requires a whole mindset around how you program. Refer to this answer to a similar question.
It always depends on how much customization you want to allow. In our product we went as far as enabling users to completely define their own entities, with properties and relations among them. Basically, every EntityObject, as we call our entities, in the end consists of a value collection and a reference to a meta-model describing the values within it. We designed our own query language that lets us query the database with expressions that are translatable to any target language (although we currently only do SQL and .NET).
The game does not end there: you quickly find that things like validation rules, permissions, default values and so on become must-haves. All of this then requires UI support, at least for maintaining the meta-model.
So it really depends on how much adjustment an end user should be able to perform. I'd guess that in most cases simple user fields, as you described, will be sufficient. In that case I would provide a single field and store JSON text in it. In the UI you can then offer at least a semi-decent editor, allowing structure and extensibility.
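A minimal sketch of the single-JSON-field idea, assuming System.Text.Json; the property and column names are illustrative:

```csharp
using System.Collections.Generic;
using System.Text.Json;

public class Server
{
    // The user-defined values, as seen by application code.
    public Dictionary<string, string> UserFields { get; set; }
        = new Dictionary<string, string>();

    // What would be stored in the single database column.
    public string UserFieldsJson
    {
        get => JsonSerializer.Serialize(UserFields);
        set => UserFields = JsonSerializer.Deserialize<Dictionary<string, string>>(value)
                            ?? new Dictionary<string, string>();
    }
}
```

This keeps the schema stable while still allowing per-customer fields; the trade-off is that the values inside the JSON are not individually typed or indexed by the database (unless it has native JSON support).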
Option 1: Say "no". :-)
And while I say that (half) jokingly, there is some truth to it. Too often, developers open themselves up to endless customization by allowing one or two custom features, setting the snowball in motion.
Of course, this has to be balanced, and it sounds like you may be doing this to an extent. But if you truly want to keep your app simple, then keep it simple and avoid adding customizations like this.
Option 2: Inheritance.
If you really need to add the customization, I would lean toward building a base class with all "standard" options, and then building customer-specific classes containing the customer-specific additions.
For example:
public class Server
{
// all standard properties here
}
Then for Joe's Pizza, you can have:
public class JoesPizzaServer : Server
{
public Location Location { get; set; }
}
The side-benefit to this is that it will allow you to base your presentation views off of the client-specific (or base) models.
For example, in MVC you could set up your view models like this, and then you could have specific views for each customer.
For example, Bob's Burgers would have its own view on the base model:
@model MyApp.Server
@* implement the base form *@
And Joe's Pizza's view would use the custom model:
@model MyApp.JoesPizzaServer
@* implement the base form (a partial view) with additional custom fields *@
MVC does a really good job of supporting this type of pattern. If you're not using MVC (maybe WPF or Web Forms), there are still ways to leverage partial "view" files for accomplishing something similar.
Of course, your database can (and probably should) support a similar inheritance model. Entity Framework even supports various inheritance models like this.
I may be wrong here, but it looks like you want to handle different versions of your software with the same code base. I can think of two approaches for this:
Actually define different versions for it and handle changes for each client. This won't give you problems from the domain-modeling point of view, but will require a supporting infrastructure, which will have to scale according to your client requirements. There are some related questions out there (e.g. this, this and this).
Handle this at the domain-model level, as a user-defined configuration. The advantage of this approach is that you don't have to incorporate multiple versions of your software, but this comes at the expense of making your model more generic and potentially more complex. Also your tests will surely have to be adapted to handle different scenarios. If you are going in that direction I would model an object representing the attribute (with a name and a value) and consider the Server class as having a collection of attributes. In that way your model still captures your requirements in an OO style.
HTH
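As a sketch of the attribute-collection idea from option 2 above (all names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

// An object representing one user-defined attribute: a name and a value.
public class ServerAttribute
{
    public string Name { get; }
    public string Value { get; }

    public ServerAttribute(string name, string value)
    {
        Name = name;
        Value = value;
    }
}

// The Server holds a collection of attributes instead of fixed UserFieldN columns.
public class Server
{
    private readonly List<ServerAttribute> _attributes = new List<ServerAttribute>();

    public IReadOnlyList<ServerAttribute> Attributes => _attributes;

    public void SetAttribute(string name, string value)
    {
        _attributes.RemoveAll(a => a.Name == name); // last write wins per name
        _attributes.Add(new ServerAttribute(name, value));
    }

    public string GetAttribute(string name)
        => _attributes.FirstOrDefault(a => a.Name == name)?.Value;
}
```

This maps naturally to a one-to-many attributes table in the database, at the cost of values being strings unless you also store a type tag.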
An approach from Python that I think would work rather well here is a dictionary. The key is your field name; the value is the, errrrr... value ;)
It'd be simple enough to represent in a database too.
I've been using LINQ extensively in my recent projects; however, I have not been able to find a way of dealing with objects that doesn't seem either sloppy or impractical.
I'll also note that I primarily work with ASP.net.
I hate the idea of exposing my data context or LINQ-returned types to my UI code. I prefer finer-grained control over my business objects, and it also seems too tightly coupled to the database to be good practice.
Here are the approaches I've tried ..
Project items into a custom class
dc.TableName.Select(λ => new MyCustomClass(λ.ID, λ.Name, λ.Monkey)).ToList();
This obviously tends to result in a lot of wireup code for creating, updating etc...
Creating a wrapper around returned object
public class MyCustomClass
{
    LinqClassName _core;

    internal MyCustomClass(LinqClassName blah)
    {
        _core = blah;
    }

    public int ID { get { return _core.ID; } }
    public string Name { get { return _core.Name; } set { _core.Name = value; } }
}
...
dc.TableName.Select(λ => new MyCustomClass(λ)).ToList();
Seems to work pretty well, but reattaching for updates seems nigh impossible, somewhat defeating the purpose.
I also tend to like using LINQ Queries for transformations and such through my code and I'm worried about a speed hit with this method, although I haven't tried it with large enough sets to confirm yet.
Creating a wrapper around returned object while persisting data context
public class MyCustomClass
{
LinqClassName _core;
MyDataContext _dc;
...
}
Persisting the data context within my object greatly simplifies updates but seems like a lot of overhead especially when utilizing session state.
A quick note: I know the usage of λ is not mathematically correct here. I tend to use it for my bound variable because it stands out visually, and in most lambda statements it is the transformation that matters, not the variable. Not sure if that makes any sense, but blah.
Sorry for the extremely long question.
Thanks in advance for your input and Happy New Years!
I create "Map" extension functions on the tables returning from the LINQ queries. The Map function returns a plain old CLR object. For example:
public static MyClrObject Map(this MyLinqObject o)
{
MyClrObject myObject = new MyClrObject()
{
stringValue = o.String,
secondValue = o.Second
};
return myObject;
}
You can then add the Map function to the select list in the LINQ query and have LINQ return the CLR Object like:
return (from t in dc.MyLinqObject
select t.Map()).FirstOrDefault();
If you are returning a list, you can use ToList() to get a List<> back. If you prefer to create your own list types, you need to do two things. First, create a constructor that takes an IEnumerable<> of the underlying type as its single argument; that constructor should copy the items from the IEnumerable<> collection. Second, create a static extension method to call that constructor from the LINQ query:
public static MyObjectList ToMyObjectList(this IEnumerable<MyClrObject> collection)
{
return new MyObjectList (collection);
}
Once these methods are created, they kind of hide in the background. They don't clutter up the LINQ queries and they don't limit what operations you can perform in the query.
This blog entry has a more thorough explanation.