I'm trying to find a collective name for these non-"helper" classes: classes that encapsulate method results (e.g. "SignupResult"), classes that hold multiple filter values (e.g. "ContactSearchFilter"), my SortDirection enum, etc. I want to organize them properly but can't find a name for them as a whole. Help?
Do they really have anything in common that would justify a category name of their own?
If you want to organize such files, I suggest putting them in the same folder/namespace as their dependencies, i.e. the enum belongs in the same namespace as the dictionary you use it with, SignupResult belongs together with the other signup process classes etc.
Depends on what you do with it. If you save it in the database, it's effectively an 'Entity'. If you just use it to pass variables around, I'd call it a 'Holder' class (though that's not really a formal term).
It's worth considering that if you have too many of these, your design may not be so great. Rather than passing results around so much, you should probably be performing actions in response to things happening. JMHO. FWIW.
Perhaps create a class called UserSession or similar and expose things like SignupResult/ContactSearchFilter as properties.
I am tackling a large refactor of a project. I had asked this question to confirm/understand the direction I should go in, and I think I got the answer I wanted, which is not to throw away years' worth of code. So now begins the challenge of refactoring it. I've been reading Martin Fowler's and Michael Feathers' books, and they have a lot of insight, but I am looking for advice on the ultimate goal of where I want the application to be.
So to reiterate what the application does: it's a dynamic forms system, with lots of validation logic and data logic between the fields. The main record that gets inserted is the set of form fields that is on the page. Another part of it is 'Actions' that you can perform for a person. These 'Actions' can differ client by client, and there are hundreds of them. There is also talk that we could somehow make an engine that can eventually take on other similar areas, where a 'person' could be something else (such as a student or an employee). So I want to build something very decoupled. We have one codebase, but different DBs for different clients. The set of form fields on the page is dynamic, but the DB is not - it is translated into the specific DB table via stored procs. So the generic set of fields is sent to the stored proc, and the stored proc decides what to do with the fields (i.e. figures out which table they need to go to). These tables are in fact pretty static, meaning they are not really dynamic, and there is a certain structure to them.
What I'm struggling with specifically is how to set up a good way to build the dynamic form control page. It seems the majority of the logic will be in code on the UI/aspx.cs page, because it's loading controls onto the web page. Is there some way I can do this in a streamlined fashion, so the aspx.cs page isn't 5000 lines long? I have a 'FORM' object, and one of its properties is its 'FIELDS'. This object is loaded up in the business layer and the data layer, but now on the front end it has to loop through the FIELDS and output the controls onto the page. Also, some way to control the placement would be useful too - not sure how to get that into this model....
Also, from another point of view - how can I 'really' get this into an object-oriented structure? Technically, they can create forms of anything, and those form fields can represent any object. For example, today they can create a set of form fields that represents a 'person'; tomorrow they can create a set of form fields that represents 'furniture'. How can I possibly translate this to a person or a furniture object (or should I even be trying to?). And I don't really have control over the form fields, because they can create whatever they like....
Any thought process would be really helpful - thanks!
How can I possibly translate this to a person or a furniture object
(or should I even be trying to?)
If I understand you correctly, you probably shouldn't try to convert these fields to specific objects since the nature of your application is so dynamic. If the stored procedures are capable of figuring out which combination of fields belongs to which tables, then great.
If you can change the DB schema, I would suggest coming up with something much more dynamic. Rather than have a single table for each type of dynamic object, I would create the following schema:
Object {
ID
Name
... (clientID, etc.) ...
}
Property {
ID
ObjectID
Name
DBType (int, string, object-id, etc.)
FormType ( textbox, checkbox, etc.)
[FormValidationRegex] <== optional, could be used by field controls
Value
}
If you can't change the database schema, you can still apply the following to the old system using the stored procedures and fixed tables:
Then when you read a specific object in from the database, you can loop through each of the properties, get the form type, and simply add the appropriate generic form field to the page:
foreach (Property p in Object.Properties)
{
    switch (p.FormType)
    {
        case FormType.CheckBox:
            PageForm.AddField(new CheckboxFormField(p.Name, p.Value));
            break;
        case FormType.Email:
            PageForm.AddField(new EmailFormField(p.Name, p.Value));
            break;
        case FormType.etc:
            ...
            break;
    }
}
Of course, I threw in a PageForm object, as well as CheckboxFormField and EmailFormField objects. The PageForm object could simply be a placeholder, and the CheckboxFormField and EmailFormField could be UserControls or ServerControls.
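For illustration, here is a minimal sketch of what those helper types might look like in a WebForms setting. PageForm, FormField and the concrete field classes are hypothetical names carried over from the snippet above, not an existing API.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical field abstraction: each field knows how to render itself as a control.
public abstract class FormField
{
    protected FormField(string name, object value) { Name = name; Value = value; }
    public string Name { get; private set; }
    public object Value { get; private set; }
    public abstract Control CreateControl();
}

public class CheckboxFormField : FormField
{
    public CheckboxFormField(string name, object value) : base(name, value) { }
    public override Control CreateControl()
    {
        return new CheckBox { ID = Name, Text = Name, Checked = Value != null && Convert.ToBoolean(Value) };
    }
}

// PageForm just funnels the generated controls into a PlaceHolder on the page.
public class PageForm
{
    private readonly PlaceHolder _placeholder;
    public PageForm(PlaceHolder placeholder) { _placeholder = placeholder; }
    public void AddField(FormField field) { _placeholder.Controls.Add(field.CreateControl()); }
}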
I would not recommend trying to control placement. Just list each field one by one vertically. This is becoming more and more popular anyway, even with static forms whose layout can be controlled completely. Most signup forms, for example, follow this convention.
I hope that helps. If I understood your question wrong, or if you'd like further explanations, let me know.
Not sure I understand the question, but there are two toolboxes suitable for writing generic code: generics and reflection - typically used in combination.
I don't think I really understand what you're trying to do, but a method using reflection to identify all the properties of an object might look like this:
using System.Reflection;
(...)
public void VisitProperties(object subject)
{
    Type subjectType = subject.GetType();
    // Walk every public property on the runtime type of the subject.
    foreach (PropertyInfo info in subjectType.GetProperties())
    {
        object value = info.GetValue(subject, null);
        Console.WriteLine("The name of the property is " + info.Name);
        Console.WriteLine("The value is " + value);
    }
}
You can also check out an entry on my blog where I discuss using attributes on objects in conjunction with reflection. It's actually discussing how this can be utilized to write generic UI. Not exactly what you want, but at least the same principles could be used.
http://codepatrol.wordpress.com/2011/08/19/129/
This means you could create your own custom attributes, or use those that already exist within the .NET Framework, to describe your types. Attributes could specify validation rules, the field label, even field placement.
public class Person
{
    [FieldLabel("First name")]
    [ValidationRules(Rules.NotEmpty | Rules.OnlyCharacters)]
    [FormColumn(1)]
    [FormRow(1)]
    public string FirstName { get; set; }

    [FieldLabel("Last name")]
    [ValidationRules(Rules.NotEmpty | Rules.OnlyCharacters)]
    [FormColumn(2)]
    [FormRow(1)]
    public string LastName { get; set; }
}
Then you'd use the method described in my blog to identify these attributes and take the appropriate action - e.g. placing the fields in the proper row, giving them the correct label, and so forth. I won't propose how to solve each of these, but reflection is a great and simple tool for getting descriptive information about an unknown type.
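As a rough illustration of the reflection step (not the exact code from the blog post), here's a self-contained sketch; FieldLabelAttribute below is a hypothetical, minimal version of the attribute used in the Person example.
using System;
using System.Reflection;

// Minimal stand-in for the FieldLabel attribute used above.
[AttributeUsage(AttributeTargets.Property)]
public class FieldLabelAttribute : Attribute
{
    public FieldLabelAttribute(string text) { Text = text; }
    public string Text { get; private set; }
}

public static class FormBuilder
{
    // Resolves a display label for each property: the attribute value if present,
    // otherwise the property name itself.
    public static void PrintLabels(Type formType)
    {
        foreach (PropertyInfo info in formType.GetProperties())
        {
            var attrs = (FieldLabelAttribute[])info.GetCustomAttributes(typeof(FieldLabelAttribute), true);
            string label = attrs.Length > 0 ? attrs[0].Text : info.Name;
            Console.WriteLine("{0} -> {1}", info.Name, label);
        }
    }
}
A call like FormBuilder.PrintLabels(typeof(Person)) would then list each property together with its resolved label.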
I found XML invaluable in this same situation. You can build an object graph in your code to represent the form easily enough, and that object graph can be loaded from / saved to a DB just as easily.
You can turn your object graph into XML and use XSLT to generate the HTML for display. You then also have the benefit of customising this transform for different clients/versions/etc. I also store the XML in the database for performance and to give me a publish function.
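As a rough sketch of that XSLT step (not the poster's actual code), something along these lines could turn a form definition into HTML; the stylesheet file name is a hypothetical placeholder.
using System.IO;
using System.Xml.Linq;
using System.Xml.Xsl;

public static class FormRenderer
{
    // Builds a small XML form definition in code and transforms it to HTML via XSLT.
    public static string Render()
    {
        var form = new XElement("form",
            new XElement("field", new XAttribute("name", "FirstName"), new XAttribute("type", "textbox")),
            new XElement("field", new XAttribute("name", "IsMember"), new XAttribute("type", "checkbox")));

        var xslt = new XslCompiledTransform();
        xslt.Load("FormToHtml.xslt");   // client/version-specific stylesheet (hypothetical file)

        using (var writer = new StringWriter())
        {
            xslt.Transform(form.CreateReader(), null, writer);
            return writer.ToString();   // markup to render on the page
        }
    }
}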
You need some specific code to deal with the incoming data, as you're going to be accessing the raw request post. You need to validate the incoming data against what you believe was shown; that stops people spoofing or meddling with your forms.
I hope that all makes sense.
In order to protect ourselves from failures caused by renaming properties (let's say you regenerate your POCO classes because you have changed some column names in the relevant DB table), is it good practice to declare constant strings that hold the property names?
public const string StudentCountPropertyName = "StudentCount";
public int StudentCount { get; set; }
For example, think about data binding, where you type the property name into the DataFieldName attribute explicitly.
Or is this not a good idea, and is there a better, still safer way?
It is always a good idea IMHO to move any 'magic strings' to constants.
You could consider using lambda expressions to 'pick' your properties, for example:
GetDataFieldName(studentCollection => studentCollection.Count)
You will have to implement GetDataFieldName yourself, using a bit of reflection. You can look at HtmlHelperExtensions from MVC to see how it can be done. This is the safest approach: it gives you compile-time errors when something goes wrong and allows easy property renaming using existing refactoring tools.
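For example, a minimal version of that helper might look like the following; DataFieldNameHelper and the Student/StudentCount usage are names assumed purely for this sketch.
using System;
using System.Linq.Expressions;

public static class DataFieldNameHelper
{
    // Extracts the property name from a simple member-access lambda,
    // e.g. GetDataFieldName((Student s) => s.StudentCount) returns "StudentCount".
    public static string GetDataFieldName<T, TProperty>(Expression<Func<T, TProperty>> selector)
    {
        var member = selector.Body as MemberExpression;

        // Value-type properties get wrapped in a Convert node when boxed to object.
        if (member == null && selector.Body is UnaryExpression)
            member = ((UnaryExpression)selector.Body).Operand as MemberExpression;

        if (member == null)
            throw new ArgumentException("Expression must be a simple property access.", "selector");

        return member.Member.Name;
    }
}
Renaming the property with a refactoring tool then updates the lambda automatically, so there is no string left to fall out of sync.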
From one point of view: if you use this property name in multiple places, it is good practice. It will certainly help with refactoring; when you change the property name, you will see that you need to change the constant as well.
From another point of view, I guess it gets ugly when a class with 10 properties needs 10 additional consts. Another solution, if you want to avoid consts or explicitly typed names, is to get the property names through reflection.
Whether to use such an approach or not is for you to decide.
I think it's common practice to put these "magic strings" or "magic numbers" in some kind of strongly typed store.
Something you can consider is to code it in an aspect-oriented way.
For example, the calls to NotifyPropertyChanged can be implemented with an attribute backed by an AOP framework, like PostSharp.
[NotifyChange]
public int Value { get; private set; }
These tools also have some downsides, but I think there are scenarios where they can save you a lot of work.
I do not know if I fully understand your question, but if I understand it right I would use an attribute for that. An example is the ColumnAttribute in LINQ to SQL, which maps a property to a specific column in a database (http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.columnattribute.dbtype.aspx), as in this example:
[Column(Storage="ProductID", DbType="VarChar(150)", CanBeNull=false)]
public string Id { get; set; }
And I would never use DataFieldName; I would data-bind to the strongly typed objects (and of course also make an interface for the class that uses the property above, so I can easily change the implementation in the future ;))
I suppose if the names are used in many places then it would be easier just to change them in this one place and use the constant as described in your comment.
However, a change to a database column name and object property name implies a change to your conceptual data model. How often do you think this is going to happen? In the early stages of a project, whilst conceptual modelling and implementation are parallelised across a dev team, this may be quite fluid, but once the initial conceptual modelling is done (whether in a formalised, conscious manner or just organically), it's usually quite unlikely that fundamental things like these will change. For this reason I think it's relatively unusual to have to do this, and the technique will only be productive in edge cases.
Absolutely. It's a good idea.
By the way, I would argue that these kinds of things are better stored in application settings, because you can override such settings later in an application configuration file.
Done this way, you'll avoid re-compiling if some database, POCO or whatever changes, and since newer Visual Studio versions like 2010 can generate settings with "public" accessibility, you can share strongly-typed settings with any assembly that references the one containing them.
At the end of the day, I'd change your code to use DataBindingSettings.StudentCountPropertyName instead of a constant.
Easy to manage, more re-usable, and readable, as "you configure a data-binding with its settings".
Check this MSDN article to learn more about application settings:
http://msdn.microsoft.com/en-us/library/a65txexh(v=VS.100).aspx
I'm not really sure what tags should be on this sort of question so feel free to give me some suggestions if you think some others are more suited.
I have a dynamic object with an unknown number of properties on it; it comes from a sort of dynamic, self-describing data model that lets the user build the data model at runtime. However, because all of the fields holding information relevant to the user are dynamic properties, it's difficult to determine what the human-readable identifier should be, so it's left up to the administrator. (I don't think it matters, but this is an ASP.NET MVC3 application.) To help during debugging I started decorating some classes with DebuggerDisplayAttribute, which allows me to do things like
[DebuggerDisplay(@"\{Description = {Description}}")]
public class Group
to get a better picture of what a specific instance of an object is. This sort of setup would be perfect, but I can't seem to find an implementation with that flexibility. It is especially useful on my dynamic objects because the string value of the DebuggerDisplayAttribute is resolved by the .NET Framework, and I have implementations of TryGetMember on my base object class to handle the dynamic aspect. But this only helps during development. So I've added a field to the part of my object that is still strongly typed and called it Title, and I'd like to let the administrator define its format themselves, so to speak. For example, they might build out a very simplistic rental tracking system to show rentals, and they might specify a format string along the lines of
"{MovieTitle} (Due: {DueDate})"
When they save the record, I would like to add some logic that first updates the Title property by resolving the format string, substituting each placeholder with the value of the appropriate property on the dynamic object. So this might resolve to a title of
"Inception (Due: May 21, 2011)", or a more realistic scenario of a format string of
"{LastName}, {FirstName}"
I don't want the user to have to update the title of a record when they change the first name or last name field. I fully realize this will likely use reflection, but I'm hoping someone out there can give me some pointers or even a working example that handles complex format strings which could be a mix of literal text and placeholders.
I've not had much luck finding an implementation on the net that does what I want, since I'm not really sure which keywords would give me the most relevant search results.
You need two things:
1) A syntax for formatting strings
You have already described a syntax where variables are surrounded by braces; if you want to use that, you need to build a parser for it. Perhaps you also want to add ways to specify, say, a date or a number format.
2) Rules for resolving variables
If there is a single context object, you can use reflection and match variable names to properties; if your object model is more complex, you can add conventions for searching, say, a hierarchy of objects.
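To illustrate the single-context-object case, here is a rough sketch (it assumes plain CLR properties; for the poster's dynamic objects the lookup would go through TryGetMember rather than reflection, and TitleFormatter is a made-up name).
using System.Text.RegularExpressions;

public static class TitleFormatter
{
    // Replaces each {PropertyName} placeholder with the value of the matching
    // public property on the context object; unknown placeholders are left as-is.
    public static string Resolve(string format, object context)
    {
        return Regex.Replace(format, @"\{(\w+)\}", match =>
        {
            string propertyName = match.Groups[1].Value;
            var property = context.GetType().GetProperty(propertyName);
            if (property == null)
                return match.Value;

            object value = property.GetValue(context, null);
            return value == null ? string.Empty : value.ToString();
        });
    }
}
So TitleFormatter.Resolve("{LastName}, {FirstName}", person) would produce something like "Smith, John", and literal text outside the braces passes through untouched.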
If you are planning to base your model objects on dynamic, chances are you will find the Clay library on CodePlex interesting.
Back story:
So I've been stuck on an architecture problem for the past couple of nights on a refactor I've been toying with. Nothing important, but it's been bothering me. It's actually an exercise in DRY, and an attempt to take it to such an extreme that the DAL architecture is completely DRY. It's a purely philosophical/theoretical exercise.
The code is based in part on one of @JohnMacIntyre's refactorings, which I recently convinced him to blog about at http://whileicompile.wordpress.com/2010/08/24/my-clean-code-experience-no-1/. I've modified the code slightly, as I tend to, in order to take it one level further - usually just to see what extra mileage I can get out of a concept... anyway, my reasons are largely irrelevant.
Part of my data access layer is based on the following architecture:
abstract public class AppCommandBase : IDisposable { }
This contains basic stuff, like creation of a command object and cleanup after the AppCommand is disposed of. All of my command base objects derive from this.
abstract public class ReadCommandBase<T, ResultT> : AppCommandBase { }
This contains basic stuff that affects all read-commands - specifically in this case, reading data from tables and views. No editing, no updating, no saving.
abstract public class ReadItemCommandBase<T, FilterT> : ReadCommandBase<T, T> { }
This contains some more basic generic stuff - like the definition of methods required to read a single item from a table in the database, where the table name, key field name and field list names are defined as abstract properties (to be implemented by the derived class).
public class MyTableReadItemCommand : ReadItemCommandBase<MyTableClass, Int?> { }
This contains specific properties that define my table name, the list of fields from the table or view, the name of the key field, a method to parse the data out of the IDataReader row into my business object and a method that initiates the whole process.
Now, I also have this structure for my ReadList...
abstract public class ReadListCommandBase<T> : ReadCommandBase<T, IEnumerable<T>> { }
public class MyTableReadListCommand : ReadListCommandBase<MyTableClass> { }
The difference is that the List classes contain properties that pertain to list generation (i.e. PageStart, PageSize, Sort) and return an IEnumerable, vs. returning a single data object (which just requires a filter that identifies a unique record).
Problem:
I hate that I've got a bunch of properties in my MyTableReadListCommand class that are identical to those in my MyTableReadItemCommand class. I've thought about moving them to a helper class, but while that would centralize the member contents in one place, I'd still have identical members in each class that merely point to the helper class, which I still dislike.
My first thought was dual inheritance would solve this nicely, even though I agree that dual inheritance is usually a code smell - but it would solve this issue very elegantly. So, given that .NET doesn't support dual inheritance, where do I go from here?
Perhaps a different refactor would be more suitable... but I'm having trouble wrapping my head around how to sidestep this problem.
If anyone needs a full code base to see what I'm harping on about, I've got a prototype solution on my DropBox at http://dl.dropbox.com/u/3029830/Prototypes/Prototype%20-%20DAL%20Refactor.zip. The code in question is in the DataAccessLayer project.
P.S. This isn't part of an ongoing active project, it's more a refactor puzzle for my own amusement.
Thanks in advance folks, I appreciate it.
Separate the result processing from the data retrieval. Your inheritance hierarchy is already more than deep enough at ReadCommandBase.
Define an interface IDatabaseResultParser. Implement ItemDatabaseResultParser and ListDatabaseResultParser, both with a constructor parameter of type ReadCommandBase (and maybe convert that to an interface too).
When you call IDatabaseResultParser.Value() it executes the command, parses the results and returns a result of type T.
Your commands focus on retrieving the data from the database and returning it as tuples of some description (actual Tuples, an array of arrays, etc.); your parser focuses on converting those tuples into objects of whatever type you need. See NHibernate's IResultTransformer for an idea of how this can work (and it's probably a better name than IDatabaseResultParser too).
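To make the split concrete, here is a rough sketch of what the transformer side could look like; the names and the dictionary row representation are assumptions for illustration, not code from the DropBox sample.
using System;
using System.Collections.Generic;

// Rows come back from the command as simple name/value dictionaries; the
// transformer decides whether they become one object or a list of objects.
public interface IResultTransformer<TResult>
{
    TResult Transform(IEnumerable<IDictionary<string, object>> rows);
}

public class ItemTransformer<T> : IResultTransformer<T>
{
    private readonly Func<IDictionary<string, object>, T> _map;
    public ItemTransformer(Func<IDictionary<string, object>, T> map) { _map = map; }

    public T Transform(IEnumerable<IDictionary<string, object>> rows)
    {
        foreach (var row in rows)
            return _map(row);      // only the first row matters for a single item
        return default(T);
    }
}

public class ListTransformer<T> : IResultTransformer<IEnumerable<T>>
{
    private readonly Func<IDictionary<string, object>, T> _map;
    public ListTransformer(Func<IDictionary<string, object>, T> map) { _map = map; }

    public IEnumerable<T> Transform(IEnumerable<IDictionary<string, object>> rows)
    {
        var results = new List<T>();
        foreach (var row in rows)
            results.Add(_map(row));
        return results;
    }
}
The "item vs. list" difference now lives in two tiny classes instead of two parallel command hierarchies.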
Favor composition over inheritance.
Having looked at the sample I'll go even further...
Throw away AppCommandBase - it adds no value to your inheritance hierarchy, as all it does is check that the connection is not null and open, and create a command.
Separate query building from query execution and result parsing - now you can greatly simplify the query execution implementation as it is either a read operation that returns an enumeration of tuples or a write operation that returns the number of rows affected.
Your query builder could be wrapped up in one class that includes paging/sorting/filtering; however, it may be easier to build some form of limited structure around these so you can separate paging, sorting and filtering. If I were doing this I wouldn't bother building the queries; I would simply write the SQL inside an object that lets me pass in some parameters (effectively stored procedures in C#).
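For instance, one of those "stored procedures in C#" might look roughly like this; the interface shape and the query class are illustrative assumptions.
using System.Collections.Generic;

// Each query object carries its SQL text and parameters, so the execution code
// never needs to know how the statement was built.
public interface IDatabaseQuery
{
    string Sql { get; }
    IDictionary<string, object> Parameters { get; }
}

public class PersonByIdQuery : IDatabaseQuery
{
    private readonly int _id;
    public PersonByIdQuery(int id) { _id = id; }

    public string Sql
    {
        get { return "SELECT Id, FirstName, LastName FROM Person WHERE Id = @Id"; }
    }

    public IDictionary<string, object> Parameters
    {
        get { return new Dictionary<string, object> { { "@Id", _id } }; }
    }
}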
So now you have IDatabaseQuery / IDatabaseCommand / IResultTransformer and almost no inheritance =)
I think the short answer is that, in a system where multiple inheritance has been outlawed "for your protection", strategy/delegation is the direct substitute. Yes, you still end up with some parallel structure, such as the property for the delegate object. But it is minimized as much as possible within the confines of the language.
But let's step back from the simple answer and take a wider view....
Another big alternative is to refactor the larger design structure such that you inherently avoid this situation where a given class consists of the composite of behaviors of multiple "sibling" or "cousin" classes above it in the inheritance tree. To put it more concisely, refactor to an inheritance chain rather than an inheritance tree. This is easier said than done. It usually requires abstracting very different pieces of functionality.
The challenge you'll have in taking this tack that I'm recommending is that you've already made a concession in your design: You're optimizing for different SQL in the "item" and "list" cases. Preserving this as is will get in your way no matter what, because you've given them equal billing, so they must by necessity be siblings. So I would say that your first step in trying to get out of this "local maximum" of design elegance would be to roll back that optimization and treat the single item as what it truly is: a special case of a list, with just one element. You can always try to re-introduce an optimization for single items again later. But wait till you've addressed the elegance issue that is vexing you at the moment.
But you have to acknowledge that any optimization for anything other than the elegance of your C# code is going to put a roadblock in the way of design elegance for the C# code. This trade-off, much like the time/space trade-off in algorithm design, is fundamental to the very nature of programming.
As is mentioned by Kirk, this is the delegation pattern. When I do this, I usually construct an interface that is implemented by the delegator and the delegated class. This reduces the perceived code smell, at least for me.
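A rough sketch of that shape, using paging as the shared behaviour; the interface and class names are illustrative, and the command's real base class from the question is omitted for brevity.
// Both the helper and the command that uses it implement the same interface,
// so callers cannot tell that the properties are forwarded.
public interface IPagedQuery
{
    int PageStart { get; set; }
    int PageSize { get; set; }
}

public class PagingOptions : IPagedQuery
{
    public int PageStart { get; set; }
    public int PageSize { get; set; }
}

public class MyTableReadListCommand : IPagedQuery
{
    private readonly PagingOptions _paging = new PagingOptions();

    // Delegation: the command exposes the members, the helper holds the state.
    public int PageStart
    {
        get { return _paging.PageStart; }
        set { _paging.PageStart = value; }
    }

    public int PageSize
    {
        get { return _paging.PageSize; }
        set { _paging.PageSize = value; }
    }
}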
I think the simple answer is... Since .NET doesn't support multiple inheritance, there is always going to be some repetition when creating objects of a similar type. .NET simply does not give you the tools to re-use some classes in a way that would facilitate perfect DRY.
The not-so-simple answer is that you could use code generation tools, instrumentation, code dom, and other techniques to inject the objects you want into the classes you want. It still creates duplication in memory, but it would simplify the source code (at the cost of added complexity in your code injection framework).
This may seem unsatisfying like the other solutions, however if you think about it, that's really what languages that support MI are doing behind the scenes, hooking up delegation systems that you can't see in source code.
The question comes down to, how much effort are you willing to put into making your source code simple. Think about that, it's rather profound.
I haven't looked deeply at your scenario, but I have some thoughts on the dual-hierarchy problem in C#. To share code in a dual hierarchy, we need a different construct in the language: either a mixin, a trait (pdf) (C# research -pdf) or a role (as in Perl 6). C# makes it very easy to share code with inheritance (which is not the right operator for code reuse), and very laborious to share code via composition (you know, you have to write all that delegation code by hand).
There are ways to get a kind of mixin in C#, but it's not ideal.
The Oxygene (download) language (an Object Pascal for .NET) also has an interesting feature for interface delegation that can be used to create all that delegating code for you.
Small design question here. I'm trying to develop a calculation app in C#. I have a class, let's call it InputRecord, which holds hundreds of fields (multi-dimensional arrays). This InputRecord class will be used in a number of CalculationEngines. Each CalculationEngine can make changes to a number of fields in the InputRecord; these changes are steps needed for its calculation.
Now I don't want the local changes made to the InputRecord to be visible to the other CalculationEngine classes.
The first solution that comes to mind is using a struct, since structs are value types. However, I'd like to use inheritance: each CalculationEngine needs a few fields relevant only to that engine, so it has its own InputRecord, based on BaseInputRecord.
Can anyone point me to a design that will help me accomplish this?
If you really have a lot of data, using structs or common cloning techniques may not be very space-efficient (i.e. they can use a lot of memory).
Sounds like a design where you need a "master store" and a "diff store", analogous to how an RDBMS has data files and a transaction log.
Basically, you need to keep a list of the changes performed per calculation engine, and use the master values for items which aren't affected by any changes.
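A minimal sketch of that idea, assuming fields can be addressed by name; the real InputRecord uses strongly typed arrays, so this field-keyed view is purely illustrative.
using System.Collections.Generic;

// Each engine gets its own view: local changes shadow the shared master record
// without ever mutating it.
public class InputRecordView
{
    private readonly IDictionary<string, object> _master;
    private readonly IDictionary<string, object> _local = new Dictionary<string, object>();

    public InputRecordView(IDictionary<string, object> master) { _master = master; }

    public object Get(string field)
    {
        object value;
        return _local.TryGetValue(field, out value) ? value : _master[field];
    }

    public void Set(string field, object value) { _local[field] = value; }
}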
The elegant solution would be to not change the InputRecord at all. That would allow sharing (and parallel processing).
If that is not an option, you will have to clone the data. Give each derived class a constructor that takes the base InputRecord as a parameter.
You can declare a Clone() method on your BaseInputRecord, then pass a copy to each CalculationEngine.
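A small sketch of the Clone() approach; the field names (Rates, Margin) and the derived class are made up for illustration.
using System;

public class BaseInputRecord
{
    public double[,] Rates { get; set; }   // stands in for the many shared arrays

    public virtual BaseInputRecord Clone()
    {
        // MemberwiseClone copies all fields (including those of derived types);
        // arrays are then copied explicitly so one engine's edits can't leak into another.
        var copy = (BaseInputRecord)MemberwiseClone();
        copy.Rates = Rates == null ? null : (double[,])Rates.Clone();
        return copy;
    }
}

public class PricingInputRecord : BaseInputRecord
{
    public decimal Margin { get; set; }    // engine-specific field

    public override BaseInputRecord Clone()
    {
        // MemberwiseClone in the base call already produced a PricingInputRecord,
        // so Margin is copied automatically; deep-copy any engine-specific arrays here.
        return base.Clone();
    }
}
Each CalculationEngine then calls Clone() on the record it receives and works only on its own copy.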