First of all, sorry, I could not find a good title :(
Problem
So let's say we have an object, we can apply different actions to manipulate that object, and I want to persist this in the database.
As an example, we can define the following classes.
interface IAction { object Transform(object input); }
class Stretch : IAction { public object Transform(object input) { /* stretch the object */ return input; } }
class Shrink : IAction { public object Transform(object input) { /* shrink the object */ return input; } }
And an object can have a list of IAction. Also, each action has a type that we defined:
enum ActionType { Stretch, Shrink }
When we retrieve the actions from the database, we need logic to check what the type of each action is, so we know which class to create.
If I just change the order of the items in the enum, there is nothing that tells me my database also has to change, and this can introduce breaking changes.
I'm trying to find a design pattern that will avoid this situation. Is there any?
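For illustration, the retrieval logic I have in mind looks roughly like this (just a sketch; the names are made up):

IAction CreateAction(ActionType storedType)
{
    switch (storedType)
    {
        case ActionType.Stretch: return new Stretch();
        case ActionType.Shrink: return new Shrink();
        default: throw new ArgumentOutOfRangeException("storedType");
    }
}

The value persisted in the database is the enum's underlying integer, which by default follows the declaration order (Stretch = 0, Shrink = 1). Reordering the members silently changes that mapping, which is exactly the breaking change described above; pinning explicit values (Stretch = 1, Shrink = 2) is one common way to reduce the risk.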
Personally, I do not understand the problem as described. Normally, I would not store/register actions or changes to an object itself and just let the objects maintain their current state.
When I want to store/register actions or create an undo/redo buffer, I would set up a logging or auditing mechanism. But I would design it completely separate from the basic storage/modeling structure, since I do not want to introduce such tight functional coupling in an application.
As with all problems, there might be many possible solutions. It is quite difficult to give an out-of-the-box solution for your problem.
You may want to look into somewhat more traditional solutions, like implementing a logging mechanism, an auditing mechanism and/or an undo/redo buffer in your application.
Choosing the best strategy depends on way more factors than described here and should be based on the overall design and architecture of your application (and the quality requirements that should be met).
By the way: in my experience, the future will certainly bring (unexpected) change requests that will break your current code logic and/or data structures. For that reason, it is important to keep everything in your application as simple and as elegant as possible while still meeting all the required specifications. That will make the code more maintainable, and it may also simplify future code and data migrations.
Related
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail in the CrmService; however, there is one problem. The SalesforceService uses some exceptions and enums. It would be weird if my CrmService threw SalesforceExceptions or required you to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: Currently for exceptions, I am catching the Salesforce one and throwing my own custom one. I'm not sure what I should do for the enums though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than having to do this mapping. If that is my only option (to map them), then that is okay, just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.
Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It's a lot of upfront effort, but if you ever have to change out Salesforce in a few years, you will be regarded as a hero.
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are unlikely ever to change out Salesforce, then is it really needed? That's for you to decide.
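To make the catch/translate idea concrete, here is a minimal sketch (SalesforceException, SalesforceOrderStatus and the provider-agnostic types are placeholder names, not the real SDK):

public OrderStatus CheckOrder(string orderId)
{
    try
    {
        SalesforceOrderStatus sfStatus = salesforceService.CheckOrder(orderId);
        return MapStatus(sfStatus);
    }
    catch (SalesforceException ex)
    {
        // Re-throw as a provider-agnostic exception so callers never see Salesforce types.
        throw new CrmException("The CRM provider failed while checking the order.", ex);
    }
}

private static OrderStatus MapStatus(SalesforceOrderStatus status)
{
    switch (status)
    {
        case SalesforceOrderStatus.Draft: return OrderStatus.Pending;
        case SalesforceOrderStatus.Activated: return OrderStatus.Confirmed;
        default: return OrderStatus.Unknown;
    }
}

The explicit mapping is a little tedious, but it is also the place where provider quirks get documented, which pays off when the next provider's statuses do not line up one-to-one.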
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface will include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (SalesForce in this case) to your common ICrm. Enums can be converted to strings and back again if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx); that is, pass an ICrm as a parameter to its constructor (or to its methods, if you prefer). That way you keep your CrmService class cohesive and independent, so you can create and use different CRMs without needing to change most of your code.
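A rough sketch of that shape (the member names are only examples taken from the description above, and Payment is a placeholder type):

public interface ICrm
{
    void MakePayment(decimal amount);
    IList<Payment> GetPayments();
    OrderStatus CheckOrder(string orderId);
}

public class CrmSalesForce : ICrm
{
    // Wraps the Salesforce-specific service and translates its enums and
    // exceptions into the provider-agnostic types defined next to ICrm.
    public void MakePayment(decimal amount) { /* call Salesforce, translate errors */ }
    public IList<Payment> GetPayments() { /* map Salesforce records to Payment */ return new List<Payment>(); }
    public OrderStatus CheckOrder(string orderId) { /* map the Salesforce enum */ return OrderStatus.Unknown; }
}

public class CrmService
{
    private readonly ICrm crm;

    // The concrete CRM is injected, so swapping providers never touches this class.
    public CrmService(ICrm crm)
    {
        this.crm = crm;
    }

    public OrderStatus CheckOrder(string orderId)
    {
        return crm.CheckOrder(orderId);
    }
}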
In some projects I have seen that a dummy record needs to be created in the DB in order to keep the business logic going without breaking the DB constraints.
So far I have seen its usage in 2 ways:
By adding a field like IsDummy
By adding a field called something like ObjectType which points to a type: Dummy
OK, it helps achieve what needs to be done.
But what makes me wary of such solutions is that you have to keep in mind that some dummy records exist in the application and need to be handled in some processes. If not, you face problems until you realize they exist, or until someone on the team tells you, "Aha! You have forgotten the dummy records. You should also do..."
So the question is:
Is it a good idea to create dummy records to keep business logic as it is without making the Db complain? If yes, what is the best practice to prevent developers from skipping their existence? If not, what do you do to prevent yourself from falling in a situation where you end up with an only option of creating a dummy record?
Thanks!
Using dummy records is inferior to getting the constraints right.
There's often a temptation to use them because using dummy records can seem like the fastest way to deliver a new feature (and maybe sometimes it is), but they are never part of a good design, because they hide differences between your domain logic and data model.
Dummy records are only required when the modeller cannot easily change the Database Definition, which means the definition and/or the data model is pretty bad. One should never end up in a situation where there has to be special code in the app layer to handle special cases in the database. That is a guaranteed maintenance nightmare.
Any good definition or model will allow changes easily, without "affecting existing code".
All business logic [that is defined in the database] should be implemented using ANSI SQL Constraints, Checks, and Rules. (Of course, lower-level structures are already constrained via Domains/Datatypes, etc., but I would not classify them as "business rules".) I ensure that I don't end up having to implement dummies simply by doing that.
If that cannot be done, then the modeller lacks knowledge and experience. Or higher-level requirements, such as Normalisation, have been broken, which presents obstacles to implementing the Constraints that depend on them; that also means the modeller failed.
I have never needed to break such Constraints, or add dummy records (and I have worked on an awful lot of databases). I have removed dummy records (and duplicates) when I have reworked databases created by others.
I've never run across having to do this. If you need to do this, there's something wrong with your data structure, and it's going to cause problems further down the line for reporting...
Using Dummies is dumb.
In general you should aim to get your logic right without them. I have seen them used too, but only as an emergency solution. Your description sounds way too much like making it a standard practice. That would cause more problems than it solves.
The only reason I can see for adding "dummy" records is when you have a seriously bad app and database design.
It is most definitely not common practice.
If your business logic depends on a record existing then you need to do one of two things: Either make sure that a CORRECT record is created prior to executing that logic; or, change the logic to take missing information into account.
I think any situation where something isn't easily distinguishable as business logic is a reason to try to think of a better way.
The fact that you mention "which points to a type: Dummy" leads me to believe you are using some kind of ORM for handling your data access. A very good checkpoint (though not the only one) for ORM solutions like NHibernate is that your source code VERY EXPLICITLY describes the data structures driving your application. This not only allows your data access to easily be managed under source control, but it also allows for easier debugging down the line should a problem occur (and let's face it, it's not a matter of IF a problem will occur, but WHEN).
When you introduce some kind of "crutch" like a dummy record, you are ignoring the point of a database. A database is there to enforce rules against your data, in an effort to ELIMINATE the need for this kind of thing. I recommend you take a look at your application logic FIRST, before resorting to this kind of technique. Think about your fellow devs, or a new hire. What if they need to add a feature and forget about your little "dummy record" logic?
You mention yourself in your question feeling apprehension. Go with your gut. Get rid of the dummy records.
I have to go with the common feeling here and argue against dummy records.
What will happen is that a new developer will not know about them and not code to handle them, or delete a table and forget to add in a new dummy record.
I have experienced them in legacy databases and have seen both of the above mentioned happen.
Also, the longer they exist, the harder it is to take them out, and the more code you have to write to account for these dummy records, code that probably would not be needed if you had done the original design without them.
The correct solution would be to update your business logic.
To quote your expanded explanation:
Assume that you have a Package object and you have implemented a business rule that a Package without any content cannot be created. You created some business layer rules and designed your DB with the relevant constraints. But after some years a new feature is requested, and to accomplish it you have to be able to create a package without content. To overcome this, you decide to create a dummy content which is not visible in the UI but lets you create an empty package.
So at one time a package without content was invalid, thus the business layer enforced the existence of content in a package object. That makes sense. Now, if the real-world scenario has changed such that there is a valid reason to create Package objects without content, it is the business logic layer that needs to be changed.
Almost universally using "dummy" anything anywhere is a bad idea and usually indicates an issue in implementation. In this instance you are using dummy data to allow "compliance" with a business layer which is no longer accurately representing the real world constraints of the business.
If a package without content is not valid, then dummy data to allow "compliance" with the business layer is a foolish hack. In essence, you wrote rules to protect your own system and are now attempting to circumvent your own protection. On the other hand, if a package without content is valid, then the business layer shouldn't be enforcing bogus constraints. In neither case is dummy data valid.
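As a minimal sketch of "change the business layer, not the data" (Package and content come from the quoted example; Content and everything else here is assumed):

public class Package
{
    public IList<Content> Contents { get; private set; }

    public Package()
    {
        Contents = new List<Content>();
    }

    public void Validate()
    {
        // Old rule: if (Contents.Count == 0) throw new InvalidOperationException("A package must have content.");
        // New rule: an empty package is a legitimate state, so the check (and the matching
        // database constraint) is removed instead of inserting an invisible dummy Content row.
    }
}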
I need to work on an application that consists of two major parts:
The business logic part with specific business classes (e.g. Book, Library, Author, ...)
A generic part that can show Books, Libraries, ... in data grids, map them to a database, and so on.
The generic part uses reflection to get the data out of the business classes without the need to write specific data-grid or database logic in the business classes. This works fine and allows us to add new business classes (e.g. LibraryMember) without the need to adjust the data grid and database logic.
However, over the years, code was added to the business classes that also makes use of reflection to get things done in the business classes. E.g. if the Author of a Book is changed, observers are called to tell the Author itself that it should add this book to its collection of books written by him (Author.Books). In these observers, not only the instances are passed, but also information that is directly derived from the reflection (the FieldInfo is added to the observer call so that the caller knows that the field "Author" of the book is changed).
I can clearly see advantages in using reflection in these generic modules (like the data grid or database interface), but it seems to me that using reflection in the business classes is a bad idea. After all, shouldn't the application work without relying on reflection as much as possible? Or is the use of reflection the 'normal way of working' in the 21st century?
Is it good practice to use reflection in your business logic?
EDIT: Some clarification on the remark of Kirk:
Imagine that Author implements an observer on Book.
Book calls all its observers whenever some field of Book changes (like Title, Year, #Pages, Author, ...). The FieldInfo of the changed field is passed to the observer.
The Author-observer then uses this FieldInfo to decide whether it is interested in this change. In this case, if FieldInfo is for the field Author of Book, the Author-Observer will update its own vector of Books.
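In code, it is roughly this (a simplified sketch; the real signatures differ slightly):

using System.Collections.Generic;
using System.Reflection;

public interface IBookObserver
{
    void OnFieldChanged(Book book, FieldInfo changedField);
}

public class Book
{
    public Author Author;    // public fields, to match the FieldInfo-based design
    public string Title;

    private readonly List<IBookObserver> observers = new List<IBookObserver>();

    public void SetAuthor(Author newAuthor)
    {
        Author = newAuthor;
        Notify(typeof(Book).GetField("Author"));
    }

    private void Notify(FieldInfo field)
    {
        foreach (IBookObserver observer in observers)
            observer.OnFieldChanged(this, field);
    }
}

public class Author : IBookObserver
{
    public List<Book> Books = new List<Book>();

    public void OnFieldChanged(Book book, FieldInfo changedField)
    {
        // The observer inspects reflection metadata to decide whether it cares about the change.
        if (changedField.Name == "Author")
            Books.Add(book);
    }
}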
The main danger with Reflection is that the flexibility can escalate into disorganized, unmaintainable code, particularly if more junior devs are used to make changes, who may not fully understand the Reflection code or are so enamored of it that they use it to solve every problem, even when simpler tools would suffice.
My observation has been that over-generalization leads to over-complication. It gets worse when the actual boundary cases turn out to not be accommodated by the generalized design, requiring hacks to fit in the new features on schedule, transmuting flexibility into complexity.
I avoid using reflection. Yes, it makes your program more flexible. But this flexibility comes at a high price: There is no compile-time checking of field names or types or whatever information you're collecting through reflection.
Like many things, it depends on what you're doing. If the nature of your logic is that you NEVER compare the field names (or whatever) found to a constant value, then using reflection is probably a good thing. But if you use reflection to find field names, and then loop through them searching for the fields named "Author" and "Title", you've just created a more-complex simulation of an object with two named fields. And what if you search for "Author" when the field is actually called "AuthorName", or you intend to search for "Author" and accidentally type "Auhtor"? Now you have errors that won't show up until runtime instead of being flagged at compile time.
With hard-coded field names, your IDE can tell you every place that a certain field is used. With reflection ... not so easy to tell. Maybe you can do a text search on the name, but if field names are passed around as variables, it can get very difficult.
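A small example of the difference (Book and the surrounding code are hypothetical):

// Hard-coded access: a typo such as book.Auhtor simply does not compile,
// and "Find All References" on Author shows every usage.
var author = book.Author;

// Reflection: the typo compiles fine and only shows up at runtime
// (GetField quietly returns null here).
FieldInfo field = typeof(Book).GetField("Auhtor");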
I'm working on a system now where the original authors loved reflection and similar techniques. There are all sorts of places where they need to create an instance of a class and instead of just saying "new" and the class, they create a token that they look up in a table to get the class name. What does this gain? Yes, we could change the table to map that token to a different name. And this gains us ... what? When was the last time that you said, "Oh, every place that my program creates an instance of Customer, I want to change to create an instance of NewKindOfCustomer." If you have changes to a class, you change the class, not create a new class but keep the old one around for nostalgia.
To take a similar issue, I make a regular practice of building data entry screens on the fly by asking the database for a list of field names, types, and sizes, and then laying it out from there. This gives me the advantage of using the same program for all the simpler data entry screens -- just pass in the table name as a parameter -- and if a field is added or deleted, zero code change is required. But this only works as long as I don't care what the fields are. Once I start having validations or side effects specific to this screen, the system is more trouble than it's worth, and I'm better off to fall back to more explicit coding.
Based on your edit, it sounds like you are using reflection purely as a mechanism for identifying fields. This is as opposed to dynamic behavior such as looking up the fields, which should be avoided when possible (since such lookups usually use strings which ruin static type safety). Using FieldInfo to provide an identifier for a field is fairly harmless, though it does expose some internals (the info class) in a way that is not entirely ideal.
I tend not to use reflection where I can help it. By using interfaces and coding against them, I can do a lot of the things that some people would use reflection for.
But I'm a big fan of "if it works, it works."
Also, by using reflection you probably have something that can adapt fairly easily.
I.e. the only objection most would have is fairly religious... and if your performance is fine and the code is maintainable and clear... who cares?
Edit: Based on your edit, I would indeed use interfaces to achieve what you want, unless I misunderstand you.
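For instance (a sketch of what I mean, not your actual classes): replace the FieldInfo notification with a typed event, so the Author subscribes to exactly the change it cares about and the compiler checks everything.

using System;
using System.Collections.Generic;

public class Book
{
    public event Action<Book, Author> AuthorChanged;

    private Author author;
    public Author Author
    {
        get { return author; }
        set
        {
            author = value;
            if (AuthorChanged != null)
                AuthorChanged(this, value);   // no reflection metadata involved
        }
    }
}

public class Author
{
    public List<Book> Books = new List<Book>();

    public void Watch(Book book)
    {
        // The link between "Author changed" and "update my Books list" is explicit and type-checked.
        book.AuthorChanged += (changedBook, newAuthor) =>
        {
            if (newAuthor == this)
                Books.Add(changedBook);
        };
    }
}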
I think it is a good idea to stay away from Reflection when possible, but don't be afraid to resort to it when it provides a better or more flexible solution to your problem. The performance hit for anything but tight loop operations is likely to be minimal in the overall scheme of an application or Web Form request.
Just a good article to share about reflection -
http://www.simple-talk.com/dotnet/.net-framework/a-defense-of-reflection-in-.net/
I tend to use interfaces in my business layer and leave the reflection to my presentation layer. This is not an absolute but rather a guideline.
What I'd like to do is take some action using the value returned by every method in a class.
So for instance, if I have a class Order which has a method
public Customer GetCustomer()
{
Customer CustomerInstance = // get customer
return CustomerInstance;
}
Let's say I want to log the creation of these - Log(CustomerInstance);
My options (AFAIK) are:
Call Log() in each of these methods before returning the object. I'm not a fan of this because it gets unwieldy if used on a lot of classes with a lot of methods. It also is not an intrinsic part of the method's purpose.
Use composition or inheritance to layer the log call on the Order class, similar to:
public Customer GetCustomer()
{
Customer CustomerInstance = this.originalCustomer.GetCustomer();
Log(CustomerInstance);
return CustomerInstance;
}
I don't think this buys me anything over #1.
Create extension methods on each of the returned types:
Customer CustomerInstance = Order.GetCustomer().Log();
which has just as many downsides.
I'm looking to do this for every (or almost every) object returned, automatically if possible, without having to write double the amount of code. I feel like I'm either trying to bend the language into doing something it's not supposed to, or failing to recognize some language feature that would enable this. Possible solutions would be greatly appreciated.
You need to look into Aspect Oriented Programming:
Typically, an aspect is scattered or tangled as code, making it harder to understand and maintain. It is scattered by virtue of the function (such as logging) being spread over a number of unrelated functions that might use its function, possibly in entirely unrelated systems, different source languages, etc. That means to change logging can require modifying all affected modules. Aspects become tangled not only with the mainline function of the systems in which they are expressed but also with each other. That means changing one concern entails understanding all the tangled concerns or having some means by which the effect of changes can be inferred.
Adding logging is one of the uses of this methodology.
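As one concrete way to apply this in C# (my assumption, not the only option): an interception library such as Castle DynamicProxy lets the logging live in a single interceptor instead of inside every method. A sketch:

using Castle.DynamicProxy;

// One interceptor that logs the return value of every intercepted call.
public class LogReturnValueInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        invocation.Proceed();                // run the real method
        Log(invocation.ReturnValue);         // then act on whatever it returned
    }

    private static void Log(object value) { /* write to your log here */ }
}

// Usage (IOrderService and OrderService are placeholders for your own types):
// var generator = new ProxyGenerator();
// IOrderService orders = generator.CreateInterfaceProxyWithTarget<IOrderService>(
//     new OrderService(), new LogReturnValueInterceptor());

This does require calls to go through an interface (or virtual members) so the proxy can sit in between, but the logging concern then lives in exactly one place.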
You should check out Microsoft's Enterprise Library.
I think you may find the Policy Injection Application Block useful.
Your option 1 is, in my opinion, the way to do it. Even if this means a call at the end of each method, that's how it's done. I would not add extra layers of obscurity just because logging is 'not an intrinsic purpose' of a method.
By the way, Aspect Oriented Programming addresses exactly this issue that you have (see ChrisF's answer), but then we're not talking C# anymore.
I'm puzzling over where to place some code. I have a listbox and the list items are stored in the database. The thing is, where should I place the code that retrieves the data from the database? I already have a database class with a method ExecuteSelectQuery(). Shall I create a utility class with a method public DataTable GetGroupData()? (Group is the listbox.) This method would then call the ExecuteSelectQuery() method from the database class.
What should I do?
There are many data access patterns you could look at implementing. A Repository pattern might get you where you need to go.
The idea is to have a GroupRepository class that fetches Group data. It doesn't have to be overly complicated. A simple method like GetAllGroups could return a collection you can use for your ListBox items.
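A sketch of that shape, assuming your existing database class is called Database and its ExecuteSelectQuery() takes a SQL string and returns a DataTable (adjust to the real signature and table/column names):

using System.Collections.Generic;
using System.Data;

public class GroupRepository
{
    private readonly Database database;   // the existing database class from the question

    public GroupRepository(Database database)
    {
        this.database = database;
    }

    public IList<string> GetAllGroups()
    {
        DataTable table = database.ExecuteSelectQuery("SELECT GroupName FROM Groups");

        var groups = new List<string>();
        foreach (DataRow row in table.Rows)
            groups.Add(row["GroupName"].ToString());
        return groups;
    }
}

The page or form then only binds the result to the ListBox, so the UI code never touches SQL: groupListBox.DataSource = repository.GetAllGroups(); (plus a DataBind() call in WebForms).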
You could simply abstract the database code into a utility class, as you suggest, and this wouldn't be a terrible solution. (You probably can't get much better with WebForms.) Yet if the system is going to end up quite complicated, then you might want to pick a more formal architecture...
Probably the best option for ASP.NET is the ASP.NET MVC Framework, which fully replaces WebForms. It's built around the Model-View-Controller architecture pattern, which is designed specifically to clearly separate user interface code from backend logic, which seems to be exactly what you want. (Indeed, it makes the design of websites a lot more structured in many ways.)
Also, if you want to create a more structured model for your database system, consider using an ORM such as the ADO.NET Entity Framework. However, this may be overkill if your database isn't very complicated. Using the MVC pattern is probably going to make a lot more difference.
Consider adding a layer between your UI and data access layers. There you can implement some kind of caching if the data is not changed frequently; this way you will avoid retrieving the same data from the DB multiple times.
I would recommend the Application Architecture Guide book.
Pavel
Looks like you already have the data access layer implemented via your database class. I guess the confusion right now is in implementing the presentation and business layers. To me, GetGroupData is more a part of the business layer and is the right thing to implement. It will allow you to make changes in the future with minimal or no impact on the presentation layer. Therefore, the flow that you suggested looks right: LoadListBox, followed by GetGroupData, followed by ExecuteSelectQuery.