I am designing a class diagram for an application and I was wondering if I should add a constructor to every class.
Some classes cannot be created unless they are given an initial value.
For other classes, the object can be created without an initial value; the user may or may not supply one later.
My question is: should I include a constructor in every class, or only in the classes that need an initial value?
The key considerations for me when I create diagrams are: is it readable, is it clear, and does it give only the relevant information? If you introduce too much clutter onto a diagram, it becomes harder to see the parts that really matter. That said, if you're in an academic context or your company has very specific standards, you might do it differently.
If it were me, and 90% of the classes have a straightforward no-args constructor while 10% do something special, I would show the special cases explicitly and just add a UML note element saying something like "Unless indicated otherwise, all classes have a default no-arg constructor".
In the example I'm thinking of I have about 4 lines of code that could be encapsulated by a function, and this function will surely be used in other classes in the same hierarchy.
I have the following options for reusing that code:
Copy paste the function around to the classes that need it.
Make a base class for the classes that need the function and put it there.
Make a class that contains the function which gets passed into the classes that need it through DI or is just a member of the class. (seems like major overkill)
Make a static utility class and put that method in it.
I definitely wouldn't do 1 or 3. I would have done 2 in the past, but I'm trying to keep to the composition over inheritance principle, so I'm leaning towards 4; however, it seems like a lot for something that will most likely never be used outside the hierarchy and is only 4 lines. I know this is very nitpicky, but I want to figure out the right way to do it.
Inheritance was created for a reason. The fact that it has been overused doesn't mean it has no legitimate uses. The key is that whether you use it should not depend on whether you can get easy reuse out of it, but on whether it makes sense for the method to belong to the base class, based on what your base class represents.
Without better understanding what your classes are, and what the method is that you're trying to reuse, I can't give specific advice in your particular case. But think of it this way: When you say it will "most likely never be used outside the hierarchy," is that because it purely just doesn't make sense outside of that hierarchy? Or is it just that you don't think somebody's going to build something that happens to use this feature, even though it could conceivably make sense outside of the hierarchy?
If this method would make any sense outside of the specific hierarchy you're talking about, I would suggest approach #3.
And of course, all of this assumes that your class hierarchy is really a hierarchy in the first place. Another common abuse of inheritance is when people force a hierarchy on objects that don't need to be hierarchical in the context of their application.
I agree that composition is a better option than inheritance IN GENERAL. But composing your objects with some logic, perhaps via the strategy pattern, is a different issue than reusing the same code by multiple classes.
If those classes that need this functionality all have the same base class, then it makes sense to put it in the base class. It's not like the subclasses need to know the inner workings of the base class to make this call.
If various subclasses need different versions of this code, then creating behaviors via the strategy pattern (using composition) is the way to go. But I'm making an assumption that the same code satisfies every subclass.
I wouldn't do #4 because then that code is available to other classes that have no business calling it. If the code is in the base class, then you can make it protected and therefore only available to the classes that need it.
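To illustrate what I mean (a minimal sketch with made-up names, not the asker's actual code): the shared lines go in a protected method on the base class, so only the hierarchy can reach them, which is exactly what a public static utility class can't give you.

using System;

public abstract class ReportExporter
{
    public abstract void Export(string path);

    // The shared "4 lines" live here; protected, so only subclasses can call it.
    protected string BuildHeader(string title)
    {
        var user = Environment.UserName;
        var stamp = DateTime.UtcNow.ToString("u");
        return $"{title} | {user} | {stamp}";
    }
}

public sealed class CsvReportExporter : ReportExporter
{
    public override void Export(string path)
    {
        var header = BuildHeader("CSV report");
        System.IO.File.WriteAllText(path, header + Environment.NewLine);
    }
}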
If such a function's arguments are going to be fields of the classes, then it is intended to operate on your class state and should therefore be a member of the base class that handles that manipulation.
If it operates on data that makes sense outside of your hierarchy, or is used from several branches of the hierarchy, and the meaning of the parameters is not bound to object state, make it a function in a utility class.
If it's specifically related to your class hierarchy, use a base class. If not, use option 4. There is no need for composition here.
I am encapsulating access to a database in connection classes that all inherit from the same base implementation. This base implementation has a protected LINQ provider for database access that many child classes will use - but not all of them. Some might need their own provider and would then generally have no use for the "default" one.
This "other" provider would not be derived from the default one (but share a common quasi-abstract ancestor which in itself is of no use anywhere), but would have exactly the same role within the respective class, so it would seem nice to be able to use it in exactly the same way, i.e. use the same syntax. I could achieve this by hiding the respective members using the new keyword, but I'm unsure whether this is good practice.
On one hand, doing so would help avoid accidentally using the wrong one, because there is only one. On the other hand, being used to the same name for the default and specific providers might lead to forgetting to implement a specific one and working with one that isn't correct to use here. So it might make sense to name the default one appropriately; whoever develops a particular connection class will know when they need a specific provider and be reminded that they need to write the code to get to it.
Which reasoning is the more plausible one? I'm now leaning a bit towards the latter.
The new keyword is almost always a bad idea. It's one of those "last resort" features for cases where you have no other option, and in this case you definitely have other options.
Do the two providers have different semantics/APIs, or is the usage exactly the same? If the latter, you might want to look into the adapter pattern and implement the property as a simple virtual property of type ILinqProviderAdapter (which you would define and implement), overridden where appropriate.
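A rough sketch of what that could look like (the ILinqProviderAdapter member and the class names are made up for illustration): the base class exposes one virtual property, and a connection class that needs a different provider overrides it instead of hiding the inherited member with new.

using System;
using System.Collections.Generic;

public interface ILinqProviderAdapter
{
    IEnumerable<T> Query<T>(string statement);
}

public class DefaultProviderAdapter : ILinqProviderAdapter
{
    // Wraps the base implementation's default LINQ provider.
    public IEnumerable<T> Query<T>(string statement) => Array.Empty<T>();
}

public class SpecialProviderAdapter : ILinqProviderAdapter
{
    // Wraps the "other" provider that some connection classes need.
    public IEnumerable<T> Query<T>(string statement) => Array.Empty<T>();
}

public abstract class ConnectionBase
{
    // One name and one access pattern for every connection class.
    // (A real implementation would cache the adapter instead of recreating it.)
    protected virtual ILinqProviderAdapter Provider => new DefaultProviderAdapter();
}

public class SpecialConnection : ConnectionBase
{
    // Overridden, not hidden with 'new', so callers always get the right provider.
    protected override ILinqProviderAdapter Provider => new SpecialProviderAdapter();
}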
I have a function that returns the same kind of objects (query results) but with no properties or methods in common. In order to have a common type, I resorted to using an empty interface as the return type and "implemented" it on both.
That doesn't sound right, of course. I can only console myself by clinging to the hope that someday those classes will have something in common and I will move that common logic to my empty interface. Yet I'm not satisfied, and I'm wondering whether I should instead have two different methods and conditionally call one or the other. Would that be a better approach?
I've also been told that the .NET Framework uses empty interfaces for tagging purposes.
My question is: is an empty interface a strong sign of a design problem or is it widely used?
EDIT: For those interested, I later found out that discriminated unions in functional languages are the perfect solution for what I was trying to achieve. C# doesn't seem friendly to that concept yet.
EDIT: I wrote a longer piece about this issue, explaining the issue and the solution in detail.
Although it seems there is a design pattern for that use case (many have mentioned "marker interface" by now), I believe that using such a practice is an indication of a code smell (most of the time, at least).
As @V4Vendetta posted, there is a static analysis rule that targets this:
http://msdn.microsoft.com/en-us/library/ms182128(v=VS.100).aspx
If your design includes empty interfaces that types are expected to implement, you are probably using an interface as a marker or a way to identify a group of types. If this identification will occur at run time, the correct way to accomplish this is to use a custom attribute. Use the presence or absence of the attribute, or the properties of the attribute, to identify the target types. If the identification must occur at compile time, then it is acceptable to use an empty interface.
This is the quoted MSDN recommendation:
Remove the interface or add members to it. If the empty interface is being used to label a set of types, replace the interface with a custom attribute.
This also reflects the Critique section of the already posted Wikipedia link.
A major problem with marker interfaces is that an interface defines a contract for implementing classes, and that contract is inherited by all subclasses. This means that you cannot "unimplement" a marker. In the example given, if you create a subclass that you do not want to serialize (perhaps because it depends on transient state), you must resort to explicitly throwing NotSerializableException (per ObjectOutputStream docs).
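For a concrete contrast (hypothetical types, just to show the mechanics): the marker interface becomes part of every implementing class's contract, while the attribute is plain metadata checked at run time via reflection.

using System;

// Marker-interface style: the "label" is part of the type system and is inherited.
public interface IQueryResult { }

// Attribute style, as the rule suggests for run-time identification.
[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public sealed class QueryResultAttribute : Attribute { }

[QueryResult]
public class CustomerResult : IQueryResult { }

public static class Demo
{
    public static void Main()
    {
        object value = new CustomerResult();

        bool viaInterface = value is IQueryResult;                                              // compile-time contract
        bool viaAttribute = Attribute.IsDefined(value.GetType(), typeof(QueryResultAttribute)); // run-time metadata

        Console.WriteLine($"{viaInterface}, {viaAttribute}"); // True, True
    }
}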
You state that your function "returns entirely different objects based on certain cases" - but just how different are they? Could one be a stream writer, another a UI class, another a data object? No ... I doubt it!
Your objects might not have any common methods or properties, however, they are probably alike in their role or usage. In that case, a marker interface seems entirely appropriate.
If not used as a marker interface, I would say that yes, this is a code smell.
An interface defines a contract that the implementer adheres to - if you have empty interfaces that you don't use reflection over (as one does with marker interfaces), then you might as well use Object as the (already existing) base type.
You answered your own question... "I have a function that returns entirely different objects based on certain cases." Why would you want the same function to return completely different objects? I can't see a reason for this to be useful; maybe you have a good one, in which case, please share.
EDIT: Considering your clarification, you should indeed use a marker interface. "completely different" is quite different than "are the same kind". If they were completely different (not just that they don't have shared members), that would be a code smell.
As many have probably already said, an empty interface does have valid use as a "marker interface".
Probably the best use I can think of is to denote an object as belonging to a particular subset of the domain, handled by a corresponding Repository. Say you have different databases from which you retrieve data, and you have a Repository implementation for each. A particular Repository can only handle one subset, and should not be given an instance of an object from any other subset. Your domain model might look like this:
// Every object in the domain has an identity-sourced Id field
public interface IDomainObject
{
    long Id { get; }
}

// No additional useful information other than this is an object from the user security DB
public interface ISecurityDomainObject : IDomainObject { }

// No additional useful information other than this is an object from the Northwind DB
public interface INorthwindDomainObject : IDomainObject { }

// No additional useful information other than this is an object from the Southwind DB
public interface ISouthwindDomainObject : IDomainObject { }
Your repositories can then be made generic to ISecurityDomainObject, INorthwindDomainObject, and ISouthwindDomainObject, and you then have a compile-time check that your code isn't trying to pass a Security object to the Northwind DB (or any other permutation). In situations like this, the interface provides valuable information regarding the nature of the class even if it does not provide any implementation contract.
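A short sketch of that last step, reusing the interfaces above (the repository and entity names are hypothetical): constraining the repository's type parameter to one subset turns the mix-up into a compile error rather than a run-time surprise.

public class SecurityRepository<T> where T : class, ISecurityDomainObject
{
    public void Save(T entity)
    {
        // persist to the user security DB...
    }
}

public class SecurityUser : ISecurityDomainObject
{
    public long Id { get; private set; }
}

public class NorthwindOrder : INorthwindDomainObject
{
    public long Id { get; private set; }
}

public static class Usage
{
    public static void Main()
    {
        var repo = new SecurityRepository<SecurityUser>();
        repo.Save(new SecurityUser());

        // var wrong = new SecurityRepository<NorthwindOrder>(); // does not compile:
        // NorthwindOrder does not implement ISecurityDomainObject
    }
}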
I know this is a subjective question, but I'm always curious about best-practices in coding style. ReSharper 4.5 is giving me a warning for the keyword "base" before base method calls in implementation classes, i.e.,
base.DoCommonBaseBehaviorThing();
While I appreciate the "less is better" mentality, I also have spent a lot of time debugging/maintaining highly-chained applications, and feel like it might help to know that a member call is to a base object just by looking at it. It's simple enough to change ReSharper's rules, of course, but what do y'all think? Should "base" be used when calling base members?
The only time you should use base.MethodCall(); is when you have an overridden method of the same name in the child class, but you actually want to call the method in the parent.
For all other cases, just use MethodCall();.
Keywords like this and base do not make the code more readable and should be avoided in all cases unless they are necessary, such as in the case I described above.
I am not really sure whether using this is bad practice or not. base, however, is not a matter of good or bad practice but a matter of semantics. Whereas this is polymorphic, meaning that even if the method using it belongs to a base class, the call will use the overridden method, base is not. base always refers to the implementation defined in the base class of the method calling it, hence it is not polymorphic. This is a huge semantic difference, and base should be used accordingly. If you want that specific method, use base. If you want the call to remain polymorphic, don't use base.
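A tiny example of that difference (made-up classes): inside the derived class, base. resolves non-virtually to the parent's implementation, while an unqualified call keeps virtual dispatch.

using System;

public class Logger
{
    public virtual void Log(string message) => Console.WriteLine(message);
}

public class TimestampLogger : Logger
{
    public override void Log(string message)
    {
        // Non-virtual: always runs Logger.Log, so there is no infinite recursion here.
        base.Log($"{DateTime.UtcNow:u} {message}");
    }

    public void LogTwice(string message)
    {
        Log(message);      // virtual: runs TimestampLogger.Log (or a further override)
        base.Log(message); // non-virtual: skips the override and prints the bare message
    }
}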
Another important point to take into consideration is that even if you haven't currently overridden a method, that doesn't mean you won't in the future, and by prefacing all of your calls with base. you won't get the new functionality without performing a find and replace on all of those calls.
While prefacing calls with this. does nothing other than increase or decrease readability (ignoring the situation where two variables in scope have the same name), the base. prefix will change the functionality of the code you write in many common scenarios. So I would never add base. unless it is needed.
I think generally you should use base only when overriding previous functionality.
Some languages (C# does not) also provide this functionality by calling the function through its base class name explicitly, like this: Foo.common() (called from somewhere in Bar, of course).
This would allow you to skip upwards in the chain, or pick from multiple implementations in the case of multiple inheritance.
Regardless, I feel base should be used only when needed to explicitly call your parent's functionality because you are or have overridden that functionality in this class.
It's really a matter of personal preference. If you like seeing "base." at the beginning of your members, you can easily turn off the rule (Go to Options>Inspection Severity>Code Redundancies>Redundant 'base.' qualifier). Don't let non-behavioral static code analysis rules affect your preferred coding style.
EDIT
One thing to consider is that the static code analysis in FXCop and R# are there to provide rules for all possible needs. To actually adhere to all of the rules simultaneously is a little onerous. You should define your preferred coding style (if you're working in a team, do it collectively), and stick with it. Modify your rules to match your coding standards, not vice versa.
Are there specific cases when one should use custom attributes on a class instead of properties?
I know that properties are preferable because of their discoverability and performance, but attributes... when should I definitely use them?
UPDATE:
Here is a post by Eric Lippert about this decision.
Eric Lippert has a great blog post tackling exactly this decision.
His summary is:
In short: use attributes to describe your mechanisms, use properties to model the domain.
I'd also add to that the consideration that an attribute value is effectively static - in other words it's part of the description of the type rather than any instance of the type.
One tricky bit can come when every instance of some base type has to have a property (e.g. a description) but different concrete derived types want to specify descriptions on a per-type basis rather than per-instance. You often end up with virtual properties which always return constants - this isn't terribly satisfactory. I suspect Delphi's class references might help here... not sure.
EDIT: To give an example of a mechanism, if you decorate a type to say which table it's from in the database, that's describing the data transfer mechanism rather than saying anything about the model of data that's being transferred.
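As a rough illustration of that split (the attribute here is hypothetical, in the spirit of ORM mapping attributes, not any specific library's): the table name describes the persistence mechanism and belongs to the type, while the customer's data belongs to each instance and so lives in properties.

using System;

[AttributeUsage(AttributeTargets.Class)]
public sealed class TableAttribute : Attribute
{
    public TableAttribute(string name) => Name = name;
    public string Name { get; }                // mechanism: where instances are stored
}

[Table("dbo.Customers")]
public class Customer
{
    public string Name { get; set; }           // domain: facts about this particular customer
    public DateTime RegisteredOn { get; set; }
}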
There are two use cases:
1) Using a custom attribute that someone else has defined, such as the System.LoaderOptimization attribute that may be used on the Main method. These kinds of attributes are used to direct platform code such as the CLR, WPF, WCF or the debugger to run the code in a certain way, and can be very useful at times. Reading books on the various platform topics is a good way to learn when and how to use these attributes.
2) Creating your own custom attribute and using it to decorate a class (or method, property, etc.). These have no effect unless you also have code that uses reflection to notice those attribute usages and change behavior in some way. This usage should be avoided whenever possible because reflection has very poor performance, orders of magnitude slower than, say, accessing a static member of a class.
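A minimal sketch of case 2 (hypothetical attribute and names): the attribute does nothing by itself; some reflection code has to look for it, and caching the lookup is the usual way to keep the cost down.

using System;
using System.Collections.Concurrent;

[AttributeUsage(AttributeTargets.Class)]
public sealed class AuditedAttribute : Attribute { }

[Audited]
public class PaymentService { }

public static class AuditScanner
{
    // Reflection is expensive, so each type is inspected once and the result cached.
    private static readonly ConcurrentDictionary<Type, bool> Cache =
        new ConcurrentDictionary<Type, bool>();

    public static bool IsAudited(Type type) =>
        Cache.GetOrAdd(type, t => Attribute.IsDefined(t, typeof(AuditedAttribute)));
}

// Usage: AuditScanner.IsAudited(typeof(PaymentService)) returns true.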