Why are Constructors Evil? - C#

I recall reading an article about constructors being evil (but can't place it). The author mentioned that constructors are a special case of methods, but with restrictions (for example, they cannot have a return value).
Are constructors evil? Is it better to have no constructors and instead rely on a method like Initialize, along with default values for member variables?
(Your answer can be specific to C# or Java, if you must pin down a language.)

That sounds like Allen Holub. One might argue that constructors are evil solely to drive web traffic :) They are no more evil than any other language construct. They have good and bad effects. Of course you can't eliminate them -- no way to construct objects without them!
What you can do, though, and this is the case that Allen was making, is you can limit your actual invocation of them, and instead favor, when sensible, factory methods like your Initialize. The reason for this is simple: it reduces coupling between classes, and makes it easier to substitute one class for another during testing or when an application evolves.
Imagine if your application does something like
DatabaseConnection dc = new OracleDatabaseConnection(connectionString);
dc.query("...");
and imagine that this happens in a hundred places in your application. Now, how do you unit test any class that does this? And what happens when you switch to MySQL to save money?
But if you did this:
DatabaseConnection dc = DatabaseConnectionFactory.get(connectionString);
dc.query("...");
Then to update your app, you just have to change what DatabaseConnectionFactory.get() returns, and that could be controlled by a configuration file. Avoiding the explicit use of constructors makes your code more flexible.
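To make the idea concrete, here is a minimal C# sketch of what such a factory might look like. The class names follow the example above; the MySql class and the environment-variable lookup are stand-ins for whatever configuration mechanism you actually use.
// DatabaseConnection as an interface lets the factory pick any implementation.
public interface DatabaseConnection
{
    void Query(string sql);
}

public class OracleDatabaseConnection : DatabaseConnection
{
    public OracleDatabaseConnection(string connectionString) { /* ... */ }
    public void Query(string sql) { /* Oracle-specific code */ }
}

public class MySqlDatabaseConnection : DatabaseConnection
{
    public MySqlDatabaseConnection(string connectionString) { /* ... */ }
    public void Query(string sql) { /* MySQL-specific code */ }
}

public static class DatabaseConnectionFactory
{
    // The single place that decides which concrete class to construct;
    // call sites never mention a constructor, so swapping vendors (or
    // substituting a mock in tests) is a one-line change here.
    public static DatabaseConnection Get(string connectionString)
    {
        string provider = System.Environment.GetEnvironmentVariable("DB_PROVIDER"); // stand-in for a config file
        return provider == "mysql"
            ? new MySqlDatabaseConnection(connectionString)
            : new OracleDatabaseConnection(connectionString);
    }
}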
Edit: I can't find a "constructors" article, but here's his extends is evil one, and here's his getters and setters are evil one.

They aren't. In fact, there is a specific pattern known as Inversion of Control that makes ingenious use of constructors to nicely decouple code and make maintenance easier. In addition, certain problems are only solvable by using non-default constructors.
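For instance, here is a minimal constructor-injection sketch (the names are hypothetical). The constructor is exactly where the decoupling happens: the class declares what it needs without constructing it.
public interface IMessageSender
{
    void Send(string message);
}

public class OrderService
{
    private readonly IMessageSender sender;

    // The constructor receives the dependency instead of constructing it,
    // so a container wires in the real sender and a test passes a fake.
    public OrderService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void PlaceOrder(string item)
    {
        // ... business logic ...
        sender.Send("Order placed: " + item);
    }
}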

Evil? No.
Calling a constructor does require that you call "new", which does tie you to a particular implementation. Factories and dependency injection allow you to be more dynamic about runtime types, but they require programming to interfaces.
I think the latter are more flexible, but constructors evil? That's going too far, just like having an interface for everything goes too far.

Constructors aren't evil, but (at least in Java) often it's better to use static Factory methods instead (which of course use constructors internally).
Here are a few quotes from Effective Java, Item 1: Consider static factory methods instead of constructors:
One advantage of static factory methods is that, unlike constructors, they have names. If the parameters to a constructor do not, in and of themselves, describe the object being returned, a static factory with a well-chosen name is easier to use and the resulting client code easier to read.
...
A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked. This allows immutable classes (Item 15) to use preconstructed instances, or to cache instances as they're constructed, and dispense them repeatedly to avoid creating unnecessary duplicate objects.
...
A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type.
...
A fourth advantage of static factory methods is that they reduce the verbosity of creating parameterized type instances. Unfortunately, you must specify the type parameters when you invoke the constructor of a parameterized class even if they're obvious from context. This typically requires you to provide the type parameters twice in quick succession:
Map<String, List<String>> m = new HashMap<String, List<String>>();
...
The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed.
...
A second disadvantage of static factory methods is that they are not readily distinguishable from other static methods.
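The same advantages carry over to C#. Here is a minimal sketch with a hypothetical Temperature class: the factories have descriptive names, and one of them hands back a cached instance, which a constructor could never do.
public sealed class Temperature
{
    private static readonly Temperature Zero = new Temperature(0);

    public double Celsius { get; private set; }

    private Temperature(double celsius) { Celsius = celsius; }

    // First advantage: the names say what the argument means.
    public static Temperature FromCelsius(double c) { return new Temperature(c); }
    public static Temperature FromFahrenheit(double f) { return new Temperature((f - 32) * 5.0 / 9.0); }

    // Second advantage: a factory may return a preconstructed instance.
    public static Temperature FreezingPoint() { return Zero; }
}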

Constructors allow initialization lists and other useful things. In C++, there's no way to dynamically initialize an object in an array (that doesn't use pointers to objects) without a copy constructor.
No they aren't evil.
They are special cases.

Constructors are not evil. They exist so that code can be run when an instance of a class is initialized. Just as with any other programming concept, if they aren't used right they can be a disaster to work with. But, if used correctly, they can be a great (and essential) tool.
http://en.wikipedia.org/wiki/Constructor_(object-oriented_programming)

I wouldn't say constructors are evil.
A new expression returns a reference to the object you are instantiating, and the constructor should be used to set the object up in a valid default state. I can see benefits to an Initialize method, but there isn't much point unless you need to run some logic after the object has been allocated and given its initial values.

Constructors are not evil. Whatever article you read is wrong and you're better off having forgotten the link.

Constructors are not evil, nor are they good. Constructors are a tool that can be very helpful if used properly and in the correct context. In fact, in .NET languages such as C#, if you do not explicitly declare a constructor in your code, a parameterless constructor will be created for you by the compiler, with no functionality of its own.

Constructors are good when following normal OO programming paradigms. There are situations where you might need to place extra constraints on how objects are created, so in some cases a Factory pattern with private ctors might be a better fit. There is also a philosophy/best practice that says object instantiation should be the same as initialization, in which case constructors are the only real option outside factories that you have.
(factories still use constructors internally of course)
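As a rough illustration of such a constraint (the class is hypothetical): a private constructor plus a factory can guarantee at most one instance per key, something a public constructor cannot express.
public class CountryCode
{
    private static readonly System.Collections.Generic.Dictionary<string, CountryCode> cache =
        new System.Collections.Generic.Dictionary<string, CountryCode>();

    public string Code { get; private set; }

    private CountryCode(string code) { Code = code; }

    // All creation funnels through here, so the "one instance per code"
    // rule can never be bypassed with a stray new expression.
    public static CountryCode Get(string code)
    {
        CountryCode cc;
        if (!cache.TryGetValue(code, out cc))
        {
            cc = new CountryCode(code);
            cache[code] = cc;
        }
        return cc;
    }
}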

The question may not be about constructors in Java or C#, but constructors in JavaScript. In JavaScript, constructors can be a source of subtle errors, and many JavaScript books recommend that beginners steer clear of them.
For a more detailed discussion of the evilness of constructors, and the new keyword, in JavaScript, see: Is JavaScript's "new" keyword considered harmful?

I picked up this sort of vibe, to a degree, from Miško Hevery's talk "Don't look for things", which is available on YouTube. Part of the discussion he gives I interpret as a criticism of 'fat constructors', if not of constructors in general.
The point of this argument, at least as I understood it, was that constructors that take in everything the object wants encourage you to enforce correctness using the constructor, instead of enforcing correctness with tests. Constructors do tend to bloat to do exactly that, so if this bothers you, you could consider it an evil streak in the concept of the constructor. In the talk he says he prefers to only require an object to 'have' another object when it's needed to do something, rather than requiring it on construction.
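A rough sketch of that preference (all names hypothetical): the constructor takes only the lifelong collaborator, while the one-off collaborator is passed to the method that actually uses it.
public interface IDataSource { string Load(); }
public interface IPrinter { void Print(string text); }

public class ReportGenerator
{
    private readonly IDataSource source;

    // Only what the object needs for its whole lifetime goes here...
    public ReportGenerator(IDataSource source) { this.source = source; }

    // ...while the printer is required only at the moment of printing,
    // so it arrives as a method argument instead of bloating the constructor.
    public void Print(IPrinter printer)
    {
        printer.Print(source.Load());
    }
}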

Here's one article that considers whether constructors are harmful, not "evil". It's mostly in the context of JavaScript/Es6 but the arguments may hold for any language which has constructors.
The argument is really about user-defined constructors; you still need to call the system-provided default constructors, or else you could not create any instances at all.
The simplest argument is that if you can do the same thing with static methods as with custom constructors, isn't it better to choose one of the approaches and always stick to it, unless there is a specific reason not to?
That makes your program simpler in total and thus less error-prone. Note that Smalltalk never had "constructors".
https://medium.com/@panuviljamaa/constructors-considered-harmful-c3af0d72c2b1

Related

Have practices regarding on the use of extension methods changed or are there two schools of thought?

When I first learned about extension methods I read this:
In general, we recommend that you implement extension methods
sparingly and only when you have to. Whenever possible, client code
that must extend an existing type should do so by creating a new type
derived from the existing type.
However, I have numerous times seen very liberal use of extension methods in various production code bases.
Granted, my experience is not representative of the majority, but I would like to know if there's been a shift in the guidelines, an alternate design philosophy, or if I just happened to see enough code that ignored the guidelines to make me think so.
NOTE: I am not trying to spark a debate (which will promptly lead to this question closing) - I've honestly been wondering about this for a while and feel my best chance at getting an answer is here on SO.
I think all this highlights is the usual difference between theory and practice. In theory we should use them sparingly but in practice we don't. In theory we should do a lot of things that we don't in practice do, in life in general, not only programming.
Polymorphism does not work with extension methods, as they are bound at compile time.
ParentClass parentClass = new ParentClass();
ParentClass derivedClass = new DerivedClass();
If both of these classes have an extension method called ExtensionMethod() (I have definitely seen extension methods attempting to mimic virtual/overridden methods), both of these calls would call the parent class's extension method:
parentClass.ExtensionMethod();
derivedClass.ExtensionMethod();
In order to actually use the derived extension method, you would have to call:
((DerivedClass)derivedClass).ExtensionMethod();
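A minimal sketch of that situation (the classes are hypothetical): overload resolution for extension methods considers only the compile-time type of the receiver, so the declared type decides which overload runs.
public class ParentClass { }
public class DerivedClass : ParentClass { }

public static class MyExtensions
{
    public static void ExtensionMethod(this ParentClass p)
    {
        System.Console.WriteLine("parent extension");
    }

    public static void ExtensionMethod(this DerivedClass d)
    {
        System.Console.WriteLine("derived extension");
    }
}

// ParentClass derivedClass = new DerivedClass();
// derivedClass.ExtensionMethod();                 // "parent extension" - the static type wins
// ((DerivedClass)derivedClass).ExtensionMethod(); // "derived extension"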
Extending a class is a good approach, but often it is not available for code that uses third-party libraries (a common case in production code, unlike in education/sample code). So derive a new class if it makes sense, but feel free to use extension methods when they make code more readable.
There are multiple reasons why there are a lot of extension methods:
often you can't extend a class to add methods that would make your production code more readable, e.g. sealed types like String or base classes in hierarchies like Stream.
extension methods are a valuable way to add methods to interfaces without polluting the interface; LINQ is a good example of how this produces more readable code.
some frameworks recommend using extension methods in particular cases, e.g. ASP.NET MVC recommends adding extensions to HtmlHelper.
I think there are pros and cons to using extension methods.
Pros: Extension methods can be regarded as a form of the Visitor pattern (GoF), so they give you that pattern's advantage: we can extend features without modifying existing code.
Cons: Despite that advantage, there is a reason MSDN advises using extension methods sparingly: IMO, they can cause problems at some point. First, if the type being extended lives in a different namespace from the extension, we have to know where the extension is; sometimes that means we miss important functionality the extension provides. Moreover, I have seen a lot of code with extension abuse. Since an extension is bound to a Type, every object of that Type drags the methods along, even when it really does not need them, and we end up seeing lots of extension methods while coding.
[update]
IMO, abuses of extension methods include:
using too general a type for an extension method, like object: if there are lots of extension methods on object that are really only meant for a few types, they annoy us, as every object is linked with the methods.
broken sequencing with the dot operator, or misconceptions about it: let me show an example. In the code below, we should call the Read method and then the Write method, in that order. However, nothing stops us from calling the methods in the opposite order.
// Extension methods must live in a (non-nested) static class.
public static class MessageExtensions
{
    public static string Read(this string message)
    {
        // do something
        return message;
    }

    public static string Write(this string message)
    {
        // do something
        return message;
    }

    public static void Method()
    {
        "message".Read().Write();
        "message".Write().Read(); // this is the problem!
    }
}

Is it OK to have a class with just properties for refactoring purposes?

I have a method that takes 30 parameters. I took the parameters and put them into one class, so that I could just pass one parameter (the class) into the method. Is it perfectly fine, in the case of refactoring, to pass in an object that encapsulates all the parameters, even if that is all it contains?
That is a great idea. It is typically how data contracts are done in WCF for example.
One advantage of this model is that if you add a new parameter, the consumer of the class doesn't need to change just to add the parameter.
As David Heffernan mentions, it can help self document the code:
FrobRequest frobRequest = new FrobRequest
{
FrobTarget = "Joe",
Url = new Uri("http://example.com"),
Count = 42,
};
FrobResult frobResult = Frob(frobRequest);
While other answers here correctly point out that passing an instance of a class is better than passing 30 parameters, be aware that a large number of parameters may be a symptom of an underlying issue.
E.g., many times static methods grow in their number of parameters, because they should have been instance methods all along, and you are passing a lot of info that could more easily be maintained in an instance of that class.
Alternatively, look for ways to group the parameters into objects of a higher abstraction level. Dumping a bunch of unrelated parameters into a single class is a last resort IMO.
See How many parameters are too many? for some more ideas on this.
It's a good start. But now that you've got that new class, consider turning your code inside-out. Move the method which takes that class as a parameter into your new class (of course, passing an instance of the original class as the parameter). Now you've got a big method, alone in a class, and it will be easier to tease it apart into smaller, more manageable, testable methods. Some of those methods might move back to the original class, but a fair chunk will probably stay in your new class. You've moved beyond Introduce Parameter Object on to Replace Method with Method Object.
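A rough sketch of that inside-out move, reusing the FrobRequest/FrobResult names from the example above (the Validate/DoFrob split is hypothetical): the big method migrates onto the parameter object, where it can be split into smaller pieces.
public class FrobResult { /* ... */ }

public class FrobRequest
{
    public string FrobTarget { get; set; }
    public System.Uri Url { get; set; }
    public int Count { get; set; }

    // Formerly the free-standing Frob(FrobRequest) method; now the data
    // and the behavior live together and can be teased apart.
    public FrobResult Execute()
    {
        Validate();
        return DoFrob();
    }

    private void Validate() { /* ... */ }
    private FrobResult DoFrob() { /* ... */ return new FrobResult(); }
}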
Having a method with thirty parameters is a pretty strong sign that the method is too long and too complicated. Too hard to debug, too hard to test. So you should do something about it, and Introduce Parameter Object is a fine place to start.
Whilst refactoring to a Parameter Object isn't in itself a bad idea, it shouldn't be used to hide the problem: a class that needs 30 pieces of data provided from elsewhere could still be something of a code smell. The Introduce Parameter Object refactoring should probably be regarded as a step along the way in a broader refactoring process rather than the end of that procedure.
One of the concerns that it doesn't really address is that of Feature Envy. Does the fact that the class being passed the Parameter Object is so interested in the data of another class not indicate that maybe the methods that operate on that data should be moved to where the data resides? It's really better to identify clusters of methods and data that belong together and group them into classes, thereby increasing encapsulation and making your code more flexible.
After several iterations of splitting off behaviour and the data it operates on into separate units you should find that you no longer have any classes with enormous numbers of dependencies which is always a better end result because it'll make your code more supple.
That is an excellent idea and a very common solution to the problem. Methods with more than 2 or 3 parameters get exponentially harder and harder to understand.
Encapsulating all this in a single class makes for much clearer code. Because your properties have names you can write self-documenting code like this:
params.height = 42;
params.width = 666;
obj.doSomething(params);
Naturally, when you have a lot of parameters, the alternative based on positional identification is simply horrid.
Yet another benefit is that adding extra parameters to the interface contract can be done without forcing changes at all call sites. However, this is not always as trivial as it seems. If different call sites require different values for the new parameter, then it is harder to hunt them down than with the parameter based approach. In the parameter based approach, adding a new parameter forces a change at each call site to supply the new parameter and you can let the compiler do the work of finding them all.
Martin Fowler calls this Introduce Parameter Object in his book Refactoring. With that citation, few would call it a bad idea.
30 parameters is a mess. I think it's way prettier to have a class with the properties. You could even create multiple "parameter classes" for groups of parameters that fit in the same category.
You could also consider using a structure instead of a class.
But what you're trying to do is very common and a great idea!
It can be reasonable to use a Plain Old Data class whether you're refactoring or not. I'm curious as to why you thought it might not be.
Maybe C# 4.0's optional and named parameters would be a good alternative to this?
Anyway, the method you are describing can also be good for abstracting the program's behavior. For example, you could have one standard SaveImage(ImageSaveParameters saveParams) method in an interface, where ImageSaveParameters is also an interface that can carry additional parameters depending on the image format. For example, JpegSaveParameters has a Quality property while PngSaveParameters has a BitDepth property.
This is how the save dialog in Paint.NET does it, so it is a very real-life example.
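A small sketch of that shape (the names follow the example above; the members are illustrative guesses):
public interface ImageSaveParameters { }

public class JpegSaveParameters : ImageSaveParameters
{
    public int Quality { get; set; }   // JPEG-specific setting
}

public class PngSaveParameters : ImageSaveParameters
{
    public int BitDepth { get; set; }  // PNG-specific setting
}

public interface IImageSaver
{
    // One uniform entry point; the parameter type carries the format specifics.
    void SaveImage(ImageSaveParameters saveParams);
}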
As stated before, it is the right step to take, but consider the following too:
your method might be too complex (consider dividing it into more methods, or even turning it into a separate class)
if you create the class for the parameters, make it immutable
if many of the parameters could be null or could have some default value, you might want to use the builder pattern for your class (see the sketch below).
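A minimal builder sketch for such a parameter class (the names are hypothetical): required values go through the builder's constructor, optional ones through chainable setters, and Build() yields an immutable object.
public class SearchOptions
{
    public string Query { get; private set; }
    public int MaxResults { get; private set; }
    public bool IncludeArchived { get; private set; }

    private SearchOptions() { }

    public class Builder
    {
        private readonly string query;          // required
        private int maxResults = 50;            // optional, with a default
        private bool includeArchived = false;   // optional, with a default

        public Builder(string query) { this.query = query; }

        public Builder MaxResults(int n) { maxResults = n; return this; }
        public Builder IncludeArchived() { includeArchived = true; return this; }

        // The nested class may use the private constructor and setters.
        public SearchOptions Build()
        {
            return new SearchOptions
            {
                Query = query,
                MaxResults = maxResults,
                IncludeArchived = includeArchived
            };
        }
    }
}

// var options = new SearchOptions.Builder("constructors").MaxResults(10).Build();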
So many great answers here. I would like to add my two cents.
Parameter object is a good start, but there is more that could be done. Consider the following (Ruby examples):
/1/ Instead of simply grouping all the parameters, see if there are meaningful groupings of parameters. You might need more than one parameter object.
def display_line(startPoint, endPoint, option1, option2)
might become
def display_line(line, display_options)
/2/ A parameter object may end up with fewer properties than the original number of parameters.
def double_click?(cursor_location1, control1, cursor_location2, control2)
might become
def double_click?(first_click_info, second_click_info)
# MouseClickInfo being the parameter object type
# having cursor_location and control_at_click as properties
Such uses will help you discover possibilities for adding meaningful behavior to these parameter objects. You will find that they shake off their initial Data Class smell sooner than you expect. :--)

Why do constructors always have the same name as the class, and how are they invoked implicitly?

I want to know why the name of a constructor is always the same as the class name, and how it gets invoked implicitly when we create an object of that class. Can anyone please explain the flow of execution in such a situation?
I want to know why the name of a constructor is always the same as the class name
Because this syntax does not require any new keywords. Aside from that, there is no good reason.
To minimize the number of new keywords, I didn't use an explicit syntax like this:
class X {
constructor();
destructor();
}
Instead, I chose a declaration syntax that mirrored the use of constructors.
class X {
X();
~X();
};
This may have been overly clever. [The Design And Evolution Of C++, 3.11.2 Constructor Notation]
Can anyone please explain the flow of execution in such a situation?
The lifetime of an object can be summarized like this:
allocate memory
call constructor
use object
call destructor/finalizer
release memory
In Java, step 1 always allocates from the heap. In C#, classes are allocated from the heap as well, whereas the memory for structs is already available (either on the stack in the case of non-captured local structs or within their parent object/closure). Note that knowing these details is generally not necessary or very helpful. In C++, memory allocation is extremely complicated, so I won't go into the details here.
Step 5 depends on how the memory was allocated. Stack memory is automatically released as soon as the method ends. In Java and C#, heap memory is implicitly released by the Garbage Collector at some unknown time after it is no longer needed. In C++, heap memory is technically released by calling delete. In modern C++, delete is rarely called manually. Instead, you should use RAII objects such as std::string, std::vector<T> and std::shared_ptr<T> that take care of that themselves.
Why? Because the designers of the different languages you mention decided to make them that way. It is entirely possible for someone to design an OOP language where constructors do not have to have the same name as the class (as commented, this is the case in Python).
It is a simple way to distinguish constructors from other functions, and it makes the construction of a class in code very readable, so it makes sense as a language design choice.
The mechanism is slightly different in the different languages, but essentially it is just a method call assisted by language features (the new keyword in Java and C#, for example).
The constructor gets invoked by the runtime whenever a new object is created.
It seems to me that having separate keywords for declaring constructor(s) would be "better", as it would remove the otherwise unnecessary dependency on the name of the class itself.
Then, for instance, the code inside the class could be copied as the body of another class without changes regarding the name of the constructor(s). Why one would want to do this I don't know (possibly during some code refactoring process), but the point is that one always strives for independence between things, and here the language syntax goes against that, I think.
Same for destructors.
One of the good reasons for a constructor having the same name as its class is expressiveness. For example, in Java you create an object like,
MyClass obj = new MyClass(); // almost same in other languages too
Now, the constructor is defined as,
class MyClass {
public MyClass () {... }
}
So the statement above expresses clearly that you are creating an object, and that during this process the constructor MyClass() is called.
Now, whenever you create an object, its constructor is always called. If the class extends some other base class, the base class constructor will be called first, and so on. All these operations are implicit. First the memory for the object is allocated (on the heap), then the constructor is called to initialize the object. If you don't provide a constructor, the compiler will generate one for your class.
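A small sketch of that implicit chaining (the classes are hypothetical): creating the derived object runs the base constructor first.
public class Base
{
    public Base() { System.Console.WriteLine("Base constructor"); }
}

public class Derived : Base
{
    // Implicitly calls Base() before its own body runs.
    public Derived() { System.Console.WriteLine("Derived constructor"); }
}

// new Derived() prints:
//   Base constructor
//   Derived constructor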
In C++, strictly speaking, constructors do not have names at all. 12.1/1 in the standard states, "Constructors do not have names"; it doesn't get much clearer than that.
The syntax for declaring and defining constructors in C++ uses the name of the class. There has to be some way of doing that, and using the name of the class is concise, and easy to understand. C# and Java both copied C++'s syntax, presumably because it would be familiar to at least some of the audience they were targeting.
The precise flow of execution depends what language you're talking about, but what the three you list have in common is that first some memory is assigned from somewhere (perhaps allocated dynamically, perhaps it's some specific region of stack memory or whatever). Then the runtime is responsible for ensuring that the correct constructor or constructors are called in the correct order, for the most-derived class and also base classes. It's up to the implementation how to ensure this happens, but the required effects are defined by each of those languages.
For the simplest possible case in C++, of a class that has no base classes, the compiler simply emits a call to the constructor specified by the code that creates the object, i.e. the constructor that matches any arguments supplied. It gets more complicated once you have a few virtual bases in play.
I want to know why the name of a constructor is always the same as the class name
So that it can be unambiguously identified as the constructor.
and how its get invoked implicitly when we create object of that class.
It is invoked by the compiler because it has already been unambiguously identified by its naming scheme.
Can anyone please explain the flow of execution in such situation?
The new X() operator is called.
Memory is allocated, or an exception is thrown.
The constructor is called.
The new() operator returns to the caller.
The question is why the designers decided so.
Naming the constructor after its class is a long-established convention dating back at least to the early days of C++ in the early 1980s, possibly to its Simula predecessor.
The convention for the same name of the constructor as that of the class is for programming ease, constructor chaining, and consistency in the language.
For example, consider a scenario where you want to use the Scanner class. Now what if the Java developers had named the constructor xyz?
How would you know that you need to write:
Scanner scObj = new xyz(System.in);
That would have been really weird, right? You might have to consult a huge manual to check the constructor name of each class just to get an object created, which is pointless when the problem can be solved by simply naming constructors the same as the class.
Secondly, the compiler itself creates a constructor if you don't explicitly provide one, and the best name it can automatically choose, so that it is clear to the programmer, is obviously the name of the class.
Thirdly, you may have heard of constructor chaining: while chaining calls among constructors, how would the compiler know what name you gave to the constructor of the chained class? Obviously, the solution to the problem is again the same: keep the name of the constructor the same as that of the class.
When you create an object, you invoke the constructor by calling it in your code with the new keyword (passing arguments if needed); then all superclass constructors are invoked by chaining the calls, which finally yields the object.
Thanks for Asking.

To use the 'I' prefix for interfaces or not to

That is the question. So how big a sin is it not to use this convention when developing a C# project? This convention is widely used in the .NET class library. However, I am not a fan, to say the least, not just for aesthetic reasons: I don't think it makes any contribution. For example, is IPSec an interface for PSec? Is IIOPConnection an interface for IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any C# projects or libraries of note that drop this convention?
Do any C# projects mix conventions, as unfortunately Apache Wicket does?
The Java class libraries have existed without this for many years; I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, with IList<T> as an interface for List<T> in C#, is it not better to have List<T> as the interface, and LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T> as the classes? The class names describe the implementation. I think I get more information there than I do from List<T> in C#.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either an implementation or extension, I recommend following the standard and putting an I before an interface's name.
It's bad practice in my opinion too. The reasons, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter whether you call a method with an IParam or a Param.
Sophisticated tools have their own ways to mark interfaces, such as with an icon.
If your eye is searching for a name in an IDE, the most significant part is the beginning of the string. If your classes get sorted alphabetically, you end up with a block of similar names all starting with I. They look alike, when it would be an advantage to distinguish them easily. It's ergonomically wrong to use an I prefix.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo, which implements Bar? SSomething as Singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough, but if your code is ever read by others, consider that they will be expecting interfaces to have an I prefix; that is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: an interface is a connection point into which you can plug different concrete implementations.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether that gets a DatabaseDataStore or a FileSystemDataStore or whatever plugged into it (or a MockDataStore for testing).
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se, it's best practice. It makes things a lot more readable all in all. Also, think about it. IMyClass is the interface to MyClass. It just makes sense, and stops unnecessary confusion. Also remember the : syntax vs. implements/extends. Lastly, you can bypass all of this by simply checking the tooltips/go to in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I prefix is not that those using it can see it's an interface, but that those implementing or deriving from existing classes and interfaces can more easily see whether it's an interface or a base class.
Another advantage is that it prevents stupid things like (If my Java memory serves me correctly):
List foo = new List(); // Why does it fail?
The third advantage is refactoring. If you move through your objects and read the code you can see where you forgot to code-by-interface. "A method accepting something with a type not prefixed with I? Fix it!".
I used it even in Java and found it quite useful, but it always depends on the guidelines for your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: if my IDE could give me some hint in the text (e.g. different colour, underline, italics...) that the type was an interface, would I still bother?
Sounds like you are naming the types like that just so you can tell from the name something about parts of the definition other than the name.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of how other people do it, in this case. Since 99% of the common world will be prefacing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or on-board a new developer, you should be able to focus on the code and not have to explain/defend choices that you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion the biggest reason "I" is often prefixed is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it extremely clear that the class you are looking at is in fact an interface. The package browser in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from that of a class.
An Example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underline).
The other big reason people in the Java world prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world, where you need interfaces as your collaborators if you plan on using any AOP magic or other dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I". I'm only giving some reasons why people do it, which mostly come down to consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" can be followed by a verb, stating what classes that implement the interface do, like ISaveXmlData, forming a nice human-language name.
The key thing is consistency: as long as you stick to having I prefixed to all interfaces or to none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it for a naming convention for each interface. I find it more intuitive to quickly determine if a class implements an interface or another class simply by looking for the I prefix in the name of the base class.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Libraries you have IList<>, IQueryable, IDisposable.
If you don't prepend it with an 'I', how would people know it's an interface other than by going to the definition?
Anyways, just my 2 cents
You can choose all the names in your program however you like, but it's a good idea to follow naming conventions; otherwise you may end up being the only one who reads the program.
Using interfaces is good not only when you design your own classes and interfaces. In some cases you set different accents in your program by using interfaces. For example, you can write code like
SqlDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
// ...
}
while (dr.Read ()) {
string name = dr.GetString (0);
// ...
}
or like
IDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
// ...
}
while (dr.Read ()) {
string name = dr.GetString (0);
// ...
}
The last one looks much the same, but if you use IDataReader instead of SqlDataReader, it becomes easier to move the parts that work with dr into a method that works not only with the SqlDataReader class but also with OleDbDataReader, OracleDataReader, OdbcDataReader, etc. At the same time, your program runs exactly as fast as before.
Updated (based on questions from comments):
The advantage comes, as I wrote before, when you separate the parts of your code that work with IDataReader. For example, you can define delegate T ReadRowFromDataReader<T> (IDataReader dr, ...) and use it inside the while (dr.Read ()) block. That way you write code that is more general than code working with SqlDataReader directly: inside the while (dr.Read ()) block you call rowReader (dr, ...), and your different implementations of row-reading code can be written as methods matching the ReadRowFromDataReader<T> signature and passed as the actual parameter.
In this way you can write more independent code for working with the database. At first the usage of a generic delegate probably looks a little complex, but the code will be really easy to read. I want to emphasize once more that you really gain the advantages of using interfaces here only if you separate parts of the code into another method. If you don't separate the code, the only advantage you receive is that the code parts are written more independently, so you can copy and paste them more easily into another program. A compact sketch follows.
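A compact sketch of the delegate approach described above (the helper class is hypothetical; IDataReader is the real ADO.NET interface):
public delegate T ReadRowFromDataReader<T>(System.Data.IDataReader dr);

public static class DataReaderHelper
{
    // The loop is written once against IDataReader; what each row becomes
    // is supplied by the caller, so the same code serves SqlDataReader,
    // OleDbDataReader, OdbcDataReader, and so on.
    public static System.Collections.Generic.List<T> ReadAll<T>(
        System.Data.IDataReader dr, ReadRowFromDataReader<T> rowReader)
    {
        var rows = new System.Collections.Generic.List<T>();
        while (dr.Read())
            rows.Add(rowReader(dr));
        return rows;
    }
}

// var names = DataReaderHelper.ReadAll(dr, r => r.GetString(0));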
Using names that start with 'I' makes it easier to understand that we are now working with something more general than one class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I cannot believe that so many people hate the 'I' prefix. I love it.
Here is why:
Are abstract and interface different? Yes
Do I care about the difference as a developer? Yes, but not always.
When do I need to care?
Design discussion (when I draw on the board, the prefix 'I' clearly tells everyone it's an interface)
Reading existing code (when I see the prefix 'I', I clearly know it's an interface. There are exceptions for words that start with 'I', but very few)
Do I always need 'I'? No. But I want consistency, so YES.
Just one prefix, 'I', avoids so much communication overhead.
I think the real question in case of .NET should be: why do we ever need to distinguish between a class and an interface in a client code?
And for C# and .NET there is a shameful answer: because someone invented language support for explicit interface implementations. That feature is, in my opinion, a complete mess, because it allows breaking the Single Responsibility Principle in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing as IList.Count() does for the class; normally you can't be so sure. To me, explicit interface implementation is a hidden form of method overloading done in the most wrong way possible. Assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language point of view these are two separate methods that clearly advertise their responsibility: one works with the more abstract IList, the other with the specific implementation List. And here it is clearly visible! No one would expect both methods to return the same value, because the more specific method may consult extra state, etc. In the C# explicit-interface-implementation form, however, it is not obvious, because the caller is unaware of which form is actually used; the compiler knows, but I as a programmer might not.
Unless, that is, I know whether I am calling a class method or an interface method. I think this is the source of this somewhat odd convention for interfaces: if you use types named without the "I" prefix, especially in method arguments and return types, you may be unaware of whether you are calling a class instance method or an interface method (see the sketch below).
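A compact illustration of that pitfall (the class is hypothetical): with an explicit interface implementation, the same call can give different answers depending on the static type of the variable.
public interface ICounter
{
    int Count();
}

public class Counter : ICounter
{
    // Chosen when the variable's static type is Counter.
    public int Count() { return 42; }

    // Chosen when the variable's static type is ICounter.
    int ICounter.Count() { return 7; }
}

// Counter c = new Counter();
// c.Count();              // 42
// ((ICounter)c).Count();  // 7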
As a good programmer using SOLID principles you should work with interfaces all the time, as long as it is possible, especially if you are aware of explicit implementations.
This, in my opinion, is a hidden purpose of naming C# interfaces this way: to cover for the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever add a method-overloading feature that is effectively hidden from the calling site without expecting a naming convention to naturally appear in order to manage it?

What is better? Static methods OR Instance methods

I found that there are two types of methods, called static methods and instance methods, and read about their differences.
But I still couldn't understand the advantages of one over the other.
Sometimes I feel that static methods are not 100% object-oriented.
Are there any performance differences between the two?
Can someone help?
In a perfect OO world there probably wouldn't be any need for static methods (I think Eiffel doesn't have them, either). But at the end of the day what matters is not OO-pureness of your code (C# has enough concepts that aren't strictly pure OO, like extension methods, for example) but rather what you're getting done.
You can use static methods for general helper methods (that don't need a general helper class or state of their own) or for things like Color.FromArgb(), which behave slightly constructor-like for value types.
In general, any method that doesn't touch an object's state (and is therefore more class-specific than object-specific) can be made static. Performance differences shouldn't really arise; they aren't very measurable, in any case. Jan Gray's great article Writing Faster Managed Code: Know What Things Cost has some hard data on this, albeit to be taken with care.
The usefulness of a static method primarily comes when you need to call the method without ever instantiating the object. For example, maybe the static method is there to actually look up an existing instance and return it (an example being a singleton instance).
As others have stated, you can make any method static if it doesn't access state, and you'll get a tiny performance improvement.
If you actually want to be able to call the method on a specific instance, though, and get the benefits of polymorphism (i.e. a derived class can override the behaviour of the method), then you should make it an instance method.
If your classes implement interfaces, then the methods belonging to those interfaces must also be declared as instance methods.
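A minimal sketch of that polymorphism point (the classes are hypothetical): virtual dispatch picks the override at run time based on the actual object, which static methods cannot do.
public class Animal
{
    public virtual string Speak() { return "..."; }
}

public class Dog : Animal
{
    public override string Speak() { return "Woof"; }
}

// Animal a = new Dog();
// a.Speak();  // "Woof" - resolved at run time against the actual Dog instance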
Instance methods are tied to an instance, so one advantage of static methods is precisely that they are not tied to an instance. Static methods can (if visible) be used by other objects to solve their problems. Sometimes this is good and needed. Then you have to think about whether to keep your static methods in the same class or to start building utility classes for broader use.
I wouldn't see the use of static methods as being "less OO". Static methods are one way to circumvent the shortcomings of OO (especially in single-inheritance languages). You could call it a more functional approach (I know it isn't really).
All of this boils down to a bunch of questions you should ask of your code, and the answers should determine whether an instance method, a static method of the same class, or a static method of another class is better.
I wouldn't even think about performance issues; doing so will weaken your design, and the difference isn't really that big. Performance is important when you have performance problems.
Instance methods require passing an implicit parameter (the this reference), which makes them slightly slower than static methods. But that really should not be the reason to prefer them.
For a related discussion, look at:
Should C# methods that *can* be static be static?
If your method uses non-static data members, don't make it static (you "can't").
If your method does not use any non-static data members, you can make it static, but whether you do so should depend mostly on your design rather than on eligibility alone (there's not much difference in performance anyway, as Mehrdad said).
If you have NO non-static data members in your class, sometimes it's a best practice to make all the methods static (like in the case of grouping helper functions under one class for the sake of good order).
I'm partially guessing based on the heritage of C# but I suspect it's the same as the other OO languages.
Static methods do not require an object to work on. A good example is something like:
Double pi = Math.PI; // strictly a static field rather than a method, but no instance is needed
Instance methods do require an object. An example is along the lines of:
Integer x = 9;
String s = x.toString(); // a real instance method (Integer has no sqrt()); it needs the object x
Not all information belonging to a class should need an object instantiated for that class to get access to it. All those constants which can be used for creation of objects (Math.PI, Window.OVERLAPPED, etc) are prime examples of this.
Neither is better than the other; it really depends on your requirements. Class methods are called when you want to apply a change to the class as a whole, whereas instance methods are called when you apply a change not to the class but to a unique instance (object) of that class.
So I don't see a reason why one should be inherently better than the other.
