I have an abstract class that is supposed to become the base of a huge hierarchy of classes. Among other things, it has a certain abstract method. During execution, this method will be called constantly on all objects of this class by a system that knows only about this abstract class and not its children:
foreach (AbstractClass obj in allTheObjects)
    obj.DoStuff();
However, a lot of its children in fact don't override this method (which is empty in the base class); the ones that use it and the ones that don't are distributed throughout the class hierarchy. Will C# take care of this empty method, or will I have to optimize it in some way?
P.S.: I know that premature optimization is evil, but it just got me really curious.
I wouldn't worry about that. In C#, method calls are really cheap. And even if you optimized this, I doubt you would see any difference. See this post for reference.
I'd worry more about whether you could avoid looping through everything, or about what data structure allTheObjects is. I'd say there's much more potential for optimization there.
Also, you might consider whether you really need a big inheritance structure, or whether you can achieve your goals with composition or interfaces.
You will also find more information here (interface methods vs. delegates vs. normal method calls)
However, a lot of its children will in fact have this method empty
Maybe you need to declare this method as virtual instead of abstract? That way you can provide a default empty implementation.
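A minimal sketch of that suggestion, reusing the question's names (the two child classes are placeholders I invented for illustration):

public abstract class AbstractClass
{
    // Virtual with an empty body: children that need DoStuff override it,
    // the rest silently inherit the no-op.
    public virtual void DoStuff() { }
}

public class QuietChild : AbstractClass
{
    // No override needed; calling DoStuff() on this class does nothing.
}

public class BusyChild : AbstractClass
{
    public override void DoStuff()
    {
        // real work here
    }
}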
In the comments of this answer it is stated that "checking whether the object has implemented the interface, rampant as it may be, is a bad thing".
Below is what I believe is an example of this practice:
public interface IFoo
{
    void Bar();
}

public void DoSomething(IEnumerable<object> things)
{
    foreach (var o in things)
    {
        if (o is IFoo)
            ((IFoo)o).Bar();
    }
}
With my curiosity piqued as someone who has used variations of this pattern before, I searched for a good example or explanation of why it is a bad thing and was unable to find one.
While it is very possible that I misunderstood the comment, can someone provide me with an example or link to better explain the comment?
It depends on what you're trying to do. Sometimes it can be appropriate - examples could include:
LINQ to Objects, where it's used to optimize operations like Count which can be performed more efficiently on an IList<T> via the specialized members.
LINQ to XML, where it's used to provide a really friendly API which accepts a wide range of types, iterating over values where appropriate
If you wanted to find all the controls of a certain type under a particular control in Windows Forms, you would want to check whether each control was a container to determine whether or not to recurse (a sketch follows below).
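To make the Windows Forms example concrete, here is a hedged sketch; the helper name FindControls is my invention, not a framework API:

using System.Collections.Generic;
using System.Windows.Forms;

static IEnumerable<T> FindControls<T>(Control root) where T : Control
{
    foreach (Control child in root.Controls)
    {
        T match = child as T;       // the "is it the type I care about?" check
        if (match != null)
            yield return match;

        if (child.HasChildren)      // the container check drives the recursion
        {
            foreach (T nested in FindControls<T>(child))
                yield return nested;
        }
    }
}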
In other cases it's less appropriate and you should consider whether you can change the parameter type instead. It's definitely a "smell" - normally you shouldn't concern yourself with the implementation details of whatever has been handed to you; you should just use the API provided by the declared parameter type. This is also known as a violation of the Liskov Substitution Principle.
Whatever the dogmatic developers around may say, there are times when you simply do want to check an object's execution time type. It's hard to override object.Equals(object) correctly without using is/as/GetType, for example :) It's not always a bad thing, but it should always make you consider whether there's a better approach. Use sparingly, only where it's genuinely the most appropriate design.
I would personally rather write the code you've shown like this, mind you:
public void DoSomething(IEnumerable<object> things)
{
    foreach (var foo in things.OfType<IFoo>())
    {
        foo.Bar();
    }
}
It accomplishes the same thing, but in a neater way :)
I would expect the method to look like this, it seems much safer:
public void DoSomething(IEnumerable<IFoo> things)
{
    foreach (var o in things)
    {
        o.Bar();
    }
}
To read about the referred violation of the Liskov Principle: What is the Liskov Substitution Principle?
If you want to know why the commenter made that comment, probably best to ask them to explain.
I would not consider the code you posted to be "bad". A more "genuinely" bad practice is to use interfaces as markers. That is, you're not planning on actually using a method of the interface; rather, you have declared the interface on a class as a way of describing it in some way. Use attributes, not interfaces, as markers on classes.
Marker interfaces are hazardous in a number of ways. A real-world situation I once ran into where an important product made a bad decision on the basis of a marker interface is here: http://blogs.msdn.com/b/ericlippert/archive/2004/04/05/108086.aspx
That said, the C# compiler itself uses a "marker interface" in one situation. Mads tells the story here: http://blogs.msdn.com/b/madst/archive/2006/10/10/what-is-a-collection_3f00_.aspx
One reason is that it creates a dependency on that interface that is not immediately visible without digging into the code.
The statement

checking whether the object has implemented the interface, rampant as it may be, is a bad thing

is overly dogmatic in my opinion. As other people have answered, you may well be able to pass a collection of IFoo to your method and achieve the same result.
However, interfaces can be useful for adding optional features to classes. For example, the .NET framework provides the IDataErrorInfo interface*. When this is implemented, it indicates to a consumer that, in addition to the class's standard functionality, it can also provide error information.
In this case, the error information is optional. A WPF view model may or may not provide error information. Without querying for interfaces, this optional functionality would not be possible without base classes with huge surface area.
*We'll ignore for the moment the terrible design of the IDataErrorInfo interface.
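As a hedged illustration of querying for that optional capability (the method and parameter names here are mine, not from the answer):

using System;
using System.ComponentModel;

static void ShowErrors(object viewModel, string propertyName)
{
    // The error info is optional: only query it if the object opted in.
    if (viewModel is IDataErrorInfo errorInfo)
    {
        string message = errorInfo[propertyName];  // indexer defined by IDataErrorInfo
        if (!string.IsNullOrEmpty(message))
            Console.WriteLine(message);
    }
}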
If your method requires that you inject an instance of an interface, you should treat it the same regardless of the implementation.
In your example you generally wouldn't have a generic list of object, but a list of ISomethings, and calling ISomething.Bar() would be dispatched to the concrete type, therefore calling its implementation. If that implementation is to do nothing, then you don't have to do a check.
I dislike this whole "switch on type" style of coding for a couple of reasons. (Examples drawn in relation to my industry, game development. Apologies in advance. :) )
First and foremost, I think it's sloppy to have a heterogeneous collection of items. E.g. I could have a collection of "everything everywhere," but then when iterating the collection to apply bullet effects or fire damage or enemy AI, I have to walk this list which is mostly stuff I don't care about. It's much "cleaner" IMHO to have separate collections of bullets, raging fires, and enemies. Note that there's no reason why I can't have a single item in multiple collections; a single burning robotic missile could be referenced in all three of those lists to do parts of its "update" as appropriate for the three types of logic it needs to run. Outside of having "one single collection that references everything," I think a collection containing everything everywhere is not terribly useful; you can't do anything with anything in the list unless you query it for what it can do.
I hate doing unnecessary work. This really ties into the above, but when you create a given thing you know what its capabilities are (or can query them at that point), so you might as well take the opportunity at that time to put them in the right more specific collections. You have 16ms to process everything in the world, do you want to waste your time dealing with, querying, and selecting from generic things, or do you want to get down to business and operate only on the specific things you care about?
In my experience, transforming a codebase from generic operation on heterogeneous datasets to one that has homogeneous datasets has resulted in not only performance increases but also comprehension increases that come from simpler code doing more obvious work and in general a reduction in the amount of code required to do any given task.
So yeah, it's dogmatic to say that querying interfaces is bad, but it does seem to make things simpler if you can figure out how to avoid needing to query anything. As for my "performance" statements and the counter that "if you don't measure it, you can't say anything about it," it should be obvious that not doing something is faster than doing it. Whether or not this is important to an individual project, programmer, or function is up to the person with the editor, but if I can simplify code and while doing so make it do less work for the same results, I'm going to do it without bothering to measure.
I don’t see this as a “bad thing” at all, at least not in itself. The code is merely a literal transcription of “x all of the y in z”, and in a situation where you need to do that, it’s perfectly acceptable. You can of course use things.OfType<IFoo>() for the sake of concision.
The main reason to recommend against it is that, according to OOP theology, interfaces are intended to model the different kinds of “black box” for which an object may be substituted. Predicating an algorithm on fulfillment of an interface constitutes moving behaviour into the algorithm that should be in that interface.
Essentially, an interface is a behavioural role. If you think OOP is a good idea, then you should use interfaces only to model behaviours, so that algorithms don’t have to. I don’t think what passes for OOP these days is in fact a good idea, so this is as far as my answer can be useful.
Getters and Setters are bad
Briefly reading over the above article, I find that getters and setters are bad OO design and should be avoided, as they go against encapsulation and data hiding. As this is the case, how can they be avoided when creating objects, and how can one model objects to take this into account?
In cases where a getter or setter is required what other alternatives can be used?
Thanks.
You have missed the point. The valid, important bit of that article is:
Don't ask for the information you need to do the work; ask the object that has the information to do the work for you.
Java-style getter and setter proliferation is a symptom of ignoring this advice.
Getters or setters by themselves are not bad OO design.
What is bad is the coding practice of including a getter AND a setter for every single member automatically, whether that getter/setter is needed or not (coupled with making members public which should not be public), because this basically exposes the class's implementation to the outside world, violating information hiding/abstraction. Sometimes this is done automatically by the IDE, which means the practice is significantly more widespread than one would hope.
Yes, getters and setters are an anti-pattern in OOP: http://www.yegor256.com/2014/09/16/getters-and-setters-are-evil.html. In a nutshell, they don't fit into the object-oriented paradigm because they encourage you to treat an object like a data structure, which is a major misconception.
You may find more details in my book Elegant Objects.
I believe in including setters in the API only if they are really part of the class specification (i.e. its contract with the caller).
Any other data member related to inner representation should be hidden, and I see at least 2 major reasons for that:
1) If the inner representation is exposed, design changes are more problematic in the future and require API changes as well.
2) Exposing data members with setters/getters without caution allows callers to ruin the class invariant very easily. Objects with inconsistent state can cause bugs which are very difficult to analyze.
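A small sketch of the second point, with hypothetical names; the setter guards an invariant that a public field (or an unchecked setter) could not:

using System;

public class Temperature
{
    private double _celsius;

    public double Celsius
    {
        get => _celsius;
        set
        {
            // Protect the class invariant: no temperature below absolute zero.
            if (value < -273.15)
                throw new ArgumentOutOfRangeException(nameof(value), "Below absolute zero.");
            _celsius = value;
        }
    }
}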
Unfortunately there are some frameworks which encourage (sometimes require) adding setters/getters for everything.
Getters and setters are not bad by themselves. What is bad is the practice of making every field private and providing getters/setters for all of them, no matter what.
The way I read it, the author argues that blindly putting getters and setters around fields is no better than just making the field public.
I believe that the author argues that getters and setters should be placed sparsely, and with care, because the idea of OO is that objects should limit their exposure to what is needed only.
I'll go a bit further and say only value types should have getters. Each value type should be immutable and come with a builder which is mutable and has setters.
If your value type has setters, they should return a new instance after copying the other values.
Url.setAnchor(...)
would return a new Url, copying the host, port, etc., but overwriting the anchor.
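A hedged sketch of that idea in C#; the Url members here are invented for illustration:

public sealed class Url
{
    public string Host { get; }
    public int Port { get; }
    public string Anchor { get; }

    public Url(string host, int port, string anchor)
    {
        Host = host;
        Port = port;
        Anchor = anchor;
    }

    // A "setter" that preserves immutability: copy everything, overwrite the anchor.
    public Url SetAnchor(string anchor) => new Url(Host, Port, anchor);
}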
Your service-type classes don't need setters (set them in the ctor) and definitely don't need getters. Your Mailer should take the host/port/etc. static stuff in its ctor. If I wish to send an email then I call its send(); there is no reason why my code should need to know or want or require the host and other config values. That said, it would make sense to create a MailServer class like the following:
// value type
MailServer {
    String host
    int port
    String username
    String password // all come with getters
}

// factory
Mailer create(MailServer server)
Getters and setters are just methods. What do they do? They change a field into a property, and that's an important distinction.
A field is a bit of data, of state, in the object. A property is an externally observable characteristic, part of the contract of the object. Spraying the guts of an object all over the place, whether directly or through getters/setters, is always a bad idea. Poor design. But raising that to the point of saying that getters and setters are always bad? That's just some poor programmer without a sense of good taste claiming that a vague rule of thumb (which they didn't really understand in the first place) is a cast iron law.
Personally, I tend to go for trying to keep properties as being things that don't change unexpectedly underneath the clients' feet. (Clients of the class that is.) The only properties that I'll change dynamically are ones they can't write, and even then I'll try to avoid it. I feel that properties are in many ways values that control the behaviour of the instance which are set by the client, not arbitrary things under the control of the class. (That's what normal field is for...)
I suggest that you read the whole article carefully as it presents well thought out arguments and alternatives. The question itself is too open ended and the answers you get here will be just more opinions.
From the article:
When is an accessor okay? First, as I discussed earlier, it's okay for a method to return an object in terms of an interface that the object implements, because that interface isolates you from changes to the implementing class. This sort of method (that returns an interface reference) is not really a "getter" in the sense of a method that just provides access to a field. If you change the provider's internal implementation, you just change the returned object's definition to accommodate the changes. You still protect the external code that uses the object through its interface.
In other words, use interfaces for both getter and setter methods when they are necessary; that way you can maintain encapsulation. It is good practice in general to specify an interface for a return type or formal parameter.
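For instance, a minimal sketch of returning an interface rather than the concrete collection (the type and member names are invented for illustration):

using System.Collections.Generic;

public class Inventory
{
    private readonly List<string> _items = new List<string>();

    // Callers see only the interface; the backing List<string> could later be
    // swapped for another IReadOnlyCollection<string> without breaking them.
    public IReadOnlyCollection<string> Items => _items;
}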
There are other articles that say they are good design.
But the answer is this:
getXXX and setXXX are good or bad based on usage.
These methods do make a lot of things easier, so if some article says they're a bad design, I think it's just trying to be too hard on the idea.
One of the biggest uses of these methods is in frameworks, where they are accessed through reflection.
So don't completely avoid using them; rather, use them judiciously.
Getters and Setters are bad OO design?
Only when used without thinking.
If you're to create a value-object to transport data, they're fine.
If you're creating a more important object then they shouldn't be there. Take a look at the ArrayList interface* for instance. You don't see Object[] getElementData() because that would break the encapsulation (someone may tamper with the underlying array).
* I'm referring to the public interface of the class (the methods exposed by the class definition).
Getters/Setters are perfectly appropriate, in the right conditions. However, mass Get/Set is wrong.
However, I'm a tad confused. You tagged Java, but the material isn't really particularly Java-specific (only the examples are). It's worth noting that in C++ (best done in C++0x) many of these issues don't exist: if you wanted to change your interface from long to int, then between decltype, templates, and auto this is easily achievable, which is why such constructs rock. typedef works well too.
To prefix interface names with "I", or not: that is the question. So how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library. However, I am not a fan, to say the least, and not just for aesthetic reasons: I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway.
So would not using this convention cause confusion?
Are there any c# projects or libraries of note that drop this convention?
Do any c# projects that mix conventions, as unfortunately Apache Wicket does?
The Java class libraries have existed without this for many years, and I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, take IList<T> as an interface for List<T> in C#: isn't it better to have List<T> as the interface and then LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T>? The class names describe the implementation, so I get more information there than I do from List<T> in C#.
The difference between Java and C# is that Java allows you to easily distinguish whether you implement an interface or extend a class since it has the corresponding keywords implements and extends.
As C# only has the : to express either an implementation or extension, I recommend following the standard and put an I before an interface's name.
It's bad practice in my opinion too. The reasons why, in addition to yours, are:
The whole purpose of interfaces is to abstract away implementation details. So it shouldn't matter whether you call a method with an IParam or a Param.
Sophisticated tools have their own ways of marking interfaces with an icon.
If your eye is searching an IDE for a name, the most significant part is the beginning of the string. If your classes are sorted alphabetically, you end up with a block of similar names all starting with I... together. They look alike, when it would be an advantage to distinguish them easily. It's ergonomically wrong to use an I-prefix.
Even more annoying: ImplList, ImplThat, AFoo for an abstract Foo, AImplFooBar for an abstract Foo, which implements Bar? SSomething as Singleton, or SMath for a static class? Stop it! :)
With respect, in your post you are only considering your needs (I, I, I), and not the needs of the readers of your code. If you are a one-man shop, then fair enough; but if your code is ever read by others, then consider that they will be expecting interfaces to have an I prefix--that is just the way it is in .NET, and too many people are used to it to change now.
Also, it would help if you used more readable names for classes. What is PSec? How can I tell whether IPSec is an interface, when I can't even tell what PSec is? If instead PSec was renamed to e.g., PersonalSecurity, then IPersonalSecurity is much more likely to be an interface.
Using I for interfaces goes against the whole point of an interface, IMO: that it is a connector into which you can plug different concrete implementations as dependencies.
An object that uses the database needs a DataStore, not an IDataStore, and it should be up to configuration whether that gets a DatabaseDataStore or a FileSystemDataStore or whatever plugged into it (or a MockDataStore for testing).
Read this and move on. If you're using Java, follow the Java naming conventions.
It's not a sin per se; it's a best practice. It makes things a lot more readable all in all. Also, think about it: IMyClass is the interface to MyClass. It just makes sense and stops unnecessary confusion. Also remember the : syntax vs. implements/extends. Lastly, you can bypass all of this by simply checking the tooltips/go-to-definition in VS, but for pure readability, the standard is important in my opinion.
Not that I'm aware of, but I'm sure they exist.
Haven't seen any, but I'm sure they exist.
I think the main reason for the I-prefix is not so that those using a type can see it's an interface, but so that those implementing or deriving from existing classes and interfaces can see more easily whether it's an interface or a base class.
Another advantage is that it prevents mistakes like the following (if my Java memory serves me correctly):
List foo = new List(); // won't compile: List is an interface, not a class
The third advantage is refactoring. If you move through your objects and read the code, you can see where you forgot to code by interface: "A method accepting something with a type not prefixed with I? Fix it!"
I used it even in Java and found it quite useful, but it always depends on the guidelines of your company/team. Follow them, no matter how stupid you may think they are; some day you will be happy they exist.
Ask yourself: if my IDE could give me some hint in the text (e.g. a different colour, underline, italics...) that the type was an interface, would I still bother?
Sounds like you are naming the types like that just so you can tell from the name something about parts of the definition other than the name.
Best practices override convention sometimes, in my opinion. While I may not personally like the convention, not using it goes against the best practice that has been in place for longer than I care to think about.
I would look at it more from the point of how other people do it, in this case. Since 99% of the common world will be prefacing with the "I", that is good enough to keep this best practice. If you have to bring in a contractor or on-board a new developer, you should be able to focus on the code and not have to explain/defend choices that you made.
It has been around long enough, and is ingrained well enough, that I don't expect it to change in my lifetime. It is just one of those "unwritten rules", better defined as an "unwritten best practice", that will probably outlive me.
I would say that not following this convention would get you down to .NET hell. It's a convention that's almost as important to me as using self in instance methods in Python.
I don't see any good reason to do this. 'Extends' vs 'implements' already tells you whether you are dealing with a class or an interface in the cases where it actually matters. In all other cases the whole idea is that you don't care.
In my opinion, the biggest reason "I" is often prefixed is that the IDEs for both Java (Eclipse) and .NET (Visual Studio) do not make it extremely clear that the type you are looking at is in fact an interface. The package explorer in Eclipse shows the same icon until you expand the class file, and the font of an interface declaration is no different from that of a class.
An Example would be if I type:
ISomeInterface s = factory.create();
ISomeInterface should at least have some sort of font modification to show that it's an interface (like italics or underline).
The other big reason is in the Java world that people prefix with "I" is that it makes it easier in Eclipse to do a "Ctrl-Shift-R" and search for only interfaces.
This is important in the Java/Spring world, where you need interfaces as your collaborators if you plan on using any AOP magic or other dynamic proxies.
Then you have the nasty choice of either prefixing your interface with "I" or suffixing your implementation class with "Impl", like ListImpl. I abhor suffixing classes with "Impl" to make the interface and the concrete class differ in name, so I prefer the "I" prefix.
In general I try to avoid making lots of interfaces.
In my own code I would never prefix with "I". I'm only giving some reasons why people do it, namely consistency with old code.
Conventions exist to help all of us. If there is a chance another .NET developer will be working with you, then yes, follow the convention.
One idea is that the "I" can be followed by a verb, stating what classes that implement the interface do, like ISaveXmlData, forming a nice human-language name.
The key thing is consistency - as long as you stick to prefixing all interfaces with I, or none at all, it's a matter of preference.
I use the I prefix for interfaces at work since the existing code already uses it for a naming convention for each interface. I find it more intuitive to quickly determine if a class implements an interface or another class simply by looking for the I prefix in the name of the base class.
On the other hand, some of the older projects at work don't use this naming convention and this makes the code slightly less readable, but it might just be that I'm used to the prefix.
Look at the BCL. In the Base Class Libraries you have IList<>, IQueryable, IDisposable.
If you didn't prepend an "I", how would people know it's an interface, other than by going to the definition?
Anyways, just my 2 cents
You can choose all the names in your program however you like, but it's a good idea to follow naming conventions; otherwise you may be the only one able to read the program.
Using interfaces is good not only when you design your own classes and interfaces. In some cases you put different accents in your program if you use interfaces. For example, you can write code like:
SqlDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
or like
IDataReader dr = cmd.ExecuteReader (CommandBehavior.SequentialAccess);
if (!dr.HasRows) {
    // ...
}
while (dr.Read ()) {
    string name = dr.GetString (0);
    // ...
}
The last one looks almost the same, but if you use IDataReader instead of SqlDataReader, you can more easily move the parts that work with dr into a method that works not only with SqlDataReader (but also with OleDbDataReader, OracleDataReader, OdbcDataReader, etc.). On the other hand, your program keeps working exactly as quickly as before.
Updated (based on questions from comments):
The advantage is, as I wrote before, that you can separate out the parts of your code which work with IDataReader. For example, you can define delegate T ReadRowFromDataReader<T> (IDataReader dr, ...) and use it inside the while (dr.Read ()) block, so you write code that is more general than code working with SqlDataReader directly. Inside the while (dr.Read ()) block you call rowReader (dr, ...). Your different implementations of code reading rows of data can be placed in methods matching the ReadRowFromDataReader<T> signature and passed as an actual parameter.
In this way you can write more independent code for working with the database. At first, usage of a generic delegate probably looks a little complex, but all the code will be really easy to read. I want to emphasize one more time that you really gain these advantages of using interfaces only if you separate some parts of the code into another method. If you don't separate the code, the only advantage you receive is that the code parts are written more independently, so you could copy and paste them more easily into another program.
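A hedged sketch of what that separation might look like; the helper ReadAll and its parameters are my invention, not the answer's exact code:

using System.Collections.Generic;
using System.Data;

delegate T ReadRowFromDataReader<T>(IDataReader dr);

static List<T> ReadAll<T>(IDbCommand cmd, ReadRowFromDataReader<T> rowReader)
{
    var results = new List<T>();
    // Works with any IDataReader: SqlDataReader, OleDbDataReader, OdbcDataReader...
    using (IDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
    {
        while (dr.Read())
            results.Add(rowReader(dr));
    }
    return results;
}

// Usage: the row-reading logic is supplied from outside.
// List<string> names = ReadAll(cmd, dr => dr.GetString(0));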
Using names that start with 'I' makes it easier to understand that we are now working with something more general than one class.
I stick to the convention only because I have to, if I am to use any interfaces in the BCL and maintain consistency.
I don't like the convention, either.
I can't believe that so many people hate the 'I' prefix. I love it.
Here is why:
Are abstract and interface different? Yes
Do I care about the difference as a developer? Yes, but not always.
When do I need to care?
Design discussions (when I draw on the board, the 'I' prefix clearly tells everyone it's an interface)
Reading existing code (when I see the 'I' prefix, I know immediately it's an interface; there are exceptions for words that start with 'I', but very few)
Do I always need 'I'? No. But I want consistency, so YES.
With just one prefix 'I', it avoids so much communication overhead.
I think the real question in case of .NET should be: why do we ever need to distinguish between a class and an interface in a client code?
And for C# and .NET there is a shameful answer: because someone invented language support for explicit interface implementations. That is, in my opinion, a complete mess, because it allows breaking the Single Responsibility Principle in a way that is invisible to the caller. Let's assume we have an IList interface and a List class.
It is only by convention that List.Count() does the same thing as IList.Count() does for the class; normally you can't be so sure. To me, explicit interface implementation is a hidden form of method overloading done in the most wrong way ever. Let's assume, as in old native languages, that the instance reference is the first argument of a called method.
Now we have int Count(IList list) and int Count(List list). From the language point of view these are two separate methods that clearly advertise their responsibility: one can work with the more abstract IList, the other with the specific implementation List. And this is clearly visible here! No one would expect both methods to return the same value, because the more specific method may consult extra properties, etc. It is, however, not obvious in C#'s explicit interface implementation form, because the caller is unaware of which form is actually used: the compiler knows, but I as a programmer might be unaware.
Unless I know whether I am calling a class method or an interface method! I think this is the source of this somewhat silly convention for interfaces: if you use types named without the "I" prefix, especially in method arguments and return types, you may be unaware of whether you are calling a class instance method or an interface method.
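A contrived, hedged sketch of the divergence being described; ICounter and Widget are invented names, and nothing in the language prevents the two bodies from differing:

using System;

interface ICounter
{
    int Count { get; }
}

class Widget : ICounter
{
    // The class member most callers will hit.
    public int Count => 42;

    // The explicit implementation: a different body, invisible at the call site.
    int ICounter.Count => 0;
}

class Demo
{
    static void Main()
    {
        var w = new Widget();
        Console.WriteLine(w.Count);              // 42 -- class method
        Console.WriteLine(((ICounter)w).Count);  // 0  -- interface method
    }
}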
As a good programmer using SOLID principles you should work with interfaces all the time - as long it is possible, especially if you are aware of explicit implementations.
This is, in my opinion, the hidden purpose of naming C# interfaces this way: to cover for the bad design of explicit interface implementations. You may not agree, but think twice about it: how could you ever add a method-overloading feature that is effectively hidden from the calling site without expecting that a naming convention would naturally appear in order to manage it?
There's a lot of advice out there that you shouldn't expose your fields publicly, and instead should use trivial properties. I see it over and over.
I understand the arguments, but I don't think it's good advice in most cases.
Does anyone have an example of a time when it really mattered? When writing a trivial property made something important possible in the future (or when failing to use one got them in to real trouble)?
EDIT: The DataBinding argument is correct, but not very interesting. It's a bug in the DataBinding code that it won't accept public fields. So, we have to write properties to work around that bug, not because properties are a wise class design choice.
EDIT: To be clear, I'm looking for real-world examples, not theory. A time when it really mattered.
EDIT: The ability to set a breakpoint on the setter seems valuable. Designing my code for the debugger is unfortunate: I'd rather the debugger get smarter, but given the debugger we have, I'll take this ability. Good stuff.
It may be hard to make code work in an uncertain future, but that's no excuse to be lazy. Coding a property over a field is convention and it's pragmatic. Call it defensive programming.
Other people will also complain that there's a speed issue, but the JIT'er is smart enough to make it just about as fast as exposing a public field. Fast enough that I'll never notice.
Some non-trivial things that come to mind (a few are sketched in code after this list):
A public field is totally public; you cannot impose read-only or write-only semantics.
A property can have different get and set accessibility (e.g. public get, internal set).
You cannot override a field, but you can have virtual properties.
Your class has no control over the public field
Your class can control the property. It can limit setting to allowable range of values, flag that the state was changed, and even lazy-load the value.
Reflection semantics differ. A public field is not a property.
No databinding, as others point out. (It's only a bug to you; I can understand why the .NET framework designers do not support patterns they are not in favour of.)
You can not put a field on an interface, but you can put a property on an interface.
Your property doesn't even need to store data. You can create a facade and dispatch to a contained object.
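A minimal sketch, with invented names, illustrating a few of these points:

using System.Collections.Generic;

public class Order
{
    // Split accessibility: anyone can read, only this assembly can write.
    public decimal Total { get; internal set; }

    // Virtual property: derived classes can override it; a field could not be overridden.
    public virtual string Status { get; protected set; } = "New";

    private List<string> _lines;

    // The property needn't store data directly: here it lazy-loads on first access.
    public IReadOnlyList<string> Lines => _lines ?? (_lines = LoadLines());

    private List<string> LoadLines() => new List<string>();
}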
You only type an extra 13 characters for correctness; that hardly seems like speculative generality. There is a semantic difference, and if nothing else, a property has a different semantic meaning and is far more flexible than a public field.
public string Name { get; set; }
public string name;
I do recall one time, when first using .NET, that I coded a few classes as just fields, and then I needed them as properties for some reason; it was a complete waste of time when I could have just done it right the first time.
So what reasons do you have for not following convention? Why do you feel the need to swim upstream? What has it saved you by not doing this?
I've had a trivial property save me a couple of times when debugging. .Net doesn't support the concept of a data break point (read or write). Occasionally when debugging a very complex scenario it was important to track read/writes to a particular property. This is easy with a property but impossible with a field.
If you're not working in a production environment, it's simple to refactor a field -> property for the purpose of debugging. Occasionally though you hit bugs that only reproduce in a production environment that is difficult to patch with a new binary. A property can save you here.
It's a fairly constrained scenario though.
I used to think the same thing, Jay. Why use a property if it's only there to provide direct access to a private member? If you can describe it as an autoproperty, having a property at all rather than a field seemed kind of silly. Even if you ever need to change the implementation, you could always just refactor into a real property later and any dependent code would still work, right? Well, maybe not.
You see, I've recently seen the light on trivial properties, so maybe now I can help you do the same.
What finally convinced me was the fairly obvious point (in retrospect) that properties in .Net are just syntactic sugar for getter and setter methods, and those methods have a different name from the property itself. Code in the same assembly will still work, because you have to recompile it at the same time anyway. But any code in a different assembly that links to yours will fail if you refactor a field to a property, unless it's recompiled against your new version at the same time. If it's a property from the get-go, everything is still good.
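A small sketch of the point, using a hypothetical Customer type in a shared library:

// v1 of the shared library: Name is a field.
public class Customer
{
    public string Name;

    // v2: the "same" member as a property. It compiles to hidden
    // get_Name()/set_Name() accessor methods instead of a field, so an
    // assembly compiled against v1 that is run against v2 without being
    // recompiled typically fails at runtime with a MissingFieldException.
    // public string Name { get; set; }
}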
I'll answer your question with another one: have you ever really benefited from not making all your types and members public? I suspect I haven't directly prevented any bugs by doing so. However, I've encapsulated my types properly, only exposing what's meant to be exposed. Properties are similar - good design more than anything else. I think of properties as conceptually different from fields; they're part of the contract rather than being fundamentally part of the implementation. Thinking about them as properties rather than fields helps me to think more clearly about my design, leading to better code.
Oh, and I've benefited occasionally from not breaking source compatibility, being able to set breakpoints, log access etc.
Part of the idea is that those properties may not be trivial in the future - if you bound external code to a field and later wanted to wrap that in a property, all of the dependent code would have to change, and you may not be able to do that, especially in the case where you are a control designer or have libraries that you can't control, etc.
Not to mention there are certain .NET practices that will not allow you to use a field - databinding particularly.
I am sure there are other good reasons as well. Why does it matter to you? Use an automatic property and be done with it. Seems like something not worth being concerned about to me...
It's much easier to debug a problem involving a field if it is wrapped by a property accessor. Placing breakpoints in the accessor can quite quickly help with finding re-entrancy and other issues that otherwise might not be caught. By marshaling all access to the field through an accessor, you can ascertain exactly who is changing what and when.
In .NET, from my understanding, you cannot databind to public fields; but only to properties. Thus, if you want to do databinding, you have no choice.
I once had fields that I wanted to expose from a Windows Forms project, which tracked the stats for the program (TotalItems and SuccessfulItems).
Later I decided to display the stats on the form and was able to add a call in the setter that updated the display when the property changed.
Obviously, if you're not creating a shared class library, and you're not using DataBinding, then using a field will cause you no problems whatsoever.
But if you're creating a shared class library, you'd be foolish IMHO to do otherwise than follow the guidelines, for the usual three reasons:
consumers of your shared class library may want to use DataBinding.
consumers of your shared class might want binary compatibility, which is not possible if you switch from a field to a property.
the principle of least surprise implies you should be consistent with other shared class libraries, including the .NET Framework itself.
IMHO there is no such thing as a trivial property as people have been calling them. Via the way things such as databinding work, Microsoft has implied that any bit of data that is a part of the public interface of an object should be a property. I don't think they meant it merely to be a convention like it is in some other languages where property syntax is more about convention and convenience.
A more interesting question may be: "When should I use a public field instead of a property", or "When has a public field instead of a public property saved your bacon?"
Fields which are of structure types allow direct access to their members, while properties of such types do not. Consequently, if Thing.Boz were a field of type Point, code which wants to modify its X value could simply say Thing.Boz.X += 5;. If Thing.Boz were a mutable property, it would be necessary to use var tmp = Thing.Boz; tmp.X += 5; Thing.Boz = tmp;. The ability to write things more cleanly with the exposed field is often, but not always, a blessing.
If it will always be possible for Boz to be a field, modifying its members directly will be cleaner, faster, and better than copying it to a temporary variable, modifying that, and copying it back. If the type of Boz exposes its mutable fields directly (as structures should) rather than wrapping them in trivial wrappers, it will also be possible to use things like Interlocked methods on them--something that's simply impossible with properties. There's really only one disadvantage to using fields in that way: if it's ever necessary to replace the field with a property, code which relied upon the thing being a field will break, and may be hard to fix.
In short, I would posit that in cases where one isn't concerned about being able to swap in different versions of code without having to recompile any consumers, the biggest effect of using properties rather than fields is to prevent consumers of the code from writing code which would take advantage of (and rely upon) the semantics of exposed fields which are of structure types.
Incidentally, an alternative to exposing a field of a structure type would be to expose an ActOnXXX method. For example:
delegate void ActionByRef<T1>(ref T1 p1);
delegate void ActionByRef<T1,T2>(ref T1 p1, ref T2 p2);
delegate void ActionByRef<T1,T2,T3>(ref T1 p1, ref T2 p2, ref T3 p3);
// Method within the type that defines property `Boz`
void ActOnBoz<T1>(ActionByRef<Point, T1> proc, ref T1 p1)
{
    proc(ref _Boz, ref p1); // _Boz is the private backing field
}
Code which wanted to add some local variable q to Thing.Boz.X could call Thing.ActOnBoz((ref Point pt, ref int x) => {pt.X += x;}, ref q); and have the action performed directly on Thing._Boz, even though the field is not exposed.