Would you use regions within long switch/enum declarations? - c#

I've recently found myself needing (yes, needing) to define absurdly long switch statements and enum declarations in C# code, but I'm wondering what people feel is the best way to split them into logical subsections. In my situation, both the enum values and the cases (which are based on the enum values) have fairly clear groupings, yet I am slightly unsure how to reflect this in code.
Note that in my code, I have roughly 5 groups of between 10 and 30 enum values/cases each.
The three vaguely sensible options I can envisage are:
Define #region blocks around all logical groups of cases/enum values within the declaration (optionally separated by blank lines).
Comment each group with its name, with a blank line before each group name comment.
Do nothing whatsoever - simply leave the switch/enum as a huge list of cases/values.
Which do you prefer? Would you treat enums and switches separately? (This would seem slightly odd to me.) Now, I wouldn't say that there is any right/wrong answer to this question, though I would nonetheless be quite interested in hearing what the general consensus of views is.
Note 1: This situation where I might potentially have an extremely long enum declaration of 50/100+ values is unfortunately unavoidable (and similarly with the switch), since I am attempting to write a lexer (tokeniser), and this would thus seem the most reasonable approach for several reasons.
Note 2: I am fully aware that several duplicate questions already exist on the question of whether to use regions in general code (for structuring classes, mainly), but I feel my question here is much more specific and hasn't yet been addressed.

Sure, region those things up. They probably don't change much, and when they do, you can expand the region, make your changes, collapse it, and move on to the rest of the file.
They are there for a reason, use them to your advantage.
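For illustration, a grouped enum along the lines of the question might look something like this (the token names here are purely hypothetical):

public enum TokenType
{
    #region Keywords
    If,
    Else,
    While,
    // ...
    #endregion

    #region Operators
    Plus,
    Minus,
    Assign,
    // ...
    #endregion

    #region Literals
    IntegerLiteral,
    StringLiteral,
    // ...
    #endregion
}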

You could also have a Dictionary<[your_enum_type], Action> (or Func instead of Action) or something like that (assuming your functions have similar signatures). Then, instead of using a switch like this:
switch (item)
{
    case Enum1:
        func1(par1, par2);
        break;
    case Enum2:
        func2(par1, par2);
        break;
}
you could have something like:
public class MyClass
{
    Dictionary<int, Action<int, int>> myDictionary = new Dictionary<int, Action<int, int>>();

    // These could have only static methods also
    Group1Object myObject1;
    Group2Object myObject2;

    public MyClass()
    {
        // Again, you wouldn't have to initialize these if the functions in them were static
        myObject1 = new Group1Object();
        myObject2 = new Group2Object();
        BuildMyDictionary();
    }

    private void BuildMyDictionary()
    {
        InsertGroup1Functions();
        InsertGroup2Functions();
        //...
    }

    private void InsertGroup2Functions()
    {
        myDictionary.Add(1, myObject2.AnAction2);
        myDictionary.Add(2, myObject2.AnotherAction2);
    }

    private void InsertGroup1Functions()
    {
        myDictionary.Add(3, myObject1.AnAction1);
        myDictionary.Add(4, myObject1.AnotherAction1);
    }

    public void DoStuff(int arg1, int arg2)
    {
        int t = 3; // Get it from wherever
        // instead of a switch
        myDictionary[t](arg1, arg2);
    }
}

I would leave it as a huge list of cases/values.

If some cases share the same code block, the Strategy design pattern could remove the switch block entirely. This can create a lot of classes for you, but it will show how complex the logic really is and split it into smaller classes.
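A minimal sketch of that idea, with hypothetical names (one strategy per group of cases that share behaviour):

public interface ITokenStrategy
{
    void Handle(string text);
}

public class KeywordStrategy : ITokenStrategy
{
    public void Handle(string text) { /* shared handling for all keyword cases */ }
}

public class OperatorStrategy : ITokenStrategy
{
    public void Handle(string text) { /* shared handling for all operator cases */ }
}

// The switch then collapses to a lookup, e.g.:
// Dictionary<TokenType, ITokenStrategy> strategies;
// strategies[tokenType].Handle(text);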

Get rid of the enums and make them into objects. You could then call methods on your objects and keep the code separated, maintainable, and not a nightmare.
There are very few cases when you would actually need to use an enum instead of an object and nobody likes long switch statements.

Here's a good shortcut for people who use regions.
I was switching between Eclipse and Visual Studio when I tried to go full screen in VS by pressing
Ctrl-M-M
and lo and behold, the region closed and expanded!

Related

How do I simplify my code?

I just finished creating my first major application in C#/Silverlight. In the end the total line count came out to over 12,000 lines of code. Considering this was a rewrite of a php/javascript application I created 2 years ago that was over 28,000 lines, I am actually quite proud of my accomplishment.
After reading many questions and answers here on stackoverflow and other sites online, I followed many posters' advice: I created classes, procedures, and such for things that I would have copied and pasted a year ago; I created logic charts to figure out complex functions; I made sure there are no crazy hidden characters (used tabs instead of spaces); I placed comments where necessary (I have lots of comments); and a few other things.
My application consists of 4 tiles laid out horizontally that have user controls loaded into each slice. You can have between one and four slices loaded at any time. If you have one slice loaded, the slice takes up the entire artboard...if you have 2 loaded, each takes up half, 3 a third, 4 a quarter.
Each one of these slices represent (for the sake of this example) a light control. Each slice has 3 slider controls in it. Now when I coded the functionality of the sliders, I used a switch/case statement inside of a public function that would run the command on the specified slice/slider. This made for some duplicate code but I saw no way around it as each slice was named differently. So I would do slice1.my.commands(); slice2.my.commands(); etc.
My question to you is how do I clean up my code even further? (Sadly I cannot post any of my code.) Is there any way to take this repetition out of my code?
What you need is an interface with your friend the Strategy pattern. For example:
public interface ISlice
{
    Slider Slide { get; set; }
}

public class Slice1 : ISlice
{
    public Slider Slide { get; set; }
}

public static class SliceSlider
{
    public static void DoSomethingCoolWithTheSliceSlide(ISlice slice)
    {
        slice.Slide.LookitMeIAmLearningDesignPatterns();
    }
}
Writing less code shouldn't be your goal. In the end it's all about TCO (Total cost of ownership).
While owning less code can improve the TCO, there is one factor that has a much greater impact for TCO: maintainability. You should write the most maintainable code. Start by reading Robert Martin's Clean Code.
Update:
Also you say “I have lots of comments”. This is a point where you might improve your code. As you will learn from Martin's book, good code hardly needs any comments. Martin says that “comments are lies” and “should be reserved for technical notes about the code and design”.
Update 2:
While I'm at it, here are my favorite quotes from Robert Martin's book:
"a class or module should have one, and only one, reason to change [Single Responsibility Principle]" [page 138]
"More than three [method arguments] is very questionable and should be avoided with prejudice." [page 288]
"The First rule of functions is that they should be small. The second rule of functions is that they should be smaller than that." [page 34]
"Functions should hardly ever be 20 lines long" [page 34]
"The statements in a function should all be written at the same level of abstraction" [page 304]
"Comments should be reserved for technical notes about the code and design." [page 286]
I tend to agree with Steven. Writing less code, or fewer lines, is not always the goal. Thinking back to some of the stories of Steve Wozniak he used to make very compact hardware, putting tons of logic into a very small package, but very few people could follow what he did, maintain it, or manufacture it.
That being said, I suggest you get very familiar with Design Patterns. They may not lessen your lines of code but they may make your code easier to write, maintain, and understand. And a lot of times they do reduce the number of lines you have. Here are some resources:
DoFactory Design Patterns Reference
Wikipedia Design Pattern Article
Interfaces and abstract classes are a very strong part of the .net platform.
An interface is nothing more than a contract requirement on a class. That is: an interface is a defined set of methods and/or properties that a class implementing that interface must have. An interface is just a contract declaration.
An abstract class is really powerful because you can carry logic 'into' classes that implement that abstract class. But that is a whole other ball game.
Consider:
public interface ISlice
{
    bool DoStuff(string someParameter);
}

public class MySpecificSliceOfType : ISlice
{
    // this must have an implementation for the [bool DoStuff(string)] method
    public bool DoStuff(string mySpecificParameter)
    {
        // LOGIC in the Specific class
        return true;
    }
}

public class MyOtherSliceOfType : ISlice
{
    // this must have an implementation for the [bool DoStuff(string)] method
    public bool DoStuff(string myOtherParameter)
    {
        // LOGIC in the Other class
        return true;
    }
}
Whilst this is a heavily oversimplified example, declaring the interface implementation of the ISlice interface on both the classes 'MySpecificSliceOfType' and 'MyOtherSliceOfType' means that the requisite DoStuff() method is available regardless of which one you have, because you can do things like:
bool sliceReturn = ((ISlice)currentSlice).DoStuff(currentStringParameterValue);
This can save you from working through things like:
bool sliceReturn = false;
switch (typeOfSlice)
{
    case "other":
        sliceReturn = myOtherSliceOfType.DoStuff(currentStringParameterValue);
        break;
    case "specific":
        sliceReturn = mySpecificSliceOfType.DoStuff(currentStringParameterValue);
        break;
}
The point being illustrated here is even stronger when you have > 2 different types.
And interfaces and abstract classes combine nicely with the C# type checking stuff too.
Interfaces are fundamental in Reflection ... something to be used very sparingly but understood, because it can save so much in specific cases ... and in Serialisation (a.k.a. Serialization), which can really help you fly.
Since you can't really post any of your code, I might as well throw out a random thought. Can you put these slices into an array? If so, you might be able to get rid of some of the redundant code by having each of the controls set a variable (I'll call it whichSlice). So the controls all set whichSlice to the proper number 1-4, and then you run a normal switch and call slices[whichSlice].my.commands();

C#: Why can't we have inner methods / local functions?

Very often it happens that I have private methods which become very big and contain repeating tasks but these tasks are so specific that it doesn't make sense to make them available to any other code part.
So it would be really great to be able to create 'inner methods' in this case.
Is there any technical (or even philosophical?) limitation that prevents C# from giving us this? Or did I miss something?
Update from 2016: This is coming and it's called a 'local function'. See marked answer.
Well, we can have "anonymous methods" defined inside a function (I don't suggest using them to organize a large method):
void test() {
Action t = () => Console.WriteLine("hello world"); // C# 3.0+
// Action t = delegate { Console.WriteLine("hello world"); }; // C# 2.0+
t();
}
If something is long and complicated then usually it's good practice to refactor it to a separate class (either normal or static - depending on context) - there you can have private methods which will be specific to this functionality only.
I know a lot of people don't like regions, but this is a case where they could prove useful by grouping your specific methods into a region.
Could you give a more concrete example? After reading your post I have the following impression, which is of course only a guess, due to limited information:
Private methods are not available outside your class, so they are hidden from any other code anyway.
If you want to hide private methods from other code in the same class, your class might be too big and might violate the single responsibility rule.
Have a look at anonymous delegates and lambda expressions. It's not exactly what you asked for, but they might solve most of your problems.
Achim
If your method becomes too big, consider putting it in a separate class, or to create private helper methods. Generally I create a new method whenever I would normally have written a comment.
The better solution is to refactor this method into a separate class. Create an instance of this class as a private field in your initial class. Make the big method public and refactor the big method into several private methods, so it will be much clearer what it does.
Seems like we're going to get exactly what I wanted with Local Functions in C# 7 / Visual Studio 15:
https://github.com/dotnet/roslyn/issues/2930
private int SomeMethodExposedToObjectMembers(int input)
{
    int InnerMethod(bool b)
    {
        // TODO: Change return based on parameter b
        return 0;
    }

    var calculation = 0;
    // TODO: Some calculations based on input, store result in calculation

    if (calculation > 0) return InnerMethod(true);
    return InnerMethod(false);
}
Too bad I had to wait more than 7 years for this :-)
See also other answers for earlier versions of C#.

Programming against an enum in a switch statement, is this your way to do?

Look at the code snippet:
This is what I normally do when coding against an enum. I have a default escape with an InvalidOperationException (I do not use ArgumentException or one of its derivatives because the coding is against a private instance field and not an incoming parameter).
I was wondering if you fellow developers are coding also with this escape in mind....
public enum DrivingState { Neutral, Drive, Parking, Reverse };

public class MyHelper
{
    private DrivingState drivingState = DrivingState.Neutral;

    public void Run()
    {
        switch (this.drivingState)
        {
            case DrivingState.Neutral:
                DoNeutral();
                break;
            case DrivingState.Drive:
                DoDrive();
                break;
            case DrivingState.Parking:
                DoPark();
                break;
            case DrivingState.Reverse:
                DoReverse();
                break;
            default:
                throw new InvalidOperationException(
                    string.Format(CultureInfo.CurrentCulture,
                        "Drivestate {0} is an unknown state", this.drivingState));
        }
    }
}
In code reviews I encounter many implementations with only a break statement in the default escape. It could be an issue over time....
Your question was kinda vague, but as I understand it, you are asking us if your coding style is good. I usually judge coding style by how readable it is.
I read the code once and I understood it. So, in my humble opinion, your code is an example of good coding style.
There's an alternative to this, which is to use something similar to Java's enums. Private nested types allow for a "stricter" enum where the only "invalid" value available at compile-time is null. Here's an example:
using System;

public abstract class DrivingState
{
    public static readonly DrivingState Neutral = new NeutralState();
    public static readonly DrivingState Drive = new DriveState();
    public static readonly DrivingState Parking = new ParkingState();
    public static readonly DrivingState Reverse = new ReverseState();

    // Only nested classes can derive from this
    private DrivingState() {}

    public abstract void Go();

    private class NeutralState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Not going anywhere...");
        }
    }

    private class DriveState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Cruising...");
        }
    }

    private class ParkingState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Can't drive with the handbrake on...");
        }
    }

    private class ReverseState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Watch out behind me!");
        }
    }
}
I don't like this approach because the default case is untestable. This leads to reduced coverage in your unit tests, which while isn't necessarily the end of the world, annoys obsessive-compulsive me.
I would prefer to simply unit test each case and have an additional assertion that there are only four possible cases. If anyone ever added new enum values, a unit test would break.
Something like
[Test]
public void ShouldOnlyHaveFourStates()
{
    Assert.That(Enum.GetValues(typeof(DrivingState)).Length == 4,
        "Update unit tests for your new DrivingState!!!");
}
That looks pretty reasonable to me. There are some other options, like a Dictionary<DrivingState, Action>, but what you have is simpler and should suffice for most simple cases. Always prefer simple and readable ;-p
This is probably going off topic, but maybe not. The reason the check has to be there is in case the design evolves and you have to add a new state to the enum.
So maybe you shouldn't be working this way in the first place. How about:
interface IDrivingState
{
    void Do();
}
Store the current state (an object that implements IDrivingState) in a variable, and then execute it like this:
drivingState.Do();
Presumably you'd have some way for a state to transition to another state - perhaps Do would return the new state.
Now you can extend the design without invalidating all your existing code quite so much.
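A minimal sketch of that variation, with purely illustrative state names (Do returns the state to use next):

using System;

interface IDrivingState
{
    // Executes this state's behaviour and returns the next state.
    IDrivingState Do();
}

class NeutralState : IDrivingState
{
    public IDrivingState Do()
    {
        Console.WriteLine("Idling...");
        return this; // or return a different state to transition
    }
}

// Caller:
// drivingState = drivingState.Do();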
Update in response to comment:
With the use of enum/switch, when you add a new enum value, you now need to find each place in your code where that enum value is not yet handled. The compiler doesn't know how to help with that. There is still a "contract" between various parts of the code, but it is implicit and impossible for the compiler to check.
The advantage of the polymorphic approach is that design changes will initially cause compiler errors. Compiler errors are good! The compiler effectively gives you a checklist of places in the code you need to modify to cope with the design change. By designing your code that way, you gain the assistance of a powerful "search engine" that is able to understand your code and help you evolve it by finding problems at compile-time, instead of leaving the problems until runtime.
I would use the NotSupportedException.
The NotImplementedException is for features not implemented, but the default case is implemented. You just chose not to support it. I would only recommend throwing the NotImplementedException during development for stub methods.
I would suggest to use either NotImplementedException or better a custom DrivingStateNotImplementedException if you like to throw exceptions.
Me, I would use a default driving state for the default case (like neutral/stop) and log the missing driving state (because it's you that missed the driving state, not the customer).
It's like a real car: if the CPU notices it failed to turn on the lights, what does it do - throw an exception and "break" all control, or fall back to a known safe state and give the driver a warning, "oi, I don't have lights"?
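A minimal sketch of that idea, with a hypothetical logger standing in for whatever logging you use:

switch (this.drivingState)
{
    case DrivingState.Neutral:
        DoNeutral();
        break;
    // ... other known states ...
    default:
        // Fall back to a known safe state instead of throwing,
        // and log that a state was missed (hypothetical logger).
        logger.Warn("Unhandled DrivingState {0}; falling back to Neutral", this.drivingState);
        DoNeutral();
        break;
}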
What you should do if you encounter an unhandled enum value of course depends on the situation. Sometimes it's perfectly legal to only handle some of the values.
If it's an error that you have an unhandled value, you should definitely throw an exception just like you do in the example (or handle the error in some other way). One should never swallow an error condition without producing an indication that there is something wrong.
A default case with just a break doesn't smell very good. I would remove that to indicate the switch doesn't handle all values, and perhaps add a comment explaining why.
Clear, obvious and the right way to go. If DrivingState needs to change you may need to refactor.
The problem with all the complicated polymorphic horrors above is they force the encapsulation into a class or demand additional classes - it's fine when there's just a DrivingState.Drive() method but the whole thing breaks as soon as you have a DrivingState.Serialize() method that serializes to somewhere dependent on DrivingState, or any other real-world condition.
enums and switches are made for each other.
I'm a C programmer, not C#, but when I have something like this, I have my compiler set to warn me if not all enum cases are handled in the switch. After setting that (and setting warnings-as-errors), I don't bother with runtime checks for things that can be caught at compile time.
Can this be done in C#?
I never use switch. The code similar to what you show was always a major pain point in most frameworks I used -- unextensible and fixed to a limited number of pre-defined cases.
This is a good example of what can be done with simple polymorphism in a nice, clean and extensible way. Just declare a base DrivingStrategy and inherit all versions of driving logic from it. This is not over-engineering -- if you had two cases it would be, but four already show a need for that, especially if each version of Do... calls other methods. At least that's my personal experience.
I do not agree with Jon Skeet solution that freezes a number of states, unless that is really necessary.
I think that using enum types and therefore switch statements for implementing State (also State Design Pattern) is not a particularly good idea. IMHO it is error-prone. As the State machine being implemented becomes complex the code will be progressively less readable by your fellow programmers.
Presently it is quite clean, but without knowing the exact intent of this enum it is hard to tell how it will develop with time.
Also, I'd like to ask you here - how many operations are going to be applicable to DrivingState along with Run()? If several and if you're going to basically replicate this switch statement a number of times, it would scream of questionable design, to say the least.

Large Switch statements: Bad OOP?

I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case.
I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of basically a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable.
I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days.
Are there any patterns for this sort of problem? Any suggestions on possible implementations?
One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands.
However, this also has the problem of type explosion. I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go?
I'd appreciate your thoughts, opinions, or comments.
You may get some benefit out of a Command Pattern.
For OOP, you may be able to collapse several similar commands each into a single class, if the behavior variations are small enough, to avoid a complete class explosion (yeah, I can hear the OOP gurus shrieking about that already). However, if the system is already OOP, and each of the 100+ commands is truly unique, then just make them unique classes and take advantage of inheritance to consolidate the common stuff.
If the system is not OOP, then I wouldn't add OOP just for this... you can easily use the Command Pattern with a simple dictionary lookup and function pointers, or even dynamically generated function calls based on the command name, depending on the language. Then you can just group logically associated functions into libraries that represent a collection of similar commands to achieve manageable separation. I don't know if there's a good term for this kind of implementation... I always think of it as a "dispatcher" style, based on the MVC-approach to handling URLs.
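As a rough sketch of that dictionary-based dispatcher in C# (the command names and handler signature are just assumptions):

using System;
using System.Collections.Generic;

public class CommandDispatcher
{
    // Maps the command text to a handler that receives the command's data lines.
    private readonly Dictionary<string, Action<IList<string>>> handlers =
        new Dictionary<string, Action<IList<string>>>(StringComparer.OrdinalIgnoreCase);

    public void Register(string command, Action<IList<string>> handler)
    {
        handlers[command] = handler;
    }

    public void Dispatch(string command, IList<string> dataLines)
    {
        Action<IList<string>> handler;
        if (handlers.TryGetValue(command, out handler))
            handler(dataLines);
        else
            throw new InvalidOperationException("Unknown command: " + command);
    }
}

// Usage (hypothetical handlers):
// dispatcher.Register("LOGIN", HandleLogin);
// dispatcher.Dispatch(commandText, dataLines);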
I see having two switch statements as a symptom of non-OO design, where the switch-on-enum-type might be replaced with multiple types which provide different implementations of an abstract interface; for example, the following ...
switch (eFoo)
{
    case Foo.This:
        eatThis();
        break;
    case Foo.That:
        eatThat();
        break;
}

switch (eFoo)
{
    case Foo.This:
        drinkThis();
        break;
    case Foo.That:
        drinkThat();
        break;
}
... should perhaps be rewritten to as ...
interface IAbstract
{
    void eat();
    void drink();
}

class This : IAbstract
{
    public void eat() { ... }
    public void drink() { ... }
}

class That : IAbstract
{
    public void eat() { ... }
    public void drink() { ... }
}
However, one switch statement isn't imo such a strong indicator that the switch statement ought to be replaced with something else.
The command can be one of 100 different commands
If you need to do one out of 100 different things, you can't avoid having a 100-way branch. You can encode it in control flow (switch, if-elseif^100) or in data (a 100-element map from string to command/factory/strategy). But it will be there.
You can try to isolate the outcome of the 100-way branch from things that don't need to know that outcome. Maybe just 100 different methods is fine; there's no need to invent objects you don't need if that makes the code unwieldy.
I think this is one of the few cases where large switches are the best answer unless some other solution presents itself.
I see the strategy pattern. If I have 100 different strategies...so be it. The giant switch statement is ugly. Are all the Commands valid classnames? If so, just use the command names as class names and create the strategy object with Activator.CreateInstance.
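A minimal sketch of that approach, assuming the command classes follow a hypothetical naming convention and implement a shared interface:

using System;

public interface ICommandStrategy
{
    void Execute(string[] dataLines);
}

public static class StrategyFactory
{
    // Builds "MyApp.Commands.<CommandName>Strategy" by name; throws if the type doesn't exist.
    public static ICommandStrategy Create(string commandName)
    {
        string typeName = "MyApp.Commands." + commandName + "Strategy"; // hypothetical convention
        Type type = Type.GetType(typeName, throwOnError: true);
        return (ICommandStrategy)Activator.CreateInstance(type);
    }
}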
There are two things that come to mind when talking about a large switch statement:
It violates OCP - you could be continuously maintaining a big function.
You could have bad performance: O(n).
On the other hand a map implementation can conform to OCP and could perform with potentially O(1).
I'd say that the problem is not the big switch statement, but rather the proliferation of code contained in it, and abuse of wrongly scoped variables.
I experienced this in one project myself, when more and more code went into the switch until it became unmaintainable. My solution was to define a parameter class which contained the context for the commands (name, parameters, whatever, collected before the switch), create a method for each case statement, and call that method with the parameter object from the case.
Of course, a fully OOP command dispatcher (based on magic such as reflection or mechanisms like Java Activation) is more beautiful, but sometimes you just want to fix things and get work done ;)
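A minimal sketch of the parameter-class refactoring described above, with hypothetical names:

// Context collected once, before the switch.
public class CommandContext
{
    public string Name;
    public string[] Parameters;
}

// Each case body becomes its own small method...
private void HandleLogin(CommandContext ctx) { /* ... */ }
private void HandleLogout(CommandContext ctx) { /* ... */ }

// ...so the switch itself stays short:
private void Dispatch(CommandContext ctx)
{
    switch (ctx.Name)
    {
        case "LOGIN": HandleLogin(ctx); break;
        case "LOGOUT": HandleLogout(ctx); break;
        // ...
    }
}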
You can use a dictionary (or hash map if you are coding in Java) - this is what Steve McConnell calls table-driven methods.
One way I see you could improve that would be to make your code driven by the data: for example, for each code you match something that handles it (a function, an object). You could also use reflection to map strings representing the objects/functions and resolve them at run time, but you may want to run some experiments to assess performance.
The best way to handle this particular problem - serialization and protocols - cleanly is to use an IDL and generate the marshaling code with switch statements. Because whatever patterns (prototype factory, command pattern, etc.) you try to use otherwise, you'll need to initialize a mapping between a command id/string and a class/function pointer somehow, and it'll run slower than switch statements, since the compiler can use a perfect hash lookup for switch statements.
Yes, I think large case statements are a symptom that one's code can be improved... usually by implementing a more object-oriented approach. For example, if I find myself evaluating the type of classes in a switch statement, that almost always means I could probably use Generics to eliminate the switch statement.
You could also take a language approach here and define the commands with associated data in a grammar. You can then use a generator tool to parse the language. I have used Irony for that purpose. Alternatively you can use the Interpreter pattern.
In my opinion the goal is not to build the purest OO model, but to create a flexible, extensible, maintainable and powerful system.
I recently had a similar problem with a huge switch statement, and I got rid of the ugly switch with the most simple solution: a lookup table and a function or method returning the value you expect. The command pattern is a nice solution, but having 100 classes is not nice, I think.
So I had something like:
switch (id)
{
    case 1: DoSomething(url_1); break;
    case 2: DoSomething(url_2); break;
    // ..
    // ..
    case 100: DoSomething(url_100); break;
}
and I've changed it to:
string url = GetUrl(id);
DoSomething(url);
GetUrl can go to the DB and return the url you are looking for, or it could be a dictionary in memory holding the 100 urls.
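A minimal sketch of the in-memory variant (the urls are placeholders; requires System.Collections.Generic):

private static readonly Dictionary<int, string> urlsById = new Dictionary<int, string>
{
    { 1, "http://example.com/one" },
    { 2, "http://example.com/two" },
    // ...
    { 100, "http://example.com/hundred" }
};

private string GetUrl(int id)
{
    return urlsById[id]; // or TryGetValue for a safe fallback
}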
I hope this could help anyone out there when replacing a huge monstrous switch statement.
Think of how Windows was originally written around the application message pump. It sucked. Applications would run slower the more menu options you added. As the command being searched for sat further and further towards the bottom of the switch statement, there was an increasingly longer wait for a response. It's not acceptable to have long switch statements, period. I made an AIX daemon as a POS command handler that could handle 256 unique commands without even knowing what was in the request stream received over TCP/IP. The very first character of the stream was an index into a function array. Any index not used was set to a default message handler; log and say goodbye.
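A rough C# translation of that function-array idea (the handler signature and names are assumptions, not the original AIX code):

using System;

public class CommandTable
{
    // One slot per possible leading byte; unused slots keep the default handler.
    private readonly Action<byte[]>[] handlers = new Action<byte[]>[256];

    public CommandTable()
    {
        for (int i = 0; i < handlers.Length; i++)
            handlers[i] = UnknownCommand;
    }

    public void Register(byte commandByte, Action<byte[]> handler)
    {
        handlers[commandByte] = handler;
    }

    public void Handle(byte[] request)
    {
        // The first byte of the stream indexes straight into the table: O(1), no switch.
        handlers[request[0]](request);
    }

    private void UnknownCommand(byte[] request)
    {
        Console.Error.WriteLine("Unknown command {0}; closing connection.", request[0]);
    }
}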

Implementing a "LazyProperty" class - is this a good idea?

I often find myself writing a property that is evaluated lazily. Something like:
if (backingField == null)
    backingField = SomeOperation();
return backingField;
It is not much code, but it does get repeated a lot if you have a lot of properties.
I am thinking about defining a class called LazyProperty:
public class LazyProperty<T>
{
    private readonly Func<T> getter;

    public LazyProperty(Func<T> getter)
    {
        this.getter = getter;
    }

    private bool loaded = false;
    private T propertyValue;

    public T Value
    {
        get
        {
            if (!loaded)
            {
                propertyValue = getter();
                loaded = true;
            }
            return propertyValue;
        }
    }

    public static implicit operator T(LazyProperty<T> rhs)
    {
        return rhs.Value;
    }
}
This would enable me to initialize a field like this:
first = new LazyProperty<HeavyObject>(() => new HeavyObject { MyProperty = Value });
And then the body of the property could be reduced to:
public HeavyObject First { get { return first; } }
This would be used by most of the company, since it would go into a common class library shared by most of our products.
I cannot decide whether this is a good idea or not. I think the solutions has some pros, like:
Less code
Prettier code
On the downside, it would be harder to look at the code and determine exactly what happens - especially if a developer is not familiar with the LazyProperty class.
What do you think ? Is this a good idea or should I abandon it ?
Also, is the implicit operator a good idea, or would you prefer to use the Value property explicitly if you should be using this class ?
Opinions and suggestions are welcomed :-)
Just to be overly pedantic:
Your proposed solution to avoid repeating code:
private LazyProperty<HeavyObject> first =
new LazyProperty<HeavyObject>(() => new HeavyObject { MyProperty = Value });
public HeavyObject First {
get {
return first;
}
}
Is actually more characters than the code that you did not want to repeat:
private HeavyObject first;
public HeavyObject First {
get {
if (first == null) first = new HeavyObject { MyProperty = Value };
return first;
}
}
Apart from that, I think that the implicit cast made the code very hard to understand. I would not have guessed that a method that simply returns first, actually end up creating a HeavyObject. I would at least have dropped the implicit conversion and returned first.Value from the property.
Don't do it at all.
Generally using this kind of lazy initialized properties is a valid design choice in one case: when SomeOperation(); is an expensive operation (in terms of I/O, like when it requires a DB hit, or computationally) AND when you are certain you will often NOT need to access it.
That said, by default you should go for eager initialization, and when profiler says it's your bottleneck, then change it to lazy initialization.
If you feel urge to create that kind of abstraction, it's a smell.
Surely you'd at least want the LazyProperty<T> to be a value type, otherwise you've added memory and GC pressure for every "lazily-loaded" property in your system.
Also, what about multiple-threaded scenarios? Consider two threads requesting the property at the same time. Without locking, you could potentially create two instances of the underlying property. To avoid locking in the common case, you would want to do a double-checked lock.
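A minimal sketch of a thread-safe variant using double-checked locking (one possible shape only; on .NET 4 and later the built-in System.Lazy<T> covers this ground too):

using System;

public class ThreadSafeLazyProperty<T>
{
    private readonly Func<T> getter;
    private readonly object sync = new object();
    private volatile bool loaded;
    private T propertyValue;

    public ThreadSafeLazyProperty(Func<T> getter)
    {
        this.getter = getter;
    }

    public T Value
    {
        get
        {
            // First check avoids taking the lock once the value exists.
            if (!loaded)
            {
                lock (sync)
                {
                    // Second check: another thread may have initialized it while we waited.
                    if (!loaded)
                    {
                        propertyValue = getter();
                        loaded = true;
                    }
                }
            }
            return propertyValue;
        }
    }
}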
I prefer the first code, because a) it is such a common pattern with properties that I immediately understand it, and b) the point you raised: that there is no hidden magic that you have to go look up to understand where and when the value is being obtained.
I like the idea in that it is much less code and more elegant, but I would be very worried about the fact that it becomes hard to look at it and tell what is going on. The only way I would consider it is to have a convention for variables set using the "lazy" way, and also to comment anywhere it is used. Now there isn't going to be a compiler or anything that will enforce those rules, so still YMMV.
In the end, for me, decisions like this boil down to who is going to be looking at it and the quality of those programmers. If you can trust your fellow developers to use it right and comment well then go for it, but if not, you are better off doing it in an easily understood and followed way. /my 2cents
I don't think worrying about a developer not understanding is a good argument against doing something like this...
If you think that, then you couldn't do anything for fear of someone not understanding what you did.
You could write a tutorial or something in a central repository; we have a wiki here for these kinds of notes.
Overall, I think it's a good implementation idea (not wanting to start a debate whether lazyloading is a good idea or not)
What I do in this case is I create a Visual Studio code snippet. I think that's what you really should do.
For example, when I create ASP.NET controls, I often times have data that gets stored in the ViewState a lot, so I created a code snippet like this:
public Type Value
{
    get
    {
        if (ViewState["key"] == null)
            ViewState["key"] = someDefaultValue;
        return (Type)ViewState["key"];
    }
    set { ViewState["key"] = value; }
}
This way, the code can be easily created with only a little work (defining the type, the key, the name, and the default value). It's reusable, but you don't have the disadvantage of a complex piece of code that other developers might not understand.
I like your solution as it is very clever but I don't think you win much by using it. Lazy loading a private field in a public property is definitely a place where code can be duplicated. However this has always struck me as a pattern to use rather than code that needs to be refactored into a common place.
Your approach may become a concern in the future if you do any serialization. Also it is more confusing initially to understand what you are doing with the custom type.
Overall I applaud your attempt and appreciate its cleverness but would suggest that you revert to your original solution for the reasons stated above.
Personally, I don't think the LazyProperty class as is offers enough value to justify using it especially considering the drawbacks using it for value types has (as Kent mentioned). If you needed other functionality (like making it multithreaded), it might be justified as a ThreadSafeLazyProperty class.
Regarding the implicit operator, I like the "Value" property better. It's a little more typing, but a lot more clear to me.
I think this is an interesting idea. First I would recommend that you hide the LazyProperty from the calling code; you don't want to leak into your domain model that it is lazy. Which you're doing with the implicit operator, so keep that.
I like how you can use this approach to handle and abstract away the details of locking, for example. If you do that then I think there is value and merit. If you do add locking, watch out for the double-checked locking pattern; it's very easy to get it wrong.
