Need a C# example of unintended consequences

I am putting together a presentation on the benefits of Unit Testing and I would like a simple example of unintended consequences: Changing code in one class that breaks functionality in another class.
Can someone suggest a simple, easy-to-explain example of this?
My plan is to write unit tests around this functionality to demonstrate that we know we broke something by immediately running the test.

A slightly simpler, and thus perhaps clearer, example is:
public string GetServerAddress()
{
    return "127.0.0.1";
}

public void DoSomethingWithServer()
{
    Console.WriteLine("Server address is: " + GetServerAddress());
}
If GetServerAddress is changed to return an array:
public string[] GetServerAddress()
{
    return new string[] { "127.0.0.1", "localhost" };
}
The output from DoSomethingWithServer will be somewhat different, but it will all still compile, making for an even subtler bug.
The first (non-array) version will print Server address is: 127.0.0.1, and the second will print Server address is: System.String[]. This is something I've also seen in production code. Needless to say, it's no longer there!
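A unit test over GetServerAddress makes the breakage visible the moment the signature changes. A minimal sketch (the ServerConfig class and test names are made up; in a real suite this would be an NUnit [Test] method, but plain asserts keep it self-contained):

```csharp
using System;

public class ServerConfig // hypothetical holder for the method under test
{
    public string GetServerAddress()
    {
        return "127.0.0.1";
    }
}

public static class ServerAddressTests
{
    // AreEqual-style checks compare as object, so this still compiles after
    // GetServerAddress is changed to return string[] -- it just fails
    // immediately, flagging the break the compiler let through.
    public static void GetServerAddress_ReturnsLoopbackAddress()
    {
        object actual = new ServerConfig().GetServerAddress();
        if (!Equals("127.0.0.1", actual))
            throw new Exception("expected 127.0.0.1 but got " + actual);
    }
}
```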

Here's an example:
class DataProvider
{
    public static IEnumerable<Something> GetData()
    {
        return new Something[] { ... };
    }
}

class Consumer
{
    void DoSomething()
    {
        Something[] data = (Something[])DataProvider.GetData();
    }
}
Change GetData() to return a List<Something>, and Consumer will break.
This might seem somewhat contrived, but I've seen similar problems in real code.
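The failure mode is easy to reproduce in isolation. A runnable sketch (using int in place of the Something type, with names of my own invention) showing the cast blowing up once the provider returns a List:

```csharp
using System;
using System.Collections.Generic;

public static class CastBreakDemo
{
    // Stand-in for the provider; swap the two return statements to simulate
    // the "harmless" refactor that breaks the consumer at runtime.
    public static IEnumerable<int> GetData()
    {
        // return new int[] { 1, 2, 3 };   // original: cast succeeds
        return new List<int> { 1, 2, 3 };  // refactored: cast throws
    }

    public static void Main()
    {
        try
        {
            int[] data = (int[])GetData(); // the consumer's hidden assumption
            Console.WriteLine("Cast succeeded");
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("Cast failed: provider no longer returns an array");
        }
    }
}
```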

Say you have a method that does:
abstract class ProviderBase<T>
{
    public IEnumerable<T> Results
    {
        get
        {
            List<T> list = new List<T>();
            using (IDataReader rdr = GetReader())
                while (rdr.Read())
                    list.Add(Build(rdr));
            return list;
        }
    }

    protected abstract IDataReader GetReader();
    protected abstract T Build(IDataReader rdr);
}
With various implementations being used. One of them is used in:
public bool CheckNames(NameProvider source)
{
    IEnumerable<string> names = source.Results;
    switch (names.Count())
    {
        case 0:
            return true; // obviously none invalid.
        case 1:
            // having one name to check is a common case and for some reason
            // allows us some optimal approach compared to checking many.
            return FastCheck(names.Single());
        default:
            return NormalCheck(names);
    }
}
Now, none of this is particularly weird. We aren't assuming a particular implementation of IEnumerable. Indeed, this will work for arrays and very many commonly used collections (I can't think of one in System.Collections.Generic that doesn't match, off the top of my head). We've only used the normal methods and the normal extension methods. It's not even unusual to have an optimised case for single-item collections. We could, for instance, change the list to be an array, or maybe a HashSet (to automatically remove duplicates), or a LinkedList, or a few other things, and it'll keep working.
Still, while we aren't depending on a particular implementation, we are depending on a particular feature: that of being rewindable (Count() will either call ICollection.Count or else enumerate through the enumerable, after which the enumeration for name-checking will take place).
Someone, though, sees the Results property and thinks "hmm, that's a bit wasteful". They replace it with:
public IEnumerable<T> Results
{
    get
    {
        using (IDataReader rdr = GetReader())
            while (rdr.Read())
                yield return Build(rdr);
    }
}
This again is perfectly reasonable, and will indeed lead to a considerable performance boost in many cases. But if CheckNames isn't hit in the immediate tests done by the coder in question (maybe it isn't hit in a lot of code paths), then the fact that CheckNames will now error goes unnoticed (and with more than one name it may even return a false result rather than erroring, which could be worse still if it opens a security risk).
Any unit test that hits CheckNames with more than zero results is going to catch it, though.
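The rewind dependency can be simulated without a database: an iterator that only permits one pass fails on the second enumeration, which is exactly what a unit test around CheckNames-style code would surface. A self-contained sketch (all names are illustrative, not the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OneShotDemo
{
    // Simulates an iterator over a forward-only data reader that can only be
    // consumed once. Each enumeration re-runs the iterator body, so the flag
    // trips on the second pass.
    static bool consumed;

    public static IEnumerable<string> Results()
    {
        if (consumed)
            throw new InvalidOperationException("reader already consumed");
        consumed = true;
        yield return "alice";
        yield return "bob";
    }

    public static void Main()
    {
        IEnumerable<string> names = Results();
        int count = names.Count(); // first pass: fine
        try
        {
            // CheckNames-style second pass over the same enumerable
            foreach (var n in names) { }
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("second enumeration failed after Count()");
        }
    }
}
```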
Incidentally, a comparable (if more complicated) change is the reason for a backwards-compatibility feature in Npgsql. It wasn't quite as simple as replacing a List.Add() with a yield return, but a change to the way ExecuteReader worked gave a comparable improvement, from O(n) to O(1), in obtaining the first result. However, before that change NpgsqlConnection allowed users to obtain another reader from a connection while the first was still open, and afterwards it didn't. The docs for IDbConnection say you shouldn't do this, but that didn't mean there was no running code that did. Luckily, one such piece of running code was an NUnit test, and a backwards-compatibility feature was added to allow such code to continue to function with just a configuration change.

Related

How to avoid convoluted logic for custom log messages in code?

I know the title is a little too broad, but I'd like to know how to avoid (if possible) this piece of code I've just coded on a solution of ours.
The problem started when this code resulted in not enough log information:
...
var users = [someRemotingProxy].GetUsers([someCriteria]);
try
{
    var user = users.Single();
}
catch (InvalidOperationException)
{
    logger.WarnFormat("Either there are no users corresponding to the search or there are multiple users matching the same criteria.");
    return;
}
...
We have business logic in a module of ours that requires there to be a single 'User' matching some criteria. It turned out that, when problems started showing up, this 'inconclusive' log message was not enough for us to properly know what had happened, so I coded this method:
private User GetMappedUser([searchCriteria])
{
    var users = [remotingProxy]
        .GetUsers([searchCriteria])
        .ToList();
    switch (users.Count())
    {
        case 0:
            log.Warn("No user exists with [searchCriteria]");
            return null;
        case 1:
            return users.Single();
        default:
            log.WarnFormat("{0} users [{1}] have been found",
                users.Count(),
                String.Join(", ", users));
            return null;
    }
}
And then called it from the main code like this:
...
var user = GetMappedUser([searchCriteria]);
if (user == null) return;
...
The first odd thing I see there is the switch statement over the .Count() on the list. This seems very strange at first, but somehow ended up being the cleaner solution. I tried to avoid exceptions here because these conditions are quite normal, and I've heard that it is bad to try and use exceptions to control program flow instead of reporting actual errors. The code was throwing the InvalidOperationException from Single before, so this was more of a refactor on that end.
Is there another approach to this seemingly simple problem? It seems to be kind of a Single Responsibility Principle violation, with the logs in between the code and all that, but I fail to see a decent or elegant way out of it. It's even worse in our case because the same steps are repeated twice, once for the 'User' and then for the 'Device', like this:
Get unique user
Get unique device of unique user
For both operations, it is important to us to know exactly what happened, what users/devices were returned in case it was not unique, things like that.
#AntP hit upon the answer I like best. I think the reason you are struggling is that you actually have two problems here. The first is that the code seems to have too much responsibility. Apply this simple test: give this method a simple name that describes everything it does. If your name includes the word "and", it's doing too much. When I apply that test, I might name it "GetUsersByCriteriaAndValidateOnlyOneUserMatches()." So it is doing two things. Split it up into a lookup function that doesn't care how many users are returned, and a separate function that evaluates your business rule regarding "I can handle only one user returned".
You still have your original problem, though, and that is the switch statement seems awkward here. The strategy pattern comes to mind when looking at a switch statement, although pragmatically I'd consider it overkill in this case.
If you want to explore it, though, think about creating a base "UserSearchResponseHandler" class, and three sub classes: NoUsersReturned; MultipleUsersReturned; and OneUserReturned. It would have a factory method that would accept a list of Users and return a UserSearchResponseHandler based on the count of users (encapsulating the logic of the switch inside the factory.) Each handler method would do the right thing: log something appropriate then return null, or return a single user.
The main advantage of the Strategy pattern comes when you have multiple needs for the data it identifies. If you had switch statements buried all over your code that all depended on the count of users found by a search, then it would be very appropriate. The factory can also encapsulate substantially more complex rules, such as "user.count must = 1 AND the user[0].level must = 42 AND it must be a Tuesday in September". You can also get really fancy with a factory and use a registry, allowing for dynamic changes to the logic. Finally, the factory nicely separates the "interpreting" of the business rule from the "handling" of the rule.
But in your case, probably not so much. I'm guessing you likely have only the one occurrence of this rule, it seems pretty static, and it's already appropriately located near the point where you acquired the information it's validating. While I'd still recommend splitting out the search from the response parser, I'd probably just use the switch.
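For completeness, a minimal sketch of the factory-plus-handlers idea described above (every name here is hypothetical, following the description rather than any real code):

```csharp
using System;
using System.Collections.Generic;

public class User { public string Name; }

// Base handler plus a factory that encapsulates the switch on the count.
public abstract class UserSearchResponseHandler
{
    public abstract User Handle(IReadOnlyList<User> users);

    public static UserSearchResponseHandler For(IReadOnlyList<User> users)
    {
        switch (users.Count)
        {
            case 0: return new NoUsersReturned();
            case 1: return new OneUserReturned();
            default: return new MultipleUsersReturned();
        }
    }

    private class NoUsersReturned : UserSearchResponseHandler
    {
        public override User Handle(IReadOnlyList<User> users)
        {
            Console.WriteLine("WARN: no user matched the criteria");
            return null;
        }
    }

    private class OneUserReturned : UserSearchResponseHandler
    {
        public override User Handle(IReadOnlyList<User> users)
        {
            return users[0];
        }
    }

    private class MultipleUsersReturned : UserSearchResponseHandler
    {
        public override User Handle(IReadOnlyList<User> users)
        {
            Console.WriteLine("WARN: " + users.Count + " users matched");
            return null;
        }
    }
}
```

The caller then reduces to `var user = UserSearchResponseHandler.For(users).Handle(users);`, with logging pushed into the handlers.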
A different way to consider it would be with some Goldilocks tests. If it's truly an error condition, you could even throw:
if (users.Count() < 1)
{
    throw new TooFewUsersReturnedException();
}
if (users.Count() > 1)
{
    throw new TooManyUsersReturnedException();
}
return users[0]; // just right
How about something like this?
public class UserResult
{
    public string Warning { get; set; }
    public IEnumerable<User> Result { get; set; }
}

public UserResult GetMappedUsers(/* params */) { }

public void Whatever()
{
    var users = GetMappedUsers(/* params */);
    if (!String.IsNullOrEmpty(users.Warning))
        log.Warn(users.Warning);
}
Switch to a List<string> of warnings if required. This treats your GetMappedUsers method more like a service that returns some data and some metadata about the result, which allows you to delegate your logging to the caller - where it belongs - so your data access code can get on with just doing its job.
Although, to be honest, in this scenario I would prefer simply to return a list of user IDs from GetMappedUsers and then use users.Count to evaluate your "cases" in the caller and log as appropriate.

Enhancing testability by decomposing batch tasks

I can't seem to find much information on this so I thought I'd bring it up here. One of the issues I often find myself running into is unit testing the creation of a single object while processing a list. For example, I'd have a method signature such as IEnumerable<Output> Process(IEnumerable<Input> inputs). When unit testing a single input I would create a list of one input and simply call First() on the results and ensure it is what I expect it to be. This would lead to something such as:
public class BatchCreator
{
    public IEnumerable<Output> Create(IEnumerable<Input> inputs)
    {
        foreach (var input in inputs)
        {
            Console.WriteLine("Creating Output...");
            yield return new Output();
        }
    }
}
My current thinking is that maybe one class should be responsible for the objects creation while another class be responsible for orchestrating my list of inputs. See example below.
public interface ICreator<in TInput, out TReturn>
{
    TReturn Create(TInput input);
}

public class SingleCreator : ICreator<Input, Output>
{
    public Output Create(Input input)
    {
        Console.WriteLine("Creating Output...");
        return new Output();
    }
}

public class CompositeCreator : ICreator<IEnumerable<Input>, IEnumerable<Output>>
{
    private readonly ICreator<Input, Output> _singleCreator;

    public CompositeCreator(ICreator<Input, Output> singleCreator)
    {
        _singleCreator = singleCreator;
    }

    public IEnumerable<Output> Create(IEnumerable<Input> inputs)
    {
        return inputs.Select(input => _singleCreator.Create(input));
    }
}
With what's been posted above, I can easily test that I'm able to create one single instance of Output given an Input. Note that I do not need to call SingleCreator anywhere else in the code base other than from CompositeCreator. Creating ICreator would also give me the benefit of reusing it for the other times I need to do similar tasks, which is 2-3 other places in my current project.
Anyone have any experience with this that could shed some light? Am I simply overthinking this? Suggestions are greatly appreciated.
Generally speaking, there's nothing inherently wrong with your reasoning. More or less that's how the issue can be solved.
However, your CompositeCreator isn't actually composite, since it uses precisely one "creation method".
It's difficult to say anything more, because we don't know your project internals, but if it integrates well into your use cases, then it's fine. What I'd try is stay with ICreator<Tin, Tout> only and make an extension method IEnumerable<Tout> CreateMany(this IEnumerable<Tin> c) to deal with collections. You can test both easily, independently (fake ICreator and check whether collection of inputs is processed). This way you get rid of ICreator<IEnumerable, ...>, which is usually good, because operating on collection as a whole and operating on individual items often don't go well together.
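A sketch of that suggestion (here the extension hangs off the creator rather than the input collection; either placement works, and the Doubler creator is a made-up example purely for demonstration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ICreator<in TInput, out TReturn>
{
    TReturn Create(TInput input);
}

public static class CreatorExtensions
{
    // Lifts a single-item creator over a collection, so no
    // ICreator<IEnumerable, ...> implementation is needed at all.
    public static IEnumerable<TReturn> CreateMany<TInput, TReturn>(
        this ICreator<TInput, TReturn> creator, IEnumerable<TInput> inputs)
    {
        return inputs.Select(creator.Create);
    }
}

// Hypothetical creator used to demonstrate the extension.
public class Doubler : ICreator<int, int>
{
    public int Create(int input) { return input * 2; }
}
```

Both pieces test independently: fake an ICreator to check that CreateMany visits every input, and test each concrete creator on a single item.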
I'm not entirely sure why you need the IEnumerable input/output version (the composite creator) unless it does more than just map over a collection, as that's a problem already solved by LINQ, which would look something like:
var singleCreator = new SingleCreator();
var outputs = InputEnumerable.Select(singleCreator.Create);
I think this is subjective, and depends on the complexity of the classes you are passing around - if it's not just an IEnumerable then it's worthwhile having some sort of multiple creator, which may or may not need to be a class.

Should I create a class or use ifs?

I have a situation:
I need to do something with a class.
Which would be more efficient: modifying the method this way with ifs, or creating a method for each action?
public Value Value(int command)
{
    if (command == 1)
    {
        return DoSomething1();
    }
    else if (command == 2)
    {
        return DoSomething2();
    }
    else
    {
        return Empty();
    }
}
There are going to be 50 or more of these commands.
What is better in terms of execution performance and executable size?
At a high-level, it looks like you're trying to implement some kind of dynamic-dispatch system? Or are you just wanting to perform a specified operation without any polymorphism? It's hard to tell.
Anyway, based on the example you've given, a switch block would be the most performant, as the compiler converts a dense switch into an efficient jump-table lookup (and a string switch into a hashtable) instead of a series of comparisons, so just do this:
enum Command
{
    // observe how I use an enum instead of "magic" integers
    DoSomethingX = 1,
    DoSomethingY = 2
}

public Value GetValue(Command command)
{
    switch (command)
    {
        case Command.DoSomethingX: return DoSomethingX();
        case Command.DoSomethingY: return DoSomethingY();
        default: return GetEmpty();
    }
}
I also note that the switch block makes for more compact code.
This isn't a performance problem as much as it is a paradigm problem.
In C# a method should be an encapsulation of a task. What you have here is a metric boatload of tasks, each unrelated. That should not be in a single method. Imagine trying to maintain this method in the future. Imagine trying to debug this, wondering where you are in the method as each bit is called.
Your life will be much easier if you split this out, though the performance will probably make no difference.
Although separate methods will almost certainly be better in terms of performance, it is highly unlikely that you would notice the difference. However, having separate methods should definitely improve readability a lot, which is far more important.

Coming out of the habit of writing ifs/elseifs for every possible condition

When parsing an xml document for its nodes or attributes, if the document is large, I would have a bunch of ifs and else statements.
Obviously, 100+ ifs does not make up maintainable code in the long run.
Rather than doing this, is there another better way? I read on Hanselman's blog about a friend of his who had the same situation and wrote loads of ifs/else if and generally poor code. Hanselman provided some snippets of a more maintainable way but the entire code isn't available so it's a little hard to understand exactly what (the whole picture) is going on. Life after if, else
I am using .NET 3.5, so I have the full power of extension methods and LINQ available to me. However, I use .NET 2.0 at work, so I would also appreciate any solutions for v2.0. :)
My code looks very similar to the problem on Hanselman's site:
if (xmlNode.Attributes["a"].Value == "abc")
{
}
else if (xmlNode.Attributes["b"].Value == "xyz")
{
    wt = MyEnum.Haze;
}
I could just have a dictionary storing the values I am looking for as keys, and perhaps a delegate as the value (or whatever I want to happen on finding a required value), so I could say, in pseudocode: if ContainsKey, get the delegate and execute it.
This sort of thing goes on and on. It is obviously a very naive way of coding. I have the same problem with parsing a text document for values, etc.
Thanks
If you need to map <condition on xml node> to <change of state> there's no way to avoid defining that mapping somewhere. It all depends on how many assumptions you can make about the conditions and what you do under those conditions. I think the dictionary idea is a good one. To offer as much flexibility as possible, I'd start like this:
Dictionary<Predicate<XmlNode>, Action> mappings;
Then start simplifying where you can. For example, are you often just setting wt to a value of MyEnum like in the example? If so, you want something like this:
Func<MyEnum, Action> setWt = val =>
() => wt = val;
And for the presumably common case that you simply check if an attribute has a specific value, you'd want some convenience there too:
Func<string, string, Predicate<XmlNode>> checkAttr = (attr, val) =>
    node => node.Attributes[attr].Value == val;
Now your dictionary can contain items like:
...
{checkAttr("a", "abc"), setWt(MyEnum.Haze)},
...
Which is nice and terse, but also isn't restricted to the simple <attribute, value> to <enum> mapping. OK, so now you have a big dictionary of these condition-action pairs, and you just say:
foreach (KeyValuePair<Predicate<XmlNode>, Action> mapping in mappings)
{
    if (mapping.Key(xmlNode))
    {
        mapping.Value();
        break;
    }
}
If you avoid the lambda syntax and the dictionary initializers, you should be able to do that in 2.0.
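A C# 2 rendition of the same idea might look like this (class and method names are invented; note the hand-declared parameterless Action delegate, since that type only shipped in a later framework version):

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

// Parameterless Action is not in .NET 2.0, so declare it ourselves.
public delegate void Action();

public class NodeDispatcher
{
    // Generic Dictionary, Predicate<T>, and anonymous delegates all exist in
    // .NET 2.0; only the lambda syntax and collection initializers are missing.
    private readonly Dictionary<Predicate<XmlNode>, Action> mappings =
        new Dictionary<Predicate<XmlNode>, Action>();

    // Convenience registration for the common "attribute equals value" test.
    public void Map(string attr, string val, Action action)
    {
        mappings.Add(delegate(XmlNode node)
        {
            XmlAttribute a = node.Attributes[attr];
            return a != null && a.Value == val;
        }, action);
    }

    // Run the first matching action, then stop.
    public void Dispatch(XmlNode node)
    {
        foreach (KeyValuePair<Predicate<XmlNode>, Action> m in mappings)
        {
            if (m.Key(node)) { m.Value(); break; }
        }
    }
}
```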
What you're doing here is executing a list of tests. For each test, if a predicate is true, execute an action. When a test passes, stop processing the list. Right?
A couple of people have suggested using a dictionary, but the problem with using a dictionary is that you don't control the order of the items in it. If you want to perform the tests in a specific order (which, as stated, you do), that's not going to work. So a list seems like the way to go.
Here's a functional way to do this, assuming that the predicates are examining an XmlElement.
Your tests are instances of a class:
class Test
{
    public string Predicate { get; set; }
    public Action Verb { get; set; }

    public Test(string predicate, Action verb)
    {
        Predicate = predicate;
        Verb = verb;
    }

    public bool Execute(XmlElement e)
    {
        if (e.SelectSingleNode(Predicate) != null)
        {
            Verb();
            return true;
        }
        return false;
    }
}
To populate the list of tests:
List<Test> tests = new List<Test>();
tests.Add(new Test("#foo = 'bar'", Method1));
tests.Add(new Test("#foo = 'baz'", Method2));
tests.Add(new Test("#foo = 'bat'", Method3));
To execute the tests:
foreach (Test t in tests)
{
    if (t.Execute(element)) break;
}
You've eliminated a lot of if/else clutter, but you've replaced it with this:
void Method1()
{
    // ... do something here
}

void Method2()
{
    // ... do something else here
}
If your method naming is good, though, this results in pretty clean code.
To use .NET 2.0, I think you need to add this to the code:
public delegate void Action();
because I believe that type was only defined in .NET 3.5. I could be wrong.
The link you are referring to spells out one of my favorite approaches - populating a dictionary and using it as a map from your xml attributes to the values you're setting, etc.
Another "trick" I've used is taking an extension on that. If your logic around containing a specific attribute is more than just setting a value, you can make a dictionary of attribute names (or values) to delegates, where the delegate sets your value and optionally performs some logic.
This is nice because it works in .net 2 and .net3/3.5. The delegates can be nicer to setup in .net 3.5, though.
Once you have the map, you can do a foreach loop over all of your attributes and just look up the delegate; if it exists, call it, and if it doesn't, move on/throw/etc. - all up to you.
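A sketch of that attribute-map loop (all names here are mine, not from the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

public class AttributeDispatcher
{
    // Map from attribute name to a handler receiving the attribute's value.
    // Action<string> exists in .NET 2.0, so this works there too (with
    // anonymous delegates instead of lambdas).
    private readonly Dictionary<string, Action<string>> handlers =
        new Dictionary<string, Action<string>>();

    public void Register(string attributeName, Action<string> handler)
    {
        handlers[attributeName] = handler;
    }

    public void Process(XmlNode node)
    {
        foreach (XmlAttribute attr in node.Attributes)
        {
            Action<string> handler;
            if (handlers.TryGetValue(attr.Name, out handler))
                handler(attr.Value);
            // unknown attributes: move on, throw, etc. -- up to you
        }
    }
}
```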
Well, I would use LINQ in 3.5. However, have you thought about using a typed dataset? Is this a possibility, or is the schema too loose? You could infer the schema and still reduce a lot of the gobbledygook code. This is one approach.
Depending on the document and your scenario and what you use the if/elses for... if it's for validation of your XML document, validate it against a schema. If it validates, you can assume safely that certain elements are present...
It's a bit hard to tell what you need to do. If it's to set one variable based on an XML attribute then the one line approach Hanselman alluded to is the most elegant.
MyEnum wt = (MyEnum)Enum.Parse(typeof(MyEnum), xmlNode.Attributes["a"].Value, true);
From the brief example you provided it looks like you may need to set the variable based on different XML attributes and if that's the case you may not be able to get around the need for a big if/else block.
If you do have a semblance of structure in the XML you are processing it can sometimes be easier to process an XML node as a DataRow. If your XML is all over the place then this approach isn't much good.
DataSet xmlDerivedSet = new DataSet();
xmlDerivedSet.ReadXml(xmlFilename);

foreach (DataRow row in xmlDerivedSet.Tables[0].Rows)
{
    MyClass xmlDerivedClass = new MyClass(row);
}
If you are doing processing for each big node, you might also want some specific typed XML reader classes to more cleanly separate the parsing from the actual logic of the application.
Let's say the processing you are doing is for a good amount of customer data you are receiving as XML. You could define a class like:
public class CustomerXmlReader
{
    public CustomerXmlReader(XmlReader xml) { }

    public Customer Read()
    {
        // read the next customer
    }
}
This way the rest of the application just keeps working with the Customer object, and you avoid mixing it with the XML processing.
What Scott Hanselman is describing, once you clear away the implementation details, is a straightforward table-driven method. These are discussed in many books, such as Steve McConnell's "Code Complete", chapter 12 (either edition.) Also discussed on Stack Overflow,
here.

Programming against an enum in a switch statement, is this your way to do?

Look at the code snippet:
This is what I normally do when coding against an enum: I have a default escape with an InvalidOperationException (I do not use ArgumentException or one of its derivatives because the check is against a private instance field and not an incoming parameter).
I was wondering if you fellow developers also code with this escape in mind....
public enum DrivingState { Neutral, Drive, Parking, Reverse };

public class MyHelper
{
    private DrivingState drivingState = DrivingState.Neutral;

    public void Run()
    {
        switch (this.drivingState)
        {
            case DrivingState.Neutral:
                DoNeutral();
                break;
            case DrivingState.Drive:
                DoDrive();
                break;
            case DrivingState.Parking:
                DoPark();
                break;
            case DrivingState.Reverse:
                DoReverse();
                break;
            default:
                throw new InvalidOperationException(
                    string.Format(CultureInfo.CurrentCulture,
                        "Drivestate {0} is an unknown state", this.drivingState));
        }
    }
}
In code reviews I encounter many implementations with only a break statement in the default escape. That could become an issue over time....
Your question was kinda vague, but as I understand it, you are asking us if your coding style is good. I usually judge coding style by how readable it is.
I read the code once and I understood it. So, in my humble opinion, your code is an example of good coding style.
There's an alternative to this, which is to use something similar to Java's enums. Private nested types allow for a "stricter" enum where the only "invalid" value available at compile-time is null. Here's an example:
using System;

public abstract class DrivingState
{
    public static readonly DrivingState Neutral = new NeutralState();
    public static readonly DrivingState Drive = new DriveState();
    public static readonly DrivingState Parking = new ParkingState();
    public static readonly DrivingState Reverse = new ReverseState();

    // Only nested classes can derive from this
    private DrivingState() {}

    public abstract void Go();

    private class NeutralState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Not going anywhere...");
        }
    }

    private class DriveState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Cruising...");
        }
    }

    private class ParkingState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Can't drive with the handbrake on...");
        }
    }

    private class ReverseState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Watch out behind me!");
        }
    }
}
I don't like this approach because the default case is untestable. This leads to reduced coverage in your unit tests, which while isn't necessarily the end of the world, annoys obsessive-compulsive me.
I would prefer to simply unit test each case and have an additional assertion that there are only four possible cases. If anyone ever added new enum values, a unit test would break.
Something like
[Test]
public void ShouldOnlyHaveFourStates()
{
    Assert.That(Enum.GetValues(typeof(DrivingState)).Length == 4,
        "Update unit tests for your new DrivingState!!!");
}
That looks pretty reasonable to me. There are some other options, like a Dictionary<DrivingState, Action>, but what you have is simpler and should suffice for most simple cases. Always prefer simple and readable ;-p
This is probably going off topic, but maybe not. The reason the check has to be there is in case the design evolves and you have to add a new state to the enum.
So maybe you shouldn't be working this way in the first place. How about:
interface IDrivingState
{
    void Do();
}
Store the current state (an object that implements IDrivingState) in a variable, and then execute it like this:
drivingState.Do();
Presumably you'd have some way for a state to transition to another state - perhaps Do would return the new state.
Now you can extend the design without invalidating all your existing code quite so much.
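A sketch of that transition idea, with Do returning the next state (the state classes and the particular transitions are invented for illustration):

```csharp
using System;

// Each state decides what to do and which state comes next.
interface IDrivingState
{
    IDrivingState Do();
}

class NeutralState : IDrivingState
{
    public IDrivingState Do()
    {
        Console.WriteLine("idling");
        return new DriveState(); // transition chosen purely for illustration
    }
}

class DriveState : IDrivingState
{
    public IDrivingState Do()
    {
        Console.WriteLine("cruising");
        return this; // stay in Drive
    }
}

class Car
{
    private IDrivingState state = new NeutralState();

    // No switch anywhere: adding a new state means adding a new class,
    // and the compiler forces it to implement Do().
    public void Run() { state = state.Do(); }
}
```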
Update in response to comment:
With the use of enum/switch, when you add a new enum value, you now need to find each place in your code where that enum value is not yet handled. The compiler doesn't know how to help with that. There is still a "contract" between various parts of the code, but it is implicit and impossible for the compiler to check.
The advantage of the polymorphic approach is that design changes will initially cause compiler errors. Compiler errors are good! The compiler effectively gives you a checklist of places in the code you need to modify to cope with the design change. By designing your code that way, you gain the assistance of a powerful "search engine" that is able to understand your code and help you evolve it by finding problems at compile-time, instead of leaving the problems until runtime.
I would use the NotSupportedException.
The NotImplementedException is for features not implemented, but the default case is implemented. You just chose not to support it. I would only recommend throwing the NotImplementedException during development for stub methods.
I would suggest to use either NotImplementedException or better a custom DrivingStateNotImplementedException if you like to throw exceptions.
Me, I would use a default driving state (like neutral/stop) and log the missing driving state (because it's you who missed the driving state, not the customer).
It's like a real car: if the CPU finds it has failed to turn on the lights, what does it do? Throw an exception and "brake" all control, or fall back to a known safe state and give the driver a warning: "oi, I don't have lights"?
What you should do if you encounter an unhandled enum value of course depends on the situation. Sometimes it's perfectly legal to only handle some of the values.
If it's an error that you have an unhandled value, you should definitely throw an exception, just like you do in the example (or handle the error in some other way). One should never swallow an error condition without producing an indication that something is wrong.
A default case with just a break doesn't smell very good. I would remove that to indicate the switch doesn't handle all values, and perhaps add a comment explaining why.
Clear, obvious and the right way to go. If DrivingState needs to change you may need to refactor.
The problem with all the complicated polymorphic horrors above is that they force the encapsulation into a class or demand additional classes - it's fine when there's just a DrivingState.Drive() method, but the whole thing breaks down as soon as you have a DrivingState.Serialize() method that serializes to somewhere dependent on DrivingState, or any other real-world condition.
enums and switches are made for each other.
I'm a C programmer, not C#, but when I have something like this, I have my compiler set to warn me if not all enum cases are handled in the switch. After setting that (and setting warnings-as-errors), I don't bother with runtime checks for things that can be caught at compile time.
Can this be done in C#?
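For reference, modern C# gets close: switch expressions (C# 8 and later) produce warning CS8509 when they don't cover every declared member of an enum, so with warnings-as-errors the effect is similar to the C setup described. A sketch, assuming C# 8+:

```csharp
public enum DrivingState { Neutral, Drive, Parking, Reverse }

public static class StateNames
{
    public static string Name(DrivingState s) => s switch
    {
        DrivingState.Neutral => "neutral",
        DrivingState.Drive   => "drive",
        DrivingState.Parking => "parking",
        DrivingState.Reverse => "reverse",
        // Deliberately no discard arm: if a new enum member is added later,
        // the compiler warns (CS8509) at every switch expression like this
        // one. At runtime an undeclared value throws SwitchExpressionException.
    };
}
```

Classic switch statements don't get this check, so this only helps where you can use the expression form.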
I never use switch. Code similar to what you show has always been a major pain point in most frameworks I've used: inextensible and fixed to a limited number of pre-defined cases.
This is a good example of what can be done with simple polymorphism in a nice, clean and extensible way. Just declare a base DrivingStrategy and inherit all versions of the driving logic from it. This is not over-engineering - if you had two cases it would be, but four already show a need for it, especially if each version of Do... calls other methods. At least that's my personal experience.
I do not agree with Jon Skeet's solution, which freezes the number of states, unless that is really necessary.
I think that using enum types and therefore switch statements for implementing State (also State Design Pattern) is not a particularly good idea. IMHO it is error-prone. As the State machine being implemented becomes complex the code will be progressively less readable by your fellow programmers.
Presently it is quite clean, but without knowing the exact intent of this enum it is hard to tell how it will develop with time.
Also, I'd like to ask you here - how many operations are going to be applicable to DrivingState along with Run()? If several and if you're going to basically replicate this switch statement a number of times, it would scream of questionable design, to say the least.
