Strategy pattern for modifying internals of the caller? - c#

Perhaps strategy pattern isn't what I'm after. Say my code looks like this (pseudo version):
class Machine
{
    private Stack<State> _internals;

    public void DoOperation(Thingy x)
    {
        switch (x.operation)
        {
            case Op.Foo:
                DoFoo();
                break;
            case Op.Bar:
                DoBar();
                break;
            case Op.Baz:
                DoBaz();
                break;
        }
    }

    private void DoFoo()
    {
        // pushing and popping things from _internals, doing things to those States
    }

    private void DoBar()
    {
        // similarly large method to Foo, but doing something much different to _internals
    }

    private void DoBaz()
    {
        // you get the idea...
    }
}
Foo, Bar, and Baz are rather complex methods (not extremely long, but they deserve separating), so I want to break them into classes with a common interface, a la the strategy pattern. The problem is, I can't encapsulate _internals in those classes. I mean, I could pass it into the Execute method on those classes, but that seems like a bad way to go. The internals persist longer than a single operation, so the strategy classes can't "own" the internals themselves. Multiple different operations could be done on this Machine, with different Thingy instances passed in.
Is there a different route you can suggest?
Edit:
This is kind of a state machine, but not in the sense that one operation is only valid in a particular state. _internals is a stack of states instead of just the current state. Any of the three operations can be done at any time.

Your strategy 'strategy' seems sound. The code looks good so far; you need to actually declare an interface, but I think you've got that.
I don't see why you can't pass _internals. That would be part of the interface definition: the members would need to accept the internals' type (or some "_internals_data" type, or whatever you call it).
You could wrap it up a bit by defining the interface to be something like:
Execute
SendInLimitedSubsetOfInternals
ReturnsModifiedSubsetOfInternals
Then the two data methods could deal in something as simple as an array of strings, to really tighten down the interaction. You could even slot serialization into the middle at some later point.
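For what it's worth, a minimal sketch of that kind of interface, reusing the Stack<State>, Thingy, and Op types from the question (the IOperationStrategy and FooOperation names are made up for illustration):
using System.Collections.Generic;

// Hypothetical strategy interface: each operation borrows the machine's
// internals for the duration of one call but never owns them.
public interface IOperationStrategy
{
    void Execute(Stack<State> internals, Thingy x);
}

public class FooOperation : IOperationStrategy
{
    public void Execute(Stack<State> internals, Thingy x)
    {
        // push/pop states here, just as DoFoo() did
    }
}

public class Machine
{
    private readonly Stack<State> _internals = new Stack<State>();

    private readonly Dictionary<Op, IOperationStrategy> _operations =
        new Dictionary<Op, IOperationStrategy>
        {
            { Op.Foo, new FooOperation() },
            // { Op.Bar, new BarOperation() }, etc.
        };

    public void DoOperation(Thingy x)
    {
        _operations[x.operation].Execute(_internals, x);
    }
}
The internals stay owned by Machine; the strategies only see them while executing, which matches the lifetime described in the question.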

Related

Passing constructor delegate or object for unmanaged resources

In my (simplified) problem I have a method "Reading" that can use many different implementations of some IDisposableThing. I am passing constructor delegates right now so I can use the using statement.
Is this approach of passing a delegate for the constructor of my object appropriate?
My problem is that things like List<Func<IDisposable>> etc. start looking a bit scary (because delegates look like crap in C#), and passing in an object seems more usual and a clearer statement of intent.
Is there a better/different way of managing this situation without delegates?
public void Main()
{
    Reading(() => new DisposableThingImplementation());
    Reading(() => new AnotherDisposableThingImplementation());
}

public void Reading(Func<IDisposableThing> constructor)
{
    using (IDisposableThing streamReader = constructor())
    {
        //do things
    }
}
As I said in the comment, it's difficult to say what's best for your situation, so instead I'll just list your options so you can make an informed decision:
Continue doing what you're doing
Having to pass around objects with an unpleasantly complicated-looking type is maybe not ideal visually, but in your situation it may well be perfectly appropriate.
Use a custom delegate type
You can define a delegate like:
public delegate IDisposableThing DisposableThingConstructor();
Then anywhere you would write Func<IDisposableThing>, you can just write DisposableThingConstructor instead. For a commonly used delegate type, this may improve code readability, though this too is a matter of taste.
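For example, the Reading method from the question could then be declared against the named delegate (same body as before):
// Same behaviour as the Func<IDisposableThing> version, just a friendlier type name.
public void Reading(DisposableThingConstructor constructor)
{
    using (IDisposableThing streamReader = constructor())
    {
        //do things
    }
}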
Move the using statements out of Reading
This really depends on whether it's sensible for the lifecycle management of these objects to be a responsibility of the Reading method or not. Given what we have of your code at the moment, we can't really judge this for you. An implementation with the lifecycle management moved out would look like:
public void Main()
{
    using (var disposableThing = new DisposableThingImplementation())
        Reading(disposableThing);
}

public void Reading(IDisposableThing disposableThing)
{
    //do things
}
Use a factory pattern
In this option, you create a class which returns new IDisposableThing implementations. Lots of information can be found on the factory pattern which you may well already know, so I won't repeat it all here. This option may well be overkill for your purposes here, adding a lot of pointless complexity, but depending on how those DisposableThings are constructed, it may have additional benefits which make it worthwhile.
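As a rough sketch of what a factory might look like here (the factory names below are invented for illustration):
// Hypothetical factory abstraction over the constructors shown in the question.
public interface IDisposableThingFactory
{
    IDisposableThing Create();
}

public class DisposableThingFactory : IDisposableThingFactory
{
    public IDisposableThing Create()
    {
        return new DisposableThingImplementation();
    }
}

public void Reading(IDisposableThingFactory factory)
{
    using (IDisposableThing thing = factory.Create())
    {
        //do things
    }
}
Functionally this is very close to the delegate version; the main gain is a named abstraction you can mock, decorate, or register in a container later.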
Use a generic argument
This option will only work if all of your IDisposableThing implementations have a parameterless constructor. I'm guessing that's not the case, but in case it is, it's a relatively straightforward approach:
public void Reading<T>() where T : IDisposableThing, new()
{
    using (var disposableThing = new T())
    {
        //do things
    }
}
Use an Inversion of Control container
This is another option which would certainly be overkill if used for this purpose alone. I include it mostly for completeness. Inversion of control containers like Ninject will give you easy ways to manage the lifecycles of objects passed into others.
I very much doubt this would be an appropriate solution in your case, especially since the disposable objects are not being used in another class's constructor. If you later run into a situation where you're trying to manage object lifecycle in a larger, complex object graph, this option might be worth revisiting.
Construct the objects outside of the using statement
This is specifically described as "not a best practice" in the MSDN documentation, but it is an option. You can do:
public void Main()
{
    Reading(new DisposableThingImplementation());
}

public void Reading(IDisposableThing disposableThing)
{
    using (disposableThing)
    {
        //do things
    }
}
At the end of the using statement, the Dispose method will be called, but the object will not be garbage collected because it is still in scope. Trying to use the object after that would be very likely to cause problems because it has already been disposed. So again, while this is an option, it's unlikely to be a good one.
Is this approach of passing a delegate for the constructor of my object appropriate? My problem is that things like List<Func<IDisposable>> etc. start looking a bit scary (because delegates look like crap in C#) and passing in an object seems more usual and a clearer statement of intent.
Yes, it's fine. However I understand your concern about passing a list of those things... Perhaps creating a custom delegate with the same signature as Func<IDisposable> and a more explicit name (e.g. SomethingFactory) would be clearer.
Is there a better/different way of managing this situation without delegates?
You could pass a factory or a list of factories to the method. I don't think it's really "better", though; it's mostly the same, since your factory would typically be represented as an interface with a single method, which is essentially the same as a delegate.

C# Object construction outside the constructor

When it comes to designing classes and the "communication" between them, I always try to design them in such a way that all object construction and composition take place in the object's constructor. I don't like the idea of object construction and composition taking place from outside, like other objects setting properties and calling methods on my object to initialize it. This gets especially ugly when multiple objects try to do this to your object and you never know in what order your props/methods will be executed.
Unfortunately I stumble into such situations quite often, especially now with the growing popularity of dependency injection frameworks: lots of libraries and frameworks rely on some kind of external object initialization, and quite often require not only constructor injection on our object but property injection too.
My questions are:
Is it OK to have objects that rely on some method or property being called on them, after which they can consider themselves initialized?
Is there some kind of pattern for situations when your object acts as a receiver and must support multiple interfaces that call it, and the order of these calls matters? (something better than setting flags, like ThisWasDone, ThatWasCalled)
Is it OK to have objects that rely on some method or property being called on them, after which they can consider themselves initialized?
No. Init methods are a pain, since there is no guarantee that they will get called. A simple solution is to switch to interfaces and use a factory or builder pattern to compose the implementation.
Mark Seemann has written an article about it: http://blog.ploeh.dk/2011/05/24/DesignSmellTemporalCoupling.aspx
Is there some kind of pattern for situations when your object acts as a receiver and must support multiple interfaces that call it, and the order of these calls matters? (something better than setting flags, like ThisWasDone, ThatWasCalled)
Builder pattern.
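For illustration, a rough sketch of how a builder can replace an Init method: the consumer can only obtain a fully constructed object, so there is no "call Init before use" rule to forget (all names below are hypothetical):
using System;

public class Receiver
{
    // Fully initialized in the constructor; no Init() needed afterwards.
    public Receiver(string connection, int bufferSize) { /* ... */ }
}

public class ReceiverBuilder
{
    private string _connection;
    private int _bufferSize = 1024;

    public ReceiverBuilder WithConnection(string connection)
    {
        _connection = connection;
        return this;
    }

    public ReceiverBuilder WithBufferSize(int bufferSize)
    {
        _bufferSize = bufferSize;
        return this;
    }

    public Receiver Build()
    {
        if (_connection == null)
            throw new InvalidOperationException("A connection is required before Build().");
        return new Receiver(_connection, _bufferSize);
    }
}

// Usage: var receiver = new ReceiverBuilder().WithConnection("...").Build();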
I think it is OK, but there are implications. If this is an object to be used by others, you need to ensure that an exception is thrown any time a method or property is accessed before the required initialization has been performed.
Obviously it is much more convenient and intuitive if you can take care of this in the constructor, then you don't have to implement these checks.
I don't see anything wrong with this. It may not be so convenient, but you cannot ALWAYS do initialization in the ctor, just like you cannot always drive under a green light. These are decisions that you make based on your app's requirements.
It's OK. Imagine, for example, that your object needs to read data from a TCP stream or from a file that could be missing or corrupted. Raising an exception from the ctor is baaad.
It's OK. If you think, for example, about some DSL compiler of yours, it can look like:
A) find all global variables and check whether their total memory allocation satisfies your device requirements
B) parse for errors
C) check for self-cycling
And so on...
Hope this helps.
Answering (1)
Why not? An engine needs the driver because the driver must insert the key and later power the car on. Will a car do things like detect its current speed if the engine is stopped? Will it show the remaining oil without being powered on?
Some programming goals won't be able to have their actors initialized during object construction, and this isn't because it's an improper way of doing things but because it's the natural, regular and/or semantically sensible way of representing their whole behavior.
Answering (2)
Decent class-usage documentation will be your best friend. As with the answer to (1), there are some things in this world that must be done in a certain way to get them done right, and that's not a problem but a requirement.
Checking objects' state using flags isn't a problem either; it's a good way of adding reliability to your object models, because both their own behaviors and their consumers will know whether things got done as expected or not.
First of all, Factory Method.
public class MyClass
{
    private MyClass()
    {
    }

    public static MyClass Create()
    {
        return new MyClass();
    }
}
Second of all, why do you not want another class creating an object for you? (Factory)
public class MyThingFactory
{
    public IThing CreateThing(Speed speed)
    {
        if (speed == Speed.Fast)
        {
            return new FastThing();
        }
        return new SlowThing();
    }
}
Third, why do multiple classes have side effects on new instances of your class? Don't you have declarative control over what other classes have access to your object?

C# On extending a large class in favor of readability

I have a large abstract class that handles weapons in my game. Combat cycles through a list of basic functions:
OnBeforeSwing
OnSwing
OnHit || OnMiss
What I have in mind is moving all combat damage-related calculations to another folder that handles just that. Combat damage-related calculations.
I was wondering if it would be correct to do so by making the OnHit method an extension one, or what would be the best approach to accomplish this.
Also. Periodically there are portions of the OnHit code that are modified, the hit damage formula is large because it takes into account a lot of conditions like resistances, transformation spells, item bonuses, special properties and other, similar, game elements.
This ends with a 500 line OnHit function, which kind of horrifies me. Even with region directives it's pretty hard to go through it without getting lost in the maze or even distracting yourself.
If I were to extend weapons with this function instead of just having the OnHit function, I could try to separate the different portions of the attack into other functions.
Then again, maybe I could do that by calling something like CombatSystem.HandleWeaponHit from the OnHit in the weapon class, and not use extension methods. It might be more appropriate.
Basically my question is if leaving it like this is really the best solution, or if I could (should?) move this part of the code into an extension method or a separate helper class that handles the damage model, and whether I should try and split the function into smaller "task" functions to improve readability.
I'm going to go out on a limb and suggest that your engine may not be abstracted enough. Mind you, I'm suggesting this without knowing anything else about your system aside from what you've told me in the OP.
In similar systems that I've designed, there were Actions and Effects. These were base classes. Each specific action (a machine gun attack, a specific spell, and so on) was a class derived from Action. Actions had a list of one or more specific effects that could be applied to Targets. This was achieved using Dependency Injection.
The combat engine didn't do all the math itself. Essentially, it asked the Target to calculate its defense rating, then cycled through all the active Actions and asked them to determine if any of its Effects applied to the Target. If they applied, it asked the Action to apply its relevant Effects to the Target.
Thus, the combat engine is small, and each Effect is very small, and easy to maintain.
If your system is one huge monolithic structure, you might consider a similar architecture.
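A very rough sketch of that shape, purely to illustrate the structure (all class and member names are mine, not from any particular engine):
using System.Collections.Generic;

// Each Effect knows how to test and apply itself to a Target.
public abstract class Effect
{
    public abstract bool AppliesTo(Target target);
    public abstract void Apply(Target target);
}

// An Action is little more than a named bundle of Effects.
public abstract class GameAction
{
    private readonly List<Effect> _effects;

    protected GameAction(IEnumerable<Effect> effects)
    {
        _effects = new List<Effect>(effects);
    }

    public void ApplyTo(Target target)
    {
        foreach (var effect in _effects)
        {
            if (effect.AppliesTo(target))
                effect.Apply(target);
        }
    }
}

public class Target
{
    public int DefenseRating { get; set; }
    public int Health { get; set; }
}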
OnHit should be an event handler, for starters. Any object that is hit should raise a Hit event, and then you can have one or more event handlers associated with that event.
If you cannot split up your current OnHit function into multiple event handlers, you can split it up into a single event handler but refactor it into multiple smaller methods that each perform a specific test or a specific calculation. It will make your code much more readable and maintainable.
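A small sketch of the event-based idea (the event and argument names are hypothetical):
using System;

public class HitEventArgs : EventArgs
{
    public int Damage { get; set; }
}

public abstract class BaseWeapon
{
    // Each concern (resistances, item bonuses, spells, ...) can subscribe separately.
    public event EventHandler<HitEventArgs> Hit;

    protected virtual void OnHit(HitEventArgs e)
    {
        var handler = Hit;
        if (handler != null)
            handler(this, e);
    }
}

// Usage: weapon.Hit += (sender, e) => e.Damage += itemBonus;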
IMHO Mike Hofer gives the leads.
The real point is not whether it's a matter of an extension method or not. The real point is that a single (extension or regular) method is inconceivable for such a complicated bunch of calculations.
Before thinking about the best implementation, you obviously need to rethink the whole thing to identify the best possible dispatch of responsibilities on objects. Each piece of elemental calculation must be done by the object it applies to. Always keep in mind the GRASP design patterns, especially Information Expert, Low Coupling and High Cohesion.
In general, each method in your project should always be a few lines of code long, no more. For each piece of calculation, think of which are all the classes on which this calculation is applicable. Then make this calculation a method of the common base class of them.
If there is no common base class, create a new interface, and make all these classes implement this interface. The interface might have methods or not : it can be used as a simple marker to identify the mentioned classes and make them have something in common.
Then you can build an elemental extension method like in this fake example:
public interface IExploding
{
    int ExplosionRadius { get; }
}

public class Grenade : IExploding
{
    public int ExplosionRadius { get { return 30; } }
    // ...
}

public class StinkBomb : IExploding
{
    public int ExplosionRadius { get { return 10; } }
    // ...
}

public static class Extensions
{
    public static int Damages(this IExploding explodingObject)
    {
        return explodingObject.ExplosionRadius * 100;
    }
}
This sample is totally cheesy but simply aims to give leads to re-engineer your system in a more abstracted and maintainable way.
Hope this will help you!

c# vb: when they say static classes should not have state

When they say static classes should not have state/side effects, does that mean:
static void F(Human h)
{
    h.Name = "asd";
}
is violating it?
Edit:
I have a private variable now called p, which is an integer. It's never read at all throughout the entire program, so it can't affect any program flow.
Is this violating "no side effects"?:
static int p;

static void F(Human h)
{
    p = 123;
    h.Name = "asd";
}
The input and output are still always the same in this case...
When you say "they", who are you referring to?
Anyways, moving on. A method such as what you presented is completely fine - if that's what you want it to do, then OK. No worries.
Similarly, it is completely valid for a static class to have some static state. Again, it could be that you would need that at some point.
The real thing to watch out for is something like
static class A
{
    private static int x = InitX();

    static A()
    {
        Console.WriteLine("A()");
    }

    private static int InitX()
    {
        Console.WriteLine("InitX()");
        return 0;
    }

    ...
}
If you use something along these lines, then you could easily be confused about when the static constructor is called and when InitX() is called. If you had some side effects / state changing that occurs like in this example, then that would be bad practice.
But as far as your actual question goes, those kind of state changes and side effects are fine.
Edit
Looking at your second example, and taking the rule precisely as it is stated, then, yes, you are in violation of it.
But...
Don't let that rule necessarily stop you from doing things like this. It can be very useful in some cases; e.g. when a method does an intensive calculation, memoization is an easy way to reduce the performance cost. While memoization technically has state and side effects, the output is always the same for every input, which is what's really important.
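For example, a minimal memoization sketch in a static class (the Fibonacci example is mine, not from the question):
using System.Collections.Generic;

public static class Memoized
{
    // State, but invisible from the outside: the same input always yields
    // the same output; the cache only avoids recomputation.
    private static readonly Dictionary<int, long> Cache = new Dictionary<int, long>();

    public static long Fibonacci(int n)
    {
        if (n < 2)
            return n;

        long result;
        if (Cache.TryGetValue(n, out result))
            return result;

        result = Fibonacci(n - 1) + Fibonacci(n - 2);
        Cache[n] = result;
        return result;
    }
}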
Side effects of a static member mean that it changes the value of some other members in its containing class. The static member in your case does not affect other members of its class, and it is not violating the sentence you have mentioned.
EDIT
In the second example you've added by editing your question, you are violating it.
It is perfectly acceptable for methods of a static class to change the state of objects that are passed to them. Indeed, that is the primary use for non-function static methods (since a non-function method which doesn't change the state of something would be pretty useless).
The pattern to be avoided is having a static class where methods have side-effects that are not limited to the passed-in objects or objects referenced by them. Suppose, for example, one had an embroidery-plotting class which had functions to select an embroidery module, and to scale, translate, or rotate future graphic operations. If multiple routines expect to do some drawing, it could be difficult to prevent device-selections or transformations done by one routine from affecting other routines. There are two common ways to resolve this problem:
Have all the static graphic routines accept a parameter which will hold a handle to the current device and world transform.
Have a non-static class which holds a device handle and world transform, and have it expose a full set of graphic methods.
In many cases, the best solution will be to have a class which uses the second approach for its external interface, but possibly uses the first method internally. The first approach is somewhat better with regard to the Single Responsibility Principle, but from an external calling standpoint, using class methods is often nicer than using static ones.
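A rough sketch of the two approaches, using an invented plotting API purely for illustration:
using System;

public class PlotContext
{
    public IntPtr DeviceHandle { get; set; }
    public double Scale { get; set; }
}

// Approach 1: static methods take the context explicitly, so nothing one
// caller does can leak into another caller's drawing.
public static class StaticPlotter
{
    public static void DrawLine(PlotContext context, double x1, double y1, double x2, double y2)
    {
        // uses only the context passed in
    }
}

// Approach 2: an instance owns the context and exposes the same operations.
public class Plotter
{
    private readonly PlotContext _context;

    public Plotter(PlotContext context)
    {
        _context = context;
    }

    public void DrawLine(double x1, double y1, double x2, double y2)
    {
        // externally nicer to call; internally can delegate to the static version
        StaticPlotter.DrawLine(_context, x1, y1, x2, y2);
    }
}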

Programming against an enum in a switch statement, is this your way to do?

Look at the code snippet:
This is what I normally do when coding against an enum. I have a default escape with an InvalidOperationException (I do not use ArgumentException or one of its derivatives because the coding is against a private instance field and not an incoming parameter).
I was wondering if you fellow developers also code with this escape in mind....
public enum DrivingState { Neutral, Drive, Parking, Reverse };

public class MyHelper
{
    private DrivingState drivingState = DrivingState.Neutral;

    public void Run()
    {
        switch (this.drivingState)
        {
            case DrivingState.Neutral:
                DoNeutral();
                break;
            case DrivingState.Drive:
                DoDrive();
                break;
            case DrivingState.Parking:
                DoPark();
                break;
            case DrivingState.Reverse:
                DoReverse();
                break;
            default:
                throw new InvalidOperationException(
                    string.Format(CultureInfo.CurrentCulture,
                        "Drivestate {0} is an unknown state", this.drivingState));
        }
    }
}
In code reviews I encounter many implementations with only a break statement in the default escape. It could be an issue over time....
Your question was kinda vague, but as I understand it, you are asking us if your coding style is good. I usually judge coding style by how readable it is.
I read the code once and I understood it. So, in my humble opinion, your code is an example of good coding style.
There's an alternative to this, which is to use something similar to Java's enums. Private nested types allow for a "stricter" enum where the only "invalid" value available at compile-time is null. Here's an example:
using System;

public abstract class DrivingState
{
    public static readonly DrivingState Neutral = new NeutralState();
    public static readonly DrivingState Drive = new DriveState();
    public static readonly DrivingState Parking = new ParkingState();
    public static readonly DrivingState Reverse = new ReverseState();

    // Only nested classes can derive from this
    private DrivingState() {}

    public abstract void Go();

    private class NeutralState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Not going anywhere...");
        }
    }

    private class DriveState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Cruising...");
        }
    }

    private class ParkingState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Can't drive with the handbrake on...");
        }
    }

    private class ReverseState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Watch out behind me!");
        }
    }
}
I don't like this approach because the default case is untestable. This leads to reduced coverage in your unit tests, which while isn't necessarily the end of the world, annoys obsessive-compulsive me.
I would prefer to simply unit test each case and have an additional assertion that there are only four possible cases. If anyone ever added new enum values, a unit test would break.
Something like
[Test]
public void ShouldOnlyHaveFourStates()
{
    Assert.That(Enum.GetValues(typeof(DrivingState)).Length == 4,
        "Update unit tests for your new DrivingState!!!");
}
That looks pretty reasonable to me. There are some other options, like a Dictionary<DrivingState, Action>, but what you have is simpler and should suffice for most simple cases. Always prefer simple and readable ;-p
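For reference, a minimal sketch of that dictionary-based alternative, reusing the drivingState field and the Do* methods from the question (assuming those methods exist on MyHelper):
private readonly Dictionary<DrivingState, Action> handlers;

public MyHelper()
{
    handlers = new Dictionary<DrivingState, Action>
    {
        { DrivingState.Neutral, DoNeutral },
        { DrivingState.Drive,   DoDrive   },
        { DrivingState.Parking, DoPark    },
        { DrivingState.Reverse, DoReverse }
    };
}

public void Run()
{
    Action handler;
    if (!handlers.TryGetValue(this.drivingState, out handler))
        throw new InvalidOperationException(
            string.Format(CultureInfo.CurrentCulture,
                "Drivestate {0} is an unknown state", this.drivingState));
    handler();
}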
This is probably going off topic, but maybe not. The reason the check has to be there is in case the design evolves and you have to add a new state to the enum.
So maybe you shouldn't be working this way in the first place. How about:
interface IDrivingState
{
    void Do();
}
Store the current state (an object that implements IDrivingState) in a variable, and then execute it like this:
drivingState.Do();
Presumably you'd have some way for a state to transition to another state - perhaps Do would return the new state.
Now you can extend the design without invalidating all your existing code quite so much.
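As a sketch of that idea, assuming Do returns the next state (the concrete states here are illustrative, not from the question):
using System;

interface IDrivingState
{
    IDrivingState Do();
}

class NeutralState : IDrivingState
{
    public IDrivingState Do()
    {
        Console.WriteLine("Idling...");
        return new DriveState();   // transition to whatever comes next
    }
}

class DriveState : IDrivingState
{
    public IDrivingState Do()
    {
        Console.WriteLine("Cruising...");
        return this;               // stay in the same state
    }
}

// Caller:
// drivingState = drivingState.Do();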
Update in response to comment:
With the use of enum/switch, when you add a new enum value, you now need to find each place in your code where that enum value is not yet handled. The compiler doesn't know how to help with that. There is still a "contract" between various parts of the code, but it is implicit and impossible for the compiler to check.
The advantage of the polymorphic approach is that design changes will initially cause compiler errors. Compiler errors are good! The compiler effectively gives you a checklist of places in the code you need to modify to cope with the design change. By designing your code that way, you gain the assistance of a powerful "search engine" that is able to understand your code and help you evolve it by finding problems at compile time, instead of leaving the problems until runtime.
I would use the NotSupportedException.
The NotImplementedException is for features not implemented, but the default case is implemented. You just chose not to support it. I would only recommend throwing the NotImplementedException during development for stub methods.
I would suggest using either NotImplementedException or, better, a custom DrivingStateNotImplementedException if you like to throw exceptions.
Me, I would use a default driving state for the default case (like neutral/stop) and log the missing driving state (because it's you that missed the driving state, not the customer).
It's like a real car: if the CPU notices it has failed to turn on the lights, what does it do, throw an exception and "break" all control, or fall back to a known safe state and give a warning to the driver: "oi, I don't have lights"?
What you should do if you encounter an unhandled enum value of course depends on the situation. Sometimes it's perfectly legal to only handle some of the values.
If it's an error that you have an unhandled value, you should definitely throw an exception just like you do in the example (or handle the error in some other way). One should never swallow an error condition without producing an indication that there is something wrong.
A default case with just a break doesn't smell very good. I would remove that to indicate the switch doesn't handle all values, and perhaps add a comment explaining why.
Clear, obvious and the right way to go. If DrivingState needs to change you may need to refactor.
The problem with all the complicated polymorphic horrors above is they force the encapsulation into a class or demand additional classes - it's fine when there's just a DrivingState.Drive() method but the whole thing breaks as soon as you have a DrivingState.Serialize() method that serializes to somewhere dependent on DrivingState, or any other real-world condition.
enums and switches are made for each other.
I'm a C programmer, not C#, but when I have something like this, I have my compiler set to warn me if not all enum cases are handled in the switch. After setting that (and setting warnings-as-errors), I don't bother with runtime checks for things that can be caught at compile time.
Can this be done in C#?
I never use switch. The code similar to what you show was always a major pain point in most frameworks I used -- unextensible and fixed to a limited number of pre-defined cases.
This is a good example of what can be done with simple polymorphism in a nice, clean and extensible way. Just declare a base DrivingStrategy and inherit all versions of the driving logic from it. This is not over-engineering; if you had two cases it would be, but four already show a need for it, especially if each version of Do... calls other methods. At least that's my personal experience.
I do not agree with Jon Skeet solution that freezes a number of states, unless that is really necessary.
I think that using enum types and therefore switch statements for implementing State (also State Design Pattern) is not a particularly good idea. IMHO it is error-prone. As the State machine being implemented becomes complex the code will be progressively less readable by your fellow programmers.
Presently it is quite clean, but without knowing the exact intent of this enum it is hard to tell how it will develop with time.
Also, I'd like to ask you here - how many operations are going to be applicable to DrivingState along with Run()? If several and if you're going to basically replicate this switch statement a number of times, it would scream of questionable design, to say the least.
