I am currently evaluating Spec Explorer, but I am stuck with a problem concerning abstract specifications of function behaviour.
I have something like :
[TypeBinding("Implementation.ImplementationElement")]
public class ModelElement
{ /*... */ }
public class ModelBehaviour
{
[Rule]
public static void doSomething()
{
ModelElement sel = SelectElement(elements);
// ... do something with sel
}
private static Set<ModelElement> elements = new Set<ModelElement>();
}
Now I do not want to define SelectElement(Set<ModelElement> e) explicitly in the model program. I would prefer to specify it with a postcondition like elements.Contains(\result);. Is this possible somehow?
The problem with the explicit definition is that I would enforce a selection strategy.
I tried to avoid the problem in the following way (maybe I am just missing something small and someone could give me a hint to do it correctly):
Add a parameter ModelElement e to doSomething
Add condition Condition.IsTrue(elements.Contains(e)) to doSomething
Define an action in the config-script SelectElement
Define a machine SelectAndDo in the config-Script as follows:
machine SelectAndDo() : Main
{
let ImplementationElement e
Where {.Condition.IsTrue(e.Equals(SelectElement()));.}
in doSomething(e)
}
Use SelectAndDo instead of doSomething
However, this does not work, because the exploration of the corresponding model enters an error state.
If this does not work at all, is there a good alternative to Spec Explorer on Windows, preferably stable? Can FsCheck be recommended for testing stateful systems?
I figured out what the problem was.
The solution sketched above actually worked, but I returned null from SelectElement() when elements was empty, so the condition in the where-clause could never be fulfilled. So instead of returning null, I decided to return an "illegal" element, similar to a Null Object.
So my whole solutions looks something like this:
The machine:
machine Full() : Main
{
Init(); CreateElement(); CreateOtherElement(); CreateIllegal(); SelectAndDo* || ModelProgram
}
CreateIllegal() is, as far as I know, needed so that the condition in SelectAndDo can be fulfilled.
Besides that, I added checks for this illegal value in the model program.
EDIT:
There is actually a nicer, straightforward way using Choice.Some<T>, which I did not know about.
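For reference, a minimal sketch of what that might look like inside a rule, assuming the Choice class from the Microsoft.Modeling library (the exact overload of Choice.Some<T> may differ between Spec Explorer versions):

[Rule]
public static void doSomething()
{
    // let the exploration engine pick any element satisfying the predicate,
    // instead of hard-coding a selection strategy in the model program
    ModelElement sel = Choice.Some<ModelElement>(e => elements.Contains(e));
    // ... do something with sel
}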
I know the CallerMemberName attribute, which replaces a null parameter with, for example, the name of the property you are calling the method from.
This is very useful for things like PropertyChanged notifications. Currently we have a different scenario, where we would like a parameter attribute that replaces the null parameter with the name of the method you're calling.
Generally speaking, is it possible to do something like this?
To be honest, I haven't dealt much with custom attributes yet, but in our case it would be quite interesting to have something like this.
Is there any helpful information I can start with?
There is no such attribute, but you could use the C# 6 nameof operator:
public void SomeMethod ()
{
Console.WriteLine(nameof(SomeMethod));
}
Of course this does not dynamically and automatically insert the name of the method you are in, but requires you to have an actual reference to the method. However, it supports full IntelliSense and will also update automatically when you refactor the method name. And the name is inserted at compile time, so you don't get any performance downside.
If you wanted to place this code in a more central place, like you do with e.g. INPC implementations in base view models, then your idea is a bit flawed anyway. If you had a common method you call to figure out the method name you’re in, then it would always report the method name of the common method:
public void SomeMethod ()
{
Console.WriteLine(GetMethodName());
}
// assuming that a CallingMemberNameAttribute existed
public string GetMethodName([CallingMemberName] string callingMember = null)
{
return callingMember; // would be always "GetMethodName"
}
But instead, you could use the CallerMemberNameAttribute here again, which will then correctly get the method name calling the GetMethodName function:
public void SomeMethod ()
{
Console.WriteLine(GetMethodName());
}
public string GetMethodName([CallerMemberName] string callerMember = null)
{
return callerMember;
}
Apologies, but I couldn't think of a better way to describe this in the title. I'm also not a real developer, so please excuse me if I get my fields, variables, objects and methods confused.
Anyway, I have some C# code that declares a private variable for my class so that it's available throughout the entire class. Within the code, however, I actually decide what it gets instantiated as. When I jump into another method, some of the capabilities of the object/variable are not available because of the original declaration.
private NetPeer _peer; //initially declared here so it's visible in the entire class
....
public void Initialise()
{
if("some arbitrary validation")
{
_peer = new NetServer(_config); // This is now a NetServer object and not NetPeer, but works fine
}
else
{
_peer = new NetClient(_config); // This is now a NetClient object and not NetPeer, but works fine
}
_peer.Start(); // This works fine as NetClient or NetServer. The method is available to both
}
public void SendIt()
{
_peer.SendToAll(_message); //Now this "SendToAll" is only available with the NetServer and not NetClient. At runtime this fails miserably as you would expect
}
So is there a way to declare the private "_peer" variable without defining it as NetServer or NetClient till later on, or do I need to just revisit the rest of my code and run with two separate variables.
It's not overtly relevant to the issue, but I'm using the Lidgren library which is where NetServer and NetClient come from. I suppose it could easily be any other class or method being referenced here.
I've also removed a lot of other logic and code to show this example.
Edit: So I didn't realise asking a generic question would kick off such a battle. The code is working fine now and I've used the suggestion from Damien as that's the simplest for me to understand:
((NetServer)_peer).SendToAll(_message);
Thanks for everyone who offered positive help to my issue...
You can write something like:
public void SendIt()
{
    var server = _peer as NetServer;
    if (server == null) throw new InvalidOperationException("SendIt called when we're not the server");
    server.SendToAll(_message);
}
And similarly in other methods that will only work for one type or the other. I could have left out the if (server == null) line, but that would have produced a NullReferenceException instead - I prefer to switch to a more meaningful exception type and at the same time be able to provide a hint about what, specifically, went wrong.
Similarly, I could have done a direct cast ((NetServer)_peer).SendToAll(... but that would have thrown an InvalidCastException that would take more digging to diagnose the issue. I prefer to use as and try to offer as rich a diagnostic experience as possible.
You could use an interface and have both NetServer and NetClient used through it.
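Since NetServer and NetClient come from the Lidgren library, you can't make them implement your own interface directly, but a rough sketch of the idea is to hide them behind small wrappers (ISender and the wrapper classes below are made-up names for illustration; the actual Lidgren send calls are elided):

// hypothetical abstraction over the two Lidgren peer types
public interface ISender
{
    void Send(string message);
}

public class ServerSender : ISender
{
    private readonly NetServer _server;
    public ServerSender(NetServer server) { _server = server; }
    public void Send(string message) { /* _server.SendToAll(...) */ }
}

public class ClientSender : ISender
{
    private readonly NetClient _client;
    public ClientSender(NetClient client) { _client = client; }
    public void Send(string message) { /* _client.SendMessage(...) */ }
}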
You could add two wrapper properties for use inside your class:
private NetServer _peerServer
{
get { return _peer as NetServer; }
}
private NetClient _peerClient
{
get { return _peer as NetClient; }
}
Any time you use these two properties, you'd need to null check. For example:
public void SendIt()
{
if(_peerServer == null) throw new InvalidOperationException("_peer is not a NetServer");
_peerServer.SendToAll(_message);
}
This is not terribly elegant, though. What you should really do is a bit of refactoring. Splitting the class into separate server/client-dependent parts, perhaps with the common functionality in a base class, would be the cleanest option.
It depends on whether or not you want your client to also send a message when you hit SendIt().
If you do, you could do the following:
public void SendIt()
{
if (_peer is NetServer)
(_peer as NetServer).SendToAll(_message);
else
(_peer as NetClient).SendMessage(_message); //as radarbob says it is defined in the Lidgren library, I wouldn't know
}
If you don't, you could do this (or follow Damien's suggestion):
public void SendIt()
{
if (!(_peer is NetServer)) throw new InvalidOperationException("SendIt called when we're not the server");
(_peer as NetServer).SendToAll(_message);
}
NetServer and NetClient both inherit from NetPeer. SendToAll() is not defined in NetPeer, so you must use a SendMessage() method that is defined in NetPeer. That will work for any inheriting classes.
Edit
The question says it uses something called the Lidgren library. That is always the first place to look for an answer.
I'm trying to explain to my team why this is bad practice, and am looking for an anti-pattern reference to help in my explanation. This is a very large enterprise app, so here's a simple example to illustrate what was implemented:
public void ControlStuff()
{
var listOfThings = LoadThings();
var listOfThingsThatSupportX = new string[] {"ThingA","ThingB", "ThingC"};
foreach (var thing in listOfThings)
{
if(listOfThingsThatSupportX.Contains(thing.Name))
{
DoSomething();
}
}
}
I'm suggesting that we add a property to the 'Things' base class to tell us if it supports X, since the Thing subclass will need to implement the functionality in question. Something like this:
public void ControlStuff()
{
var listOfThings = LoadThings();
foreach (var thing in listOfThings)
{
if (thing.SupportsX)
{
DoSomething();
}
}
}
class ThingBase
{
public virtual bool SupportsX { get { return false; } }
}
class ThingA : ThingBase
{
public override bool SupportsX { get { return true; } }
}
class ThingB : ThingBase
{
}
So, it's pretty obvious why the first approach is bad practice, but what's this called? Also, is there a pattern better suited to this problem than the one I'm suggesting?
Normally a better approach (IMHO) would be to use interfaces instead of inheritance.
Then it is just a matter of checking whether the object implements the interface or not, for example:
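A minimal sketch of that (ISupportsX and DoX are illustrative names, not from the original code):

public interface ISupportsX
{
    void DoX();
}

public void ControlStuff()
{
    var listOfThings = LoadThings();
    foreach (var thing in listOfThings)
    {
        // the capability check replaces the hard-coded name list
        var supportsX = thing as ISupportsX;
        if (supportsX != null)
        {
            supportsX.DoX();
        }
    }
}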
I think the anti-pattern name is hard-coding :)
Whether there should be a ThingBase.supportsX depends at least somewhat on what X is. In rare cases that knowledge might be in ControlStuff() only.
More usually, though, X might be one of a set of things, in which case ThingBase might need to expose its capabilities using something like ThingBase.Supports(capability), as sketched below.
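A rough sketch of that capability-query idea (the Capability enum and Supports method are illustrative names):

public enum Capability { X, Y, Z }

class ThingBase
{
    // subclasses override this to advertise what they can do
    public virtual bool Supports(Capability capability) { return false; }
}

class ThingA : ThingBase
{
    public override bool Supports(Capability capability)
    {
        return capability == Capability.X;
    }
}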
IMO the fundamental design principle at play here is encapsulation. In your proposed solution you have encapsulated the logic inside the Thing class, whereas in the original code the logic leaks out into the callers.
It also violates the Open-Closed Principle: if you want to add new subclasses that support X, you now need to go and modify every place that contains that hard-coded list. With your solution you just add the new class, override the property, and you're done.
Don't know about a name (I doubt one exists), but think of each "Thing" as a car: some cars have a cruise control system and others do not.
Now you have a fleet of cars you manage and want to know which have cruise control.
Using the first approach is like finding a list of all car models that have cruise control, then going car by car and searching for each one in that list: if it's there, the car has cruise control; otherwise it doesn't. Cumbersome, right?
Using the second approach, each car that has cruise control comes with a sticker saying "I have cruise control", and you just have to look for that sticker, without relying on an external source for the information.
Not very technical explanation, but simple and to the point.
There is a perfectly reasonable situation where this coding practice makes sense. It might not be an issue of which things actually support X (where of course an interface on each thing would be better), but rather which of the things that support X you want to enable. The list you see is then simply configuration, presently hard-coded, and the improvement would be to eventually move it to a configuration file or similar. Before you persuade your team to change it, I would check this is not the intention of the code you have paraphrased.
The Writing Too Much Code Anti-Pattern. It makes the code harder to read and understand.
As has been pointed out already it would be better to use an interface.
Basically the programmers are not taking advantage of Object-Oriented Principles and instead doing things using procedural code. Every time we reach for the 'if' statement we should ask ourselves if we shouldn't be using an OO concept instead of writing more procedural code.
It is just bad code; there is no name for it (it doesn't even have an OO design). But the argument could be that the first snippet does not follow the Open-Closed Principle. What happens when the list of supported things changes? You have to rewrite the method you're using.
But the same thing happens with the second code snippet. Let's say the support rule changes; you'd have to go to each of the methods and rewrite them. I'd suggest having an abstract support class and passing in different support rules when they change.
I don't think it has a name, but maybe the master list at http://en.wikipedia.org/wiki/Anti-pattern knows? http://en.wikipedia.org/wiki/Hard_code probably looks the closest.
I think that your example probably doesn't have a name, whereas your proposed solution does: it is called Composite.
http://www.dofactory.com/Patterns/PatternComposite.aspx
Since you don't show what the code really is for, it's hard to give you a robust solution. Here is one that doesn't use any if clauses at all.
// invoked to map different kinds of items to different features
public void BootStrap()
{
featureService.Register(typeof(MyItem), new CustomFeature());
}
// your code without any ifs.
public void ControlStuff()
{
var listOfThings = LoadThings();
foreach (var thing in listOfThings)
{
thing.InvokeFeatures();
}
}
// your object
public interface IItem
{
    ICollection<IFeature> Features { get; set; }
    void InvokeFeatures();
}
public class Item : IItem
{
    public ICollection<IFeature> Features { get; set; }
    public void InvokeFeatures()
    {
        foreach (var feature in Features)
            feature.Invoke(this);
    }
}
// a feature that can be invoked on an item
public interface IFeature
{
    void Invoke(IItem container);
}
// the "glue"
public class FeatureService
{
    private readonly Dictionary<Type, IFeature> _features = new Dictionary<Type, IFeature>();

    public void Register(Type itemType, IFeature feature)
    {
        _features.Add(itemType, feature);
    }

    public void ApplyFeatures<T>(T item) where T : IItem
    {
        item.Features = new List<IFeature> { _features[typeof(T)] };
    }
}
I would call it a Failure to Encapsulate. It's a made-up term, but it is real and seen quite often.
A lot of people forget that encapsulation is not just the hiding of data within an object; it is also the hiding of behavior within that object, or more specifically, the hiding of how the behavior of an object is implemented.
By having an external DoSomething(), which is required for correct program operation, you create a lot of issues. You cannot reasonably use inheritance in your list of things. If you change the signature of the "thing", in this case the string, the behavior doesn't follow. You need to modify this external class to add its behaviour (invoking DoSomething()) back onto the derived thing.
I would offer the "improved" solution, which is to have a list of Thing objects with a method that implements DoSomething(), acting as a no-op for the things that do nothing, as sketched below. This localizes the behavior of the thing within itself, and the maintenance of a special matching list becomes unnecessary.
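A minimal sketch of that idea, assuming DoSomething can be moved onto the Thing hierarchy itself:

class ThingBase
{
    // default behaviour: do nothing
    public virtual void DoSomething() { }
}

class ThingA : ThingBase
{
    public override void DoSomething()
    {
        // the actual X-related work for things that support it
    }
}

public void ControlStuff()
{
    foreach (var thing in LoadThings())
    {
        thing.DoSomething(); // no lookup list, no type checks
    }
}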
If it were one string, I might call it a "magic string". In this case, I would consider "magic string array".
I don't know if there is a 'pattern' for writing code that is not maintainable or reusable. Why can't you just give them the reason?
To me, the best approach is to explain it in terms of computational complexity. Draw two charts showing the number of operations required in terms of count(listOfThingsThatSupportX) and count(listOfThings) and compare them with the solution you propose.
Instead of using interfaces, you could use attributes. They would probably describe that the object should be 'tagged' as this sort of object, even if tagging it as such doesn't introduce any additional functionality. I.e. an object being described as 'Thing A' doesn't mean that all 'Thing A's have a specific interface, it's just important that they are a 'Thing A'. That seems like the job of attributes more than interfaces.
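A rough sketch of that attribute-based tagging (SupportsXAttribute is a made-up marker attribute; note that the reflection check costs more than a virtual property or an interface test):

[AttributeUsage(AttributeTargets.Class)]
public class SupportsXAttribute : Attribute { }

[SupportsX]
class ThingA : ThingBase { }

public void ControlStuff()
{
    var listOfThings = LoadThings();
    foreach (var thing in listOfThings)
    {
        // look for the marker attribute instead of a hard-coded name list
        if (Attribute.IsDefined(thing.GetType(), typeof(SupportsXAttribute)))
        {
            DoSomething();
        }
    }
}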
I currently have
if A {
//code
return;
}
if B {
//code
return;
}
...
Is there a simple way to express this for a large number of conditions?
The goal in this case is validation of something that can fail in different ways, which all require different handling.
I then expect this block of code to be called again later, once whatever condition just failed has been resolved; the input will then slip through the earlier tests until it meets another condition and is rejected in a new and exciting way.
I was really just hoping for something on the level of a switch statement (in terms of simplicity and ease of use) but I guess that doesn't exist...
Possibly, but it's really hard to guide you without knowing more about the types of conditions you have and what the code in each one does... if there are similarities, you can probably find ways to abstract those out instead of repeating them (using 'switch' statements, delegates, etc). If these things are totally unrelated, it won't get any better than what you have shown - except to change latter 'if's to 'else if' and then put a single 'return' at the very end.
If your conditions are testing the same value for equality against other values, you can use a switch statement. Note that in C#, unlike C++ (and Java before Java 7), you can use a string as the switch value.
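For example, a small sketch of the switch form for a string-valued check (the case values are placeholders):

switch (input)
{
    case "optionA":
        // handle the A case
        return;
    case "optionB":
        // handle the B case
        return;
    default:
        // nothing matched
        return;
}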
You can use the ?: operator but only if your "code" can be expressed as a single expression and they all return the same type. So for example,
var user = conditionA ? expressionA :
           conditionB ? expressionB :
           conditionC ? expressionC :
                        defaultExpression;
It would probably help most though if you say what your actual problem is. It's possible a cleaner approach would be possible through polymorphism, array/dictionary lookups, etc.
You could use classes that inherit from an abstract base class ConditionalAction, which could look like this:
public abstract class ConditionalAction
{
public abstract bool Condition();
public abstract void Action();
}
A sample class that inherits ConditionalAction:
public class SampleConditionalAction : ConditionalAction
{
public override bool Condition()
{
        // evaluate and return the condition here
        return true;
}
public override void Action()
{
// Code
}
}
Sample implementation:
List<ConditionalAction> conditionalActions = new List<ConditionalAction>();
conditionalActions.Add(new SampleConditionalAction());
// Add more ConditionalActions...
foreach(ConditionalAction conditionalAction in conditionalActions)
{
if (conditionalAction.Condition())
conditionalAction.Action();
}
The main place you'd get stuck with this approach is if you need information for your conditions or your actions, but you can build that in by passing parameters to the constructors of your ConditionalActions.
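For example, a sketch of passing the data a condition needs through the constructor (the Order type and the threshold are purely illustrative):

public class Order
{
    public decimal Total { get; set; }
}

public class OrderTooLargeAction : ConditionalAction
{
    private readonly Order _order;

    public OrderTooLargeAction(Order order)
    {
        _order = order;
    }

    public override bool Condition()
    {
        return _order.Total > 1000m;
    }

    public override void Action()
    {
        // reject or flag the order
    }
}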
You can create a Dictionary<Func<bool>, Action>. The keys will be the conditions (each one a method returning bool) and the values will be the pieces of code to execute.
Then you can easily iterate through the keys whose condition is met and execute their values:
foreach (var pair in dictionary.Where(p => p.Key()))
{
pair.Value();
}
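Building on that, a sketch of how such a dictionary might be populated (the conditions and handler methods are placeholders):

var dictionary = new Dictionary<Func<bool>, Action>
{
    { () => input == null,      () => HandleMissingInput() },
    { () => input.Length > 100, () => HandleInputTooLong() },
};

One caveat: Dictionary does not guarantee enumeration order, so if the checks must run in a specific order, a List<KeyValuePair<Func<bool>, Action>> (or a list of tuples) is a safer container.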
It depends on what the conditionals are, what the code is doing, and the design of the component.
For example, if it's a configuration class that stores a load of settings then I'd have no problem with the above. Conversely if this defines or controls paths of execution in your application then that might suggest a design deficiency. Lots of conditionals or switch statements can be refactored using inheritance or dependency injection for example.
If you have a serious number of conditions and it isn't a config class I would think about your code at a design level, rather than a syntactic one.
What I am looking for is a way to call a method after another method has been invoked but before it is entered. Example:
public class Test {
public void Tracer ( ... )
{
}
public int SomeFunction( string str )
{
return 0;
}
public void TestFun()
{
SomeFunction( "" );
}
}
In the example above I would like to have Tracer() called after SomeFunction() has been invoked by TestFun() but before SomeFunction() is entered. I'd also like to get reflection data on SomeFunction().
I found something interesting in everyone's answers. The best answer to the question is to use Castle's DynamicProxy; however, this is not what I'm going to use to solve my problem, because it requires adding a library to my project. I have only a few methods that I need to "trace", so I've chosen to go with a modified "Core" methodology mixed with the way DynamicProxy is implemented. I explain this in my answer to my own question below.
Just as a note I'm going to be looking into AOP and the ContextBoundObject class for some other applications.
You can use a dynamic proxy (Castle's DynamicProxy for example) to intercept the call, run whatever code you wish, and then either invoke your method or not, depending on your needs.
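As a rough sketch of what that can look like with Castle DynamicProxy (the methods you want intercepted, such as SomeFunction, would need to be virtual; exact API details may vary by version):

using Castle.DynamicProxy;

public class TracingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // runs after the call is made but before the target method body executes
        Console.WriteLine("Entering " + invocation.Method.Name);
        invocation.Proceed(); // invoke the real method (or skip it if you prefer)
    }
}

// usage
var generator = new ProxyGenerator();
var proxy = generator.CreateClassProxy<Test>(new TracingInterceptor());
proxy.TestFun();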
Use a *Core method:
public int SomeFunction(string str)
{
Tracer();
return SomeFunctionCore(str);
}
private int SomeFunctionCore(string str)
{
return 0;
}
A number of the .NET APIs use this (lots do in WPF).
Use delegates!
delegate void SomeFunctionDelegate(string s);
void Start()
{
TraceAndThenCallMethod(SomeFunction, "hoho");
}
void SomeFunction(string str)
{
//Do stuff with str
}
void TraceAndThenCallMethod(SomeFunctionDelegate sfd, string parameter)
{
Trace();
sfd(parameter);
}
You want to look into Aspect Oriented Programming. Here's a page I found for AOP in .NET: http://www.postsharp.org/aop.net/
Aspect Oriented Programming involves separating out "crosscutting concerns" from code. One example of this is logging - logging exists (hopefully) across all of your code. Should these methods all really need to know about logging? Maybe not. AOP is the study of separating these concerns from the code they deal with, and injecting them back in, either at compile-time or run-time. The link I posted contains links to several tools that can be used for both compile-time and run-time AOP.
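For instance, with a compile-time AOP tool such as PostSharp, the tracing concern might be written once as an aspect and applied as an attribute. This is a rough sketch; the exact API depends on the PostSharp version:

using System;
using PostSharp.Aspects;

[Serializable]
public class TraceAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // woven in at compile time, runs before the decorated method body
        Console.WriteLine("Entering " + args.Method.Name);
    }
}

public class Test
{
    [TraceAspect]
    public int SomeFunction(string str)
    {
        return 0;
    }
}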
.NET has a class called ContextBoundObject that you can use to set up message sinks to do call interception; as long as you don't mind deriving from a base class, this will give you what you are looking for without taking a library dependency.
You would have to use some form of AOP framework like SpringFramework.NET to do that.
If you need to do this on a large scale (i.e. for every function in a program) and you don't want to hugely alter the source, you might look into using the .NET Profiling API. It's a little hairy to use, since you have to build free-threaded COM objects to do so, but it gives you an enormous amount of control over the execution of the program.
This is the solution I've chosen to solve my problem. Since there is no automatic (attribute-like) way to make this work, I feel it is the least obtrusive and allows the functionality to be turned on and off by choosing which class gets instantiated. Please note that this is not the best answer to my question, but it is the better answer for my particular situation.
What's going on is that we're simply deriving a second class that will sometimes or always be instantiated in place of its parent. The methods that we want to trace (or otherwise track) are declared virtual and overridden in the derived class to perform whatever actions we want to trace; the overrides then call the implementation in the parent class.
public class TestClass {
    public virtual int SomeFunction( string str )
    {
        return 0;
    }
    public void TestFun()
    {
        SomeFunction( "" );
    }
}
public class TestClassTracer : TestClass {
    public override int SomeFunction( string str )
    {
        // do something (e.g. tracing)
        return base.SomeFunction( str );
    }
}