Sounds pretty basic, but I did not find an existing answer. Sorry if it's a duplicate.
I have never used assertions much in the past and I may not have understood the spirit behind them yet. Is it recommended or standard practice to write something like the following in method Remove()?
using System.Collections.Generic;
using System.Diagnostics;

public class Model
{
    public Model ParentModel { get; private set; }

    readonly List<Model> submodels = new List<Model>();

    public Model AddSubmodel(Model m)
    {
        submodels.Add(m);
        m.ParentModel = this;
        return m;
    }

    public void RemoveSubmodel(Model m)
    {
        submodels.Remove(m);
        m.ParentModel = null;
    }

    public void Remove()
    {
        Debug.Assert(ParentModel != null);
        if (ParentModel != null) ParentModel.RemoveSubmodel(this);
    }

    // ...
}
The condition is something I am in control of (i.e. it doesn't depend on user interaction or something) and it determines the correctness of my code, so exceptions are out.
My rationale behind this is, I want it to fail in Debug mode, but I want to repair it as much as possible in Release mode.
Edit: as already mentioned in the comments,
The condition violation is not terribly fatal as such. If there is no parent, then other than the method call failing without the null check, nothing much will happen (in a release build).
But what is more important, it indicates that the client of the class Model is doing something illogical. I want to be pointed to the fact (at least during debug) that there is something in my client code that I obviously haven't thought about, because if I had, the condition would never have occurred.
Assertions are used when some really erroneous condition holds: something so wrong that an exception is not enough. They are usually used to detect logic errors in your code.
Whether or not to use one in your specific example depends. Are you 100% sure that ParentModel should be non-null, and that if it is null, something really unexpected and wrong has happened and you must stop the program? If yes, then use Debug.Assert. Otherwise, do a null check and, if it is null, handle it in some other way than terminating the program.
Also note that the assert won't work in Release mode. To assert in Release mode, use Trace.Assert.
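For instance, applied to the Remove() method from the question (a minimal sketch; the message texts are mine):

using System.Diagnostics;

public void Remove()
{
    // Compiled away entirely in Release builds (DEBUG symbol not defined):
    Debug.Assert(ParentModel != null, "Remove() called on a model with no parent");

    // Still active in Release builds, as long as the TRACE symbol is
    // defined (it is by default in both build configurations):
    Trace.Assert(ParentModel != null, "Remove() called on a model with no parent");

    if (ParentModel != null) ParentModel.RemoveSubmodel(this);
}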
Learn more in the documentation for Debug.Assert and Trace.Assert.
So, I have a piece of code using WeakReferences. I know of the common .IsAlive race condition, so I didn't use that. I basically have something like this:
WeakReference lastString = new WeakReference(null);

public override string ToString()
{
    if (lastString != null)
    {
        var s = lastString.Target as string; // error here
        if (s != null)
        {
            return s;
        }
    }
    var str = base.ToString();
    lastString = new WeakReference(str);
    return str;
}
Somehow, I'm getting a null reference exception at the marked line. By debugging it, I can confirm that lastString is indeed null, despite being wrapped in a null check and lastString never actually being set to null.
This only happens in a complex flow as well, which makes me think garbage collection is somehow taking my actual WeakReference object, and not just its target.
Can someone enlighten me as to how this is happening and what the best course of action is?
EDIT:
I can't determine the cause of this at all. I ended up wrapping the offending code in a try-catch just to fix it for now. I'm very interested in the root cause, though. I've been trying to reproduce this in a simple test case, but it's proven very difficult to do. Also, this only appears to happen when running under a unit test runner. If I take the code and trim it down to the minimum, it will continue to crash when run using TestDriven and Gallio, but will not fail when put into a console application.
This ended up being a very hard to spot logic bug that was in plain sight.
The offending if statement really was more like this:
if (lastString != null && limiter == null || limiter == lastLimiter)
The true grouping of this is more like this:
if ((lastString != null && limiter == null) || limiter == lastLimiter)
And as Murphy's law would dictate, somehow, in this one unrelated test case, lastLimiter and lastString got set to null by a method used nowhere but in this one single test case.
So yeah: no bug in the CLR, just my own logic bug that was very hard to spot.
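For reference, the precedence at work (the intended grouping is my reading of the bug report):

// && binds tighter than ||, so
//     lastString != null && limiter == null || limiter == lastLimiter
// parses as
//     (lastString != null && limiter == null) || limiter == lastLimiter
// while the intent was presumably
//     lastString != null && (limiter == null || limiter == lastLimiter)
// which never enters the body with a null lastString.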
I am trying to find the right way to write code regarding checking for invalid values. An invalid value, in my case, would be null. The thing with other questions on SO is that they cover specific circumstances, and I am interested in a more general solution.
I have a code like this:
public class SomeClass
{
    private readonly object m_internallyUsedObject;
    private ThirdPartyObject m_user; // Third-party object.

    public SomeClass(object internallyUsedObject)
    {
        // We just want to ensure that the object will remain the same
        // throughout the lifetime of the SomeClass object.
        m_internallyUsedObject = internallyUsedObject;
        m_user = new ThirdPartyObject(); // This object is not yet needed here.
    }

    public void DoSomething()
    {
        // Now we're using it, and we are not sure whether a null value is tolerated.
        m_user.DoSomethingElse(m_internallyUsedObject);
    }
}
Since we take the internallyUsedObject in the constructor, we probably know the semantics of this object and how it should be used. On the other hand, we just relay this object to a third-party object during calls.
Our SomeClass object will work just fine regardless of whether the value is null or not.
Now, the problem is that we do not know whether null will always work for the ThirdPartyObject - it might work in one version (in which case it's OK to omit the null check) and not in another.
One could say that we should not bother checking for null if our class can handle it. But when I write the code documentation, I would like to tell the user the behavior and expectations of our own class.
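For instance, the contract could be spelled out in the XML documentation (a hypothetical sketch; the wording is mine):

/// <summary>
/// Creates the object. The supplied instance is stored and later relayed
/// to ThirdPartyObject.DoSomethingElse.
/// </summary>
/// <param name="internallyUsedObject">
/// May be null as far as SomeClass is concerned; whether ThirdPartyObject
/// tolerates null is version-dependent and not guaranteed here.
/// </param>
public SomeClass(object internallyUsedObject) { /* ... */ }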
As I mentioned above, this simple check might be useless or even invalid for particular versions of the third-party object:
if (internallyUsedObject == null)
{
    throw new ArgumentNullException("internallyUsedObject");
}
Is it valid according to OOP to take the internallyUsedObject in the constructor, as in the code above, when we are not going to use it directly there? Doesn't it violate the "fail fast" principle, since we might just be deferring the problem to a later stage of the object's lifetime?
As a rule I've tended to use the method you've described in the second code block, but in combination with sensible groupings of checks. By this I mean that certain checks for particular methods should be performed, and actions taken, before any logic is performed, e.g.
public void MyMethod(MyObject input)
{
    // 1. perhaps security checks, if in-method security is required
    if (security.AccessLevel < requiredLevel)
        throw new CustomSecurityException("Insufficient Access");

    // 2. data checks
    if (input.Property == null)
        throw new ArgumentNullException("input", "Input didn't contain the value I wanted");
    if (input.Property2 < someImportantLevel)
        throw new CustomBLException("Input didn't meet required level for something");

    // 3. perform BL
    // ...do whatever
}
It's probably not ideal, and often things like method attributes could be used for the security side of things, but in quite a few of our apps this has seemed like a sensible grouping of checks. You know that all the checks are done up front, rather than in lengthy blocks like this:
if (input != null)
{
    // do logic
}
else
{
    throw new Exception();
}
Where the checks could be hidden or nested and harder to locate.
I'm thinking of building some generic extensions that will take away all these null checks, throws and asserts, and instead use fluent APIs to handle this.
So I'm thinking of doing something like this.
Shall() - Not quite sure about this one yet
.Test(...) - Determines whether the contained logic executed without any errors
.Guard(...) - Guards the contained logic from throwing any exception
.Assert(...) - Asserts before the execution of the code
.Throw(...) - Throws an exception based on a certain condition
.Assume(...) - Similar to assert but calls to Contract.Assume
Usage: father.Shall().Guard(f => f.Shop())
The thing is that I don't want these extra calls at run-time. I know AOP can solve this for me, but I want to inline these calls directly into the caller; if you have a better way of doing that, please do tell.
Now, before I research or do anything, I wonder whether someone has already done this, or knows of a tool that does it?
I really want to build something like this and release it to the public, because I think it can save a lot of time and headaches.
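One compiler feature that gets part of the way there, for what it's worth: calls to methods marked [Conditional("DEBUG")] are removed entirely by the C# compiler when the symbol isn't defined, so they cost nothing at run-time in Release builds. The catch is that such methods must return void, so they can't carry a fluent chain. A minimal sketch (the Guard class and its members are hypothetical):

using System;
using System.Diagnostics;

static class Guard
{
    // Call sites vanish from builds that don't define DEBUG,
    // so there is zero run-time cost in Release. Must return void,
    // which is why this can't be chained fluently.
    [Conditional("DEBUG")]
    public static void NotNull(object value, string name)
    {
        if (value == null)
            throw new ArgumentNullException(name);
    }
}

// Usage:
// Guard.NotNull(father, "father");
// father.Shop();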
Some examples.
DbSet<TEntity> set = Set<TEntity>();

if (set != null)
{
    if (Contains(entity))
    {
        set.Remove(entity);
    }
    else
    {
        set.Attach(entity);
        set.Remove(entity);
    }
}
Changes to the following.
Set<TEntity>().Shall().Guard(set =>
{
    if (Contains(entity))
    {
        set.Remove(entity);
    }
    else
    {
        set.Attach(entity);
        set.Remove(entity);
    }
});
Instead of trying to be funny and making fun of other people, some people could really learn something about maturity. Share your experience and tell me what's good or bad about this idea; that I'll accept.
I'm not trying to recreate Code Contracts; I know what it is, I'm using it every day. I'm trying to move the boilerplate code that gets written everywhere into one place.
Sometimes you have methods where, for each call, you have to check the returned object. It's not your code, so you can't ensure that the callee won't return null, which means the caller has to perform null checks on the returned object. So I thought of something that might allow me to perform these checks easily when chaining calls.
Update: I'll have to think about it some more and change the API to make the intentions clear and the code more readable.
I think that the idea is not polished at all and that indeed I went too far with all these methods.
Anyways, I'll leave it for now.
It sounds like you're describing something like Code Contracts: http://msdn.microsoft.com/en-us/devlabs/dd491992
If I understand what you're looking for, then the closest thing I've come up with is the extension method:
public static T Chain<T>(this T obj, Action<T> act)
{
    act(obj);
    return obj;
}
This allows you to do the following:
Set.Remove(Set.FirstOrDefault(e => e.Equals(entity)) ?? entity.Chain(a => Set.Add(a)));
As you can see, though, this isn't the most readable code. This isn't to say that the Chain extension method is bad (it certainly has its uses), but it can definitely be abused, so use it cautiously, or the ghost of programming past will come back to haunt you.
Look at the code snippet below.
This is what I normally do when coding against an enum: I add a default escape that throws an InvalidOperationException (I do not use ArgumentException or one of its derived types, because the check is against a private instance field, not an incoming parameter).
I was wondering if you fellow developers also code with this escape in mind....
using System;
using System.Globalization;

public enum DrivingState { Neutral, Drive, Parking, Reverse };

public class MyHelper
{
    private DrivingState drivingState = DrivingState.Neutral;

    public void Run()
    {
        switch (this.drivingState)
        {
            case DrivingState.Neutral:
                DoNeutral();
                break;
            case DrivingState.Drive:
                DoDrive();
                break;
            case DrivingState.Parking:
                DoPark();
                break;
            case DrivingState.Reverse:
                DoReverse();
                break;
            default:
                throw new InvalidOperationException(
                    string.Format(CultureInfo.CurrentCulture,
                        "Drivestate {0} is an unknown state", this.drivingState));
        }
    }
}
In code reviews I encounter many implementations with only a break statement in the default case. That could become an issue over time....
Your question was kinda vague, but as I understand it, you are asking us if your coding style is good. I usually judge coding style by how readable it is.
I read the code once and I understood it. So, in my humble opinion, your code is an example of good coding style.
There's an alternative to this, which is to use something similar to Java's enums. Private nested types allow for a "stricter" enum where the only "invalid" value available at compile-time is null. Here's an example:
using System;

public abstract class DrivingState
{
    public static readonly DrivingState Neutral = new NeutralState();
    public static readonly DrivingState Drive = new DriveState();
    public static readonly DrivingState Parking = new ParkingState();
    public static readonly DrivingState Reverse = new ReverseState();

    // Only nested classes can derive from this
    private DrivingState() {}

    public abstract void Go();

    private class NeutralState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Not going anywhere...");
        }
    }

    private class DriveState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Cruising...");
        }
    }

    private class ParkingState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Can't drive with the handbrake on...");
        }
    }

    private class ReverseState : DrivingState
    {
        public override void Go()
        {
            Console.WriteLine("Watch out behind me!");
        }
    }
}
I don't like this approach because the default case is untestable. This leads to reduced coverage in your unit tests, which, while not necessarily the end of the world, annoys the obsessive-compulsive in me.
I would prefer to simply unit test each case and have an additional assertion that there are only four possible cases. If anyone ever added new enum values, a unit test would break.
Something like
[Test]
public void ShouldOnlyHaveFourStates()
{
    Assert.That(Enum.GetValues(typeof(DrivingState)).Length == 4,
        "Update unit tests for your new DrivingState!!!");
}
That looks pretty reasonable to me. There are some other options, like a Dictionary<DrivingState, Action>, but what you have is simpler and should suffice for most simple cases. Always prefer simple and readable ;-p
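For illustration, a minimal sketch of that dictionary-based dispatch (the Do... methods are stand-ins for the ones in the question):

using System;
using System.Collections.Generic;

public class MyHelper
{
    private DrivingState drivingState = DrivingState.Neutral;

    // One entry per enum value. A state that was added to the enum but
    // not registered here fails loudly with a KeyNotFoundException.
    private readonly Dictionary<DrivingState, Action> handlers;

    public MyHelper()
    {
        handlers = new Dictionary<DrivingState, Action>
        {
            { DrivingState.Neutral, DoNeutral },
            { DrivingState.Drive,   DoDrive   },
            { DrivingState.Parking, DoPark    },
            { DrivingState.Reverse, DoReverse },
        };
    }

    public void Run()
    {
        handlers[this.drivingState]();
    }

    private void DoNeutral() { /* ... */ }
    private void DoDrive()   { /* ... */ }
    private void DoPark()    { /* ... */ }
    private void DoReverse() { /* ... */ }
}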
This is probably going off topic, but maybe not. The reason the check has to be there is in case the design evolves and you have to add a new state to the enum.
So maybe you shouldn't be working this way in the first place. How about:
interface IDrivingState
{
    void Do();
}
Store the current state (an object that implements IDrivingState) in a variable, and then execute it like this:
drivingState.Do();
Presumably you'd have some way for a state to transition to another state - perhaps Do would return the new state.
Now you can extend the design without invalidating all your existing code quite so much.
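A sketch of that idea, assuming Do returns the successor state (the names are illustrative):

interface IDrivingState
{
    // Performs this state's behavior and returns the state to use next.
    IDrivingState Do();
}

class NeutralState : IDrivingState
{
    public IDrivingState Do()
    {
        // ... neutral behavior ...
        return this; // stay in neutral until something shifts gear
    }
}

// The caller just keeps replacing its current state:
// drivingState = drivingState.Do();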
Update in response to comment:
With the use of enum/switch, when you add a new enum value, you now need to find each place in your code where that enum value is not yet handled. The compiler doesn't know how to help with that. There is still a "contract" between various parts of the code, but it is implicit and impossible for the compiler to check.
The advantage of the polymorphic approach is that design changes will initially cause compiler errors. Compiler errors are good! The compiler effectively gives you a checklist of places in the code you need to modify to cope with the design change. By designing your code that way, you gain the assistance of a powerful "search engine" that understands your code and helps you evolve it by finding problems at compile time, instead of leaving them until runtime.
I would use the NotSupportedException.
The NotImplementedException is for features not implemented, but the default case is implemented. You just chose not to support it. I would only recommend throwing the NotImplementedException during development for stub methods.
I would suggest using either NotImplementedException or, better, a custom DrivingStateNotImplementedException, if you like to throw exceptions.
Me, I would use a default driving state for the default case (like neutral/stop) and log the missing driving state (because it's you that missed the driving state, not the customer).
It's like a real car: the CPU notices it failed to turn on the lights. What does it do, throw an exception and "break" all control, or fall back to a known state which is safe and give the driver a warning: "oi, I don't have lights"?
What you should do if you encounter an unhandled enum value of course depends on the situation. Sometimes it's perfectly legal to only handle some of the values.
If it's an error to have an unhandled value, you should definitely throw an exception, just like you do in the example (or handle the error in some other way). One should never swallow an error condition without producing an indication that something is wrong.
A default case with just a break doesn't smell very good. I would remove it to indicate that the switch doesn't handle all values, and perhaps add a comment explaining why.
Clear, obvious and the right way to go. If DrivingState needs to change you may need to refactor.
The problem with all the complicated polymorphic horrors above is they force the encapsulation into a class or demand additional classes - it's fine when there's just a DrivingState.Drive() method but the whole thing breaks as soon as you have a DrivingState.Serialize() method that serializes to somewhere dependent on DrivingState, or any other real-world condition.
enums and switches are made for each other.
I'm a C programmer, not C#, but when I have something like this, I have my compiler set to warn me if not all enum cases are handled in the switch. After setting that (and setting warnings-as-errors), I don't bother with runtime checks for things that can be caught at compile time.
Can this be done in C#?
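As far as I know, the C# compiler does not warn about non-exhaustive switch statements over enums, though analyzers can enforce it. C# 8 switch expressions do warn, however:

// C# 8+ switch expressions produce warning CS8509 when not all enum
// values are handled:
string Describe(DrivingState state) => state switch
{
    DrivingState.Neutral => "neutral",
    DrivingState.Drive   => "drive",
    // CS8509: Parking and Reverse are missing; the expression also
    // throws SwitchExpressionException at run-time if they occur.
};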
I never use switch. Code similar to what you show was always a major pain point in most frameworks I used: inextensible and fixed to a limited number of pre-defined cases.
This is a good example of what can be done with simple polymorphism in a nice, clean and extensible way. Just declare a base DrivingStrategy and inherit all versions of the driving logic from it. This is not over-engineering: if you had two cases it would be, but four already show a need for it, especially if each version of Do... calls other methods. At least that's my personal experience.
I do not agree with Jon Skeet's solution that freezes the number of states, unless that is really necessary.
I think that using enum types, and therefore switch statements, for implementing State (also the State design pattern) is not a particularly good idea. IMHO it is error-prone. As the state machine being implemented becomes complex, the code will become progressively less readable to your fellow programmers.
Presently it is quite clean, but without knowing the exact intent of this enum it is hard to tell how it will develop with time.
Also, I'd like to ask you here: how many operations are going to be applicable to DrivingState along with Run()? If several, and if you're going to basically replicate this switch statement a number of times, that would scream of questionable design, to say the least.
I'm interested in hearing what technique(s) you're using to validate the internal state of an object during an operation that, from its own point of view, can only fail because of bad internal state or an invariant breach.
My primary focus is on C++, since in C# the official and prevalent way is to throw an exception, and in C++ there's not just one single way to do this (ok, not really in C# either, I know that).
Note that I'm not talking about function parameter validation, but more like class invariant integrity checks.
For instance, let's say we want a Printer object to queue a print job asynchronously. To the user of Printer, that operation can only succeed, because the asynchronous queuing result will arrive at another time. So, there's no relevant error code to convey to the caller.
But to the Printer object, this operation can fail if the internal state is bad, i.e., the class invariant is broken, which basically means: a bug. This condition is not necessarily of any interest to the user of the Printer object.
Personally, I tend to mix three styles of internal state validation and I can't really decide which one's the best, if any, only which one is absolutely the worst. I'd like to hear your views on these and also that you share any of your own experiences and thoughts on this matter.
The first style I use - better to fail in a controlled way than to corrupt data:
void Printer::Queue(const PrintJob& job)
{
    // Validate the state in both release and debug builds.
    // Never proceed with the queuing in a bad state.
    if (!IsValidState())
    {
        throw InvalidOperationException();
    }

    // Continue with queuing, parameter checking, etc.
    // Internal state is guaranteed to be good.
}
The second style I use - better to crash uncontrollably than to corrupt data:
void Printer::Queue(const PrintJob& job)
{
    // Validate the state in debug builds only.
    // Break into the debugger in debug builds.
    // Always proceed with the queuing, even in a bad state.
    DebugAssert(IsValidState());

    // Continue with queuing, parameter checking, etc.
    // Generally, behavior is now undefined because of the bad internal
    // state. But, specifically, this often means an access violation when
    // a NULL pointer is dereferenced, or something similar, and that crash
    // will generate a dump file that can be used to find the cause during
    // testing, before shipping the product.
}
The third style I use - better to bail out silently and defensively than to corrupt data:
void Printer::Queue(const PrintJob& job)
{
    // Validate the state in both release and debug builds.
    // Break into the debugger in debug builds.
    // Never proceed with the queuing in a bad state.
    // This object will likely never again succeed in queuing anything.
    if (!IsValidState())
    {
        DebugBreak();
        return;
    }

    // Continue with queuing, parameter checking, etc.
    // Internal state is guaranteed to be good.
}
My comments to the styles:
I think I prefer the second style, where the failure isn't hidden, provided that an access violation actually causes a crash.
If it's not a NULL pointer involved in the invariant, then I tend to lean towards the first style.
I really dislike the third style, since it will hide lots of bugs, but I know people who prefer it in production code because it creates the illusion of robust software that doesn't crash (features will just stop functioning, as with the queuing on the broken Printer object).
Do you prefer any of these or do you have other ways of achieving this?
You can use a technique called NVI (Non-Virtual Interface) together with the template method pattern. This is probably how I would do it (of course, it's only my personal opinion, which is indeed debatable):
class Printer {
public:
    // checks invariant, and calls the actual queuing
    void Queue(const PrintJob&);

private:
    virtual void DoQueue(const PrintJob&);
};
void Printer::Queue(const PrintJob& job) // not virtual
{
    // Validate the state in both release and debug builds.
    // Never proceed with the queuing in a bad state.
    if (!IsValidState()) {
        throw std::logic_error("Printer not ready");
    }

    // call virtual method DoQueue which does the job
    DoQueue(job);
}
void Printer::DoQueue(const PrintJob& job) // virtual
{
    // Do the actual queuing. State is guaranteed to be valid.
}
Because Queue is non-virtual, the invariant is still checked if a derived class overrides DoQueue for special handling.
To your options: I think it depends on the condition you want to check.
If it is an internal invariant: it should not be possible for a user of your class to violate it. The class should take care of its invariant itself. Therefore, I would assert(CheckInvariant()); in such a case.
If it's merely a pre-condition of a method, one that the user of the class has to guarantee (say, only printing after the printer is ready), I would throw std::logic_error as shown above.
I would really discourage checking a condition and then doing nothing.
The user of the class could itself assert, before calling a method, that the method's pre-conditions are satisfied. So generally: if a class is responsible for some state and finds that state to be invalid, it should assert. If the class finds a condition violated that doesn't fall within its responsibility, it should throw.
The question is best considered in combination with how you test your software.
It's important that hitting a broken invariant during testing is filed as a high severity bug, just as a crash would be. Builds for testing during development can be made to stop dead and output diagnostics.
It can be appropriate to add defensive code, rather like your style 3: your DebugBreak would dump diagnostics in test builds, but just be a breakpoint for developers. This makes it less likely that a developer is prevented from working by a bug in unrelated code.
Sadly, I've often seen it done the other way round, where developers get all the inconvenience, but test builds sail through broken invariants. Lots of strange behaviour bugs get filed, where in fact a single bug is the cause.
It's a fine and very relevant question. IMHO, any application architecture should provide a strategy to report broken invariants. One can decide to use exceptions, to use an 'error registry' object, or to explicitly check the result of any action. Maybe there are even other strategies - that's not the point.
Depending on a possibly loud crash is a bad idea: you cannot guarantee the application is going to crash if you don't know the cause of the invariant breach. In case it doesn't, you still have corrupt data.
The NonVirtual Interface solution from litb is a neat way to check invariants.
Tough question this one :)
Personally, I tend to just throw an exception, since I'm usually too deep into what I'm implementing to take care of what should be taken care of by the design. Usually this comes back and bites me later on...
My personal experience with the "do-some-logging-and-then-don't-do-anything-more" strategy is that it, too, comes back to bite you, especially if it's implemented as in your case (no global strategy; every class could potentially do it a different way).
What I would do, as soon as I discover a problem like this, is speak to the rest of my team and tell them that we need some kind of global error handling. What the handling does depends on your product (you don't want to just do nothing and log something to an obscure developer-oriented file in an air traffic control system, but it would work fine if you were writing a driver for, say, a printer :) ).
I guess what I'm saying is that, IMHO, this question is something you should resolve at the design level of your application rather than at the implementation level. And sadly, there are no magic solutions :(