Is it possible to get run-time information about where a method has returned?
I mean, whether the method returned after running all of its lines of code, or because of an earlier return statement triggered by some condition.
The scenario: I am using an interceptor to create a UnitOfWork that should exist in method scope.
For example, let's consider this code:
[UnitOfWork]
public void Foo()
{
// insert some values to the database, using repositories interfaces...
DoSomeChangesInTheDataBaseUsingRepositories();
var result = DoSomethingElse();
if (!result) return;
DoMoreLogicBecauseTheResultWasTrue();
}
I have an interceptor class that opens a thread-static unit of work for methods flagged with [UnitOfWork]; when the method's scope ends, it commits the UoW and disposes it.
This is fine, but consider the scenario above, where for some reason a programmer decided to return in the middle of the method due to some condition, and in that scenario the changes made through the repositories should not be persisted.
I know this can indicate a badly designed method, but be aware that it is a scenario a programmer could plausibly write, and I want to defend my database from these kinds of scenarios.
Also, I don't want to add code to the method itself that reports where it ended. I want to infer the return point somehow from the method info, and if it is not at the end of the method's scope, the interceptor will know not to commit.
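For reference, here is a minimal sketch of the kind of interceptor described above. It assumes Castle DynamicProxy's IInterceptor plus a hypothetical IUnitOfWork and UnitOfWorkAttribute (neither is from the original post), and it only shows the baseline mechanism; as written, it also illustrates the problem, because the interceptor only sees that the method returned, not where.
using System;
using System.Reflection;
using Castle.DynamicProxy;

// Hypothetical marker attribute and unit-of-work abstraction.
[AttributeUsage(AttributeTargets.Method)]
public class UnitOfWorkAttribute : Attribute { }

public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class UnitOfWorkInterceptor : IInterceptor
{
    private readonly Func<IUnitOfWork> _unitOfWorkFactory;

    public UnitOfWorkInterceptor(Func<IUnitOfWork> unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void Intercept(IInvocation invocation)
    {
        // Only wrap methods flagged with [UnitOfWork].
        if (invocation.MethodInvocationTarget.GetCustomAttribute<UnitOfWorkAttribute>() == null)
        {
            invocation.Proceed();
            return;
        }

        using (var uow = _unitOfWorkFactory())
        {
            invocation.Proceed();
            // At this point the interceptor only knows that the method returned,
            // not whether it returned early; that is exactly the question being asked.
            uow.Commit();
        }
    }
}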
The simple answer is to use breakpoints and debugging.
Edit: As mentioned by Mels in the comments, this could be a useful suggestion:
If your application is very timing-sensitive, set conditional breakpoints such that they never actually stop the flow of execution. They do keep track of the hit count, which you can use to backtrace the flow of execution.
Just for your attention, from the Microsoft site:
For those out there who have experience debugging native C++ or VB6
code, you may have used a feature where function return values are
provided for you in the Autos window. Unfortunately, this
functionality does not exist for managed code. While you can work
around this issue by assigning the return values to a local variable,
this is not as convenient because it requires modifying your code. In
managed code, it's a lot trickier to determine what the return value
of a function you've stepped over is. We realized that we couldn't do
the right thing consistently here and so we removed the feature rather
than give you incorrect results in the debugger. However, we want to
bring this back for you, and our CLR and Debugger teams are looking at
a number of potential solutions to this problem. Unfortunately this
will not be part of Visual Studio 11.
There are a couple of ways that normally indicate a method exited early for some reason. One is to use the actual return value: if the value is a valid result, then your method probably finished correctly; if it's another value, then probably not. This is the pattern that most TryXXX methods follow:
int i;
// returns false because the string couldn't be parsed
bool passed = int.TryParse("woo", out i);
the other is to catch/throw an exception; if an exception is caught, then the method did not complete as you'd expect:
try
{
Method();
}
catch(Exception e)
{
//Something went wrong (e.StackTrace)
}
Note: Catching Exception is a bad idea; the specific exceptions you expect should be caught instead, e.g. NullReferenceException
EDIT:
In answer to your update: if your code depends on the success of your method, you should change the return type to a boolean (or something similar) and return false when unsuccessful.
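For example, a minimal sketch of the original Foo rewritten this way (the helper methods are the ones from the question and are assumed to be in scope):
public bool Foo()
{
    DoSomeChangesInTheDataBaseUsingRepositories();
    var result = DoSomethingElse();
    if (!result) return false;   // early exit: the caller knows not to commit
    DoMoreLogicBecauseTheResultWasTrue();
    return true;                 // the method ran to its end
}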
Generally, you should use trace logs to watch your code flow if you can't debug it.
You could always do something like this:
private Tuple<int, MyClass> MyMethod()
{
    if (firstCondition)
    {
        return new Tuple<int, MyClass>(0, new MyClass());
    }
    else if (secondCondition)
    {
        return new Tuple<int, MyClass>(1, new MyClass());
    }
    return new Tuple<int, MyClass>(2, new MyClass());
}
This way you'll have an index telling you which return statement produced your MyClass object. It all depends on what you are trying to accomplish and why, which is at best unclear. As someone else mentioned, that is what return values are for.
I am curious to know what you are trying to do...
Related
I'm thinking of building some generic extensions that will take away all these null checks, throw checks and asserts, and instead use fluent APIs to handle them.
So I'm thinking of doing something like this.
Shall() - Not quite sure about this one yet
.Test(...) - Determines whether the contained logic executed without any errors
.Guard(...) - Guards the contained logic from throwing any exception
.Assert(...) - Asserts before the execution of the code
.Throw(...) - Throws an exception based on a certain condition
.Assume(...) - Similar to assert but calls to Contract.Assume
Usage: father.Shall().Guard(f => f.Shop())
The thing is that I don't want these extra calls at run-time, and I know AOP can solve this for me; I want to inline these calls directly into the caller. If you have a better way of doing that, please do tell.
Now, before I research or build anything, I wonder whether someone has already done this, or knows of a tool that does it?
I really want to build something like that and release it to the public, because I think it can save a lot of time and headache.
Some examples.
DbSet<TEntity> set = Set<TEntity>();
if (set != null)
{
if (Contains(entity))
{
set.Remove(entity);
}
else
{
set.Attach(entity);
set.Remove(entity);
}
}
Changes to the following.
Set<TEntity>().Shall().Guard(set =>
{
if (Contains(entity))
{
set.Remove(entity);
}
else
{
set.Attach(entity);
set.Remove(entity);
}
});
Instead of being funny and trying to make fun of other people, some people could really learn something about maturity. Share your experience and tell me what's good or bad about this, and I'll accept that.
I'm not trying to recreate Code Contracts; I know what it is, I use it every day. I'm trying to move the boilerplate code that gets written into one place.
Sometimes you have methods where you have to check the returned object on every call. It's not your code, so you can't ensure the callee won't return null, which means the caller has to perform null checks on the returned object. So I thought of something that may allow me to perform these checks easily when chaining calls.
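To make the intent concrete, here is a rough sketch of the chaining idea. The Shall/Guard names follow the post, but the wrapper type and its behaviour (skip on null, swallow exceptions) are my assumptions, not a finished design:
using System;

public class ShallWrapper<T> where T : class
{
    private readonly T _target;

    internal ShallWrapper(T target)
    {
        _target = target;
    }

    // Runs the contained logic only if the target is non-null,
    // and guards it from throwing any exception.
    public ShallWrapper<T> Guard(Action<T> action)
    {
        if (_target == null) return this;
        try { action(_target); } catch { /* intentionally suppressed */ }
        return this;
    }
}

public static class ShallExtensions
{
    public static ShallWrapper<T> Shall<T>(this T target) where T : class
    {
        return new ShallWrapper<T>(target);
    }
}

// Usage, mirroring the example above:
// Set<TEntity>().Shall().Guard(set => { ... });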
Update: I'll have to think about it some more and change the API to make the intentions clear and the code more readable.
I think that the idea is not polished at all and that indeed I went too far with all these methods.
Anyways, I'll leave it for now.
It sounds like you're describing something like Code Contracts: http://msdn.microsoft.com/en-us/devlabs/dd491992
If I understand what you're looking for, then the closest thing I've come up with is the extension method:
public static T Chain<T>(this T obj, Action<T> act)
{
act(obj);
return obj;
}
This allows you to do the following:
Set.Remove(Set.FirstOrDefault(entity) ?? entity.Chain(a => Set.Add(a)));
As you can see, though, this isn't the most readable code. That isn't to say the Chain extension method is bad (it certainly has its uses), but it can definitely be abused, so use it cautiously or the ghost of programming past will come back to haunt you.
I have a piece of software written with fluent syntax. The method chain has a definitive "ending", before which nothing useful is actually done in the code (think NBuilder, or Linq-to-SQL's query generation not actually hitting the database until we iterate over our objects with, say, ToList()).
The problem I am having is there is confusion among other developers about proper usage of the code. They are neglecting to call the "ending" method (thus never actually "doing anything")!
I am interested in enforcing the usage of the return value of some of my methods so that we can never "end the chain" without calling that "Finalize()" or "Save()" method that actually does the work.
Consider the following code:
//The "factory" class the user will be dealing with
public class FluentClass
{
//The entry point for this software
public IntermediateClass<T> Init<T>()
{
return new IntermediateClass<T>();
}
}
//The class that actually does the work
public class IntermediateClass<T>
{
private List<T> _values;
//The user cannot call this constructor
internal IntermediateClass()
{
_values = new List<T>();
}
//Once generated, they can call "setup" methods such as this
public IntermediateClass<T> With(T value)
{
var instance = new IntermediateClass<T>() { _values = _values };
instance._values.Add(value);
return instance;
}
//Picture "lazy loading" - you have to call this method to
//actually do anything worthwhile
public void Save()
{
var itemCount = _values.Count;
. . . //save to database, write a log, do some real work
}
}
As you can see, proper usage of this code would be something like:
new FluentClass().Init<int>().With(-1).With(300).With(42).Save();
The problem is that people are using it this way (thinking it achieves the same as the above):
new FluentClass().Init<int>().With(-1).With(300).With(42);
So pervasive is this problem that, with entirely good intentions, another developer once actually changed the name of the "Init" method to indicate that THAT method was doing the "real work" of the software.
Logic errors like these are very difficult to spot, and, of course, it compiles, because it is perfectly acceptable to call a method with a return value and just "pretend" it returns void. Visual Studio doesn't care if you do this; your software will still compile and run (although in some cases I believe it throws a warning). This is a great feature to have, of course. Imagine a simple "InsertToDatabase" method that returns the ID of the new row as an integer - it is easy to see that there are some cases where we need that ID, and some cases where we could do without it.
In the case of this piece of software, there is definitively never any reason to eschew that "Save" function at the end of the method chain. It is a very specialized utility, and the only gain comes from the final step.
I want somebody's software to fail at the compiler level if they call "With()" and not "Save()".
It seems like an impossible task by traditional means - but that's why I come to you guys. Is there an Attribute I can use to prevent a method from being "cast to void" or some such?
Note: The alternate way of achieving this goal that has already been suggested to me is writing a suite of unit tests to enforce this rule, and using something like http://www.testdriven.net to bind them to the compiler. This is an acceptable solution, but I am hoping for something more elegant.
I don't know of a way to enforce this at a compiler level. It's often requested for objects which implement IDisposable as well, but isn't really enforceable.
One potential option which can help, however, is to set up your class, in DEBUG only, to have a finalizer that logs/throws/etc. if Save() was never called. This can help you discover these runtime problems while debugging instead of relying on searching the code, etc.
However, make sure that, in release mode, this is not used, as it will incur a performance overhead since the addition of an unnecessary finalizer is very bad on GC performance.
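A rough sketch of that idea, assuming a _saved flag on the intermediate class (illustrative only; Debug.Fail is used here instead of throwing from the finalizer):
public class IntermediateClass<T>
{
    private bool _saved;

    public void Save()
    {
        _saved = true;
        // ... do the real work ...
    }

#if DEBUG
    // Debug-only finalizer: flags chains that were built but never saved.
    ~IntermediateClass()
    {
        if (!_saved)
        {
            System.Diagnostics.Debug.Fail(
                "An IntermediateClass was garbage collected without Save() being called.");
        }
    }
#endif
}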
You could require specific methods to use a callback like so:
new FluentClass().Init<int>(x =>
{
x.Save(y =>
{
y.With(-1);
y.With(300);
});
});
The With method returns some specific object, and the only way to get that object is by calling x.Save(), which itself has a callback that lets you set up your indeterminate number of With statements. So Init takes something like this:
public T Init<T>(Func<MyInitInputType, MySaveResultType> initSetup)
I can think of a few solutions, none of them ideal.
AIUI what you want is a function which is called when the temporary variable goes out of scope (as in, when it becomes available for garbage collection, but will probably not be garbage collected for some time yet); see: The difference between a destructor and a finalizer? This hypothetical function would say "if you've constructed a query in this object but not called Save, produce an error". C++/CLI calls this RAII: in C++/CLI there is a concept of a "destructor", run when the object isn't used any more, and a "finalizer", which is called when it's finally garbage collected. Very confusingly, C# has only a so-called destructor, but this is only called by the garbage collector (it would be valid for the framework to call it earlier, as if it were partially cleaning up the object immediately, but AFAIK it doesn't do anything like that). So what you would like is a C++/CLI destructor. Unfortunately, AIUI this maps onto the concept of IDisposable, which exposes a Dispose() method that can be called where a C++/CLI destructor would be called, or when the C# destructor is called; but AIUI you still have to call Dispose manually, which rather defeats the point.
Refactor the interface slightly to convey the concept more accurately. Call the init function something like "prepareQuery" or "AAA" or "initRememberToCallSaveOrThisWontDoAnything". (The last is an exaggeration, but it might be necessary to make the point).
This is more of a social problem than a technical problem. The interface should make it easy to do the right thing, but programmers do have to know how to use code! Get all the programmers together. Explain simply once-and-for-all this simple fact. If necessary have them all sign a piece of paper saying they understand, and if they wilfully continue to write code which doesn't do anything they're worse than useless to the company and will be fired.
Fiddle with the way the operators are chained, e.g. have each of the IntermediateClass functions assemble an aggregate IntermediateClass object containing all of the parameters (you mostly do it this way already(?)), but require an Init-like function of the original class to take that as an argument rather than have them chained after it. Then you can have Save and the other functions return two different class types (with essentially the same contents), and have Init accept only a class of the correct type.
The fact that it's still a problem suggests that either your coworkers need a helpful reminder, or they're rather sub-par, or the interface wasn't very clear (perhaps it's perfectly good, but the author didn't realise it wouldn't be clear to someone who only used it in passing rather than getting to know it), or you yourself have misunderstood the situation. A technical solution would be good, but you should probably also think about why the problem occurred and how to communicate more clearly, probably asking for someone senior's input.
After great deliberation and trial and error, it turns out that throwing an exception from the Finalize() method was not going to work for me. Apparently, you simply can't do that; the exception gets eaten up, because garbage collection operates non-deterministically. I was unable to get the software to call Dispose() automatically from the destructor either. Jack V.'s comment explains this well; here was the link he posted, for redundancy/emphasis:
The difference between a destructor and a finalizer?
Changing the syntax to use a callback was a clever way to make the behavior foolproof, but the agreed-upon syntax was fixed, and I had to work with it. Our company is all about fluent method chains. I was also a fan of the "out parameter" solution to be honest, but again, the bottom line is the method signatures simply could not change.
Helpful information about my particular problem includes the fact that my software is only ever to be run as part of a suite of unit tests - so efficiency is not a problem.
What I ended up doing was using Mono.Cecil to reflect over the calling assembly (the code calling into my software). Note that System.Reflection was insufficient for my purposes, because it cannot pinpoint method references, but I still needed(?) to use it to get the "calling assembly" itself (Mono.Cecil remains underdocumented, so it's possible I just need to get more familiar with it in order to do away with System.Reflection altogether; that remains to be seen...).
I placed the Mono.Cecil code in the Init() method, and the structure now looks something like:
public IntermediateClass<T> Init<T>()
{
ValidateUsage(Assembly.GetCallingAssembly());
return new IntermediateClass<T>();
}
void ValidateUsage(Assembly assembly)
{
// 1) Use Mono.Cecil to inspect the codebase inside the assembly
var assemblyLocation = assembly.CodeBase.Replace("file:///", "");
var monoCecilAssembly = AssemblyFactory.GetAssembly(assemblyLocation);
// 2) Retrieve the list of Instructions in the calling method
var methods = monoCecilAssembly.Modules...Types...Methods...Instructions
// (It's a little more complicated than that...
// if anybody would like more specific information on how I got this,
// let me know... I just didn't want to clutter up this post)
// 3) Those instructions refer to OpCodes and Operands....
// Defining "invalid method" as a method that calls "Init" but not "Save"
var methodCallingInit = method.Body.Instructions.Any
(instruction => instruction.OpCode.Name.Equals("callvirt")
&& instruction.Operand is IMethodReference
&& instruction.Operand.ToString().Equals(INITMETHODSIGNATURE));
var methodNotCallingSave = !method.Body.Instructions.Any
(instruction => instruction.OpCode.Name.Equals("callvirt")
&& instruction.Operand is IMethodReference
&& instruction.Operand.ToString().Equals(SAVEMETHODSIGNATURE));
var methodInvalid = methodCallingInit && methodNotCallingSave;
// Note: this is partially pseudocode;
// It doesn't 100% faithfully represent either Mono.Cecil's syntax or my own
// There are actually a lot of annoying casts involved, omitted for sanity
// 4) Obviously, if the method is invalid, throw
if (methodInvalid)
{
throw new Exception(String.Format("Bad developer! BAD! {0}", method.Name));
}
}
Trust me, the actual code is even uglier looking than my pseudocode.... :-)
But Mono.Cecil just might be my new favorite toy.
I now have a method that refuses to run its main body unless the calling code "promises" to also call a second method afterwards. It's like a strange kind of code contract. I'm actually thinking about making this generic and reusable. Would any of you have a use for such a thing? Say, if it were an attribute?
What if you made it so Init and With don't return objects of type FluentClass? Have them return, e.g., an UninitializedFluentClass which wraps a FluentClass object. Then calling .Save() on the UninitializedFluentClass object calls it on the wrapped FluentClass object and returns it. If they don't call Save they don't get a FluentClass object.
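A small sketch of that wrapping idea (the type names mirror the suggestion; the members are illustrative):
using System.Collections.Generic;

public class UninitializedFluentClass<T>
{
    private readonly List<T> _values = new List<T>();

    public UninitializedFluentClass<T> With(T value)
    {
        _values.Add(value);
        return this;
    }

    // Save() is the only way to get hold of the "real" object,
    // so a caller who needs a FluentClass<T> cannot skip it.
    public FluentClass<T> Save()
    {
        // ... persist, log, do the real work ...
        return new FluentClass<T>(_values);
    }
}

public class FluentClass<T>
{
    internal FluentClass(List<T> values) { Values = values; }
    public IReadOnlyList<T> Values { get; private set; }
}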
In Debug mode, besides implementing IDisposable, you can set up a timer that will throw an exception after one second if the result method has not been called.
Use an out parameter! All the outs must be used.
Edit: I am not sure if it will help, though...
It would break the fluent syntax.
The following method should only be called after it has been verified that there are invalid digits (by calling another method). How can I test-cover the throw line in the following snippet? I know that one way could be to merge VerifyThereAreInvalidDigits and this method; I'm looking for any other ideas.
public int FirstInvalidDigitPosition {
get {
for (int index = 0; index < this.positions.Count; ++index) {
if (!this.positions[index].Valid) return index;
}
throw new InvalidOperationException("Attempt to get invalid digit position when there are no invalid digits.");
}
}
I also would not want to write a unit test that exercises code that should never be executed.
If the "throw" statement in question is truly unreachable under any possible scenario then it should be deleted and replaced with:
Debug.Fail("This should be unreachable; please find and fix the bug that caused this to be reached.");
If the code is reachable then write a unit test that tests that scenario. Error-reporting scenarios for public-accessible methods are perfectly valid scenarios. You have to handle all inputs correctly, even bad inputs. If the correct thing to do is to throw an exception then test that you are throwing an exception.
UPDATE: according to the comments, it is in fact impossible for the error to be hit, and therefore the code is unreachable. But now the Debug.Fail is not reachable either, and the code doesn't compile, because the compiler notes that a method that returns a value has a reachable end point.
The first problem should not actually be a problem; surely the code coverage tool ought to be configurable to ignore unreachable debug-only code. But both problems can be solved by rewriting the loop:
public int FirstInvalidDigitPosition
{
get
{
int index = 0;
while(true)
{
Debug.Assert(index < this.positions.Count, "Attempt to get invalid digit position but there are no invalid digits!");
if (!this.positions[index].Valid) return index;
index++;
}
}
}
An alternative approach would be to reorganize the code so that you don't have the problem in the first place:
public int? FirstInvalidDigitPosition {
get {
for (int index = 0; index < this.positions.Count; ++index) {
if (!this.positions[index].Valid) return index;
}
return null;
}
}
and now you don't need to require callers to call AreThereInvalidDigits first; just make it legal to call this method at any time. That seems like the safer thing to do. Methods that blow up when you don't do some expensive check to verify that they are safe to call are fragile, dangerous methods.
I don't understand why you wouldn't want to write a unit test that exercises "code that should not happen".
If it "should not happen", then why are you writing the code at all? Because you think it might happen of course - perhaps an error in a future refactoring will break your other validation and this code will be executed.
If you think it's worth writing the code, it's worth unit testing it.
It's sometimes necessary to write small bits of code that can't execute, just to appease a compiler (e.g. if a function which always throws an exception, exits the application, etc. is called from a function which is supposed to return a value, the compiler may insist that the caller include a "return 0", "Return Nothing", etc. statement which can never execute). I wouldn't think one should be required to test such "code", since it may be impossible to make it execute without seriously breaking other parts of the system.
A more interesting question comes with code that the processor should never be able to execute under normal circumstances, but which exists to minimize harm from abnormal circumstances. For example, some safety-critical applications I've written start with code something like this (the real application is machine code; what follows is pseudo-code):
register = 0
test register for zero
if non-zero goto dead
register = register - 1
test register for zero
if non-zero goto okay1
dead:
shut everything down
goto dead
okay1:
register = register + 1
if non-zero goto dead
I'm not sure how exactly one would go about testing that code in the failure case. The purpose of the code is to ensure that if the register has suffered electrostatic or other damage, or is otherwise not working, the system will shut down safely. Absent some very fancy equipment, though, I have no idea how to test such code.
Part of the purpose of testing is to test both things that should happen and things that can happen.
If you have code that should never execute, but could under the right (or wrong) conditions, you can verify that the exception occurs in the right conditions (in MS's Unit Test Framework) by decorating your test method with:
[ExpectedException(typeof(InvalidOperationException))]
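For example, in MSTest (the arrangement helper here is hypothetical, since the class under test isn't shown in full):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FirstInvalidDigitPositionTests
{
    [TestMethod]
    [ExpectedException(typeof(InvalidOperationException))]
    public void FirstInvalidDigitPosition_Throws_WhenThereAreNoInvalidDigits()
    {
        // Arrange: build an instance whose digits are all valid
        // (how to do that depends on the class under test).
        var number = CreateNumberWithOnlyValidDigits();   // hypothetical helper

        // Act: reading the property is expected to throw.
        var unused = number.FirstInvalidDigitPosition;
    }
}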
To rephrase the issue, basically: how do you test an invalid state of your system when you've already gone to a lot of trouble to make sure that the system can't get into that invalid state in the first place?
It will make for slightly slower tests, but I would suggest using reflection to put your system into the invalid state in order to test it.
Also notice that reflection is precisely one of the possible ways for your system to get into that invalid state, which will execute your supposedly impossible to reach code.
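A sketch of that approach, assuming the digits live in a private positions field; the field name, the helper, and the replacement list are all guesses about the poster's code, labelled as such:
using System.Reflection;

// Force the "impossible" state by overwriting the private field directly,
// bypassing the public API that normally guarantees at least one invalid digit.
var number = CreateNumberWithAnInvalidDigit();           // hypothetical helper
var positionsField = number.GetType().GetField(
    "positions", BindingFlags.Instance | BindingFlags.NonPublic);
positionsField.SetValue(number, allValidPositions);      // hypothetical replacement list

// Now the throw line is reachable and the exception can be asserted on.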
If your code was truly impossible to reach, which is not quite the case in your example as far as I see, then you should remove the code. Eg:
public int GetValue() {
return 5;
throw new InvalidOperationException("Somehow the return statement did not work!");
}
I also would not want to write a unit test that exercises code that should never be executed.
What do you mean by "should" here? Like, that it ought not to be executed? Surely it should be ok for the code to be executed in a test. It should be executed in a test.
I have a simple question for you (I hope). :)
I have pretty much always used void as a "return" type when doing CRUD operations on data.
E.g., consider this code:
public void Insert(IAuctionItem item) {
if (item == null) {
AuctionLogger.LogException(new ArgumentNullException("item is null"));
}
_dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
_dataStore.DataContext.SubmitChanges();
}
and then consider this code:
public bool Insert(IAuctionItem item) {
if (item == null) {
AuctionLogger.LogException(new ArgumentNullException("item is null"));
}
_dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
_dataStore.DataContext.SubmitChanges();
return true;
}
It actually just comes down to whether you should notify that something was inserted (and went well) or not ?
I typically go with the first option there.
Given your code, if something goes wrong with the insert there will be an Exception thrown.
Since you have no try/catch block around the Data Access code, the calling code will have to handle that Exception...thus it will know both if and why it failed. If you just returned true/false, the calling code will have no idea why there was a failure (it may or may not care).
I think it would make more sense if, in the case where item == null, you returned false. That would indicate a case that you expect to happen reasonably often and that you therefore don't want to raise an exception for; the calling code can handle the false return value.
As it stands, you'll either get true back or there'll be an exception; the return value doesn't really help you much.
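To see the difference from the caller's side (the repository variable and the specific exception type are illustrative, and the two snippets are alternatives, not code that coexists):
// void version: failure arrives as an exception, with a reason attached.
try
{
    repository.Insert(item);
}
catch (InvalidOperationException ex)   // catch the specific exceptions you expect
{
    // the caller knows both that it failed and why
}

// bool version: the caller only learns that it failed, not why.
if (!repository.Insert(item))
{
    // no exception and no reason, just "false"
}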
Don't fight the framework you happen to be in. If you are writing C code, where return values are the most common mechanism for communicating errors (for lack of a better built in construct), then use that.
.NET base class libraries use exceptions to communicate errors, and their absence means everything is okay. Because almost all code uses the BCL, much of it will be written to expect exceptions. When it has to call into a library written as if C# were C with no support for exceptions, each invocation will need to be wrapped in an if (!myObject.DoSomething()) { Console.WriteLine("Damn"); } block.
For the next developer to use your code (which could be you after a few years, once you've forgotten how you originally did it), it will be a pain to start writing calling code that handles error conditions passed as return values, as changes to values in an output parameter, as custom events, as callbacks, as messages to a queue, or any of the other imaginable ways to communicate failure or the lack thereof.
I think it depends. Imagine that your user wants to add a new post to a forum, and the add fails for some reason. If you don't tell the user, they will never know that something went wrong. The best approach is to throw another exception with a nice message for them.
And if it does not relate to the user, and you have already logged it to the database log, you shouldn't care about the return value any more.
I think it is a good idea to notify the user whether the operation went well or not. Regardless of how much you test your code and try to think outside the box, it is likely that at some point the software will encounter a problem you did not cater for, making it behave incorrectly. Notifications, in my opinion, allow the user to take action, a sort of Plan B if you like, when the program fails. That action can either be a simple workaround or informing the IT department so that they can fix it.
I'd rather click that extra "OK" button than learn that something went wrong when it is too late.
You should stick with void. If you need more data, use variables (out parameters, for example) for it, since you may need specific data (and it can be more than one number or string), and the exception mechanism is a good solution for handling errors.
So if you want to know how many rows were affected, whether a stored procedure returned something, etc., a return type will limit you.
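For instance, a sketch of the "rows affected" case using an out parameter; the signature is illustrative and reuses the Insert method and fields from the question:
public void Insert(IAuctionItem item, out int rowsAffected)
{
    if (item == null) throw new ArgumentNullException("item");

    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();

    rowsAffected = 1;   // or whatever the data layer actually reports
}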
Does C# have a way to temporarily change the value of a variable in a specific scope and revert it back automatically at the end of the scope/block?
For instance (not real code):
bool UpdateOnInput = true;
using (UpdateOnInput = false)
{
//Doing my changes without notifying anyone
Console.WriteLine (UpdateOnInput) // prints false
}
//UpdateOnInput is true again.
EDIT:
The reason I want the above is because I don't want to do this:
UpdateOnInput = false
//Doing my changes without notifying anyone
Console.WriteLine (UpdateOnInput) // prints false
UpdateOnInput = true
No, there's no way to do this directly. There are a few different schools of thought on how to do this sort of thing. Compare and contrast these two:
originalState = GetState();
SetState(newState);
DoSomething();
SetState(originalState);
vs
originalState = GetState();
SetState(newState);
try
{
DoSomething();
}
finally
{
SetState(originalState);
}
Many people will tell you that the latter is "safer".
It ain't necessarily so.
The difference between the two is of course that the latter restores the state even if DoSomething() throws an exception. Is that better than keeping the state mutated in an exception scenario? What makes it better? You have an unexpected, unhandled exception reporting that something awful and unexpected has happened. Your internal state could be completely inconsistent and arbitrarily messed up; no one knows what might have been happening at the point of the exception. All we know is that DoSomething was probably trying to do something to the mutated state.
Is it really the right thing to do in the scenario where something terrible and unknown has happened to keep on stirring that particular pot and trying to mutate the state that just caused an exception again?
Sometimes that is going to be the right thing to do, and sometimes it's going to make matters worse. Which scenario you're actually in depends on exactly what the code is doing, so think carefully about what the right thing to do is before blindly choosing one or the other.
Frankly, I would rather solve the problem by not getting into the situation in the first place. Our existing compiler design uses this design pattern, and frankly, it is freakin' irritating. In the existing C# compiler the error reporting mechanism is "side effecting". That is, when part of the compiler gets an error, it calls the error reporting mechanism which then displays the error to the user.
This is a major problem for lambda binding. If you have:
void M(Func<int, int> f) {}
void M(Func<string, int> f) {}
...
M(x=>x.Length);
then the way this works is we try to bind
M((int x)=>{return x.Length;});
and
M((string x)=>{return x.Length;});
and we see which one, if any, gives us an error. In this case, the former gives an error, the latter compiles without error, so this is a legal lambda conversion and overload resolution succeeds. What do we do with the error? We cannot report it to the user because this program is error free!
Therefore what we do when we bind the body of a lambda is exactly what you say: we tell the error reporter "don't report your errors to the user; save them in this buffer over here instead". Then we bind the lambda, restore the error reporter to its earlier state, and look at the contents of the error buffer.
We could avoid this problem entirely by changing the expression analyzer so that it returned the errors along with the result, rather than making errors a state-related side effect. Then the need for mutation of the error reporting state goes away entirely and we don't even have to worry about it.
So I would encourage you to revisit your design. Is there a way you can make the operation you are performing not dependent upon the state you are mutating? If so, then do that, and then you don't need to worry about how to restore the mutated state.
(And of course in our case we do want to restore the state upon an exception. If something inside the compiler throws during lambda binding, we want to be able to report that to the user! We don't want the error reporter to stay in the "suppress reporting errors" state.)
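A toy sketch of the "return the errors along with the result" idea, far removed from a real compiler; all of the names here are illustrative. Probing a candidate never requires mutating and restoring a shared error reporter:
using System;
using System.Collections.Generic;
using System.Linq;

public sealed class Result<T>
{
    public T Value { get; private set; }
    public IReadOnlyList<string> Errors { get; private set; }
    public bool Success { get { return Errors.Count == 0; } }

    public Result(T value, IReadOnlyList<string> errors)
    {
        Value = value;
        Errors = errors;
    }
}

public static class Analyzer
{
    // "Binding" a candidate never touches any global error reporter;
    // the caller decides what to do with the errors, including discarding them.
    public static Result<int> TryEvaluate(string candidate)
    {
        int value;
        return int.TryParse(candidate, out value)
            ? new Result<int>(value, new string[0])
            : new Result<int>(0, new[] { "'" + candidate + "' is not a valid int." });
    }

    // Overload-resolution-style probing: try every candidate, keep the one that succeeded.
    public static Result<int> PickFirstSuccessful(params string[] candidates)
    {
        return candidates.Select(TryEvaluate).FirstOrDefault(r => r.Success)
            ?? new Result<int>(0, new[] { "No candidate bound successfully." });
    }
}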
No, but it is pretty simple to just do this:
bool UpdateOnTrue = true;
// ....
bool temp = UpdateOnTrue;
try
{
UpdateOnTrue = false;
// do stuff
}
finally
{
UpdateOnTrue = temp;
}
Try:
public void WithAssignment<T>(ref T var, T val, Action action)
{
T original = var;
var = val;
try
{
action();
}
finally
{
var = original;
}
}
Now you can say:
bool flag = false;
WithAssignment(ref flag, true, () =>
{
// flag is true during this block
});
// flag is false again
No, you have to do it manually with a try/finally block. I dare say you could write an IDisposable implementation which would do something hacky in conjunction with lambda expressions, but I suspect a try/finally block is simpler (and doesn't abuse the using statement).
Sounds like you really want
Stack<bool>
No, there is no standard way; you would have to implement it manually. A generic implementation of IEditableObject via TypeDescriptor and reflection can be helpful.
Canned... I doubt it. Given that your example is a simple use of a temporary bool value, I'm assuming you've got something wacky in mind :-) You can implement some kind of stack structure:
1) Push old value onto stack
2) Load new value
3) Do Stuff
4) Pop from stack and restore the original value.
Rough (AKA Untested) example (Can't look up stack syntax right now)
bool CurrentValue = true;
Stack<bool> Storage = new Stack<bool>();
Storage.Push(CurrentValue);
CurrentValue=false;
DoStuff();
CurrentValue = Storage.Pop();
//Continue
You should refactor your code to use a separate function, like so:
bool b = GetSomeValue();
DoSomething(ModifyValue(b));
//b still has the original value.
For this to work for a reference type, you need to copy it before messing with it:
ICloneable obj = GetSomeValue();
DoSomething(ModifyValue(obj.Clone()));
//obj still has the original value.
It's hard to write correct code when the values of your variables change around a lot. Strive to have as few reassignments in your code as possible.