I was checking some of the code that makes up the LINQ extensions in Reflector, and this is the kind of code I came across:
private bool MoveNext()
{
bool flag;
try
{
switch (this.<>1__state)
{
case 0:
this.<>1__state = -1;
this.<set>5__7b = new Set<TSource>(this.comparer);
this.<>7__wrap7d = this.source.GetEnumerator();
this.<>1__state = 1;
goto Label_0092;
case 2:
this.<>1__state = 1;
goto Label_0092;
default:
goto Label_00A5;
}
Label_0050:
this.<element>5__7c = this.<>7__wrap7d.Current;
if (this.<set>5__7b.Add(this.<element>5__7c))
{
this.<>2__current = this.<element>5__7c;
this.<>1__state = 2;
return true;
}
Label_0092:
if (this.<>7__wrap7d.MoveNext())
{
goto Label_0050;
}
this.<>m__Finally7e();
Label_00A5:
flag = false;
}
fault
{
this.System.IDisposable.Dispose();
}
return flag;
}
Was there a reason for Microsoft to write it this way?
Also what does the <> syntax mean, in lines like:
switch (this.<>1__state)
I have never seen it written before a variable, only after.
The MSIL is still valid 2.x code, and the <> names you're seeing are auto-generated by the C# 3.x compiler.
For example:
public void AttachEvents()
{
_ctl.Click += (sender,e) => MessageBox.Show( "Hello!" );
}
Translates to something like:
public void AttachEvents()
{
_ctl.Click += new EventHandler( <>b_1 );
}
private void <>b_1( object sender, EventArgs e )
{
MessageBox.Show( "Hello!" );
}
I should also note that the reason you're seeing it like that in Reflector is that you don't have .NET 3.5 optimization turned on. Go to View | Options and change Optimization to .NET 3.5 and it will do a better job of translating the generated identifiers back to their lambda expressions.
You're seeing the internal guts of the finite state machines that the C# compiler emits on your behalf when it handles iterators.
Jon Skeet has some great articles (Iterator block implementation details and Iterators, iterator blocks and data pipelines) on this subject. See also Chapter 6 of his book.
There was previously an SO post on this subject.
And, finally, Microsoft Research has a nice paper on the subject.
Read to your heart's content.
Identifiers starting with <> aren't valid C# identifiers, so I suspect they use them to mangle the names without fear of conflict, as no identifier in the C# code could be the same.
As to why it's hard to read, I suspect that it's more down to the fact it's easy to generate.
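To see why there can be no collision, note that you can't even declare such a name yourself. This tiny sketch is illustrative only (the exact compiler error text may differ):
class Demo
{
    // int <>1__state;   // won't compile: "Identifier expected" - '<' can't start an identifier
    int __state;         // any name you can legally write is therefore distinct from the mangled ones
}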
This is code that is automatically generated when you use iterators. The <> prefix is used to ensure there are no collisions, and also to prevent you from accessing the compiler-generated classes directly in your code.
See the following for more information:
Using C# Yield for Readability and Performance
C# Iterators
These are types that have been auto-generated by the compiler from iterator methods.
The compiler will do exactly the same sort of thing to your own iterators. For example, write something like this and then take a look at the actual generated code in Reflector:
public IEnumerable<int> GetRandom()
{
Random rng = new Random();
while (true)
{
yield return rng.Next();
}
}
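For reference, here is a hand-written approximation of the state machine the compiler produces for GetRandom(). The real generated type uses unspeakable names such as <GetRandom>d__0 and <>1__state, which are not legal C#, so ordinary identifiers are used in this sketch, and several details (thread-id checks, the GetEnumerator bookkeeping) are simplified:
using System;
using System.Collections;
using System.Collections.Generic;

// Rough sketch only - the actual compiler-generated class is more involved.
sealed class GetRandomIterator : IEnumerable<int>, IEnumerator<int>
{
    private int state;      // plays the role of <>1__state
    private int current;    // plays the role of <>2__current
    private Random rng;     // the hoisted local variable from GetRandom()

    public bool MoveNext()
    {
        switch (state)
        {
            case 0:                     // first call: run the code before the loop
                rng = new Random();
                state = 1;
                goto case 1;
            case 1:                     // resume inside the while (true) loop
                current = rng.Next();   // the "yield return rng.Next()" expression
                return true;            // suspend; Current now holds the value
            default:
                return false;
        }
    }

    public int Current { get { return current; } }
    object IEnumerator.Current { get { return current; } }
    public void Reset() { throw new NotSupportedException(); }
    public void Dispose() { }
    public IEnumerator<int> GetEnumerator() { return this; }
    IEnumerator IEnumerable.GetEnumerator() { return this; }
}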
This is a state machine that is automatically generated from an iterator, such as the following:
static IEnumerable<Func<KeyValuePair<int, int>>> FunnyMethod() {
    for (var i = 0; i < 10; i++) {
        var localVar = i;
        yield return () => new KeyValuePair<int, int>(localVar, i);
    }
}
If you invoke the returned delegates after the loop has finished, they will all return 10 for the second value (the loop variable i is shared by every closure), while localVar keeps its per-iteration value.
The compiler transforms these methods into state machines that store their state in the <>1__state field and call each part of the iterator for a different value of the field.
The <> part is part of the generated field name, and is chosen so as not to conflict with anything.
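To see the capture behaviour concretely, you can materialise the sequence and then invoke the delegates after the loop has completed. This is a small sketch; it assumes using System.Linq for ToList():
var funcs = FunnyMethod().ToList();   // enumerating runs the loop to completion, so i ends at 10
foreach (var f in funcs)
{
    KeyValuePair<int, int> pair = f();
    // pair.Key is the per-iteration copy (0..9);
    // pair.Value is the shared loop variable, which is 10 by now.
    Console.WriteLine("{0} -> {1}", pair.Key, pair.Value);
}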
You must understand what Reflector does. It is not getting the source code back; that's not what a developer at MS wrote. :) It takes the Intermediate Language (IL) and systematically converts it back to C# (or VB.NET). In doing so, it must come up with an approach. As you know, there are many ways to skin a cat in code that will eventually lead to the same IL. Reflector has to pick a way to move backwards from IL to a higher-level language and use that way every time.
(Fixed per comment, thank you.)
Related
While going through the new C# 7.0 features, I got stuck on the discards feature. The documentation says:
Discards are local variables which you can assign but cannot read from, i.e. they are “write-only” local variables.
and then an example follows:
if (bool.TryParse("TRUE", out bool _))
What is a real use case where this is beneficial? I mean, what if I had defined it in the normal way, say:
if (bool.TryParse("TRUE", out bool isOK))
The discards are basically a way to intentionally ignore local variables which are irrelevant for the purposes of the code being produced. It's like when you call a method that returns a value but, since you are interested only in the underlying operations it performs, you don't assign its output to a local variable defined in the caller method, for example:
public static void Main(string[] args)
{
// I want to modify the records but I'm not interested
// in knowing how many of them have been modified.
ModifyRecords();
}
public static Int32 ModifyRecords()
{
Int32 affectedRecords = 0;
for (Int32 i = 0; i < s_Records.Count; ++i)
{
Record r = s_Records[i];
if (String.IsNullOrWhiteSpace(r.Name))
{
r.Name = "Default Name";
++affectedRecords;
}
}
return affectedRecords;
}
Actually, I would call it a cosmetic feature... in the sense that it's a design-time feature (the computations concerning the discarded variables are performed anyway) that helps keep the code clear, readable, and easy to maintain.
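A small sketch (hypothetical method names) to make that point concrete: the work behind the out parameter is still performed, only the local name is omitted:
static bool TryComputeExpensive(out int value)
{
    Console.WriteLine("Doing the expensive work anyway...");
    value = 42;
    return true;
}

static void Demo()
{
    // The side effect above still happens; we just state that we don't care about 'value'.
    if (TryComputeExpensive(out _))
    {
        Console.WriteLine("Succeeded; the result was intentionally ignored.");
    }
}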
I find the example shown in the link you provided kinda misleading. If I try to parse a String as a Boolean, chances are I want to use the parsed value somewhere in my code. Otherwise I would just try to see if the String corresponds to the text representation of a Boolean (a regular expression, for example... even a simple if statement could do the job if casing is properly handled). I'm far from saying that this never happens or that it's a bad practice, I'm just saying it's not the most common coding pattern you may need to produce.
The example provided in this article, by contrast, really shows the full potential of this feature:
public static void Main()
{
var (_, _, _, pop1, _, pop2) = QueryCityDataForYears("New York City", 1960, 2010);
Console.WriteLine($"Population change, 1960 to 2010: {pop2 - pop1:N0}");
}
private static (string, double, int, int, int, int) QueryCityDataForYears(string name, int year1, int year2)
{
int population1 = 0, population2 = 0;
double area = 0;
if (name == "New York City")
{
area = 468.48;
if (year1 == 1960) {
population1 = 7781984;
}
if (year2 == 2010) {
population2 = 8175133;
}
return (name, area, year1, population1, year2, population2);
}
return ("", 0, 0, 0, 0, 0);
}
From what I can see reading the above code, it seems that discards have a higher synergy with other paradigms introduced in the most recent versions of C#, like tuple deconstruction.
For Matlab programmers, discards are far from being a new concept, because the language has implemented them for a very, very long time (probably since the beginning, but I can't say for sure). The official documentation describes them as follows (link here):
Request all three possible outputs from the fileparts function:
helpFile = which('help');
[helpPath,name,ext] = fileparts(helpFile);
The current workspace now contains three variables from fileparts: helpPath, name, and ext. In this case, the variables are small. However, some functions return results that use much more memory. If you do not need those variables, they waste space on your system.
Ignore the first output using a tilde (~):
[~,name,ext] = fileparts(helpFile);
The only difference is that, in Matlab, inner computations for discarded outputs are normally skipped, because output arguments are flexible and a function can know how many of them, and which ones, have been requested by the caller.
I have seen discards used mainly with methods that return Task<T> whose result you don't want to await.
In the example below, we don't want to await the output of SomeOtherMethod(), so we could do something like this:
//myClass.cs
public async Task<bool> Example() => await SomeOtherMethod();
// example.cs
Example();
Except this will generate the following warning:
CS4014: Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.
To suppress this warning and essentially assure the compiler that we know what we are doing, you can use a discard:
//myClass.cs
public async Task<bool> Example() => await SomeOtherMethod();
// example.cs
_ = Example();
No more warnings.
To add another use case to the above answers.
You can use a discard in conjunction with a null coalescing operator to do a nice one-line null check at the start of your functions:
_ = myParam ?? throw new MyException();
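For example, as a guard clause at the top of a method (a minimal sketch; Customer is a placeholder type, and ArgumentNullException stands in for the answer's MyException):
public void Process(Customer customer)
{
    // Throws if the argument is null; otherwise the assigned value is simply discarded.
    _ = customer ?? throw new ArgumentNullException(nameof(customer));

    // ... the rest of the method can assume customer is not null.
}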
Many times I've done code along these lines:
TextBox.BackColor = int.TryParse(TextBox.Text, out int _) ? Color.LightGreen : Color.Pink;
Note that this would be part of a larger collection of data, not a standalone thing. The idea is to provide immediate feedback on the validity of each field of the data they are entering.
I use light green and pink rather than the green and red one would expect--the latter colors are dark enough that the text becomes a bit hard to read and the meaning of the lighter versions is still totally obvious.
(In some cases I also have a Color.Yellow to flag something which is not valid but neither is it totally invalid. Say the parser will accept fractions and the field currently contains "2 1". That could be part of "2 1/2" so it's not garbage, but neither is it valid.)
The discard pattern can be used with a switch expression as well.
string result = shape switch
{
    Rectangle r => "Rectangle",
    Circle c => "Circle",
    _ => "Unknown Shape"
};
For a list of patterns with discards refer to this article: Discards.
Consider this:
5 + 7;
This "statement" performs an evaluation but is not assigned to something. It will be immediately highlighted with the CS error code CS0201.
// Only assignment, call, increment, decrement, and new object expressions can be used as a statement
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/cs0201?f1url=%3FappId%3Droslyn%26k%3Dk(CS0201)
A discard used here will not change the fact that the expression's value is unused; rather, it makes it clear to the compiler, to you, and to others reviewing your code that it was intentionally unused.
_ = 5 + 7; //acceptable
It can also be used in lambda expressions that have unused parameters:
builder.Services.AddSingleton<ICommandDispatcher>(_ => dispatcher);
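If you are on C# 9 or later, each unused lambda parameter can be a discard as well (a small sketch; saveButton and SaveDocument are placeholders):
// Neither the sender nor the event args are needed by this handler.
saveButton.Click += (_, _) => SaveDocument();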
I mostly develop using C#, but I think this question might be suitable for other languages as well. Also, it seems like there is a lot of code here, but the question is very simple.
Inlining, as I understand it, is the compiler (in the case of C#, the JIT in the virtual machine) replacing a method call by inserting the body of the method at every place the method was called from.
Let's say I have the following program:
static void Main()
{
int number = 7;
bool a;
a = IsEven(number);
Console.WriteLine(a);
}
... the body of the method IsEven:
bool IsEven(int n)
{
if (n % 2 == 0) // Two conditional return paths
return true;
else
return false;
}
I can understand what the code will look like after inlining the method:
static void Main()
{
int number = 7;
bool a;
if (number % 2 == 0)
a = true;
else
a = false;
Console.WriteLine(a); // Will print true if 'number' is even, otherwise false
}
An obviously simple and correct program.
But if I tweak the body of IsEven a little bit to include an absolute return path...
bool IsEven(int n)
{
if (n % 2 == 0)
return true;
return false; // <- Absolute return path!
}
I personally like this style a bit more in some situations. Some refactoring tools might even suggest that I change the first version to look like this one - but when I tried to imagine what this method would look like when it's inlined, I was stumped.
If we inline the second version of the method:
static void Main()
{
int number = 7;
bool a;
if (number % 2 == 0)
a = true;
a = false;
Console.WriteLine(a); // Will always print false!
}
The question to be asked:
How does the compiler / virtual machine deal with inlining a method that has an absolute return path? It seems extremely unlikely that something like this would really prevent method inlining, so I wonder how such things are dealt with. Perhaps the process of inlining isn't as simple as this? Maybe one version is more likely to be inlined by the VM?
Edit:
Profiling both methods (and manual inlining of the first one) showed no difference in performance, so I can only assume that both methods get inlined and work in the same or similar manner (at least on my VM).
Also, these methods are extremely simple and seem almost interchangeable, but complex methods with absolute return paths might be much more difficult to change into versions without absolute return paths.
It can be kind of hard to explain what the JITter does when it inlines - it does not change the C# code to do the inlining - it will (always?) work on the produced bytes (the compiled version) - and the "tools" you have when producing assembly code (the actual machine-code bytes) are much more fine-grained than what you have in C# (or IL, for that matter).
That said, you can get an idea of how it works in C# terms by considering the break keyword.
Consider the possibility that every inlined function that's not trivial is enclosed in a while (true) loop (or a do { } while (false) loop), and that every return in the source is translated to a localVar = result; break; pair of statements. Then you get something like this:
static void Main()
{
int number = 7;
bool a;
while (true)
{
if (number % 2 == 0)
{
a = true;
break;
}
a = false;
break;
}
Console.WriteLine(a); // Will always print the right thing! Yey!
}
Similarly, when producing assembly, you will see A LOT of jmps being generated - those are the moral equivalent of the break statements, but they are MUCH more flexible (think about them as anonymous gotos or something).
So you can see, the jitter (and any compiler that compiles to native) has A LOT of tools at hand it can use to do "the right thing".
The return statement indicates:
The result value
Destruction of local variables (this applies to C++, not C#; in C#, finally blocks and Dispose calls in using blocks will run)
A jump out of the function
All of these things still happen after inlining. The jump will be local instead of cross-function after inlining, but it will still be there.
Inlining is NOT textual substitution like C/C++ macros are.
Another difference between inlining and textual substitution is the treatment of variables with the same name.
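A rough illustration of that difference (hypothetical names; the JIT works on IL and machine code, not on C# source, so this only shows the conceptual effect):
static int Twice(int x)
{
    int result = x * 2;      // the callee has its own local called 'result'
    return result;
}

static void Caller()
{
    int result = 5;                  // the caller also has a local called 'result'
    int doubled = Twice(result);     // conceptually inlined as: tmp = result * 2; doubled = tmp;
    Console.WriteLine(doubled);      // 10 - the caller's 'result' is untouched
    Console.WriteLine(result);       // still 5; a textual macro expansion could have clobbered it
}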
The machine code that the CPU executes is a very simple language. It doesn't have the notion of a return statement; a subroutine has a single point of entry and a single point of exit. So your IsEven() method like this:
bool IsEven(int n)
{
if (n % 2 == 0)
return true;
return false;
}
needs to be rewritten by the jitter to something that resembles this (not valid C#):
void IsEven(int n)
{
    if (n % 2 == 0) {
        $retval = true;
        goto exit;
    }
    $retval = false;
exit:
} // $retval becomes the function return value
The $retval variable might look fake here. It is not; it is the EAX register on an x86 core. You'll now see that this code is simple to inline: it can be transplanted directly into the body of Main(), and the $retval variable can be equated to the a variable with a simple logical substitution.
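Putting the two pieces together, the inlined Main() then looks roughly like this (still a sketch, but this one happens to be valid C#, with $retval substituted by the local a):
static void Main()
{
    int number = 7;
    bool a;
    if (number % 2 == 0)
    {
        a = true;
        goto done;
    }
    a = false;
done:
    Console.WriteLine(a);   // prints False for number == 7
}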
class Program
{
static void Main(string[] args)
{
var x = new Program();
Console.Write(x.Text);
Console.Write(x.Num);
//Console.Write(x.Num);//line A
}
private string Text_;
public string Text
{
get
{
return Text_ ?? (Text_ = "hello");//line B
}
}
private int? Num_;
public int Num
{
get
{
return (int)(Num_ ?? (Num_ = 42));//line C
}
}
}
I'm using Visual Studio 2010 to get code coverage results. It shows that line B is totally covered while line C is partially covered. I would expect line B to be partially covered as well instead of totally covered. Why do the code coverage results show line B as being totally covered?
To demonstrate that it's working "correctly" for the Num property, uncomment line A and run the coverage again. It should then show line C as being totally covered.
When I rewrite the code to a more verbose form (see below), it works correctly and reports that Text_ is partially covered. I prefer to use the former for its brevity, and would like to know if the two forms are equivalent too. Thanks in advance.
if (Text_ != null)
{
return Text_;
}
else
{
return Text_ = "hello";
}
The ?? on Nullable<T> expands to something more complex in the compiler, i.e.
a ?? b
is really
a.HasValue ? a.GetValueOrDefault() : b
Now, since a was null/empty the only time it was executed, a.GetValueOrDefault() was never called. The underlying code when using a string (or any plain reference) is simpler.
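Spelling out that expansion for the two getters (roughly; the actual IL differs in detail) makes the extra, never-taken branch on line C visible:
// line B - string ?? string: a single reference-null test
return Text_ != null ? Text_ : (Text_ = "hello");

// line C - int? ?? int: the HasValue test plus a GetValueOrDefault() call,
// which is only reached when Num_ already has a value - and that never happened here.
return Num_.HasValue ? Num_.GetValueOrDefault() : (int)(Num_ = 42);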
In reality, just call it twice to make this go away:
Console.Write(x.Text); // first call; performs init
Console.Write(x.Text); // test once initialized
Console.Write(x.Num); // first call; performs init
Console.Write(x.Num); // test once initialized
Without going into the IL generated by this line, I'd suspect that the compiler has done some internal optimisation that causes your "line B" to be regarded as a single statement.
The principal difference between the two statements is that line B refers to an object, whereas line C uses a shorthand reference to Num_.Value.
In general, it's definitely not worth getting stressed about "100% code coverage" -- as this example indicates, sometimes the instrumentation is wrong, anyway. What's important is that most of the code is covered, at least the principal execution paths through business logic.
Coverage is no replacement for code reviews by another human.
This one's really an offshoot of this question, but I think it deserves its own answer.
According to section 15.13 of the ECMA-334 (on the using statement, below referred to as resource-acquisition):
Local variables declared in a resource-acquisition are read-only, and shall include an initializer. A compile-time error occurs if the embedded statement attempts to modify these local variables (via assignment or the ++ and -- operators) or pass them as ref or out parameters.
This seems to explain why the code below is illegal.
struct Mutable : IDisposable
{
public int Field;
public void SetField(int value) { Field = value; }
public void Dispose() { }
}
using (var m = new Mutable())
{
// This results in a compiler error.
m.Field = 10;
}
But what about this?
using (var e = new Mutable())
{
// This is doing exactly the same thing, but it compiles and runs just fine.
e.SetField(10);
}
Is the above snippet undefined and/or illegal in C#? If it's legal, what is the relationship between this code and the excerpt from the spec above? If it's illegal, why does it work? Is there some subtle loophole that permits it, or is the fact that it works attributable only to mere luck (so that one shouldn't ever rely on the functionality of such seemingly harmless-looking code)?
I would read the standard in such a way that
using( var m = new Mutable() )
{
m = new Mutable();
}
is forbidden - for reasons that seem obvious.
Why it is not allowed for the struct Mutable beats me, because for a class the code is legal and compiles fine... (reference type, I know..)
Also, I do not see a reason why changing the contents of the value type would endanger the resource-acquisition. Anyone care to explain?
Maybe someone doing the syntax checking just misread the standard ;-)
Mario
I suspect the reason it compiles and runs is that SetField(int) is a function call, not an assignment or a ref or out parameter pass. The compiler has no way of knowing (in general) whether SetField(int) is going to mutate the variable or not.
This appears completely legal according to the spec.
And consider the alternatives. Static analysis to determine whether a given function call is going to mutate a value is clearly cost prohibitive in the C# compiler. The spec is designed to avoid that situation in all cases.
The other alternative would be for C# to not allow any method calls on value type variables declared in a using statement. That might not be a bad idea, since implementing IDisposable on a struct is just asking for trouble anyway. But when the C# language was first developed, I think they had high hopes for using structs in lots of interesting ways (as the GetEnumerator() example that you originally used demonstrates).
To sum it up
struct Mutable : IDisposable
{
public int Field;
public void SetField( int value ) { Field = value; }
public void Dispose() { }
}
class Program
{
protected static readonly Mutable xxx = new Mutable();
static void Main( string[] args )
{
//not allowed by compiler
//xxx.Field = 10;
xxx.SetField( 10 );
//prints out 0 !!!! <--- I do think that this is pretty bad
System.Console.Out.WriteLine( xxx.Field );
using ( var m = new Mutable() )
{
// This results in a compiler error.
//m.Field = 10;
m.SetField( 10 );
//This prints out 10 !!!
System.Console.Out.WriteLine( m.Field );
}
System.Console.In.ReadLine();
    }
}
So, in contrast to what I wrote above, I would recommend NOT using a function to modify a struct within a using block. This seems to work, but may stop working in the future.
Mario
This behavior is undefined. In The C# Programming Language (the annotated C# 4.0 specification), at the end of section 7.6.4 (Member Access), Peter Sestoft states:
The two bulleted points stating "if the field is readonly...then the result is a value" have a slightly surprising effect when the field has a struct type, and that struct type has a mutable field (not a recommended combination--see other annotations on this point).
He provides an example. I created my own example which displays more detail below.
Then, he goes on to say:
Somewhat strangely, if instead s were a local variable of struct type declared in a using statement, which also has the effect of making s immutable, then s.SetX() updates s.x as expected.
Here we see one of the authors acknowledge that this behavior is inconsistent. Per section 7.6.4, readonly fields are treated as values and do not change (only copies change). And because section 8.13 tells us that "the resource variable is read-only in the embedded statement", resources in using statements should behave like readonly fields. Per the rules of 7.6.4 we should be dealing with a value, not a variable. But surprisingly, the original value of the resource does change, as demonstrated in this example:
//Sections relate to C# 4.0 spec
class Test
{
readonly S readonlyS = new S();
static void Main()
{
Test test = new Test();
test.readonlyS.SetX();//valid we are incrementing the value of a copy of readonlyS. This is per the rules defined in 7.6.4
Console.WriteLine(test.readonlyS.x);//outputs 0 because readonlyS is a value not a variable
//test.readonlyS.x = 0;//invalid
using (S s = new S())
{
s.SetX();//valid, changes the original value.
Console.WriteLine(s.x);//Surprisingly...outputs 2. Although S is supposed to be a readonly field...the behavior diverges.
//s.x = 0;//invalid
}
}
}
struct S : IDisposable
{
public int x;
public void SetX()
{
x = 2;
}
public void Dispose()
{
}
}
The situation is bizarre. Bottom line, avoid creating readonly mutable fields.
Besides just using yield for iterators in Ruby, I also use it to pass control briefly back to the caller before resuming control in the called method. What I want to do in C# is similar. In a test class, I want to get a connection instance, create another variable instance that uses that connection, then pass the variable to the calling method so it can be fiddled with. I then want control to return to the called method so that the connection can be disposed. I guess I'm wanting a block/closure like in Ruby. Here's the general idea:
private static MyThing getThing()
{
using (var connection = new Connection())
{
yield return new MyThing(connection);
}
}
[TestMethod]
public void MyTest1()
{
// call getThing(), use yielded MyThing, control returns to getThing()
// for disposal
}
[TestMethod]
public void MyTest2()
{
// call getThing(), use yielded MyThing, control returns to getThing()
// for disposal
}
...
This doesn't work in C#; ReSharper tells me that the body of getThing cannot be an iterator block because MyThing is not an iterator interface type. That's definitely true, but I don't want to iterate through some list. I'm guessing I shouldn't use yield if I'm not working with iterators. Any idea how I can achieve this block/closure thing in C# so I don't have to wrap my code in MyTest1, MyTest2, ... with the code in getThing()'s body?
What you want are lambda expressions, something like:
// not named GetThing because it doesn't return anything
private static void Thing(Action<MyThing> thing)
{
using (var connection = new Connection())
{
thing(new MyThing(connection));
}
}
// ...
// you call it like this
Thing(t=>{
t.Read();
t.Sing();
t.Laugh();
});
This captures t the same way yield does in Ruby. The C# yield is different, it constructs generators that can be iterated over.
You say you want to use C#'s yield keyword the same way you would use Ruby's yield keyword. You seem to be a little confused about what the two actually do: they have absolutely nothing to do with each other, and what you are asking for is simply not possible.
The C# yield keyword is not the C# equivalent of the Ruby yield keyword. In fact, there is no equivalent to the Ruby yield keyword in C#. And the Ruby equivalent to C#'s yield keyword is not the yield keyword, it's the Enumerator::Yielder#yield method (also aliased as Enumerator::Yielder#<<).
IOW, it's for returning the next element of an iterator. Here's an abridged example from the official MSDN documentation:
public static IEnumerable Power(int number, int exponent)
{
    var counter = 0;
    var result = 1;
    while (counter++ < exponent)
    {
        result *= number;
        yield return result;
    }
}
Use it like so:
foreach (int i in Power(2, 8)) { Console.Write("{0} ", i); }
The Ruby equivalent would be something like:
def power(number, exponent)
  Enumerator.new do |yielder|
    result = 1
    1.upto(exponent) { yielder.yield result *= number }
  end
end
puts power(2, 8).to_a
In C#, yield is used to yield a value to the caller, and in Ruby, yield is used to yield control to a block argument.
In fact, in Ruby, yield is just a shortcut for Proc#call.
Imagine, if yield didn't exist. How would you write an if method in Ruby? It would look like this:
class TrueClass
def if(code)
code.call
end
end
class FalseClass
def if(_); end
end
true.if(lambda { puts "It's true!" })
This is kind of cumbersome. In Ruby 1.9, we get proc literals and a shortcut syntax for Proc#call, which make it a little bit nicer:
class TrueClass
def if(code)
code.()
end
end
true.if(->{ puts "It's true!" })
However, Yukihiro Matsumoto noticed that the vast majority of higher-order procedures only take one procedure argument. (Especially since Ruby has several control-flow constructs built into the language which would otherwise require multiple procedure arguments, like if-then-else, which would require two, and case-when, which would require n arguments.) So, he created a specialized way to pass exactly one procedural argument: the block. (In fact, we already saw an example of this at the very beginning, because Kernel#lambda is actually just a normal method which takes a block and returns a Proc.)
class TrueClass
def if(&code)
code.()
end
end
true.if { puts "It's true!" }
Now, since we can only ever pass exactly one block into a method, we really don't need to explicitly name the variable, since there can never be an ambiguity anyway:
def if
???.() # But what do we put here? We don't have a name to call #call on!
end
However, since we now no longer have a name that we can send messages to, we need some other way. And again, we get one of those 80/20 solutions that are so typical for Ruby: there are tons of things that one might want to do with a block: transform it, store it in an attribute, pass it to another method, inspect it, print it … However, by far the most common thing to do is to call it. So, matz added another specialized shortcut syntax for exactly this common case: yield means "call the block that was passed to the method". Therefore, we don't need a name:
def if; yield end
So, what is the C# equivalent to Ruby's yield keyword? Well, let's go back to the first Ruby example, where we explicitly passed the procedure as an argument:
def foo(bar)
bar.('StackOverflow')
end
foo ->name { puts "Higher-order Hello World from #{name}!" }
The C# equivalent is exactly the same:
void Foo(Action<string> bar) => bar("StackOverflow");
Foo(name => Console.WriteLine("Higher-order Hello World from {0}!", name));
I might pass a delegate into the iterator.
delegate void Action(MyThing myThing);
private static void forEachThing(Action action)
{
using (var connection = new Connection())
{
action(new MyThing(connection));
}
}
yield in C# is specifically for returning bits of an iterated collection. Your function has to return IEnumerable<Thing> or IEnumerable for yield to work, and it's meant to be consumed from a foreach loop. It is a very specific construct in C#, and it can't be used in the way you're trying.
I'm not sure off the top of my head if there's another construct that you could use or not, possibly something with lambda expressions.
You can have GetThing take a delegate containing the code to execute, then pass anonymous methods from other functions.
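In other words, something along these lines (a sketch using the question's MyThing and Connection types):
private static void GetThing(Action<MyThing> body)
{
    using (var connection = new Connection())
    {
        // The caller's code runs here; when it returns, the connection is disposed.
        body(new MyThing(connection));
    }
}

[TestMethod]
public void MyTest1()
{
    GetThing(delegate(MyThing thing)
    {
        // fiddle with 'thing'; disposal happens automatically afterwards
    });
}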