My question is: will LINQ in the following code read the flag value three times when materializing the numbers collection? I am trying to optimize my code. Here I want the Where clause to be evaluated only once if flag == true.
List<int> list = new(){1, 2, 3};
bool flag = true;
bool IsNumberBig(int num)
{
return num > 100;
}
var numbers = list.Where(l => flag || IsNumberBig(l)).ToList();
I failed to find a related question. I would be thankful to see how I could check this myself.
The value of flag will be evaluated every time the lambda is called. Obviously, that's cheaper than evaluating IsNumberBig() (or some more complex method in there), but still not free.
To optimize this, you could write something like
List<int> numbers;
if (flag)
{
numbers = list;
}
else
{
numbers = list.Where(IsNumberBig).ToList();
}
This way, no iteration is done if flag is true (which in your case would return all elements anyway).
I think it is important to note that LINQ is mostly syntactic sugar. It does not do optimization. The vast majority of optimizations are done by the compiler, or more specifically, the jitter.
One problem when discussing optimizations is that the jitter is allowed to perform any kind of optimization as long as the result is the same. But it also has to do its optimizations as fast as possible, so it rarely does everything it would be allowed to do. It will also depend on the version of the compiler; more recent ones have tiered compilation to get better optimization of frequently used loops.
Because of all this it can be difficult to guess what the compiler will and will not optimize, and the best approach is to just benchmark the code. Use Benchmark.Net with and without the check, and that way you will get a correct answer. It should also tell you whether the performance difference is anything to worry about.
Even though guessing what the compiler will do is difficult, there are a few things worth noting. Most optimizations are done within a method; the compiler will not try to rewrite method signatures. However, small methods tend to be inlined, and can therefore be optimized as part of the calling method. So if all your code were inlined, the flag check would very likely be removed. However, one of the things that prevents inlining is indirect calls, like calling a method through an interface or, in this case, calling a delegate. Just about everything in LINQ is delegates and interfaces, and this tends to hamper performance. So in general, use LINQ for convenience, not for performance.
All that said, modern processors have pretty amazing branch predictors, so I would expect the effect of an easy-to-predict branch like that to be fairly small. There are likely other things that have a larger effect on performance.
But the most important thing is to benchmark and/or profile the code instead of just guessing about performance. It is common for people to try to optimize completely the wrong thing, even experienced developers. If you want to get started, check out Measure app performance in Visual Studio and Benchmark.Net.
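For example, a minimal Benchmark.Net sketch for this case could look like the following (the class and method names are just illustrative; adjust the data to match your real workload):
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class WhereFlagBenchmark
{
    private List<int> _list;
    private readonly bool _flag = true;

    private static bool IsNumberBig(int num) => num > 100;

    [GlobalSetup]
    public void Setup() => _list = Enumerable.Range(0, 10_000).ToList();

    // Flag checked inside the predicate, once per element.
    [Benchmark(Baseline = true)]
    public List<int> FlagInsidePredicate() =>
        _list.Where(l => _flag || IsNumberBig(l)).ToList();

    // Flag checked once, before deciding whether to filter at all.
    [Benchmark]
    public List<int> FlagCheckedOnce() =>
        _flag ? _list.ToList() : _list.Where(IsNumberBig).ToList();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<WhereFlagBenchmark>();
}
Running this with and without the flag check will tell you whether the difference is even measurable for your data sizes.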
You could do this extension:
It basically wraps #PMF's answer in an extension method, so you can use it like the Where you already know. It just takes an additional condition parameter that switches the application of the predicate on or off. This comes with the advantage that you can chain it with other LINQ methods just like the plain old Where.
using System.Linq;
using System.Collections.Generic;
public static class LinqEx
{
public static IEnumerable<T> WhereIf<T>(this IEnumerable<T> source, bool condition, Func<T, bool> predicate)
{
return condition
? source.Where(predicate)
: source;
}
}
And then use it like:
var someList = // assume it is a List<int> with some items
var shallFilter = true; // or false
var filteredList = someList.WhereIf(shallFilter, l => IsNumberBig(l)).ToList();
See it in Action in this Fiddle.
Mind that this would make more sense in a DB-related (LINQ to SQL) setting, since here it is really a micro-optimization. For it to show an effect, you'd have to have many items and a very costly predicate.
One word on your code, too:
var numbers = list.Where(l => flag || IsNumberBig(l)).ToList();
If flag is true, flag || X will evaluate to true, regardless of X. X will not even be evaluated.
So, you basically implemented the opposite of your requirement.
See also: Conditional logical OR operator ||
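A tiny demonstration of that short-circuiting (the SideEffect method is made up just to make the evaluation visible):
using System;

static bool SideEffect()
{
    Console.WriteLine("SideEffect was evaluated");
    return true;
}

bool flag = true;
bool a = flag || SideEffect();  // prints nothing: the right-hand side is never evaluated
flag = false;
bool b = flag || SideEffect();  // now "SideEffect was evaluated" is printed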
Related
How do you do "inline functions" in C#? I don't think I understand the concept. Are they like anonymous methods? Like lambda functions?
Note: The answers almost entirely deal with the ability to inline functions, i.e. "a manual or compiler optimization that replaces a function call site with the body of the callee." If you are interested in anonymous (a.k.a. lambda) functions, see #jalf's answer or What is this 'Lambda' everyone keeps speaking of?.
Finally, in .NET 4.5, the CLR allows one to hint/suggest¹ method inlining using the MethodImplOptions.AggressiveInlining value. It is also available in Mono's trunk (committed today).
// The full attribute usage is in mscorlib.dll,
// so should not need to include extra references
using System.Runtime.CompilerServices;
...
[MethodImpl(MethodImplOptions.AggressiveInlining)]
void MyMethod(...)
1. Previously "force" was used here; let me try to clarify the term. As in the comments and the documentation, the method should be inlined "if possible". Especially considering Mono (which is open source), there are some Mono-specific technical limitations concerning inlining, as well as more general ones (like virtual functions). Overall, yes, this is a hint to the compiler, but I guess that is what was asked for.
Inline methods are simply a compiler optimization where the code of a function is rolled into the caller.
There's no mechanism by which to do this in C#, and they're to be used sparingly in languages where they are supported -- if you don't know why they should be used somewhere, they shouldn't be.
Edit: To clarify, there are two major reasons they need to be used sparingly:
It's easy to make massive binaries by using inline in cases where it's not necessary
The compiler tends to know better than you do when something should, from a performance standpoint, be inlined
It's best to leave things alone and let the compiler do its work, then profile and figure out if inline is the best solution for you. Of course, some things just make sense to be inlined (mathematical operators particularly), but letting the compiler handle it is typically the best practice.
Update: Per konrad.kruczynski's answer, the following is true for versions of .NET up to and including 4.0.
You can use the MethodImplAttribute class to prevent a method from being inlined...
[MethodImpl(MethodImplOptions.NoInlining)]
void SomeMethod()
{
// ...
}
...but there is no way to do the opposite and force it to be inlined.
You're mixing up two separate concepts. Function inlining is a compiler optimization which has no impact on the semantics. A function behaves the same whether it's inlined or not.
On the other hand, lambda functions are purely a semantic concept. There is no requirement on how they should be implemented or executed, as long as they follow the behavior set out in the language spec. They can be inlined if the JIT compiler feels like it, or not if it doesn't.
There is no inline keyword in C#, because it's an optimization that can usually be left to the compiler, especially in JIT'ed languages. The JIT compiler has access to runtime statistics which enables it to decide what to inline much more efficiently than you can when writing the code. A function will be inlined if the compiler decides to, and there's nothing you can do about it either way. :)
Cody has it right, but I want to provide an example of what an inline function is.
Let's say you have this code:
private void OutputItem(string x)
{
Console.WriteLine(x);
//maybe encapsulate additional logic to decide
// whether to also write the message to Trace or a log file
}
public IList<string> BuildListAndOutput(IEnumerable<string> x)
{ // let's pretend IEnumerable<T>.ToList() doesn't exist for the moment
IList<string> result = new List<string>();
foreach(string y in x)
{
result.Add(y);
OutputItem(y);
}
return result;
}
The Just-In-Time optimizer could choose to alter the code to avoid repeatedly placing a call to OutputItem() on the stack, so that it would be as if you had written the code like this instead:
public IList<string> BuildListAndOutput(IEnumerable<string> x)
{
IList<string> result = new List<string>();
foreach(string y in x)
{
result.Add(y);
// full OutputItem() implementation is placed here
Console.WriteLine(y);
}
return result;
}
In this case, we would say the OutputItem() function was inlined. Note that it might do this even if the OutputItem() is called from other places as well.
Edited to show a scenario more likely to be inlined.
Do you mean inline functions in the C++ sense? In which the contents of a normal function are automatically copied inline into the callsite? The end effect being that no function call actually happens when calling a function.
Example:
inline int Add(int left, int right) { return left + right; }
If so then no, there is no C# equivalent to this.
Or Do you mean functions that are declared within another function? If so then yes, C# supports this via anonymous methods or lambda expressions.
Example:
static void Example() {
Func<int,int,int> add = (x,y) => x + y;
var result = add(4,6); // 10
}
Yes, exactly; the only distinction is the fact that it returns a value.
Simplification (not using expressions):
List<T>.ForEach takes an action; it doesn't expect a return result.
So an Action<T> delegate would suffice, say:
List<T>.ForEach(param => Console.WriteLine(param));
is the same as saying:
List<T>.ForEach(delegate(T param) { Console.WriteLine(param); });
The difference is that the param type and delegate declaration are inferred from usage, and the braces aren't required for a simple inline method.
Whereas
List<T>.Where takes a function, expecting a result.
So a Func<T, bool> would be expected:
List<T>.Where(param => param.Value == SomeExpectedComparison);
which is the same as:
List<T>.Where(delegate(T param) { return param.Value == SomeExpectedComparison; });
You can also declare these methods inline and assign them to variables, e.g.:
Action myAction = () => Console.WriteLine("I'm doing something Nifty!");
myAction();
or
Func<object, string> myFunction = theObject => theObject.ToString();
string myString = myFunction(someObject);
I hope this helps.
The statement "its best to leave these things alone and let the compiler do the work.." (Cody Brocious) is complete rubish. I have been programming high performance game code for 20 years, and I have yet to come across a compiler that is 'smart enough' to know which code should be inlined (functions) or not. It would be useful to have a "inline" statement in c#, truth is that the compiler just doesnt have all the information it needs to determine which function should be always inlined or not without the "inline" hint. Sure if the function is small (accessor) then it might be automatically inlined, but what if it is a few lines of code? Nonesense, the compiler has no way of knowing, you cant just leave that up to the compiler for optimized code (beyond algorithims).
There are occasions where I do wish to force code to be in-lined.
For example, if I have a complex routine where a large number of decisions are made within a highly iterative block, and those decisions result in similar but slightly differing actions being carried out. Consider, for example, a complex (non-DB-driven) sort comparer where the sorting algorithm sorts the elements according to a number of different unrelated criteria, such as one might do when sorting words according to grammatical as well as semantic criteria for a fast language recognition system. I would tend to write helper functions to handle those actions in order to maintain the readability and modularity of the source code.
I know that those helper functions should be in-lined because that is the way that the code would be written if it never had to be understood by a human. I would certainly want to ensure in this case that there were no function calling overhead.
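The closest thing available in C# today is the AggressiveInlining hint mentioned in another answer. A rough sketch of how such helpers could be annotated (the comparer and helper names are invented for illustration, and the attribute is only a hint, not a guarantee):
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

public class WordComparer : IComparer<string>
{
    public int Compare(string left, string right)
    {
        // The grammatical check runs first; ties fall through to the semantic check.
        int result = CompareGrammatical(left, right);
        return result != 0 ? result : CompareSemantic(left, right);
    }

    // Small helpers kept separate for readability, with a hint to the JIT
    // that we would like them folded into the calling method.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private static int CompareGrammatical(string left, string right) =>
        left.Length.CompareTo(right.Length);

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private static int CompareSemantic(string left, string right) =>
        string.Compare(left, right, StringComparison.Ordinal);
}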
I know this question is about C#. However, you can write inline functions in .NET with F#. See: Use of `inline` in F#
No, there is no such construct in C#, but the .NET JIT compiler could decide to do inline function calls at JIT time. But I actually don't know if it really does such optimizations.
(I think it should :-))
In case your assemblies will be ngen-ed, you might want to take a look at TargetedPatchingOptOut. This will help ngen decide whether to inline methods. MSDN reference
It is still only a declarative hint to optimize though, not an imperative command.
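For illustration, usage looks roughly like this (the method is made up; the attribute lives in System.Runtime and only matters for the .NET Framework 4.x NGen scenario):
using System.Runtime;

public static class MathHelpers
{
    // Tells NGen it may inline this method across native image boundaries,
    // at the cost of opting the method out of targeted servicing patches.
    [TargetedPatchingOptOut("Performance critical to inline across NGen image boundaries")]
    public static int Square(int x)
    {
        return x * x;
    }
}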
Lambda expressions are inline functions! I think that C# doesn't have an extra keyword like inline or anything like that!
I have the following code that performs a null or empty check on any type of object:
public static void IfNullOrEmpty(Expression<Func<string>> parameter)
{
Throw.IfNull(parameter);
if (parameter.GetValue().ToString().Length == 0)
{
throw new ArgumentException("Cannot be empty", parameter.GetName());
}
}
It calls the GetValue extension method below:
public static T GetValue<T>(this Expression<Func<T>> parameter)
{
MemberExpression member;
Expression expression;
member = (MemberExpression)parameter.Body;
expression = member.Expression;
return (T)parameter.Compile()();
}
I am passing in an expression containing a string in this method for testing. This method takes on average 2 ms on my machine (even slower on another machine I'm testing on) which adds up if it is called several times throughout the application. It seems like this method is too slow. What is the fastest way to do this type of null check?
Compiling an expression naturally requires quite some work. What I normally do if this code will run often is that I only compile the expressions once and save the compiled delegate for further usage.
It's possible to keep a "normal" cache, but for a cache to be efficient you need a good hash function, and I don't see how you could build one here. You need to restructure your code a bit so that every place where you use GetValue has access to a compiled delegate instead. Without seeing more code I can't give you any hints about that.
There can be many reasons why you see the following call being faster. Because of the difficulty of hashing, I don't expect that to be the reason. More likely you are seeing the work of a modern CPU that does a lot of guessing to run code fast. If you just ran the same expressions repeatedly, it's possible that the CPU is able to guess more about the next call and run it faster. There is always GC to consider, too.
One way to test the guessing idea would be to create a large array with a few different expressions. Do one test where it is ordered by expression and one where it is random. If my suspicion holds true, the first one should be faster.
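To illustrate what I mean by compiling once and saving the delegate, here is a rough sketch (the class and member names are invented):
using System;
using System.Linq.Expressions;

public class Customer
{
    public string ClientId { get; set; }
}

public static class AccessorCache
{
    // The expression is compiled exactly once, when the type is initialized;
    // every later call reuses the compiled delegate instead of calling Compile() again.
    private static readonly Func<Customer, string> GetClientId = BuildAccessor();

    private static Func<Customer, string> BuildAccessor()
    {
        Expression<Func<Customer, string>> expr = c => c.ClientId;
        return expr.Compile();
    }

    public static string ReadClientId(Customer customer) => GetClientId(customer);
}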
If I'm reading your code right, the only reason you need an Expression is so that, if the check fails, you'll be able to extract the name of the parameter and pass it into the exception you'll throw, right? If so, that's an awfully steep price to pay for a slightly more convenient error message that (hopefully) only occurs in a very tiny percentage of cases.
The 2ms overhead is slightly higher than I'd expect, but substantial overhead is hard or impossible to avoid here. You're essentially forcing the runtime to traverse an expression tree, translate it into MSIL and then translate and optimize that MSIL again into executable code through the JIT, just to do a != null check which will always succeed unless a developer has made a mistake somewhere.
You could probably come up with some sort of caching mechanism; the Entity Framework caches expressions by traversing the expression tree and building a hash of it and uses that as the key of a dictionary where the compiled expression is stored as a delegate. It'd be substantially cheaper than sending it through the JIT on every call, but it'd still be orders of magnitude more expensive when compared to a simpler != null check which takes nanoseconds or less (especially when you consider modern branch-predicting CPUs).
So in my opinion, this approach is a nice idea, but simply not worth it when you consider the cost and when the alternative is pretty painless (especially with the new nameof operator). It's also fairly brittle, because what if I'm a developer who thinks he can do this:
Throw.IfNullOrEmpty(() => clientId + "something")
Your cast to MemberExpression will fail there.
Likewise, it would be reasonable for someone to think that, because you pass in an expression and an expression is just code as data, it would be safe to do this:
Throw.IfNullOrEmpty(() => parent.Child.MightBeNull.MightAlsoBeNull.ClientID)
It's perfectly possible to safely evaluate that expression if you partially traversed the expression tree, but in your example the whole expression is compiled and executed at once and will likely fail with a NullReferenceException there.
I guess it comes down to the fact that an argument of type Expression&lt;Func&lt;T&gt;&gt; is just not strict enough for use as a null check. You could do all sorts of weird stuff in it and the compiler would be perfectly happy, but you'd get unexpected results at runtime.
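For comparison, the painless alternative I have in mind looks roughly like this (the Throw helper signature here is hypothetical; only nameof is the actual language feature):
using System;

public static class Throw
{
    public static void IfNullOrEmpty(string value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
        if (value.Length == 0)
        {
            throw new ArgumentException("Cannot be empty", parameterName);
        }
    }
}

// Call site: the parameter name is resolved at compile time, so there is
// no expression tree to build, compile, or traverse at runtime.
Throw.IfNullOrEmpty(clientId, nameof(clientId));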
I was just looking at the question "SubQuery using Lambda Expression" and wondered about compiler optimization of Linq predicates.
Suppose I had a List<string> called names, and I was looking for the items with the shortest string length. So we have the query names.Where(x => x.Length == names.Min(y => y.Length)) (from the question mentioned above). Simple enough.
Now, we know the C# specification does not allow you to modify a collection while enumerating it. So I believe it is technically safe to assume the above call to Min() will always return the same value for every call.
But, my hypothesis is the compiler truly has no way of knowing what the lambda inside the Enumerable.Min extension method returns. Since, for example we could do:
int i = 0;
return names.Where(x => x.Length == names.Min(y => ++i));
Which would mean the query in question is really O(n²) - the result of Min() will be calculated for each iteration. And to get the desired O(n) implementation, you would have to be explicit:
int minLength = names.Min(y => y.Length);
return names.Where(x => x.Length == minLength);
Is my hypothesis correct, or is there something special about Linq or the C# specification that allows the compiler to look inside the lambda and optimize this call to Min()?
#spender is absolutely correct. Consider the following snippet:
List<string> names = new List<string>(new[] { "r", "abcde", "bcdef", "cdefg", "q" });
return names.Where(x =>
{
bool b = (x.Length == names.Min(y => y.Length));
names = new List<string>(new[] { "ab" });
return b;
});
This will return only "r", and not "q", because while the old reference to names is being iterated (foreach x), the call to Min after the first iteration is actually made on the new instance of names. But a human looking at the query at the top of the question can say for certain that nothing gets modified. So my question still stands: is the compiler smart enough to see this?
wondered about compiler optimization of Linq predicates.
The C# compiler does not know how the BCL types are implemented. It could look at the assemblies that you reference, but those can change at any time. The compiler cannot assume that the machine the compiled program will run on will have the same binaries. Therefore, the C# compiler cannot legally perform these optimizations, because you could tell the difference.
The JIT is in a position to make such optimizations (it does not at the moment).
Now, we know the C# specification does not allow you to modify a
collection while enumerating it. So I believe it is technically safe
to assume the above call to Min() will always return the same value
for every call.
The specification of C# knows nothing about libraries. It does not say this at all. Each implementation of IEnumerable can decide whether it wants to allow such behavior or not.
But, my hypothesis is the compiler truly has no way of knowing what
the lambda inside the Enumerable.Min extension method returns.
Yes, it could do anything. At runtime the JIT could deduce such properties, but it does not. Note that deducing even basic facts is hard because there are things like reflection, runtime code generation, and multi-threading.
Is my hypothesis correct, or is there something special about Linq or
the C# specification that allows the compiler to look inside the
lambda and optimize this call to Min()?
No. LINQ has library-only optimizations. LINQ to objects is executed exactly as you wrote it. Other LINQ providers do this differently.
If you wonder whether the JIT will perform some advanced optimization the answer is usually no as of .NET 4.5.
The C# compiler works in passes. Each pass takes some complex language feature and converts it to a simpler one. Quite often, context is lost in this conversion. Lambda expressions are one of those steps: each lambda is converted to a class, that class is then instantiated, and its method is passed as the delegate. The later compilation passes don't even look inside the lambda. So the compiler that produces the IL code doesn't even know there are any lambdas and just sees a bunch of classes. And those classes don't give it enough information to infer what you propose.
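Roughly speaking, a capturing lambda is lowered to something like the following sketch (the generated class and member names are invented here; the real compiler-generated identifiers are unspeakable):
using System;
using System.Collections.Generic;
using System.Linq;

public static class LoweringSketch
{
    // What you write:
    public static IEnumerable<string> Shortest(List<string> names)
    {
        int minLength = names.Min(y => y.Length);
        return names.Where(x => x.Length == minLength);
    }

    // Approximately what the compiler emits for the capturing lambda instead:
    private sealed class DisplayClass
    {
        public int minLength;

        public bool Predicate(string x)
        {
            return x.Length == this.minLength;
        }
    }

    public static IEnumerable<string> ShortestLowered(List<string> names)
    {
        var closure = new DisplayClass { minLength = names.Min(y => y.Length) };
        return names.Where(new Func<string, bool>(closure.Predicate));
    }
}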
If I am comparing two known value types, will I get better performance from the | or the || operator in C#?
My particular use case is a bool member in the class indicating stale data, being activated by a loop method like this:
private bool _stale;
private HashSet<Foo> _foos;
Then another method loops and possibly activates this flag
foreach (var foo in foos)
{
_stale = _foos.Add(foo) | _stale;
//Or is the following line better?
//_stale = _foos.Add(foo) || _stale;
}
I suppose I'm asking if the overhead of the short-circuiting operator is enough that I wouldn't use it for checking against an already assigned value type...
I would use the Or Assignment operator:
_stale |= added;
Or:
_stale |= _foos.Add(foo);
This operator is specifically designed for this purpose, which makes the intent clear (and it will most likely also be the best choice in terms of performance).
As for the actual performance, this level of micro-optimization is typically almost impossible to measure, as any difference is going to be so much smaller than the cost of your actual operations (the HashSet&lt;T&gt;.Add call) that clarity of code is far more important. You could build a test to measure this, but it is incredibly unlikely to be reliably different enough in terms of performance to matter.
A lot of questions are being answered on Stack Overflow, with members specifying how to solve these real world/time problems using lambda expressions.
Are we overusing it, and are we considering the performance impact of using lambda expressions?
I found a few articles that explore the performance impact of lambdas vs. anonymous delegates vs. for/foreach loops, with different results:
Anonymous Delegates vs Lambda Expressions vs Function Calls Performance
Performance of foreach vs. List.ForEach
.NET/C# Loop Performance Test (FOR, FOREACH, LINQ, & Lambda).
DataTable.Select is faster than LINQ
What should the evaluation criteria be when choosing the appropriate solution, apart from the obvious reason that lambdas give more concise and readable code?
Even though I will focus on point one, I begin by giving my 2 cents on the whole issue of performance. Unless differences are big or usage is intensive, usually I don't bother about microseconds that, when added up, don't amount to any visible difference to the user. I emphasize that I only don't care when considering non-intensively called methods. Where I do have special performance considerations is in the way I design the application itself. I care about caching, about the use of threads, about clever ways to call methods (whether to make several calls or to try to make only one call), whether to pool connections or not, etc. In fact I usually don't focus on raw performance, but on scalability. I don't care if it runs better by a tiny slice of a nanosecond for a single user, but I care a lot about being able to load the system with big amounts of simultaneous users without noticing the impact.
Having said that, here goes my opinion about point 1. I love anonymous methods. They give me great flexibility and code elegance. The other great feature of anonymous methods is that they allow me to directly use local variables from the containing method (from a C# perspective, not from an IL perspective, of course). They spare me loads of code oftentimes. When do I use anonymous methods? Every single time the piece of code I need isn't needed elsewhere. If it is used in two different places, I don't like copy-paste as a reuse technique, so I'll use a plain ol' delegate. So, just as shoosh answered, it isn't good to have code duplication. In theory there are no performance differences, as anonymous methods are C# compiler tricks, not IL constructs.
Most of what I think about anonymous methods applies to lambda expressions, as the latter can be used as a compact syntax to represent anonymous methods. Let's assume the following method:
public static void DoSomethingMethod(string[] names, Func<string, bool> myExpression)
{
Console.WriteLine("Lambda used to represent an anonymous method");
foreach (var item in names)
{
if (myExpression(item))
Console.WriteLine("Found {0}", item);
}
}
It receives an array of strings and for each one of them, it will call the method passed to it. If that method returns true, it will say "Found...". You can call this method the following way:
string[] names = {"Alice", "Bob", "Charles"};
DoSomethingMethod(names, delegate(string p) { return p == "Alice"; });
But, you can also call it the following way:
DoSomethingMethod(names, p => p == "Alice");
There is no difference in the IL between the two, though the one using the lambda expression is much more readable. Once again, there is no performance impact, as these are all C# compiler tricks (not JIT compiler tricks). Just as I didn't feel we were overusing anonymous methods, I don't feel we are overusing lambda expressions to represent anonymous methods. Of course, the same logic applies to repeated code: don't repeat lambdas, use regular delegates. There are other restrictions leading you back to anonymous methods or plain delegates, like out or ref argument passing.
The other nice thing about lambda expressions is that the exact same syntax doesn't have to represent an anonymous method. Lambda expressions can also represent... you guessed it, expressions. Take the following example:
public static void DoSomethingExpression(string[] names, System.Linq.Expressions.Expression<Func<string, bool>> myExpression)
{
Console.WriteLine("Lambda used to represent an expression");
BinaryExpression bExpr = myExpression.Body as BinaryExpression;
if (bExpr == null)
return;
Console.WriteLine("It is a binary expression");
Console.WriteLine("The node type is {0}", bExpr.NodeType.ToString());
Console.WriteLine("The left side is {0}", bExpr.Left.NodeType.ToString());
Console.WriteLine("The right side is {0}", bExpr.Right.NodeType.ToString());
if (bExpr.Right.NodeType == ExpressionType.Constant)
{
ConstantExpression right = (ConstantExpression)bExpr.Right;
Console.WriteLine("The value of the right side is {0}", right.Value.ToString());
}
}
Notice the slightly different signature. The second parameter receives an expression and not a delegate. The way to call this method would be:
DoSomethingExpression(names, p => p == "Alice");
Which is exactly the same as the call we made when creating an anonymous method with a lambda. The difference here is that we are not creating an anonymous method, but an expression tree. It is due to these expression trees that we can translate lambda expressions to SQL, which is what LINQ to SQL does, for instance, instead of executing stuff in the engine for each clause like the Where, the Select, etc. The nice thing is that the calling syntax is the same whether you're creating an anonymous method or sending an expression.
My answer will not be popular.
I believe Lambda's are 99% always the better choice for three reasons.
First, there is ABSOLUTELY nothing wrong with assuming your developers are smart. Other answers have an underlying premise that every developer but you is stupid. Not so.
Second, lambdas (et al.) are a modern syntax, and tomorrow they will be more commonplace than they already are today. Your project's code should flow from current and emerging conventions.
Third, writing code "the old-fashioned way" might seem easier to you, but it's not easier for the compiler. This is important: legacy approaches have little opportunity to be improved as the compiler is revved. Lambdas (et al.), which rely on the compiler to expand them, can benefit as the compiler deals with them better over time.
To sum up:
Developers can handle it
Everyone is doing it
There's future potential
Again, I know this will not be a popular answer. And believe me "Simple is Best" is my mantra, too. Maintenance is an important aspect to any source. I get it. But I think we are overshadowing reality with some cliché rules of thumb.
// Jerry
Code duplication.
If you find yourself writing the same anonymous function more than once, it shouldn't be one.
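For example (the User type and the predicate here are made up for illustration), instead of repeating the same lambda in two places:
using System.Collections.Generic;
using System.Linq;

public class User
{
    public bool IsActive { get; set; }
    public string Role { get; set; }
}

public static class Reports
{
    // Duplicated anonymous function:
    public static IEnumerable<User> ActiveAdminsDuplicated(List<User> users) =>
        users.Where(u => u.IsActive && u.Role == "Admin");

    public static int ActiveAdminCountDuplicated(List<User> users) =>
        users.Count(u => u.IsActive && u.Role == "Admin");

    // Better: name the predicate once and reuse it.
    private static bool IsActiveAdmin(User u) => u.IsActive && u.Role == "Admin";

    public static IEnumerable<User> ActiveAdmins(List<User> users) =>
        users.Where(IsActiveAdmin);

    public static int ActiveAdminCount(List<User> users) =>
        users.Count(IsActiveAdmin);
}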
Well, when we are talking about delegate usage, there shouldn't be any difference between lambdas and anonymous methods -- they are the same thing, just with different syntax. And named methods (used as delegates) are also identical from the runtime's viewpoint. The difference, then, is between using delegates vs. inline code - i.e.
list.ForEach(s=>s.Foo());
// vs.
foreach(var s in list) { s.Foo(); }
(where I would expect the latter to be quicker)
And equally, if you are talking about anything other than in-memory objects, lambdas are one of your most powerful tools in terms of maintaining type checking (rather than parsing strings all the time).
Certainly, there are cases when a simple foreach with code will be faster than the LINQ version, as there will be fewer invokes to do, and invokes cost a small but measurable time. However, in many cases, the code is simply not the bottleneck, and the simpler code (especially for grouping, etc) is worth a lot more than a few nanoseconds.
Note also that in .NET 4.0 there are additional Expression nodes for things like loops, commas, etc. The language doesn't support them, but the runtime does. I mention this only for completeness: I'm certainly not saying you should use manual Expression construction where foreach would do!
I'd say that the performance differences are usually so small (and in the case of loops, obviously, if you look at the results of the 2nd article (by the way, Jon Skeet has a similar article here)) that you should almost never choose a solution for performance reasons alone, unless you are writing a piece of software where performance is absolutely the number one non-functional requirement and you really have to do micro-optimizations.
When to choose what? I guess it depends on the situation but also the person. Just as an example, some people prefer List.ForEach over a normal foreach loop. I personally prefer the latter, as it is usually more readable, but who am I to argue against this?
Rules of thumb:
Write your code to be natural and readable.
Avoid code duplication (lambda expressions might require a little extra diligence).
Optimize only when there's a problem, and only with data to back up what that problem actually is.
Any time the lambda simply passes its arguments directly to another function. Don't create a lambda for function application.
Example:
var coll = new ObservableCollection<int>();
myInts.ForEach(x => coll.Add(x));
Is nicer as:
var coll = new ObservableCollection<int>();
myInts.ForEach(coll.Add);
The main exception is where C#'s type inference fails for whatever reason (and there are plenty of times that's true).
If you need recursion, don't use lambdas, or you'll end up getting very distracted!
Lambda expressions are cool. Over the older delegate syntax they have a few advantages: they can be converted to either an anonymous function or an expression tree, parameter types are inferred from the declaration, they are cleaner and more concise, etc. I see no real value in not using a lambda expression when you need an anonymous function. One small advantage of the earlier style is that you can omit the parameter declarations entirely if they are not used. Like
Action<int> a = delegate { }; //takes one argument, but no argument specified
This is useful when you have to declare an empty delegate that does nothing, but it is not a strong enough reason to avoid lambdas.
Lambdas let you write quick anonymous methods. That makes lambdas meaningless wherever anonymous methods are meaningless, i.e. where named methods make more sense. Compared to named methods, anonymous methods can be disadvantageous (not a lambda-expression issue per se, but since these days lambdas widely represent anonymous methods, it is relevant):
because they tend to lead to logic duplication (often do; reuse is difficult)
when it is unnecessary to write one, like:
//this is unnecessary
Func<string, int> f = x => int.Parse(x);
//this is enough
Func<string, int> f = int.Parse;
since writing an anonymous iterator block is impossible:
Func<IEnumerable<int>> f = () => { yield return 0; }; //impossible
since recursive lambdas require one more line of quirkiness, like
Func<int, int> f = null;
f = x => (x <= 1) ? 1 : x * f(x - 1);
well, since reflection is kind of messier, but that is moot, isn't it?
Apart from point 3, the rest are not strong reasons not to use lambdas.
Also see this thread about what is disadvantageous about Func/Action delegates, since often they are used along with lambda expressions.