What is the specificity of var? - c#

So, as far as a question to a real problem goes, this probably isn't a very good question, but it's bugging me and I can't find an answer, so I consider that to be a problem.
What is the specificity of var? The MSDN reference on it states the following:
An implicitly typed local variable is strongly typed just as if you had declared the type yourself
But it doesn't seem to say anywhere what type it is strongly typed as. For example, if I have the following:
var x = new Tree();
But I then don't call any methods of Tree, is x still strongly typed to Tree? Or could I have something like the following?
var x = new Tree();
x = new Object();
I'm guessing this isn't allowed, but I don't have access to a compiler right now, and I'm really wondering if there are any caveats that allow unexpected behaviour like the above.

It's strongly typed to the type of the expression on the right side:
The var keyword instructs the compiler to infer the type of the variable from the expression on the right side of the initialization statement.
From here.

It's tied to the type on the right-side of the equals-sign, so in this case, it is equivalent to:
Tree x = new Tree();
Regardless of whatever interfaces or base classes are tied to Tree. If you need x to be a less derived type, you have to declare it specifically, like:
Plant x = new Tree();
// or
IHasLeaves x = new Tree();

Yes, in your example x is strongly typed to Tree just as if you had declared the type yourself.
Your second example would not compile because you cannot assign an Object to a variable typed as Tree.

No, it is exactly the same as if you had typed Tree x = new Tree();. The only unambiguous inference the compiler can make is the exact type of the right-hand-side expression, so it won't suddenly become ITree x.
So this doesn't work:
Tree x = new Tree();
x = new Object(); //cannot convert implicitly
If you are curious, dynamic is closer to the behaviour you expect:
dynamic x = new Tree();
x = new Object();

In the example:
var x = new Tree();
is the same as
Tree x = new Tree();
I've found it is always better to use "var" since it facilitates code refactoring.
Also, adding
var x = new Object();
in the same scope would break compilation due to the fact that you cannot declare a variable twice.

var is neither a type nor does it make the variable something special. It tells the compiler to infer the type of the variable AT COMPILE TIME by analyzing the initialization expression on the right hand side of the assignment operator.
These two expressions are equivalent:
Tree t = new Tree();
and
var t = new Tree();
Personally, I prefer to use var when the type name is mentioned explicitly on the right hand side, or when the exact type is complicated and not really relevant, as for results returned from LINQ queries. These LINQ results are often just intermediate results that are processed further:
var x = new Dictionary<string, List<int>>();
is easier to read than the following statement and yet very clear:
Dictionary<string, List<int>> x = new Dictionary<string, List<int>>();
var query = someSource
.Where(x => x.Name.StartsWith("A"))
.GroupBy(x => x.State)
.OrderBy(x => x.Date);
Here query is of type IOrderedEnumerable<IGrouping<string, SomeType>>. Who cares?
When the type name does not appear on the right hand side and is simple, then I prefer to write it explicitly as it doesn't simplify anything to use var:
int y = 7;
string s = "hello";
And of course, if you create anonymous types, you must use var because you have no type name:
var z = new { Name = "Coordinate", X = 5.343, Y = 76.04 };
The var keyword was introduced together with LINQ in order to simplify its use and to allow creating types on the fly, simulating the way you would work with SQL:
SELECT Name, Date FROM Person
var result = DB.Persons.Select(p => new { p.Name, p.Date });


implicit type var [duplicate]

Possible Duplicate:
Use of var keyword in C#
Being relatively new to C#, I was wondering about the motivation MS had to introduce the var implicitly typed variable. The documentation says:
An implicitly typed local variable is strongly typed just as if you
had declared the type yourself, but the compiler determines the type.
Some lines further:
In many cases the use of var is optional and is just a syntactic
convenience
This is all nice but in my mind, this will only cause confusion.
Say you are reviewing this foreach loop:
foreach (var item in custQuery){
// A bunch of code...
}
Instead of reviewing the content and the semantics of the loop, I would lose precious time looking up the item's type!
I would prefer the following instead:
foreach (String item in custQuery){
// A bunch of code...
}
The question is: I read that implicitly typed variables help when dealing with LINQ, but does it really help to use them in other scenarios?
The var keyword was needed when LINQ was introduced, so that the language could create a strongly typed variable for an anonymous type.
Example:
var x = new { Y = 42 };
Now x is a strongly typed variable that has a specific type, but there is no name for that type. The compiler knows what x.Y means, so you don't have to use reflection to get to the data in the object, as you would if you did:
object x = new { Y = 42 };
Now x is of the type object, so you can't use x.Y.
When used with LINQ it can for example look like this:
var x = from item in source select new { X = item.X, Y = item.Y };
The x variable is now an IEnumerable<T> where T is a specific type which doesn't have a name.
Since the var keyword was introduced, it has also been used to make code more readable, and misused to save keystrokes.
An example where it makes the code more readable would be:
var list =
new System.Collections.Generic.List<System.Windows.Forms.Message>();
instead of:
System.Collections.Generic.List<System.Windows.Forms.Message> list =
new System.Collections.Generic.List<System.Windows.Forms.Message>();
This is a good use of the var keyword, as the type already exists in the statement. A case where the keyword can be misused is in a statement like:
var result = SomeMethod();
As the SomeMethod name doesn't give any indication of what type it returns, it's not obvious what the type of the variable will be. In this case you should write out the type rather than using the var keyword.
I think some of the motivation was to allow something like this -
List<int> list = new List<int>();
to be turned into this -
var list = new List<int>();
The second example is shorter and more readable, but still clearly expresses the intent of the code. There are instances when it will be less clear, but in lots of situations you get conciseness with no loss of clarity.
var is really needed for anonymous types, which are used in LINQ quite a bit:
var results =
from item in context.Table
select new {Name=item.Name, id=item.id};
Since the collection is of an anonymous type, it cannot be named. It has a real type, but not one with a name before compilation.
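To see how "strongly typed with no name" plays out, here is a minimal sketch (the variable names are illustrative):

```csharp
var point = new { X = 10, Y = 20 };
int sum = point.X + point.Y;     // resolved at compile time; no reflection involved

// Reassignment is allowed only with the *same* anonymous type
// (identical property names, types and order):
point = new { X = 1, Y = 2 };    // compiles
// point = new { Y = 2, X = 1 }; // would not compile: a different anonymous type
```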

Parallel.ForEach and IGrouping source item issue

I am trying to parallelize a query with a groupby statement in it. The query is similar to
var colletionByWeek = (
from item in objectCollection
group item by item.WeekStartDate into weekGroups
select weekGroups
).ToList();
If I use Parallel.ForEach with a shared variable like below, it works fine. But I don't want to use shared variables in a parallel query.
var pSummary=new List<object>();
Parallel.ForEach(colletionByWeek, week =>
{
pSummary.Add(new object()
{
p1 = week.First().someprop,
p2= week.key,
.....
});
}
);
So, I have changed the above parallel statement to use local variables. But the compiler complains that the source type IEnumerable<IGrouping<DateTime, object>> cannot be converted into System.Collections.Concurrent.OrderablePartitioner<IEnumerable<IGrouping<DateTime, object>>>.
Am I giving a wrong source type, or is the IGrouping type handled differently? Any help would be appreciated. Thanks!
Parallel.ForEach<IEnumerable<IGrouping<DateTime, object>>, IEnumerable<object>>
(colletionByWeek,
() => new List<object>(),
(week, loop, summary) =>
{
summary.Add(new object()
{
p1 = week.First().someprop,
p2= week.key,
.....
});
return new List<object>();
},
(finalResult) => pSummary.AddRange(finalResult)
);
The type parameter TSource is the element type, not the collection type. And the second type parameter represents the local storage type, so it should be List<T>, if you want to Add() to it. This should work:
Parallel.ForEach<IGrouping<DateTime, object>, List<object>>
That's assuming you don't actually have objects there, but some specific type.
Although explicit type parameters are not even necessary here. The compiler should be able to infer them.
But there are other problems in the code:
- you shouldn't return a new List from the main delegate, but summary
- the delegate that processes finalResult might be executed concurrently on multiple threads, so you should use locks or a concurrent collection there.
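Putting those fixes together, the pattern might look like this (a sketch only; the anonymous result type stands in for whatever summary type you actually need):

```csharp
var pSummary = new List<object>();
var sync = new object();

Parallel.ForEach(
    colletionByWeek,                      // IEnumerable<IGrouping<DateTime, SomeItem>>
    () => new List<object>(),             // per-thread local accumulator
    (week, loopState, summary) =>
    {
        summary.Add(new { First = week.First(), Key = week.Key });
        return summary;                   // return the same list, not a new one
    },
    finalResult =>
    {
        // this delegate can run concurrently, so guard the shared list
        lock (sync)
        {
            pSummary.AddRange(finalResult);
        }
    });
```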
I'm going to skip the 'Are you sure you even need to optimize this' stage, and assume you have a performance issue which you hope to solve by parallelizing.
First of all, you're not doing yourself any favors trying to use Parallel.Foreach<> for this task. I'm pretty sure you will get a readable and more optimal result using PLINQ:
var random = new Random();
var weeks = new List<Week>();
for (int i=0; i<1000000; i++)
{
weeks.Add(
new Week {
WeekStartDate = DateTime.Now.Date.AddDays(7 * random.Next(0, 100))
});
}
var parallelCollectionByWeek =
(from item in weeks.AsParallel()
group item by item.WeekStartDate into weekGroups
select new
{
p1 = weekGroups.First().WeekStartDate,
p2 = weekGroups.Key,
}).ToList();
It's worth noting that there is some overhead associated with parallelizing the GroupBy operator, so the benefit will be marginal at best. (Some crude benchmarks hint at a 10-20% speed up)
Apart from that, the reason you're getting a compile error is because the first Type parameter is supposed to be an IGrouping<DateTime, object> and not an IE<IG<..,..>>.

Efficiency of creating delegate instance inline of LINQ query?

Following are two examples that do the same thing in different ways. I'm comparing them.
Version 1
For the sake of an example, define any method to create and return an ExpandoObject from an XElement based on business logic:
var ToExpando = new Func<XElement, ExpandoObject>(xClient =>
{
    dynamic o = new ExpandoObject();
    o.OnlineDetails = new ExpandoObject();
    o.OnlineDetails.Password = xClient.Element(XKey.onlineDetails).Element(XKey.password).Value;
    o.OnlineDetails.Roles = xClient.Element(XKey.onlineDetails).Element(XKey.roles).Elements(XKey.roleId).Select(xroleid => xroleid.Value);
    // More fields TBD.
    return o;
});
Call the above delegate from a LINQ to XML query:
var qClients =
from client in xdoc.Root.Element(XKey.clients).Elements(XKey.client)
select ToExpando(client);
Version 2
Do it all in the LINQ query, including creation and call to Func delegate.
var qClients =
from client in xdoc.Root.Element(XKey.clients).Elements(XKey.client)
select (new Func<ExpandoObject>(() =>
{
dynamic o = new ExpandoObject();
o.OnlineDetails = new ExpandoObject();
o.OnlineDetails.Password = client.Element(XKey.onlineDetails).Element(XKey.password).Value;
o.OnlineDetails.Roles = client.Element(XKey.onlineDetails).Element(XKey.roles).Elements(XKey.roleId).Select(xroleid => xroleid.Value);
// More fields TBD.
return o;
}))();
Considering delegate creation is in the select part, is Version 2 inefficient? Is it managed or optimized by either the C# compiler or runtime so it won't matter?
I like Version 2 for its tightness (keeping the object creation logic in the query), but am aware it might not be viable depending on what the compiler or runtime does.
The latter approach looks pretty horrible to me. I believe it will have to genuinely create a new delegate each time as you're capturing a different client each time, but personally I wouldn't do it that way at all. Given that you've got real statements in there, why not write a normal method?
private static ExpandoObject ToExpando(XElement client)
{
// Possibly use an object initializer instead?
dynamic o = new ExpandoObject();
o.OnlineDetails = new ExpandoObject();
o.OnlineDetails.Password = client.Element(XKey.onlineDetails)
.Element(XKey.password).Value;
o.OnlineDetails.Roles = client.Element(XKey.onlineDetails)
.Element(XKey.roles)
.Elements(XKey.roleId)
.Select(xroleid => xroleid.Value);
return o;
}
and then query it with:
var qClients = xdoc.Root.Element(XKey.clients)
.Elements(XKey.client)
.Select(ToExpando);
I would be much more concerned about the readability of the code than the performance of creating delegates, which is generally pretty quick. I don't think there's any need to use nearly as many lambdas as you currently seem keen to do. Think about when you come back to this code in a year's time. Are you really going to find the nested lambda easier to understand than a method?
(By the way, separating the conversion logic into a method makes that easy to test in isolation...)
EDIT: Even if you do want to do it all in the LINQ expression, why are you so keen to create another level of indirection? Just because query expressions don't allow statement lambdas? Given that you're doing nothing but a simple select, that's easy enough to cope with:
var qClients = xdoc.Root
.Element(XKey.clients)
.Elements(XKey.client)
.Select(client => {
dynamic o = new ExpandoObject();
o.OnlineDetails = new ExpandoObject();
o.OnlineDetails.Password = client.Element(XKey.onlineDetails)
.Element(XKey.password).Value;
o.OnlineDetails.Roles = client.Element(XKey.onlineDetails)
.Element(XKey.roles)
.Elements(XKey.roleId)
.Select(xroleid => xroleid.Value);
return o;
});
It is true that your second version creates new Func instance repeatedly - however, this just means allocating some small object (closure) and using pointer to a function. I don't think this is a large overhead compared to dynamic lookups that you need to perform in the body of the delegate (to work with dynamic objects).
Alternatively, you could declare a local lambda function like this:
Func<XElement, ExpandoObject> convert = client => {
dynamic o = new ExpandoObject();
o.OnlineDetails = new ExpandoObject();
o.OnlineDetails.Password =
client.Element(XKey.onlineDetails).Element(XKey.password).Value;
o.OnlineDetails.Roles = client.Element(XKey.onlineDetails).
Element(XKey.roles).Elements(XKey.roleId).
Select(xroleid => xroleid.Value);
// More fields TBD.
return o;
};
var qClients =
from client in xdoc.Root.Element(XKey.clients).Elements(XKey.client)
select convert(client);
This way, you can create just a single delegate, but keep the code that does the conversion close to the code that implements the query.
Another option would be to use anonymous types instead - what are the reasons for using ExpandoObject in your scenario? The only limitation of anonymous types would be that you may not be able to access them from other assemblies (they are internal), but working with them using dynamic should be fine...
Your select could look like:
select new { OnlineDetails = new { Password = ..., Roles = ... }}
Finally, you could also use Reflection to convert anonymous type to ExpandoObject, but that would probably be even more inefficient (i.e. very difficult to write efficiently)
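For completeness, the Reflection route could be sketched like this (a one-off helper, not tuned for performance; it relies on the fact that ExpandoObject implements IDictionary<string, object>):

```csharp
using System.Collections.Generic;
using System.Dynamic;

static ExpandoObject AnonymousToExpando(object source)
{
    IDictionary<string, object> expando = new ExpandoObject();
    foreach (var property in source.GetType().GetProperties())
        expando[property.Name] = property.GetValue(source, null);
    return (ExpandoObject)expando;
}

// Usage:
dynamic client = AnonymousToExpando(new { Name = "Ann", Roles = new[] { "admin" } });
// client.Name is now accessible dynamically
```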

Member by Member copy

In an application we have a set of ORM objects and a set of business objects. Most of the time we're simply doing a member by member copy. Other times we process the data slightly. For instance:
tEmployee emp = new tEmployee();
emp.Name = obj.Name;
emp.LastName = obj.LastName;
emp.Age = obj.Age;
emp.LastEdited = obj.LastEdited.ToGMT();
Now this works just fine, and is rather fast, but not exactly terse when it comes to coding. Some of our objects have up to 40 members, so doing a copy like this can get rather tedious. Granted, you only need two methods for to/from conversion, but I'd like to find a better way to do this.
Reflection is a natural choice, but in a benchmark I found that execution time was about 100x slower when using reflection.
Is there a better way to go about this?
Clarification:
I'm converting from one type to another. In the above example obj is of type BLogicEmployee and emp is of type tEmployee. They share member names, but that is it.
You might want to check out AutoMapper.
If you don't mind it being a bit slow the first time you can compile a lambda expression:
public static class Copier<T>
{
private static readonly Action<T, T> _copier;
static Copier()
{
var x = Expression.Parameter(typeof(T), "x");
var y = Expression.Parameter(typeof(T), "y");
var expressions = new List<Expression>();
foreach (var property in typeof(T).GetProperties())
{
if (property.CanWrite)
{
var xProp = Expression.Property(x, property);
var yProp = Expression.Property(y, property);
expressions.Add(Expression.Assign(yProp, xProp));
}
}
var block = Expression.Block(expressions);
var lambda = Expression.Lambda<Action<T, T>>(block, x, y);
_copier = lambda.Compile();
}
public static void CopyTo(T from, T to)
{
_copier(from, to);
}
}
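Usage of the class above would then look something like this (Person is an illustrative type; note that Expression.Assign requires .NET 4 or later):

```csharp
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

var source = new Person { Name = "Ann", Age = 30 };
var target = new Person();

// One compiled-lambda call copies every writable property:
Copier<Person>.CopyTo(source, target);
// target now has Name == "Ann" and Age == 30
```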
Reflection can be sped up an awful lot if you use delegates. Basically, you can create a pair of delegates for each getter/setter pair, and then execute those - it's likely to go very fast. Use Delegate.CreateDelegate to create a delegate given a MethodInfo etc. Alternatively, you can use expression trees.
If you're creating a new object, I already have a bunch of code to do this in MiscUtil. (It's in the MiscUtil.Reflection.PropertyCopy class.) That uses reflection for properties to copy into existing objects, but a delegate to convert objects into new ones. Obviously you can adapt it to your needs. I'm sure if I were writing it now I'd be able to avoid the reflection for copying using Delegate.CreateDelegate, but I'm not about to change it :)
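A sketch of the Delegate.CreateDelegate approach for a single getter/setter pair (Person and its Name property are illustrative; in practice you would build and cache one delegate pair per property):

```csharp
var nameProperty = typeof(Person).GetProperty("Name");

var getter = (Func<Person, string>)Delegate.CreateDelegate(
    typeof(Func<Person, string>), nameProperty.GetGetMethod());
var setter = (Action<Person, string>)Delegate.CreateDelegate(
    typeof(Action<Person, string>), nameProperty.GetSetMethod());

// After this one-time setup, a copy is just two direct delegate calls,
// avoiding the per-call cost of PropertyInfo.GetValue/SetValue:
setter(target, getter(source));
```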
Consider using AutoMapper. From its documentation:
.. AutoMapper works best as long as
the names of the members match up to
the source type's members. If you have
a source member called "FirstName",
this will automatically be mapped to a
destination member with the name
"FirstName".
This will save you a great deal of explicit mapping, and AutoMapper of course allows for the customization of particular mappings along the lines of:
Mapper.CreateMap<Model.User, Api.UserInfo>()
.ForMember(s => s.Address, opt => opt.Ignore())
.ForMember(s => s.Uri, opt => opt.MapFrom(c => HttpEndpoint.GetURI(c)))
Object.MemberwiseClone might be useful if all you need is a shallow clone. Not sure how well it performs though, and obviously any complex objects would need additional handling to ensure a proper copy.
See if you can use this.
RECAP: the class must be marked [Serializable] for this to work.
public static T DeepClone<T>(T obj)
{
using (var ms = new MemoryStream())
{
var formatter = new BinaryFormatter();
formatter.Serialize(ms, obj);
ms.Position = 0;
return (T) formatter.Deserialize(ms);
}
}
Look at AutoMapper, it can automatically map your objects if your fields match:
http://automapper.codeplex.com/

What are the benefits of implicit typing in C# 3.0+

The only advantage I can see to do:
var s = new ClassA();
over
ClassA s = new ClassA();
Is that later if you decide you want ClassB, you only have to change the RHS of the declaration.
I guess if you are enumerating through a collection you can also just use 'var' and then figure out the type later.
Is that it?? Is there some other huge benefit my feeble mind does not see?
It's mostly syntactic sugar. It's really your preference, except when using anonymous types, where var is required. I prefer implicit typing wherever possible though; it really shines with LINQ.
I find it redundant to type out a type twice:
List<string> Foo = new List<string>();
when I can just write var, since it's obvious what the type is:
var Foo = new List<string>();
var is useful for anonymous types, which do not have names for you to use.
var point = new {X = 10, Y = 10};
This will create an anonymous type with properties X and Y. It's primarily used to support LINQ though. Suppose you have:
class Person
{
public String Name {get; set;}
public Int32 Age {get; set;}
public String Address {get; set;}
// Many other fields
}
List<Person> people; // Some list of people
Now suppose I want to select only the names and years until age 18 of those people who are under the age of 18:
var minors = from person in people where person.Age < 18 select new {Name = person.Name, YearsLeft = 18 - person.Age};
Now minors contains a List of some anonymous type. We can iterate those people with:
foreach (var minor in minors)
{
Console.WriteLine("{0} is {1} years away from age 18!", minor.Name, minor.YearsLeft);
}
None of this would otherwise be possible; we would need to select the whole Person object and then calculate YearsLeft in our loop, which isn't what we want.
I started what turned out to be a hugely controversial thread when I first signed on here (my choice of "evilness" to describe general use of var was obviously a terrible choice.) Needless to say, I have more appreciation for var than I did before I started that thread, as well as a better understanding of how it can be usefully used:
The evilness of 'var' in C#?
Some good reasons to use var:
Brevity
Reduction of Repetition (DRY)
Reduced refactoring effort
Supports anonymous types (key reason it was added to C#)
Have a look at this questions. Maybe they'll help you decide.
Use of var keyword in C#
https://stackoverflow.com/questions/633474/c-do-you-use-var
It allows me to not repeat myself unnecessary. Consider this:
Dictionary<string, List<int>> dict = new Dictionary<string, List<int>>();
We have a very long typename repeated twice on the same line twice with absolutely no benefit. Furthermore, if you ever need to refactor this, you'll need to update the type twice. Whereas this is just as expressive:
var dict = new Dictionary<string, List<int>>();
There's still no doubt about type of dict here, but the code is shorter, and I would claim that it is easier to read as well.
Actually, "var" in C# is used to deal with anonymous types, like:
var t = new {number = 10, name = "test"};
The overarching reason for the implicit typing was to support anonymous types, like those produced by LINQ queries. They can also be a big benefit when dealing with complex-typed generics...something like Dictionary<int,List<string>>. That's not too complex, but imagine enumerating...
foreach (KeyValuePair<int, List<string>> pair in myDictionary)
{
}
is simplified with implicit typing.
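With implicit typing the same loop becomes the following (pair is still strongly typed; myDictionary is assumed to be the Dictionary<int, List<string>> mentioned above):

```csharp
foreach (var pair in myDictionary)
{
    // pair is a KeyValuePair<int, List<string>>;
    // pair.Key and pair.Value remain fully typed
}
```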
While the var keyword was primarily introduced to support anonymous types, the main argument I've seen for using it more widely is for brevity/more readable code, plus fitting more on your line of code:
Dictionary<string, double> data = new Dictionary<string, double>();
versus
var data = new Dictionary<string, double>();
Though not a good reason for most people, I also like the var keyword as a blind programmer: I listen to the code being read out, and thus hear the variable name after just one syllable :-)
