Using ReSharper, I occasionally get quick-fix suggestions for importing a namespace for a LINQ operation. So given the following code in a brand-new class:
linqToSqlDataContext.Customers.Count();
I get a quick-fix drop down as follows:
Which should I choose, and what is the difference between them?
System.Linq.Dynamic is the namespace for Dynamic LINQ. You shouldn't be seeing that as an option unless you've added a reference to the Dynamic LINQ assembly though. Have you done so?
You should only do that if you actually want to use Dynamic LINQ.
Dynamic LINQ lets you express queries as text - a bit like with DataTable.Select. I've personally never found a use for it, but you may want it. It should be a deliberate choice though. Most of the time you'll be fine with the statically typed LINQ to Objects.
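For a concrete contrast (using the linqToSqlDataContext from the question and an assumed City column, purely for illustration), the statically typed query and its Dynamic LINQ text equivalent look like this:

using System.Linq;
using System.Linq.Dynamic; // only needed for the text-based query below

// Statically typed LINQ: checked at compile time, refactoring-safe.
int count1 = linqToSqlDataContext.Customers.Count(c => c.City == "London");

// Dynamic LINQ: the predicate is a string, parsed and checked only at runtime.
int count2 = linqToSqlDataContext.Customers.Where("City == @0", "London").Count();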
EDIT: As per the OP's comment, the code for Dynamic LINQ could have been added directly to the project, rather than referenced as a separate assembly. Even if you do actually want to use Dynamic LINQ, I'd strongly recommend keeping it in a separate assembly rather than mixing it in with your own code.
Dynamic LINQ is a non-typesafe version of LINQ that takes strings rather than lambdas to generate the queries.
Unless you need any of the specialist functionality it gives you, use the Enumerable version instead.
Scott Hanselman wrote a good explanation of DynamicQueryable. Basically it allows you to have a little more dynamism where the parameters may change at runtime.
Argh! The answer in the end was that one of my colleagues added the DynamicQueryable extensions class to our project (from http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx), and ReSharper was picking that up.
Since I don't see any example of Dynamic LINQ usage here, here it goes:
In my faculty project I had a situation where I used the Repository pattern to abstract away the underlying database technology, particularly Entity Framework.
In my Repository I would have a method something like this:
public IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
As you can see, an Expression is used as the predicate.
Also, I had client-server communication over WCF. Since Expressions are not serializable, I had to use Dynamic LINQ, where I would just send string representations of predicates and use them with my Repository.
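Roughly, the repository grew a string-based overload alongside the expression-based one (the names and fields below are illustrative, not the actual project code); Dynamic LINQ turns the string back into a query on the server side:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic;       // LINQ Dynamic Query Library
using System.Linq.Expressions;

public class Repository<T> where T : class
{
    private readonly IQueryable<T> _set;
    public Repository(IQueryable<T> set) { _set = set; }

    // Expression-based overload: fine in-process, but expressions don't serialize.
    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
        => _set.Where(predicate).ToList();

    // String-based overload: the predicate can travel over WCF as plain text,
    // and Dynamic LINQ parses it back into a query on the server side.
    public IEnumerable<T> Find(string predicate, params object[] values)
        => _set.Where(predicate, values).ToList();
}

// Client side: var expensive = repository.Find("UnitPrice > @0", 100m);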
I'm currently implementing a LINQ provider for my own educational purposes. I have managed to get Count() extension to work recently, so far so good.
Now my question is not a cry for help, but just a request for some clarification.
There are two interfaces to implement in order to create the provider: IQueryProvider and something like IOrderedQueryable<>. MSDN makes it clear how to implement them, but one point still confuses me.
Why are these interfaces implemented by separate classes, even though each IOrderedQueryable instance refers to its own IQueryProvider instance and both objects actually (indirectly) refer to the same data?
Do they really need to be separated?
Furthermore, I am able to combine them like this: class Source<RowContract> : IQueryProvider, IOrderedQueryable<RowContract>, in order to simplify access to type information. This implementation works properly now and looks simpler and clearer than the separate-classes approach.
I am wondering if there is a flaw in my combined implementation, or whether it is in fact valid.
Any explanation would be appreciated greatly.
As mentioned on MSDN, IQueryProvider is focused on creating and executing the query, whereas IQueryable is the thing being queried. Rolling it all together may put similar code in one place, but it ultimately doesn't respect separation of concerns.
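A bare-bones sketch of the conventional split (the execution bodies are omitted; this only shows which responsibilities land on which class, not a working provider):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The IQueryable side: it only describes *what* is being queried
// (an element type, an expression tree, and a reference to its provider).
public class Source<T> : IOrderedQueryable<T>
{
    public Source(IQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression;
    }

    public Type ElementType => typeof(T);
    public Expression Expression { get; }
    public IQueryProvider Provider { get; }

    public IEnumerator<T> GetEnumerator() =>
        Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

// The IQueryProvider side: it knows *how* to build and execute queries.
public class SourceProvider : IQueryProvider
{
    public IQueryable<TElement> CreateQuery<TElement>(Expression expression) =>
        new Source<TElement>(this, expression);

    public IQueryable CreateQuery(Expression expression) =>
        throw new NotImplementedException();

    // This is where the expression tree gets translated and run against the data.
    public TResult Execute<TResult>(Expression expression) =>
        throw new NotImplementedException();

    public object Execute(Expression expression) =>
        throw new NotImplementedException();
}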
Is it possible to write code that generates a class, method, or member at runtime using .NET (C#)?
For more details, consider this scenario:
Create a dynamic workflow program that enables users to create their own processes and activities, write dynamic SQL stored procedures, …, and then collect all of this together and generate classes, member variables, member functions, UIs, conditions, … dynamically at run-time! In other words, your own dynamic code-factory framework!
Yes, there are various options for this:
Use CodeDOM (the System.CodeDom namespace)
Use the System.Reflection.Emit namespace
Create C# code and then compile it with Microsoft.CSharp.CSharpCodeProvider
For individual methods, create an expression tree and then compile it to a delegate (see the sketch after this list)
Use the Roslyn CTP to either compile C# code or create your own AST and compile that
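As a taste of the expression-tree option, here is a minimal, self-contained sketch that builds a small method at runtime and compiles it into a delegate:

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // Build the body of a method (x, y) => x * y + 1 at runtime.
        var x = Expression.Parameter(typeof(int), "x");
        var y = Expression.Parameter(typeof(int), "y");
        var body = Expression.Add(Expression.Multiply(x, y), Expression.Constant(1));

        // Compile the tree into a real, callable delegate.
        var f = Expression.Lambda<Func<int, int, int>>(body, x, y).Compile();

        Console.WriteLine(f(3, 4)); // prints 13
    }
}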
The short response is yes. You should look at and study the following technologies:
CodeDom
Windows Workflow Foundation
Whether this is actually useful is debatable: anyone able to "dynamically" program a workflow in such a specific way will probably prefer to write the code by hand.
As an alternative to strong types in this case, you may also consider using a dynamic object to allow fully featured dynamic behaviour.
That could be more appropriate than strong typing generated at runtime, in this case.
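For example, ExpandoObject (a rough sketch below; the member names are made up) lets you attach data and behaviour at runtime without generating any types at all:

using System;
using System.Dynamic;

class DynamicDemo
{
    static void Main()
    {
        dynamic activity = new ExpandoObject();
        activity.Name = "Approve order";   // members added on the fly
        activity.Execute = (Action)(() => Console.WriteLine("Running approval step"));

        Console.WriteLine(activity.Name);
        activity.Execute();
    }
}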
Quick answer: Yes!
For the details of how to achieve this, you would want to start by looking at reflection.
The next step would be looking for other resources on the internet; a quick search located this question on SO:
How to create a method at runtime using Reflection.emit
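To give a flavour of that approach, here is a minimal sketch using DynamicMethod; the emitted IL simply adds two integers:

using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        var il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push first argument
        il.Emit(OpCodes.Ldarg_1);   // push second argument
        il.Emit(OpCodes.Add);       // add them
        il.Emit(OpCodes.Ret);       // return the result

        var addDelegate = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addDelegate(2, 3)); // prints 5
    }
}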
Dynamic Language Runtime may also be worth a look.
I was having trouble earlier while trying to declare a ChangeAction parameter in a method, with the IDE saying I might be missing a namespace.
So I right-clicked it, chose Resolve, and found that System.Data.Linq had been added, and now everything is fine.
What is the difference between these two namespaces?
As I understand it, System.Linq is about the overall LINQ library -- it applies to all data types, like lists and such.
System.Data.Linq is about databases (aka LINQ to SQL), which includes tracking changes (ChangeAction).
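To make the split concrete (the DataContext and Customer types below are assumed purely for illustration):

using System.Linq;        // query operators: Where, Select, Count, ...
using System.Data.Linq;   // LINQ to SQL: DataContext, Table<T>, ChangeAction, ...

class NamespaceDemo
{
    static void Main()
    {
        // System.Linq on its own works against plain in-memory collections:
        var big = new[] { 1, 5, 10, 20 }.Where(n => n > 5).Count();
        System.Console.WriteLine(big);

        // System.Data.Linq supplies the database-facing types. With a generated
        // DataContext (assumed here), you get table access and change tracking:
        // using (var db = new NorthwindDataContext())
        // {
        //     var londoners = db.GetTable<Customer>().Where(c => c.City == "London");
        // }
    }
}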
I believe System.Linq is LINQ to Objects specific (IEnumerable, IQueryable, etc.),
whilst System.Data.Linq is LINQ to SQL specific (DataContext, etc.).
As described here:
http://msdn.microsoft.com/en-us/library/system.data.linq.aspx
System.Data.Linq is for accessing relational data
To my understanding, System.Linq is the generic-level implementation, which relies on IEnumerable, whereas System.Data.Linq is provider-specific (LINQ to SQL) and relies on IQueryable.
Is there a way to use reflection to completely "scan" an assembly to see if System.IO.File or System.IO.Directory is ever used? These are just example classes. Just wondering if there is a way to do it via reflection (vs code analysis).
update:
see comments
As Tommy Carlier suggested, it's very easy to do with Cecil.
using Mono.Cecil;
// ..
var assembly = AssemblyFactory.GetAssembly ("Foo.Bar.dll");
var module = assembly.MainModule;
bool references_file = module.TypeReferences.Contains ("System.IO.File");
The fantastic NDepend tool will give you this sort of dependency information.
Load your dll in NDepend and either use the GUI to find what you want, or the following CQL query:
SELECT TYPES WHERE IsDirectlyUsing "System.IO.File"
and you should get a list of all the types that use this.
I'd suggest looking at Mono Cecil for this. With Cecil, you can enumerate all the classes, methods and even the IL-instructions (including all the methods calls).
I don't remember where, but I found this handy piece of code:
http://gist.github.com/raw/104001/5ed01ea8a3bf7c8ad669d836de48209048d02b96/MethodBaseRocks.cs
It adds an extension method to MethodInfo/ConstructorInfo that parses the IL byte array into Instruction objects.
So with this, you could loop over every MethodInfo/ConstructorInfo in the assembly, then loop over every Instruction on that MethodInfo/ConstructorInfo, and check whether any of those Instruction objects has an Operand that is a MemberInfo whose DeclaringType is equal to either class.
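Putting that together, the scan might look roughly like this (GetInstructions() is the extension method from the linked gist, so treat its exact shape as an assumption):

using System;
using System.Linq;
using System.Reflection;

static class AssemblyScanner
{
    static bool UsesType(Assembly assembly, Type target)
    {
        const BindingFlags all = BindingFlags.Public | BindingFlags.NonPublic |
                                 BindingFlags.Instance | BindingFlags.Static;

        return assembly.GetTypes()
            .SelectMany(t => t.GetMethods(all).Cast<MethodBase>()
                              .Concat(t.GetConstructors(all)))
            .Where(m => m.GetMethodBody() != null)     // skip abstract/extern members
            .SelectMany(m => m.GetInstructions())      // extension from the gist
            .Any(i => i.Operand is MemberInfo member &&
                      member.DeclaringType == target);
    }

    // Usage: UsesType(Assembly.LoadFrom("Foo.Bar.dll"), typeof(System.IO.File));
}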
You can get a list of dependent assemblies via Assembly.GetExecutingAssembly().GetReferencedAssemblies(). I don't believe you can determine namespace usage via reflection, though. Try looking at System.CodeDom; that may help you parse the code.
.NET Reflector can do this, or something close to it. The other day I checked to see where a particular type was used.
ReSharper might also help. I do this with my own symbols all the time - I suppose it would also work for .NET Framework types.
I have wondered about the appropriateness of reflection in C# code. For example I have written a function which iterates through the properties of a given source object and creates a new instance of a specified type, then copies the values of properties with the same name from one to the other. I created this to copy data from one auto-generated LINQ object to another in order to get around the lack of inheritance from multiple tables in LINQ.
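A minimal sketch of that kind of copier (assuming matching property names, which is the premise above):

static class PropertyCopier
{
    public static TTarget CopyTo<TTarget>(object source) where TTarget : new()
    {
        var target = new TTarget();
        foreach (var sourceProp in source.GetType().GetProperties())
        {
            // Copy only where the target has a writable property with the same name.
            var targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite && sourceProp.CanRead)
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
        }
        return target;
    }
}

// Usage: var historyRecord = PropertyCopier.CopyTo<CustomerHistory>(customer);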
However, I can't help but think code like this is really 'cheating', i.e. rather than using the provided language constructs to achieve a given end, it allows you to circumvent them.
To what degree is this sort of code acceptable? What are the risks? What are legitimate uses of this approach?
Sometimes using reflection can be a bit of a hack, but a lot of the time it's simply the most fantastic code tool.
Look at the .NET property grid - anyone who's used Visual Studio will be familiar with it. You can point it at any object and it will produce a simple property editor. That uses reflection; in fact most of VS's toolbox does.
Look at unit tests - they're loaded by reflection (at least in NUnit and MSTest).
Reflection allows dynamic-style behaviour from static languages.
The one thing it really needs is duck typing - and the C# compiler already supports some of this: you can foreach over anything that looks like IEnumerable, whether it implements the interface or not, and you can use the C# 3 collection syntax on any class that has a method called Add.
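A minimal sketch of both compiler tricks (the types here are made up):

using System;
using System.Collections;
using System.Collections.Generic;

// Collection initializers only need IEnumerable plus a matching Add method.
class Bag : IEnumerable
{
    private readonly List<object> items = new List<object>();
    public void Add(object item) { items.Add(item); }
    public IEnumerator GetEnumerator() { return items.GetEnumerator(); }
}

// foreach doesn't even need the interface - a public GetEnumerator() is enough.
class Countdown
{
    public IEnumerator<int> GetEnumerator()
    {
        for (int i = 3; i > 0; i--) yield return i;
    }
}

class DuckTypingDemo
{
    static void Main()
    {
        var bag = new Bag { "a", "b", "c" };   // C# 3 collection-initializer syntax
        foreach (object item in bag)
            Console.WriteLine(item);

        foreach (int n in new Countdown())     // duck-typed foreach
            Console.WriteLine(n);
    }
}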
Use reflection wherever you need dynamic-style behaviour - for instance you have a collection of objects and you want to check the same property on each.
The risks are similar to those of dynamic types - compile-time exceptions become run-time ones. Your code is not as 'safe' and you have to react accordingly.
The .Net reflection code is very quick, but not as fast as the explicit call would have been.
I agree - it gives me the "it works, but it feels like a hack" feeling. I try to avoid reflection whenever possible. I have been burned many times after refactoring code which had reflection in it. The code compiles fine, the tests even run, but under special circumstances (which the tests didn't cover) the program blows up at run time because of my refactoring in one of the objects the reflection code poked into.
Example 1: reflection in an OR mapper. You change the name or the type of a property in your object model: it blows up at run time.
Example 2: You are in a SOA shop. The web services are completely decoupled (or so you think). They have their own set of generated proxy classes, but in the mapping you decide to save some time and you do this:
ExternalColor c = (ExternalColor)Enum.Parse(typeof(ExternalColor),
                                            internalColor.ToString());
Under the covers this is also reflection, but done by the .NET framework itself. Now what happens if you decide to rename InternalColor.Grey to InternalColor.Gray? Everything looks OK, it builds fine, and even runs fine... until the day some user decides to use the color Gray... at which point the mapper will blow up.
Reflection is a wonderful tool that I could not live without. It can make programming much easier and faster.
For instance, I use reflection in my ORM layer to be able to assign properties with column values from tables. If it weren't for reflection, I would have had to create a copy class for each table/class mapping.
As for the external color exception above: the problem is not Enum.Parse, but that the coder did not catch the proper exception. Since a string is being parsed, the coder should always assume that the string can contain an incorrect value.
The same problem applies to all advanced programming in .Net. "With great power, comes great responsibility". Using reflection gives you much power. But make sure that you know how to use it properly. There are dozens of examples on the web.
It may be just me, but the way I'd get into this is by creating a code generator - using reflection at runtime is a bit costly and untyped. Creating classes that would get generated according to your latest code and copy everything in a strongly typed manner would mean that you will catch these errors at build-time.
For instance, a generated class may look like this:
static class AtoBCopier
{
public static B Copy(A item)
{
return new B() { Prop1 = item.Prop1, Prop2 = item.Prop2 };
}
}
If either class doesn't have the properties, or their types change, the code doesn't compile. Plus, there's a huge improvement in execution time.
I recently used reflection in C# for finding implementations of a specific interface. I had written a simple batch-style interpreter that looked up "actions" for each step of the computation based on the class name. Reflecting over the current namespace then turns up the right implementation of my IStep interface, which can be Execute()d. This way, adding new "actions" is as easy as creating a new derived class - no need to add it to a registry, or even worse: forgetting to add it to a registry...
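A rough sketch of that lookup (IStep and the name-matching convention are specific to that project, so the details here are illustrative):

using System;
using System.Linq;
using System.Reflection;

public interface IStep
{
    void Execute();
}

static class StepFactory
{
    // Find the type in the current assembly whose name matches the action
    // and which implements IStep, then instantiate it via reflection.
    public static IStep Create(string actionName)
    {
        var stepType = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Single(t => typeof(IStep).IsAssignableFrom(t) &&
                         !t.IsAbstract &&
                         t.Name == actionName);

        return (IStep)Activator.CreateInstance(stepType);
    }
}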
Reflection makes it very easy to implement plugin architectures where plugin DLLs are automatically loaded at runtime (not explicitly linked at compile time).
These can be scanned for classes that implement/extend relevant interfaces/classes. Reflection can then be used to instantiate instances of these on demand.
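A typical sketch of that pattern (IPlugin and the plugin folder are placeholders):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

public interface IPlugin
{
    void Initialize();
}

static class PluginLoader
{
    public static IEnumerable<IPlugin> Load(string pluginDirectory)
    {
        foreach (var dll in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            // Load the assembly at runtime - it is not referenced at compile time.
            var assembly = Assembly.LoadFrom(dll);

            foreach (var type in assembly.GetTypes()
                         .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract))
            {
                yield return (IPlugin)Activator.CreateInstance(type);
            }
        }
    }
}

// Usage: foreach (var plugin in PluginLoader.Load("plugins")) plugin.Initialize();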