This question makes me curious... When you want to define a type you must say GetType(Type), e.g. GetType(string), but isn't String a type itself?
Why do you need to use GetType in those situations? And, if the reason is that a parameter of type Type is expected... why isn't the conversion implicit? I mean, all the data is there...
What you're doing is getting a reference to the metadata of the type... it might be a little more obvious if you look at the C# version of the API, which is typeof(string)... it returns a Type object with information about the string type.
You would generally do this when using reflection or other metaprogramming techniques.
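For example, a small C# sketch of that idea (purely illustrative):

class TypeInfoDemo
{
    static void Main()
    {
        // typeof(string) (GetType(String) in VB.NET) yields the Type object
        // that describes System.String
        System.Type t = typeof(string);
        System.Console.WriteLine(t.FullName);   // System.String
        System.Console.WriteLine(t.IsSealed);   // True

        // a typical metaprogramming use: enumerate the type's public instance methods
        foreach (System.Reflection.MethodInfo m in t.GetMethods(
            System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance))
        {
            System.Console.WriteLine(m.Name);
        }
    }
}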
string is a type, int is a type, and Type is a type, but they are not the same thing. As for why there is no implicit conversion, MSDN's guidance is:
By eliminating unnecessary casts, implicit conversions can improve source code readability. However, because implicit conversions can occur without the programmer's specifying them, care must be taken to prevent unpleasant surprises. In general, implicit conversion operators should never throw exceptions and never lose information so that they can be used safely without the programmer's awareness. If a conversion operator cannot meet those criteria, it should be marked explicit.
Pay particular attention to:
never lose information so that they can be used safely without the programmer's awareness
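To illustrate that guidance (a hypothetical Meters struct, not from the question): a conversion that preserves all information can safely be implicit, while a lossy one should be explicit:

struct Meters
{
    public readonly double Value;
    public Meters(double value) { Value = value; }

    // Widening to double never throws and never loses information,
    // so an implicit operator is safe here.
    public static implicit operator double(Meters m) { return m.Value; }

    // Truncating to a whole number loses the fractional part,
    // so the conversion is marked explicit and must be requested with a cast.
    public static explicit operator int(Meters m) { return (int)m.Value; }
}

// double d = new Meters(1.5);   // fine, implicit
// int i = (int)new Meters(1.5); // requires a cast; i == 1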
When you want to define a type you must say GetType(Type) ex.: GetType(string)...
That's not true. Every time you do any of the following
class MyClass
{
///...
}
class MyChildClass : MyClass
{
}
struct MyStruct
{
///...
}
you're defining a new type.
if the reason is that a parameter of type Type is expected... why isn't the conversion implicit... I mean, all the data is there...
One reason for this is polymorphism. For instance, if we were allowed to do the following:
MyChildClass x;
....GetType(x)
GetType(x) could return MyChildClass, MyClass, or Object, since x is really an instance of all of those types.
It's also worth noting that Type is itself a class (ie, it inherits from Object), so you can inherit from it. Although I'm not sure why you'd want to do this other than overriding the default reflection behavior (for instance, to hide the internals from prying eyes).
GetType(string) will return the same information. Look at it like you would a constant. The only other way to get the Type object that represents a string would be to instantiate the string object and call o.GetType(). Also, this is not possible for interfaces and abstract types.
If you want to know the runtime type of a variable, call the .GetType() method off of it, as the runtime type may not be the same as the declared type of the variable.
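A small sketch of that difference (the class names are just for illustration):

class Animal { }
class Dog : Animal { }

class Demo
{
    static void Main()
    {
        Animal a = new Dog();

        // compile-time construct: always the Type for the name you wrote
        System.Console.WriteLine(typeof(Animal)); // Animal

        // runtime type of the object the variable actually refers to
        System.Console.WriteLine(a.GetType());    // Dog
    }
}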
Related
My class currently has two constructors, which are overloads:
public CustomRangeValidationAttribute(string value) {}
and
public CustomRangeValidationAttribute(object value) {}
This appears to be working correctly: when I call it with a string, the first constructor is called; when I use other values, for example an integer or a boolean, the second constructor is called.
I assume there is a rule to force specific type matches into the more specific overload, preventing
var c = new CustomRangeValidationAttribute("test");
from calling the object-overload.
Is this "safe code", or should (or can) the code be improved? I have a nagging feeling this is not the best practice.
You have two overloads which only vary in the reference types and there's a hierarchical relationship between the reference types, such that one may be cast to the other.
In such a circumstance, you really ought to make sure that the code behaves the same logically when the broader overload is selected but the reference turns out to be of the more derived type [1][2]. That is where to focus your attention. Of course, if you can stick by this rule, often it'll turn out that the more derived overload isn't required and can just be special-cased within the broader method.
[1] Especially because, as vc74 points out in a comment, overload resolution (generally, ignoring dynamic) is done at compile time based on compile-time types [3].
[2] And this fits the same broad principle for overloads: don't have overloads where which one is selected leads to logically different results. If you're exhibiting different behaviours, don't give them the same name (for constructors, that may mean splitting into two separate classes, possibly with a shared base class, if that's what you intend to do).
[3] I appreciate that this is for an attribute, so you're expecting only compile time to be relevant, but I'd still hew to the general principle here, where possible.
Once there is an overload whose signature has a more derived type, the compiler will always choose the most specific type you provide.
That being said, unless someone writes new CustomRangeValidationAttribute((object)"test"), passing a string to CustomRangeValidationAttribute will always select the constructor with the string parameter.
As for whether this is bad practice, I can't say for sure without seeing your specific use case; just keep in mind that every value type you pass to new CustomRangeValidationAttribute(object) will be boxed, which is bad because it puts pressure on the GC and, what's more, you lose type safety.
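For example (assuming the two constructors from the question):

// Exact match: the string overload is chosen at compile time.
var a = new CustomRangeValidationAttribute("test");

// The cast changes the compile-time type of the argument to object,
// so the object overload is chosen instead.
var b = new CustomRangeValidationAttribute((object)"test");

// Value types fall through to the object overload and are boxed on the way in.
var c = new CustomRangeValidationAttribute(42);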
The way I resolved this was by moving the overloads to a new abstract class with separate methods instead of the original constructors:
public CustomRangeValidationStringAttribute(string value) {}
public CustomRangeValidationGenericAttribute(object value) {}
The two classes inheriting from this new base class each use their own method, creating two different attributes to choose from: [CustomRangeValidationString] and [CustomRangeValidationGeneric].
You could use a generic class.
See the documentation
class YourClass<T>
{
public YourClass(T value){}
}
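For example, a minimal usage sketch:

var forStrings = new YourClass<string>("test");
var forNumbers = new YourClass<int>(42);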
.NET knows many ways to convert data types:
Convert-class;
Functions inside a type like (Try)Parse and ToString, etc.;
Implementation of the interface IConvertible;
The TypeConverter;
The implicit and explicit conversion operator;
Am I missing another one?
So if am converting one datatype to another, I need to know both types and I need to know which conversion method to use. And this becomes pretty nasty if one of those two types (or both) is a generic type.
So my question is: is there a uniform (generic) way in .NET to convert one data type to another, one which might use all the other, more limited methods?
A good, generic way to convert between types is with Convert.ChangeType. Here's an example of how you could use it to write a generic converting method:
public static TResult Convert<TResult>(IConvertible source)
{
return (TResult)System.Convert.ChangeType(source, typeof(TResult));
}
It, like other Convert methods, internally calls the IConvertible interface.
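For example, a quick usage sketch:

int i    = Convert<int>("42");     // 42
double d = Convert<double>(3);     // 3.0
string s = Convert<string>(3.14);  // "3.14" (culture-dependent formatting)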
This will not make use of your other conversion options:
The most common I'd think would be ToString; for that, you could add a check to see if TResult is string and if so, (after appropriate null checks) simply call ToString on your input.
Using reflection you could check for:
the TypeConverterAttribute (TypeDescriptor.GetConverter seems to be the way to go from there)
(Try)Parse methods, (which you'd invoke), and
implicit/explicit conversion operators (the methods op_Implicit and op_Explicit, which you'd likewise invoke)
These are each fairly self-explanatory if you know a bit about reflection, but I could elaborate if any prove difficult.
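A rough sketch of what those reflection-based checks might look like (simplified, no error handling, and not exhaustive; the helper name is mine):

using System;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

static class ConversionHelper
{
    public static bool TryConvert(object source, Type targetType, out object result)
    {
        result = null;
        if (source == null) return false;
        Type sourceType = source.GetType();

        // 1. TypeConverterAttribute route
        TypeConverter converter = TypeDescriptor.GetConverter(targetType);
        if (converter.CanConvertFrom(sourceType))
        {
            result = converter.ConvertFrom(source);
            return true;
        }

        // 2. implicit/explicit conversion operators declared on either type
        MethodInfo op = targetType.GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Concat(sourceType.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .FirstOrDefault(m =>
                (m.Name == "op_Implicit" || m.Name == "op_Explicit") &&
                targetType.IsAssignableFrom(m.ReturnType) &&
                m.GetParameters().Length == 1 &&
                m.GetParameters()[0].ParameterType.IsAssignableFrom(sourceType));
        if (op != null)
        {
            result = op.Invoke(null, new[] { source });
            return true;
        }

        // 3. static Parse(string) when converting from a string
        string text = source as string;
        if (text != null)
        {
            MethodInfo parse = targetType.GetMethod("Parse", new[] { typeof(string) });
            if (parse != null)
            {
                result = parse.Invoke(null, new object[] { text });
                return true;
            }
        }

        return false;
    }
}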
You imply those are all the same.
They are not.
Pick the appropriate one:
Convert Class
Converts a base data type to another base data type.
Parse converts from a string.
ToString converts to a string.
IConvertible Interface
Defines methods that convert the value of the implementing reference
or value type to a common language runtime type that has an equivalent
value.
TypeConverter Class
Provides a unified way of converting types of values to other types,
as well as for accessing standard values and subproperties.
Yes, you need to know the type you are converting to,
and you should be aware of the type you are converting from.
With generics there is no built-in conversion.
At best you provide a method.
But why do you need to convert generics?
You seem to imply that more than one way is a bad thing.
For a single way, I like the answer from Tim S. (+1),
but that does not mean I would ever use it.
There are even more ways to get data from a SQL database.
Is that a bad thing?
Okay. I've read this post, and I'm confused about how it applies to my example (below).
class Foo
{
public static implicit operator Foo(IFooCompatible fooLike)
{
return fooLike.ToFoo();
}
}
interface IFooCompatible
{
Foo ToFoo();
void FromFoo(Foo foo);
}
class Bar : IFooCompatible
{
public Foo ToFoo()
{
return new Foo();
}
public void FromFoo(Foo foo)
{
}
}
class Program
{
static void Main(string[] args)
{
Foo foo = new Bar();
// should be the same as:
// var foo = (new Bar()).ToFoo();
}
}
I have thoroughly read the post I linked to. I have read section 10.10.3 of the C# 4 specification. All of the examples given relate to generics and inheritance, where the above does not.
Can anyone explain why this is not allowed in the context of this example?
Please no posts in the form of "because the specification says so" or that simply quote the specification. Obviously, the specification is insufficient for my understanding, or else I would not have posted this question.
Edit 1:
I understand that it's not allowed because there are rules against it. I am confused as to why it's not allowed.
I understand that it's not allowed because there are rules against it. I am confused as to why it's not allowed.
The general rule is: a user defined conversion must not in any way replace a built-in conversion. There are subtle ways that this rule can be violated involving generic types, but you specifically say that you are not interested in generic type scenarios.
You cannot, for example, make a user-defined conversion from MyClass to Object, because there already is an implicit conversion from MyClass to Object. The "built in" conversion will always win, so allowing you to declare a user-defined conversion would be pointless.
Moreover, you cannot even make a user-defined implicit conversion that replaces a built-in explicit conversion. You cannot, for example, make a user-defined implicit conversion from Object to MyClass because there already is a built-in explicit conversion from Object to MyClass. It is simply too confusing to the reader of the code to allow you to arbitrarily reclassify existing explicit conversions as implicit conversions.
This is particularly the case where identity is involved. If I say:
object someObject = new MyClass();
MyClass myclass = (MyClass) someObject;
then I expect that this means "someObject actually is of type MyClass, this is an explicit reference conversion, and now myclass and someObject are reference equal". If you were allowed to say
public static implicit operator MyClass(object o) { return new MyClass(); }
then
object someObject = new MyClass();
MyClass myclass = someObject;
would be legal, and the two objects would not have reference equality, which is bizarre.
Already we have enough rules to disqualify your code, which converts from an interface to an unsealed class type. Consider the following:
class Foo { }
class Foo2 : Foo, IBlah { }
...
IBlah blah = new Foo2();
Foo foo = (Foo) blah;
This works, and one reasonably expects that blah and foo are reference equals because casting a Foo2 to its base type Foo does not change the reference. Now suppose this is legal:
class Foo
{
public static implicit operator Foo(IBlah blah) { return new Foo(); }
}
If that is legal then this code is legal:
IBlah blah = new Foo2();
Foo foo = blah;
we have just converted an instance of a derived class to its base class but they are not reference equal. This is bizarre and confusing, and therefore we make it illegal. You simply may not declare such an implicit conversion because it replaces an existing built-in explicit conversion.
So alone, the rule that you must not replace any built-in conversion by any user-defined conversion is sufficient to deny you the ability to create a conversion that takes an interface.
But wait! Suppose Foo is sealed. Then there is no conversion between IBlah and Foo, explicit or implicit, because there cannot possibly be a derived Foo2 that implements IBlah. In this scenario, should we allow a user-defined conversion between Foo and IBlah? Such a user-defined conversion cannot possibly replace any built-in conversion, explicit or implicit.
No. We add an additional rule in section 10.10.3 of the spec that explicitly disallows any user-defined conversion to or from an interface, regardless of whether this replaces or does not replace a built-in conversion.
Why? Because one has the reasonable expectation that when one converts a value to an interface, that you are testing whether the object in question implements the interface, not asking for an entirely different object that implements the interface. In COM terms, converting to an interface is QueryInterface -- "do you implement this interface?" -- and not QueryService -- "can you find me someone who implements this interface?"
Similarly, one has a reasonable expectation that when one converts from an interface, one is asking whether the interface is actually implemented by an object of the given target type, and not asking for an object of the target type that is entirely different from the object that implements the interface.
Thus, it is always illegal to make a user-defined conversion that converts to or from an interface.
However, generics muddy the waters considerably, the spec wording is not very clear, and the C# compiler contains a number of bugs in its implementation. Neither the spec nor the implementation are correct given certain edge cases involving generics, and that presents a difficult problem for me, the implementer. I am actually working with Mads today on clarifying this section of the spec, as I am implementing it in Roslyn next week. I will attempt to do so with as few breaking changes as possible, but a small number may be necessary in order to bring the compiler behaviour and the specification language in line with each other.
In the context of your example, it won't work because the implicit operator has been declared against an interface... I'm not sure how you think your sample is different from the one you linked, other than that you try to get one concrete type across to another via an interface.
There is a discussion on the topic here on connect:
http://connect.microsoft.com/VisualStudio/feedback/details/318122/allow-user-defined-implicit-type-conversion-to-interface-in-c
And Eric Lippert might have explained the reason when he said in your linked question:
A cast on an interface value is always treated as a type test because
it is almost always possible that the object really is of that type
and really does implement that interface. We don't want to deny you
the possibility of doing a cheap representation-preserving conversion.
It seems to be to do with type identity. Concrete types relate to each other via their hierarchy so type identity can be enforced across it. With interfaces (and other blocked things such as dynamic and object) type identity becomes moot because anyone/everyone can be housed under such types.
Why this is important, I have no idea.
I prefer explicit code that shows me I am trying to get a Foo from something that is IFooCompatible, so a conversion routine that takes a T where T : IFooCompatible and returns a Foo (see the sketch below).
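Something along these lines, as a minimal sketch:

static class FooConversion
{
    // Explicit, discoverable conversion instead of a hidden implicit operator.
    public static Foo ToFoo<T>(T value) where T : IFooCompatible
    {
        return value.ToFoo();
    }
}

// usage: Foo f = FooConversion.ToFoo(new Bar());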
For your question I understand the point of discussion, however my facetious response is if I see code like Foo f = new Bar() in the wild I would very likely refactor it.
An alternative solution:
Don't over egg the pudding here:
Foo f = new Bar().ToFoo();
You have already exposed the idea that Foo-compatible types implement an interface to achieve compatibility; use this in your code.
Casting versus converting:
It is also easy to get wires crossed about casting versus converting. Casting implies that type information is integral between the types you are casting around, hence casting doesn't work in this situation:
interface IFoo {}
class Foo : IFoo {}
class Bar : IFoo {}
Foo f = new Foo();
IFoo fInt = f;
Bar b = (Bar)fInt; // Fails.
Casting understands the type hierarchy and the reference of fInt cannot be cast to Bar as it is really Foo. You could provide a user-defined operator to possibly provide this:
public static implicit operator Foo(Bar b) { return new Foo(); }
And doing this in your sample code works, but this starts to get silly.
Converting, on the other hand, is completely independent of the type hierarchy. Its behaviour is entirely arbitrary - you code what you want. This is the case you are actually in, converting a Bar to a Foo, you just happen to flag convertible items with IFooCompatible. That interface doesn't make casting legal across disparate implementing classes.
As for why interfaces are not allowed in user-defined conversion operators:
Why can't I use interface with explicit operator?
The short version is that it's disallowed so that the user can be
certain that conversions between reference types and interfaces
succeed if and only if the reference type actually implements that
interface, and that when that conversion takes place that the same
object is actually being referenced.
Okay, here's an example of why I believe the restriction is here:
class Foo
{
public static implicit operator Foo(IFooCompatible fooLike)
{
return fooLike.ToFoo();
}
}
class FooChild : Foo, IFooCompatible
{
    public Foo ToFoo() { return this; }
    public void FromFoo(Foo foo) { }
}
...
Foo foo = new FooChild();
IFooCompatible ifoo = (IFooCompatible) foo;
What should the compiler do here, and what should happen at execution time? foo already refers to an implementation of IFooCompatible, so from that point of view it should just make it a reference conversion - but the compiler doesn't know that's the case, so should it actually just call the implicit conversion?
I suspect the basic logic is: don't allow an operator to defined which could conflict with an already-valid conversion based on the execution-time type. It's nice for there to be exactly zero or one possible conversion from an expression to a target type.
(EDIT: Adam's answer sounds like it's talking about pretty much the same thing - feel free to regard my answer as merely an example of his :)
What would probably be helpful here would be for .NET to provide a "clean" way to associate an interface with a static type, and have various types of operations on interface types map to corresponding operations on the static type. In some scenarios this can be accomplished with extension methods, but that is both ugly and limited. Associating interfaces with static classes could offer some significant advantages:
Presently, if an interface wishes to offer consumers multiple overloads of a function, every implementation must implement every overload. Pairing a static class with an interface, and allowing that class to declare methods in the style of extension methods would allow consumers of the class to use overloads provided by the static class as though they were part of the interface, without requiring implementers to provide them. This can be done with extension methods, but it requires that the static method be manually imported on the consumer side.
There are many circumstances where an interface will have some static methods or properties which are very strongly associated with it (e.g. `Enumerable.Empty`). Being able to use the same name for the interface and the 'class' of the associated properties would seem cleaner than having to use separate names for the two purposes.
It would provide a path to supporting optional interface members; if a member exists in an interface but not an implementation, the vtable slot could be bound to a static method. This would be an extremely useful feature, since it would allow interfaces to be extended without breaking existing implementations.
Given that unfortunately such a feature only exists to the extent necessary to work with COM objects, the best alternative approach I can figure would be to define a struct type which holds a single member of interface type, and implements the interface by acting as a proxy. Conversion from the interface to the struct would not require creation of an extra object on the heap, and if functions were to provide overloads which accepted that struct, they could convert back to the interface type in a manner whose net result would be value-preserving and not require boxing. Unfortunately, passing such a struct to a method which used the interface type would entail boxing. One could limit the depth of boxing by having the struct's constructor check whether the interface-type object that was passed to it was a nested instance of that struct, and if so unwrap one layer of boxing. That could be a bit icky, but might be useful in some cases.
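A rough sketch of that proxy-struct idea (the interface and member names here are hypothetical):

interface IWidget
{
    void Render();
}

// Value-type proxy: wrapping an IWidget in the struct allocates nothing on the
// heap, but passing the struct to a parameter typed as IWidget will box it.
struct WidgetProxy : IWidget
{
    private readonly IWidget _inner;

    public WidgetProxy(IWidget inner)
    {
        // unwrap one layer if we were handed a boxed WidgetProxy,
        // limiting the depth of nested boxing described above
        _inner = inner is WidgetProxy ? ((WidgetProxy)inner)._inner : inner;
    }

    public void Render()
    {
        _inner.Render();
    }
}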
I have a static method:
public class Example
{
//for demonstration purposes - just returns default(T)
public static T Foo<T>() { return default(T); }
}
And I need to be able to invoke it using a Type parameter calls to which could be numerous, so my standard pattern is to create a thread-safe cache of delegates (using ConcurrentDictionary in .Net 4) which dynamically invoke the Foo<T> method with the correct T. Without the caching, though, the code is this:
static object LateFoo(Type t)
{
//creates the delegate and invokes it in one go
return ((Func<object>)Delegate.CreateDelegate(
    typeof(Func<object>),
    typeof(Example).GetMethod("Foo", BindingFlags.Public | BindingFlags.Static)
        .MakeGenericMethod(t)))();
}
This is not the first time I've had to do this - and in the past I have used Expression trees to build and compile a proxy to invoke the target method - to ensure that return type conversion and boxing from int -> object (for example) is handled correctly.
Update - example of Expression code that works
static object LateFoo(Type t)
{
var method = typeof(Example)
.GetMethod("Foo", BindingFlags.Public | BindingFlags.Static)
.MakeGenericMethod(t);
//in practice I cache the delegate, invoking it freshly built or from the cache
return Expression.Lambda<Func<object>>(Expression.Convert(
Expression.Call(method), typeof(object))).Compile()();
}
What's slightly amusing is that I learned early on with expressions that an explicit Convert was required and accepted it - and in light of the answers here it now makes sense why the .NET framework doesn't automatically stick the equivalent in.
End update
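For completeness, a minimal sketch of the ConcurrentDictionary caching pattern mentioned at the top, built on the Expression-based version since that one handles the boxing correctly:

using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;
using System.Reflection;

static class LateFooCache
{
    private static readonly ConcurrentDictionary<Type, Func<object>> _cache =
        new ConcurrentDictionary<Type, Func<object>>();

    public static object LateFoo(Type t)
    {
        Func<object> del = _cache.GetOrAdd(t, type =>
        {
            MethodInfo method = typeof(Example)
                .GetMethod("Foo", BindingFlags.Public | BindingFlags.Static)
                .MakeGenericMethod(type);
            // the Convert to object inserts the box for value types
            return Expression.Lambda<Func<object>>(
                Expression.Convert(Expression.Call(method), typeof(object))).Compile();
        });
        return del();
    }
}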
However, this time I thought I'd just use Delegate.CreateDelegate as it makes great play of the fact that (from MSDN):
Similarly, the return type of a delegate is compatible with the return type of a method if the return type of the method is more restrictive than the return type of the delegate, because this guarantees that the return value of the method can be cast safely to the return type of the delegate.
Now - if I pass typeof(string) to LateFoo method, everything is fine.
If, however, I pass typeof(int) I get an ArgumentException on the CreateDelegate call, message: Error binding to target method. There is no inner exception or further information.
So it would seem that, for method binding purposes, object is not considered more restrictive than int. Obviously, this must be to do with boxing being a different operation than a simple type conversion and value types not being treated as covariant to object in the .Net framework; despite the actual type relationship at runtime.
The C# compiler seems to agree with this (just shortest way I can model the error, ignore what the code would do):
public static int Foo()
{
Func<object> f = new Func<object>(Foo);
return 0;
}
Does not compile because the Foo method 'has the wrong return type' - given the CreateDelegate problem, C# is simply following .NET's lead.
It seems to me that .NET is inconsistent in its treatment of covariance - either a value type is an object or it's not; and if it's not, it should not expose object as a base (despite how much more difficult that would make our lives). Since it does expose object as a base (or is it only the language that does that?), then according to that logic a value type should be covariant to object (or whichever way around you're supposed to say it), making this delegate bind correctly. If that covariance can only be achieved via a boxing operation, then the framework should take care of that.
I dare say the answer here will be that CreateDelegate doesn't say that it will treat a box operation in covariance because it only uses the word 'cast'. I also expect there are whole treatises on the wider subject of value types and object covariance, and I'm shouting about a long-defunct and settled subject. I think there's something I either don't understand or have missed, though - so please enlighten!
If this is unanswerable - I'm happy to delete.
You can only convert a delegate in this way if the parameters and return value can be converted using a representation-preserving conversion:
Reference types can only be converted to other reference types in this way
Integral values can be converted to other integer values of the same size (int, uint, and enums of the same size are compatible)
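A small sketch of those rules in action, using the Example.Foo<T> method from the question:

// string is a reference type, so returning it through Func<object> is a
// representation-preserving conversion and the binding succeeds.
var ok = (Func<object>)Delegate.CreateDelegate(
    typeof(Func<object>),
    typeof(Example).GetMethod("Foo").MakeGenericMethod(typeof(string)));

// int is a value type: returning it as object would require a boxing
// (representation-changing) conversion, so this throws ArgumentException.
var fails = (Func<object>)Delegate.CreateDelegate(
    typeof(Func<object>),
    typeof(Example).GetMethod("Foo").MakeGenericMethod(typeof(int)));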
A few more relevant blog articles:
This dichotomy motivates yet another classification scheme for conversions (†). We can divide conversions into representation-preserving conversions (B to D) and representation-changing conversions (T to U). (‡) We can think of representation-preserving conversions on reference types as those conversions which preserve the identity of the object. When you cast a B to a D, you’re not doing anything to the existing object; you’re merely verifying that it is actually the type you say it is, and moving on. The identity of the object and the bits which represent the reference stay the same. But when you cast an int to a double, the resulting bits are very different.
This is why covariant and contravariant conversions of interface and delegate types require that all varying type arguments be of reference types. To ensure that a variant reference conversion is always identity-preserving, all of the conversions involving type arguments must also be identity-preserving. The easiest way to ensure that all the non-trivial conversions on type arguments are identity-preserving is to restrict them to be reference conversions.
http://blogs.msdn.com/b/ericlippert/archive/2009/03/19/representation-and-identity.aspx
"but how can a value type, like int, which is 32 bits of memory, no more, no less, possibly inherit from object? An object laid out in memory is way bigger than 32 bits; it's got a sync block and a virtual function table and all kinds of stuff in there." Apparently lots of people think that inheritance has something to do with how a value is laid out in memory. But how a value is laid out in memory is an implementation detail, not a contractual obligation of the inheritance relationship! When we say that int inherits from object, what we mean is that if object has a member -- say, ToString -- then int has that member as well.
http://ericlippert.com/2011/09/19/inheritance-and-representation/
It seems to me that .NET is inconsistent in its treatment of covariance - either a value type is an object or it's not; if it's not, it should not expose object as a base
It depends on what the meaning of "is" is, as President Clinton famously said.
For the purposes of covariance, int is not object because int is not assignment compatible with object. A variable of type object expects a particular bit pattern with a particular meaning to be stored in it. A variable of type int expects a particular bit pattern with a particular meaning, but a different meaning than the meaning of a variable of object type.
However, for the purposes of inheritance, an int is an object because every member of object is also a member of int. If you want to invoke a method of object -- ToString, say -- on int, you are guaranteed that you can do so, because an int is a kind of object, and an object has ToString.
It is unfortunate, I agree, that the truth value of "an int is an object" varies depending on whether you mean "is assignment-compatible with" or "is a kind of".
If that covariance can only be achieved via a boxing operation, then the framework should take care of that.
OK. Where? Where should the boxing operation go? Someone, somewhere has to generate a hunk of IL that has a boxing instruction. Are you suggesting that when the framework sees:
Func<int> f1 = ()=>1;
Func<object> f2 = f1;
then the framework should automatically pretend that you said:
Func<object> f2 = ()=>(object)f1();
and thereby generate the boxing instruction?
That's a reasonable feature, but what are the consequences? Func<int> and Func<object> are reference types. If you do f2 = f1 on reference types like this, do you not expect that f2 and f1 have reference identity? Would it not be exceedingly strange for this test case to fail?
f2 = f1;
Debug.Assert(object.ReferenceEquals(f1, f2));
Because if the framework implemented that feature, it would.
Similarly, if you said:
f1 = MyMethod;
f2 = f1;
and you asked the two delegates whether they referred to the same method or not, would it not be exceedingly weird if they referred to different methods?
I think that would be weird. However, the VB designers do not. If you try to pull shenanigans like that in VB, the compiler will not stop you. The VB code generator will generate non-reference-equal delegates for you that refer to different methods. Try it!
Moral of the story: maybe C# is not the language for you. Maybe you prefer a language like VB, where the language is designed to take a "make a guess about what the user probably meant and just make it work" attitude. That's not the attitude of the C# designers. We are more "tell the user when something looks suspiciously wrong and let them figure out how they want to fix it" kind of people.
Even though I think @CodeInChaos is absolutely right, I can't help pointing out this blog post by Eric Lippert. In reply to the last comment on his post (at the very bottom of the page), Eric explains the rationale for this behaviour, and I think this is exactly what you're interested in.
UPDATE: As @Sheepy pointed out, Microsoft moved the old MSDN blogs into an archive and removed all comments. Luckily, the Wayback Machine preserved the blog post in its original form.
I have this code:
myDataGrid is an object passed to the method. I know it is an ObservableCollection of one of several different element types.
All I need is to cast that object to ObservableCollection<T> (it implements the IEnumerable interface):
//get element's type
Type entryType = (myDataGrid as IEnumerable).AsQueryable().ElementType;
foreach (var item in (IEnumerable<entryType>)myDataGrid)
{}
but the compiler doesn't know entryType in the loop header. Why?
You can't use a runtime Type instance as a generic type parameter unless you use reflection (MakeGenericMethod() / MakeGenericType()). However I doubt it would help anyway! In this scenario, either use the non-generic IEnumerable (no <T>) API, or perhaps cast to a known interface/subclass, or use dynamic as a last resort duck-typing.
You can also use MakeGenericMethod() etc, but that is more involved and almost certainly slower.
For example:
foreach(object item in (IEnumerable)myDataGrid)
{
// tada!
}
Another trick can be to use dynamic to invoke the generic code:
public void Iterate<T>(IEnumerable<T> data)
{
foreach(T item in data) {...}
}
...
dynamic evil = myDataGrid;
Iterate(evil);
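And a rough sketch of the MakeGenericMethod route mentioned earlier, reusing the Iterate<T> method above (this assumes the code runs inside the class that declares Iterate):

Type elementType = (myDataGrid as IEnumerable).AsQueryable().ElementType;

MethodInfo iterate = GetType()
    .GetMethod("Iterate")
    .MakeGenericMethod(elementType);

iterate.Invoke(this, new object[] { myDataGrid });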
You're trying to use a Type variable as a type argument for a generic type. Generics don't work that way - you have to use a compile-time type as the type argument. (That compile-time type can be a type parameter itself, if you're doing this in a generic method.)
It's hard to know how to advise you to change the code without knowing more about your requirements though - what do you need to do with the items?
You can't cast to a "runtime type"... in order to "write" the instructions which actually implement that cast, the compiler requires the Type... which really just means that the Type must be known at COMPILE time.
The ONLY way I've ever found around this limitation is a code generator (of one sort or another) to "manually" generate the IL instructions that perform the cast. It's hard to know what to recommend unless we know a lot about your actual requirements (and constraints).