Is there any generic alternative / implementation for MemoryCache?
I know that MemoryCache uses a Hashtable under the hood, so all it would take is transitioning to a Dictionary<,>, which is the generic version of a Hashtable.
This would provide type safety as well as performance benefits, since no boxing/unboxing would occur.
EDIT: Another thing I'm interested in is having a different key type. The default is a System.String.
Is there any generic alternative / implementation for MemoryCache?
Not in the base class library. You'd have to roll your own, though I, personally, would just make a wrapper around MemoryCache that provides the API you wish.
This would provide type safety as well as performance benefits, since no boxing/unboxing would occur
The type safety can be handled fairly easily in a wrapper class. The boxing/unboxing would only be an issue if you were storing value types (not classes), and even then, would likely be minimal, as it's unlikely that you're pushing and pulling from cache often enough to have this be a true performance issue.
As for type safety and usability, I've actually written my own generic methods to wrap the MemoryCache calls, which allows a bit nicer usage from an API standpoint. This is very easy - it typically just requires a method like:
public T GetItem<T>(string key) where T : class
{
    return memoryCache[key] as T;
}
Similarly, you can make a method to set values the same way.
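For completeness, a minimal sketch of that setter, assuming the same memoryCache field as above and, as an invented default, an empty CacheItemPolicy:

public void SetItem<T>(string key, T value) where T : class
{
    // MemoryCache stores everything as object; the generic signature just
    // keeps call sites type-safe and symmetrical with GetItem<T>.
    memoryCache.Set(key, value, new CacheItemPolicy());
}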
EDIT: Another thing I'm interested in is having a different key type. The default is a System.String.
This is not supported directly with MemoryCache, so it would require a fair bit of work to make your own key generation. One option would be to make a type safe wrapper which also provided a Func<T, string> to generate a string key based off your value - which would allow you to generate a cache entry for any type T. You'd have to be careful, of course, to include all data in the string that you wanted as part of your comparison, however.
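To illustrate, here is a minimal sketch of such a wrapper; the class name, member names, and the use of MemoryCache.Default are all invented for the example:

using System;
using System.Runtime.Caching;

public class KeyedCache<T> where T : class
{
    private readonly MemoryCache cache = MemoryCache.Default;
    private readonly Func<T, string> keySelector;

    public KeyedCache(Func<T, string> keySelector)
    {
        this.keySelector = keySelector;
    }

    public void Add(T item, CacheItemPolicy policy)
    {
        // The selector must fold every distinguishing field into the string key.
        cache.Set(keySelector(item), item, policy);
    }

    public T Get(T keySource)
    {
        return cache.Get(keySelector(keySource)) as T;
    }
}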
I wrote mine, FWIW:
https://github.com/ysharplanguage/GenericMemoryCache#readme (link dead)
There is a fork of the original code here:
https://github.com/caesay/GenericMemoryCache
For example, wouldn't this type:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.vector2.aspx
... having public mutable fields like this:
https://msdn.microsoft.com/en-us/library/microsoft.xna.framework.vector2.x.aspx
... single-handedly make consuming F# code's immutability efforts kind of useless?
PS: performance must be preserved, no wrapping or dynamic instantiation of throw-away values.
PPS: I did some research and suspect the answer is negative, but I would appreciate some input. It seems like a typical problem when not implementing everything in F# from scratch.
For collections of structs, this is not an issue. The collection remains immutable irrespective of the struct's members, since getting the struct from the collection returns a copy. Altering this copy does not alter the collection's contents.
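For example (using System.Numerics.Vector2 here, which has public mutable fields much like the XNA type):

using System.Collections.Generic;
using System.Numerics;

class StructCopyDemo
{
    static void Main()
    {
        var list = new List<Vector2> { new Vector2(1f, 2f) };

        var v = list[0];  // the indexer returns a copy of the struct
        v.X = 99f;        // mutates only that local copy

        // list[0].X is still 1f. In fact, "list[0].X = 99f" would not even
        // compile (CS1612), because the indexer's return value is not a variable.
    }
}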
Elsewhere, structs can be used to write wrappers without additional GC load. This requires creating methods for all the features you want to keep and having them call the original methods. Provided the JIT inlines these calls, this shouldn't cost performance. However, when wrapping a reference type, the wrapper struct still gets an implicit default constructor, which leaves the wrapped reference null if that constructor is used.
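As a concrete sketch of that wrapper idea (all type names here are invented for illustration; the XNA type itself isn't referenced):

// A stand-in for a library type with public mutable fields, like XNA's Vector2.
public struct MutableVector2
{
    public float X;
    public float Y;
    public MutableVector2(float x, float y) { X = x; Y = y; }
}

// A read-only struct wrapper: no GC load, and only getters and
// copy-returning operations are exposed. Note the caveat from above:
// default(ReadOnlyVector2) is always constructible; if the wrapped type
// were a reference type, it would wrap a null reference.
public readonly struct ReadOnlyVector2
{
    private readonly MutableVector2 inner;

    public ReadOnlyVector2(MutableVector2 inner) { this.inner = inner; }

    public float X => inner.X;
    public float Y => inner.Y;

    // Operations return fresh copies instead of mutating in place.
    public ReadOnlyVector2 Scale(float factor)
    {
        return new ReadOnlyVector2(new MutableVector2(inner.X * factor, inner.Y * factor));
    }
}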
As a side note, I wouldn't recommend using vector classes from outside F#, since they are not unit-of-measure aware. In my experience, most vectors can be assigned physical units, which makes code safer and more readable.
You are correct; the short answer is no. However, at least in the case of the Vector2 type you show, many of the operations are implemented in an immutable fashion when you use the static versions of the methods. For example
var vecB = Vector2.Normalize(vecA);
is an immutable call. Unless the libraries you are using support some kind of immutability, you are stuck with having to implement the immutable functionality you want to have.
F# has a design goal of being a hybrid of mutable and immutable content so that it can access the rich functionality of .NET libraries when needed.
To lay out the problem as easily as possible, I'm trying to implement a generic pooling system that can handle an arbitrary number of concrete classes so long as they implement IBaseComponent.
So, in my class that manages the pools, I have a Dictionary of Pools:
Dictionary<Type, Pool<IBaseComponent>> pools;
This allows me to create as many classes implementing IBaseComponent as I want (it's a very 'low level' interface, so to speak, so classes implementing it won't have much in common beyond it), with a pool for each.
Now, the issue I'm running into is the first load of an IBaseComponent into the pool, to act as a template, so to speak.
This template object is loaded from XML, rather than code, so I do not have its actual class at compile-time, only at run time (it's defined in the XML definition and I grab the formal Type via reflection). That's all fine and dandy except, as we know, generics rely on compile-time safety.
So, using some reflection trickery I have the following:
var type = typeof(MyChildComponent);
var genericType = typeof(Pool<>);
var specificType = genericType.MakeGenericType(type);
var pool = Activator.CreateInstance(specificType);
pools.Add(type, pool as Pool<IBaseComponent>);
assuming some class:
public class MyChildComponent : IBaseComponent
The problem occurs at the last line in the first block there, when I'm adding to the pools dictionary. The cast of the instantiated pool to Pool<IBaseComponent> fails, resulting in null being inserted into the Dictionary.
My question to you fine folks is this: Is there any reasonable way around this? Any possible way, even?
If I need to load elements via some external method (XML, TXT, whatever) for at least the very first template object for a pool, for each possible concrete class that a Pool could be used for, and all I have access to is the top-level interface and formal Type of the class (both defined in the external definition file), can I do anything here?
Or is this simply not possible at all?
Are you using .NET 4+? If so, you can create an interface IPool<out T>. The out makes the generic type parameter covariant, which means a variable of the interface type will accept any version of the interface whose generic argument is T or derives from T.
In C#, co/contravariance only works with interfaces and delegates, which is why you need IPool.
Your Dictionary will become:
Dictionary<Type, IPool<IBaseComponent>> pools;
I'm having a little trouble combining it in my head with the reflection, but I think that should work. If not, let me know and I'll spend a little more time on my test code.
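Here is a sketch of how it could fit together with the reflection from the question (the IPool member names are invented; the covariant cast works because MyChildComponent is a reference type):

using System;
using System.Collections.Generic;

public interface IBaseComponent { }

// 'out T' makes the interface covariant: an IPool<MyChildComponent> is
// usable as an IPool<IBaseComponent>. The price is that T may only appear
// in output positions, so a Return(T item) method cannot live here.
public interface IPool<out T> where T : IBaseComponent
{
    T Acquire();
}

public class Pool<T> : IPool<T> where T : IBaseComponent, new()
{
    private readonly Stack<T> items = new Stack<T>();

    public T Acquire() { return items.Count > 0 ? items.Pop() : new T(); }

    // Fine on the class; it just can't be part of the covariant interface.
    public void Return(T item) { items.Push(item); }
}

public class MyChildComponent : IBaseComponent { }

public static class Demo
{
    public static void Main()
    {
        var pools = new Dictionary<Type, IPool<IBaseComponent>>();
        var type = typeof(MyChildComponent);
        var specificType = typeof(Pool<>).MakeGenericType(type);
        var pool = Activator.CreateInstance(specificType);

        // This cast now succeeds: Pool<MyChildComponent> implements
        // IPool<MyChildComponent>, which is covariantly an IPool<IBaseComponent>.
        pools.Add(type, (IPool<IBaseComponent>)pool);
    }
}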
One alternative option I'm toying with is modifying Pool<T> itself.
Instead of having Pool<T> store only one type of (concrete) class, I will modify it to support storing any compatible class based on an interface used for T.
So, Pool<IBaseComponent> will be responsible for storing all possible types that implement IBaseComponent.
Internally, it will store everything as an IBaseComponent, but will keep references to where each concrete type is stored (in a Dictionary of lists keyed by Type, perhaps, or even just one big linear list [although this would make resizing the "pool" for specific types a lot more complicated])
One thing I neglected to mention is that IBaseComponent exposes the only two pieces of functionality I need to prepare a component for use in a "blind" fashion (i.e., the factory calling this pool doesn't know at compile time what types of components it's working with either; it just loads them up based on what's defined in XML, or copies from an existing object that has these components attached to it): Deserialize (build the component from XML/JSON/whatever) and CopyInto(IBaseComponent other) (build the component by copying from another component).
So, this would still have the problem that the Pool won't be able to dynamically cast the IBaseComponent to the caller's requested Type, but that won't matter. If the caller really knows the hard compile-time type ahead of time it can do the cast. If the caller doesn't, then it wouldn't be able to do anything beyond access methods exposed by IBaseComponent anyways.
All that matters is that the IBaseComponent the Pool returns is of the correct type underneath, which this will handle.
Put simply: I'll be cutting out a bit of modern generics (internally the Pool will only work with passed-in Types; externally it will only allow T to be an interface), and replacing it with good ol' fashioned Type passing. Reflection will have to be used internally to instantiate the Pool of Types, but I figure it's okay to expect that initializing or resizing a Pool is going to be a very costly maneuver.
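A rough sketch of that design, with invented member names, under the assumptions described above (interface-typed storage, Type-keyed buckets, reflection only on the slow path):

using System;
using System.Collections.Generic;

public class Pool<T> where T : class // T is meant to be an interface, e.g. IBaseComponent
{
    // Everything is stored behind the interface, bucketed by runtime Type.
    private readonly Dictionary<Type, Stack<T>> buckets = new Dictionary<Type, Stack<T>>();

    public T Acquire(Type concreteType)
    {
        if (buckets.TryGetValue(concreteType, out var bucket) && bucket.Count > 0)
            return bucket.Pop();

        // Slow path: instantiate via reflection when the bucket is empty.
        // Acceptable, since initializing or resizing a pool is expected to be costly.
        return (T)Activator.CreateInstance(concreteType);
    }

    public void Release(T item)
    {
        var concreteType = item.GetType();
        if (!buckets.TryGetValue(concreteType, out var bucket))
            buckets[concreteType] = bucket = new Stack<T>();
        bucket.Push(item);
    }
}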
Lots of sources talk about this question, but I don't understand the concept very well: IDictionary is generic, it's type-safe, etc.
When I dig into Entity Framework v5, I see a property declared as below in the LogEntry class.
private IDictionary<string, object> extendedProperties;
The question is why they prefer IDictionary over Hashtable, since a Hashtable also takes a key as a string and an object as a value. Is the only reason to make the property polymorphic by choosing IDictionary?
Thanks in advance.
Nowadays, there are few reasons to use Hashtable. Dictionary<> is better than it in most respects. When it's not, you can usually find another strongly typed collection that serves your purpose even better than either.
Type-safe.
None of the severe overhead of boxing and unboxing.
Implements IDictionary<>, which is very compatible with .NET and 3rd party code.
Performs better than Hashtable in many areas. See links: Link #1, Link #2.
If you're asking why type a property as IDictionary<> instead of Dictionary<>, it's for several reasons.
It is generally considered best practice to use interfaces as often as possible, instead of regular types, especially in a framework.
It is possible to change the implementation behind the interface fairly easily, but it's difficult to change the nature of a concrete class without causing compatibility problems with dependent code (see the sketch after this list).
Using interfaces, you can take advantage of Covariance and Contravariance.
External code is more likely to consume the more general interface IDictionary<> than the concrete class Dictionary<>. Using IDictionary<> thus lets other developers interact better with the property.
There are tons more, probably. These are just off the top of my head.
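As a sketch of the second point, here is roughly what the LogEntry property shape enables (the public accessor is invented for illustration): because callers bind only to IDictionary<,>, the backing store can later change, say to a ConcurrentDictionary<string, object>, without breaking them.

using System.Collections.Generic;

public class LogEntry
{
    // The field could be swapped for any other IDictionary implementation
    // (e.g. ConcurrentDictionary<string, object>) with no caller impact.
    private IDictionary<string, object> extendedProperties =
        new Dictionary<string, object>();

    public IDictionary<string, object> ExtendedProperties
    {
        get { return extendedProperties; }
    }
}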
Well, yes: if they had defined extendedProperties as returning Hashtable, they would have been stuck with that for all time, unless they wanted to break all the code that uses extended properties.
The whole point of returning an interface is that it doesn't matter how the method is implemented, as long as it keeps honoring that contract.
"Only reason is making the property polymorphic" misses the point; there should be very few reasons why you shouldn't do this. If you can return an interface, do return an interface; most of the time that's good design.
Most of the answers here compare Dictionary to Hashtable for general purposes, not why that particular implementation was chosen.
In that particular implementation, you are correct: the value type is object, so the strongly typed benefits of Dictionary<> are not available.
IMO it boils down to new vs. old: ArrayList, Hashtable, etc. are the older tech and are largely disfavored in general because they lack a host of features (described in the other answers). Although those features are not used in this particular case, there are no strong benefits to switching back to the old tech, and the new one sets a better example for personal development.
So it's more just a matter of "this is the way we do it now".
The Hashtable is weakly typed and can only return Object. Dictionary<> is strongly typed for whatever type you are storing in it.
I realize that enum cannot be used as a generic constraint, and Microsoft has declined to fix this bug.
Any reason why?
The link you posted says why:
and is a somewhat arbitrary limitation of the language
Potentially will change:
If we ever reopen constraints as a feature, this will be one of the things we will reevaluate. For the upcoming release we don't have the opportunity to add any more language features, so you'll see this resolved as "Won't Fix", but it remains on our lists for future consideration.
I suspect the reason enum is not accepted as a generic constraint is that, while there are some things one might "expect" to be able to do with an enum-constrained type parameter that one can't do with an unconstrained one, the only one that would actually work would be calling the very slow non-generic HasFlag; in particular, operations which involve converting between an enum and its underlying base type would not be usable. Allowing a constraint which wouldn't let programmers use variables in the ways they'd expect, but would only add the ability to call a horribly slow non-generic method, didn't seem worthwhile.
As it is, I don't think the inability to use Enum as a type constraint would have been a loss, but for the addition of a feature which was not anticipated when the decision was made: extension methods and their interaction with IntelliSense. If one writes a method bool FlagCheck.HasAnyFlags<T>(T enum1, T enum2) where T : struct, which takes two matching-type enumerations and checks whether one contains any flags which are also in the other [one can write such a method to be about an order of magnitude faster than Enum.HasFlag], it may not make sense to call it on parameters of type double, but the only consequence is that such misuse will be caught at run time rather than compile time. It's only when one makes such a thing an extension method that the lack of an enum constraint becomes annoying; in that case, it means there's no way to have IntelliSense offer HasAnyFlags on a variable of an enum type without it also popping up on variables of other types.
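For illustration, here is one hedged sketch of how such a method can avoid boxing without an enum constraint: compile a typed "(a & b) != 0" test per enum type with expression trees. The class and member names follow the hypothetical signature above; this is one possible approach, not the answerer's actual code.

using System;
using System.Linq.Expressions;

public static class FlagCheck
{
    // Per-type cache of a compiled "(a & b) != 0" test, so there is no
    // boxing after the first call for a given T.
    private static class Cache<T> where T : struct
    {
        public static readonly Func<T, T, bool> HasAny = Build();

        private static Func<T, T, bool> Build()
        {
            var a = Expression.Parameter(typeof(T), "a");
            var b = Expression.Parameter(typeof(T), "b");
            // Widen both operands to long (bit-preserving, unchecked),
            // AND them, and compare the result against zero.
            var body = Expression.NotEqual(
                Expression.And(
                    Expression.Convert(a, typeof(long)),
                    Expression.Convert(b, typeof(long))),
                Expression.Constant(0L));
            return Expression.Lambda<Func<T, T, bool>>(body, a, b).Compile();
        }
    }

    public static bool HasAnyFlags<T>(T enum1, T enum2) where T : struct
    {
        // Without an enum constraint, misuse surfaces at run time rather
        // than compile time, exactly the trade-off described above.
        if (!typeof(T).IsEnum)
            throw new ArgumentException("T must be an enum type.");
        return Cache<T>.HasAny(enum1, enum2);
    }
}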
BTW, I think I disagree with the philosophy on enum constraints for the same reason I disagree with the rule that one can't constrain to a sealed type. Even if it would be useless to create a generic type parameter constrained to a type that would always be sealed, the fact that a type is sealed in a [perhaps preliminary] version of an assembly does not imply it will always be so. Further, if a type is unsealed but has only internal constructors [called via factory methods], I don't know that replacing it with a sealed class would be a breaking change, but for the rule about sealed type constraints.
The underlying-type issues and performance concerns are valid, but they have workarounds, and the CLR and C++/CLI both support generic enum constraints. Working with flags enums has always been less readable than I prefer. HasFlag helps, but as has been pointed out, there's room for improving its performance.
I have this and several other useful enum extension/helper methods here. If you really need to write a method that constrains on the enum type, C++/CLI can handle it, and it's not all that difficult to learn enough of it to write these kinds of simple methods if you're coming from a C# background.
I suspect some of the limitations of generic enum types I experienced in C++/CLI have something to do with why this was seen as "not important enough." For instance, most useful operators are missing other than assignment. To do anything with the TEnum, you have to cast to the underlying type, which can be expensive depending on how it's done. Consider that the binary operation to add/remove/test a flag is extremely fast; adding a single type-conversion requirement dramatically changes the performance. Converting to a (known) underlying type in C++/CLI can be done very quickly, in a manner implemented in IL as the equivalent of "take in an enum parameter and pass it out as though it were actually the underlying type". Converting back to the enum, however, isn't possible in C++/CLI and requires an Enum.ToObject call, which is an expensive conversion.
I implemented a workaround that basically takes the set of "convertBackToTEnum" methods and rewrites the IL to do it exactly the way the ConvertToUnderlyingType methods does.
The casts are also risky if you screw up and cast to the wrong underlying type. I was concerned enough about it that I wrote a T4 script to generate unit tests for each operation on each underlying type (with values that would cause problems if converted incorrectly).
If you need to write methods that support this, the above project has several examples, including class and method constraints.
The advantage of using generics is that it increases the type safety - you can only put in the correct type of thing, and you get out the correct type without requiring a cast. The only reason I can think of for not using generic collections is that you need to store some arbitrary data. Am I missing something? What other reasons are there to not use generics when dealing with collections?
If you need to store arbitrary data, use List<object> (or whatever). Then it's absolutely clear that it's deliberately arbitrary.
Other than that, I wouldn't use the non-generic collections for anything. I have used IEnumerable and IList when I've been converting an object reference and didn't know the type to cast it to at compile-time - so non-generic interfaces are useful sometimes... but not the non-generic classes themselves.
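A small invented example of that situation, where only the non-generic interface is usable because the element type is unknown at compile time:

using System;
using System.Collections;

class NonGenericInterfaceDemo
{
    static void Dump(object value)
    {
        // All we know at run time is that 'value' may be some kind of
        // sequence; the non-generic IEnumerable lets us iterate it
        // without knowing the element type.
        if (value is IEnumerable sequence)
        {
            foreach (object item in sequence)
                Console.WriteLine(item);
        }
    }
}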
The obvious other reason is working with code (possibly legacy) that does not use generic collections.
You can see this happening in .NET itself. System.Windows.Form.Control.Controls is not generic, nor is System.Web.UI.Control.Controls.
Generics are almost always the right thing to use. Note that languages like Haskell and ML essentially only allow that model: there is no default "object" or "void*" in those languages at all.
The only reasons I might not use generics are:
When the appropriate type is simply not known at compile time. Things like deserializing objects, or instantiating objects through reflection.
When the users that will be using my code aren't familiar with them (yet). Not all engineers are comfortable using them, especially in some more advanced patterns like the CRTP.
The main advantage is that there is no boxing or unboxing penalty with generic collections of value types. This can be seen if you examine the IL using ildasm.exe. The generic containers give better performance for value types and a smaller performance improvement for reference types.
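A minimal illustration of the difference (compile it and compare the generated IL in ildasm.exe: the ArrayList path emits box/unbox instructions, the List<int> path does not):

using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        var untyped = new ArrayList();
        untyped.Add(42);             // boxes the int: one heap allocation per element
        int a = (int)untyped[0];     // unbox, plus a runtime type check

        var typed = new List<int>();
        typed.Add(42);               // stored directly in the list's int[] backing array
        int b = typed[0];            // no cast, no unbox
    }
}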
Type variance with generics can trip you up, but mostly you should use generic collections. There isn't really a good reason to avoid them, and all the reason in the world to avoid untyped collections like ArrayList.
Here's one answer: The change from Hashtable to Dictionary.
One thing I think you need to consider is that a generic collection is not always a drop-in replacement for a non-generic collection. For example, Dictionary<object, object> cannot simply be plugged in for an instance of Hashtable. They have very different behavior in a number of scenarios that can and will break programs. Switching between these two collections forces a good programmer to examine the use cases to ensure the differences do not bite them.
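One concrete example of such a difference: Hashtable's indexer returns null for a missing key, while Dictionary<,>'s throws.

using System;
using System.Collections;
using System.Collections.Generic;

class LookupDemo
{
    static void Main()
    {
        var table = new Hashtable();
        object fromTable = table["absent"];       // null, no exception

        var dict = new Dictionary<object, object>();
        // object fromDict = dict["absent"];      // throws KeyNotFoundException
        dict.TryGetValue("absent", out object fromDict); // the non-throwing pattern
        Console.WriteLine(fromTable == fromDict);        // True: both are null here
    }
}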
The non-generic Collection in the Microsoft.VisualBasic namespace has some annoying quirks and goofiness, and is in a lot of ways pretty horrible, but it also has a unique feature: it is the only collection which exhibits sensible semantics if it's modified during enumeration. Code which does something like deleting all members of a Collection that meet a certain predicate may need to be significantly rewritten if some other collection type is used.