Improving performance of Dictionary using Arrays - c#

Consider counting the number of occurrences of each letter in a word.
I came up with this solution.
string a = "aabbcdc"; //Output: a:2,b:2,c:2,d:1
var dict = new Dictionary<char, int>();
foreach(var @char in a)
{
if(dict.ContainsKey(@char))
dict[@char] = dict[@char] + 1;
else dict.Add(@char, 1);
}
foreach(var keyValuePair in dict)
Console.WriteLine("Letter = {0}, Value = {1}", keyValuePair.Key, keyValuePair.Value);
My friend argued that this solution can be improved further using arrays, as they are better than the Dictionary above.
I guess Dictionary lookup is O(1) (arrays are also O(1), correct?). Any ideas on how to re-write the program using arrays so that it performs better?

arrays also O(1), correct
Arrays are only O(1) when accessing a known offset. They are O(n) when you have to scan the array to find a particular match.
You will not out-perform a Dictionary for this type of operation in the general case.
You can leverage the known-offset access mentioned above to use arrays for this limited case. For example, if you know that you will only have input in the range 'a'..'z', you can keep an array of counts:
int[] counts = new int[26]; // all elements start at 0
counts[ch - 'a']++;
This type of code is fragile in that a change to the range of input values breaks it. Any performance gain will not matter vs. Dictionary for almost all cases. If you think your case really needs that small performance edge, benchmark before using the more fragile implementation with arrays.
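For illustration, here is a minimal sketch of the array-based version, assuming the input only ever contains lowercase 'a'..'z' (the guard and the printing loop are my own additions, not part of the original answer):
string a = "aabbcdc";
int[] counts = new int[26]; // elements start at 0
foreach (var ch in a)
{
    if (ch < 'a' || ch > 'z')
        throw new ArgumentOutOfRangeException(nameof(a), "Only 'a'..'z' is handled by this counting scheme.");
    counts[ch - 'a']++;
}
for (int i = 0; i < counts.Length; i++)
    if (counts[i] > 0)
        Console.WriteLine("Letter = {0}, Value = {1}", (char)('a' + i), counts[i]);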
This type of operation is pretty common. I wrote a generic class for it
[Serializable]
public class OccurrenceCounter<T>
{
private Dictionary<T, int> counts;
public OccurrenceCounter()
{
Initialize(default(StreamingContext));
}
[OnDeserialized]
public void Initialize(StreamingContext context)
{
if (counts == null)
{
counts = new Dictionary<T, int>();
counts.OnDeserialization(this);
}
}
public int this[T key]
{
get
{
if (counts.ContainsKey(key))
{
return counts[key];
}
else return 0;
}
}
public bool ContainsKey(T key)
{
return counts.ContainsKey(key);
}
public int Total()
{
return counts.Sum(c => c.Value);
}
public int Count()
{
return counts.Count;
}
public int TotalFor(IEnumerable<T> keys)
{
if (keys == null) throw new ArgumentException("Parameter keys must not be null.");
HashSet<T> hash = keys.ToHashSet();
return counts.Where(k => hash.Contains(k.Key)).Sum(k => k.Value);
}
public void Increment(T key, int amount = 1)
{
if (!counts.ContainsKey(key))
{
counts.Add(key, amount); // Initialize to zero and increment
}
else
{
counts[key]+=amount;
}
}
public void Decrement(T key, int amount = 1)
{
if (!counts.ContainsKey(key))
{
counts.Add(key, -amount); // Initialize to zero and decrement
}
else
{
counts[key]-=amount;
}
}
/// <summary>
/// Could not correctly implement IEnumerable on .NET (seems to compile on Mono). Should be fixed.
/// See: http://stackoverflow.com/questions/16270722/ienumerablet-int-arity-and-generic-type-definitions
/// </summary>
/// <returns></returns>
public IEnumerable<KeyValuePair<T, int>> Iterate()
{
foreach (KeyValuePair<T, int> kvp in counts) yield return kvp;
}
}

If you are sure the characters all fall within 0-255 (i.e. they fit in a single byte; note that ASCII proper only covers 0-127), you can use an array and index it by the char value.
var arr = new int[256]; // all elements start at 0
foreach(var @char in a)
{
arr[@char]++;
}
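To print the non-zero counts afterwards, something along these lines would work (a sketch, not part of the original answer):
for (int i = 0; i < arr.Length; i++)
{
    if (arr[i] > 0)
        Console.WriteLine("Letter = {0}, Value = {1}", (char)i, arr[i]);
}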

How to Implement a sorting function to a CollectionView in MAUI [duplicate]

I have the following class:
[DataContract]
public class Pair<TKey, TValue> : INotifyPropertyChanged, IDisposable
{
public Pair(TKey key, TValue value)
{
Key = key;
Value = value;
}
#region Properties
[DataMember]
public TKey Key
{
get
{ return m_key; }
set
{
m_key = value;
OnPropertyChanged("Key");
}
}
[DataMember]
public TValue Value
{
get { return m_value; }
set
{
m_value = value;
OnPropertyChanged("Value");
}
}
#endregion
#region Fields
private TKey m_key;
private TValue m_value;
#endregion
#region INotifyPropertyChanged Members
public event PropertyChangedEventHandler PropertyChanged;
protected void OnPropertyChanged(string name)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null)
{
handler(this, new PropertyChangedEventArgs(name));
}
}
#endregion
#region IDisposable Members
public void Dispose()
{ }
#endregion
}
Which I've put in an ObservableCollection :
ObservableCollection<Pair<ushort, string>> my_collection =
new ObservableCollection<Pair<ushort, string>>();
my_collection.Add(new Pair<ushort, string>(7, "aaa"));
my_collection.Add(new Pair<ushort, string>(3, "xey"));
my_collection.Add(new Pair<ushort, string>(6, "fty"));
Q : How do I sort it by key ?
This simple extension worked beautifully for me. I just had to make sure that MyObject was IComparable. When the sort method is called on the observable collection of MyObjects, the CompareTo method on MyObject is called, which calls my Logical Sort method. While it doesn't have all the bells and whistles of the rest of the answers posted here, it's exactly what I needed.
static class Extensions
{
public static void Sort<T>(this ObservableCollection<T> collection) where T : IComparable
{
List<T> sorted = collection.OrderBy(x => x).ToList();
for (int i = 0; i < sorted.Count(); i++)
collection.Move(collection.IndexOf(sorted[i]), i);
}
}
public class MyObject: IComparable
{
public int CompareTo(object o)
{
MyObject a = this;
MyObject b = (MyObject)o;
return Utils.LogicalStringCompare(a.Title, b.Title);
}
public string Title;
}
.
.
.
myCollection = new ObservableCollection<MyObject>();
//add stuff to collection
myCollection.Sort();
I found a relevant blog entry that provides a better answer than the ones here:
http://kiwigis.blogspot.com/2010/03/how-to-sort-obversablecollection.html
UPDATE
The ObservableSortedList that @romkyns points out in the comments automatically maintains sort order.
Implements an observable collection which maintains its items in sorted order. In particular, changes to item properties that result in order changes are handled correctly.
However note also the remark
May be buggy due to the comparative complexity of the interface involved and its relatively poor documentation (see https://stackoverflow.com/a/5883947/33080).
You can use this simple method:
public static void Sort<TSource, TKey>(this Collection<TSource> source, Func<TSource, TKey> keySelector)
{
List<TSource> sortedList = source.OrderBy(keySelector).ToList();
source.Clear();
foreach (var sortedItem in sortedList)
source.Add(sortedItem);
}
You can sort like this:
_collection.Sort(i => i.Key);
Sorting an observable and returning the same object sorted can be done using an extension method. For larger collections watch out for the number of collection changed notifications.
I have updated my code to improve performance (thanks to nawfal) and to handle duplicates, which no other answers here do at the time of writing. The observable is partitioned into a sorted left half and an unsorted right half, and each time the minimum item (as found in the pre-sorted list) is moved from the unsorted partition to the end of the sorted one. Worst case O(n) move operations. Essentially a selection sort (see below for output).
public static void Sort<T>(this ObservableCollection<T> collection)
where T : IComparable<T>, IEquatable<T>
{
List<T> sorted = collection.OrderBy(x => x).ToList();
int ptr = 0;
while (ptr < sorted.Count - 1)
{
if (!collection[ptr].Equals(sorted[ptr]))
{
int idx = search(collection, ptr+1, sorted[ptr]);
collection.Move(idx, ptr);
}
ptr++;
}
}
public static int search<T>(ObservableCollection<T> collection, int startIndex, T other)
{
for (int i = startIndex; i < collection.Count; i++)
{
if (other.Equals(collection[i]))
return i;
}
return -1; // decide how to handle error case
}
usage:
Sample with an observer (used a Person class to keep it simple)
public class Person:IComparable<Person>,IEquatable<Person>
{
public string Name { get; set; }
public int Age { get; set; }
public int CompareTo(Person other)
{
if (this.Age == other.Age) return 0;
return this.Age.CompareTo(other.Age);
}
public override string ToString()
{
return Name + " aged " + Age;
}
public bool Equals(Person other)
{
if (this.Name.Equals(other.Name) && this.Age.Equals(other.Age)) return true;
return false;
}
}
static void Main(string[] args)
{
Console.WriteLine("adding items...");
var observable = new ObservableCollection<Person>()
{
new Person {Name = "Katy", Age = 51},
new Person {Name = "Jack", Age = 12},
new Person {Name = "Bob", Age = 13},
new Person {Name = "Alice", Age = 39},
new Person {Name = "John", Age = 14},
new Person {Name = "Mary", Age = 41},
new Person {Name = "Jane", Age = 20},
new Person {Name = "Jim", Age = 39},
new Person {Name = "Sue", Age = 5},
new Person {Name = "Kim", Age = 19}
};
//what do observers see?
observable.CollectionChanged += (sender, e) =>
{
Console.WriteLine(
e.OldItems[0] + " move from " + e.OldStartingIndex + " to " + e.NewStartingIndex);
int i = 0;
foreach (var person in sender as ObservableCollection<Person>)
{
if (i == e.NewStartingIndex)
{
Console.Write("(" + (person as Person).Age + "),");
}
else
{
Console.Write((person as Person).Age + ",");
}
i++;
}
Console.WriteLine();
};
Details of sorting progress showing how the collection is pivoted:
Sue aged 5 move from 8 to 0
(5),51,12,13,39,14,41,20,39,19,
Jack aged 12 move from 2 to 1
5,(12),51,13,39,14,41,20,39,19,
Bob aged 13 move from 3 to 2
5,12,(13),51,39,14,41,20,39,19,
John aged 14 move from 5 to 3
5,12,13,(14),51,39,41,20,39,19,
Kim aged 19 move from 9 to 4
5,12,13,14,(19),51,39,41,20,39,
Jane aged 20 move from 8 to 5
5,12,13,14,19,(20),51,39,41,39,
Alice aged 39 move from 7 to 6
5,12,13,14,19,20,(39),51,41,39,
Jim aged 39 move from 9 to 7
5,12,13,14,19,20,39,(39),51,41,
Mary aged 41 move from 9 to 8
5,12,13,14,19,20,39,39,(41),51,
The Person class implements both IComparable and IEquatable; the latter is used to minimise the changes to the collection and so reduce the number of change notifications raised.
EDIT: the code above sorts the same collection in place without creating a new copy.
To return an ObservableCollection, call .ToObservableCollection on sortedOC using e.g. [this implementation][1].
Original answer (this creates a new collection):
You can use LINQ, as the doSort method below illustrates. A quick code snippet that produces:
3:xey
6:fty
7:aaa
Alternatively you could use an extension method on the collection itself
var sortedOC = _collection.OrderBy(i => i.Key);
private void doSort()
{
ObservableCollection<Pair<ushort, string>> _collection =
new ObservableCollection<Pair<ushort, string>>();
_collection.Add(new Pair<ushort,string>(7,"aaa"));
_collection.Add(new Pair<ushort, string>(3, "xey"));
_collection.Add(new Pair<ushort, string>(6, "fty"));
var sortedOC = from item in _collection
orderby item.Key
select item;
foreach (var i in sortedOC)
{
Debug.WriteLine(i);
}
}
public class Pair<TKey, TValue>
{
private TKey _key;
public TKey Key
{
get { return _key; }
set { _key = value; }
}
private TValue _value;
public TValue Value
{
get { return _value; }
set { _value = value; }
}
public Pair(TKey key, TValue value)
{
_key = key;
_value = value;
}
public override string ToString()
{
return this.Key + ":" + this.Value;
}
}
WPF provides live sorting out-of-the-box using the ListCollectionView class...
public ObservableCollection<string> MyStrings { get; set; }
private ListCollectionView _listCollectionView;
private void InitializeCollection()
{
MyStrings = new ObservableCollection<string>();
_listCollectionView = CollectionViewSource.GetDefaultView(MyStrings)
as ListCollectionView;
if (_listCollectionView != null)
{
_listCollectionView.IsLiveSorting = true;
_listCollectionView.CustomSort = new
CaseInsensitiveComparer(CultureInfo.InvariantCulture);
}
}
Once this initialization is complete, there's nothing more to do. The advantage over a passive sort is that the ListCollectionView does all the heavy lifting in a way that's transparent to the developer. New items are automatically placed in their correct sort order. Any class that implements IComparer is suitable for the CustomSort property.
See ListCollectionView for the documentation and other features.
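For illustration, a custom sort could be supplied with a small comparer like the sketch below; the PersonAgeComparer and Person types are hypothetical and not part of the original answer:
using System.Collections;
public class PersonAgeComparer : IComparer
{
    // ListCollectionView.CustomSort expects the non-generic IComparer.
    public int Compare(object x, object y)
    {
        return ((Person)x).Age.CompareTo(((Person)y).Age);
    }
}
// During initialization:
// _listCollectionView.CustomSort = new PersonAgeComparer();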
I liked the bubble sort extension method approach on "Richie"'s blog above, but I don't necessarily want to just sort comparing the entire object. I more often want to sort on a specific property of the object. So I modified it to accept a key selector the way OrderBy does so you can choose which property to sort on:
public static void Sort<TSource, TKey>(this ObservableCollection<TSource> source, Func<TSource, TKey> keySelector)
{
if (source == null) return;
Comparer<TKey> comparer = Comparer<TKey>.Default;
for (int i = source.Count - 1; i >= 0; i--)
{
for (int j = 1; j <= i; j++)
{
TSource o1 = source[j - 1];
TSource o2 = source[j];
if (comparer.Compare(keySelector(o1), keySelector(o2)) > 0)
{
source.Remove(o1);
source.Insert(j, o1);
}
}
}
}
Which you would call the same way you would call OrderBy except it will sort the existing instance of your ObservableCollection instead of returning a new collection:
ObservableCollection<Person> people = new ObservableCollection<Person>();
...
people.Sort(p => p.FirstName);
@NielW's answer is the way to go for real in-place sorting. I wanted to add a slightly altered solution that lets you bypass having to use IComparable:
static class Extensions
{
public static void Sort<TSource, TKey>(this ObservableCollection<TSource> collection, Func<TSource, TKey> keySelector)
{
List<TSource> sorted = collection.OrderBy(keySelector).ToList();
for (int i = 0; i < sorted.Count(); i++)
collection.Move(collection.IndexOf(sorted[i]), i);
}
}
now you can call it like most any LINQ method:
myObservableCollection.Sort(o => o.MyProperty);
I would like to add to @NielW's answer by incorporating a method that resembles OrderBy. Add this method as an extension:
public static void Sort<T, TKey>(this ObservableCollection<T> collection, Func<T, TKey> keySelector)
{
List<T> sorted = collection.OrderBy(keySelector).ToList();
for (int i = 0; i < sorted.Count; i++)
collection.Move(collection.IndexOf(sorted[i]), i);
}
And use like:
myCollection = new ObservableCollection<MyObject>();
//Sorts in place, using a specific key selector
myCollection.Sort(x => x.ID);
A variation is where you sort the collection in place using a selection sort algorithm. Elements are moved into place using the Move method. Each move will fire the CollectionChanged event with NotifyCollectionChangedAction.Move (and also PropertyChanged with property name Item[]).
This algorithm has some nice properties:
The algorithm can be implemented as a stable sort.
The number of items moved in the collection (e.g. CollectionChanged events fired) is almost always less than other similar algorithms like insertion sort and bubble sort.
The algorithm is quite simple. The collection is iterated to find the smallest element which is then moved to the start of the collection. The process is repeated starting at the second element and so on until all elements have been moved into place. The algorithm is not terribly efficient but for anything you are going to display in a user interface it shouldn't matter. However, in terms of the number of move operations it is quite efficient.
Here is an extension method that for simplicity requires that the elements implement IComparable<T>. Other options are using an IComparer<T> or a Func<T, T, Int32>.
public static class ObservableCollectionExtensions {
public static void Sort<T>(this ObservableCollection<T> collection) where T : IComparable<T> {
if (collection == null)
throw new ArgumentNullException("collection");
for (var startIndex = 0; startIndex < collection.Count - 1; startIndex += 1) {
var indexOfSmallestItem = startIndex;
for (var i = startIndex + 1; i < collection.Count; i += 1)
if (collection[i].CompareTo(collection[indexOfSmallestItem]) < 0)
indexOfSmallestItem = i;
if (indexOfSmallestItem != startIndex)
collection.Move(indexOfSmallestItem, startIndex);
}
}
}
Sorting a collection is simply a matter of invoking the extension method:
var collection = new ObservableCollection<String>(...);
collection.Sort();
To improve the extension method from xr280xr's answer a little, I added an optional bool parameter to determine whether the sorting is descending or not. I also included the suggestion made by Carlos P in the comment to that answer. Please see below.
public static void Sort<TSource, TKey>(this ObservableCollection<TSource> source, Func<TSource, TKey> keySelector, bool desc = false)
{
if (source == null) return;
Comparer<TKey> comparer = Comparer<TKey>.Default;
for (int i = source.Count - 1; i >= 0; i--)
{
for (int j = 1; j <= i; j++)
{
TSource o1 = source[j - 1];
TSource o2 = source[j];
int comparison = comparer.Compare(keySelector(o1), keySelector(o2));
if (desc && comparison < 0)
source.Move(j, j - 1);
else if (!desc && comparison > 0)
source.Move(j - 1, j);
}
}
}
Do you need to keep your collection sorted at all times? When retrieving the pairs, do you need them to always be sorted, or only some of the time (maybe just for presenting)? How big do you expect your collection to be? There are a lot of factors that can help you decide which method to use.
If you need the collection to be sorted at all times, even when you insert or delete elements, and insertion speed is not a problem, maybe you should implement some kind of SortedObservableCollection like @Gerrie Schenck mentioned or check out this implementation.
If you only need your collection sorted a few times, use:
var sorted = my_collection.OrderBy(p => p.Key);
This will take some time to sort the collection, but even so, it might be the best solution depending on what you're doing with it.
My current answer already has the most votes, but I found a better, and more modern, way of doing this.
class MyObject
{
public int id { get; set; }
public string title { get; set; }
}
ObservableCollection<MyObject> myCollection = new ObservableCollection<MyObject>();
//add stuff to collection
// .
// .
// .
myCollection = new ObservableCollection<MyObject>(
myCollection.OrderBy(n => n.title, Comparer<string>.Create(
(x, y) => (Utils.Utils.LogicalStringCompare(x, y)))));
This worked for me; I found it a long time ago somewhere.
// SortableObservableCollection
public class SortableObservableCollection<T> : ObservableCollection<T>
{
public SortableObservableCollection(List<T> list)
: base(list)
{
}
public SortableObservableCollection()
{
}
public void Sort<TKey>(Func<T, TKey> keySelector, System.ComponentModel.ListSortDirection direction)
{
switch (direction)
{
case System.ComponentModel.ListSortDirection.Ascending:
{
ApplySort(Items.OrderBy(keySelector));
break;
}
case System.ComponentModel.ListSortDirection.Descending:
{
ApplySort(Items.OrderByDescending(keySelector));
break;
}
}
}
public void Sort<TKey>(Func<T, TKey> keySelector, IComparer<TKey> comparer)
{
ApplySort(Items.OrderBy(keySelector, comparer));
}
private void ApplySort(IEnumerable<T> sortedItems)
{
var sortedItemsList = sortedItems.ToList();
foreach (var item in sortedItemsList)
{
Move(IndexOf(item), sortedItemsList.IndexOf(item));
}
}
}
Usage:
MySortableCollection.Sort(x => x, System.ComponentModel.ListSortDirection.Ascending);
Make a new class SortedObservableCollection, derive it from ObservableCollection and implement IComparable<Pair<ushort, string>>.
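That answer gives no code, so here is a minimal sketch of what such a class might look like. It keeps items ordered on insert using Comparer<T>.Default rather than implementing IComparable on the collection itself, so treat it as one possible interpretation rather than the author's implementation:
public class SortedObservableCollection<T> : ObservableCollection<T>
{
    protected override void InsertItem(int index, T item)
    {
        // Ignore the requested index and insert in sorted position instead.
        var comparer = Comparer<T>.Default;
        int i = 0;
        while (i < Count && comparer.Compare(this[i], item) <= 0)
            i++;
        base.InsertItem(i, item);
    }
}
For the Pair<ushort, string> from the question, Pair would need to implement IComparable<Pair<ushort, string>> (comparing on Key) for Comparer<T>.Default to work.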
One way would be to convert it to a List and then call Sort(), providing a comparison delegate. Something like:-
(untested)
my_collection.ToList().Sort((left, right) => left.Key.CompareTo(right.Key));
What the heck, I'll throw in a quickly-cobbled-together answer as well...it looks a bit like some other implementations here, but I'll add it anywho:
(barely tested, hopefully I'm not embarrassing myself)
Let's state some goals first (my assumptions):
1) Must sort ObservableCollection<T> in place, to maintain notifications, etc.
2) Must not be horribly inefficient (i.e., something close to standard "good" sorting efficiency)
public static class Ext
{
public static void Sort<T>(this ObservableCollection<T> src)
where T : IComparable<T>
{
// Some preliminary safety checks
if(src == null) throw new ArgumentNullException("src");
if(!src.Any()) return;
// N for the select,
// + ~ N log N, assuming "smart" sort implementation on the OrderBy
// Total: N log N + N (est)
var indexedPairs = src
.Select((item,i) => Tuple.Create(i, item))
.OrderBy(tup => tup.Item2);
// N for another select
var postIndexedPairs = indexedPairs
.Select((item,i) => Tuple.Create(i, item.Item1, item.Item2));
// N for a loop over every element
var pairEnum = postIndexedPairs.GetEnumerator();
pairEnum.MoveNext();
for(int idx = 0; idx < src.Count; idx++, pairEnum.MoveNext())
{
src.RemoveAt(pairEnum.Current.Item1);
src.Insert(idx, pairEnum.Current.Item3);
}
// (very roughly) Estimated Complexity:
// N log N + N + N + N
// == N log N + 3N
}
}
None of these answers worked in my case: either they broke the binding, required so much additional code that it was kind of a nightmare, or the answer was just plain broken. So here's yet another, simpler answer I thought of. It's a lot less code, and it remains the same observable collection, with an additional this.Sort style method. Let me know if there's some reason I shouldn't be doing it this way (efficiency, etc.)?
public class ScoutItems : ObservableCollection<ScoutItem>
{
public void Sort(SortDirection _sDir, string _sItem)
{
//TODO: Add logic to look at _sItem and decide what property to sort on
IEnumerable<ScoutItem> si_enum = this.AsEnumerable();
if (_sDir == SortDirection.Ascending)
{
si_enum = si_enum.OrderBy(p => p.UPC).AsEnumerable();
} else
{
si_enum = si_enum.OrderByDescending(p => p.UPC).AsEnumerable();
}
foreach (ScoutItem si in si_enum)
{
int _OldIndex = this.IndexOf(si);
int _NewIndex = si_enum.ToList().IndexOf(si);
this.MoveItem(_OldIndex, _NewIndex);
}
}
}
...Where ScoutItem is my public class. Just seemed a lot simpler. Added benefit: it actually works and doesn't mess with bindings or return a new collection etc.
Alright, since I was having issues getting ObservableSortedList to work with XAML, I went ahead and created SortingObservableCollection. It inherits from ObservableCollection, so it works with XAML and I've unit tested it to 98% code coverage. I've used it in my own apps, but I won't promise that it is bug free. Feel free to contribute. Here is sample code usage:
var collection = new SortingObservableCollection<MyViewModel, int>(Comparer<int>.Default, model => model.IntPropertyToSortOn);
collection.Add(new MyViewModel(3));
collection.Add(new MyViewModel(1));
collection.Add(new MyViewModel(2));
// At this point, the order is 1, 2, 3
collection[0].IntPropertyToSortOn = 4; // As long as IntPropertyToSortOn uses INotifyPropertyChanged, this will cause the collection to resort correctly
It's a PCL, so it should work with Windows Store, Windows Phone, and .NET 4.5.1.
This is what I do with OC extensions:
/// <summary>
/// Synches the collection items to the target collection items.
/// This does not observe sort order.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="source">The items.</param>
/// <param name="updatedCollection">The updated collection.</param>
public static void SynchCollection<T>(this IList<T> source, IEnumerable<T> updatedCollection)
{
// Evaluate
if (updatedCollection == null) return;
// Make a list
var collectionArray = updatedCollection.ToArray();
// Remove items from FilteredViewItems not in list
source.RemoveRange(source.Except(collectionArray));
// Add items not in FilteredViewItems that are in list
source.AddRange(collectionArray.Except(source));
}
/// <summary>
/// Synches the collection items to the target collection items.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="source">The source.</param>
/// <param name="updatedCollection">The updated collection.</param>
/// <param name="canSort">if set to <c>true</c> [can sort].</param>
public static void SynchCollection<T>(this ObservableCollection<T> source,
IList<T> updatedCollection, bool canSort = false)
{
// Synch collection
SynchCollection(source, updatedCollection.AsEnumerable());
// Sort collection
if (!canSort) return;
// Update indexes as needed
for (var i = 0; i < updatedCollection.Count; i++)
{
// Index of new location
var index = source.IndexOf(updatedCollection[i]);
if (index == i) continue;
// Move item to new index if it has changed.
source.Move(index, i);
}
}
I needed to be able to sort by multiple things not just one. This answer is based on some of the other answers but it allows for more complex sorting.
static class Extensions
{
public static void Sort<T, TKey>(this ObservableCollection<T> collection, Func<ObservableCollection<T>, TKey> sort)
{
var sorted = (sort.Invoke(collection) as IOrderedEnumerable<T>).ToArray();
for (int i = 0; i < sorted.Count(); i++)
collection.Move(collection.IndexOf(sorted[i]), i);
}
}
When you use it, pass in a series of OrderBy/ThenBy calls. Like this:
Children.Sort(col => col.OrderByDescending(xx => xx.ItemType == "drive")
.ThenByDescending(xx => xx.ItemType == "folder")
.ThenBy(xx => xx.Path));
I learned a lot from the other solutions, but I found a couple of problems. First, some depend on IndexOf which tends to be pretty slow for large lists. Second, my ObservableCollection had EF entities and using the Remove seemed to corrupt some of the foreign key properties. Maybe I'm doing something wrong.
Regardless, a Move can be used instead of Remove/Insert, but that causes some issues with the performance fix.
To fix the performance problem, I create a dictionary with the IndexOf sorted values. To keep the dictionary up-to-date and to preserve the entity properties, use a swap implemented with two moves instead of one as implemented in other solutions.
A single move shifts the indexes of the elements between the locations, which would invalidate the IndexOf dictionary. Adding a second move to implement a swap restores locations.
public static void Sort<TSource, TKey>(this ObservableCollection<TSource> collection, Func<TSource, TKey> keySelector)
{
List<TSource> sorted = collection.OrderBy(keySelector).ToList();
Dictionary<TSource, int> indexOf = new Dictionary<TSource, int>();
for (int i = 0; i < sorted.Count; i++)
indexOf[sorted[i]] = i;
int idx = 0;
while (idx < sorted.Count)
if (!collection[idx].Equals(sorted[idx])) {
int newIdx = indexOf[collection[idx]]; // where should current item go?
collection.Move(newIdx, idx); // move whatever's there to current location
collection.Move(idx + 1, newIdx); // move current item to proper location
}
else {
idx++;
}
}
var collection = new ObservableCollection<int>();
collection.Add(7);
collection.Add(4);
collection.Add(12);
collection.Add(1);
collection.Add(20);
// ascending
collection = new ObservableCollection<int>(collection.OrderBy(a => a));
// descending
collection = new ObservableCollection<int>(collection.OrderByDescending(a => a));

How can I loop through n-dimensional sparse matrix (created by own class)?

I have two classes
class CSparseMatrix
{
public int NumberOfDimensions { get; set;}
public int DefaultNumber { get; set; }
List<int> dimensionsRanges = new List<int>(); // specifies the size of each dimension, e.g. {100, 100, 100, 120, ...}
List<CSparseCell> cells = new List<CSparseCell>(); // contains only values different from default for this matrix
}
class CSparseCell {
public int Value { get; set; }
public List<int> coordinates = new List<int>();
}
And the problem is: how do I loop through this CSparseMatrix and output all coordinates with their values, in a format like: [0, 0, 0, 0] - *value*, [0, 0, 0, 1] - *value*, [0, 0, 0, 2] - *value*, ... [dimensionsRanges[0]-1, dimensionsRanges[1]-1, dimensionsRanges[2]-1, dimensionsRanges[3]-1] - *value*, i.e. through all ranges, outputting every value (the matrix can have any number of dimensions).
That means the program has to output all values of the matrix, which can have any number of dimensions, and the ranges can differ. Since we don't know the number of dimensions in advance, we can't use n nested loops, and I have no idea of an algorithm or method to iterate through all values given List<int> dimensionsRanges.
The way, we get the specific value from this "matrix" is
public int GetValueFromCell(List<int> coordinate)
{
foreach(CSparseCell cell in cells)
{
if(cell.coordinates.SequenceEqual(coordinate)) { // positional comparison; All + Contains would ignore coordinate order
return cell.Value;
}
}
return DefaultNumber;
}
Since you say your matrix is large, avoid returning CSparseCell by pushing the Value lookup to the answer computation.
You create a method to return all the coordinates in the sparse matrix, using a helper method to increment coordinates. NOTE: I changed the coordinate increment method to use a for loop instead of do which might be easier to understand (?).
void IncCoord(ref List<int> aCoord) { // ref not needed, just for documentation
for (var curDim = NumberOfDimensions - 1; curDim >= 0; --curDim) {
if (aCoord[curDim] == dimensionsRanges[curDim] - 1) // handle carry
aCoord[curDim] = 0;
else {
++aCoord[curDim];
break;
}
}
}
public IEnumerable<List<int>> AllCoords() {
var curCellCoord = Enumerable.Repeat(0, NumberOfDimensions).ToList();
var numCells = dimensionsRanges.Product();
for (int j1 = 0; j1 < numCells; ++j1) {
yield return curCellCoord.ToList();
IncCoord(ref curCellCoord);
}
}
Now you can use this method to get all Values and you can format the output however you like. Assuming t is a CSparseMatrix:
var ans = t.AllCoords().Select(c => $"[{c.Join(",")}] - {t.GetValueFromCell(c)}");
A couple of helper extension methods are used:
public static class IEnumerableExt {
public static int Product(this IEnumerable<int> src) => src.Aggregate(1, (a,n) => a * n);
}
public static class StringExt {
public static string Join<T>(this IEnumerable<T> items, string sep) => String.Join(sep, items);
}
I'd make a couple of suggestions. First, tying the coordinate and the value together complicates things. It'd be much better to separate those two.
Secondly, having to search the entire cell list to find a single cell makes it very inefficient. You can create a simple sparse matrix out of a dictionary. This is not the best way by any means, but with the unknown key requirements you have given it's at least better than scanning the list. (Is it really a matrix if the key is 1..n?)
Here's an example. First we make the coordinates their own type.
readonly struct SparseCoord : IEquatable<SparseCoord>
{
public static SparseCoord ToCoordinate(params int[] coords)
{
return new SparseCoord(coords);
}
public SparseCoord(int[] c)
{
this.Coordinate = new int[c?.Length ?? 0];
if (null != c)
c.CopyTo(this.Coordinate, 0);
}
public int[] Coordinate { get; }
public bool Equals([AllowNull] SparseCoord other)
{
return Enumerable.SequenceEqual(this.Coordinate, other.Coordinate);
}
public override bool Equals(object obj)
{
if (obj is SparseCoord c)
return this.Equals(c);
return false;
}
public override int GetHashCode()
{
var hash = new HashCode();
foreach (var i in this.Coordinate)
hash.Add(i);
return hash.ToHashCode();
}
public override string ToString()
{
StringBuilder sb = new StringBuilder();
sb.Append('(');
foreach (var i in this.Coordinate)
{
sb.Append(i);
sb.Append(',');
}
sb.Length = sb.Length - 1;
sb.Append(')');
return sb.ToString();
}
}
Then we create a sparse matrix using a dictionary as the storage. Note this is obviously incomplete, as there's no way to clear it partially or completely, but again it's just an example.
class SparseMatrix : IEnumerable<KeyValuePair<SparseCoord, int>>
{
Dictionary<SparseCoord, int> cells = new Dictionary<SparseCoord, int>();
public int this[SparseCoord coord]
{
get
{
if (this.cells.TryGetValue(coord, out var ret))
return ret;
return 0;
}
set
{
this.cells[coord] = value;
}
}
public IEnumerator<KeyValuePair<SparseCoord, int>> GetEnumerator()
{
return this.cells.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
}
Then you can add values and iterate over the contents:
static void Main(string[] _)
{
SparseMatrix matrix = new SparseMatrix();
matrix[SparseCoord.ToCoordinate(1, 2, 3)] = 1;
matrix[SparseCoord.ToCoordinate(5, 6, 7)] = 2;
foreach (var value in matrix)
Console.WriteLine(value);
}
Finally I would reiterate that if your matrix really is going to get "big" as you said in a comment then you should invest some time in looking at some well known implementations of sparse matrix.

Dictionary with class as Key

I am studying electronic engineering, and I am a beginner in C#. I have measured data and I would like to store it in a 2 dimensional way. I thought I could make a Dictionary like this:
Dictionary<Key, string> dic = new Dictionary<Key, string>();
"Key" here is my a own class with two int variables. Now I want to store the data in this Dictionary but it doesn't work so far. If I want to read the data with the special Key, the error report says, that the Key is not available in the Dictionary.
Here is the class Key:
public partial class Key
{
public Key(int Bahn, int Zeile) {
myBahn = Bahn;
myZeile = Zeile;
}
public int getBahn()
{
return myBahn;
}
public int getZeile()
{
return myZeile;
}
private int myBahn;
private int myZeile;
}
For testing I did something like this:
Putting elements in:
Key KE = new Key(1,1);
dic.Add(KE, "hans");
...
Getting elements out:
Key KE = new Key(1,1);
monitor.Text = dic[KE];
Does anyone have an idea?
You need to override methods GetHashCode and Equals in your own class to use it as a key.
class Foo
{
public string Name { get; set;}
public int FooID {get; set;}
public override int GetHashCode()
{
return FooID;
}
public override bool Equals(object obj)
{
return Equals(obj as Foo);
}
public bool Equals(Foo obj)
{
return obj != null && obj.FooID == this.FooID;
}
}
Though you could use a class as key by implementing your own Equals and GetHashCode, I would not advise doing it if you're not yet familiar with C#.
These methods will be invoked by C# internal libraries, which expect them to work exactly as per specification, handling all edge cases gracefully. If you put a bug in them, you might be in for an unpleasant head scratching session.
In my opinion, it would be no less efficient and way simpler to create a key on the spot using tried, true and tested existing types that already support hashing and comparison.
From your angular coordinates, e.g.:
int Bahn = 15;
int Zeile = 30;
You could use a string (e.g. "15,30"):
String Key (int Bahn, int Zeile) { return $"{Bahn},{Zeile}"; }
var myDict = new Dictionary<string, string>();
myDict.Add (Key(Bahn,Zeile), myString);
or a two elements tuple (e.g. <15,30>) if you need something more efficient:
Tuple<int,int> Key (int Bahn, int Zeile) { return Tuple.Create(Bahn,Zeile); }
var myDict = new Dictionary<Tuple<int, int>, string>();
myDict.Add (Key(Bahn,Zeile), myString);
or a mere combination of your two angles if the range is small enough to fit into an int (e.g. 15+30*360) if you need something even more efficient:
int Key (int Bahn, int Zeile) { return Bahn+360*Zeile; }
var myDict = new Dictionary<int, string>();
myDict.Add (Key(Bahn,Zeile), myString);
That seems a lot less cumbersome than:
class Key {
// truckloads of code to implement the class,
// not to mention testing it thoroughly, including edge cases
}
var myDict = new Dictionary<Key, string>();
myDict.Add (new Key(Bahn,Zeile), myString);
Mutability of the key
Also, note that your keys must be immutable as long as they are used to index an entry.
If you change the value of Bahn or Zeile after the key has been used to add an element, you will mess up your dictionary something bad.
The behaviour is undefined, but you will most likely lose random entries, cause memory leaks and possibly crash with an exception if the internal libraries end up detecting an inconsistent state (like several entries indexed by the same key).
For instance:
var myKey = new Key(15, 30);
foreach (String data in row_of_data_sampled_every_10_degrees)
{
myDict.Add (myKey, data); // myKey must remain constant until the entry is removed
myKey.Bahn += 10; // changing it now spells the death of your dictionary
}
A side note on hashing angular coordinates
Now the catch is, the generic hashing functions provided for ints, strings and tuples are unlikely to produce optimal results for your specific set of data.
I would advise to start with a simple solution and only resort to specialized code if you run into actual performance issues. In which case you would probably be better off using a data structure more suited to spatial indexing (typically a quadtree in case of polar coordinates, or an octree if you want to reconstruct a 3D model from your scanner data).
For the sake of an alternate opinion, and not to be disparaging of Mr. Kuroi's solution (which is a good one), here is a simple class that can be used as a key in a map as well as for other uses. We use a complex class as a key because we want to track other things. In this example, we arrive at a vertex in a graph and we want to know if we have visited it before.
public class Vertex : IComparable<Vertex>, IEquatable<Vertex>, IComparable
{
String m_strVertexName = String.Empty;
bool m_bHasVisited = false;
public Vertex()
{
}
public Vertex(String strVertexName) : this()
{
m_strVertexName = strVertexName;
}
public override string ToString()
{
return m_strVertexName;
}
public string VertexName
{
get { return m_strVertexName; }
set
{
if (!String.IsNullOrEmpty(value))
m_strVertexName = value;
}
}
public bool HasVisited
{
get { return m_bHasVisited; }
set { m_bHasVisited = value; }
}
public override int GetHashCode()
{
return ToString().GetHashCode();
}
public int CompareTo(Vertex rhs)
{
if (Equals(rhs))
return 0;
return ToString().CompareTo(rhs.ToString());
}
int IComparable.CompareTo(object rhs)
{
if (!(rhs is Vertex))
throw new InvalidOperationException("CompareTo: Not a Vertex");
return CompareTo((Vertex)rhs);
}
public static bool operator < (Vertex lhs, Vertex rhs) => lhs.CompareTo(rhs) < 0;
public static bool operator > (Vertex lhs, Vertex rhs) => lhs.CompareTo(rhs) > 0;
public bool Equals (Vertex rhs) => ToString() == rhs.ToString();
public override bool Equals(object rhs)
{
if (!(rhs is Vertex))
return false;
return Equals((Vertex)rhs);
}
public static bool operator == (Vertex lhs, Vertex rhs) => lhs.Equals(rhs);
public static bool operator != (Vertex lhs, Vertex rhs) => !(lhs == rhs);
}

Generic form of NameValueCollection in .Net

Does .Net provides generic form of NameValueCollection or an alternative to
Dictionary<string,List<T>> ?
Something like
Person john = new Person();
...
Person vick = new Person();
...
NameValueCollection<Person> stringToPerson = new NameValueCollection<Person>();
stringToPerson.Add("John",john)
stringToPerson.Add("Vick",vick)
Actually, in my case I am forced to rely on Dictionary<string, List<Person>>; is there any other alternative?
Regards,
Jeez
There's no such thing built in to the BCL as far as I know. I would just write your own class which wraps a Dictionary<string, List<T>> internally and exposes appropriate methods (e.g., Add could add an element to the List<T> for the given key).
For example:
class NameValueCollection<T>
{
Dictionary<string, List<T>> _dict = new Dictionary<string, List<T>>();
public void Add(string name, T value)
{
List<T> list;
if (!_dict.TryGetValue(name, out list))
{
_dict[name] = list = new List<T>();
}
list.Add(value);
}
// etc.
}
The closest alternative is probably the ILookup<TKey, TElement> interface. At the moment, the only public type that implements it in the BCL is the immutable Lookup<TKey, TElement> class, an instance of which can be created with the Enumerable.ToLookup method. If you want a mutable type that implements the interface, you'll have to write one yourself; you can find an example implementation here.
In your case, you probably want an ILookup<string, Person>.
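For example, a lookup can be built from any sequence with ToLookup; the people variable and the Name property here are assumed for illustration:
ILookup<string, Person> peopleByName = people.ToLookup(p => p.Name);
foreach (Person match in peopleByName["John"]) // all persons whose Name is "John"
    Console.WriteLine(match);
// Indexing a missing key returns an empty sequence rather than throwing.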
Oh, I see what you want to do now. You want to be able to add to the Person collection without having to create a new List each time. Extension methods to the rescue!
public static void SafeAdd<TKey, TValue>(this IDictionary<TKey, ICollection<TValue>> dict, TKey key, TValue value)
{
ICollection<TValue> container;
if (!dict.TryGetValue(key, out container))
{
dict[key] = container = new HashSet<TValue>();
}
container.Add(value);
}
Usage:
var names = new Dictionary<string, ICollection<Person>>();
names.SafeAdd("John", new Person("John"));
Nothing inbuilt; there is Lookup<TKey,TValue> which operates as a multi-map, but that is immutable. I wrote a mutable EditableLookup<TKey,TValue> for MiscUtil which may help.
Generic NameValueCollection
What makes NameValueCollection special, unlike Dictionary, is that one key can hold several values.
The generic NameValueCollection<T> based on NameObjectCollectionBase is in the following code:
using System.Collections.Specialized;
namespace System.Collections.Generic
{
public class NameValueCollection<T> : NameObjectCollectionBase
{
private string[] _keys; // Cached keys.
private T[] _values; // Cached values.
// Resets the caches.
protected void InvalidateCachedArrays()
{
_values = null;
_keys = null;
}
// Converts an ArrayList to an array of T elements.
protected static T[] AsArray(ArrayList list)
{
int count = 0;
if (list == null || (count = list.Count) == 0)
return (T[])null;
T[] array = new T[count];
list.CopyTo(0, array, 0, count);
return array;
}
// Gets all values cache.
protected ArrayList GetAllValues()
{
int count = Count;
ArrayList arrayList = new ArrayList(count);
for (int i = 0; i < count; ++i)
{
arrayList.AddRange(Get(i));
}
return arrayList;
}
// Adds single value to collection.
public void Add(string name, T value)
{
InvalidateCachedArrays();
ArrayList arrayList = (ArrayList)BaseGet(name);
if (arrayList == null)
{
arrayList = new ArrayList(1);
if (value != null) arrayList.Add(value);
BaseAdd(name, arrayList);
}
else
{
if (value == null) return;
arrayList.Add(value);
}
}
// Adds range of values to collection.
public void Add(NameValueCollection<T> collection)
{
InvalidateCachedArrays();
int count = collection.Count;
for (int i = 0; i < count; i++)
{
string key = collection.GetKey(i);
T[] values = collection.Get(i);
foreach (var value in values)
{
Add(key, value);
}
}
}
// Sets a single value (previous values will be removed).
public void Set(string name, T value)
{
InvalidateCachedArrays();
BaseSet(name, new ArrayList(1) { value });
}
// Sets a range of values (previous values will be removed).
public void Set(string name, params T[] values)
{
InvalidateCachedArrays();
BaseSet(name, new ArrayList(values));
}
// Gets all values that paired with specified key.
public T[] Get(string name)
{
return AsArray((ArrayList)BaseGet(name));
}
// Gets all values at the specified index of collection.
public T[] Get(int index)
{
return AsArray((ArrayList)BaseGet(index));
}
// Gets string containing the key at the specified index.
public string GetKey(int index)
{
return BaseGetKey(index);
}
// Removes values from the specified key.
public void Remove(string name)
{
InvalidateCachedArrays();
BaseRemove(name);
}
// Removes all data from the collection.
public void Clear()
{
InvalidateCachedArrays();
BaseClear();
}
// All keys that the current collection contains.
public new string[] Keys
{
get
{
if (_keys == null)
_keys = BaseGetAllKeys();
return _keys;
}
}
// All values that the current collection contains.
public T[] Values
{
get
{
if (_values == null)
_values = AsArray(GetAllValues());
return _values;
}
}
// Values at the specified index.
public T[] this[int index]
{
get
{
return Get(index);
}
set
{
BaseSet(index, new ArrayList(value));
}
}
// Values at the specified key.
public T[] this[string name]
{
get
{
return Get(name);
}
set
{
BaseSet(name, new ArrayList(value));
}
}
// Enumerates all entries.
public IEnumerable<KeyValuePair<string, T>> GetAllEntries()
{
foreach (string key in Keys)
{
foreach (T value in Get(key))
{
yield return new KeyValuePair<string, T>(key, value);
}
}
}
}
}
Usage:
NameValueCollection<int> collection = new NameValueCollection<int>();
collection.Add("a", 123);
collection.Add("a", 456); // 123 and 456 will be inserted into the same key.
collection.Add("b", 789); // 789 will be inserted into another key.
int[] a = collection.Get("a"); // contains 123 and 456.
int[] b = collection.Get("b"); // contains 789.
The above code implements the main features.
Here is the complete implementation of NameValueCollection<T> with additional tools.

Using an enum as an array index in C#

I want to do the same as in this question, that is:
enum DaysOfTheWeek {Sunday=0, Monday, Tuesday...};
string[] message_array = new string[number_of_items_at_enum];
...
Console.Write(message_array[(int)DaysOfTheWeek.Sunday]);
however, I would rather have something built in, rather than write this error-prone code. Is there a built-in module in C# that does just this?
If the values of your enum items are contiguous, the array method works pretty well. However, in any case, you could use Dictionary<DayOfTheWeek, string> (which is less performant, by the way).
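As a short sketch of the dictionary approach (the message strings are placeholders):
var messages = new Dictionary<DaysOfTheWeek, string>
{
    { DaysOfTheWeek.Sunday, "closed" },
    { DaysOfTheWeek.Monday, "open" }
    // ...
};
Console.Write(messages[DaysOfTheWeek.Sunday]);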
Since C# 7.3 it has been possible to use System.Enum as a constraint on type parameters. So the nasty hacks in the some of the other answers are no longer required.
Here's a very simple ArrayByEnum class that does exactly what the question asked.
Note that it will waste space if the enum values are non-contiguous, and won't cope with enum values that are too large for an int. I did say this example was very simple.
/// <summary>An array indexed by an Enum</summary>
/// <typeparam name="T">Type stored in array</typeparam>
/// <typeparam name="U">Indexer Enum type</typeparam>
public class ArrayByEnum<T,U> : IEnumerable where U : Enum // requires C# 7.3 or later
{
private readonly T[] _array;
private readonly int _lower;
public ArrayByEnum()
{
_lower = Convert.ToInt32(Enum.GetValues(typeof(U)).Cast<U>().Min());
int upper = Convert.ToInt32(Enum.GetValues(typeof(U)).Cast<U>().Max());
_array = new T[1 + upper - _lower];
}
public T this[U key]
{
get { return _array[Convert.ToInt32(key) - _lower]; }
set { _array[Convert.ToInt32(key) - _lower] = value; }
}
public IEnumerator GetEnumerator()
{
return Enum.GetValues(typeof(U)).Cast<U>().Select(i => this[i]).GetEnumerator();
}
}
Usage:
ArrayByEnum<string,MyEnum> myArray = new ArrayByEnum<string,MyEnum>();
myArray[MyEnum.First] = "Hello";
myArray[YourEnum.Other] = "World"; // compiler error
You could make a class or struct that could do the work for you
public class Caster
{
public enum DayOfWeek
{
Sunday = 0,
Monday,
Tuesday,
Wednesday,
Thursday,
Friday,
Saturday
}
public Caster() {}
public Caster(string[] data) { this.Data = data; }
public string this[DayOfWeek dow]{
get { return this.Data[(int)dow]; }
}
public string[] Data { get; set; }
public static implicit operator string[](Caster caster) { return caster.Data; }
public static implicit operator Caster(string[] data) { return new Caster(data); }
}
class Program
{
static void Main(string[] args)
{
Caster message_array = new string[7];
Console.Write(message_array[Caster.DayOfWeek.Sunday]);
}
}
EDIT
For lack of a better place to put this, I am posting a generic version of the Caster class below. Unfortunately, it relies on runtime checks to enforce TKey as an enum.
public enum DayOfWeek
{
Weekend,
Sunday = 0,
Monday,
Tuesday,
Wednesday,
Thursday,
Friday,
Saturday
}
public class TypeNotSupportedException : ApplicationException
{
public TypeNotSupportedException(Type type)
: base(string.Format("The type \"{0}\" is not supported in this context.", type.Name))
{
}
}
public class CannotBeIndexerException : ApplicationException
{
public CannotBeIndexerException(Type enumUnderlyingType, Type indexerType)
: base(
string.Format("The base type of the enum (\"{0}\") cannot be safely cast to \"{1}\".",
enumUnderlyingType.Name, indexerType)
)
{
}
}
public class Caster<TKey, TValue>
{
private readonly Type baseEnumType;
public Caster()
{
baseEnumType = typeof(TKey);
if (!baseEnumType.IsEnum)
throw new TypeNotSupportedException(baseEnumType);
}
public Caster(TValue[] data)
: this()
{
Data = data;
}
public TValue this[TKey key]
{
get
{
var enumUnderlyingType = Enum.GetUnderlyingType(baseEnumType);
var intType = typeof(int);
if (!enumUnderlyingType.IsAssignableFrom(intType))
throw new CannotBeIndexerException(enumUnderlyingType, intType);
var index = (int) Enum.Parse(baseEnumType, key.ToString());
return Data[index];
}
}
public TValue[] Data { get; set; }
public static implicit operator TValue[](Caster<TKey, TValue> caster)
{
return caster.Data;
}
public static implicit operator Caster<TKey, TValue>(TValue[] data)
{
return new Caster<TKey, TValue>(data);
}
}
// declaring and using it.
Caster<DayOfWeek, string> messageArray =
new[]
{
"Sunday",
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday",
"Saturday"
};
Console.WriteLine(messageArray[DayOfWeek.Sunday]);
Console.WriteLine(messageArray[DayOfWeek.Monday]);
Console.WriteLine(messageArray[DayOfWeek.Tuesday]);
Console.WriteLine(messageArray[DayOfWeek.Wednesday]);
Console.WriteLine(messageArray[DayOfWeek.Thursday]);
Console.WriteLine(messageArray[DayOfWeek.Friday]);
Console.WriteLine(messageArray[DayOfWeek.Saturday]);
Here you go:
string[] message_array = Enum.GetNames(typeof(DaysOfTheWeek));
If you really need the length, then just take the .Length on the result :)
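For example, to get the count used to size the array in the question:
int number_of_items_at_enum = Enum.GetNames(typeof(DaysOfTheWeek)).Length;
string[] message_array = new string[number_of_items_at_enum];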
You can get values with:
DaysOfTheWeek[] values_array = (DaysOfTheWeek[])Enum.GetValues(typeof(DaysOfTheWeek));
A compact form uses the enum as the key of a Dictionary, which lets you assign values of whatever type while staying strongly typed. In this case float values are returned, but the values could be complex class instances with properties, methods and more:
enum opacityLevel { Min, Default, Max }
private static readonly Dictionary<opacityLevel, float> _oLevels = new Dictionary<opacityLevel, float>
{
{ opacityLevel.Max, 40.0f },
{ opacityLevel.Default, 50.0f },
{ opacityLevel.Min, 100.0f }
};
//Access float value like this
var x = _oLevels[opacityLevel.Default];
If all you need is essentially a map, but don't want to incur performance overhead associated with dictionary lookups, this might work:
public class EnumIndexedArray<TKey, T> : IEnumerable<KeyValuePair<TKey, T>> where TKey : struct
{
public EnumIndexedArray()
{
if (!typeof (TKey).IsEnum) throw new InvalidOperationException("Generic type argument is not an Enum");
var size = Convert.ToInt32(Keys.Max()) + 1;
Values = new T[size];
}
protected T[] Values;
public static IEnumerable<TKey> Keys
{
get { return Enum.GetValues(typeof (TKey)).OfType<TKey>(); }
}
public T this[TKey index]
{
get { return Values[Convert.ToInt32(index)]; }
set { Values[Convert.ToInt32(index)] = value; }
}
private IEnumerable<KeyValuePair<TKey, T>> CreateEnumerable()
{
return Keys.Select(key => new KeyValuePair<TKey, T>(key, Values[Convert.ToInt32(key)]));
}
public IEnumerator<KeyValuePair<TKey, T>> GetEnumerator()
{
return CreateEnumerable().GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
So in your case you could derive:
class DaysOfWeekToStringsMap:EnumIndexedArray<DayOfWeek,string>{};
Usage:
var map = new DaysOfWeekToStringsMap();
//using the Keys static property
foreach(var day in DaysOfWeekToStringsMap.Keys){
map[day] = day.ToString();
}
foreach(var day in DaysOfWeekToStringsMap.Keys){
Console.WriteLine("map[{0}]={1}",day, map[day]);
}
// using iterator
foreach(var value in map){
Console.WriteLine("map[{0}]={1}",value.Key, value.Value);
}
Obviously this implementation is backed by an array, so non-contiguous enums like this:
enum
{
Ok = 1,
NotOk = 1000000
}
would result in excessive memory usage.
If you require maximum possible performance you might want to make it less generic and lose all the generic enum handling code I had to use to get it to compile and work. I didn't benchmark this though, so maybe it's no big deal.
Caching the Keys static property might also help.
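For instance, the Keys property could return a cached array so the reflection cost is paid only once per closed generic type (a sketch, not part of the original code):
private static readonly TKey[] CachedKeys =
    Enum.GetValues(typeof(TKey)).OfType<TKey>().ToArray();
public static IEnumerable<TKey> Keys
{
    get { return CachedKeys; }
}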
I realize this is an old question, but there have been a number of comments about the fact that all solutions so far use run-time checks to ensure the data type is an enum. Here is a complete solution (with some examples) that uses compile-time checks (as well as some comments and discussion from my fellow developers).
//There is no good way to constrain a generic class parameter to an Enum. The hack below does work at compile time,
// though it is convoluted. For examples of how to use the two classes EnumIndexedArray and ObjEnumIndexedArray,
// see AssetClassArray below. Or, e.g.
// EConstraint.EnumIndexedArray<int, YourEnum> x = new EConstraint.EnumIndexedArray<int, YourEnum>();
// See this post
// http://stackoverflow.com/questions/79126/create-generic-method-constraining-t-to-an-enum/29581813#29581813
// and the answer/comments by Julien Lebosquain
public class EConstraint : HackForCompileTimeConstraintOfTEnumToAnEnum<System.Enum> { }//THIS MUST BE THE ONLY IMPLEMENTATION OF THE ABSTRACT HackForCompileTimeConstraintOfTEnumToAnEnum
public abstract class HackForCompileTimeConstraintOfTEnumToAnEnum<SystemEnum> where SystemEnum : class
{
//For object types T, users should use EnumIndexedObjectArray below.
public class EnumIndexedArray<T, TEnum>
where TEnum : struct, SystemEnum
{
//Needs to be public so that we can easily do things like intIndexedArray.data.sum()
// - just not worth writing up all the equivalent methods, and we can't inherit from T[] and guarantee proper initialization.
//Also, note that we cannot use Length here for initialization, even if Length were defined the same as GetNumEnums up to
// static qualification, because we cannot use a non-static for initialization here.
// Since we want Length to be non-static, in keeping with other definitions of the Length property, we define the separate static
// GetNumEnums, and then define the non-static Length in terms of the actual size of the data array, just for clarity,
// safety and certainty (in case someone does something stupid like resizing data).
public T[] data = new T[GetNumEnums()];
//First, a couple of statics allowing easy use of the enums themselves.
public static TEnum[] GetEnums()
{
return (TEnum[])Enum.GetValues(typeof(TEnum));
}
public TEnum[] getEnums()
{
return GetEnums();
}
//Provide a static method of getting the number of enums. The Length property also returns this, but it is not static and cannot be use in many circumstances.
public static int GetNumEnums()
{
return GetEnums().Length;
}
//This should always return the same as GetNumEnums, but is not static and does it in a way that guarantees consistency with the member array.
public int Length { get { return data.Length; } }
//public int Count { get { return data.Length; } }
public EnumIndexedArray() { }
// [WDS 2015-04-17] Remove. This can be dangerous. Just force people to use EnumIndexedArray(T[] inputArray).
// [DIM 2015-04-18] Actually, if you think about it, EnumIndexedArray(T[] inputArray) is just as dangerous:
// For value types, both are fine. For object types, the latter causes each object in the input array to be referenced twice,
// while the former causes the single object t to be multiply referenced. Two references to each of many is no less dangerous
// than 3 or more references to one. So all of these are dangerous for object types.
// We could remove all these ctors from this base class, and create a separate
// EnumIndexedValueArray<T, TEnum> : EnumIndexedArray<T, TEnum> where T: struct ...
// but then specializing to TEnum = AssetClass would have to be done twice below, once for value types and once
// for object types, with a repetition of all the property definitions. Violating the DRY principle that much
// just to protect against stupid usage, clearly documented as dangerous, is not worth it IMHO.
public EnumIndexedArray(T t)
{
int i = Length;
while (--i >= 0)
{
this[i] = t;
}
}
public EnumIndexedArray(T[] inputArray)
{
if (inputArray.Length > Length)
{
throw new Exception(string.Format("Length of enum-indexed array ({0}) too big. Can't be more than {1}.", inputArray.Length, Length));
}
Array.Copy(inputArray, data, inputArray.Length);
}
public EnumIndexedArray(EnumIndexedArray<T, TEnum> inputArray)
{
Array.Copy(inputArray.data, data, data.Length);
}
//Clean data access
public T this[int ac] { get { return data[ac]; } set { data[ac] = value; } }
public T this[TEnum ac] { get { return data[Convert.ToInt32(ac)]; } set { data[Convert.ToInt32(ac)] = value; } }
}
public class EnumIndexedObjectArray<T, TEnum> : EnumIndexedArray<T, TEnum>
where TEnum : struct, SystemEnum
where T : new()
{
public EnumIndexedObjectArray(bool doInitializeWithNewObjects = true)
{
if (doInitializeWithNewObjects)
{
for (int i = Length; i > 0; this[--i] = new T()) ;
}
}
// The other ctor's are dangerous for object arrays
}
public class EnumIndexedArrayComparator<T, TEnum> : EqualityComparer<EnumIndexedArray<T, TEnum>>
where TEnum : struct, SystemEnum
{
private readonly EqualityComparer<T> elementComparer = EqualityComparer<T>.Default;
public override bool Equals(EnumIndexedArray<T, TEnum> lhs, EnumIndexedArray<T, TEnum> rhs)
{
if (lhs == rhs)
return true;
if (lhs == null || rhs == null)
return false;
//These cases should not be possible because of the way these classes are constructed.
// HOWEVER, the data member is public, so somebody _could_ do something stupid and make
// data=null, or make lhs.data == rhs.data, even though lhs!=rhs (above check)
//On the other hand, these are just optimizations, so it won't be an issue if we remove them anyway,
// Unless someone does something really dumb like setting .data to null or resizing to an incorrect size,
// in which case things will crash, but any developer who does this deserves to have it crash painfully...
//if (lhs.data == rhs.data)
// return true;
//if (lhs.data == null || rhs.data == null)
// return false;
int i = lhs.Length;
//if (rhs.Length != i)
// return false;
while (--i >= 0)
{
if (!elementComparer.Equals(lhs[i], rhs[i]))
return false;
}
return true;
}
public override int GetHashCode(EnumIndexedArray<T, TEnum> enumIndexedArray)
{
//This doesn't work: for two arrays ar1 and ar2, ar1.GetHashCode() != ar2.GetHashCode() even when ar1[i] == ar2[i] for all i (unless, of course, they are the exact same array object):
//return enumIndexedArray.GetHashCode();
//Code taken from comment by Jon Skeet - of course - in http://stackoverflow.com/questions/7244699/gethashcode-on-byte-array
//31 and 17 are used commonly elsewhere, but maybe because everyone is using Skeet's post.
//On the other hand, this is really not very critical.
unchecked
{
int hash = 17;
int i = enumIndexedArray.Length;
while (--i >= 0)
{
hash = hash * 31 + elementComparer.GetHashCode(enumIndexedArray[i]);
}
return hash;
}
}
}
}
//Because of the above hack, this fails at compile time - as it should. It would, otherwise, only fail at run time.
//public class ThisShouldNotCompile : EConstraint.EnumIndexedArray<int, bool>
//{
//}
//An example
public enum AssetClass { Ir, FxFwd, Cm, Eq, FxOpt, Cr };
public class AssetClassArrayComparator<T> : EConstraint.EnumIndexedArrayComparator<T, AssetClass> { }
public class AssetClassIndexedArray<T> : EConstraint.EnumIndexedArray<T, AssetClass>
{
public AssetClassIndexedArray()
{
}
public AssetClassIndexedArray(T t) : base(t)
{
}
public AssetClassIndexedArray(T[] inputArray) : base(inputArray)
{
}
public AssetClassIndexedArray(EConstraint.EnumIndexedArray<T, AssetClass> inputArray) : base(inputArray)
{
}
public T Cm { get { return this[AssetClass.Cm ]; } set { this[AssetClass.Cm ] = value; } }
public T FxFwd { get { return this[AssetClass.FxFwd]; } set { this[AssetClass.FxFwd] = value; } }
public T Ir { get { return this[AssetClass.Ir ]; } set { this[AssetClass.Ir ] = value; } }
public T Eq { get { return this[AssetClass.Eq ]; } set { this[AssetClass.Eq ] = value; } }
public T FxOpt { get { return this[AssetClass.FxOpt]; } set { this[AssetClass.FxOpt] = value; } }
public T Cr { get { return this[AssetClass.Cr ]; } set { this[AssetClass.Cr ] = value; } }
}
//Inherit from AssetClassIndexedArray<T>, not EnumIndexedObjectArray<T, AssetClass>, so we get the benefit of the public accessors defined above
public class AssetClassIndexedObjectArray<T> : AssetClassIndexedArray<T> where T : new()
{
public AssetClassIndexedObjectArray(bool bInitializeWithNewObjects = true)
{
if (bInitializeWithNewObjects)
{
for (int i = Length; i > 0; this[--i] = new T()) ;
}
}
}
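For completeness, here is a small usage sketch (illustrative only; the variable names below are not from the original code, and it assumes the EConstraint hack above compiles as shown):
var notionals = new AssetClassIndexedArray<double>(0.0); // every asset class slot starts at 0
notionals.Ir = 1000000;                                   // named accessor
notionals[AssetClass.Cr] = 250000;                        // or index directly by the enum value
// The comparator gives value-based equality, e.g. for using the arrays as dictionary keys.
var cache = new Dictionary<EConstraint.EnumIndexedArray<double, AssetClass>, string>(
    new AssetClassArrayComparator<double>());
cache[notionals] = "limits checked";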
EDIT:
If you are using C# 7.3 or later, PLEASE don't use this ugly solution. See Ian Goldby's answer from 2018.
You can always do some extra mapping to get an array index from an enum value in a consistent and well-defined way:
int ArrayIndexFromDaysOfTheWeekEnum(DaysOfWeek day)
{
switch (day)
{
case DaysOfWeek.Sunday: return 0;
case DaysOfWeek.Monday: return 1;
...
default: throw ...;
}
}
Be as specific as you can. One day someone will modify your enum and the code will fail because the enum's value was (mis)used as an array index.
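For illustration, a fully spelled-out sketch of such a mapping might look like this (the DaysOfWeek enum and the usage line are assumptions for the example, not code from the question):
enum DaysOfWeek { Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday }
static int ArrayIndexFromDaysOfTheWeekEnum(DaysOfWeek day)
{
    switch (day)
    {
        case DaysOfWeek.Sunday: return 0;
        case DaysOfWeek.Monday: return 1;
        case DaysOfWeek.Tuesday: return 2;
        case DaysOfWeek.Wednesday: return 3;
        case DaysOfWeek.Thursday: return 4;
        case DaysOfWeek.Friday: return 5;
        case DaysOfWeek.Saturday: return 6;
        default: throw new ArgumentOutOfRangeException(nameof(day), day, "Unmapped DaysOfWeek value.");
    }
}
// Usage: counts[ArrayIndexFromDaysOfTheWeekEnum(DaysOfWeek.Monday)]++;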
For future reference, the above problem can be summarized as follows:
I come from Delphi where you can define an array as follows:
type
{$SCOPEDENUMS ON}
TDaysOfTheWeek = (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);
TDaysOfTheWeekStrings = array[TDaysOfTheWeek] of string;
Then you can iterate through the array using Low and High:
for Dow := Low(TDaysOfTheWeek) to High(TDaysOfTheWeek) do
DaysOfTheWeekStrings[Dow] := '';
Though this is quite a contrived example, when dealing with array positions later in the code I can just write DaysOfTheWeekStrings[TDaysOfTheWeek.Monday]. This has the advantage that, should TDaysOfTheWeek grow, I do not have to remember the new size of the array, and so on. However, back to the C# world: I have found this example, C# Enum Array Example.
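For reference, a rough sketch of the closest straightforward C# equivalent (assuming the enum keeps the default 0..N-1 values) might be:
enum DaysOfTheWeek { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday }
// Size the array from the enum and iterate its values, so adding a member does not break this code.
var daysOfTheWeekStrings = new string[Enum.GetValues(typeof(DaysOfTheWeek)).Length];
foreach (DaysOfTheWeek dow in Enum.GetValues(typeof(DaysOfTheWeek)))
{
    daysOfTheWeekStrings[(int)dow] = string.Empty;
}
// Later: daysOfTheWeekStrings[(int)DaysOfTheWeek.Monday] = "Start of the working week";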
It was a very good answer by #ian-goldby, but it didn't address the issue raised by #zar-shardan, which is an issue I hit myself. Below is my take on a solution, followed by an extension class for converting an IEnumerable and a test class:
/// <summary>
/// An array indexed by an enumerated type instead of an integer
/// </summary>
public class ArrayIndexedByEnum<TKey, TElement> : IEnumerable<TElement> where TKey : Enum
{
private readonly Array _array;
private readonly Dictionary<TKey, TElement> _dictionary;
/// <summary>
/// Creates the initial array, populated with the defaults for TElement
/// </summary>
public ArrayIndexedByEnum()
{
var min = Convert.ToInt64(Enum.GetValues(typeof(TKey)).Cast<TKey>().Min());
var max = Convert.ToInt64(Enum.GetValues(typeof(TKey)).Cast<TKey>().Max());
var size = max - min + 1;
// Check that we aren't creating a ridiculously big array, if we are,
// then use a dictionary instead
if (min >= Int32.MinValue &&
max <= Int32.MaxValue &&
size < Enum.GetValues(typeof(TKey)).Length * 3L)
{
var lowerBound = Convert.ToInt32(min);
var upperBound = Convert.ToInt32(max);
_array = Array.CreateInstance(typeof(TElement), new int[] {(int)size }, new int[] { lowerBound });
}
else
{
_dictionary = new Dictionary<TKey, TElement>();
foreach (var value in Enum.GetValues(typeof(TKey)).Cast<TKey>())
{
_dictionary[value] = default(TElement);
}
}
}
/// <summary>
/// Gets the element by enumerated type
/// </summary>
public TElement this[TKey key]
{
// Check _array explicitly (rather than null-coalescing) so that a stored null reference-type element is returned as-is instead of falling through to the dictionary.
get => _array != null ? (TElement)_array.GetValue(Convert.ToInt32(key)) : _dictionary[key];
set
{
if (_array != null)
{
_array.SetValue(value, Convert.ToInt32(key));
}
else
{
_dictionary[key] = value;
}
}
}
/// <summary>
/// Gets a generic enumerator
/// </summary>
public IEnumerator<TElement> GetEnumerator()
{
return Enum.GetValues(typeof(TKey)).Cast<TKey>().Select(k => this[k]).GetEnumerator();
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
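A quick illustrative sketch (the Offset enum is made up for this example, not from the original question): because the backing array is created with a matching lower bound, the class handles enums whose values do not start at zero, without any manual offset arithmetic:
enum Offset { MinusTwo = -2, MinusOne = -1, Zero = 0, One = 1, Two = 2 }
var labels = new ArrayIndexedByEnum<Offset, string>();
labels[Offset.MinusTwo] = "lowest";
labels[Offset.Two] = "highest";
Console.WriteLine(labels[Offset.MinusTwo]); // "lowest"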
Here's the extension class:
/// <summary>
/// Extensions for converting an IEnumerable{TSource} to an ArrayIndexedByEnum{TKey, TElement}
/// </summary>
public static class ArrayIndexedByEnumExtensions
{
/// <summary>
/// Creates an ArrayIndexedByEnum{TKey, TElement} from a System.Collections.Generic.IEnumerable
/// according to the specified key selector and element selector functions.
/// </summary>
public static ArrayIndexedByEnum<TKey, TElement> ToArrayIndexedByEnum<TSource, TKey, TElement>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TSource, TElement> elementSelector) where TKey : Enum
{
var array = new ArrayIndexedByEnum<TKey, TElement>();
foreach(var item in source)
{
array[keySelector(item)] = elementSelector(item);
}
return array;
}
/// <summary>
/// Creates an ArrayIndexedByEnum{TKey, TSource} from a System.Collections.Generic.IEnumerable
/// according to the specified key selector function.
/// </summary>
public static ArrayIndexedByEnum<TKey, TSource> ToArrayIndexedByEnum<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> keySelector) where TKey : Enum
{
return source.ToArrayIndexedByEnum(keySelector, i => i);
}
}
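A small hypothetical usage example of the extension (the task data and the System.DayOfWeek key are just for illustration):
var tasks = new[]
{
    (Day: DayOfWeek.Monday, Task: "Backup"),
    (Day: DayOfWeek.Friday, Task: "Deploy")
};
var tasksByDay = tasks.ToArrayIndexedByEnum(t => t.Day, t => t.Task);
Console.WriteLine(tasksByDay[DayOfWeek.Friday]); // "Deploy"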
And here are my tests:
[TestClass]
public class ArrayIndexedByEnumUnitTest
{
private enum OddNumbersEnum : UInt16
{
One = 1,
Three = 3,
Five = 5,
Seven = 7,
Nine = 9
}
private enum PowersOf2 : Int64
{
TwoP0 = 1,
TwoP1 = 2,
TwoP2 = 4,
TwoP3 = 8,
TwoP4 = 16,
TwoP5 = 32,
TwoP6 = 64,
TwoP7 = 128,
TwoP8 = 256,
TwoP9 = 512,
TwoP10 = 1_024,
TwoP11 = 2_048,
TwoP12 = 4_096,
TwoP13 = 8_192,
TwoP14 = 16_384,
TwoP15 = 32_768,
TwoP16 = 65_536,
TwoP17 = 131_072,
TwoP18 = 262_144,
TwoP19 = 524_288,
TwoP20 = 1_048_576,
TwoP21 = 2_097_152,
TwoP22 = 4_194_304,
TwoP23 = 8_388_608,
TwoP24 = 16_777_216,
TwoP25 = 33_554_432,
TwoP26 = 67_108_864,
TwoP27 = 134_217_728,
TwoP28 = 268_435_456,
TwoP29 = 536_870_912,
TwoP30 = 1_073_741_824,
TwoP31 = 2_147_483_648,
TwoP32 = 4_294_967_296,
TwoP33 = 8_589_934_592,
TwoP34 = 17_179_869_184,
TwoP35 = 34_359_738_368,
TwoP36 = 68_719_476_736,
TwoP37 = 137_438_953_472,
TwoP38 = 274_877_906_944,
TwoP39 = 549_755_813_888,
TwoP40 = 1_099_511_627_776,
TwoP41 = 2_199_023_255_552,
TwoP42 = 4_398_046_511_104,
TwoP43 = 8_796_093_022_208,
TwoP44 = 17_592_186_044_416,
TwoP45 = 35_184_372_088_832,
TwoP46 = 70_368_744_177_664,
TwoP47 = 140_737_488_355_328,
TwoP48 = 281_474_976_710_656,
TwoP49 = 562_949_953_421_312,
TwoP50 = 1_125_899_906_842_624,
TwoP51 = 2_251_799_813_685_248,
TwoP52 = 4_503_599_627_370_496,
TwoP53 = 9_007_199_254_740_992,
TwoP54 = 18_014_398_509_481_984,
TwoP55 = 36_028_797_018_963_968,
TwoP56 = 72_057_594_037_927_936,
TwoP57 = 144_115_188_075_855_872,
TwoP58 = 288_230_376_151_711_744,
TwoP59 = 576_460_752_303_423_488,
TwoP60 = 1_152_921_504_606_846_976,
}
[TestMethod]
public void TestSimpleArray()
{
var array = new ArrayIndexedByEnum<OddNumbersEnum, string>();
var odds = Enum.GetValues(typeof(OddNumbersEnum)).Cast<OddNumbersEnum>().ToList();
// Store all the values
foreach (var odd in odds)
{
array[odd] = odd.ToString();
}
// Check the retrieved values are the same as what was stored
foreach (var odd in odds)
{
Assert.AreEqual(odd.ToString(), array[odd]);
}
}
[TestMethod]
public void TestPossiblyHugeArray()
{
var array = new ArrayIndexedByEnum<PowersOf2, string>();
var powersOf2s = Enum.GetValues(typeof(PowersOf2)).Cast<PowersOf2>().ToList();
// Store all the values
foreach (var powerOf2 in powersOf2s)
{
array[powerOf2] = powerOf2.ToString();
}
// Check the retrieved values are the same as what was stored
foreach (var powerOf2 in powersOf2s)
{
Assert.AreEqual(powerOf2.ToString(), array[powerOf2]);
}
}
}
