I have an IEnumerable<Point> collection. Let's say it contains 5 points (in reality it is more like 2000).
I want to order this collection so that a specific point in the collection becomes the first element; it's basically chopping a collection at a specific point and rejoining the two parts.
So my list of 5 points:
{0,0}, {10,0}, {10,10}, {5,5}, {0,10}
Reordered with respect to element at index 3 would become:
{5,5}, {0,10}, {0,0}, {10,0}, {10,10}
What is the most computationally efficient way of solving this problem, or is there a built-in method that already exists? If so, I can't seem to find one!
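In LINQ, the rotation can be written with Skip and Concat: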
var list = new[] { 1, 2, 3, 4, 5 };
var rotated = list.Skip(3).Concat(list.Take(3));
// rotated is now {4, 5, 1, 2, 3}
A simple array copy is O(n) in this case, which should be good enough for almost all real-world purposes. However, I will grant you that in certain cases - if this is a part deep inside a multi-level algorithm - this may be relevant. Also, do you simply need to iterate through this collection in an ordered fashion or create a copy?
Linked lists are very easy to reorganize like this, although accessing random elements will be more costly. Overall, the computational efficiency will also depend on how exactly you access this collection of items (and also, what sort of items they are - value types or reference types?).
The standard .NET linked list does not seem to support such manual manipulation but in general, if you have a linked list, you can easily move around sections of the list in the way you describe, just by assigning new "next" and "previous" pointers to the endpoints.
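For illustration, a minimal sketch using the built-in LinkedList<T> (assuming a non-empty list and 0 <= k <= Count); each node move is O(1) and copies no element data:

static void Rotate<T>(LinkedList<T> list, int k)
{
    for (int i = 0; i < k; i++)
    {
        LinkedListNode<T> node = list.First;   // take the head node
        list.RemoveFirst();                    // unlink it (O(1))
        list.AddLast(node);                    // re-link the same node at the tail (O(1))
    }
}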
The C5 collection library supports this functionality: http://www.itu.dk/research/c5/.
Specifically, you are looking for the LinkedList<T>.Slide() method, which you can use on the object returned by LinkedList<T>.View().
A version without enumerating the list twice, but with higher memory consumption because of the T[] buffer:
public static IEnumerable<T> Rotate<T>(IEnumerable<T> source, int count)
{
    int i = 0;
    T[] temp = new T[count];
    foreach (var item in source)
    {
        if (i < count)
        {
            // Buffer the first 'count' items for later.
            temp[i] = item;
        }
        else
        {
            // Everything after the cut point streams straight through.
            yield return item;
        }
        i++;
    }
    // Finally emit the buffered head at the end.
    foreach (var item in temp)
    {
        yield return item;
    }
}
[Test]
public void TestRotate()
{
    var list = new[] { 1, 2, 3, 4, 5 };
    var rotated = Rotate(list, 3);
    Assert.That(rotated, Is.EqualTo(new[] { 4, 5, 1, 2, 3 }));
}
Note: Add argument checks.
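For instance, the checks might live in a non-iterator wrapper so they throw immediately rather than on first enumeration (RotateIterator here is a hypothetical rename of the iterator body above):

public static IEnumerable<T> Rotate<T>(IEnumerable<T> source, int count)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (count < 0) throw new ArgumentOutOfRangeException(nameof(count));
    return RotateIterator(source, count);   // the iterator shown above, renamed
}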
Another alternative to the LINQ method shown by ulrichb would be to use the Queue<T> class (a FIFO collection): dequeue up to your index, and enqueue each item you take out at the back.
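A rough sketch of that idea, rotating by 3 as in the example above:

var queue = new Queue<int>(new[] { 1, 2, 3, 4, 5 });
for (int i = 0; i < 3; i++)
{
    queue.Enqueue(queue.Dequeue());   // move the front item to the back
}
// enumerating queue now yields 4, 5, 1, 2, 3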
The naive implementation using LINQ would be:
var x = new[] { 1, 2, 3, 4 };
var tail = x.TakeWhile(i => i != 3);
var head = x.SkipWhile(i => i != 3);
var combined = head.Concat(tail); // is now 3, 4, 1, 2
Because both TakeWhile and SkipWhile scan from the start, you perform twice the comparisons needed to reach your first element in the combined sequence.
The solution is readable and compact, but not very efficient.
The solutions described by the other contributors may be more efficient, since they use special data structures such as arrays or lists.
You can write a user-defined extension method for List<T> that does the rotation by using List.Reverse(). I took the basic idea from the C++ Standard Template Library, which basically uses Reverse in three steps: Reverse(first, mid), Reverse(mid, last), Reverse(first, last).
As far as I know, this is the most efficient and fastest way. I tested with 1 billion elements and the rotation Rotate(0, 50000, 800000) takes 0.00097 seconds. (By the way: adding 1 billion ints to the List already takes 7.3 seconds)
Here's the extension you can use:
public static class Extensions
{
    public static void Rotate(this List<int> me, int first, int mid, int last)
    {
        // indexes are zero-based and inclusive!
        if (first >= mid || mid >= last)
            return;
        me.Reverse(first, mid - first + 1);   // reverse [first, mid]
        me.Reverse(mid + 1, last - mid);      // reverse [mid + 1, last]
        me.Reverse(first, last - first + 1);  // reverse [first, last]
    }
}
The usage is like:
static void Main(string[] args)
{
    List<int> iList = new List<int> { 0, 1, 2, 3, 4, 5 };
    Console.WriteLine("Before rotate:");
    foreach (var item in iList)
    {
        Console.Write(item + " ");
    }
    Console.WriteLine();
    int firstIndex = 0, midIndex = 2, lastIndex = 4;
    iList.Rotate(firstIndex, midIndex, lastIndex);
    Console.WriteLine($"After rotate {firstIndex}, {midIndex}, {lastIndex}:");
    foreach (var item in iList)
    {
        Console.Write(item + " ");
    }
    Console.ReadKey();
}
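With these index values the program prints:

Before rotate:
0 1 2 3 4 5
After rotate 0, 2, 4:
3 4 0 1 2 5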
C#: why does BinarySearch have to be performed on sorted arrays and lists?
Is there any other method that does not require me to sort the list?
It kinda messes with my program: I can't sort the list and still have it work the way I want.
A binary search works by repeatedly dividing the list of candidates in half, using the ordering to decide which half to search next. Imagine the following set:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
We can also represent this as a binary tree rooted at the middle element, 8, to make it easier to visualise.
Now, say we want to find the number 3. We can do it like so:
Is 3 smaller than 8? Yes. OK, now we're looking at everything between 1 and 7.
Is 3 smaller than 4? Yes. OK, now we're looking at everything between 1 and 3.
Is 3 smaller than 2? No. OK, now we're looking at 3.
We found it!
Now, if your list isn't sorted, how will we divide the list in half? The simple answer is: we can't. If we swap 3 and 15 in the example above, it would work like this:
Is 3 smaller than 8? Yes. OK, now we're looking at everything between 1 and 7.
Is 3 smaller than 4? Yes. OK, now we're looking at everything between 1 and 3 (except we swapped it with 15).
Is 3 smaller than 2? No. OK, now we're looking at 15.
Huh? There are no more items to check, but we didn't find it. I guess it's not in the list.
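For illustration, the built-in Array.BinarySearch shows the same failure mode; its documentation states the result is undefined for unsorted input:

int[] sorted = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
Console.WriteLine(Array.BinarySearch(sorted, 3));    // 2: found at index 2

int[] swapped = { 1, 2, 15, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 3 };
Console.WriteLine(Array.BinarySearch(swapped, 3));   // negative: misses 3, even though it's present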
The solution is to use an appropriate data type instead. For fast lookups of key/value pairs, use a Dictionary. For fast checks of whether something already exists, use a HashSet. For general storage, use a List or an array.
Dictionary example:
var values = new Dictionary<int, string>();
values[1] = "hello";
values[2] = "goodbye";
var value2 = values[2]; // fast: a Dictionary internally partitions keys' hash codes into buckets
HashSet example:
var mySet = new HashSet<int>();
mySet.Add(1);
mySet.Add(2);
if (mySet.Contains(2)) // this lookup is fast for the same reason as a dictionary.
{
// do something
}
List example:
var list = new List<int>();
list.Add(1);
list.Add(2);
if (list.Contains(2)) // not fast: it has to visit each item in the list, but it works OK for small sets or places where performance isn't critical
{
}
var idx2 = list.IndexOf(2);
If you have multiple values with the same key, you could store a list in a Dictionary like this:
var values = new Dictionary<int, List<string>>();
if (!values.ContainsKey(key))
{
values[key] = new List<string>();
}
values[key].Add("value1");
values[key].Add("value2");
There is no way to use binary search on unordered collections; a sorted collection is the central requirement of binary search. The key idea is that on every step you take the middle index m between l and r. On the first step they are 0 and size - 1, and after every step one of them moves past the middle: if x > arr[m] then l becomes m + 1, otherwise r becomes m - 1. Basically, on every step you keep half of the array you had, and of course it remains sorted. The code below is recursive; if you don't know what recursion is (an important concept in programming), it is worth reviewing first.
// C# implementation of recursive binary search
using System;

class GFG
{
    // Returns the index of x if it is present in
    // arr[l..r]; otherwise returns -1.
    static int binarySearch(int[] arr, int l, int r, int x)
    {
        if (r >= l)
        {
            int mid = l + (r - l) / 2;

            // The element is present at the middle itself.
            if (arr[mid] == x)
                return mid;

            // If the element is smaller than the middle element,
            // it can only be present in the left subarray.
            if (arr[mid] > x)
                return binarySearch(arr, l, mid - 1, x);

            // Otherwise it can only be present in the right subarray.
            return binarySearch(arr, mid + 1, r, x);
        }

        // We reach here when the element is not present in the array.
        return -1;
    }

    // Driver method to test the above.
    public static void Main()
    {
        int[] arr = { 2, 3, 4, 10, 40 };
        int n = arr.Length;
        int x = 10;
        int result = binarySearch(arr, 0, n - 1, x);
        if (result == -1)
            Console.WriteLine("Element not present");
        else
            Console.WriteLine("Element found at index " + result);
    }
}
Output:
Element found at index 3
Sure there is.
var list = new List<int>();
list.Add(42);
list.Add(1);
list.Add(54);
var index = list.IndexOf(1); //TADA!!!!
EDIT: OK, I hoped the irony was obvious. But strictly speaking, if your array is not sorted, you are pretty much stuck with linear search, readily available by means of IndexOf() or Enumerable.First().
I have to create an extension method Jump<T> that, given an arbitrary sequence s, returns an infinite sequence whose elements are obtained by visiting s in a circular way and skipping step elements each time. So, if step == 0, the whole sequence is returned (infinitely many times); if step == 1, taking all the numbers in the interval [0-10] as an example, it will return 0, 2, 4, 6, 8, 10, 1, 3, 5, etc. If step == 2 the result will be 0, 3, 6, 9, 1, 4, 7, 10, ...
Obviously this is only an example with an ordered list of ints; I need to do this with a generic sequence of T elements.
How can I achieve that?
To test it, I created an NUnit test as follows:
[Test]
public void Jumping_validArg_IsOk()
{
    var start = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 }
        .Jumping(3)
        .ToList() // remove this to make it work
        .Take(4);
    var expected = new[] { 1, 5, 9, 2 };
    CollectionAssert.AreEquivalent(start, expected);
}
but it seems to never end and it throws a System.OutOfMemoryException.
The solution:
I modified the answer I selected as best a little, to make it more general:
public static IEnumerable<T> Jump<T>(this IEnumerable<T> sequence, int step)
{
    var pos = 0;
    var list = sequence.ToList();
    while (true)
    {
        yield return list[pos];
        pos = (pos + step + 1) % list.Count;
    }
}
In this way it should work with any IEnumerable. I added a comment in the test to show what to delete to make it work. I hope it's all correct.
You can implement this very easily with a yield return statement:
public static IEnumerable<T> Jump<T>(this IList<T> data, int step)
{
    int pos = 0;
    while (true)
    {
        yield return data[pos];
        pos = (pos + step) % data.Count;
    }
}
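Note that this version advances by step positions per element rather than step + 1 as in the accepted modification, so a usage sketch with the question's data would be:

var data = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };
var first4 = data.Jump(4).Take(4).ToList();   // { 1, 5, 9, 2 }
// Take must come before ToList, because the sequence is infinite.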
There must be a better way to do this, I'm sure...
// Simplified code
var a = new List<int>() { 1, 2, 3, 4, 5, 6 };
var b = new List<int>() { 2, 3, 5, 7, 11 };
var z = new List<int>();
for (int i = 0; i < a.Count; i++)
    if (b.Contains(a[i]))
        z.Add(a[i]);
// (z) contains all of the numbers that are in BOTH (a) and (b), i.e. { 2, 3, 5 }
I don't mind using the above technique, but I want something fast and efficient (I need to compare very large Lists<> multiple times), and this appears to be neither! Any thoughts?
Edit: As it makes a difference - I'm using .NET 4.0, the initial arrays are already sorted and don't contain duplicates.
You could use IEnumerable.Intersect.
var z = a.Intersect(b);
which will probably be more efficient than your current solution.
Note: you left out one important piece of information, namely whether the lists happen to be ordered or not. If they are, then a couple of nested loops that pass over each input array exactly once each may be faster, and a little more fun to write.
Edit
In response to your comment on ordering:
First stab at looping: it will need a little tweaking on your behalf, but works for your initial data.
int j = 0;
foreach (var i in a)
{
    int x = b[j];
    while (x <= i)
    {
        if (x == i)
        {
            z.Add(b[j]);
        }
        j++;
        x = b[j];
    }
}
this is where you need to add some unit tests ;)
Edit
Final point: it may well be that LINQ can use a SortedList to perform this intersection very efficiently; if performance is a concern, it is worth testing the various solutions. Don't forget to take the sorting into account if you load your data in an un-ordered manner.
One final edit: because there has been some to and fro on this, and people may be using the above without properly debugging it, I am posting a later version here:
int j = 0;
int b1 = b[j];
foreach (var a1 in a)
{
    while (b1 <= a1)
    {
        if (b1 == a1)
            z1.Add(b[j]);
        j++;
        if (j >= b.Count)
            break;
        b1 = b[j];
    }
}
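For comparison, a textbook two-index walk over two sorted, duplicate-free lists might look like this (a sketch):

var z = new List<int>();
int ia = 0, ib = 0;
while (ia < a.Count && ib < b.Count)
{
    if (a[ia] < b[ib]) ia++;           // a is behind: advance it
    else if (a[ia] > b[ib]) ib++;      // b is behind: advance it
    else { z.Add(a[ia]); ia++; ib++; } // equal: record the match, advance both
}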
There's IEnumerable.Intersect, but since this is an extension method, I doubt it will be very efficient.
If you want efficiency, take one list and turn it into a HashSet, then go over the second list and see which elements are in the set. Note that I preallocate z, just to make sure you don't suffer from any reallocations.
var set = new HashSet<int>(a);
var z = new List<int>(Math.Min(set.Count, b.Count));
foreach (int i in b)
{
    if (set.Contains(i))
        z.Add(i);
}
This is guaranteed to run in O(N+M) (N and M being the sizes of the two lists).
Now, you could use set.IntersectWith(b), and I believe it will be just as efficient, but I'm not 100% sure.
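The one-liner that last sentence refers to would look like this (note that IntersectWith mutates the set in place):

var set = new HashSet<int>(a);
set.IntersectWith(b);   // set now contains only elements that are also in b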
The Intersect() method does just that. From MSDN:
Produces the set intersection of two sequences by using the default
equality comparer to compare values.
So in your case:
var z = a.Intersect(b);
Use SortedSet<T> from the System.Collections.Generic namespace:
SortedSet<int> a = new SortedSet<int>() { 1, 2, 3, 4, 5, 6 };
SortedSet<int> b = new SortedSet<int>() { 2, 3, 5, 7, 11 };
a.IntersectWith(b); // a now holds { 2, 3, 5 }
But surely you have no duplicates!
Your second list does not need to be a SortedSet; it can be any collection (IEnumerable<T>). Internally, though, if the second collection is also a SortedSet<T>, the operation is an O(n) operation.
If you can use LINQ, you could use the Enumerable.Intersect() extension method.
I just came across the ArraySegment<byte> type while subclassing the MessageEncoder class.
I now understand that it's a segment of a given array, takes an offset, is not enumerable, and does not have an indexer, but I still fail to understand its usage. Can someone please explain with an example?
ArraySegment<T> has become a lot more useful in .NET 4.5+ and .NET Core as it now implements:
IList<T>
ICollection<T>
IEnumerable<T>
IEnumerable
IReadOnlyList<T>
IReadOnlyCollection<T>
as opposed to the .NET 4 version which implemented no interfaces whatsoever.
The type is now able to take part in the wonderful world of LINQ, so we can do the usual LINQ things like query the contents, reverse the contents without affecting the original array, get the first item, and so on:
var array = new byte[] { 5, 8, 9, 20, 70, 44, 2, 4 };
array.Dump();
var segment = new ArraySegment<byte>(array, 2, 3);
segment.Dump(); // output: 9, 20, 70
segment.Reverse().Dump(); // output 70, 20, 9
segment.Any(s => s == 99).Dump(); // output false
segment.First().Dump(); // output 9
array.Dump(); // no change
It is a puny little soldier struct that does nothing but keep a reference to an array and stores an index range. A little dangerous, beware that it does not make a copy of the array data and does not in any way make the array immutable or express the need for immutability. The more typical programming pattern is to just keep or pass the array and a length variable or parameter, like it is done in the .NET BeginRead() methods, String.SubString(), Encoding.GetString(), etc, etc.
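To make the no-copy behaviour concrete, a small sketch: writes to the underlying array remain visible through the segment.

var data = new int[] { 1, 2, 3, 4 };
var seg = new ArraySegment<int>(data, 1, 2);
data[1] = 99;
Console.WriteLine(seg.Array[seg.Offset]);   // 99: the segment is a view, not a copy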
It does not get much use inside the .NET Framework, except for what seems like one particular Microsoft programmer that worked on web sockets and WCF liking it. Which is probably the proper guidance, if you like it then use it. It did do a peek-a-boo in .NET 4.6, the added MemoryStream.TryGetBuffer() method uses it. Preferred over having two out arguments I assume.
In general, the more universal notion of slices is high on the wishlist of principal .NET engineers like Mads Torgersen and Stephen Toub. The latter kicked off the array[:] syntax proposal a while ago, you can see what they've been thinking about in this Roslyn page. I'd assume that getting CLR support is what this ultimately hinges on. This is actively being thought about for C# version 7 afaik, keep your eye on System.Slices.
Update: dead link, this shipped in version 7.2 as Span.
Update2: more support in C# version 8.0 with Range and Index types and a Slice() method.
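For reference, a small sketch of the slicing support those updates refer to (assumes C# 7.2+ for Span<T> and C# 8.0 for ranges):

var array = new byte[] { 5, 8, 9, 20, 70, 44, 2, 4 };
Span<byte> view = array.AsSpan(2, 3);   // a view over { 9, 20, 70 }; no copy is made
byte[] copy = array[2..5];              // range syntax: for arrays this allocates a copy of { 9, 20, 70 }
Console.WriteLine(array[^1]);           // 4: ^1 indexes from the end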
Buffer partitioning for IO classes: use the same buffer for simultaneous read and write operations, and have a single structure you can pass around that describes your entire operation.
Set functions: mathematically speaking, you can represent any contiguous subsets using this new structure. That basically means you can create partitions of the array, but you can't represent, say, all odds and all evens. Note that the phone teaser proposed by The1 could have been elegantly solved using ArraySegment partitioning and a tree structure. The final numbers could have been written out by traversing the tree depth first; this would have been an ideal scenario in terms of memory and speed, I believe.
Multithreading: you can now spawn multiple threads to operate over the same data source while using segmented arrays as the control gate. Loops that use discrete calculations can be farmed out quite easily, something that the latest C++ compilers are starting to do as a code optimization step.
UI segmentation: constrain your UI displays using segmented structures. You can now store structures representing pages of data that can quickly be applied to the display functions. Single contiguous arrays can be used to display discrete views, or even hierarchical structures such as the nodes in a TreeView, by segmenting a linear data store into node collection segments.
In this example, we look at how you can use the original array, the Offset and Count properties, and also how you can loop through the elements specified in the ArraySegment.
using System;

class Program
{
    static void Main()
    {
        // Create an ArraySegment from this array.
        int[] array = { 10, 20, 30 };
        ArraySegment<int> segment = new ArraySegment<int>(array, 1, 2);

        // Write the array.
        Console.WriteLine("-- Array --");
        int[] original = segment.Array;
        foreach (int value in original)
        {
            Console.WriteLine(value);
        }

        // Write the offset.
        Console.WriteLine("-- Offset --");
        Console.WriteLine(segment.Offset);

        // Write the count.
        Console.WriteLine("-- Count --");
        Console.WriteLine(segment.Count);

        // Write the elements in the range specified in the ArraySegment.
        Console.WriteLine("-- Range --");
        for (int i = segment.Offset; i < segment.Count + segment.Offset; i++)
        {
            Console.WriteLine(segment.Array[i]);
        }
    }
}
What about a wrapper class? Just to avoid copying data to temporary buffers.
public class SubArray<T>
{
    private ArraySegment<T> segment;

    public SubArray(T[] array, int offset, int count)
    {
        segment = new ArraySegment<T>(array, offset, count);
    }

    public int Count
    {
        get { return segment.Count; }
    }

    public T this[int index]
    {
        get { return segment.Array[segment.Offset + index]; }
    }

    public T[] ToArray()
    {
        T[] temp = new T[segment.Count];
        Array.Copy(segment.Array, segment.Offset, temp, 0, segment.Count);
        return temp;
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = segment.Offset; i < segment.Offset + segment.Count; i++)
        {
            yield return segment.Array[i];
        }
    }
} // end of the class
Example:
byte[] pp = new byte[] { 1, 2, 3, 4 };
SubArray<byte> sa = new SubArray<byte>(pp, 2, 2);
Console.WriteLine(sa[0]);
Console.WriteLine(sa[1]);
//Console.WriteLine(sa[2]); // throws: index outside the segment
Console.WriteLine();
foreach (byte b in sa)
{
    Console.WriteLine(b);
}
Output:
3
4
3
4
The ArraySegment is MUCH more useful than you might think. Try running the following unit test and prepare to be amazed!
[TestMethod]
public void ArraySegmentMagic()
{
    var arr = new[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    var arrSegs = new ArraySegment<int>[3];
    arrSegs[0] = new ArraySegment<int>(arr, 0, 3);
    arrSegs[1] = new ArraySegment<int>(arr, 3, 3);
    arrSegs[2] = new ArraySegment<int>(arr, 6, 3);
    for (var i = 0; i < 3; i++)
    {
        var seg = arrSegs[i] as IList<int>;
        Console.Write(seg.GetType().Name.Substring(0, 12) + i);
        Console.Write(" {");
        for (var j = 0; j < seg.Count; j++)
        {
            Console.Write("{0},", seg[j]);
        }
        Console.WriteLine("}");
    }
}
You see, all you have to do is cast an ArraySegment to IList and it will do all of the things you probably expected it to do in the first place. Notice that the type is still ArraySegment, even though it is behaving like a normal list.
OUTPUT:
ArraySegment0 {0,1,2,}
ArraySegment1 {3,4,5,}
ArraySegment2 {6,7,8,}
In simple words: it keeps a reference to an array, allowing you to have multiple references to a single array variable, each one with a different range.
In fact it helps you use and pass sections of an array in a more structured way, instead of having multiple variables for holding the start index and length. It also provides collection interfaces so you can work more easily with array sections.
For example the following two code examples do the same thing, one with ArraySegment and one without:
byte[] arr1 = new byte[] { 1, 2, 3, 4, 5, 6 };
ArraySegment<byte> seg1 = new ArraySegment<byte>(arr1, 2, 2);
MessageBox.Show((seg1 as IList<byte>)[0].ToString());
and,
byte[] arr1 = new byte[] { 1, 2, 3, 4, 5, 6 };
int offset = 2;
int length = 2;
byte[] arr2 = arr1;
MessageBox.Show(arr2[offset + 0].ToString());
Obviously the first code snippet is preferable, especially when you want to pass array segments to a function.
While implementing this generic merge sort, as a kind of code kata, I stumbled on a difference between IEnumerable and List that I need help figuring out.
Here's the MergeSort
public class MergeSort<T>
{
    public IEnumerable<T> Sort(IEnumerable<T> arr)
    {
        if (arr.Count() <= 1) return arr;
        int middle = arr.Count() / 2;
        var left = arr.Take(middle).ToList();
        var right = arr.Skip(middle).ToList();
        return Merge(Sort(left), Sort(right));
    }

    private static IEnumerable<T> Merge(IEnumerable<T> left, IEnumerable<T> right)
    {
        var arrSorted = new List<T>();
        while (left.Count() > 0 && right.Count() > 0)
        {
            if (Comparer<T>.Default.Compare(left.First(), right.First()) < 0)
            {
                arrSorted.Add(left.First());
                left = left.Skip(1);
            }
            else
            {
                arrSorted.Add(right.First());
                right = right.Skip(1);
            }
        }
        return arrSorted.Concat(left).Concat(right);
    }
}
If I remove the .ToList() on the left and right variables it fails to sort correctly. Do you see why?
Example
var ints = new List<int> { 5, 8, 2, 1, 7 };
var mergeSortInt = new MergeSort<int>();
var sortedInts = mergeSortInt.Sort(ints);
With .ToList()
[0]: 1
[1]: 2
[2]: 5
[3]: 7
[4]: 8
Without .ToList()
[0]: 1
[1]: 2
[2]: 5
[3]: 7
[4]: 2
Edit
It was my stupid test that got me.
I tested it like this:
var sortedInts = mergeSortInt.Sort(ints);
ints.Sort();
if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok");
just changing the first line to
var sortedInts = mergeSortInt.Sort(ints).ToList();
removes the problem (and the lazy evaluation).
EDIT 2010-12-29
I thought I would figure out just how the lazy evaluation messes things up here but I just don't get it.
Remove the .ToList() in the Sort method above like this
var left = arr.Take(middle);
var right = arr.Skip(middle);
then try this
var ints = new List<int> { 5, 8, 2 };
var mergeSortInt = new MergeSort<int>();
var sortedInts = mergeSortInt.Sort(ints);
ints.Sort();
if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok");
When debugging, you can see that before ints.Sort(), sortedInts.ToList() returns
[0]: 2
[1]: 5
[2]: 8
but after ints.Sort() it returns
[0]: 2
[1]: 5
[2]: 5
What is really happening here?
Your function is correct: if you inspect the result of Merge, you'll see that the result is sorted.
So where's the problem? Just as you suspected, you're testing it wrong: when you call Sort on your original list, you change all the collections that derive from it!
Here's a snippet that demonstrates what you did:
List<int> numbers = new List<int> {5, 4};
IEnumerable<int> first = numbers.Take(1);
Console.WriteLine(first.Single()); //prints 5
numbers.Sort();
Console.WriteLine(first.Single()); //prints 4!
All the collections you create are basically the same as first: in a way, they are lazy pointers to positions in ints. Obviously, when you call ToList, the problem is eliminated.
Your case is more complex than that. Your Sort is partly lazy, exactly as you suggest: first you create a list (arrSorted) and add integers to it. That part isn't lazy, and is the reason you see the first few elements sorted. Next, you add the remaining elements, but Concat is lazy. Now recursion enters to mess this up even more: in most cases, most elements of your IEnumerable are eager; you create lists out of left and right, which are also made of a mostly eager part plus a lazy tail. You end up with a sorted List<int> lazily concatenated to a lazy pointer, which should be just the last element (the other elements were merged before).
Picture the call graph of your functions, with lazy collections marked in red and realized numbers in black: when you change the original list, the new result is mostly intact, but the last element is lazy and points to the position of the largest element in the original list. The result is mostly good, but its last element still points into the original list.
One last example: suppose you change all the elements in the original list. Most elements in the sorted collection remain the same, but the last one is lazy and picks up the new value:
var ints = new List<int> { 3,2,1 };
var mergeSortInt = new MergeSort<int>();
var sortedInts = mergeSortInt.Sort(ints);
// sortedInts is { 1, 2, 3 }
for(int i=0;i<ints.Count;i++) ints[i] = -i * 10;
// sortedInts is { 1, 2, 0 }
Here's the same example on Ideone: http://ideone.com/FQVR7
Unable to reproduce - I've just tried this, and it works absolutely fine. Obviously it's rather inefficient in various ways, but removing the ToList calls didn't make it fail.
Here's my test code, with your MergeSort code as-is, but without the ToList() calls:
using System;
using System.Collections.Generic;

public static class Extensions
{
    public static void Dump<T>(this IEnumerable<T> items, string name)
    {
        Console.WriteLine(name);
        foreach (T item in items)
        {
            Console.Write(item);
            Console.Write(" ");
        }
        Console.WriteLine();
    }
}

class Test
{
    static void Main()
    {
        var ints = new List<int> { 5, 8, 2, 1, 7 };
        var mergeSortInt = new MergeSort<int>();
        var sortedInts = mergeSortInt.Sort(ints);
        sortedInts.Dump("Sorted");
    }
}
Output:
Sorted
1 2 5 7 8
Perhaps the problem was how you were testing your code?
I ran it with and without the ToList() and it worked.
Anyway, one of the strong points of merge sort is its ability to sort in place with O(1) space complexity, which this implementation does not benefit from.
The problem is that you sort the left side, then the right side, and merge them into one sequence. That does not mean that you get a completely sorted sequence.
First you have to merge, and then you have to sort:
public IEnumerable<T> Sort(IEnumerable<T> arr)
{
    if (arr.Count() <= 1) return arr;
    int middle = arr.Count() / 2;
    var left = arr.Take(middle).ToList();
    var right = arr.Skip(middle).ToList();
    // first merge and then sort
    return Sort(Merge(left, right));
}