Check if all elements in an array are equal - c#

In the first part I am creating pairs out of array elements, so the new array is half the length of the original. The original array's length is always even.
Here is the first part:
using System;

class Program
{
    static void Main()
    {
        int[] Arr = new int[] { 1, 2, 0, 3, 4, -1 };
        int[] newArr = new int[Arr.Length / 2];
        int sum = 0;
        for (int i = 0; i < Arr.Length; i += 2)
        {
            if (i + 1 < Arr.Length)
            {
                newArr[sum] = Arr[i] + Arr[i + 1];
            }
            else
            {
                newArr[sum] = Arr[i];
            }
            sum++;
        }
In the second part I would like to check whether the array elements are all equal. What I want to do is increment an int counter each time an element in the for loop is equal to the next element in the array.
What I have as second part:
int counter = 0;
for (int i = 0; i < newArr.Length - 1; i++)
{
    if (newArr[i] == newArr[i + 1])
    {
        counter++;
    }
    else
    {
        Console.Write(" ");
    }
}
What is wrong in this code? I cannot seem to understand how to write code that will work with int Arr[5] as well as int Arr[5000].

All you need to change is the termination condition in the for loop to
i < newArr.Length - 1
so that you can compare array[i] with array[i + 1]. This change makes sure you do not get past the upper bound of the array.
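For completeness, here is a sketch of how the corrected loop plus a final check could look (variable names follow the question's code; the final comparison is one way to turn the counter into a yes/no answer):

int counter = 0;
for (int i = 0; i < newArr.Length - 1; i++)
{
    if (newArr[i] == newArr[i + 1])
    {
        counter++;
    }
}

// Every neighbouring pair matched exactly when counter reaches
// newArr.Length - 1, which works the same for 5 elements or 5000.
bool allEqual = counter == newArr.Length - 1;
Console.WriteLine(allEqual ? "All elements are equal" : "Not all elements are equal");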

Try this:
int i;
for (i = 1; i < arr.Length; i++)
{
    // Compare every element against the first one; stop at the first mismatch.
    if (arr[0] == arr[i])
        continue;
    else
        break;
}
if (i == arr.Length)
    Console.WriteLine("All elements in array are equal");

If there is no need to write such imperative code other than to achieve your final goal, you don't have to. Almost always you can do it in a much more readable way.
I suggest using LINQ. For collections implementing IEnumerable<T>:
newArr.Distinct().Take(2).Count() == 1
LINQ is a built-in feature; just make sure you have using System.Linq; at the top of your .cs file.
What goes on here?
Distinct returns an IEnumerable<T> whose enumeration will yield all distinct elements from your array, but no enumeration, and hence no computation, has happened yet.
Take returns a new IEnumerable<T>; its enumeration will enumerate the previous IEnumerable<T> internally, but it will yield only the first two distinct elements. Again, no enumeration has happened yet.
At last, Count enumerates the last IEnumerable<T> and returns the number of elements it yields (in our case 0, 1 or 2).
Because we used Take(2), the enumeration initiated by Count will stop as soon as the second distinct element is found. Without Take(2), the code would enumerate the whole array even when that is not needed.
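For example (sample arrays made up for illustration):

int[] allSame = { 3, 3, 3, 3 };
int[] mixed = { 3, 1, 3, 3 };

// Only the first two distinct values are ever materialized.
Console.WriteLine(allSame.Distinct().Take(2).Count() == 1); // True
Console.WriteLine(mixed.Distinct().Take(2).Count() == 1);   // False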
Why is this approach better?
Much shorter and more readable;
Lazy evaluation – if an element is found out to be distinct from the other ones, the enumeration will be stopped immediately;
Flexible – you can pass a custom equality comparer to Distinct method. You can also call Select method before calling Distinct to choose what specific member your elements will be compared by;
Universal – works with any collection that implements the IEnumerable<T> interface.
Other ways
The same result can be achieved in slightly other ways, for example:
!newArr.Distinct().Take(2).Skip(1).Any()
Experiment with LINQ and choose the way you and your team consider the most readable.
For collections implementing IList<T> you can also write (as @Alexander suggested):
newArr.All(x => x == newArr[0])
This variant is shorter but not as flexible and universal.
OFF TOPIC. Encapsulating common code
You should encapsulate code that does one simple thing into a separate method, it further improves your code readability and allows reusing your method in several places. I'd write an extension method for this one.
public static class CollectionExtensions
{
    public static bool AllElementsEqual<T>(this IEnumerable<T> items)
    {
        return items.Distinct().Take(2).Count() == 1;
    }
}
Later in your code you need just to call this method:
newArr.AllElementsEqual()

Try this:
bool allEqual = true;
for (int i = 0; i < newArr.Length - 1; i++)
{
    for (int j = i + 1; j < newArr.Length; j++)
    {
        if (newArr[i] == newArr[j])
        {
            // still equal so far
        }
        else
        {
            Console.Write(" ");
            allEqual = false;
        }
    }
}

Related

Changing my iterative selection sort into a recursive sort using C#

I've tried changing my sort into a recursive function where the method calls itself. At least that's my understanding of recursion: you forego for loops and the method calls itself to repeat the necessary iterations.
Below is my iterative version:
for (int i = 0; i < Integers.Count; i++) //loops through all the numbers
{
    min = i; //setting the current index number to be the minimum
    for (int index = i + 1; index < Integers.Count; index++) //finds and checks pos of min value
    {                                                        //and switches it with last element using the swap method
        if ((int)Integers[index] > (int)Integers[min])
        {
            min = index;
        }
        comparisons++;
    }
    if (i != min) //if statement for if swop is done
    {
        Swap(i, min, Integers, ref swops); //swap method called
    }
}
I've tried making it recursive. I read online that it was OK to still have for loops in a recursive function, which I guess is not true. I just haven't been able to develop a working sort. Am I going to need to split the method into two, where one method traverses the list and the other does the sort?
Here's my selection sort recursive method attempt below:
static void DoSelectionSortRecursive(ArrayList Integers, int i, int swops, int comparisons)
{
    int min;
    min = i;
    // I read online that the use of ArrayList is deprecated and I should've used List<int> instead. Is that necessary?
    for (int index = i + 1; index < Integers.Count; index++)
    {
        if ((int)Integers[index] > (int)Integers[min])
        {
            min = index;
        }
        comparisons++;
    }
    if (i != min)
    {
        Swap(i, min, Integers, ref swops);
    }
    DoSelectionSortRecursive(Integers, (i + 1), comparisons, swops); //DoSelectionSortRecursive method called
}
This is my improved attempt, including performance measures and everything. The original, unsorted list of integers is 84, 24, 13, 10, 37, and I'm getting 84, 24, 13, 37, 10, which is clearly not in sorted descending order.
Below is the improved code:
static void DoSelectionSortRecursive(ArrayList Integers)
{
    Stopwatch timer = new Stopwatch();
    timer.Start();
    int shifts = 0;
    int swops = 0;
    int comparisons = 0;
    Sort(Integers, 1, ref swops, ref comparisons);
    timer.Stop();
    Console.WriteLine("Selection Sort Recursive: ");
    Console.WriteLine(timer.ElapsedMilliseconds);
    Console.WriteLine(swops);
    Console.WriteLine(comparisons);
    Console.WriteLine(shifts); //not needed in this sort
    Console.WriteLine("-------------------------------------");
    foreach (int i in Integers)
    {
        Console.WriteLine(i);
    }
}

static void Sort(ArrayList Integers, int i, ref int swops, ref int comparisons)
{
    int min = i;
    int index = i + 1;
    // I read online that the use of ArrayList is deprecated and I should've used List<int> instead. Is that necessary?
    if (index < Integers.Count)
    {
        if ((int)Integers[index] > (int)Integers[min])
        {
            min = index;
        }
        comparisons++;
        index++;
    }
    if (i != min)
    {
        Swap(i, min, Integers, ref swops);
    }
    if (i < Integers.Count - 1)
    {
        Sort(Integers, (i + 1), ref comparisons, ref swops); //DoSelectionSortRecursive method called
    }
}

static void Swap(int x, int y, ArrayList Integers, ref int swap) //swap method, swaps the position of 2 elements
{
    swap++;
    int temporary = (int)Integers[x]; //essentially will swap the min with the current position
    Integers[x] = Integers[y];
    Integers[y] = temporary;
}
There are no "rules" about recursion that say you cannot use loops in the recursive method body. The only rule in recursion is that the function has to call itself, which your second code snippet does, so DoSelectionSortRecursive is legitimately recursive.
For example, merge sort uses recursion for splitting the array and loops for merging the sorted subarrays. It'd be wrong to call it anything but a recursive function, and it'd be somewhat silly to implement the merging stage (an implementation detail of merge sort) recursively -- it'd be slower and harder to reason about, so loops are the natural choice.
On the other hand, the splitting part of merge sort makes sense to write recursively because it chops the problem space down by a logarithmic factor and has multiple branches. The repeated halving means it won't need more than a few dozen levels of recursive calls on a typical array. These calls don't incur much overhead and fit well within the call stack.
In contrast, the call stack can easily blow up for linear recursive algorithms in languages without tail-call optimization, like C#, where each index in the linear structure requires a whole stack frame.
Rules prohibiting loops are concocted by educators who are trying to teach recursion by forcing you to use a specific approach in your solution. It's up to your instructor to determine whether one or both loops need to be converted to recursion for it to "count" as far as the course is concerned. (apologies if my assumptions about your educational arrangement are incorrect)
All that is to say that this requirement to write a nested-loop sort recursively is basically a misapplication of recursion for pedagogical purposes. In the "real world", you'd just write it iteratively and be done with it, as Google does in the V8 JavaScript engine, which uses insertion sort on small arrays. I suspect there are many other cases, but this is the one I'm most readily familiar with.
The point with using simple, nested loop sorts in performance-sensitive production code is that they're not recursive. These sorts' advantage is that they avoid allocating stack frames and incurring function call overhead to sort small arrays of a dozen numbers where the quadratic time complexity isn't a significant factor. When the array is mostly sorted, insertion sort in particular doesn't have to do much work and is mostly a linear walk over the array (sometimes a drawback in certain real-time applications that need predictable performance, in which case selection sort might be preferable -- see Wikipedia).
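For reference, this is roughly what such an iterative insertion sort looks like (a sketch, not taken from the question; assumes using System.Collections.Generic):

static void InsertionSort(List<int> lst)
{
    for (int i = 1; i < lst.Count; i++)
    {
        int current = lst[i];
        int j = i - 1;

        // Shift larger elements one slot to the right until the
        // insertion point for 'current' is found.
        while (j >= 0 && lst[j] > current)
        {
            lst[j + 1] = lst[j];
            j--;
        }
        lst[j + 1] = current;
    }
}

On a mostly sorted list the inner while loop rarely executes, which is where the near-linear behaviour mentioned above comes from.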
Regarding ArrayLists, the docs say: "We don't recommend that you use the ArrayList class for new development. Instead, we recommend that you use the generic List<T> class." So you can basically forget about ArrayList unless you're doing legacy code (Note: Java does use ArrayLists which are more akin to the C# List. std::list isn't an array in C++, so it can be confusing to keep all of this straight).
It's commendable that you've written your sort iteratively first, then translated to recursion on the outer loop only. It's good to start with what you know and get something working, then gradually transform it to meet the new requirements.
Zooming out a bit, we can isolate the role this inner loop plays when we pull it out as a function, then write and test it independent of the selection sort we hope to use it in. After the subroutine works on its own, then selection sort can use it as a black box and the overall design is verifiable and modular.
More specifically, the role of this inner loop is to find the minimum value beginning at an index: int IndexOfMin(List<int> lst, int i = 0). The contract is that it'll throw an ArgumentOutOfRangeException error if the precondition 0 <= i < lst.Count is violated.
I skipped the metrics variables for simplicity but added a random test harness that gives a pretty reasonable validation against the built-in sort.
using System;
using System.Collections.Generic;
using System.Linq;

class Sorter
{
    private static void Swap(List<int> lst, int i, int j)
    {
        int temp = lst[i];
        lst[i] = lst[j];
        lst[j] = temp;
    }

    private static int IndexOfMin(List<int> lst, int i = 0)
    {
        if (i < 0 || i >= lst.Count)
        {
            throw new ArgumentOutOfRangeException();
        }
        else if (i == lst.Count - 1)
        {
            return i;
        }

        int bestIndex = IndexOfMin(lst, i + 1);
        return lst[bestIndex] < lst[i] ? bestIndex : i;
    }

    public static void SelectionSort(List<int> lst, int i = 0)
    {
        if (i < lst.Count)
        {
            Swap(lst, i, IndexOfMin(lst, i));
            SelectionSort(lst, i + 1);
        }
    }

    public static void Main(string[] args)
    {
        var rand = new Random();
        int tests = 1000;
        int lstSize = 100;
        int randMax = 1000;

        for (int i = 0; i < tests; i++)
        {
            var lst = new List<int>();
            for (int j = 0; j < lstSize; j++)
            {
                lst.Add(rand.Next(randMax));
            }

            var actual = new List<int>(lst);
            SelectionSort(actual);
            lst.Sort();

            if (!lst.SequenceEqual(actual))
            {
                Console.WriteLine("FAIL:");
                Console.WriteLine($"Expected => {String.Join(",", lst)}");
                Console.WriteLine($"Actual => {String.Join(",", actual)}\n");
            }
        }
    }
}
Here's a more generalized solution that uses generics and CompareTo so that you can sort any list of objects that implement the IComparable interface. This functionality is more akin to the built-in sort.
using System;
using System.Collections.Generic;
using System.Linq;

class Sorter
{
    public static void Swap<T>(List<T> lst, int i, int j)
    {
        T temp = lst[i];
        lst[i] = lst[j];
        lst[j] = temp;
    }

    public static int IndexOfMin<T>(List<T> lst, int i = 0)
        where T : IComparable<T>
    {
        if (i < 0 || i >= lst.Count)
        {
            throw new ArgumentOutOfRangeException();
        }
        else if (i == lst.Count - 1)
        {
            return i;
        }

        int bestIndex = IndexOfMin(lst, i + 1);
        return lst[bestIndex].CompareTo(lst[i]) < 0 ? bestIndex : i;
    }

    public static void SelectionSort<T>(List<T> lst, int i = 0)
        where T : IComparable<T>
    {
        if (i < lst.Count)
        {
            Swap(lst, i, IndexOfMin(lst, i));
            SelectionSort(lst, i + 1);
        }
    }

    public static void Main(string[] args)
    {
        // same as above
    }
}
Since you asked how to smush both of the recursive functions into one, it's possible by keeping track of both i and j indices in the parameter list and adding a branch to figure out whether to deal with the inner or outer loop on a frame. For example:
public static void SelectionSort<T>(
    List<T> lst,
    int i = 0,
    int j = 0,
    int minJ = 0
) where T : IComparable<T>
{
    if (i >= lst.Count)
    {
        return;
    }
    else if (j < lst.Count)
    {
        minJ = lst[minJ].CompareTo(lst[j]) < 0 ? minJ : j;
        SelectionSort(lst, i, j + 1, minJ);
    }
    else
    {
        Swap(lst, i, minJ);
        SelectionSort(lst, i + 1, i + 1, i + 1);
    }
}
All of the code shown in this post is not suitable for production -- the point is to illustrate what not to do.

Quickest way to find position of item less than or equal to double in sorted list C#

I am exploring the fastest way to iterate through three sorted lists to find the position of the first item which is equal to or less than a double value. The lists contain two columns of doubles.
I have the two following working examples attached below; these are wrapped in a bigger while loop (which also modifies the currentPressure list, changing the [0] value). But considering the number of rows (500,000+) being parsed by the bigger while loop, the code below is too slow (one iteration of the three while loops takes >20 ms).
"allPressures" contains all rows while currentPressure is modified by the remaining code. The while loops are used to align the time from the Flow, Rpm and Position lists to the Time in the pressure list.
In other words, I am trying to find the quickest way to determine the x of, for instance,
FlowList[x].Time <= currentPressure[0].Time
Any suggestions are greatly appreciated!
Examples:
for (int i = 0; i < allPressures.Count; i++)
{
    if (FlowList[i].Time >= currentPressure[0].Time)
    {
        fl = i;
        break;
    }
}

for (int i = 0; i < allPressures.Count; i++)
{
    if (RpmList[i].Time >= currentPressure[0].Time)
    {
        rp = i;
        break;
    }
}

for (int i = 0; i < allPressures.Count; i++)
{
    if (PositionList[i].Time >= currentPressure[0].Time)
    {
        bp = i;
        break;
    }
}
Using a while loop:
while (FlowList[fl].Time < currentPressure[0].Time)
{
    fl++;
}

while (RpmList[rp].Time < currentPressure[0].Time)
{
    rp++;
}

while (PositionList[bp].Time < currentPressure[0].Time)
{
    bp++;
}
The problem is that you are doing a linear search. This means that in the worst case you are iterating over all the elements in your lists. This gives a computational complexity of O(3*n), where n is the length of your lists and 3 is the number of lists you are searching.
Since your lists are sorted you can use the much faster binary search which has a complexity of O(log(n)) and in your case O(3*log(n)).
Luckily you don't have to implement it yourself, because .NET offers the helper method List.BinarySearch(). You will need the one that takes a custom comparer, because you want to compare PressureData objects.
Since you are looking for the index of the closest value that's less than or equal to your search value, you'll have to use a little trick: when BinarySearch() doesn't find a matching value, it returns a negative number that is the bitwise complement of the index of the next element larger than the search value. From that it's easy to find the previous element, which is smaller than the search value.
Here is an extension method that implements this:
public static int FindMaxIndex<T>(
    this List<T> sortedList, T inclusiveUpperBound, IComparer<T> comparer = null)
{
    var index = sortedList.BinarySearch(inclusiveUpperBound, comparer);

    // The max value was found in the list. Just return its index.
    if (index >= 0)
        return index;

    // The max value was not found and "~index" is the index of the
    // next value greater than the search value.
    index = ~index;

    // There are values in the list less than the search value.
    // Return the index of the closest one.
    if (index > 0)
        return index - 1;

    // All values in the list are greater than the search value.
    return -1;
}
Test it at https://dotnetfiddle.net/kLZsM5
Use this method with a comparer that understands PressureData objects:
var pdc = Comparer<PressureData>.Create((x, y) => x.Time.CompareTo(y.Time));
var fl = FlowList.FindMaxIndex(currentPressure[0], pdc);
Here is a working example: https://dotnetfiddle.net/Dmgzsv

Why is removing by index from an IList performing so much worse than removing by item from an ISet?

Edit: I will add some benchmark results. Up to about 1,000 - 5,000 items in the list, IList and RemoveAt beat ISet and Remove, but that's not something to worry about since the differences are marginal. The real fun begins when the collection size extends to 10,000 and more. I'm posting only those data.
I was answering a question here last night and faced a bizarre situation.
First a set of simple methods:
static Random rnd = new Random();

public static int GetRandomIndex<T>(this ICollection<T> source)
{
    return rnd.Next(source.Count);
}

public static T GetRandom<T>(this IList<T> source)
{
    return source[source.GetRandomIndex()];
}
Let's say I'm removing N number of items from a collection randomly. I would write this function:
public static void RemoveRandomly1<T>(this ISet<T> source, int countToRemove)
{
    int countToRemain = source.Count - countToRemove;
    var inList = source.ToList();
    int i = 0;
    while (source.Count > countToRemain)
    {
        source.Remove(inList.GetRandom());
        i++;
    }
}
or
public static void RemoveRandomly2<T>(this IList<T> source, int countToRemove)
{
    int countToRemain = source.Count - countToRemove;
    int j = 0;
    while (source.Count > countToRemain)
    {
        source.RemoveAt(source.GetRandomIndex());
        j++;
    }
}
As you can see, the first function is written for an ISet and the second for a normal IList. In the first function I'm removing by item from the ISet, and in the second by index from the IList, both of which I believe are O(1) operations. Why is the second function performing so much worse than the first, especially when the lists get bigger?
Odds (my take):
1) In the first function the ISet is converted to an IList (to get the random item from the IList), whereas there is no such thing performed in the second function.
Advantage IList.
2) In the first function a call to GetRandom is made (which itself calls GetRandomIndex), whereas in the second only GetRandomIndex is called; that's one step less again.
Though trivial, advantage IList.
3) In the first function, the random item is taken from a separate list, so the obtained item might already have been removed from the ISet. This leads to more iterations of the while loop in the first function. In the second function, the random index is taken from the source that is being iterated on, hence there are never repetitive iterations. I have tested and verified this.
i > j always, advantage IList.
I thought the reason for this behaviour was that a List would need constant resizing when items are added or removed. But apparently not, according to some other testing. I ran:
public static void Remove1(this ISet<int> set)
{
    int count = set.Count;
    for (int i = 0; i < count; i++)
    {
        set.Remove(i + 1);
    }
}

public static void Remove2(this IList<int> lst)
{
    for (int i = lst.Count - 1; i >= 0; i--)
    {
        lst.RemoveAt(i);
    }
}
and found that the second function runs faster.
Test bed:
var f = Enumerable.Range(1, 100000);
var s = new HashSet<int>(f);
var l = new List<int>(f);

Benchmark(() =>
{
    //some examples...
    s.RemoveRandomly1(2500);
    l.RemoveRandomly2(2500);
    s.Remove1();
    l.Remove2();
}, 1);

public static void Benchmark(Action method, int iterations = 10000)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    for (int i = 0; i < iterations; i++)
        method();
    sw.Stop();
    MsgBox.ShowDialog(sw.Elapsed.TotalMilliseconds.ToString());
}
Just trying to understand what's going on with the two structures. Thanks.
Result:
var f = Enumerable.Range(1, 10000);
s.RemoveRandomly1(7500); => 5ms
l.RemoveRandomly2(7500); => 20ms
var f = Enumerable.Range(1, 100000);
s.RemoveRandomly1(7500); => 7ms
l.RemoveRandomly2(7500); => 275ms
var f = Enumerable.Range(1, 1000000);
s.RemoveRandomly1(75000); => 50ms
l.RemoveRandomly2(75000); => 925000ms
For most typical needs a list would do, though!
First off, IList and ISet are interfaces, not implementations. I can write an IList or an ISet implementation that will run very differently, so the concrete implementations are what matter (List and HashSet in your case).
Accessing a List item by index is O(1), but removing it with RemoveAt is O(n).
Removing from the end of a List is fast because it doesn't have to copy anything; it just decrements the internal counter that stores how many items it has. The list tracks its logical length separately from the underlying array, so unused slots simply appear not to be there. Once you hit the capacity of the underlying array, it creates a new array of double the size and copies the elements over; the array is not automatically shrunk when you remove items (you can call TrimExcess for that).
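A small sketch that makes the growth behaviour visible (exact capacity values are an implementation detail and may vary between runtimes):

var list = new List<int>();
for (int i = 0; i < 20; i++)
{
    list.Add(i);
    // Capacity grows in jumps (e.g. 0, 4, 8, 16, 32, ...) while Count
    // tracks how many slots are actually in use.
    Console.WriteLine($"Count = {list.Count}, Capacity = {list.Capacity}");
}

// Removing items does not shrink the underlying array automatically;
// TrimExcess() has to be called explicitly for that.
list.RemoveRange(10, 10);
list.TrimExcess();
Console.WriteLine($"Count = {list.Count}, Capacity = {list.Capacity}");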
Randomly removing from a list means it has to copy all of the array entries that come after the removed index so they slide down one spot, which is inherently slow, particularly as the list gets bigger. If you have a List with 1 million entries and you remove something at index 500,000, it has to copy the second half of the array down a spot.
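As an aside (not part of the original question): when the order of the remaining elements does not matter, that copy can be avoided by overwriting the removed slot with the last element and then removing from the end, which is O(1). A rough sketch:

using System.Collections.Generic;

public static class ListRemovalExtensions
{
    // Removes the element at 'index' without shifting the tail of the list;
    // the last element takes its place, so ordering is not preserved.
    public static void RemoveAtUnordered<T>(this IList<T> source, int index)
    {
        int last = source.Count - 1;
        source[index] = source[last];
        source.RemoveAt(last);
    }
}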

Off By One errors and Mutation Testing

In the process of writing an "Off By One" mutation tester for my favourite mutation testing framework (NinjaTurtles), I wrote the following code to provide an opportunity to check the correctness of my implementation:
public int SumTo(int max)
{
    int sum = 0;
    for (var i = 1; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
Now this seems simple enough, and it didn't strike me that there would be a problem trying to mutate all the literal integer constants in the IL. After all, there are only 3 (the 0, the 1, and the ++).
WRONG!
It became very obvious on the first run that it was never going to work in this particular instance. Why? Because changing the code to
public int SumTo(int max)
{
    int sum = 0;
    for (var i = 0; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
only adds 0 (zero) to the sum, and this obviously has no effect. Different story if it was the multiple set, but in this instance it was not.
Now there's a fairly easy algorithm for working out the sum of integers
sum = max * (max + 1) / 2;
which would fail the mutations easily, since adding or subtracting 1 from either of the constants there will result in an error (given that max >= 0).
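For example, a single assertion is enough to kill the obvious off-by-one mutants of the closed form (Debug.Assert is just standing in for whatever test framework is in use):

// Requires: using System.Diagnostics;
// SumTo(4) should be 4 * 5 / 2 == 10. Off-by-one mutants of the closed
// form return 8, 12, 20 or 6 instead, so this one check fails for all of them.
Debug.Assert(SumTo(4) == 10);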
So, problem solved for this particular case. Although it did not do what I wanted for the test of the mutation, which was to check what would happen when I lost the ++ - effectively an infinite loop. But that's another problem.
So - My Question: Are there any trivial or non-trivial cases where a loop starting from 0 or 1 may result in a "mutation off by one" test failure that cannot be refactored (code under test or test) in a similar way? (examples please)
Note: Mutation tests fail when the test suite passes after a mutation has been applied.
Update: an example of something less trivial, but something that could still have the test refactored so that it failed would be the following
public int SumArray(int[] array)
{
    int sum = 0;
    for (var i = 0; i < array.Length; i++)
    {
        sum += array[i];
    }
    return sum;
}
Mutation testing against this code would fail when changing the var i = 0 to var i = 1 if the test input you gave it was new[] {0,1,2,3,4,5,6,7,8,9}: skipping index 0 only skips a zero, so the test still passes and the mutant survives. However, change the test input to new[] {9,8,7,6,5,4,3,2,1,0} and the mutated code no longer produces the same sum, so the test fails and the mutant is killed. So a successful refactor of the test proves the testing.
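A minimal sketch of those two test inputs (again, Debug.Assert stands in for a real test framework):

// Requires: using System.Diagnostics;
int[] ascending  = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
int[] descending = { 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 };

// Both arrays sum to 45, so both assertions pass against the original code.
// With the loop mutated to start at i = 1, SumArray(ascending) still returns
// 45 (only a zero is skipped) and the mutant survives, while
// SumArray(descending) returns 36 and the mutant is killed.
Debug.Assert(SumArray(ascending) == 45);
Debug.Assert(SumArray(descending) == 45);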
I think with this particular method, there are two choices. You either admit that it's not suitable for mutation testing because of this mathematical anomaly, or you try to write it in a way that makes it safe for mutation testing, either by refactoring to the form you give, or some other way (possibly recursive?).
Your question really boils down to this: is there a real life situation where we care about whether the element 0 is included in or excluded from the operation of a loop, and for which we cannot write a test around that specific aspect? My instinct is to say no.
Your trivial example may be an instance of a lack of what I referred to as "test-drivenness" in my blog post about NinjaTurtles, meaning in this case that you have not refactored this method as far as you could.
One natural case of "mutation test failure" is an algorithm for matrix transposition. To make it more suitable for a single for-loop, add some constraints to this task: let the matrix be non-square and require the transposition to be in-place. These constraints make a one-dimensional array the most suitable place to store the matrix, and a for-loop (starting, usually, from index '1') may be used to process it. If you start it from index '0', nothing changes, because the top-left element of the matrix always transposes to itself.
For an example of such code, see answer to other question (not in C#, sorry).
Here "mutation off by one" test fails, refactoring the test does not change it. I don't know if the code itself may be refactored to avoid this. In theory it may be possible, but should be too difficult.
The code snippet I referenced earlier is not a perfect example. It still may be refactored if the for loop is substituted by two nested loops (as if for rows and columns) and then these rows and columns are recalculated back to one-dimensional index. Still it gives an idea how to make some algorithm, which cannot be refactored (though not very meaningful).
Iterate through an array of positive integers in the order of increasing indexes, for each index compute its pair as i + i % a[i], and if it's not outside the bounds, swap these elements:
for (var i = 1; i < a.Length; i++)
{
    var j = i + i % a[i];
    if (j < a.Length)
        (a[i], a[j]) = (a[j], a[i]); // swap the two elements
}
Here again a[0] is "unmovable", refactoring the test does not change this, and refactoring the code itself is practically impossible.
One more "meaningful" example. Let's implement an implicit Binary Heap. It is usually placed to some array, starting from index '1' (this simplifies many Binary Heap computations, compared to starting from index '0'). Now implement a copy method for this heap. "Off-by-one" problem in this copy method is undetectable because index zero is unused and C# zero-initializes all arrays. This is similar to OP's array summation, but cannot be refactored.
Strictly speaking, you can refactor the whole class and start everything from '0'. But changing only 'copy' method or the test does not prevent "mutation off by one" test failure. Binary Heap class may be treated just as a motivation to copy an array with unused first element.
int[] dst = new int[src.Length];
for (var i = 1; i < src.Length; i++)
{
    dst[i] = src[i];
}
Yes, there are many, assuming I have understood your question.
One similar to your case is:
public int MultiplyTo(int max)
{
    int product = 1;
    for (var i = 1; i <= max; i++)
    {
        product *= i;
    }
    return product;
}
Here, if it starts from 0, the result will be 0, but if it starts from 1 the result should be correct. (Although it won't tell the difference between 1 and 2!).
Not quite sure what you are looking for exactly, but it seems to me that if you change/mutate the initial value of sum from 0 to 1, you should fail the test:
public int SumTo(int max)
{
    int sum = 1; // Now we are off-by-one from the beginning!
    for (var i = 0; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
Update based on comments:
The loop will only not fail after mutation when the loop invariant is violated in the processing of index 0 (or in the absence of it). Most such special cases can be refactored out of the loop, but consider a summation of 1/x:
for (var i = 1; i <= max; i++)
{
    sum += 1/i;
}
This works fine, but if you mutate the initial boundary from 1 to 0, the test will fail because 1/0 is an invalid operation.

Ways in .NET to get an array of int from 0 to n

I'm searching for ways to fill an array with numbers from 0 up to a random number, for example from 0 to 12, or to 1999, etc.
Of course, there is a for-loop:
var arr = new int[n];
for (int i = 0; i < n; i++)
{
    arr[i] = i;
}
And I could make this method an extension for the Array class. But are there some more interesting ways?
This already exists (it returns an IEnumerable<int>, but that is easy enough to change if you need):
arr = Enumerable.Range(0, n);
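If you specifically want an int[] rather than an IEnumerable<int>, the usual way is to append ToArray():

int[] arr = Enumerable.Range(0, n).ToArray();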
The most interesting way in my mind produces not an array, but an IEnumerable<int> that yields the same numbers; it has the benefit of O(1) setup time since it defers the actual loop's execution:
public IEnumerable<int> GetNumbers(int max)
{
    for (int i = 0; i < max; i++)
        yield return i;
}
This loop goes through all numbers from 0 to max-1, returning them one at a time - but it only goes through the loop when you actually need it.
You can also use this as GetNumbers(max).ToArray() to get a 'normal' array.
The best answer depends on why you need the array. The thing is, the value of any array element is equal to its index, so accessing any element is essentially a redundant operation. Why not use a class with an indexer that just returns the value of the index? It would be indistinguishable from a real array and would scale to any size, except it would take no memory and no time to set up. But I get the feeling it's not speed and compactness you are after. Maybe if you expand on the problem, a better solution will become more obvious.
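A rough sketch of that indexer idea (the class name and members here are made up for illustration):

public class IdentityRange
{
    public IdentityRange(int count)
    {
        Count = count;
    }

    public int Count { get; }

    // Looks like a read-only array containing 0 .. Count - 1,
    // but stores nothing and takes no time to set up.
    public int this[int index]
    {
        get
        {
            if (index < 0 || index >= Count)
                throw new IndexOutOfRangeException();
            return index;
        }
    }
}

// Usage: var numbers = new IdentityRange(2000); numbers[1234] == 1234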
