Removing masked entries from an array - c#

The task is to keep an array of objects untouched if input is null and, otherwise, remove the elements that are on positions specified by the input. I've got it working but I'm vastly dissatisfied with the code quality.
List<Stuff> stuff = new List<Stuff> { new Stuff(1), new Stuff(2), new Stuff(3) };
String input = "5";

if (input == null)
    return stuff;

int mask = Int32.Parse(input);
for (int i = stuff.Count - 1; i >= 0; i--)
    if ((mask & (int)Math.Pow(2, i)) == 0)
        stuff.RemoveAt(i);

return stuff;
Actually obtaining the input, and the fact that e.g. String.Empty would cause problems, need not be regarded here. Let's assume those are handled.
How can I make the code more efficient?
How can I make the syntax more compact and graspable?

Instead of the backwards-running loop, you could use LINQ with the following statement:
stuff = stuff.Where((iStuff, idx) => (mask & (int)Math.Pow(2, idx)) != 0).ToList();
Or even cooler, using a bitwise shift:
stuff = stuff.Where((_, index) => (mask >> index & 1) == 1).ToList();
It uses an overload of Where which can access the position in the sequence, as documented here. For a similar task, there is also an overload of Select which gives access to the index, as documented here.
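For instance (a hypothetical variation, not part of the original answer), the indexed Select overload can pair each element with its position first, and the filtering can then be done on that pair:
var kept = stuff.Select((s, idx) => new { Item = s, Index = idx })
                .Where(x => (mask >> x.Index & 1) == 1)
                .Select(x => x.Item)
                .ToList();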

Untested, but you could make an extension method that iterates the collection and filters, returning matching elements as it goes. Repeatedly bit-shifting the mask and checking the 0th bit seems the easiest to follow - for me at least.
static IEnumerable<T> TakeMaskedItemsByIndex<T>(this IEnumerable<T> collection, ulong mask)
{
    foreach (T item in collection)
    {
        if ((mask & 1) == 1)
            yield return item;
        mask = mask >> 1;
    }
}
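A hypothetical usage sketch (assuming the method lives in a static class, as extension methods must, and the mask is parsed as in the question):
int mask = Int32.Parse(input);
List<Stuff> kept = stuff.TakeMaskedItemsByIndex((ulong)mask).ToList();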

Checking whether a sequence of integers is increasing

I'm stuck, only partially passing the problem below.
Given a sequence of integers, check whether it is possible to obtain a strictly increasing sequence by erasing no more than one element from it.
Example
sequence = [1, 3, 2, 1]
almostIncreasingSequence(sequence) = false
sequence = [1, 3, 2]
almostIncreasingSequence(sequence) = true
My code that is only passing some examples:
bool almostIncreasingSequence(int[] sequence) {
    int seqIncreasing = 0;
    if (sequence.Length == 1) return true;
    for (int i = 0; i < sequence.Length - 2; i++)
    {
        if ((sequence[i] == sequence[++i]+1) || (sequence[i] == sequence[++i]))
        {
            seqIncreasing++;
        }
    }
    return ((seqIncreasing == sequence.Length) || (--seqIncreasing == sequence.Length));
}
Failed Examples:
Input:
sequence: [1, 3, 2]
Output:
false
Expected Output:
true
Input:
sequence: [10, 1, 2, 3, 4, 5]
Output:
false
Expected Output:
true
Input:
sequence: [0, -2, 5, 6]
Output:
false
Expected Output:
true
Input:
sequence: [1, 1]
Output:
false
Expected Output:
true
The LINQ-based answer is fine, and expresses the basic problem well. It's easy to read and understand, and solves the problem directly. However, it does have the problem that it requires generating a new sequence for each element in the original. As the sequences get longer, this becomes dramatically more costly and eventually, intractable.
It doesn't help that it requires the use of Skip() and Take(), which themselves add to the overhead of handling the original sequence.
A different approach is to scan the sequence once, keeping track of whether a deletion has already been attempted; when an out-of-sequence element is found, a) immediately return false if a deletion was already made, and b) do not include the deleted element in the determination of the sequence.
The code you tried almost accomplishes this. Here's a version that works:
static bool almostIncreasingSequence(int[] sequence)
{
    bool foundOne = false;

    for (int i = -1, j = 0, k = 1; k < sequence.Length; k++)
    {
        bool deleteCurrent = false;

        if (sequence[j] >= sequence[k])
        {
            if (foundOne)
            {
                return false;
            }
            foundOne = true;

            if (k > 1 && sequence[i] >= sequence[k])
            {
                deleteCurrent = true;
            }
        }

        if (!foundOne)
        {
            i = j;
        }
        if (!deleteCurrent)
        {
            j = k;
        }
    }

    return true;
}
Note: I originally thought your attempt could be fixed with a minor change. But ultimately, it turned out that it had to be essentially the same as the generic implementation I wrote (especially once I fixed that one too…see below). The only material difference is really just whether one uses an array or a generic IEnumerable<T>.
For grins, I wrote another approach that is in the vein of the LINQ-based solution, in that it works on any sequence, not just arrays. I also made it generic (albeit with the constraint that the type implements IComparable<T>). That looks like this:
static bool almostIncreasingSequence<T>(IEnumerable<T> sequence) where T : IComparable<T>
{
    bool foundOne = false;
    int i = 0;
    T previous = default(T), previousPrevious = default(T);

    foreach (T t in sequence)
    {
        bool deleteCurrent = false;

        if (i > 0)
        {
            if (previous.CompareTo(t) >= 0)
            {
                if (foundOne)
                {
                    return false;
                }

                // So, which one do we delete? If the element before the previous
                // one is in sequence with the current element, delete the previous
                // element. If it's out of sequence with the current element, delete
                // the current element. If we don't have a previous previous element,
                // delete the previous one.
                if (i > 1 && previousPrevious.CompareTo(t) >= 0)
                {
                    deleteCurrent = true;
                }

                foundOne = true;
            }
        }

        if (!foundOne)
        {
            previousPrevious = previous;
        }
        if (!deleteCurrent)
        {
            previous = t;
        }

        i++;
    }

    return true;
}
Of course, if you're willing to copy the original sequence into a temporary array, if it's not already one, then you could easily make the array-based version generic, which would make the code a lot simpler but still generic. It just depends on what your priorities are.
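For illustration, a minimal sketch of that idea (an assumed wrapper, not code from the answer above): copy the input into an array once, then run the same index-based scan using CompareTo.
static bool AlmostIncreasingSequence<T>(IEnumerable<T> sequence) where T : IComparable<T>
{
    // Copy once (unless it already is an array), then index freely.
    T[] s = sequence as T[] ?? sequence.ToArray();
    bool foundOne = false;
    for (int i = -1, j = 0, k = 1; k < s.Length; k++)
    {
        bool deleteCurrent = false;
        if (s[j].CompareTo(s[k]) >= 0)
        {
            if (foundOne)
                return false;
            foundOne = true;
            if (k > 1 && s[i].CompareTo(s[k]) >= 0)
                deleteCurrent = true;
        }
        if (!foundOne)
            i = j;
        if (!deleteCurrent)
            j = k;
    }
    return true;
}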
Addendum:
The basic performance difference between the LINQ method and a linear method (such as mine above) is obvious, but I was curious and wanted to quantify this difference. So I ran some tests, using randomly generated sequences, to get a rough idea of the difference.
I performed two versions of the tests: in the first, I ran a loop of 1000 trials, where the sequences could be anywhere between 10 and 100 elements long; in the second, 10,000 trials with sequences between 100 and 1000 elements long. I performed the second version because, on my laptop, the entire test of 1000 trials with shorter sequences completed in less than 1/20th of a second, too short a time for me to have confidence in the validity of the result.
With that first version, the code spent about 1ms calling the linear method of the check, and about 30ms calling the LINQ method, for a 30x difference in speed. Increasing the number of trials to 10,000 confirmed the result; the times scaled almost exactly 10x for each method, keeping a difference of 30x.
With the second version, the difference was closer to 400x. The linear version took about 0.07 seconds, while the LINQ version took 30 seconds.
As expected, the longer the sequence, the worse the disparity. For very short sequences, not only is the code unlikely to ever spend much time in the sequence-checking logic, the discrepancy between the linear and LINQ methods is going to be relatively small. But as the sequences get longer, the discrepancy will trend to very poor performance for the LINQ version while the linear version remains an excellent performer.
The LINQ version is very readable and concise. So in a situation where the inputs are always going to be relatively short (on the order of a dozen or two elements at the most), I'd go with the LINQ version. But if I expected to execute this test routinely with data that was any longer than that, I would avoid the LINQ and stick with the much more efficient linear approach.
A note on the randomly-generated sequences: I wrote the code to generate a monotonically increasing sequence of non-negative numbers, of the desired length, and then inserted between 0 and 2 (inclusive) new elements having a value of int.MinValue or int.MaxValue (also randomly selected, for each insertion). In this way, a third of the tests involved sequences that were trivially valid, a third involved sequences that required finding the correct single element to remove, and a third were not valid (i.e. did not meet the requirement that it could be made monotonically increasing by deleting at most one element).
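For reference, a rough sketch of how such a generator might look (this is an assumption on my part, not the author's actual test code):
static int[] GenerateTestSequence(Random rng, int length)
{
    // Build a strictly increasing base sequence of non-negative numbers...
    var list = new List<int>(length + 2);
    int value = 0;
    for (int i = 0; i < length; i++)
    {
        value += rng.Next(1, 10);
        list.Add(value);
    }
    // ...then insert 0, 1 or 2 extreme values at random positions.
    int insertions = rng.Next(0, 3);
    for (int i = 0; i < insertions; i++)
    {
        int extreme = rng.Next(2) == 0 ? int.MinValue : int.MaxValue;
        list.Insert(rng.Next(list.Count + 1), extreme);
    }
    return list.ToArray();
}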
UPDATE: Fixed a bug related to the way I was generating subsequences using Except. The obvious issue was that the subsequences generated when the original sequence contained duplicate items could be wrong; all positions of duplicate items could be potentially removed.
This problem seems deceptively simple but you can easily get bogged down in loops with ifs and elses that will never get it exactly right.
The best way to solve this is to take a step back and understand what the condition you are asking for really means. An almost strictly increasing sequence is one such that, of all possible subsequences created by removing one single item, at least one must be strictly increasing.
Ok, that seems to be sound reasoning, and it's easy to implement, so let's do it.
First, a trivial method that tells us if a given sequence is strictly increasing:
private static bool IsStrictlyIncreasing<T>(this IEnumerable<T> sequence)
    where T : IComparable<T>
{
    using (var e = sequence.GetEnumerator())
    {
        if (!e.MoveNext())
            return true;

        var previous = e.Current;
        while (e.MoveNext())
        {
            if (e.Current.CompareTo(previous) <= 0)
                return false;
            previous = e.Current;
        }
        return true;
    }
}
Now we need a helper method to generate all possible subsequences removing one item (as stated above, simply using Except will not cut it if T has value equality semantics):
private static IEnumerable<IEnumerable<T>> GenerateSubsequences<T>(
    this IEnumerable<T> sequence)
    => Enumerable.Range(0, sequence.Count())
                 .Select(i => sequence.Take(i)
                                      .Concat(sequence.Skip(i + 1)));
And now, we simply need to check all subsequences and find at least one that is strictly increasing:
public static bool IsAlmostStrictlyIncreasing<T>(this IEnumerable<T> sequence)
    where T : IComparable<T>
    => sequence.GenerateSubsequences()
               .Any(s => s.IsStrictlyIncreasing());
That should do it.
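As a quick sanity check (hypothetical driver code, not part of the original answer), the examples from the question behave as expected:
Console.WriteLine(new[] { 1, 3, 2, 1 }.IsAlmostStrictlyIncreasing()); // False
Console.WriteLine(new[] { 1, 3, 2 }.IsAlmostStrictlyIncreasing());    // True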
Having solved that CodeSignal challenge using C# myself, I can tell you how I approached it.
First, a helper method to handle the logic of deciding when to remove an element from a sequence:
private static bool removeElement(IEnumerable<int> sequence, int i) {
    // This method handles the logic for determining whether to remove an element from a sequence of integers.
    // Initialize the return variable and declare some useful element aliases.
    bool removeElement = false;
    int c = sequence.ElementAt(i), p = sequence.ElementAtOrDefault(i - 1), n = sequence.ElementAtOrDefault(i + 1);

    // Remove the first element if and only if it is greater than or equal to the next element.
    if (i == 0) removeElement = (c >= n);
    // Remove the last element if and only if it is less than or equal to the previous element.
    else if (i == (sequence.Count() - 1)) removeElement = (c <= p);
    // Removal logic for an element somewhere in the middle of the sequence:
    else {
        // If the current element is greater than the previous element...
        // ...and the current element is less than the next element, then do not remove the current element.
        if (c > p && c < n) removeElement = false;
        // If the current element is greater than or equal to the next element, then it might need to be removed.
        else if (c > p && c >= n) {
            removeElement = true;
            // Handle edge case for test 19.
            // If the current element is the next-to-last element...
            // ...and the only reason it's being considered for removal is because it is less than the last element...
            // ...then skip it and remove the last element instead.
            if (i == (sequence.Count() - 2)) removeElement = false;
            // Handle edge case for test 16.
            // If the current element occurs before the next-to-last element...
            if (i < (sequence.Count() - 2))
                // ...and both the current and next elements are less than the following element...
                // ...then skip the current element and remove the next one instead.
                if (n < sequence.ElementAt(i + 2) && c < sequence.ElementAt(i + 2)) removeElement = false;
        // Otherwise, remove the current element.
        } else removeElement = true;
    }
    return removeElement;
}
Then I wrote two versions of the main method: one using LINQ, and one without.
LINQ version:
bool almostIncreasingSequence(int[] sequence) {
    // Eliminate the most trivial cases first.
    if (sequence.Length <= 2) return true;
    else if (sequence.SequenceEqual(sequence.Distinct().OrderBy(x => x))) return true;
    else {
        // Get the index of the first element that should be removed from the sequence.
        int index = Enumerable.Range(0, sequence.Length).First(x => removeElement(sequence, x));
        // Remove that element from the sequence.
        sequence = sequence.Where((x, i) => i != index).ToArray();
    }
    // Return whether or not the remaining sequence is strictly increasing.
    return sequence.SequenceEqual(sequence.Distinct().OrderBy(x => x));
}
Non-LINQ version:
bool almostIncreasingSequence(int[] sequence) {
    // Eliminate the most trivial cases.
    if (sequence.Length <= 2) return true;

    // Make a copy of the input array in the form of a List collection.
    var initSequence = new List<int>(sequence);

    // Iterate through the List.
    for (int i = 0; i < initSequence.Count; i++) {
        // If the current element needs to be removed from the List, remove it.
        if (removeElement(initSequence, i)) {
            initSequence.RemoveAt(i);
            // Now the entire sequence after the first removal must be strictly increasing.
            // If this is not the case, return false.
            for (int j = i; j < initSequence.Count; j++) {
                if (removeElement(initSequence, j)) return false;
            }
            break;
        }
    }
    return true;
}
Both variations pass all of the provided test cases:
38/38 tests passed.
Sample tests: 19/19
Hidden tests: 19/19
Score: 300/300
Here is my version. It has similarities with Peter Duniho's first solution.
static bool AlmostIncreasingSequence(int[] sequence)
{
    int problemIndex = -1;
    for (int i = 0; i < sequence.Length - 1; i++)
    {
        if (sequence[i] < sequence[i + 1])
            continue; // The elements i and i + 1 are in order
        if (problemIndex != -1)
            return false; // The sequence has more than one problem, so it cannot be fixed
        problemIndex = i; // This is the first problem found so far
    }

    if (problemIndex == -1)
        return true; // The sequence has no problems
    if (problemIndex == 0)
        return true; // The sequence can be fixed by removing the first element
    if (problemIndex == sequence.Length - 2)
        return true; // The sequence can be fixed by removing the last element
    if (sequence[problemIndex - 1] < sequence[problemIndex + 1])
        return true; // The sequence can be fixed by removing the (problemIndex) element
    if (sequence[problemIndex] < sequence[problemIndex + 2])
        return true; // The sequence can be fixed by removing the (problemIndex + 1) element

    return false; // The sequence cannot be fixed
}
I have applied a recursive method:
public bool IsAlmostIncreasingSequence(int[] sequence)
{
    if (sequence.Length <= 2)
        return true;

    return IsAlmostIncreasingSequenceRecursive(sequence, 0);
}

private bool IsAlmostIncreasingSequenceRecursive(int[] sequence, int seed)
{
    int count = seed;
    if (count > 1) // condition met: not almost
        return false;

    for (int i = 1; i < sequence.Length; i++)
    {
        if (sequence[i] <= sequence[i - 1])
        {
            if (i >= 2 && sequence[i - 2] >= sequence[i])
                sequence = RemoveAt(sequence, i);
            else
                sequence = RemoveAt(sequence, i - 1);

            return IsAlmostIncreasingSequenceRecursive(sequence, ++count);
        }
    }
    return true;
}

private static int[] RemoveAt(int[] sequence, int index)
{
    for (int i = index; i < sequence.Length - 1; i++)
        sequence[i] = sequence[i + 1];

    Array.Resize(ref sequence, sequence.Length - 1);
    return sequence;
}
Well, I have seen many solutions, but things were made a bit complicated, so here is my short and precise solution to that particular C# problem.
bool solution(int[] sequence) {
    // If there are at most two items, return true.
    if (sequence.Length <= 2) return true;

    // Create a list for sequence comparison, C# beauty.
    List<int> newList = new List<int>();
    if (sequence.Length > 0)
    {
        newList = new List<int>(sequence);
    }

    // Just check if the array is already a valid sequence.
    if (sequence.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;

    // Count occurrences of out-of-sequence elements.
    int noSecCount = 0;
    // For checking the gap.
    int lastGap = 0, thisGap = 0;
    for (int n = 0; n < sequence.Count() - 1; n++)
    {
        thisGap = sequence[n + 1] - sequence[n];
        // If the current value is less than the next one, continue: the array is in sequence up to this point.
        // If not, we have a situation here that needs further digging.
        if (!(sequence[n] < sequence[n + 1]))
        {
            noSecCount++;
            // If we found more than one occurrence of out-of-sequence numbers, this array is not in sequence.
            if (noSecCount > 1) return false;
            switch (n)
            {
                case 0: // First item, at index 0
                    lastGap = thisGap;
                    newList = new List<int>(sequence);
                    newList.RemoveAt(n);
                    if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
                    break;
                default: // Any other item above index 0
                    // Just remove the current item and check the sequence.
                    newList = new List<int>(sequence);
                    newList.RemoveAt(n);
                    if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
                    // Remove the next item and check the sequence.
                    newList = new List<int>(sequence);
                    newList.RemoveAt(n + 1);
                    if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
                    // If we reach here, we need to check whether the gap between the previous comparison and the
                    // current one is the same; if not, we should quit, as we have found more than one out-of-sequence value.
                    if (thisGap != lastGap) return false;
                    lastGap = thisGap;
                    break;
            }
        }
    }
    // If we reach here and there is only one item out of sequence, we can remove it and get the sequence.
    return noSecCount == 1;
}
Thanks for the help strangers! I was able to get all my tests to pass first by removing all the increment/decrement operators for simplicity and simplifying my logic. If the iterator element is greater than or equal to the next element, increment my erasedElements variable. If that variable is 1, we know we've only removed one element and satisfied the increasing sequence.
bool almostIncreasingSequence(int[] sequence) {
    int erasedElements = 0;
    for (int i = 0; i < sequence.Length - 1; i++)
    {
        if (sequence[i] >= sequence[i + 1])
        {
            erasedElements += 1;
        }
    }
    Console.Write(erasedElements);
    return (erasedElements == 1);
}
All of the following sequences passed:
[1, 3, 2, 1]
[1, 3, 2]
[1, 4, 10, 4, 2]
[10, 1, 2, 3, 4, 5]
[1, 1, 1, 2, 3]
[0, -2, 5, 6]
[1, 1]

Efficient powerset algorithm for subsets of minimal length

I am using the following C# function to get a powerset limited to subsets of a minimal length:
string[] PowerSet(int min_len, string set)
{
    IEnumerable<IEnumerable<string>> seed =
        new List<IEnumerable<string>>() { Enumerable.Empty<string>() };

    return set.Replace(" ", "")
              .Split(',')
              .Aggregate(seed, (a, b) => a.Concat(a.Select(x => x.Concat(new[] { b }))))
              .Where(subset => subset.Count() >= min_len)
              .Select(subset => string.Join(",", subset))
              .ToArray();
}
The problem is that when the original set is large, the algorithm has to work very hard even if the minimal length is large as well. For example:
PowerSet(27, "1,11,12,17,22,127,128,135,240,254,277,284,292,296,399,309,322,326,333,439,440,442,447,567,580,590,692,697");
should be very easy, but takes far too long with the above function. I am looking for a concise modification of my function that could handle these cases efficiently.
Taking a quick look at your method, one of the inefficiencies is that every possible subset is created, regardless of whether it has enough members to warrant inclusion in the limited super set.
Consider implementing the following extension method instead. This method can trim out some unnecessary subsets based on their count to avoid excess computation.
public static List<List<T>> PowerSet<T>(List<T> startingSet, int minSubsetSize)
{
    List<List<T>> subsetList = new List<List<T>>();

    // The set bits of each intermediate value represent unique
    // combinations from the startingSet.
    // We can start checking for combinations at (1 << minSubsetSize) - 1 since
    // values less than that will not yield large enough subsets.
    int iLimit = 1 << startingSet.Count;
    for (int i = (1 << minSubsetSize) - 1; i < iLimit; i++)
    {
        // Get the number of 1's in this 'i'.
        int setBitCount = NumberOfSetBits(i);

        // Only include this subset if it will have at least minSubsetSize members.
        if (setBitCount >= minSubsetSize)
        {
            List<T> subset = new List<T>(setBitCount);
            for (int j = 0; j < startingSet.Count; j++)
            {
                // If the j'th bit in i is set,
                // then add the j'th element of the startingSet to this subset.
                if ((i & (1 << j)) != 0)
                {
                    subset.Add(startingSet[j]);
                }
            }
            subsetList.Add(subset);
        }
    }
    return subsetList;
}
The number of set bits in each incremental i tells you how many members will be in the subset. If there are not enough set bits, then there is no point in doing the work of creating the subset represented by the bit combination. NumberOfSetBits can be implemented a number of ways. See How to count the number of set bits in a 32-bit integer? for various approaches, explanations and references. Here is one example taken from that SO question.
public static int NumberOfSetBits(int i)
{
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
Now, while this solution works for your example, I think you will run into long runtimes and memory issues if you lower the minimum subset size too far or continue to grow the size of the startingSet. Without specific requirements posted in your question, I can't judge if this solution will work for you and/or is safe for your range of expected input cases.
If you find that this solution is still too slow, the operations can be split up for parallel computation, perhaps using PLINQ features.
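For example, a rough PLINQ sketch of that idea (an assumption on my part, not benchmarked; it reuses the NumberOfSetBits helper above and simply parallelizes the candidate filtering and subset construction):
public static List<List<T>> PowerSetParallel<T>(List<T> startingSet, int minSubsetSize)
{
    int first = (1 << minSubsetSize) - 1;
    int count = (1 << startingSet.Count) - first;
    return Enumerable.Range(first, count)
                     .AsParallel()
                     .Where(i => NumberOfSetBits(i) >= minSubsetSize)
                     .Select(i => Enumerable.Range(0, startingSet.Count)
                                            .Where(j => (i & (1 << j)) != 0)
                                            .Select(j => startingSet[j])
                                            .ToList())
                     .ToList();
}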
Lastly, if you would like to dress up the extension method with LINQ, it would look like the following. However, as written, I think you will see slower performance without some changes to it.
public static IEnumerable<List<T>> PowerSet<T>(List<T> startingSet, int minSubsetSize)
{
    var startingSetIndexes = Enumerable.Range(0, startingSet.Count).ToList();
    int first = (1 << minSubsetSize) - 1;
    var candidates = Enumerable.Range(first, (1 << startingSet.Count) - first)
                               .Where(p => NumberOfSetBits(p) >= minSubsetSize)
                               .ToList();
    foreach (int p in candidates)
    {
        yield return startingSetIndexes.Where(setInd => (p & (1 << setInd)) != 0)
                                       .Select(setInd => startingSet[setInd])
                                       .ToList();
    }
}

Function to look through list and determine trend

So I have a list of items. Each item on the list has a property called notional. Now, the list is already sorted. What I need to do is develop a function that sets the type of the list to one of the following:
Bullet - notional is the same for every item
Amortizing - notional decreases over the course of the schedule (might stay the same from element to element but it should never go up, and should end lower)
Accreting - notional increases over the course of the schedule (might stay the same from element to element but it should never go down, and should end higher)
Rollercoaster - notional goes up and down (could end the same, higher, or lower, but shouldn't be the same for each element and shouldn't be classified as one of the other types)
What would this method look like and what would be the most efficient way to go through the list and figure this out?
Thanks!
This would be a straightforward way to do it:
bool hasGoneUp = false;
bool hasGoneDown = false;
T previous = null; // T is the type of objects in the list; assuming ref type

foreach (var item in list)
{
    if (previous == null) {
        previous = item;
        continue;
    }
    hasGoneUp = hasGoneUp || item.notional > previous.notional;
    hasGoneDown = hasGoneDown || item.notional < previous.notional;
    if (hasGoneUp && hasGoneDown) {
        return Trend.Rollercoaster;
    }
    previous = item;
}

if (!hasGoneUp && !hasGoneDown) {
    return Trend.Bullet;
}
// Exactly one of hasGoneUp and hasGoneDown is true by this point
return hasGoneUp ? Trend.Accreting : Trend.Amortizing;
1. Let trendOut = Bullet
2. Loop from the first item to the last item
   2.1. If previous notional < next notional
        2.1.a. If trendOut = Amortizing, return RollerCoaster
        2.1.b. Else set trendOut = Accreting
   2.2. If previous notional > next notional
        2.2.a. If trendOut = Accreting, return RollerCoaster
        2.2.b. Else set trendOut = Amortizing
3. Return trendOut
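A rough C# translation of that pseudocode (assuming a Trend enum and a notional property, as used in the other answers):
Trend GetTrend(IList<Item> schedule) // Item is an assumed type exposing notional
{
    var trendOut = Trend.Bullet;
    for (int i = 1; i < schedule.Count; i++)
    {
        if (schedule[i - 1].notional < schedule[i].notional)
        {
            if (trendOut == Trend.Amortizing) return Trend.Rollercoaster;
            trendOut = Trend.Accreting;
        }
        else if (schedule[i - 1].notional > schedule[i].notional)
        {
            if (trendOut == Trend.Accreting) return Trend.Rollercoaster;
            trendOut = Trend.Amortizing;
        }
    }
    return trendOut;
}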
You could probably do something as simple as this
var changeList = new List<int>();
for (int i = 0; i < yourList.Count - 1; i++)
{
    changeList.Add(yourList[i + 1] - yourList[i]);
}

// Determine the nature of the list
var positiveChangeCount = changeList.Count(x => x > 0);
var negativeChangeCount = changeList.Count(x => x < 0);

if (positiveChangeCount == changeList.Count)
{
    // Accreting
}
else if (negativeChangeCount == changeList.Count)
{
    // Amortizing
}
else if (negativeChangeCount + positiveChangeCount == 0)
{
    // Bullet
}
else
{
    // Rollercoaster
}
I usually start off by optimizing for simplicity first and then for performance. Hence, I would start by making a second list of N-1 elements, whose elements are the differences between the notionals of the first list.
For the second list, I would then expect the following for each of your cases:
Bullet - ALL elements are 0
Amortising - ALL elements stay 0 or negative
Accreting - ALL elements stay 0 or positive
Rollercoaster - Elements oscillate between negative & positive
You can probably optimize it and do it in one pass. Basically, this is a discrete differentiation over your data.
bool OnlyGreaterOrEqual = true;
bool OnlyLessOrEqual = true;

for (int i = 1; i < itemList.Count; i++) {
    if (itemList[i].notional > itemList[i - 1].notional) {
        OnlyLessOrEqual = false;
    } else if (itemList[i].notional < itemList[i - 1].notional) {
        OnlyGreaterOrEqual = false;
    }
}

if (OnlyGreaterOrEqual && OnlyLessOrEqual) {
    return "Bullet";
} else if (OnlyGreaterOrEqual) {
    return "Accreting";
} else if (OnlyLessOrEqual) {
    return "Amortizing";
} else {
    return "RollerCoaster";
}
This is basically a LINQ implementation of Danish's answer. It'll require (worst case) three passes through the list, but because the lists are so small it won't really matter from a performance point of view. (I wrote it to work on a list of ints, so you'll have to modify it slightly to work with your types.)
var tmp = values
    .Skip(1)
    .Zip(values, (first, second) => first - second)
    .ToList();

var up = tmp.Any(t => t > 0);
var down = tmp.Any(t => t < 0);

if (up && down)
    // Rollercoaster
else if (up)
    // Accreting
else if (down)
    // Amortizing
else
    // Bullet
You could also (ab)use the Aggregate operator and Tuple to do it as one query. However, this will fail if the collection is empty and is a bit weird to use in production code.
var result = values.Skip(1).Aggregate(
    Tuple.Create<int, bool, bool>(values.First(), false, false),
    (last, current) => {
        return Tuple.Create(
            current,
            last.Item2 || (current - last.Item1) > 0,
            last.Item3 || (current - last.Item1) < 0);
    });
result will be a tuple that contains:
the last element of the collection (which is of no use)
Item2 will contain a boolean indicating whether any element was bigger than the previous element
Item3 will contain a boolean indicating whether any element was smaller than the previous element
The same if/else chain as above can then be applied to Item2 and Item3 to decide which pattern your data follows.

Can I use infinite range and operate over it?

Enumerable.Range(0, int.MaxValue)
          .Select(n => Math.Pow(n, 2))
          .Where(squared => squared % 2 != 0)
          .TakeWhile(squared => squared < 10000)
          .Sum()
Will this code iterate over all of the integer values from 0 to max-range or just through the integer values to satisfy the take-while, where, and select operators?
Can somebody clarify?
EDIT: My first try to make sure it works as expected was a dumb one. I revoke it :)
int.MaxValue + 5 overflows to be a negative number. Try it yourself:
unchecked
{
    int count = int.MaxValue + 5;
    Console.WriteLine(count); // Prints -2147483644
}
The second argument for Enumerable.Range has to be non-negative - hence the exception.
You can certainly use infinite sequences in LINQ though. Here's an example of such a sequence:
public IEnumerable<int> InfiniteCounter()
{
    int counter = 0;
    while (true)
    {
        unchecked
        {
            yield return counter;
            counter++;
        }
    }
}
That will overflow as well, of course, but it'll keep going...
Note that some LINQ operators (e.g. Reverse) need to read all the data before they can yield their first result. Others (like Select) can just keep streaming results as they read them from the input. See my Edulinq blog posts for details of the behaviour of each operator (in LINQ to Objects).
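For instance (a hypothetical illustration), Take can stop the infinite counter above because Select streams its results, whereas inserting Reverse would try to buffer the whole infinite sequence first:
var firstFiveSquares = InfiniteCounter()
    .Select(n => (long)n * n) // streams one element at a time
    .Take(5)
    .ToList();                // { 0, 1, 4, 9, 16 }

// InfiniteCounter().Reverse().Take(5) would never complete, because Reverse
// must read its entire (infinite) input before yielding the first element.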
The way to solve these sort of questions in general, is to think about what's going on in steps.
LINQ turns the LINQ code into something that'll be executed by the query provider. This could be something like producing SQL code, or all manner of other things. In the case of LINQ-to-objects, it produces some equivalent .NET code. Thinking about what that .NET code will be lets us reason about what will happen.*
With your code you have:
Enumerable.Range(0, int.MaxValue)
          .Select(n => Math.Pow(n, 2))
          .Where(squared => squared % 2 != 0)
          .TakeWhile(squared => squared < 10000)
          .Sum()
Enumerable.Range is slightly more complicated than:
for (int i = start; i != start + count; ++i)
    yield return i;
...but that's close enough for argument's sake.
Select is close enough to:
foreach (T item in source)
    yield return func(item);
Where is close enough to:
foreach (T item in source)
    if (func(item))
        yield return item;
TakeWhile is close enough to:
foreach (T item in source)
    if (func(item))
        yield return item;
    else
        yield break;
Sum is close enough to:
T tmp = 0; // must be a numeric type
foreach (T x in source)
    tmp += x;
return tmp;
This simplifies a few optimisations and so on, but is close enough to reason with. Taking each of these in turn, your code is equivalent to:
double ret = 0; // part of the equivalent of Sum
for (int i = 0; i != int.MaxValue; ++i) // equivalent of Range
{
    double j = Math.Pow(i, 2); // equivalent of Select(n => Math.Pow(n, 2))
    if (j % 2 != 0) // equivalent of Where(squared => squared % 2 != 0)
    {
        if (j < 10000) // equivalent of TakeWhile(squared => squared < 10000)
        {
            ret += j; // equivalent of Sum()
        }
        else // TakeWhile stopping further iteration
        {
            break;
        }
    }
}
return ret; // end of the equivalent of Sum()
Now, in some ways the code above is simpler, and in some ways it's more complicated. The whole point of using LINQ is that in many ways its simpler. Still, to answer your question "Will this code iterate over all of the integer values from 0 to max-range or just through the integer values to satisfy the take-while, where, and select operators?" we can look at the above and see that those that don't satisfy the where are iterated through to find that they don't satisfy the where, but no more work is done with them, and once the TakeWhile is satisfied, all further work is stopped (the break in my non-LINQ re-write).
Of course it's only the TakeWhile() in this case that means the call will return in a reasonable length of time, but we also need to think briefly about the others to make sure they yield as they go. Consider the following variant of your code:
Enumerable.Range(0, int.MaxValue)
          .Select(n => Math.Pow(n, 2))
          .Where(squared => squared % 2 != 0)
          .ToList()
          .TakeWhile(squared => squared < 10000)
          .Sum()
Theoretically, this will give exactly the same answer, but it will take far longer and far more memory to do so (probably enough to cause an out of memory exception). The equivalent non-linq code here though is:
List<double> tmpList = new List<double>(); // part of the ToList equivalent
for (int i = 0; i != int.MaxValue; ++i) // equivalent of Range
{
    double j = Math.Pow(i, 2); // equivalent of Select(n => Math.Pow(n, 2))
    if (j % 2 != 0) // equivalent of Where(squared => squared % 2 != 0)
    {
        tmpList.Add(j); // part of the equivalent of ToList()
    }
}
double ret = 0; // part of the equivalent of Sum
foreach (double k in tmpList)
{
    if (k < 10000) // equivalent of TakeWhile(squared => squared < 10000)
    {
        ret += k; // equivalent of Sum()
    }
    else // TakeWhile stopping further iteration
    {
        break;
    }
}
return ret; // end of the equivalent of Sum()
Here we can see how adding ToList() to the LINQ query vastly affects it, so that every item produced by the Range() call must be dealt with. Methods like ToList() and ToArray() break up the chaining so that the non-LINQ equivalents no longer fit "inside" each other, and none can therefore stop the operation of those that come before. (Sum() is another example, but since it comes after your TakeWhile(), that isn't an issue here.)
Another thing that would make it go through every iteration of the range is a Where(x => false): nothing would ever pass the filter, so the test in TakeWhile would never actually be performed and nothing would stop the iteration.
*Though there may be further optimisations, especially in the case of SQL code. Also, conceptually e.g. Count() is equivalent to:
int c = 0;
foreach (var item in src)
    ++c;
return c;
The fact that this will be turned into a call to the Count property of an ICollection or the Length property of an array means the O(n) loop above is replaced by an O(1) call (for most ICollection implementations), which is a massive gain for large sequences.
Your first code will only iterate as long the TakeWhile condition is met. It will not iterate until int.MaxValue.
int.MaxValue + 5 will result in a negative integer. Enumerable.Range throws an ArgumentOutOfRangeException if its second argument is negative. So that's why you get the exception (before any iteration takes place).

Do Lists in C# support slicing like in Python?

Sorry for such a basic question regarding lists, but do we have this feature in C#?
e.g. imagine this Python List:
a = ['a', 'b', 'c']
print a[0:2]
>>>> ['a', 'b']
Is there something like this in C#? I currently need to test some object properties in pairs. edit: pairs are always of two :P
Imagine a larger (python) list:
a = ['a','a','b','c','d','d']
I need to test, for example, whether a[0] == a[1], whether a[1] == a[2], etc.
How can this be done in C#?
Oh, and a last question: what is the tag (here) I can use to mark some parts of my post as code?
You can use LINQ to create a lazily-evaluated copy of a segment of a list. What you can't do without extra code (as far as I'm aware) is take a "view" on an arbitrary IList<T>. There's no particular reason why this shouldn't be feasible, however. You'd probably want it to be a fixed size (i.e. prohibit changes via Add/Remove) and you could also make it optionally read-only - basically you'd just proxy various calls on to the original list.
Sounds like it might be quite useful, and pretty easy to code... let me know if you'd like me to do this.
Out of interest, does a Python slice genuinely represent a view, or does it take a copy? If you change the contents of the original list later, does that change the contents of the slice? If you really want a copy, then the LINQ solutions using Skip/Take/ToList are absolutely fine. I do like the idea of a cheap view onto a collection though...
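For example, a quick Skip/Take sketch (a copy, not a view; the names here are just for illustration):
var a = new List<char> { 'a', 'a', 'b', 'c', 'd', 'd' };
var slice = a.Skip(0).Take(2).ToList(); // roughly a[0:2] in Python: ['a', 'a']

// For the pairwise checks in the question, indexing directly is simplest:
for (int i = 0; i < a.Count - 1; i++)
{
    bool samePair = a[i] == a[i + 1];
    // ...
}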
I've been looking for something like Python-Slicing in C# with no luck.
I finally wrote the following string extensions to mimic the python slicing:
static class StringExtensions
{
    public static string Slice(this string input, string option)
    {
        var opts = option.Trim().Split(':').Select(s => s.Length > 0 ? (int?)int.Parse(s) : null).ToArray();
        if (opts.Length == 1)
            return input[opts[0].Value].ToString(); // only one index
        if (opts.Length == 2)
            return Slice(input, opts[0], opts[1], 1); // start and end
        if (opts.Length == 3)
            return Slice(input, opts[0], opts[1], opts[2]); // start, end and step
        throw new NotImplementedException();
    }

    public static string Slice(this string input, int? start, int? end, int? step)
    {
        int len = input.Length;
        if (!step.HasValue)
            step = 1;
        if (!start.HasValue)
            start = (step.Value > 0) ? 0 : len - 1;
        else if (start < 0)
            start += len;
        if (!end.HasValue)
            end = (step.Value > 0) ? len : -1;
        else if (end < 0)
            end += len;

        string s = "";
        if (step < 0)
            for (int i = start.Value; i > end.Value && i >= 0; i += step.Value)
                s += input[i];
        else
            for (int i = start.Value; i < end.Value && i < len; i += step.Value)
                s += input[i];
        return s;
    }
}
Examples of how to use it:
"Hello".Slice("::-1"); // returns "olleH"
"Hello".Slice("2:-1"); // returns "ll"
