Checking whether a sequence of integers is increasing - C#

I'm stuck, only partially passing the problem below.
Given a sequence of integers, check whether it is possible to obtain a strictly increasing sequence by erasing no more than one element from it.
Example
sequence = [1, 3, 2, 1]
almostIncreasingSequence(sequence) = false
sequence = [1, 3, 2]
almostIncreasingSequence(sequence) = true
My code, which only passes some examples:
bool almostIncreasingSequence(int[] sequence) {
int seqIncreasing = 0;
if (sequence.Length == 1) return true;
for (int i = 0;i < sequence.Length-2;i++)
{
if ((sequence[i] == sequence[++i]+1)||(sequence[i] == sequence[++i]))
{
seqIncreasing++;
}
}
return ((seqIncreasing == sequence.Length) || (--seqIncreasing == sequence.Length));
}
Failed Examples:
Input:
sequence: [1, 3, 2]
Output:
false
Expected Output:
true
Input:
sequence: [10, 1, 2, 3, 4, 5]
Output:
false
Expected Output:
true
Input:
sequence: [0, -2, 5, 6]
Output:
false
Expected Output:
true
Input:
sequence: [1, 1]
Output:
false
Expected Output:
true

The LINQ-based answer is fine, and expresses the basic problem well. It's easy to read and understand, and solves the problem directly. However, it does have the problem that it requires generating a new sequence for each element in the original. As the sequences get longer, this becomes dramatically more costly and eventually, intractable.
It doesn't help that it requires the use of Skip() and Take(), which themselves add to the overhead of handling the original sequence.
A different approach is to scan the sequence once, keeping track of whether a deletion has already been attempted. When an out-of-sequence element is found: a) immediately return false if a deletion was already made, and b) don't include the deleted element in the subsequent comparisons.
The code you tried almost accomplishes this. Here's a version that works:
static bool almostIncreasingSequence(int[] sequence)
{
bool foundOne = false;
for (int i = -1, j = 0, k = 1; k < sequence.Length; k++)
{
bool deleteCurrent = false;
if (sequence[j] >= sequence[k])
{
if (foundOne)
{
return false;
}
foundOne = true;
if (k > 1 && sequence[i] >= sequence[k])
{
deleteCurrent = true;
}
}
if (!foundOne)
{
i = j;
}
if (!deleteCurrent)
{
j = k;
}
}
return true;
}
Note: I originally thought your attempt could be fixed with a minor change. But ultimately, it turned out that it had to be essentially the same as the generic implementation I wrote (especially once I fixed that one too…see below). The only material difference is really just whether one uses an array or a generic IEnumerable<T>.
For grins, I wrote another approach that is in the vein of the LINQ-based solution, in that it works on any sequence, not just arrays. I also made it generic (albeit with the constraint that the type implements IComparable<T>). That looks like this:
static bool almostIncreasingSequence<T>(IEnumerable<T> sequence) where T : IComparable<T>
{
bool foundOne = false;
int i = 0;
T previous = default(T), previousPrevious = default(T);
foreach (T t in sequence)
{
bool deleteCurrent = false;
if (i > 0)
{
if (previous.CompareTo(t) >= 0)
{
if (foundOne)
{
return false;
}
// So, which one do we delete? If the element before the previous
// one is in sequence with the current element, delete the previous
// element. If it's out of sequence with the current element, delete
// the current element. If we don't have a previous previous element,
// delete the previous one.
if (i > 1 && previousPrevious.CompareTo(t) >= 0)
{
deleteCurrent = true;
}
foundOne = true;
}
}
if (!foundOne)
{
previousPrevious = previous;
}
if (!deleteCurrent)
{
previous = t;
}
i++;
}
return true;
}
Of course, if you're willing to copy the original sequence into a temporary array, if it's not already one, then you could easily make the array-based version generic, which would make the code a lot simpler but still generic. It just depends on what your priorities are.
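A minimal sketch of that idea might look something like the following (untested; the almostIncreasingSequenceViaArray name is just for illustration, and it simply copies the input to an array and reuses the same scan as above, comparing via IComparable<T>):
static bool almostIncreasingSequenceViaArray<T>(IEnumerable<T> sequence) where T : IComparable<T>
{
    // Copy to an array if the input isn't one already (requires System.Linq for ToArray()).
    T[] array = sequence as T[] ?? sequence.ToArray();

    bool foundOne = false;
    for (int i = -1, j = 0, k = 1; k < array.Length; k++)
    {
        bool deleteCurrent = false;
        if (array[j].CompareTo(array[k]) >= 0)
        {
            if (foundOne)
            {
                return false;
            }
            foundOne = true;
            // Decide whether to "delete" the current element or the previous one.
            if (k > 1 && array[i].CompareTo(array[k]) >= 0)
            {
                deleteCurrent = true;
            }
        }
        if (!foundOne)
        {
            i = j;
        }
        if (!deleteCurrent)
        {
            j = k;
        }
    }
    return true;
}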
Addendum:
The basic performance difference between the LINQ method and a linear method (such as mine above) is obvious, but I was curious and wanted to quantify this difference. So I ran some tests, using randomly generated sequences, to get a rough idea of the difference.
I performed two versions of the tests: the first, I ran a loop with 1000 trials, where the sequences could be anywhere between 10 and 100 elements long; and the second, with 10,000 trials and sequences between 100 and 1000 elements long. I performed the second version, because on my laptop the entire test of 1000 trials with shorter sequences completed in less than 1/20th of a second, too short a time for me to have confidence in the validity of the result.
With that first version, the code spent about 1ms calling the linear method of the check, and about 30ms calling the LINQ method, for a 30x difference in speed. Increasing the number of trials to 10,000 confirmed the result; the times scaled almost exactly 10x for each method, keeping a difference of 30x.
With the second version, the difference was closer to 400x. The linear version took about 0.07 seconds, while the LINQ version took 30 seconds.
As expected, the longer the sequence, the worse the disparity. For very short sequences, not only is the code unlikely to ever spend much time in the sequence-checking logic, the discrepancy between the linear and LINQ methods is going to be relatively small. But as the sequences get longer, the discrepancy will trend to very poor performance for the LINQ version while the linear version remains an excellent performer.
The LINQ version is very readable and concise. So in a situation where the inputs are always going to be relatively short (on the order of a dozen or two elements at the most), I'd go with the LINQ version. But if I expected to execute this test routinely with data that was any longer than that, I would avoid the LINQ and stick with the much more efficient linear approach.
A note on the randomly-generated sequences: I wrote the code to generate a monotonically increasing sequence of non-negative numbers, of the desired length, and then inserted between 0 and 2 (inclusive) new elements having a value of int.MinValue or int.MaxValue (also randomly selected, for each insertion). In this way, a third of the tests involved sequences that were trivially valid, a third involved sequences that required finding the correct single element to remove, and a third were not valid (i.e. did not meet the requirement that it could be made monotonically increasing by deleting at most one element).
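For reference, a rough sketch of the kind of generator described above (not the exact test code; the step sizes and bounds here are arbitrary):
// Rough sketch of the test-sequence generator described above. The constants are
// arbitrary; the real test code may differ in the details.
static int[] GenerateTestSequence(Random rng, int length)
{
    var list = new List<int>(length + 2);
    int value = rng.Next(10);
    for (int i = 0; i < length; i++)
    {
        list.Add(value);
        value += 1 + rng.Next(10); // keep the base sequence strictly increasing
    }

    // Insert between 0 and 2 (inclusive) extreme values at random positions.
    int insertions = rng.Next(3);
    for (int i = 0; i < insertions; i++)
    {
        int extreme = rng.Next(2) == 0 ? int.MinValue : int.MaxValue;
        list.Insert(rng.Next(list.Count + 1), extreme);
    }

    return list.ToArray();
}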

UPDATE: Fixed a bug related to the way I was generating subsequences using Except. The obvious issue was that the subsequences generated when the original sequence contained duplicate items could be wrong: Except removes every position of a duplicated value, not just one.
This problem seems deceptively simple but you can easily get bogged down in loops with ifs and elses that will never get it exactly right.
The best way to solve this is to take a step back and understand what the condition you are asking for really means. An almost strictly increasing sequence is one such that, of all possible subsequences created by removing one single item, at least one must be strictly increasing.
OK, that seems to be sound reasoning, and it's easy to implement, so let's do it:
First, a trivial method that tells us if a given sequence is strictly increasing:
private static bool IsStrictlyIncreasing<T>(this IEnumerable<T> sequence)
where T : IComparable<T>
{
using (var e = sequence.GetEnumerator())
{
if (!e.MoveNext())
return true;
var previous = e.Current;
while (e.MoveNext())
{
if (e.Current.CompareTo(previous) <= 0)
return false;
previous = e.Current;
}
return true;
}
}
Now we need a helper method to generate all possible subsequences removing one item (as stated above, simply using Except will not cut it if T has value equality semantics):
private static IEnumerable<IEnumerable<T>> GenerateSubsequences<T>(
    this IEnumerable<T> sequence)
    => Enumerable.Range(0, sequence.Count())
        .Select(i => sequence.Take(i)
            .Concat(sequence.Skip(i + 1)));
And now, we simply need to check all subsequences and find at least one that is strictly increasing:
public static bool IsAlmostStrictlyIncreasing<T>(this IEnumerable<T> sequence)
where T : IComparable<T>
=> sequence.GenerateSubsequences()
.Any(s => s.IsStrictlyIncreasing());
That should do it.
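For example, running it against a few of the inputs from the question (assuming the extension methods above are in scope):
// Quick sanity checks against inputs from the question.
Console.WriteLine(new[] { 1, 3, 2, 1 }.IsAlmostStrictlyIncreasing());        // False
Console.WriteLine(new[] { 1, 3, 2 }.IsAlmostStrictlyIncreasing());           // True
Console.WriteLine(new[] { 10, 1, 2, 3, 4, 5 }.IsAlmostStrictlyIncreasing()); // True
Console.WriteLine(new[] { 1, 1 }.IsAlmostStrictlyIncreasing());              // True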

Having solved that CodeSignal challenge using C# myself, I can tell you how I approached it.
First, a helper method to handle the logic of deciding when to remove an element from a sequence:
private static bool removeElement(IEnumerable<int> sequence, int i) {
// This method handles the logic for determining whether to remove an element from a sequence of integers.
// Initialize the return variable and declare some useful element aliases.
bool removeElement = false;
int c = sequence.ElementAt(i), p = sequence.ElementAtOrDefault(i - 1), n = sequence.ElementAtOrDefault(i + 1);
// Remove the first element if and only if it is greater than or equal to the next element.
if (i == 0) removeElement = (c >= n);
// Remove the last element if and only if it is less than or equal to the previous element.
else if (i == (sequence.Count() - 1)) removeElement = (c <= p);
// Removal logic for an element somewhere in the middle of the sequence:
else {
// If the current element is greater than the previous element...
// ...and the current element is less than the next element, then do not remove the current element.
if (c > p && c < n) removeElement = false;
// If the current element is greater than or equal to the next element, then it might need to be removed.
else if (c > p && c >= n) {
removeElement = true;
// Handle edge case for test 19.
// If the current element is the next-to-last element...
// ...and the only reason it's being considered for removal is because it is less than the last element...
// ...then skip it and remove the last element instead.
if (i == (sequence.Count() - 2)) removeElement = false;
// Handle edge case for test 16.
// If the current element occurs before the next-to-last element...
if (i < (sequence.Count() - 2))
// ...and both the current and next elements are less than the following element...
// ...then skip the current element and remove the next one instead.
if (n < sequence.ElementAt(i + 2) && c < sequence.ElementAt(i + 2)) removeElement = false;
// Otherwise, remove the current element.
} else removeElement = true;
}
return removeElement;
}
Then I wrote two versions of the main method: one using LINQ, and one without.
LINQ version:
bool almostIncreasingSequence(int[] sequence) {
// Eliminate the most trivial cases first.
if (sequence.Length <= 2) return true;
else if (sequence.SequenceEqual(sequence.Distinct().OrderBy(x => x))) return true;
else {
// Get the index of the first element that should be removed from the sequence.
int index = Enumerable.Range(0, sequence.Length).First(x => removeElement(sequence, x));
// Remove that element from the sequence.
sequence = sequence.Where((x, i) => i != index).ToArray();
}
// Return whether or not the remaining sequence is strictly increasing.
return sequence.SequenceEqual(sequence.Distinct().OrderBy(x => x));
}
Non-LINQ version:
bool almostIncreasingSequence(int[] sequence) {
// Eliminate the most trivial cases.
if (sequence.Length <= 2) return true;
// Make a copy of the input array in the form of a List collection.
var initSequence = new List<int>(sequence);
// Iterate through the List.
for (int i = 0; i < initSequence.Count; i++) {
// If the current element needs to be removed from the List, remove it.
if (removeElement(initSequence, i)) {
initSequence.RemoveAt(i);
// Now the entire sequence after the first removal must be strictly increasing.
// If this is not the case, return false.
for (int j = i; j < initSequence.Count; j++) {
if (removeElement(initSequence, j)) return false;
}
break;
}
}
return true;
}
Both variations pass all of the provided test cases:
38/38 tests passed.
Sample tests: 19/19
Hidden tests: 19/19
Score: 300/300

Here is my version. It has similarities with Peter Duniho's first solution.
static bool AlmostIncreasingSequence(int[] sequence)
{
int problemIndex = -1;
for (int i = 0; i < sequence.Length - 1; i++)
{
if (sequence[i] < sequence[i + 1])
continue; // The elements i and i + 1 are in order
if (problemIndex != -1)
return false; // The sequence has more than one problems, so it cannot be fixed
problemIndex = i; // This is the first problem found so far
}
if (problemIndex == -1)
return true; // The sequence has no problems
if (problemIndex == 0)
return true; // The sequence can be fixed by removing the first element
if (problemIndex == sequence.Length - 2)
return true; // The sequence can be fixed by removing the last element
if (sequence[problemIndex - 1] < sequence[problemIndex + 1])
return true; // The sequence can be fixed by removing the (problemIndex) element
if (sequence[problemIndex] < sequence[problemIndex + 2])
return true; // The sequence can be fixed by removing the (problemIndex + 1) element
return false; // The sequence cannot be fixed
}

I have applied a recursive method:
public bool IsAlmostIncreasingSequence(int[] sequence)
{
if (sequence.Length <= 2)
return true;
return IsAlmostIncreasingSequenceRecursive(sequence, 0);
}
private bool IsAlmostIncreasingSequenceRecursive(int[] sequence, int seed)
{
int count = seed;
if (count > 1) //condition met: not almost
return false;
for (int i = 1; i < sequence.Length; i++)
{
if (sequence[i] <= sequence[i - 1])
{
if (i >= 2 && sequence[i - 2] >= sequence[i])
sequence = RemoveAt(sequence, i);
else
sequence = RemoveAt(sequence, i - 1);
return IsAlmostIncreasingSequenceRecursive(sequence, ++count);
}
}
return true;
}
private static int[] RemoveAt(int[] sequence, int index)
{
for (int i = index; i < sequence.Length - 1; i++)
sequence[i] = sequence[i + 1];
Array.Resize(ref sequence, sequence.Length - 1);
return sequence;
}

Well, I have seen many solutions, but things were made a bit complicated, so here is my short and precise solution for this particular C# problem.
bool solution(int[] sequence) {
//if there are two or fewer items, the sequence is trivially valid
if (sequence.Length <= 2) return true;
//create list for sequence comparison, C# beauty
List<int> newList = new List<int>();
if (sequence.Length > 0)
{
newList = new List<int>(sequence);
}
//just check if the array is already a valid sequence
if (sequence.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
//count occurrences of out-of-sequence elements
int noSecCount = 0;
//for checking the gap
int lastGap = 0, thisGap = 0;
for (int n = 0; n < sequence.Count() - 1; n++)
{
thisGap = sequence[n + 1] - sequence[n];
//if the current value is less than the next one, continue; the array is in sequence up to this point
//if it is not less than the next one, we have a situation that needs further digging
if (!(sequence[n] < sequence[n + 1]))
{
noSecCount++;
//if we found more than one out-of-sequence occurrence, this array is not an almost increasing sequence
if (noSecCount > 1) return false;
switch (n)
{
case 0: //First item at index 0
lastGap = thisGap;
newList = new List<int>(sequence);
newList.RemoveAt(n);
if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
break;
default: //any other item above index 0
//just remove current item and check the sequence
newList = new List<int>(sequence);
newList.RemoveAt(n);
if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
//remove the next item and check the sequence
newList = new List<int>(sequence);
newList.RemoveAt(n + 1);
if (newList.SequenceEqual(newList.Distinct().OrderBy(x => x))) return true;
//if we reach here, check whether the gap between the previous comparison and the current one is the same; if not, we should quit, as we have found more than
//one out-of-sequence value.
if (thisGap != lastGap) return false;
lastGap = thisGap;
break;
}
}
}
//if we reach here and there is only one item which is out of sequence, we can remove it and get the sequence
return noSecCount == 1;
}

Thanks for the help, strangers! I was able to get all my tests to pass, first by removing all the increment/decrement operators for simplicity, and then by simplifying my logic. If an element is greater than or equal to the next element, increment my erasedElements variable. If that variable is 1, we know we've only removed one element and satisfied the increasing sequence.
bool almostIncreasingSequence(int[] sequence) {
int erasedElements = 0;
for (int i = 0; i < sequence.Length-1; i++)
{
if(sequence[i] >= sequence[i+1])
{
erasedElements += 1;
}
}
Console.Write(erasedElements);
return (erasedElements == 1);
}
All of the following sequences passed:
[1, 3, 2, 1]
[1, 3, 2]
[1, 4, 10, 4, 2]
[10, 1, 2, 3, 4, 5]
[1, 1, 1, 2, 3]
[0, -2, 5, 6]
[1, 1]

Related

How to implement a specialized overload of the List.RemoveAll method, with an index parameter in the predicate?

The List<T>.RemoveAll is quite a useful method that allows removing multiple items from a list efficiently. Unfortunately, in some scenarios I needed some extra features that the method doesn't have, and some guarantees that the documentation doesn't provide. It also has questionable behavior in case the match predicate fails, which causes me anxiety. So in this question I am asking for an implementation of the same method, in the form of an extension method, with these features and characteristics:
Instead of a Predicate<T> it accepts a Func<T, int, bool> delegate, where the int is the zero-based index of the T item.
It guarantees that the predicate will be invoked exactly once for each item, in strictly ascending order.
In case the predicate returns true for some items and then fails for another item, the items that have been elected for removal are removed from the list before the propagation of the exception.
Here is the signature of the extension method that I am trying to implement:
public static int RemoveAll<T>(this List<T> list, Func<T, int, bool> predicate);
It returns the number of elements that were removed.
I attempted to implement it using the existing implementation as a starting point, but it has some performance optimizations that make it quite complex, and injecting the desirable "exceptional" behavior is not obvious. I am interested in an implementation that is simple and reasonably efficient. Using LINQ in the implementation is not desirable, because it implies memory allocations that I would like to avoid.
Context: I should demonstrate the behavior of the built-in List<T>.RemoveAll method, and explain why I don't like it. In case the match predicate fails for an item in the middle of the list, the items that have already been elected for removal are either not removed, or they are replaced with duplicates of other elements. In all cases the list retains its original size. Here is a minimal demo:
List<int> list = new(Enumerable.Range(1, 15));
Console.WriteLine($"Before RemoveAll: [{String.Join(", ", list)}]");
try
{
list.RemoveAll(item =>
{
if (item == 10) throw new Exception();
bool removeIt = item % 2 == 1;
if (removeIt) Console.WriteLine($"Removing #{item}");
return removeIt;
});
}
catch { } // Ignore the error for demonstration purposes
finally
{
Console.WriteLine($"After RemoveAll: [{String.Join(", ", list)}]");
}
The list has 15 numbers, and the intention is to remove the odd numbers from the list. The predicate fails for the 10th number.
Output:
Before RemoveAll: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
Removing #1
Removing #3
Removing #5
Removing #7
Removing #9
After RemoveAll: [2, 4, 6, 8, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
Online demo.
As you can see the numbers 1 and 3 have been removed, the 5, 7 and 9 are still there, and the numbers 6 and 8 have been duplicated (there are two occurrences of each). On the contrary the output that I expected to see is:
After RemoveAll: [2, 4, 6, 8, 10, 11, 12, 13, 14, 15]
This would be a reasonable and predictable behavior I could count on. It keeps the level of danger manageable. I am not risking, for example, duplicating items in a virtual shopping cart, or printing some PDF documents twice from a selection. The existing behavior stretches my comfort level a bit too much.
I have reported this behavior to Microsoft, and the feedback that I've got is that in case of failure the outcome is undefined. From their point of view there is no difference between the two outputs above (the actual and the expected). Both are equally corrupted, because both represent a state that is neither the original nor the final/correct state after a successful execution. So they don't think that there is any bug that needs to be fixed, and making changes that could negatively affect the performance of successful executions is not justified. They also believe that the existing behavior is not surprising or unexpected, so there is no reason to document it.
This solution is based on the idea of separating the selection of the items to be removed from the removal itself.
This has the following advantages:
If an exception occurs during the selection process, the list will be left untouched
The removal process can only fail in catastrophic cases (OutOfMemoryException etc.)
But of course there are also some disadvantages:
it requires extra memory to store the intermediate selection result
some optimizations might not be as effective
Because of the mentioned optimizations, I chose to base the selection result on ranges instead of individual indexes, so we can use List.RemoveRange, which is more effective than individual RemoveAt calls (assuming there are in fact ranges with more than one element).
public static List<(int start, int count)> GetIndexRanges<T>(this List<T> list,
Func<T, int, bool> predicate)
{
var result = new List<(int start, int count)>();
int start = -1;
for (var i = 0; i < list.Count; i++)
{
// see note 1 below
bool toBeRemoved = predicate(list[i], i);
if (toBeRemoved)
{
if (start < 0)
start = i; // new range starts
}
else if (start >= 0)
{
// range finished
result.Add((start, i - start));
start = -1;
}
}
if (start >= 0)
{
// orphan range at the end
result.Add((start, list.Count - start));
}
return result;
}
public static int RemoveIndexRanges<T>(this List<T> list,
List<(int start, int count)> ranges)
{
var removed = 0;
foreach (var range in ranges)
{
// the "- removed" is there to take into account
// that deletion moves the indexes.
list.RemoveRange(range.start - removed, range.count);
removed += range.count;
}
return removed;
}
Usage:
var ranges = list.GetIndexRanges((item, index) =>
{
//if (item == 10) throw new Exception();
return item % 2 == 1;
});
// See note 2 below
list.RemoveIndexRanges(ranges);
Note 1: As is, an exception in the predicate would just be propagated during the selection process, with no change to the collection. To give the caller more control over this, the following could be done: extend GetIndexRanges to still return everything collected so far, and in addition return any exception as an out parameter:
public static List<(int start, int count)> GetIndexRanges<T>(this List<T> list,
Func<T, int, bool> predicate, out Exception exception)
{
exception = null; // the out parameter must be assigned on every path
var result = new List<(int start, int count)>();
int start = -1;
for (var i = 0; i < list.Count; i++)
{
bool toBeRemoved = false;
try
{
toBeRemoved = predicate(list[i], i);
}
catch (Exception e)
{
exception = e;
break; // omit this line to continue with the selection process
}
if (toBeRemoved)
{
if (start < 0)
start = i; // new range starts
}
else if (start >= 0)
{
// range finished
result.Add((start, i - start));
start = -1;
}
}
if (start >= 0)
{
// orphan range at the end
result.Add((start, list.Count - start));
}
return result;
}
var ranges = list.GetIndexRanges((item, index) =>
{
if (item == 10) throw new Exception();
return item % 2 == 1;
}, out var exception);
// to fulfil requirement #3, we remove the ranges collected so far
// even in case of an exception
list.RemoveIndexRanges(ranges);
// and then throw the exception afterwards
if (exception != null)
ExceptionDispatchInfo.Capture(exception).Throw();
Note 2: As this is now a two-step process, it will fail if the list changes between the calls.
I think that I've managed to come up with an implementation that satisfies all three requirements:
/// <summary>
/// Removes all the elements that match the conditions defined by the specified
/// predicate. In case the predicate fails, the integrity of the list is preserved.
/// </summary>
public static int RemoveAll<T>(this List<T> list, Func<T, int, bool> predicate)
{
ArgumentNullException.ThrowIfNull(list);
ArgumentNullException.ThrowIfNull(predicate);
Span<T> span = CollectionsMarshal.AsSpan(list);
int i = 0, j = 0;
try
{
for (; i < span.Length; i++)
{
if (predicate(span[i], i)) continue;
if (j < i) span[j] = span[i];
j++;
}
}
finally
{
if (j < i)
{
for (; i < span.Length; i++, j++)
span[j] = span[i];
list.RemoveRange(j, span.Length - j);
}
}
return i - j;
}
For better performance it uses the CollectionsMarshal.AsSpan method (.NET 5) to get a Span<T> out of the list. The algorithm works just as well by using the indexer of the list instead of the span, and replacing the span.Length with list.Count.
Online demo.
I haven't benchmarked this implementation, but I expect it to be only marginally slower than the native implementation.
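For reference, here is roughly what that indexer-based variant would look like (it can't coexist with the span-based overload above, since the signatures are identical):
/// <summary>
/// Same algorithm as above, using the list's indexer instead of
/// CollectionsMarshal.AsSpan.
/// </summary>
public static int RemoveAll<T>(this List<T> list, Func<T, int, bool> predicate)
{
    ArgumentNullException.ThrowIfNull(list);
    ArgumentNullException.ThrowIfNull(predicate);
    int i = 0, j = 0;
    try
    {
        for (; i < list.Count; i++)
        {
            if (predicate(list[i], i)) continue;
            if (j < i) list[j] = list[i];
            j++;
        }
    }
    finally
    {
        if (j < i)
        {
            for (; i < list.Count; i++, j++)
                list[j] = list[i];
            list.RemoveRange(j, list.Count - j);
        }
    }
    return i - j;
}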
So they don't think that there is any bug that needs to be fixed. They also believe that this behavior is not surprising or unexpected, so there is no need to document it.
They're correct. The method is documented as:
Removes all the elements that match the conditions defined by the specified predicate.
This supports two scenarios: the predicate returning true, removing an element, or false for leaving it as-is. A predicate throwing an exception is not a use case intended to be supported.
If you want to be able to pass a predicate that may throw, you could wrap it like this:
public static int RemoveAll<T>(this List<T> list, Func<T, int, bool> predicate)
{
Exception? caught = null;
int index = 0;
int removed = 0;
list.RemoveAll(item =>
{
// Ignore the rest of the list once thrown
if (caught != null) return false;
try
{
var remove = predicate(item, index);
if (remove)
{
removed++;
}
return remove;
}
catch (Exception e)
{
    caught = e;
    return false;
}
finally
{
    // Advance the index for every item, whether it was removed or not.
    index++;
}
});
if (caught != null)
{
throw caught;
}
return removed;
}
I don't know how Microsoft wrote this method.
I tried some code blocks and found a case.
Actually, the problem is your throw new Exception(). If you don't throw, your code will run perfectly. The exception triggers some other case, but I don't know what it is.
if (item >= 10) return false;
bool removeIt = item % 2 == 1;
if (removeIt) Console.WriteLine($"Removing #{item}");
return removeIt;
I found this. EDIT
Actually, the Func<T, int, bool> delegate does not delete an item itself. It returns a boolean: if it returns true, the item is deleted from the list; if it returns false, it is not deleted.

Code Complexity Misunderstanding of Single Element in a Sorted Array

The problem, which is on LeetCode, says:
You are given a sorted array consisting of only integers where every
element appears exactly twice, except for one element which appears
exactly once.
Return the single element that appears only once.
Your solution must run in O(log n) time and O(1) space.
Example 1:
Input: nums = [1,1,2,3,3,4,4,8,8]
Output: 2
Example 2:
Input: nums = [3,3,7,7,10,11,11]
Output: 10
My friend said that LeetCode accepted it as a correct solution, as you can see in the image below. However, I can't understand how the code is O(log n). Could you explain it to me? I assert that the code is O(n), because it iterates through the array one element at a time, up to its size.
public class Solution {
public int SingleNonDuplicate(int[] nums) {
int result = nums[0];
for (int i = 0; i < nums.Length; i++)
{
if (nums[i] != result && i % 2 == 1)
{
result = nums[i - 1];
return result;
}
else
{
result = nums[i];
}
}
return result;
}
}

Interview function with time complexity

I had an interview question to write a program in C# that outputs the elements that occur an odd number of times in an array.
Example: [2, 2, 3, 3, 3] => [3] (considering the array is sorted)
My solution was:
public IEnumerable<int> OddOccurance(List<int> InputList)
{
    List<int> output = new List<int>();
    for (int i = 0; i < InputList.Count; i++)
    {
        int count = 0;
        for (int j = 0; j < InputList.Count; j++)
        {
            if (InputList[i] == InputList[j])
            {
                count++;
            }
        }
        if (count % 2 != 0)
        {
            output.Add(InputList[i]);
        }
    }
    return output.Distinct();
}
I think the answer is correct, but the interviewer asked me about different ways to make the solution much faster.
Can anyone please tell me the time complexity of the above solution?
If there is a way to make it much faster, what would the time complexity of that solution be?
Your solution is O(n^2) - if you don't know why, evaluate the sum n + n + ... + n (once per outer iteration), which comes to n * n = n^2. That is the equation describing the running time of your algorithm. You can solve the problem in linear time easily - just increment i instead of running the inner loop over all values in the array.
for (int i = 0; i < InputList.Count; ++i)
{
    int currentValue = InputList[i];
    int j = i + 1;
    int count = 1;
    // Count the length of the current "run" of equal values (the list is sorted).
    while (j < InputList.Count && InputList[j] == currentValue)
    {
        count++;
        i++;
        j++;
    }
    if (count % 2 != 0)
        ..
}
If array is not sorted - use dictionary (hash table - Dictionary in C#) - value is a dictionary key, count is a dictionary value. (that will give you Contains key check in O(1)) Another way to get linear time if implemented properly.
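For the unsorted case, a sketch of that dictionary-based counting approach might look like this:
// Count occurrences in one pass, then collect the values that occur an odd
// number of times. Roughly O(n) overall, assuming O(1) dictionary operations.
public static List<int> OddOccurrencesUnsorted(List<int> input)
{
    var counts = new Dictionary<int, int>();
    foreach (int value in input)
    {
        counts.TryGetValue(value, out int count);
        counts[value] = count + 1;
    }

    var output = new List<int>();
    foreach (var pair in counts)
    {
        if (pair.Value % 2 != 0)
            output.Add(pair.Key);
    }
    return output;
}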
The root problem of your solution is seen on this line:
return output.Distinct();
The very fact that you are doing a Distinct means that you may be adding more entries than you should.
So how can you optimize it? Observe that since the array is sorted, the only place where you can find a number that's the same as the one you're looking at is next to it, or next to another number that's equal to your current number. In other words, your numbers go in "runs".
This observation lets you go from two nested loops and an O(N^2) solution to a single loop and an O(N) solution. Simply walk the array and check the length of each "run": when a run starts, store its index; when you come across a new number, see if the length of the run that just ended is odd, and then start a new run:
int start = 0;
int pos = 1;
while (pos < InputList.Length) {
if (InputList[pos] != InputList[start]) {
if ((pos-start) % 2 == 1) {
output.Add(InputList[start]);
}
start = pos;
}
pos++;
}
// Process the last run
if ((InputList.Length-start) % 2 == 1) {
output.Add(InputList[start]);
}
Demo.

Removing masked entries from an array

The task is to keep an array of objects untouched if the input is null and, otherwise, remove the elements at the positions specified by the input. I've got it working, but I'm vastly dissatisfied with the code quality.
List<Stuff> stuff = new List<Stuff>{ new Stuff(1), new Stuff(2), new Stuff(3) };
String input = "5";
if(input == null)
return stuff;
int mask = Int32.Parse(input);
for (int i = stuff.Count - 1; i >= 0; i--)
if ((mask & (int)Math.Pow(2, i)) == 0)
stuff.RemoveAt(i);
return stuff;
The actual obtaining of the input, and the fact that e.g. String.Empty will cause problems, need not be regarded. Let's assume that those are handled.
How can I make the code more efficient?
How can I make the syntax more compact and graspable?
Instead of the backwards-running loop, you could use LINQ with the following statement.
stuff = stuff.Where((iStuff, idx) => (mask & (int)Math.Pow(2, idx)) != 0).ToList();
Or, even cooler, using a bitwise shift:
stuff = stuff.Where((_, index) => (mask >> index & 1) == 1).ToList();
It uses an overload of Where that exposes the element's position in the sequence. For a similar task, there is also an overload of Select that gives access to the index.
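For example, the indexed Select overload could be used to pair each element with its mask bit first (a sketch, assuming the same stuff and mask variables as above):
// Pair each element with its mask bit via the indexed Select overload,
// then keep only the elements whose bit is set.
stuff = stuff.Select((item, index) => new { item, keep = (mask >> index & 1) == 1 })
             .Where(x => x.keep)
             .Select(x => x.item)
             .ToList();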
Untested, but you could make an extension method that iterates the collection and filters, returning matching elements as it goes. Repeatedly bit-shifting the mask and checking the 0th bit seems the easiest to follow - for me at least.
static IEnumerable<T> TakeMaskedItemsByIndex<T>(this IEnumerable<T> collection, ulong mask)
{
foreach (T item in collection)
{
if((mask & 1) == 1)
yield return item;
mask = mask >> 1;
}
}
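Usage would then be along these lines (also untested, reusing the input string from the question):
// Keep only the elements whose corresponding bit is set in the mask.
ulong mask = UInt64.Parse(input);
stuff = stuff.TakeMaskedItemsByIndex(mask).ToList();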

Function to look through list and determine trend

So I have a list of items. Each item on the list has a property called notional. Now, the list is already sorted. What I need to do is, develop a function that sets the type of list to one of the following:
Bullet - notional is the same for every item
Amortizing - notional decreases over the course of the schedule (might stay the same from element to element but it should never go up, and should end lower)
Accreting - notional increases over the course of the schedule (might stay the same from element to element but it should never go down, and should end higher)
Rollercoaster - notional goes up and down (it could end the same, higher, or lower, but shouldn't be the same for each element and shouldn't be classified as one of the other types)
What would this method look like and what would be the most efficient way to go through the list and figure this out?
Thanks!
This would be a straightforward way to do it:
bool hasGoneUp = false;
bool hasGoneDown = false;
T previous = null; // T is the type of objects in the list; assuming ref type
foreach(var item in list)
{
if (previous == null) {
previous = item;
continue;
}
hasGoneUp = hasGoneUp || item.notional > previous.notional;
hasGoneDown = hasGoneDown || item.notional < previous.notional;
if(hasGoneUp && hasGoneDown) {
return Trend.Rollercoaster;
}
previous = item;
}
if (!hasGoneUp && !hasGoneDown) {
return Trend.Bullet;
}
// Exactly one of hasGoneUp and hasGoneDown is true by this point
return hasGoneUp ? Trend.Accreting : Trend.Amortizing;
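This assumes a Trend enum along these lines, which isn't defined in the question:
// Assumed by the snippet above; not part of the original question.
enum Trend
{
    Bullet,        // notional is the same for every item
    Amortizing,    // notional never increases and ends lower
    Accreting,     // notional never decreases and ends higher
    Rollercoaster  // notional both increases and decreases
}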
1. Let trendOut = Bullet
2. Loop from the first item to the last item:
2.1. If previous notional < next notional
2.1.a. If trendOut = Amortizing, return RollerCoaster
2.1.b. Else set trendOut = Accreting
2.2. If previous notional > next notional
2.2.a. If trendOut = Accreting, return RollerCoaster
2.2.b. Else set trendOut = Amortizing
3. Return trendOut.
You could probably do something as simple as this
var changeList = new List<int>();
for (int i = 0; i < yourList.Count - 1; i++)
{
    changeList.Add(yourList[i + 1] - yourList[i]);
}
// Determine the nature of the list
var positiveChangeCount = changeList.Count(x => x > 0);
var negativeChangeCount = changeList.Count(x => x < 0);
if (positiveChangeCount == changeList.Count)
{
    // Accreting
}
else if (negativeChangeCount == changeList.Count)
{
    // Amortizing
}
else if (negativeChangeCount + positiveChangeCount == 0)
{
    // Bullet
}
else
{
    // Rollercoaster
}
I usually start off by optimizing for simplicity first and then performance. Hence, I would start by making a second list of N-1 elements, whose elements are the differences between the notionals of the first list.
Hence, for the second list, I would expect the following for each of your categories:
Bullet - ALL elements are 0
Amortising - ALL elements stay 0 or negative
Accreting - ALL elements stay 0 or positive
Rollercoaster - Elements oscillate between negative & positive
You can probably optimize it and do it in one pass. Basically, this is a discrete differentiation over your data.
bool OnlyGreaterOrEqual = true;
bool OnlyLessOrEqual = true;
for (int i = 1; i < itemList.Count; i++) {
    if (itemList[i].notional > itemList[i - 1].notional) {
        OnlyLessOrEqual = false;
    } else if (itemList[i].notional < itemList[i - 1].notional) {
        OnlyGreaterOrEqual = false;
    }
}
if (OnlyGreaterOrEqual && OnlyLessOrEqual) {
    return "Bullet";
} else if (OnlyGreaterOrEqual) {
    return "Accreting";
} else if (OnlyLessOrEqual) {
    return "Amortizing";
} else {
    return "Rollercoaster";
}
This is basically a LINQ implementation of Danish's answer. It'll require (worst case) three passes through the list, but because the lists are so small it won't really matter from a performance point of view. (I wrote it to work on a list of ints, so you'll have to modify it slightly to work with your types.)
var tmp = values
    .Skip(1)
    .Zip(values, (first, second) => first - second)
    .ToList();
var up = tmp.Any(t => t > 0);
var down = tmp.Any(t => t < 0);
if (up && down)
{
    // Rollercoaster
}
else if (up)
{
    // Accreting
}
else if (down)
{
    // Amortizing
}
else
{
    // Bullet
}
You could also (ab)use the Aggregate operator and Tuple to do it as one query. However, this will fail if the collection is empty and is a bit weird to use in production code.
var result = values.Skip(1).Aggregate(
Tuple.Create<int, bool, bool>( values.First(), false, false ),
( last, current ) => {
return Tuple.Create(
current,
last.Item2 || (current - last.Item1) > 0,
last.Item3 || (current - last.Item1) < 0 );
});
result will be a tuple that contains:
the last element of the collection (which is of no use)
Item2 will contain a boolean indicating whether any element was bigger than the previous element
Item3 will contain a boolean indicating whether any element was smaller than the previous element
The same branching logic as above can be used to decide which pattern your data follows.
