Code performance for "Max Product of 3" Question - c#

I'm working through some questions on Codility to improve my coding skills. I can answer most questions, but the one below has me confused: I can come up with a solution that gets the expected answer, but when I test it with a really large array, it is really slow.
https://app.codility.com/programmers/lessons/6-sorting/max_product_of_three/
----MY ATTEMPTED SOLUTION----
public int solution(int[] A)
{
    var sum = 0;
    for (int P = 0; P < A.Length - 2; P++)
    {
        for (int Q = P + 1; Q < A.Length - 1; Q++)
        {
            for (int R = Q + 1; R < A.Length; R++)
            {
                if ((A[P] * A[Q] * A[R]) > sum)
                    sum = A[P] * A[Q] * A[R];
            }
        }
    }
    return sum;
}

The "obvious" solution is to sort and compute the product of the three highest numbers and the product of the highest number and the two lowest numbers.
The answer would then be the maximum of those two products.
The reason that you need to also check the two lowest numbers is in case they are both negative, in which case their product will be positive (and may be part of the maximum product).
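A minimal sketch of that sort-based approach (assuming the usual Codility solution(int[] A) signature) might look like this:
public int solution(int[] A)
{
    Array.Sort(A);
    int n = A.Length;
    // Product of the three largest values.
    int topThree = A[n - 1] * A[n - 2] * A[n - 3];
    // Product of the largest value and the two smallest (possibly both negative) values.
    int topAndBottomTwo = A[n - 1] * A[0] * A[1];
    return Math.Max(topThree, topAndBottomTwo);
}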
The fastest general-purpose sort is O(N log N), but you can do better: you can make a single pass through the array and keep track of the three highest and the two lowest numbers in the array.
(Note: The fact that the question specifies 3 as the minimum size of the array means that you don't have to account for there being less than 3 elements, which simplifies the code.)
Thus you can solve it in O(N) time like so:
public static int MaxProduct(params int[] array)
{
    int highest3 = int.MinValue; // Third highest.
    int highest2 = int.MinValue; // Second highest.
    int highest1 = int.MinValue; // Highest.
    int lowest2 = int.MaxValue;  // Second lowest.
    int lowest1 = int.MaxValue;  // Lowest.

    foreach (int n in array)
    {
        if (n > highest1)
        {
            highest3 = highest2;
            highest2 = highest1;
            highest1 = n;
        }
        else if (n > highest2)
        {
            highest3 = highest2;
            highest2 = n;
        }
        else if (n > highest3)
        {
            highest3 = n;
        }

        if (n < lowest1)
        {
            lowest2 = lowest1;
            lowest1 = n;
        }
        else if (n < lowest2)
        {
            lowest2 = n;
        }
    }

    // Answer is either the highest 3 or the lowest 2 and the highest 1
    // (because the product of two negatives is positive).
    int prodHighestOnly = highest3 * highest2 * highest1;
    int prodHighestLowest = highest1 * lowest2 * lowest1;
    return Math.Max(prodHighestOnly, prodHighestLowest);
}
(Essentially the highest1, highest2 and highest3 values are being maintained in sorted order, but as discrete variables rather than in an array.)
You can call it like so: Console.WriteLine(MaxProduct(-3, 1, 2, -2, 5, 6));, which outputs 60.


Find the number whose digits have the highest sum in a given range [closed]

For a given input n, the task is to find the largest integer that is <= n and has the highest digit sum.
For example:
solve(100) = 99. Digit Sum for 99 = 9 + 9 = 18. No other number <= 100 has a higher digit sum.
solve(10) = 9
solve(48) = 48. Note that 39 is also an option, but 48 is larger.
Input range is 0 < n < 1e11
What have I tried?
I tried 2 methods. Firstly, I tried getting each digit with Math operations like this:
public static long solve(long n)
{
    long answer = 0;
    var highestSum = 0;
    for (long i = 1; i <= n; i++)
    {
        var temp = i;
        var sum = 0;
        while (temp > 0)
        {
            sum += (int)(temp % 10);
            temp /= 10;
        }
        if (sum >= highestSum)
        {
            highestSum = sum;
            answer = i;
        }
    }
    return answer;
}
For my second try, I used LINQ extensions, like this:
public static long solve(long n)
{
    long answer = 0;
    var highestSum = 0;
    for (long i = 1; i <= n; i++)
    {
        var sum = i.ToString().Sum(x => x - '0');
        if (sum >= highestSum)
        {
            highestSum = sum;
            answer = i;
        }
    }
    return answer;
}
Both of my solutions seem to return the correct value and work for smaller values, but for larger input, they seem to take a very long time to execute. How to make it run through numbers faster? Is there a specific algorithm for this task, or am I doing something else wrong?
We can achieve this in O(number of digits in n).
We can achieve this if we iteratively reduce a digit and change all other digits on its right to 9.
Let n be our current number.
We can find the next candidate using the formula below.
b is a power of 10 representing the position of the current digit. After every iteration we reduce n to n/10 and change b to b*10.
The candidate is (n - 1) * b + (b - 1).
For example, if the number is n = 521 and b = 1, then (521 - 1) * 1 + (1 - 1) gives you 520, which is exactly what we need: reduce the digit at the current position by 1 and replace every digit to its right with 9.
After n /= 10, n becomes 52, and b *= 10 makes b 10; the formula is applied again as (52 - 1) * 10 + 9, which gives you 519: again, reduce the digit at the current position by 1 and set every digit to its right to 9.
static long findMax(long x)
{
    long b = 1, ans = x;
    while (x != 0)
    {
        long cur = (x - 1) * b + (b - 1);
        // Keep the candidate if its digit sum is higher,
        // or equal with a larger value.
        if (sumOfDigits(cur) > sumOfDigits(ans) ||
            (sumOfDigits(cur) == sumOfDigits(ans) && cur > ans))
            ans = cur;
        x /= 10;
        b *= 10;
    }
    return ans;
}

static int sumOfDigits(long a)
{
    int sum = 0;
    while (a != 0)
    {
        sum += (int)(a % 10);
        a /= 10;
    }
    return sum;
}
The accepted answer is brilliant, but I was dead-set on figuring out a way to determine the correct answer without actually summing the digits and comparing the sums to each other.
I tried a few things (as you can see if you look at the edit history), but I couldn't find the formula. In desperation, I wrote a utility to show me all the numbers from 1 to 9999999 that did not have a smaller number with a larger sum to see what pattern I was missing by not looking on a large enough scale.
I was somewhat surprised that only 253 numbers out of the first 10 million have the largest sum compared to their lessers! Somehow I thought that number would be bigger.
Also, it turns out that there is an obvious pattern that appears fairly quickly, and it remained constant for 10 million iterations, so I think it's a good one.
Here's a small sample of some blocks of consecutive output:
0,1,2,3,4,5,6,7,8,9,
18,19,28,29,38,39,48,49,
58,59,68,69,78,79,88,89,98,99,189,198
8899,8989,8998,8999,
9899,9989,9998,9999,
18999,19899,19989,19998,19999
98999,99899,99989,99998,99999,
189999,198999,199899,199989,199998,199999
7899999,7989999,7998999,7999899,7999989,7999998,7999999,
8899999,8989999,8998999,8999899,8999989,8999998,8999999,
9899999,9989999,9998999,9999899,9999989,9999998,9999999
It's so obviously clear!
If the number is one digit, then it's the highest.
If all but the first digit are either all 9's or all 9's with a single 8, then its sum is the highest.
Otherwise the highest number is the one whose first digit is one less than the original, followed by all 9's.
Here's a code implementation:
public static long Solve(long n)
{
    if (HasValidSuffix(n)) return n;

    long firstDigit;
    int numDigits;
    // Loop to determine the first digit and number of digits in the input
    for (firstDigit = n, numDigits = 1; firstDigit > 9; firstDigit /= 10, numDigits++) ;

    return Enumerable.Range(0, numDigits - 1)
        .Aggregate(firstDigit - 1, (accumulator, next) => accumulator * 10 + 9);
}

// Returns true for positive numbers less than 10 or
// numbers that end in either all 9's or all 9's and one 8
public static bool HasValidSuffix(long input)
{
    var foundAnEight = false;
    for (var n = input; n > 9; n /= 10)
    {
        var lastDigit = n % 10;
        if (lastDigit < 8) return false;
        if (lastDigit == 9) continue;
        if (foundAnEight) return false;
        foundAnEight = true;
    }
    return true;
}

Getting a List<int> from an integer which modulo result is equal to 0 without using loop [duplicate]

All numbers that divide evenly into x.
If I put in 4, it returns: 4, 2, 1.
edit: I know it sounds homeworky. I'm writing a little app to populate some product tables with semi random test data. Two of the properties are ItemMaximum and Item Multiplier. I need to make sure that the multiplier does not create an illogical situation where buying 1 more item would put the order over the maximum allowed. Thus the factors will give a list of valid values for my test data.
edit++:
This is what I went with after all the help from everyone. Thanks again!
edit#: I wrote 3 different versions to see which I liked better and tested them against factoring small numbers and very large numbers. I'll paste the results.
static IEnumerable<int> GetFactors2(int n)
{
    return from a in Enumerable.Range(1, n)
           where n % a == 0
           select a;
}

private IEnumerable<int> GetFactors3(int x)
{
    for (int factor = 1; factor * factor <= x; factor++)
    {
        if (x % factor == 0)
        {
            yield return factor;
            if (factor * factor != x)
                yield return x / factor;
        }
    }
}

private IEnumerable<int> GetFactors1(int x)
{
    int max = (int)Math.Ceiling(Math.Sqrt(x));
    for (int factor = 1; factor < max; factor++)
    {
        if (x % factor == 0)
        {
            yield return factor;
            if (factor != max)
                yield return x / factor;
        }
    }
}
In ticks.
When factoring the number 20, 5 times each:
GetFactors1-5,445,881
GetFactors2-4,308,234
GetFactors3-2,913,659
When factoring the number 20000, 5 times each:
GetFactors1-5,644,457
GetFactors2-12,117,938
GetFactors3-3,108,182
pseudocode:
Loop from 1 to the square root of the number, call the index "i".
if number mod i is 0, add i and number / i to the list of factors.
realocode:
public List<int> Factor(int number)
{
    var factors = new List<int>();
    int max = (int)Math.Sqrt(number); // Round down

    for (int factor = 1; factor <= max; ++factor) // Test from 1 to the square root, or the int below it, inclusive.
    {
        if (number % factor == 0)
        {
            factors.Add(factor);
            if (factor != number / factor) // Don't add the square root twice! Thanks Jon
                factors.Add(number / factor);
        }
    }
    return factors;
}
As Jon Skeet mentioned, you could implement this as an IEnumerable<int> as well - use yield instead of adding to a list. The advantage with List<int> is that it could be sorted before return if required. Then again, you could get a sorted enumerator with a hybrid approach, yielding the first factor and storing the second one in each iteration of the loop, then yielding each value that was stored in reverse order.
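A rough sketch of that hybrid idea might look like the following (the method name and details here are purely illustrative, not code from any of the other answers):
public static IEnumerable<int> SortedFactors(int number)
{
    var largeFactors = new Stack<int>(); // cofactors, pushed largest-first

    for (int factor = 1; factor * factor <= number; factor++)
    {
        if (number % factor == 0)
        {
            yield return factor;               // small factors stream out in ascending order
            int cofactor = number / factor;
            if (cofactor != factor)
                largeFactors.Push(cofactor);   // remember the matching large factor
        }
    }

    // Popping the stack reverses the insertion order, so the large factors
    // continue the ascending sequence.
    while (largeFactors.Count > 0)
        yield return largeFactors.Pop();
}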
You will also want to do something to handle the case where a negative number is passed into the function.
The % (remainder) operator is the one to use here. If x % y == 0 then x is divisible by y. (Assuming 0 < y <= x)
I'd personally implement this as a method returning an IEnumerable<int> using an iterator block.
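As a minimal sketch (essentially what the later answers flesh out with the square-root optimization), such an iterator block could look like this:
public static IEnumerable<int> DivisorsOf(int x)
{
    for (int y = 1; y <= x; y++)
    {
        if (x % y == 0)   // x is divisible by y
            yield return y;
    }
}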
Very late, but the accepted answer (from a while back) didn't give the correct results.
Thanks to Merlyn, I now understand the reason for using the square root as the 'max'; below is the corrected sample, although the answer from Echostorm seems more complete.
public static IEnumerable<uint> GetFactors(uint x)
{
    for (uint i = 1; i * i <= x; i++)
    {
        if (x % i == 0)
        {
            yield return i;
            if (i != x / i)
                yield return x / i;
        }
    }
}
As extension methods:
public static bool Divides(this int potentialFactor, int i)
{
    return i % potentialFactor == 0;
}

public static IEnumerable<int> Factors(this int i)
{
    return from potentialFactor in Enumerable.Range(1, i)
           where potentialFactor.Divides(i)
           select potentialFactor;
}
Here's an example of usage:
foreach (int i in 4.Factors())
{
    Console.WriteLine(i);
}
Note that I have optimized for clarity, not for performance. For large values of i this algorithm can take a long time.
Another LINQ-style solution, trying to keep the O(sqrt(n)) complexity:
static IEnumerable<int> GetFactors(int n)
{
    Debug.Assert(n >= 1);

    var pairList = from i in Enumerable.Range(1, (int)(Math.Round(Math.Sqrt(n) + 1)))
                   where n % i == 0
                   select new { A = i, B = n / i };

    foreach (var pair in pairList)
    {
        yield return pair.A;
        yield return pair.B;
    }
}
Here it is again, only counting to the square root, as others mentioned. I suppose that people are attracted to that idea if you're hoping to improve performance. I'd rather write elegant code first, and optimize for performance later, after testing my software.
Still, for reference, here it is:
public static bool Divides(this int potentialFactor, int i)
{
    return i % potentialFactor == 0;
}

public static IEnumerable<int> Factors(this int i)
{
    foreach (int result in from potentialFactor in Enumerable.Range(1, (int)Math.Sqrt(i))
                           where potentialFactor.Divides(i)
                           select potentialFactor)
    {
        yield return result;
        if (i / result != result)
        {
            yield return i / result;
        }
    }
}
Not only is the result considerably less readable, but the factors come out of order this way, too.
I did it the lazy way. I don't know much, but I've been told that simplicity can sometimes imply elegance. This is one possible way to do it:
public static IEnumerable<int> GetDivisors(int number)
{
    var searched = Enumerable.Range(1, number)
        .Where((x) => number % x == 0)
        .Select(x => number / x);

    foreach (var s in searched)
        yield return s;
}
EDIT: As Kraang Prime pointed out, this function cannot exceed the limit of an integer and is (admittedly) not the most efficient way to handle this problem.
Wouldn't it also make sense to start at 2 and head towards an upper limit that's continuously recalculated based on the number you've just checked, i.e. N/i (where N is the number you're trying to factor and i is the current candidate)? Ideally, instead of mod, you would use a divide function that returns N/i as well as any remainder it might have. That way a single divide operation gives you both the new upper bound and the remainder you check for even division.
Math.DivRem
http://msdn.microsoft.com/en-us/library/wwc1t3y1.aspx
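As a sketch of that idea using Math.DivRem (an illustration of the comment above, not tested production code; the method name is made up):
public static List<int> FactorsViaDivRem(int number)
{
    var small = new List<int>(); // factors found counting up from 1
    var large = new List<int>(); // their cofactors, which shrink as we count up

    for (int i = 1; ; i++)
    {
        // One division produces both the new upper bound (the quotient) and the remainder.
        int quotient = Math.DivRem(number, i, out int remainder);
        if (i > quotient) break;          // crossed the square root; every factor pair is found
        if (remainder == 0)
        {
            small.Add(i);
            if (quotient != i)
                large.Add(quotient);
        }
    }

    large.Reverse();                      // cofactors were collected largest-first
    small.AddRange(large);
    return small;                         // sorted ascending
}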
If you use doubles, the following works: use a for loop iterating from 1 up to the number you want to factor. In each iteration, divide the number to be factored by i. If (number / i) % 1 == 0, then i is a factor, as is the quotient of number / i. Put one or both of these in a list, and you have all of the factors.
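Taken literally, that description might translate to something like this (floating-point division can misbehave for very large values, so treat it purely as an illustration):
public static List<double> FactorsWithDoubles(double number)
{
    var factors = new List<double>();
    for (double i = 1; i <= number; i++)
    {
        double quotient = number / i;
        if (quotient % 1 == 0)   // no fractional part, so i divides number evenly
        {
            factors.Add(i);      // the quotient is also a factor; it turns up later in the loop
        }
    }
    return factors;
}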
And one more solution. Not sure if it has any advantages other than being readable:
List<int> GetFactors(int n)
{
    var f = new List<int>() { 1 }; // adding trivial factor, optional
    int m = n;
    int i = 2;
    while (m > 1)
    {
        if (m % i == 0)
        {
            f.Add(i);
            m /= i;
        }
        else i++;
    }
    // f.Add(n); // adding trivial factor, optional
    return f;
}
I came here just looking for a solution to this problem for myself. After examining the previous replies I figured it would be fair to toss out an answer of my own even if I might be a bit late to the party.
No factor of a number (other than the number itself) will be more than half of that number, so there is no need to deal with floating point values or transcendental operations like a square root. Additionally, finding one factor of a number automatically finds another: just find one and you can return both by dividing the original number by the one you found.
I doubt I'll need to use checks for my own implementation but I'm including them just for completeness (at least partially).
public static IEnumerable<int> Factors(int Num)
{
    int ToFactor = Num;

    if (ToFactor == 0)
    {
        // Zero has only itself and one as factors, but this can't be
        // discovered through division, obviously.
        yield return 0;
        yield return 1;
        yield break;
    }

    if (ToFactor < 0)
    {
        // Negative numbers are simply being treated here as just adding -1 to the list of
        // possible factors. In practice it can be argued that the factors of a number can be
        // both positive and negative, i.e. 4 factors into the following pairings of factors:
        // (-4, -1), (-2, -2), (1, 4), (2, 2), but normally when you factor numbers you are only
        // asking for the positive factors. By adding a -1 to the list it allows flagging the
        // series as originating with a negative value, and the implementer can use that
        // information as needed.
        ToFactor = -ToFactor;
        yield return -1;
    }

    int FactorLimit = ToFactor / 2; // A good compiler may do this optimization already.
                                    // It's here just in case.
    for (int PossibleFactor = 1; PossibleFactor <= FactorLimit; PossibleFactor++)
    {
        if (ToFactor % PossibleFactor == 0)
        {
            yield return PossibleFactor;
            yield return ToFactor / PossibleFactor;
        }
    }
}
A program to get the prime factors of whole numbers, in JavaScript:
function getFactors(num1) {
    var factors = [];
    var divider = 2;
    while (num1 != 1) {
        if (num1 % divider == 0) {
            num1 = num1 / divider;
            factors.push(divider);
        } else {
            divider++;
        }
    }
    console.log(factors);
    return factors;
}

getFactors(20);
In fact we don't have to check on every iteration that the factor isn't the square root, as in the accepted answer proposed by chris and fixed by Jon; that extra Boolean check and division can slow the method down when the integer is large. Just keep max as a double (don't cast it to an int), make the loop exclusive rather than inclusive, and add the square root once up front if the number is a perfect square.
private static List<int> Factor(int number)
{
    var factors = new List<int>();
    var max = Math.Sqrt(number); // (store in a double, not an int) - Round down

    if (max % 1 == 0)
        factors.Add((int)max);

    for (int factor = 1; factor < max; ++factor) // (Exclusive) - Test from 1 up to, but not including, the square root.
    {
        if (number % factor == 0)
        {
            factors.Add(factor);
            //if (factor != number / factor) // (Check no longer needed) - Don't add the square root twice! Thanks Jon
            factors.Add(number / factor);
        }
    }
    return factors;
}
Usage
Factor(16)
// 4 1 16 2 8
Factor(20)
//1 20 2 10 4 5
And this is the extension version of the method for int type:
public static class IntExtensions
{
    public static IEnumerable<int> Factors(this int value)
    {
        // Return 2 obvious factors
        yield return 1;
        yield return value;

        // Return the square root if the number is a perfect square
        var max = Math.Sqrt(value);
        if (max % 1 == 0)
            yield return (int)max;

        // Return the rest of the factors
        for (int i = 2; i < max; i++)
        {
            if (value % i == 0)
            {
                yield return i;
                yield return value / i;
            }
        }
    }
}
Usage
16.Factors()
// 4 1 16 2 8
20.Factors()
//1 20 2 10 4 5
Linq solution:
IEnumerable<int> GetFactors(int n)
{
    Debug.Assert(n >= 1);
    return from i in Enumerable.Range(1, n)
           where n % i == 0
           select i;
}

(Relatively) Quickly find some divisor for a number < 10 000 000

Assume for everything that I'm talking only about natural numbers less than 10 million.
I'm looking to pre-generate a list of the Lowest Prime Divisor (LPD) for all numbers under 10 000 000. For example, LPD(14) == 2, LPD(15) == 3, and the LPD of any prime is itself.
I have pre-generated all of the primes. Accessing the nth prime is a simple array lookup. With an efficiency of: O(1)
I have pre-generated a lookup table for determining if a given number is prime. Checking whether the nth number is prime is a simple array lookup. With an efficiency of: O(1)
Now, my naive algorithm to calculate the LPD of a given number is to loop through all the primes until one prime divides the number. But this takes a really long time. I can generate all primes under 10 million in half the time it takes to find the lowest divisor for all the numbers (using the Sieve of Atkin, which I don't understand, but implemented from pseudocode).
Is there a better algorithm for calculating the Lowest Prime Divisor?
Not actually sure why you expect higher performance for much the same problem.
Rather than divide, a sieve approach would take each prime, mark all its multiples as having itself as the lowest prime factor, unless already marked.
int lpf[MAX] = {};
int primes[MAX_PRIME];

for (int i = 0; i < MAX_PRIME; ++i)
{
    int mult = primes[i];
    while (mult < MAX)
    {
        if (lpf[mult] == 0)
        {
            lpf[mult] = primes[i];
        }
        mult += primes[i];
    }
}
Any unmarked number at the end is itself prime, so this approach takes the same time as finding all the primes under MAX.
As adapted from @Keith's answer, the new code runs much faster (about 13% of the old running time!):
public void SieveDivisors() {
    int iNum, iPrime, i6Prime;
    _iaFirstDivisors = new int[_iLimit];
    _iaFirstDivisors[1] = 1;

    // Start at the largest primes, then work down. This way, we never need to check if the
    //  lowest prime multiple is already found, we just overwrite it.
    // Also, skip any multiples of 2 or 3, because setting those is a waste of time.
    for (int iPrimeIndex = _iaPrimes.Length - 1; iPrimeIndex >= 1; iPrimeIndex--) {
        iPrime = _iaPrimes[iPrimeIndex];
        i6Prime = iPrime * 6;
        for (iNum = iPrime; iNum < _iLimit; iNum += i6Prime) {
            _iaFirstDivisors[iNum] = iPrime;
        }
        for (iNum = iPrime * 5; iNum < _iLimit; iNum += i6Prime) {
            _iaFirstDivisors[iNum] = iPrime;
        }
    }

    // Then record all multiples of 2 or 3
    for (iNum = 3; iNum < _iLimit; iNum += 6) {
        _iaFirstDivisors[iNum] = 3;
    }
    for (iNum = 2; iNum < _iLimit; iNum += 2) {
        _iaFirstDivisors[iNum] = 2;
    }
}
You say you are using the Sieve of Atkin to produce the list of primes. If you use the Sieve of Eratosthenes you automatically get your LPD array - it is simply the array you use for the sieve. Instead of storing a boolean, track the first prime that makes the number composite.
Here is some pseudo C code:
int lpd[MAX] = {};
int primes[MAX_PRIMES];
int nprimes = 0;

void sieve() {
    for (int p = 2; p*p < MAX; ++p) {
        if (lpd[p] == 0) {
            primes[nprimes++] = p;
            lpd[p] = p;
            for (int q = p*p; q < MAX; q += p) {
                if (lpd[q] == 0) { lpd[q] = p; }
            }
        }
    }
}
At the end the array lpd[] will contain the lowest prime divisors and primes[] will contain the list of prime numbers.

How to efficiently generate all combinations (at all depths) whose sum is within a specified range

Suppose you have a set of values (1,1,1,12,12,16) how would you generate all possible combinations (without repetition) whose sum is within a predefined range [min,max]. For example, here are all the combinations (of all depths) that have a range between 13 and 17:
1 12
1 1 12
1 1 1 12
16
1 16
This assumes that each item of the same value is indistinguishable, so you don't have three results of 1 12 in the final output. Brute force is possible, but in situations where the number of items is large, the number of combinations at all depths is astronomical. In the example above, there are (3 + 1) * (2 + 1) * (1 + 1) = 24 combinations at all depths; in general, the total number of combinations is the product of (the count of each distinct value + 1). Of course we can logically throw out a huge number of combinations whose partial sum is greater than the max value (e.g. the set 16 12 is already bigger than the max value of 17, so skip any combination that contains both a 16 and a 12).
I originally thought I could convert the input array into two arrays and increment them kind of like an odometer. But I am getting completely stuck on this recursive algorithm that breaks early. Any suggestions?
{
    int uniqueValues = 3;
    int[] maxCounts = new int[uniqueValues];
    int[] values = new int[uniqueValues];

    // easy code to bin the data, just hardcoding for example
    maxCounts[0] = 3;
    values[0] = 1;
    maxCounts[1] = 2;
    values[1] = 12;
    maxCounts[2] = 1;
    values[2] = 16;

    GenerateCombinationsHelper(new List<int[]>(), 13, 17, 0, 0, maxCounts, new int[3], values);
}

private void GenerateCombinationsHelper(List<int[]> results, int min, int max, int currentValue, int index, int[] maxValues, int[] currentCombo, int[] values)
{
    if (index >= maxValues.Length)
    {
        return;
    }
    while (currentCombo[index] < maxValues[index])
    {
        currentValue += values[index];
        if (currentValue > max)
        {
            return;
        }
        currentCombo[index]++;
        if (currentValue < min)
        {
            GenerateCombinationsHelper(results, min, max, currentValue, index + 1, maxValues, currentCombo, values);
        }
        else
        {
            results.Add((int[])currentCombo.Clone());
        }
    }
}
Edit
The integer values are just for demonstration. It can be any object that has some sort of numerical value (int, double, float, etc.).
Typically there will only be a handful of unique values (~10 or so), but there can be several thousand items in total.
Switch the main call to:
GenerateCombinationsHelper2(new List<int[]>(), 13, 17, 0, maxCounts, new int[3], values);
and then add this code:
private void GenerateCombinationsHelper2(List<int[]> results, int min, int max, int index, int[] maxValues, int[] currentCombo, int[] values)
{
    int max_count = Math.Min((int)Math.Ceiling((double)max / values[index]), maxValues[index]);
    for (int count = 0; count <= max_count; count++)
    {
        currentCombo[index] = count;
        if (index < currentCombo.Length - 1)
        {
            GenerateCombinationsHelper2(results, min, max, index + 1, maxValues, currentCombo, values);
        }
        else
        {
            int sum = Sum(currentCombo, values);
            if (sum >= min && sum <= max)
            {
                int[] copy = new int[currentCombo.Length];
                Array.Copy(currentCombo, copy, copy.Length);
                results.Add(copy);
            }
        }
    }
}

private static int Sum(int[] combo, int[] values)
{
    int sum = 0;
    for (int i = 0; i < combo.Length; i++)
    {
        sum += combo[i] * values[i];
    }
    return sum;
}
It returns the 5 valid answers.
The general tendency with this kind of problem is that there are relatively few values that will show up, but each value shows up many, many times. Therefore you first want to create a data structure that efficiently describes the combinations that will add up to the desired values, and only then figure out all of the combinations that do so. (If you know the term "dynamic programming", that's exactly the approach I'm describing.)
The obvious data structure in C# terms would be a Hashtable whose keys are the totals that the combination adds up to, and whose values are arrays listing the positions of the last elements that can be used in a combination that could add up to that particular total.
How do you build that data structure?
First you start with a Hashtable which contains the total 0 as a key, and an empty array as a value. Then for each element of your array you create a list of the new totals you can reach from the previous totals, and append your element's position to each one of their values (inserting a new one if needed). When you've gone through all of your elements, you have your data structure.
Now you can search that data structure just for the totals that are in the range you want. And for each such total, you can write a recursive program that will go through your data structure to produce the combinations. This step can indeed have a combinatorial explosion, but the nice thing is that EVERY combination produced is actually a combination in your final answer. So if this phase takes a long time, it is because you have a lot of final answers!
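As a rough sketch, the build phase described above might look like this (using a Dictionary<int, List<int>> rather than the non-generic Hashtable; the names are illustrative only):
static Dictionary<int, List<int>> BuildReachableTotals(int[] items, int max)
{
    // Key: a reachable total; value: positions that can be the last element
    // of some combination summing to that total.
    var reachable = new Dictionary<int, List<int>> { [0] = new List<int>() };

    for (int pos = 0; pos < items.Length; pos++)
    {
        // Snapshot the totals reachable so far (built from positions before pos only).
        foreach (int total in new List<int>(reachable.Keys))
        {
            int newTotal = total + items[pos];
            if (newTotal > max) continue; // prune sums already past the top of the range

            if (!reachable.TryGetValue(newTotal, out var lastPositions))
                reachable[newTotal] = lastPositions = new List<int>();
            if (!lastPositions.Contains(pos))
                lastPositions.Add(pos);
        }
    }
    return reachable;
}
To enumerate the combinations, you would then take each total in [min, max] and recurse: for each stored position p, emit items[p] together with every combination for total - items[p] that ends at a position before p.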
Try this algo
int[] arr = { 1, 1, 1, 12, 12, 16 };
for (int i = 0; i < (1 << arr.Length); i++)   // 2^arr.Length subsets
{
    int[] arrBin = BinaryFormat(i); // binary representation of i, one bit per element (helper not shown)
    for (int j = 0; j < arrBin.Length; j++)
        if (arrBin[j] == 1)
            Console.Write("{0} ", arr[j]);
    Console.WriteLine();
}
This is quite similar to the subset sum problem which just happens to be NP-complete.
Wikipedia says the following about NP-complete problems:
Although any given solution to such a problem can be verified quickly,
there is no known efficient way to locate a solution in the first
place; indeed, the most notable characteristic of NP-complete problems
is that no fast solution to them is known. That is, the time required
to solve the problem using any currently known algorithm increases
very quickly as the size of the problem grows. This means that the
time required to solve even moderately sized versions of many of these
problems can easily reach into the billions or trillions of years,
using any amount of computing power available today. As a consequence,
determining whether or not it is possible to solve these problems
quickly, called the P versus NP problem, is one of the principal
unsolved problems in computer science today.
If indeed there is a way to solve this besides brute-forcing through the powerset and finding all subsets which sum up to a value within the given range, then I would be very interested in hearing it.
An idea for another implementation:
Create from the list of numbers a list of stacks; each stack represents a number that appears in the list, and that number is pushed onto its stack as many times as it appears in the numbers list. Moreover, this list is sorted.
The idea is that you iterate through the stack list; from each stack you pop one number at a time as long as it doesn't push the sum over the max value, recursing after each pop, and you also make an additional recursive call that skips the current stack entirely.
This algorithm reduces many redundant computations like trying to add different elements which have the same value when adding this value exceeds the maximal value.
I was able to solve pretty large problems with this algorithm (50 numbers and more), depending on the min and max values, obviously when the interval is very big the number of combinations may be huge.
Here's the code:
static void GenerateLimitedCombinations(List<int> intList, int minValue, int maxValue)
{
    intList.Sort();
    List<Stack<int>> StackList = new List<Stack<int>>();
    Stack<int> NewStack = new Stack<int>();
    NewStack.Push(intList[0]);
    StackList.Add(NewStack);

    for (int i = 1; i < intList.Count; i++)
    {
        if (intList[i - 1] == intList[i])
            StackList[StackList.Count - 1].Push(intList[i]);
        else
        {
            NewStack = new Stack<int>();
            NewStack.Push(intList[i]);
            StackList.Add(NewStack);
        }
    }
    GenerateLimitedCombinations(StackList, minValue, maxValue, 0, new List<int>(), 0);
}

static void GenerateLimitedCombinations(List<Stack<int>> stackList, int minValue, int maxValue, int currentStack, List<int> currentCombination, int currentSum)
{
    if (currentStack == stackList.Count)
    {
        if (currentSum >= minValue)
        {
            foreach (int tempInt in currentCombination)
            {
                Console.Write(tempInt + " ");
            }
            Console.WriteLine();
        }
    }
    else
    {
        int TempSum = currentSum;
        List<int> NewCombination = new List<int>(currentCombination);
        Stack<int> UndoStack = new Stack<int>();

        while (stackList[currentStack].Count != 0 && stackList[currentStack].Peek() + TempSum <= maxValue)
        {
            int AddedValue = stackList[currentStack].Pop();
            UndoStack.Push(AddedValue);
            NewCombination.Add(AddedValue);
            TempSum += AddedValue;
            GenerateLimitedCombinations(stackList, minValue, maxValue, currentStack + 1, new List<int>(NewCombination), TempSum);
        }
        while (UndoStack.Count != 0)
        {
            stackList[currentStack].Push(UndoStack.Pop());
        }
        GenerateLimitedCombinations(stackList, minValue, maxValue, currentStack + 1, currentCombination, currentSum);
    }
}
Here's a test program:
static void Main(string[] args)
{
Random Rnd = new Random();
List<int> IntList = new List<int>();
int NumberOfInts = 10, MinValue = 19, MaxValue 21;
for (int i = 0; i < NumberOfInts; i++) { IntList.Add(Rnd.Next(1, 10));
for (int i = 0; i < NumberOfInts; i++) { Console.Write(IntList[i] + " "); } Console.WriteLine(); Console.WriteLine();
GenerateLimitedCombinations(IntList, MinValue, MaxValue);
Console.ReadKey();
}

Find the contiguous sequence with the largest product in an integer array

I have come up with the code below but that doesn't satisfy all cases, e.g.:
Array consisting all 0's
Array having negative values (it's a bit tricky since it's about finding a product, and two negative ints give a positive value)
public static int LargestProduct(int[] arr)
{
    // returning arr[0] if it has only one element
    if (arr.Length == 1) return arr[0];

    int product = 1;
    int maxProduct = Int32.MinValue;

    for (int i = 0; i < arr.Length; i++)
    {
        // this block stores the largest product so far when it finds 0
        if (arr[i] == 0)
        {
            if (maxProduct < product)
            {
                maxProduct = product;
            }
            product = 1;
        }
        else
        {
            product *= arr[i];
        }
    }

    if (maxProduct > product)
        return maxProduct;
    else
        return product;
}
How can I incorporate the above cases/correct the code. Please suggest.
I am basing my answer on the assumption that if you have more than 1 element in the array, you want to multiply at least 2 contiguous integers when checking the output (i.e. for the array {-1, 15}, the output you want is -15 and not 15).
The problem that we need to solve is to look at all possible multiplication combinations and find out the max product out of them.
The total number of products in an array of n integers would be nC2 i.e. if there are 2 elements, then the total multiplication combinations would be 1, for 3, it would be 3, for 4, it would be 6 and so on.
Each number in the incoming array has to be multiplied with all the products that ended at the previous element, keeping the maximum product seen so far; after doing this for all the elements, we are left with the maximum product.
This should work for negatives and zeros.
public static long LargestProduct(int[] arr)
{
    if (arr.Length == 1)
        return arr[0];

    int lastNumber = 1;
    List<long> latestProducts = new List<long>();
    long maxProduct = Int64.MinValue;

    for (int i = 0; i < arr.Length; i++)
    {
        var item = arr[i];
        var latest = lastNumber * item;

        var temp = new long[latestProducts.Count];
        latestProducts.CopyTo(temp);
        latestProducts.Clear();

        foreach (var p in temp)
        {
            var product = p * item;
            if (product > maxProduct)
                maxProduct = product;
            latestProducts.Add(product);
        }

        if (i != 0)
        {
            if (latest > maxProduct)
                maxProduct = latest;
            latestProducts.Add(latest);
        }
        lastNumber = item;
    }
    return maxProduct;
}
If you want the maximum product to also consider a single element of the array (i.e. {-1, 15} should return 15), then you can compare the max product with the element being processed, which gives you the max product when a single element is the maximum.
This can be achieved by adding the following code inside the for loop at the end.
if (item > maxProduct)
maxProduct = item;
Your basic problem has 2 parts. Break them down and solving it becomes easier.
1) Find all contiguous subsets.
Since your source sequence can have negative values, you are not really equipped to make any value judgments until you've found each subset, as a negative can later be "cancelled" by another. So let the first phase be to only find the subsets.
An example of how you might do this is the following code
// will contain all contiguous subsets
var sequences = new List<Tuple<bool, List<int>>>();

// build subsets
foreach (int item in source)
{
    var deadCopies = new List<Tuple<bool, List<int>>>();
    foreach (var record in sequences.Where(r => r.Item1 && !r.Item2.Contains(0)))
    {
        // make a copy that is "dead"
        var deadCopy = new Tuple<bool, List<int>>(false, record.Item2.ToList());
        deadCopies.Add(deadCopy);

        record.Item2.Add(item);
    }
    sequences.Add(new Tuple<bool, List<int>>(true, new List<int> { item }));
    sequences.AddRange(deadCopies);
}
In the above code, I'm building all my contiguous subsets, while taking the liberty of not adding anything to a given subset that already has a 0 value. You can omit that particular behavior if you wish.
2) Calculate each subset's product and compare that to a max value.
Once you have found all of your qualifying subsets, the next part is easy.
// find subset with highest product
int maxProduct = int.MinValue;
IEnumerable<int> maxSequence = Enumerable.Empty<int>();

foreach (var record in sequences)
{
    int product = record.Item2.Aggregate((a, b) => a * b);
    if (product > maxProduct)
    {
        maxProduct = product;
        maxSequence = record.Item2;
    }
}
Add whatever logic you wish to restrict the length of the original source or the subset candidates or product values. For example, if you wish to enforce minimum length requirements on either, or if a subset product of 0 is allowed if a non-zero product is available.
Also, I make no claims as to the performance of the code, it is merely to illustrate breaking the problem down into its parts.
I think you should have 2 products at the same time - they will differ in signs.
About the case when all values are zero: you can check at the end whether maxProduct is still Int32.MinValue (assuming Int32.MinValue is not actually a possible product).
My variant:
int maxProduct = Int32.MinValue;
int? productWithPositiveStart = null;
int? productWithNegativeStart = null;

for (int i = 0; i < arr.Length; i++)
{
    if (arr[i] == 0)
    {
        productWithPositiveStart = null;
        productWithNegativeStart = null;
    }
    else
    {
        if (arr[i] > 0 && productWithPositiveStart == null)
        {
            productWithPositiveStart = arr[i];
        }
        else if (productWithPositiveStart != null)
        {
            productWithPositiveStart *= arr[i];
            maxProduct = Math.Max(maxProduct, productWithPositiveStart.Value);
        }

        if (arr[i] < 0 && productWithNegativeStart == null)
        {
            productWithNegativeStart = arr[i];
        }
        else if (productWithNegativeStart != null)
        {
            productWithNegativeStart *= arr[i];
            maxProduct = Math.Max(maxProduct, productWithNegativeStart.Value);
        }

        maxProduct = Math.Max(arr[i], maxProduct);
    }
}

if (maxProduct == Int32.MinValue)
{
    maxProduct = 0;
}
At a high level, your current algorithm splits the array upon a 0 and returns the largest contiguous product of these sub-arrays. Any further iterations will be on the process of finding the largest contiguous product of a sub-array where no elements are 0.
To take into account negative numbers, we obviously first need to test if the product of one of these sub-arrays is negative, and take some special action if it is.
The negative result comes from an odd number of negative values, so we need to remove one of these negative values to make the result positive again. To do this we remove either all elements up to and including the first negative number, or the last negative number and all elements after it, whichever results in the higher product.
To take into account an array of all 0's, simply use 0 as your starting maxProduct. If the array is a single negative value, your special-case handling of a single element will mean that it is returned. After that, there will always be a positive sub-sequence product, or else the whole array is 0 and it should return 0 anyway.
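As a sketch only (my own wording of the steps above, with illustrative names; edge cases such as a run consisting of a single negative number need extra care), those steps might translate to code like this:
public static int LargestContiguousProduct(int[] arr)
{
    if (arr.Length == 1) return arr[0];
    int maxProduct = 0;                       // an all-zero array correctly yields 0
    int start = 0;
    while (start < arr.Length)
    {
        if (arr[start] == 0) { start++; continue; }
        int end = start;
        while (end < arr.Length && arr[end] != 0) end++;   // [start, end) is a zero-free run
        maxProduct = Math.Max(maxProduct, BestProductOfZeroFreeRun(arr, start, end));
        start = end;
    }
    return maxProduct;
}

// Largest product of a contiguous slice of the zero-free run [start, end).
private static int BestProductOfZeroFreeRun(int[] arr, int start, int end)
{
    int product = 1;
    for (int i = start; i < end; i++) product *= arr[i];
    if (product > 0) return product;

    // Odd number of negatives: drop everything up to and including the first
    // negative, or everything from the last negative on, whichever leaves more.
    int best = 0;
    int p = product;
    for (int i = start; i < end; i++)         // strip the prefix through the first negative
    {
        p /= arr[i];
        if (arr[i] < 0) { if (i + 1 < end) best = Math.Max(best, p); break; }
    }
    p = product;
    for (int i = end - 1; i >= start; i--)    // strip the suffix from the last negative
    {
        p /= arr[i];
        if (arr[i] < 0) { if (i > start) best = Math.Max(best, p); break; }
    }
    return best;
}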
It can be done in O(N). It is based on a simple idea: maintain the minimum (minCurrent) and maximum (maxCurrent) product of a sub-array ending at index i. This can easily be adjusted to handle cases like {0, 0, -2, 0}, {-2, -3, -8} or {0, 0}.
a[] = {6, -3, 2, 0, 3, -2, -4, -2, 4, 5}
The algorithm below can be traced step by step on the array a above:
private static int getMaxProduct(int[] a) {
    if (a.length == 0) {
        throw new IllegalArgumentException();
    }
    int minCurrent = 1, maxCurrent = 1, max = Integer.MIN_VALUE;
    for (int current : a) {
        if (current > 0) {
            maxCurrent = maxCurrent * current;
            minCurrent = Math.min(minCurrent * current, 1);
        } else if (current == 0) {
            maxCurrent = 1;
            minCurrent = 1;
        } else {
            int x = maxCurrent;
            maxCurrent = Math.max(minCurrent * current, 1);
            minCurrent = x * current;
        }
        if (max < maxCurrent) {
            max = maxCurrent;
        }
    }
    //System.out.println(minCurrent);
    return max;
}
