Algorithm to match buy and sell orders - C#

I am building an online stock game. All orders are at exactly market price. There is no real "bidding", only straight buy and sell. So this should be easier. Is there an algorithm that tackles the following problem:
Different orders with different volume. For example, the following buy orders are made...
order A for 50 shares
order B for 25 shares
order C for 10 shares
order D for 5 shares
order E for 5 shares
order F for 30 shares
There is a sell order G for 100 shares.
I need to find the combination of the above buy orders that gets as close to 100 shares as possible, without going over.
The knapsack algorithm would work, but its performance degrades very quickly with a large number of users and orders. Is there a more efficient way to do this?
EDIT:
Here is my modified knapsack algorithm:
static int KnapSack(int capacity, int[] weight, int itemsCount)
{
    // K[i, w] = the largest share total <= w achievable using the first i orders.
    int[,] K = new int[itemsCount + 1, capacity + 1];

    for (int i = 0; i <= itemsCount; ++i)
    {
        for (int w = 0; w <= capacity; ++w)
        {
            if (i == 0 || w == 0)
                K[i, w] = 0;
            else if (weight[i - 1] <= w)
                K[i, w] = Math.Max(weight[i - 1] + K[i - 1, w - weight[i - 1]], K[i - 1, w]);
            else
                K[i, w] = K[i - 1, w];
        }
    }

    return K[itemsCount, capacity];
}
The only problem is that its performance gets really bad when the numbers are high.
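For reference, one standard way to reduce the footprint (a sketch added for illustration, not from the original post) is to collapse the table to a single boolean row over capacities; time is still O(itemsCount * capacity), but memory drops to O(capacity):

static int KnapSackOneRow(int capacity, int[] weight)
{
    // reachable[c] == true when some subset of the orders sums to exactly c shares.
    bool[] reachable = new bool[capacity + 1];
    reachable[0] = true;
    foreach (int w in weight)
    {
        // Walk downwards so each order is used at most once.
        for (int c = capacity; c >= w; --c)
        {
            if (reachable[c - w])
                reachable[c] = true;
        }
    }
    for (int c = capacity; c >= 0; --c)
        if (reachable[c])
            return c;
    return 0;
}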

/*
Given array prices, return max profit w/ 1 buy & 1 sell
Ex. prices = [7,1,5,3,6,4] -> 5 (buy at $1, sell at $6)
For each price, take the diff between it & the min value seen before it, keep the max
Time: O(n)
Space: O(1)
*/
#include <vector>
#include <algorithm>
using namespace std;

class Solution {
public:
    int maxProfit(vector<int>& prices) {
        int minValue = prices[0];
        int maxDiff = 0;
        for (int i = 1; i < prices.size(); i++) {
            minValue = min(minValue, prices[i]);
            maxDiff = max(maxDiff, prices[i] - minValue);
        }
        return maxDiff;
    }
};
ref: https://github.com/neetcode-gh/leetcode/blob/main/cpp/neetcode_150/03_sliding_window/best_time_to_buy_and_sell_stock.cpp

I am still not entirely clear on the example you gave in the question description; for any knapsack problem we need two things, capacity and profit, and here you have only provided capacity.
If you just need to get as close to 100 as possible without worrying about profit, then it's much simpler, and there are multiple ways to do it.
One way is to repeatedly take the biggest order that is smaller than the remaining capacity; if an order is bigger than the remaining capacity, go on to the next smaller one.
Time: O(N log N) for sorting
Space: O(1)
function getMax(arr, maxCap) {
    arr.sort((a, b) => b - a);
    let index = 0;
    let cap = 0;
    while (cap !== maxCap && index < arr.length) {
        const remainingCap = maxCap - cap;
        if (remainingCap >= arr[index]) {
            cap += arr[index];
        }
        index++;
    }
    return cap; // total shares matched
}
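Since the question itself is C#, the same greedy pass could be sketched like this (an added illustration, not from the original answer; note that a greedy fill is a heuristic and will not always reach the best possible total for every input):

static int GetMax(int[] orders, int maxCap)
{
    // Sort descending, then take any order that still fits in the remaining capacity.
    Array.Sort(orders);
    Array.Reverse(orders);
    int cap = 0;
    foreach (int order in orders)
    {
        if (cap == maxCap)
            break;
        if (maxCap - cap >= order)
            cap += order;
    }
    return cap;
}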

Related

C# WinForms, Frequency Distribution Table [Updated]

Update 01
Thanks to Caius, I found the main problem: the logic in the "if" was wrong. That is now fixed and giving the correct results. The loop still creates more positions than needed in the secondary List, an extra position for each number in the main List.
I've updated the code below for reference, for the following questions:
-001 I can't figure out why it creates more positions than needed; the for loop should only run after the foreach finishes its loops, correct?
-002 To partially work around this, I've used List.Remove() to remove all the 0's, and so far there are no crashes. But given that I'm creating the extra indexes and then removing them, does that mean a big performance hit if I have a large list of numbers, or is it an acceptable solution?
Description
It is supposed to read all the numbers in a central List1 (numberList) and count how many numbers fall inside a certain range (0|-15 / 15|-20). For that I use another List, where each range is a position in List2 (numberSubList), and each number in List2 tells how many numbers exist inside that range.
-The ranges change as the numbers grow or decrease
Code:
void Frequency()
{
    int minNumb = numberList.Min();
    int maxNumb = numberList.Max();
    int size = numberList.Count();

    numberSubList.Clear();
    dGrdVFrequency.Rows.Clear();
    dGrdVFrequency.Refresh();

    double k = (1 + 3.3 * Math.Log10(size));
    double h = (maxNumb - minNumb) / k;

    lblH.Text = $"H: {Math.Round(h, 2)} / Rounded = {Math.Round(h / 5) * 5}";
    lblK.Text = $"K: {Math.Round(k, 4)}";

    if (h <= 5) { h = 5; }
    else { h = Math.Round(h / 5) * 5; }

    int counter = 1;
    for (int i = 0; i < size; i++)
    {
        numberSubList.Add(0); // 001 HERE, creating more positions than needed, one per number.
        foreach (int number in numberList)
        {
            if (number >= (h * i) + minNumb && number < (h * (i + 1)) + minNumb)
            {
                numberSubList[i] = counter++;
            }
        }
        numberSubList.Remove(0); // 002 - This removes the extra 0's that are created.
        counter = 1;
    }

    txtBoxSubNum.Clear();
    foreach (int number in numberSubList)
    {
        txtBoxSubNum.AppendText($"{number.ToString()} , ");
    }
    lblSubTotalIndex.Text = $"Total in List: {numberSubList.Count()}";
    lblSubSumIndex.Text = $"Sum of List: {numberSubList.Sum()}";

    int inc = 0;
    int sum = 0;
    foreach (int number in numberSubList)
    {
        sum = sum + number;
        int n = dGrdVFrequency.Rows.Add();
        dGrdVFrequency.Rows[n].Cells[0].Value = $"{(h * inc) + minNumb} |- {(h * (1 + inc)) + minNumb}";
        dGrdVFrequency.Rows[n].Cells[1].Value = $"{number}";
        dGrdVFrequency.Rows[n].Cells[2].Value = $"{sum}";
        dGrdVFrequency.Rows[n].Cells[3].Value = $"{(number * 100) / size} %";
        dGrdVFrequency.Rows[n].Cells[4].Value = $"{(sum * 100) / size} %";
        inc++;
    }
}
Screen shot showing the updated version.
I think, if your aim is to only store e.g. 17 in the "15 to 25" slot, this is wonky:
if (number <= (h * i) + minNumb) // Check if number is smaller than the range limit
Because it's found inside a loop that will move on to the next range, "25 to 35", and it only asks whether the number 17 is less than the upper limit (and 17 is less than 35), so 17 is accorded to the 25-35 range too.
FWIW the range a number should fall in can be derived from the number itself, with (number - min) / range_width. At the moment you create your e.g. 10 ranges and then visit each number 10 times looking for a range to put it in, so you do 9 times more operations than you really need to (see the sketch below).
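To illustrate that last point, here is a rough single-pass version (an added sketch, not from the original answer; it assumes minNumb, maxNumb and the rounded range width h have already been computed as in the code above):

int rangeCount = (int)Math.Ceiling((maxNumb - minNumb + 1) / h);
int[] counts = new int[rangeCount];
foreach (int number in numberList)
{
    // The bucket index follows directly from the number itself.
    int bucket = (int)((number - minNumb) / h);
    if (bucket >= rangeCount) bucket = rangeCount - 1; // guard the top edge
    counts[bucket]++;
}

Each number is visited exactly once, instead of once per range.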

Code performance for "Max Product of 3" Question

I'm working through some questions on Codility to improve my coding skills. I can answer most questions, but the one below has me confused: I can come up with a solution that gets the expected answer, but when I test with a really large array it is really slow.
https://app.codility.com/programmers/lessons/6-sorting/max_product_of_three/
----MY ATTEMPTED SOLUTION----
var sum = 0;
for (int P = 0; P < (A.Length - 2); P++)
{
    for (int Q = (P + 1); Q < (A.Length - 1); Q++)
    {
        for (int R = (Q + 1); R < (A.Length); R++)
        {
            if ((A[P] * A[Q] * A[R]) > sum)
                sum = (A[P] * A[Q] * A[R]);
        }
    }
}
return sum;
The "obvious" solution is to sort and compute the product of the three highest numbers and the product of the highest number and the two lowest numbers.
The answer would then be the maximum of those two products.
The reason that you need to also check the two lowest numbers is in case they are both negative, in which case their product will be positive (and may be part of the maximum product).
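For illustration, that sort-based version is only a few lines (an added sketch, not part of the original answer):

public static int MaxProductBySorting(int[] array)
{
    int[] a = (int[])array.Clone();
    Array.Sort(a);
    int n = a.Length;
    // Either the three largest, or the two smallest (possibly both negative) times the largest.
    return Math.Max(a[n - 1] * a[n - 2] * a[n - 3], a[0] * a[1] * a[n - 1]);
}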
The fastest general sort is O(N log N), but you can do better: you can make a single pass through the array and keep a note of the three highest and two lowest numbers in the array.
(Note: The fact that the question specifies 3 as the minimum size of the array means that you don't have to account for there being less than 3 elements, which simplifies the code.)
Thus you can solve it in O(N) time like so:
public static int MaxProduct(params int[] array)
{
    int highest3 = int.MinValue; // Third highest.
    int highest2 = int.MinValue; // Second highest.
    int highest1 = int.MinValue; // Highest.
    int lowest2 = int.MaxValue;  // Second lowest.
    int lowest1 = int.MaxValue;  // Lowest.

    foreach (int n in array)
    {
        if (n > highest1)
        {
            highest3 = highest2;
            highest2 = highest1;
            highest1 = n;
        }
        else if (n > highest2)
        {
            highest3 = highest2;
            highest2 = n;
        }
        else if (n > highest3)
        {
            highest3 = n;
        }

        if (n < lowest1)
        {
            lowest2 = lowest1;
            lowest1 = n;
        }
        else if (n < lowest2)
        {
            lowest2 = n;
        }
    }

    // Answer is either the highest 3 or the lowest 2 and the highest 1
    // (because the product of two negatives is positive).
    int prodHighestOnly = highest3 * highest2 * highest1;
    int prodHighestLowest = highest1 * lowest2 * lowest1;

    return Math.Max(prodHighestOnly, prodHighestLowest);
}
(Essentially the highest1, highest2 and highest3 values are being maintained in sorted order, but as discrete variables rather than in an array.)
You can call it like so: Console.WriteLine(MaxProduct(-3, 1, 2, -2, 5, 6));, which outputs 60.

How many numbers between 1 and 10 billion contain 14

I tried this code, but it takes so long that I cannot get the result:
public long getCounter([FromBody]object req)
{
    JObject param = Utility.GetRequestParameter(req);
    long input = long.Parse(param["input"].ToString());
    long counter = 0;
    for (long i = 14; i <= input; i++)
    {
        string s = i.ToString();
        if (s.Contains("14"))
        {
            counter += 1;
        }
    }
    return counter;
}
please help
We can examine all non-negative numbers < 10^10. Every such number can be represented as a sequence of 10 digits (with leading zeroes allowed).
How many numbers include 14
Dynamic programming solution. Let's find the number of sequences of a specific length that end with a specific digit and do (or do not) contain the subsequence 14:
F(len, digit, 0) is the number of sequences of length len that end with digit and do not contain 14; F(len, digit, 1) is the number of such sequences that do contain 14. Initially F(0, 0, 0) = 1. The result is the sum of all F(10, digit, 1).
C++ code to play with: https://ideone.com/2aS17v. The answer seems to be 872348501.
How many times the numbers include 14
First, let's place 14 at the end of the sequence:
????????14
Every '?' can be replaced with any digit from 0 to 9, so there are 10^8 numbers in the interval that contain 14 at the end. Then consider ???????14?, ??????14??, ..., 14???????? numbers. There are 9 possible locations for the 14 sequence, so the answer is 9 * 10^8 = 900000000.
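As a sanity check on that positional argument (an added sketch, not part of the original answer), the occurrence count can be brute-forced for small digit lengths and compared against the same reasoning generalised to d digits, (d - 1) * 10^(d - 2):

for (int digits = 2; digits <= 6; digits++)
{
    long limit = (long)Math.Pow(10, digits);
    long occurrences = 0;
    for (long i = 0; i < limit; i++)
    {
        string s = i.ToString().PadLeft(digits, '0');
        for (int p = 0; p + 1 < digits; p++)
            if (s[p] == '1' && s[p + 1] == '4')
                occurrences++;
    }
    long formula = (digits - 1) * (long)Math.Pow(10, digits - 2);
    Console.WriteLine($"{digits} digits: brute force = {occurrences}, formula = {formula}");
}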
[Added by Matthew Watson]
Here's the C# version of the C++ implementation; it runs in less than 100ms:
using System;

namespace Demo
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            const int M = 10;
            int[,,] f = new int[M + 1, 10, 2];
            f[0, 0, 0] = 1;

            for (int len = 1; len <= M; ++len)
            {
                for (int d = 0; d <= 9; ++d)
                {
                    for (int j = 0; j <= 9; ++j)
                    {
                        f[len, d, 0] += f[len - 1, j, 0];
                        f[len, d, 1] += f[len - 1, j, 1];
                    }
                }
                // Appending a 4 to a sequence that ends in 1 creates a "14".
                f[len, 4, 0] -= f[len - 1, 1, 0];
                f[len, 4, 1] += f[len - 1, 1, 0];
            }

            int sum = 0;
            for (int i = 0; i <= 9; ++i)
                sum += f[M, i, 1];
            Console.WriteLine(sum); // 872,348,501
        }
    }
}
If you want a brute force solution, it could be something like this (please notice that we should avoid time-consuming string operations like ToString and Contains):
int count = 0;

// Let's use all the CPU's cores: Parallel.For
Parallel.For(0L, 10000000000L, (v) => {
    for (long x = v; x > 10; x /= 10) {
        // Get rid of ToString and Contains here
        if (x % 100 == 14) {
            Interlocked.Increment(ref count); // We want an atomic (thread safe) operation
            break;
        }
    }
});

Console.Write(count);
It returns 872348501 within 6 min (Core i7 with 4 cores at 3.2GHz)
UPDATE
My parallel code calculated the result as 872,348,501 in 9 minutes on my 8-core Intel Core i7 PC.
(There is a much better solution above that takes less than 100ms, but I shall leave this answer here since it provides corroborating evidence for the fast answer.)
You can use multiple threads (one per processor core) to reduce the calculation time.
At first I thought that I could use AsParallel() to speed this up - however, it turns out that you can't use AsParallel() on sequences with more than 2^31 items.
(For completeness I'm including my faulty implementation using AsParallel at the end of this answer).
Instead, I've written some custom code to break the problem down into a number of chunks equal to the number of processors:
using System;
using System.Linq;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        static void Main()
        {
            int numProcessors = Environment.ProcessorCount;
            Task<long>[] results = new Task<long>[numProcessors];
            long count = 10000000000;
            long elementsPerProcessor = count / numProcessors;

            for (int i = 0; i < numProcessors; ++i)
            {
                long end;
                long start = i * elementsPerProcessor;

                if (i != (numProcessors - 1))
                    end = start + elementsPerProcessor;
                else // Last thread - go right up to the last element.
                    end = count;

                results[i] = Task.Run(() => processElements(start, end));
            }

            long sum = results.Select(r => r.Result).Sum();
            Console.WriteLine(sum);
        }

        static long processElements(long inclusiveStart, long exclusiveEnd)
        {
            long total = 0;

            for (long i = inclusiveStart; i < exclusiveEnd; ++i)
                if (i.ToString().Contains("14"))
                    ++total;

            return total;
        }
    }
}
The following code does NOT work because AsParallel() doesn't work on sequences with more than 2^31 items.
static void Main(string[] args)
{
    var numbersContaining14 =
        from number in numbers(0, 100000000000).AsParallel()
        where number.ToString().Contains("14")
        select number;

    Console.WriteLine(numbersContaining14.LongCount());
}

static IEnumerable<long> numbers(long first, long count)
{
    for (long i = first, last = first + count; i < last; ++i)
        yield return i;
}
You compute the count of numbers of a given length that end in 1, in 4, or in anything else and that don't contain 14; then you can extend the length by 1.
The count of numbers that do contain 14 is then the count of all numbers minus the count of those that don't contain 14.
private static long Count(int len) {
    long e1 = 0, e4 = 0, eo = 1;
    long N = 1;
    for (int n = 0; n < len; n++) {
        long ne1 = e4 + e1 + eo, ne4 = e4 + eo, neo = 8 * (e1 + e4 + eo);
        e1 = ne1; e4 = ne4; eo = neo;
        N *= 10;
    }
    return N - e1 - e4 - eo;
}
You can reduce this code a little, noting that eo = 8*e1 except for the first iteration, and then avoiding the local variables.
private static long Count(int len) {
    long e1 = 1, e4 = 1, N = 10;
    for (int n = 1; n < len; n++) {
        e4 += 8 * e1;
        e1 += e4;
        N *= 10;
    }
    return N - 9 * e1 - e4;
}
For both of these, Count(10) returns 872348501.
One easy way to calculate the answer is:
You can fix 14 at a place and count the combinations of the remaining digits to its right and left, and do this for all the possible positions where 14 can be placed such that the number is still less than 10000000000. Let's take an example:
***14*****
Here the '*'s before the 14 can be filled in 900 ways and the '*'s after it can be filled in 10^5 ways, so the total occurrence for this position will be 10^5 * 900.
Similarly you can fix 14 at the other positions to calculate the result. This solution is about O(10), or simply O(1), while the previous solution was O(N), where N is 10000000000.
You can use the fact that in each 1000 (that is, from 1 to 1000, from 1001 to 2000, etc.)
the "14" is found 19 times, so when you receive your input, divide it by 1000. For example, if you received 1200, then 1200/1000
gives 1 with remainder 200, so we have 1 * 19 "14"s, and then you can loop over the remaining 200.
You can extend this to 10000 (that is, count how many "14"s there are in 10000 and store that in a global variable), start by dividing by 10000, apply the equation above, then divide the remainder by 1000, apply the equation again and add the two results.
You can extend it further with the fact that in every hundred (that is, from 1 to 100, from 201 to 300, ...) the "14" is found only once, except for the second hundred (101 to 200).

Segmented Aggregation within an Array

I have a large array of primitive value types. The array is in fact one-dimensional, but logically represents a 2-dimensional field. Reading from left to right, each value needs to become (the original value of the current cell) + (the result calculated in the cell to the left), with the obvious exception of the first element of each row, which just keeps its original value.
I already have an implementation that accomplishes this, but it is entirely iterative over the whole array and is extremely slow for large (1M+ element) arrays.
Given the following example array,
0 0 1 0 0
2 0 0 0 3
0 4 1 1 0
0 1 0 4 1
Becomes
0 0 1 1 1
2 2 2 2 5
0 4 5 6 6
0 1 1 5 6
And so forth to the right, up to problematic sizes (1024x1024)
The array needs to be updated (ideally), but another array can be used if necessary. Memory footprint isn't much of an issue here, but performance is critical as these arrays have millions of elements and must be processed hundreds of times per second.
The individual cell calculations do not appear to be parallelizable given their dependence on values starting from the left, so GPU acceleration seems impossible. I have investigated PLINQ, but the requirement for indices makes it very difficult to implement.
Is there another way to structure the data to make it faster to process?
If efficient GPU processing is feasible using an innovative technique, this would be vastly preferable, as this is currently texture data which is having to be pulled from and pushed back to the video card.
Proper coding and a bit of insight into how .NET works helps as well :-)
Some rules of thumb that apply in this case:
If you can hint the JIT that the indexing will never get out of the bounds of the array, it will remove the extra branch.
You should only spread the work over multiple threads if it's really slow (e.g. >1 second). Otherwise task switching, cache flushes, etc. will probably just eat up the added speed and you'll end up worse off.
If possible, make memory access predictable, even sequential. If you need another array, so be it - if not, prefer that.
Use as few IL instructions as possible if you want speed. Generally this seems to work.
Test multiple iterations. A single iteration might not be good enough.
Using these rules, you can make a small test case as follows. Note that I've upped the stakes to 4K x 4K since 1K is just so fast you cannot measure it :-)
public static void Main(string[] args)
{
    int width = 4096;
    int height = 4096;
    int[] ar = new int[width * height];
    Random rnd = new Random(213);
    for (int i = 0; i < ar.Length; ++i)
    {
        ar[i] = rnd.Next(0, 120);
    }

    // (5)...
    for (int j = 0; j < 10; ++j)
    {
        Stopwatch sw = Stopwatch.StartNew();
        int sum = 0;
        for (int i = 0; i < ar.Length; ++i) // (3) sequential access
        {
            if ((i % width) == 0)
            {
                sum = 0;
            }

            // (1) --> the JIT will notice this won't go out of bounds because [0 <= i < ar.Length]
            // (5) --> '+=' is an expression generating a 'dup'; this creates less IL.
            ar[i] = (sum += ar[i]);
        }
        Console.WriteLine("This took {0:0.0000}s", sw.Elapsed.TotalSeconds);
    }
    Console.ReadLine();
}
One of these iterations will take roughly 0.0174 s here, and since this is about 16x the worst-case scenario you describe, I suppose your performance problem is solved.
If you really want to parallelize it to make it faster, I suppose that is possible, even though you will lose some of the optimizations in the JIT (specifically: (1)). However, if you have a multi-core system like most people, the benefits might outweigh these:
for (int j = 0; j < 10; ++j)
{
    Stopwatch sw = Stopwatch.StartNew();
    Parallel.For(0, height, (a) =>
    {
        // Seed the running sum with the first element of the row so that it is
        // included in the prefix sums of the rest of the row.
        int sum = ar[width * a];
        for (var i = width * a + 1; i < width * (a + 1); i++)
        {
            ar[i] = (sum += ar[i]);
        }
    });
    Console.WriteLine("This took {0:0.0000}s", sw.Elapsed.TotalSeconds);
}
If you really, really need performance, you can compile it to C++ and use P/Invoke. Even if you don't use the GPU, I suppose the SSE/AVX instructions might already give you a significant performance boost that you won't get with .NET/C#. I'd also like to point out that the Intel C++ compiler can automatically vectorize your code - even for Xeon Phis. Without a lot of effort, this might give you a nice boost in performance.
Well, I don't know too much about GPUs, but I see no reason why you can't parallelize it, as the dependencies only run from left to right.
There are no dependencies between rows.
0 0 1 0 0 - process on core1 |
2 0 0 0 3 - process on core1 |
-------------------------------
0 4 1 1 0 - process on core2 |
0 1 0 4 1 - process on core2 |
Although the above statement is not completely true: there are still hidden dependencies between rows when it comes to the memory cache.
It's possible that there will be cache thrashing. You can read about "cache false sharing" to understand the problem and see how to overcome it.
As @Chris Eelmaa told you, it is possible to do a parallel execution by row. Using Parallel.For, it could be rewritten like this:
static int[,] values = new int[,]{
    {0, 0, 1, 0, 0},
    {2, 0, 0, 0, 3},
    {0, 4, 1, 1, 0},
    {0, 1, 0, 4, 1}};

static void Main(string[] args)
{
    int rows = values.GetLength(0);
    int columns = values.GetLength(1);

    Parallel.For(0, rows, (row) =>
    {
        for (var column = 1; column < columns; column++)
        {
            values[row, column] += values[row, column - 1];
        }
    });

    for (var row = 0; row < rows; row++)
    {
        for (var column = 0; column < columns; column++)
        {
            Console.Write("{0} ", values[row, column]);
        }
        Console.WriteLine();
    }
}
Since, as stated in your question, you actually have a one-dimensional array, the code would be a bit faster like this:
static void Main(string[] args)
{
    var values = new int[1024 * 1024];
    Random r = new Random();
    for (int i = 0; i < 1024; i++)
    {
        for (int j = 0; j < 1024; j++)
        {
            values[i * 1024 + j] = r.Next(25);
        }
    }

    int rows = 1024;
    int columns = 1024;
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 100; i++)
    {
        Parallel.For(0, rows, (row) =>
        {
            for (var column = 1; column < columns; column++)
            {
                values[(row * columns) + column] += values[(row * columns) + column - 1];
            }
        });
    }
    Console.WriteLine(sw.Elapsed);
}
But not as fast as a GPU. To use parallel GPU processing you would have to rewrite it in C++ AMP, or take a look at how to port this parallel for to cudafy: http://w8isms.blogspot.com.es/2012/09/cudafy-me-part-3-of-4.html
You may as well store the array as a jagged array (an array of row arrays); each row stays contiguous in memory. So, instead of,
int[] texture;
you have,
int[][] texture;
Isolate the row operation as,
private static Task ProcessRow(int[] row)
{
    // Run the row's prefix sum on the thread pool so rows can proceed concurrently.
    return Task.Run(() =>
    {
        var v = row[0];
        for (var i = 1; i < row.Length; i++)
        {
            v = row[i] += v;
        }
    });
}
then you can write a function that does,
Task.WhenAll(texture.Select(ProcessRow)).Wait();
If you want to remain with a 1-dimensional array, a similar approach will work, just change ProcessRow.
private static Task ProcessRow(int[] texture, int start, int limit)
{
    return Task.Run(() =>
    {
        var v = texture[start];
        for (var i = start + 1; i < limit; i++)
        {
            v = texture[i] += v;
        }
    });
}
then once,
var rowSize = 1024;
var rows =
Enumerable.Range(0, texture.Length / rowSize)
.Select(i => Tuple.Create(i * rowSize, (i * rowSize) + rowSize))
.ToArray();
then on each cycle.
Task.WhenAll(rows.Select(t => ProcessRow(texture, t.Item1, t.Item2))).Wait();
Either way, each row is processed in parallel.

Calculate all possible permutations/combinations, then check if the result is equal to a value

Best way I can explain it is using an example:
You are visiting a shop with $2000, your goal is to have $0 at the end of your trip.
You do not know how many items are going to be available, nor how much they cost.
Say that there are currently 3 items costing $1000, $750, $500.
(The point is to calculate all possible solutions, not the most efficient one.)
You can spend $2000, this means:
You can buy the $1000 item 0, 1 or 2 times.
You can buy the $750 item 0, 1 or 2 times.
You can buy the $500 item 0, 1, 2, 3 or 4 times.
At the end I need to be able to have all solutions, in this case it will be
2*$1000
1*$1000 and 2*$500
2*$750 and 1*$500
4*$500
Side note: you can't have a duplicate solution (like this)
1*$1000 and 2*$500
2*$500 and 1*$1000
This is what I tried:
You first call this function using
goalmoney = Convert.ToInt32(goalMoneyTextBox.Text);
totalmoney = Convert.ToInt32(totalMoneyTextBox.Text);
int[] list = new int[usingListBox.Items.Count];
Calculate(0, currentmoney, list);
The function:
public void Calculate(int level, int money, int[] list)
{
    string item = usingListBox.Items[level].ToString();
    int cost = ItemDict[item];
    for (int i = 0; i <= (totalmoney / cost); i++)
    {
        int[] templist = list;
        int tempmoney = money - (cost * i);
        templist[level] = i;

        if (tempmoney == goalmoney)
        {
            resultsFound++;
        }
        if (level < usingListBox.Items.Count - 1 && tempmoney != goalmoney)
            Calculate(level + 1, tempmoney, templist);
    }
}
Your problem can be reduced to a well-known mathematical problem called the Frobenius equation, which is closely related to the well-known Coin problem. Suppose you have N items, where the i-th item costs c[i], and you need to spend exactly S$. So you need to find all non-negative integer solutions (or decide that there are none at all) of the equation
c[1]*n[1] + c[2]*n[2] + ... + c[N]*n[N] = S
where all n[i] are unknown variables and each n[i] is the number of bought items of the i-th type.
This equation can be solved in various ways. The following function allSolutions (I suppose it can be simplified further) finds all solutions of a given equation:
public static List<int[]> allSolutions(int[] system, int total) {
    ArrayList<int[]> all = new ArrayList<>();
    int[] solution = new int[system.length]; // initialized with zeros
    int pointer = system.length - 1, temp;
    out:
    while (true) {
        do { // the following loop can be optimized by calculating the remainder
            ++solution[pointer];
        } while ((temp = total(system, solution)) < total);
        if (temp == total && pointer != 0)
            all.add(solution.clone());
        do {
            if (pointer == 0) {
                if (temp == total) // don't lose the last solution!
                    all.add(solution.clone());
                break out;
            }
            for (int i = pointer; i < system.length; ++i)
                solution[i] = 0;
            ++solution[--pointer];
        } while ((temp = total(system, solution)) > total);
        pointer = system.length - 1;
        if (temp == total)
            all.add(solution.clone());
    }
    return all;
}

public static int total(int[] system, int[] solution) {
    int total = 0;
    for (int i = 0; i < system.length; ++i)
        total += system[i] * solution[i];
    return total;
}
In the above code, system is the array of coefficients c[i] and total is S. There is an obvious restriction: system should not contain any zero elements (those lead to an infinite number of solutions). A slight modification of the above code avoids this restriction.
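Since the question is tagged C#, here is a compact recursive sketch of the same enumeration (added for illustration, not part of the original answer; it assumes every cost is positive):

public static List<int[]> AllSolutions(int[] costs, int target)
{
    var results = new List<int[]>();
    Recurse(costs, target, 0, new int[costs.Length], results);
    return results;
}

private static void Recurse(int[] costs, int remaining, int index, int[] counts, List<int[]> results)
{
    if (remaining == 0)
    {
        results.Add((int[])counts.Clone()); // counts[index..] are all zero here
        return;
    }
    if (index == costs.Length)
        return;

    // Try every quantity of the current item that still fits, then move on to the next item.
    for (int n = 0; n * costs[index] <= remaining; n++)
    {
        counts[index] = n;
        Recurse(costs, remaining - n * costs[index], index + 1, counts, results);
    }
    counts[index] = 0;
}

For the question's example, AllSolutions(new[] { 1000, 750, 500 }, 2000) yields the four combinations listed in the question.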
Assuming you have class Product which exposes a property called Price, this is a way to do it:
public List<List<Product>> GetAffordableCombinations(double availableMoney, List<Product> availableProducts)
{
    List<Product> sortedProducts = availableProducts.OrderByDescending(p => p.Price).ToList();

    //we have to cycle through the list multiple times while keeping track of the current
    //position in each subsequent cycle. we're using a list of integers to save these positions
    List<int> layerPointer = new List<int>();
    layerPointer.Add(0);
    int currentLayer = 0;

    List<List<Product>> affordableCombinations = new List<List<Product>>();
    List<Product> tempList = new List<Product>();

    //when we've gone through all products on the top layer, we're done
    while (layerPointer[0] < sortedProducts.Count)
    {
        //take the product in the current position on the current layer
        var currentProduct = sortedProducts[layerPointer[currentLayer]];
        var currentSum = tempList.Sum(p => p.Price);

        if ((currentSum + currentProduct.Price) <= availableMoney)
        {
            //if the sum doesn't exceed our maximum we add that product to a temp list
            tempList.Add(currentProduct);
            //then we advance to the next layer
            currentLayer++;
            //if it doesn't exist, we create it and set the 'start product' on that layer
            //to the current product of the current layer
            if (currentLayer >= layerPointer.Count)
                layerPointer.Add(layerPointer[currentLayer - 1]);
        }
        else
        {
            //if the sum would exceed our maximum we move to the next product on the current layer
            layerPointer[currentLayer]++;

            if (layerPointer[currentLayer] >= sortedProducts.Count)
            {
                //if we've reached the end of the list on the current layer,
                //there are no more cheaper products to add, and this cycle is complete
                //so we add the list we have so far to the possible combinations
                affordableCombinations.Add(tempList);
                tempList = new List<Product>();

                //move to the next product on the top layer
                layerPointer[0]++;
                currentLayer = 0;

                //set the current products on each subsequent layer to the current of the top layer
                for (int i = 1; i < layerPointer.Count; i++)
                {
                    layerPointer[i] = layerPointer[0];
                }
            }
        }
    }

    return affordableCombinations;
}
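For reference, a hypothetical way to exercise it with the question's example (the Product class here is assumed, since the answer only states that it exposes a Price property):

public class Product
{
    public string Name { get; set; }
    public double Price { get; set; }
}

// ...

var products = new List<Product>
{
    new Product { Name = "A", Price = 1000 },
    new Product { Name = "B", Price = 750 },
    new Product { Name = "C", Price = 500 }
};

var combinations = GetAffordableCombinations(2000, products);
foreach (var combo in combinations)
    Console.WriteLine(string.Join(" + ", combo.Select(p => $"{p.Name} (${p.Price})")));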
