I have a fixed list of weights:
int[] weights = new int[] { 10, 15, 20 };
and a target:
int target = 28;
I am looking for an algorithm to express target as the sum of elements from weights (with repeats allowed) such that target is either matched or exceeded, the closest possible match to target is achieved, and within that, the number of weights used is minimised.
So with the above input I would like either 10 20 or 15 15 to be returned, since 30 is as close as we can get, and of the options for making 30, these two are better than 10 10 10.
With a target of 39, the output should be 20 20 rather than, say, 15 15 10 or 10 10 10 10.
With a target of 14, the output should be 15.
Is there a good approach here other than regular foreach loops? I was thinking of retrieving the largest value available in the array and checking whether subtracting it would make the target negative; if so, move on to the next value.
This is not homework :)
This is known as the knapsack problem. The only difference is that you're looking for the nearest match, instead of the nearest lower match. Also, fortunately, none of the weights has a distinct value of its own (an item's value is simply its weight). The difficulty lies in the fact that you cannot simply take the weight that comes closest and recurse using the remaining value; a combination of smaller values would sometimes make a better match.
In your example the weights are all 5 "units" apart; if this is always the case, the problem becomes a lot easier to solve.
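Even without that common divisor, the nearest-match variant can be solved with a small dynamic-programming table over all sums up to target plus the largest weight. Here is a sketch along those lines (`NearestPacking` is a made-up name, not code from the question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// For every sum up to target + max(weights), record the fewest weights that
// reach it exactly; the first reachable sum at or above the target is then
// the closest match, using the fewest weights for that sum.
static List<int> NearestPacking(int[] weights, int target)
{
    int limit = target + weights.Max();
    var best = new int[limit + 1];   // fewest weights summing to i
    var last = new int[limit + 1];   // last weight used to reach i
    for (int i = 1; i <= limit; i++) best[i] = int.MaxValue;

    for (int i = 0; i <= limit; i++)
    {
        if (best[i] == int.MaxValue) continue;   // i is unreachable
        foreach (int w in weights)
            if (i + w <= limit && best[i] + 1 < best[i + w])
            {
                best[i + w] = best[i] + 1;
                last[i + w] = w;
            }
    }

    for (int s = target; s <= limit; s++)
    {
        if (best[s] == int.MaxValue) continue;
        var result = new List<int>();
        for (int i = s; i > 0; i -= last[i])     // walk back to zero
            result.Add(last[i]);
        return result;
    }
    return new List<int>();   // not reachable with positive weights
}

Console.WriteLine(string.Join(" ", NearestPacking(new[] { 10, 15, 20 }, 28)));
// 20 10
```

For target 39 this yields two 20s (sum 40), and for target 14 a single 15, matching the examples in the question.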
I've managed to find a solution, thanks to everyone here making it a bit clearer what I actually needed. It's not the prettiest code I've written, but this is MVP development anyway!
private static List<int> WeightsJuggle(List<int> packages, IOrderedEnumerable<int> weights, int weight)
{
    if (weight == 0)
        return packages;

    // weights is ordered ascending, so this takes the smallest single
    // weight that covers the remainder, if one exists
    foreach (int i in weights.Where(i => i >= weight))
    {
        packages.Add(i);
        return packages;
    }

    // otherwise take the largest weight and recurse on what's left
    packages.Add(weights.Max());
    return WeightsJuggle(packages, weights, weight - weights.Max());
}
I call it like this
IOrderedEnumerable<int> weights = new int[] { 10, 15, 20 }.OrderBy(x => x);
int weight = 65;
List<int> packages = new List<int>();
Test with weight 65
Test with weight 123
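A self-contained version of those two tests, with the results worked out by hand (note the greedy approach can overshoot: 65 could be matched exactly as 20 + 15 + 15 + 15):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static List<int> WeightsJuggle(List<int> packages, IOrderedEnumerable<int> weights, int weight)
{
    if (weight == 0)
        return packages;
    foreach (int i in weights.Where(i => i >= weight))
    {
        packages.Add(i);
        return packages;
    }
    packages.Add(weights.Max());
    return WeightsJuggle(packages, weights, weight - weights.Max());
}

IOrderedEnumerable<int> weights = new int[] { 10, 15, 20 }.OrderBy(x => x);

Console.WriteLine(string.Join(" ", WeightsJuggle(new List<int>(), weights, 65)));
// 20 20 20 10   (sums to 70)
Console.WriteLine(string.Join(" ", WeightsJuggle(new List<int>(), weights, 123)));
// 20 20 20 20 20 20 10   (sums to 130)
```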
While doing a code Kata, I've run into a slight logical problem that I can't figure out a solution to. It isn't a hard-and-fast requirement of completing the task but it has me intrigued as to how it could be handled.
The Kata is simulating applying pricing discounts at a supermarket checkout (see the full Kata here) through a collection of pricing rules. To play around with some inheritance and interface capabilities, I've added a "Buy X, get Y free" rule. It works fine when the Y in question is 1, but starts getting a little hazy after that...
For example, I experimented with the idea of "Buy 3, get 2 free". I tried doing this by grouping the items in groups of 5, and working out the discount of each group by subtracting the price of two of the items:
public override double CalculateDiscount(Item[] items)
{
//per the example described above
//_groupSize = 5, _buy = 3
//meaning 2 out of every group of 5 items should be free
var discountGroups = new List<IEnumerable<Item>>();
for (var i = 0; i < items.Length / _groupSize; i++)
{
discountGroups.Add(items.Skip(i * _groupSize).Take(_groupSize));
}
return discountGroups
.Sum(group => group
.Skip(_buy)
.Sum(item => item.Price)
);
}
What I found is that the above code works as expected (if each item has a Price property of 1.00, the above would return 2.00). An edge case that I am looking to solve is that the discount doesn't take effect until the fifth item is added (so the price as you add each item goes 1.00, 2.00, 3.00, 4.00, 3.00).
What I would ideally like is that, once you have three items in your collection, the next two items are free, whether you choose to take just one or two of them shouldn't affect the price. I understand this isn't hugely realistic to the domain, but I was interested in trying to solve it as a technical problem.
I've had a few cracks at it but haven't successfully gotten any closer than the above code. I figure that what I need to do is group the array into the minimum number of bought items required, then a variable number of free items. I could probably hard-code something to solve the issue once, but this gets complicated if I were to simulate buying 3 items and getting 2 free, then buying 3 items but only taking one free one.
Any advice on how to go about this would be really appreciated!
Thanks,
Mark
Your discount calculation has a bug: you never create a group for the leftover items when the item count isn't an exact multiple of _groupSize (and no group at all when it's less than _groupSize). So change i < to i <=:
for (var i = 0; i <= items.Length / _groupSize; i++)
{
    discountGroups.Add(items.Skip(i * _groupSize).Take(_groupSize));
}
Maybe that's already all.
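To see the fix in action, here's a minimal stand-alone sketch (the prices, method shape, and parameter names are my own; _groupSize = 5 and _buy = 3 as in the question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static double CalculateDiscount(double[] prices, int groupSize, int buy)
{
    var discountGroups = new List<IEnumerable<double>>();
    // <= instead of <, so leftover items form a final (partial) group
    for (var i = 0; i <= prices.Length / groupSize; i++)
        discountGroups.Add(prices.Skip(i * groupSize).Take(groupSize));
    // everything past the first `buy` items of each group is free
    return discountGroups.Sum(group => group.Skip(buy).Sum());
}

// 4 items: one partial group of 4, so the 4th item is already free
Console.WriteLine(CalculateDiscount(new[] { 1.0, 1.0, 1.0, 1.0 }, 5, 3));               // 1
// 7 items: a full group of 5 (2 free) plus a partial group of 2 (0 free)
Console.WriteLine(CalculateDiscount(new[] { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 }, 5, 3)); // 2
```

With this change the running price for 1.00 items goes 1.00, 2.00, 3.00, 3.00, 3.00, which is the behaviour the questioner asked for.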
I would just like to add that the extension method .Chunk() was added to the System.Linq namespace in .NET 6, and it does exactly what you are doing to create discountGroups; it splits the source collection into an IEnumerable of chunks of the requested chunk size:
source: { 1, 2, 3, 4, 5, 6, 7, 8 }
var chunks = source.Chunk(3);
chunks: { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8 } }
If the item count in the source enumerable is not exactly divisible by the wanted chunk size, the last chunk will simply consist of the remaining items (i.e. the last chunk may be smaller in size than the other chunks).
By using .Chunk(), you could therefore replace this:
var discountGroups = new List<IEnumerable<Item>>();
for (...)
{
discountGroups.Add(items.Skip(i * _groupSize).Take(_groupSize));
}
with this:
var discountGroups = items.Chunk(_groupSize).ToList();
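A runnable sketch of the same example (requires .NET 6 or later):

```csharp
using System;
using System.Linq;

int[] source = { 1, 2, 3, 4, 5, 6, 7, 8 };

// split into chunks of 3; the last chunk just holds the leftovers
int[][] chunks = source.Chunk(3).ToArray();

foreach (var chunk in chunks)
    Console.WriteLine(string.Join(", ", chunk));
// 1, 2, 3
// 4, 5, 6
// 7, 8
```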
Prime Number Generator Code
I know that this question should be quite basic, but I have spent hours trying to figure out why my code is stuck in the loop below. I have added a Console.WriteLine($"{counttemp} , {count1} "); in the if block to check the two numbers, and it seems like it is not breaking out of the if condition when the condition is true.
this is the console output for the writeline in the if loop
5 , 5
6 , 2
7 , 7
8 , 2
9 , 3
10 , 2
11 , 11
12 , 2
13 , 13
14 , 2
15 , 3
16 , 2
17 , 17
18 , 2
19 , 19
Problematic Loop??
for (count1 = 2; count1 <= counttemp; ++count1)
{
    if (counttemp % count1 == 0)
    {
        Console.WriteLine($"{counttemp} , {count1} ");
        Console.ReadKey();
        primetest1 = 0;
        break;
    }
}
full code sequence
static void Main(string[] args)
{
    int prime1 = 10000, count1, primetest1, counttemp;
    for (counttemp = 5; counttemp <= prime1; counttemp++)
    {
        primetest1 = 1;
        for (count1 = 2; count1 <= counttemp; ++count1)
        {
            if (counttemp % count1 == 0)
            {
                Console.WriteLine($"{counttemp} , {count1} ");
                Console.ReadKey();
                primetest1 = 0;
                break;
            }
        }
        if (primetest1 == 1)
        {
            Console.Write($"{counttemp}");
        }
    }
}
You're almost there. The problem is that you're checking if your candidate number is a prime by getting the remainder when divided by each number up to and including the number itself.
I think you'll find that N is a factor of N for all values of N. To fix this, you should only be checking up to but excluding the number.
And, as an aside, you don't really need to check all the way up to N - 1. You only need to go to the square root of N, adjusted up to the nearest integer. That's because, if it has a factor above the square root, you would already have found a factor below it.
Consider 24 as an example. It has 6, 8, and 12 as factors above the square root, but the matching values below the square root are 4, 3, and 2 respectively.
And there's another trick you can use by realising that if a number is a multiple of a non-prime, it's also a multiple of every prime factor of that non-prime. In other words, every multiple of 12 is also a multiple of 2 and 3.
So you only need to check prime numbers up to the square root, to see if there's a factor. And prime numbers, other than two or three, are guaranteed to be of the form 6x-1 or 6x+1, so it's quite easy to filter out a large chunk of candidates very quickly, by checking only for those values.
In other words, check two and three as special cases. Then start at 5 and alternately add 2 and 4: 5, 7, 11, 13, 17, 19, .... Not every number in that set is prime (e.g., 25), but every prime is guaranteed to be in that set.
You can check out an earlier answer of mine for more detail on why this is so, and how to do this sequence efficiently.
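Putting those pieces together, a primality test along the lines described might look like this (a sketch, not the linked answer's exact code):

```csharp
using System;

// Trial division: handle 2 and 3 as special cases, then test only
// candidates of the form 6x - 1 and 6x + 1, up to the square root of n.
static bool IsPrime(int n)
{
    if (n < 2) return false;
    if (n < 4) return true;                    // 2 and 3
    if (n % 2 == 0 || n % 3 == 0) return false;
    for (int f = 5; (long)f * f <= n; f += 6)  // f, f + 2 are 6x - 1, 6x + 1
        if (n % f == 0 || n % (f + 2) == 0)
            return false;
    return true;
}

for (int i = 2; i <= 30; i++)
    if (IsPrime(i)) Console.Write($"{i} ");
// 2 3 5 7 11 13 17 19 23 29
```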
I'm trying to program a simple dice game for mobile in Unity with C# (it's called 10000, maybe you know it).
In the game you have 6 dice; for example, rolling three 6s gives you 600 points, rolling four 3s gives 3000 points, and so on.
That's why I have to check if there are at least 3 dice showing the same number, and with Mathf.Approximately
and a lot of ifs the code would be really ugly and long.
So what's the easiest way to get around this problem?
// dice values
int[] values = new int[6] { 3, 4, 3, 6, 1, 3 };

// group the values by face (each group's key is a face value,
// its count is how many dice show that face)
var group = values.GroupBy(v => v);

// if there are 3 or more identical dice
if (group.Any(g => g.Count() >= 3))
{
}
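For scoring you'll typically also need which face matched and how many times; the same grouping extends naturally (the `triple` shape here is my own, not from the answer above):

```csharp
using System;
using System.Linq;

int[] values = { 3, 4, 3, 6, 1, 3 };

// pick the face that occurs at least three times, if any
var triple = values.GroupBy(v => v)
                   .Where(g => g.Count() >= 3)
                   .Select(g => new { Face = g.Key, Count = g.Count() })
                   .FirstOrDefault();

if (triple != null)
    Console.WriteLine($"face {triple.Face} rolled {triple.Count} times");
// face 3 rolled 3 times
```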
Here is an example:
Customer orders 57 single pieces of an item. The company only sells in
units of 15 and 6.
The algorithm has to figure out the best possible combination of UOMs (unit of measure) with the following priorities in order of importance
least amount of overage
using highest unit of measure
In this example the expected result is List<int>:
{ 15, 15, 15, 6, 6 } //sum=57
I've researched "bin packing" and "knapsack problem" but couldn't figure out how it could be applied in this case.
So far I have this, which clearly doesn't accomplish the best combination.
void Main()
{
    var solver = new Solver();
    var r = solver.Solve(57, new decimal[] { 6, 15 }).Dump();
}

public class Solver
{
    public List<decimal> mResults;

    public List<decimal> Solve(decimal goal, decimal[] elements)
    {
        mResults = new List<decimal>();
        RSolve(goal, 0m, elements, elements.Where(e => e <= goal).OrderByDescending(x => x).FirstOrDefault());
        return mResults;
    }

    public void RSolve(decimal goal, decimal currentSum, decimal[] elements, decimal toAdd)
    {
        if (currentSum >= goal)
            return;

        currentSum += toAdd;
        mResults.Add(toAdd);
        var remainder = goal - currentSum;
        var nextToAdd = elements.Where(e => e <= remainder).OrderByDescending(e => e).FirstOrDefault();
        if (nextToAdd == 0)
            nextToAdd = elements.OrderBy(e => e).FirstOrDefault();
        RSolve(goal, currentSum, elements, nextToAdd);
    }
}
This is an instance of the change-making problem. It can be solved by dynamic programming; build an array where the value at index i is the largest coin that can be used in a solution totalling i, or -1 if no solution is possible. If you use larger units first then the solutions will satisfy the "highest unit of measure" requirement automatically; to satisfy the "least amount of overage" requirement, you can try to find a solution for 57, if that doesn't work try 58, and so on.
The running time is at most O((n + 1 - n₀)k) where n is the required sum, n₀ is the largest index in the cache, and k is the number of units of measure. In particular, after trying 57, it takes at most O(k) time to try 58, then at most O(k) time to try 59, and so on.
To build the output list for a sum n, initialize an empty list, then while n > 0 append the value in the cache at index n, and subtract that value from n.
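A sketch of that approach in C# (for simplicity it rebuilds the cache for each candidate sum instead of extending it incrementally, so it doesn't achieve the running time quoted above; the `Solve` signature is my own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static List<int> Solve(int target, int[] units)
{
    Array.Sort(units);                          // ascending
    for (int n = target; ; n++)                 // try 57, then 58, ...
    {
        // cache[i] = largest unit that can end an exact solution for i, or -1
        var cache = new int[n + 1];
        for (int i = 1; i <= n; i++)
        {
            cache[i] = -1;
            foreach (int u in units)            // ascending: largest feasible unit wins
                if (i >= u && cache[i - u] != -1)
                    cache[i] = u;
        }
        if (cache[n] == -1) continue;           // n unreachable, allow more overage

        var result = new List<int>();
        for (int i = n; i > 0; i -= cache[i])   // walk the cache back to zero
            result.Add(cache[i]);
        return result;
    }
}

Console.WriteLine(string.Join(", ", Solve(57, new[] { 6, 15 })));
// 15, 15, 15, 6, 6
```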
I haven't done much LINQ before, so I often find some aspects confusing. Recently someone created a query that looks like the following using the GroupBy operator. Here's what they did:
List<int> ranges = new List<int>() {100, 1000, 1000000};
List<int> sizes = new List<int>(new int[]{99,98,10,5,5454, 12432, 11, 12432, 992, 56, 222});
var xx = sizes.GroupBy (size => ranges.First(range => range >= size));
xx.Dump();
Basically I am quite confused as to how the key expression works, i.e. ranges.First(range => range >= size)
Can anyone shed some light? Can it be decomposed further to make this easier to understand? I thought that First would produce one result.
Thanks in advance.
size => ranges.First(range => range >= size) is the Func that builds the key on which the sizes will be grouped. It takes the current size and finds the first range which is greater than or equal to that size.
How it works:
For size 99, the first range which is >= 99 is 100. So the calculated key value will be 100, and the size goes into the group with key 100.
The next sizes 98, 10, 5 also get key 100 and go into that group.
For size 5454 the calculated key value will be 1000000 (it's the first range which is greater than 5454). So a new key is created, and the size goes into the group with key 1000000.
Etc.
ranges.First(range => range >= size) returns an int, the first range that is >= the current size value. So every size belongs to one range. That is the group.
Note that First throws an exception if there's no range which is >= the given size.
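If sizes larger than every range can occur in your data, one option (my own suggestion, not from the original query) is FirstOrDefault, which returns default(int), i.e. 0, when nothing matches:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var ranges = new List<int> { 100, 1000, 1000000 };
var sizes = new List<int> { 99, 2000000, 5 };   // 2000000 exceeds every range

// oversized values land in a group with key 0 instead of throwing
var groups = sizes.GroupBy(size => ranges.FirstOrDefault(range => range >= size));

foreach (var g in groups)
    Console.WriteLine($"{g.Key}: {string.Join(", ", g)}");
// 100: 99, 5
// 0: 2000000
```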
If you write the code with a for loop, it looks like this:
var myGroup = new Dictionary<int, List<int>>();
foreach (var size in sizes)
{
    // ranges.First(range => range >= size) is like below:
    range = find the minimum value in ranges which is greater than or equal to size;

    // this grouping is done automatically by calling GroupBy in your code:
    if (myGroup[range] has no value) // actually TryGetValue
        myGroup[range] = new List<int>();

    // this addition is done on each iteration over your inputs:
    myGroup[range].Add(size);
}
The difference is that your LINQ command doesn't work with a plain loop; it actually works with a hash table, so it's faster (on average), and once you learn LINQ well, it's more readable.
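The same loop in compilable form (variable names follow the sketch above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var ranges = new List<int> { 100, 1000, 1000000 };
var sizes = new List<int> { 99, 98, 10, 5, 5454, 12432, 11, 12432, 992, 56, 222 };

var myGroup = new Dictionary<int, List<int>>();
foreach (var size in sizes)
{
    // equivalent of ranges.First(range => range >= size)
    int range = ranges.First(r => r >= size);

    if (!myGroup.TryGetValue(range, out var list))
        myGroup[range] = list = new List<int>();

    list.Add(size);
}

Console.WriteLine(string.Join(", ", myGroup[100]));
// 99, 98, 10, 5, 11, 56
```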
Not sure whether it adds to the clarity, but if you really want to break it down, you could do the following (I'm guessing you are using LINQPad):
List<int> ranges = new List<int>() {100, 1000, 1000000};
List<int> sizes = new List<int>(new int[]{99,98,10,5,5454, 12432, 11, 12432, 992, 56, 222});
void Main()
{
var xx = sizes.GroupBy (size => GetRangeValue(size));
xx.Dump();
}
private int GetRangeValue(int size)
{
// find the first value in ranges which is bigger than or equal to our size
return ranges.First(range => range >= size);
}
And yes, you are correct, First does produce one result.
Indeed, First returns one value, which becomes the key for grouping.
What happens here is:
- First is called for each value in sizes, returning the first range larger than or equal to that size (100, 100, 100, 100, 1000000, 1000000, etc.)
- the sizes are grouped by this value. For every range a grouping is returned, for instance:
100: 99, 98, 10, 5, 11, ...
GroupBy essentially builds a lookup table (dictionary) where each of the items in your source that meets a common condition is grouped into a list and then assigned to a key in the lookup table.
Here is a sample program that replaces your call to xx.Dump() with a code block that pretty-prints the output in a way specific to your example. Notice the use of OrderBy to first order the keys (range values) as well as group of items associated with each range.
using System;
using System.Collections.Generic;
using System.Linq;
class GroupByDemo
{
static public void Main(string[] args)
{
List<int> ranges = new List<int>() {100, 1000, 1000000};
List<int> sizes = new List<int>(
new int[]{99,98,10,5,5454, 12432, 11, 12432, 992, 56, 222});
var sizesByRange =
sizes.GroupBy(size => ranges.First(range => range >= size));
// Pretty-print the 'GroupBy' results.
foreach (var range in sizesByRange.OrderBy(r => r.Key))
{
Console.WriteLine("Sizes up to range limit '{0}':", range.Key);
foreach (var size in range.ToList().OrderBy(s => s))
{
Console.WriteLine(" {0}", size);
}
}
Console.WriteLine("--");
}
}
Expected Results
Notice that 12432 appears twice in the last group because that value appears twice in the original source list.
Sizes up to range limit '100':
5
10
11
56
98
99
Sizes up to range limit '1000':
222
992
Sizes up to range limit '1000000':
5454
12432
12432
--