This isn't a complicated problem, but for whatever reason I can't think of a simple way to do it with the modulus operator. Basically I have a collection of N items and I want to display them in a grid.
I can display a maximum of 3 entries across and infinitely many vertically; they are not fixed width. So if I have 2 items they get displayed like this: [1][2]. If I have 4 items they get displayed stacked like this:
[1][2]
[3][4]
If I have 5 items it should look like this:
[ 1 ][ 2]
[3][4][5]
Seven items is slightly more complicated:
[ 1 ][ 2]
[ 3 ][ 4]
[5][6][7]
This is one of those things where if I slept on it, it would be brain dead obvious in the morning, but all I can think about doing involves complicated loops and state variables. There has to be an easier way.
I'm doing this in C# but I doubt the language matters.
By maximizing the number of rows that have three items, you can minimize the total number of rows. Thus six items would be grouped as two rows of 3 rather than three rows of 2:
[1][2][3]
[4][5][6]
and ten items would be grouped as two rows of 2 and two rows of 3 rather than five rows of 2:
[ 1 ][ 2 ]
[ 3 ][ 4 ]
[5][6][7 ]
[8][9][10]
If you want rows with two items first, then you keep peeling off two items until the remaining items are divisible by 3. As you go through the loop, you need to keep track of the number of remaining items using an index or whatnot.
In your loop to populate each row, you can check these conditions:
//logic within loop iteration
if (remaining % 3 == 0) //take remaining in threes; break the loop
else if (remaining >= 4) //take two items, leaving two or more remaining
else //take remaining items, which will be two or three; break the loop
If we walk through the example of 10 items, the process would go as follows:
10 items remaining. 10 % 3 != 0. Since 10 > 4, take two items.
8 items remaining. 8 % 3 != 0. Since 8 > 4, take two items.
6 items remaining. 6 % 3 = 0. Take those 6 items in groups of three.
To go to your example of 7 items:
7 items remaining. 7 % 3 != 0. Since 7 > 4, take two items.
5 items remaining. 5 % 3 != 0. Since 5 > 4, take two items.
3 items remaining. 3 % 3 = 0. Take those 3 items as a group.
And here's the result for 4 items:
4 items remaining. 4 % 3 != 0. Since remaining = 4, take two items.
2 items remaining. 2 % 3 != 0, and 2 < 4, so fall through to the else branch and take the remaining two items.
I think that'll work. At least, at 12:30 a.m. it seems like it should work.
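The walkthroughs above can be sketched as a short C# method (the name `RowSizes` is mine); it just computes the per-row item counts rather than rendering anything:

```csharp
using System;
using System.Collections.Generic;

static List<int> RowSizes(int itemCount)
{
    var rows = new List<int>();
    var remaining = itemCount;
    while (remaining > 0)
    {
        if (remaining % 3 == 0)
        {
            // Take the rest in threes and stop.
            for (; remaining > 0; remaining -= 3) rows.Add(3);
        }
        else if (remaining >= 4)
        {
            // Take two items, leaving two or more remaining.
            rows.Add(2);
            remaining -= 2;
        }
        else
        {
            // One or two items left; they form the final row.
            rows.Add(remaining);
            remaining = 0;
        }
    }
    return rows;
}
```

For example, `RowSizes(7)` yields {2, 2, 3} and `RowSizes(10)` yields {2, 2, 3, 3}, matching the walkthroughs above.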
if ((list.Count % 2) == 0)
{
    // Even count: display every item in rows of two: [][]
}
else
{
    // Odd count: display all but the last three in rows of two: [][]
    // then display the last three as a single row: [][][]
}
How about pseudo-code:
if n mod 3 = 1
first 2 rows have 2 items each (assuming n >= 4)
all remaining rows have 3 items
else if n mod 3 = 2
first row has 2 items
all remaining rows have 3 items
else
all rows have 3 items
So, given that: a) the objective is to minimize the number of rows, b) a row cannot have more than 3 items, c) a row should have 3 items if possible, and d) you cannot have a row with a single item unless it is the only item, I would say the algorithm goes as follows:
If there is only one item, it will be alone in its own row; done.
Calculate the 'tentative' number of rows by dividing the number of items by 3.
If the remainder (N % 3) is 0, then all rows will have 3 items.
If the remainder is 1, then there will be an additional row, and the last 2 rows will only have 2 items each.
If the remainder is 2, then there will be an additional row, and it will only have 2 items.
This algorithm will produce a slightly different format from the one you were envisioning (the 3-item rows will be at the top, the 2-item rows at the bottom), but it satisfies the constraints. If you need the 2-item rows to be at the top, you can modify it.
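Here is a sketch of that algorithm in C# (names are mine), producing the 3-item rows first as described:

```csharp
using System.Collections.Generic;

static List<int> RowSizes(int n)
{
    var rows = new List<int>();
    if (n == 1) { rows.Add(1); return rows; } // a single item is alone in its row
    int threes = n / 3, remainder = n % 3;
    if (remainder == 1) threes--;             // borrow one 3-row to make two 2-rows (3 + 1 = 2 + 2)
    for (int i = 0; i < threes; i++) rows.Add(3);
    if (remainder == 1) { rows.Add(2); rows.Add(2); }
    else if (remainder == 2) rows.Add(2);
    return rows;
}
```

`RowSizes(7)` gives {3, 2, 2} and `RowSizes(10)` gives {3, 3, 2, 2}; if you want the 2-item rows on top instead, reverse the list.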
I have data like this:
Time (seconds from start)   Value
15                          2
16                          4
19                          2
25                          9
There are a lot of entries (10,000+), and I need a way to quickly find the sum over any time range, such as the range 16-25 seconds (which would be 4 + 2 + 9 = 15). This data will change dynamically many times (new entries are always appended at the bottom of the list).
I am thinking about using a sorted list plus binary search to determine the positions and then summing the values, but that can take too much time. Is there a more appropriate way to do this? NuGet packages or algorithm references would be appreciated.
Just calculate cumulative sum:
Time Value CumulativeSum
15 2 2
16 4 6
19 2 8
25 9 17
Then the sum over the range [16, 25] reduces to two binary searches: the cumulative sum at the right border (25 exactly, giving 17) minus the cumulative sum just before the left border of 16 (giving 2), which yields 17 - 2 = 15.
Complexity: O(log(n)) per query, where n is the size of the list.
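A sketch of this in C# (class and method names are mine), keeping a running cumulative sum as entries are appended and answering range queries with two binary searches:

```csharp
using System;
using System.Collections.Generic;

class RangeSummer
{
    private readonly List<int> times = new List<int>();
    private readonly List<long> cumulative = new List<long>(); // cumulative[i] = sum of values[0..i]

    // Entries arrive in increasing time order (appended at the bottom of the list).
    public void Add(int time, int value)
    {
        times.Add(time);
        long previous = cumulative.Count > 0 ? cumulative[cumulative.Count - 1] : 0;
        cumulative.Add(previous + value);
    }

    // Sum of all values whose time lies in [from, to], inclusive.
    public long Sum(int from, int to)
    {
        int hi = UpperBound(to) - 1;   // last index with time <= to
        int lo = UpperBound(from - 1); // first index with time >= from
        if (hi < lo) return 0;
        return cumulative[hi] - (lo > 0 ? cumulative[lo - 1] : 0);
    }

    // Index of the first element strictly greater than t (times is sorted).
    private int UpperBound(int t)
    {
        int i = times.BinarySearch(t);
        if (i < 0) return ~i; // not found: complement is the insertion point
        while (i + 1 < times.Count && times[i + 1] == t) i++;
        return i + 1;
    }
}
```

With the question's data, `Sum(16, 25)` returns 15. Appending a new entry is O(1), so the structure stays cheap to maintain as rows are added at the bottom.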
Binary search implementation for lower/upper bound can be found in my repo - https://github.com/eocron/Algorithm/blob/master/Algorithm/Sorted/BinarySearchExtensions.cs
Consider the table below. It has 400,000 rows with 40 columns with values which can range from 0 to 4,000.
MeasureValue1   MeasureValue2   MeasureValue3   ...   MeasureValue40
1               5               7               ...   2740
2               5               7               ...   2749
2               6               7               ...   2703
4               6               8               ...   2721
Conditions are then given per column that the other columns in the row must satisfy before the value in that specific column is counted; these conditions are only known at runtime. Essentially, a group-by is performed across every column where the other columns in the row satisfy the given conditions. For example, the conditions could be as follows.
Count MeasureValue1 if MeasureValue2 equals 5
Count MeasureValue2 if MeasureValue1 equals 2
Count MeasureValue3 if MeasureValue1 equals 2 and MeasureValue2 equals 5
...
Count MeasureValue40 if MeasureValue1 equals 2 and MeasureValue2 equals 5
In which the final result would be a table with counts per value. Given the example conditions above, that table would look as follows.
Value   1   2   3   4   5   6   7   8   9   10   ...   2749
Count   1   1   0   0   1   0   1   0   0   0    ...   1
To tackle this problem I have written something akin to the code below. It is definitely faster than performing a LINQ GroupBy across every column, and even faster than multithreaded LINQ. It takes in an array of 400,000 times 40 values. You can imagine the 40 values as a single row in a table as described above, of which there are 400,000 rows. Those values can range from 0 to 4,000.
Since both the number of rows and the number of values can change dynamically, a single array was chosen to store everything. The reason jagged arrays are not being used is that they are clunky to work with, on top of my having read that they negatively affect performance.
The code then counts the values present in the array if some combination of the other 39 values meets a specific condition. In the example below the first value is counted if the second value is a 5, the second value is counted if the first value is a 2, and the third value is counted if the first value is a 2 and the second value is a 5. These conditions are only known at runtime. The combinations of conditions, each with their own range of hundreds of possible values, quickly reach an array length too big to store, which means I cannot bake the counts into some array and be done with it.
This piece of code will be called billions of times per day, maybe more, so it is imperative that it is as quick as possible. My current implementation, which resembles the example below (there is an additional optimization which only counts the conditions once), averages 40 milliseconds; I need to reduce this to 4 milliseconds or less by any means possible (except parallelism; obviously throwing more cores at it will make it faster). I briefly looked at SIMD but couldn't figure out how to apply it to this problem. How/where can I find the fastest algorithm to tackle this problem?
void Main()
{
    var values = new ushort[400_000 * 40];
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    var totals = new ushort[4_000];
    for (var i = 0; i < values.Length; i += 40)
    {
        if (values[i + 1] == 5)
        {
            totals[values[i]]++;
        }
        if (values[i] == 2)
        {
            totals[values[i + 1]]++;
        }
        if (values[i] == 2 && values[i + 1] == 5)
        {
            totals[values[i + 2]]++;
        }
        // More ifs which count...
    }
    Console.WriteLine(stopwatch.ElapsedMilliseconds);
}
I have a large set of n integers. I need to choose k elements from that set, working through the possible subsets from the largest sum down to the smallest. Every subset chosen has to be checked against some rule (the particular rule doesn't matter). I want to find the largest-summed subset that is also valid.
For instance, using a small set of numbers (10, 7, 5, 3, 0), a subset size of 3, and a simple rule (sum must be prime), the code would look at:
10, 7, 5 = 22 -> NOT PRIME, KEEP GOING
10, 7, 3 = 20 -> NOT PRIME, KEEP GOING
10, 5, 3 = 18 -> NOT PRIME, KEEP GOING
10, 7, 0 = 17 -> PRIME, STOP
I know I could just put EVERY combination in a list, order it descending, and then work my way down until a sum passes the test, but that seems hugely inefficient in both space and time, especially if I have a set of size like 100 and a subset size of 8. That's like 186 billion combinations that I'd have to calculate.
Is there a way to just do this in a simple loop where I start at the biggest sum check for validity, and then calculate and go to the next largest possible sum and check for validity, etc.? Something like:
// Assuming set is ordered descending, this is the largest possible sum given the subset_size
int sum = set.Take(subset_size).Sum();
while (!IsValid(sum))
{
    sum = NextLargest(set, subset_size, sum);
}

bool IsValid(int sum)
{
    // Placeholder rule; the real rule could be anything (e.g. primality)
    return sum % 2 == 0;
}

int NextLargest(int[] set, int subset_size, int current_sum)
{
    // Find the next largest sum here as efficiently as possible
}
You don't need to look at every combination, only the ones that sum to a larger number.
Iterate over the set in descending order and check the sum. Keep track of the largest valid sum found so far. When a larger sum is impossible, break out of the loop. For example, given subset size 5, you found a valid sum 53. At some point, you are considering a subset that starts with 10. Since the numbers are in descending order, the largest sum you can get at this point is 50. So this path can be abandoned. This should significantly trim down your solution space.
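A branch-and-bound sketch of this idea in C# (names and structure are mine, not from the question); it assumes `set` is sorted in descending order and prunes any branch whose best achievable sum cannot beat the best valid sum found so far:

```csharp
using System;

// Returns the largest sum of a k-element subset passing isValid, or -1 if none does.
// `set` must be sorted descending.
static long BestValidSum(int[] set, int k, Func<long, bool> isValid)
{
    long best = -1; // -1 means "no valid k-subset found yet"

    void Search(int start, int chosen, long sum)
    {
        if (chosen == k)
        {
            if (isValid(sum) && sum > best) best = sum;
            return;
        }
        // Enough items must remain to complete the subset.
        for (int i = start; i <= set.Length - (k - chosen); i++)
        {
            // Optimistic bound: current sum plus the (k - chosen) largest items still available.
            long bound = sum;
            for (int j = 0; j < k - chosen; j++) bound += set[i + j];
            if (bound <= best) break; // items are descending, so later branches are no better
            Search(i + 1, chosen + 1, sum + set[i]);
        }
    }

    Search(0, 0, 0);
    return best;
}
```

For the question's example, the set (10, 7, 5, 3, 0) with k = 3 and "sum must be prime" as the rule, this returns 17.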
I have 9 numbers which I want to divide into two lists, and both lists need to reach a certain amount when summed up. For example, I have a list of ints:
List<int> test = new List<int>
{
1963000, 1963000, 393000, 86000,
393000, 393000, 176000, 420000,
3193000
};
And I want to have two lists of numbers that each sum to over 4 million.
It doesn't matter if the two lists don't contain the same quantity of numbers. If one list reaches 4 million with only 2 numbers and the other reaches 7 million with the remaining 7 numbers, that's fine.
As long as both lists sum to 4 million or higher.
Is this certain sum low enough to be reached easily?
If yes, then your algorithm may be as simple as: iterate i from 1 to the number of items, sum up the first i numbers, and if the sum is higher than your certain sum (e.g. 4 million), you are finished; otherwise increment i.
BUT: if your target sums are high and the partition is not trivial to find, then you have the famous Partition Problem (https://en.wikipedia.org/wiki/Partition_problem). This is not that simple, but there are known algorithms. Read the Wikipedia article or try to google "Partition problem solution" or similar.
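A simple greedy sketch of the easy case (all names are mine): sort the numbers largest first, assign each to whichever list currently has the smaller sum, then check both totals against the target. This does not solve the general Partition Problem, but when the target is comfortably below half the total it often succeeds:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static bool TrySplit(List<int> numbers, long target, out List<int> a, out List<int> b)
{
    a = new List<int>();
    b = new List<int>();
    long sumA = 0, sumB = 0;
    // Largest first; each number goes to the list with the smaller running sum.
    foreach (var n in numbers.OrderByDescending(x => x))
    {
        if (sumA <= sumB) { a.Add(n); sumA += n; }
        else { b.Add(n); sumB += n; }
    }
    return sumA >= target && sumB >= target;
}
```

With the question's nine numbers and a target of 4,000,000 this returns true (the two lists sum to 4,485,000 and 4,495,000). For harder instances you would need a proper Partition-style search.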
Items:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
(items 4, 8, 12, and 16 are highlighted)
In a Repeater control, I want to place a class on the highlighted item numbers,
so I have written the following code:
if ((DL_NewProducts.Items.Count) % 3 == 0)
{
    var libox = e.Item.FindControl("libox") as HtmlGenericControl;
    if (libox != null)
        libox.Attributes["class"] = "last";
}
The problem is that in the first iteration it finds three items, the mod works fine, and it places the class on the 4th item; but in the second iteration it triggers again on the 6th item and places the class on the 7th item, while I want it placed on the 8th. What would be the correct logic for this?
You are looking for (DL_NewProducts.Items.Count % 4) == 0.
The question isn't completely clear - you have marked the sequence 4, 8, 12, ... in bold but appear to actually want the numbers in the sequence 3, 7, 11... to pass the test.
So I think you're looking for the expression:
DL_NewProducts.Items.Count % 4 == 3
But it's hard to tell since it isn't clear if those numbers at the top represent counts, zero-based indices or one-based indices. If you can clarify exactly what they represent and how they relate to the collection's count, we might be able to provide more appropriate answers.