I have a large decimal[] Raw (400+ numbers) where I need to average every 20 numbers and send those averages to a new array or list, RawAvgList,
then average the numbers in RawAvgList to get a final value. In other words, my code should find the average of the first 20 numbers and store it in my new array/list, then the next 20, and so on. It should also account for there being more or fewer than 20 numbers at the end of the large array.
Should my while loop be inside another loop that restarts the counting index?
Should I just be removing every 20 numbers as I go? I know simply using Average() on the decimal[] Raw is an option, but the numbers need to be more exact than that function can give. I have also tried using IndexRange, but when the count isn't divisible by my chunk size (20) it gives an error, which will happen.
I have been stumped for so long that I am at my wits' end and frustrated beyond belief; anything to help.
int unitof = 20;
int count = 0;
List<decimal> RawAvgList = new List<decimal>();
decimal[] Raw = new decimal[] { Decimal.Parse(line.Substring(9), style1) };

for (int i = 0; i < Raw.Length; i++)
{
    while (count < Raw.Length)
    {
        RawAvgList.Add(/* ** average of every 20 numbers goes here ** */);
        count += unitof; // 20-by-20 counter
    }
    // Reset counter or add another counter??
}
Edit (8/22/2022)
I added the IEnumerable<IEnumerable<T>> Chunk extension as suggested, but I believe something else went wrong, or I didn't fully understand how it works, because I have never used chunks before.
I implemented the Chunk extension:
public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> values, int chunkSize)
{
    return values
        .Select((v, i) => new { v, groupIndex = i / chunkSize })
        .GroupBy(x => x.groupIndex)
        .Select(g => g.Select(x => x.v));
}
I added what you suggested:
var rawAvgList = Raw.Chunk(20).Select(chunk => chunk.Average()).ToArray();
var result = rawAvgList.Average();
and then tried printing with Console.WriteLine():
Console.WriteLine($"{result} \t " + LengthRaw++);
which got me an output of:
36.41 0
37.94 1
38.35 2
37.63 3
36.41 4
36.41 5
36.21 6
36.82 7
37.43 8
37.43 9
37.43 10
37.43 11
37.43 12
37.94 13
37.94 14
37.84 15
37.43 16
37.84 17
37.43 18
37.84 19
37.84 20
when the output should be (I am only using 21 numbers at the moment, but it will be more than that later):
37.37 0
37.84 1
You can use Enumerable.Chunk() to split the data into batches of at most 20, and then average all those chunks:
decimal[] raw = new decimal[10000]; // Fill with your data.
var rawAvgList = raw.Chunk(20).Select(chunk => chunk.Average()).ToArray();
var result = rawAvgList.Average();
However, I don't know what you meant by:
"It should also account for there being more or fewer than 20 numbers at the end of the large array."
The last block to be averaged will be less than 20 items long if the input length is not a multiple of 20, but all other blocks will be exactly 20 long.
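For example, with a hypothetical 21-item input (a minimal sketch, not your data), the chunk sizes fall out like this:

decimal[] raw = Enumerable.Range(1, 21).Select(n => (decimal)n).ToArray();

foreach (var chunk in raw.Chunk(20))
    Console.WriteLine($"chunk of {chunk.Count()} items, average = {chunk.Average()}");

// chunk of 20 items, average = 10.5
// chunk of 1 items, average = 21

Note that rawAvgList.Average() then weights that short final chunk exactly as much as each full chunk of 20.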
Related
I need to set up a recursive function in C# to set the sequence number of a list of items, more specifically a BOM (bill of materials). For each BOM level, I need to start the sequence at 10 and increment by 10. How do I keep track of what level I'm at, and which counter to increment? This is driving me nuts.
A short example of the data is below; the real BOMs have thousands of lines and up to 12-15 levels.
Order  Level    Sequence
1      1        10
2        2      10
3          3    10
4          3    20
5        2      20
6          3    10
7            4  10
8          3    20
9            4  10
10           4  20
11       2      30
12         3    10
13     1        20
14     1        30
I indented the levels to make the structure a bit more clear, and pasted the results of your answer into this. As you can see, the new levels are not sequenced properly.
I think in this case we can use the new language feature, local functions, and see if a recursive function is really necessary, as they are generally some of the hardest code to debug and maintain, only to be used sparingly, if at all this year, for any given year :)
[Fact]
public void SequencingRecursiveTest()
{
    // BOM like byte order mark in utf 8 text encoding? odd problem :D
    // Anyway we have a long list of values and want to emit each value with a
    // sequence number, starting from 10 with increment 10.
    // Like in most cases when contemplating recursion, first off: what about not
    // using recursion, to keep the code maintainable and clean?
    // As it turns out, we can.
    // However, super sneakily, we have to reset the sequence counters of all
    // deeper levels whenever an element moves back up the chain, so those levels
    // restart at 10 the next time we descend into them.
    var keyValues = new Dictionary<int, int>();
    var firstValue = 10;
    var increment = 10;
    int lastBom = 0;

    KeyValuePair<int, int> GetValueWithSequenceResetIfLowerThanLast(int bom)
    {
        if (bom < lastBom)
        {
            // Moving back up: forget the counters of every deeper level.
            foreach (int keyBom in keyValues.Keys.Where(k => k > bom).ToList())
                keyValues.Remove(keyBom);
        }

        if (keyValues.ContainsKey(bom))
            keyValues[bom] += increment;
        else
            keyValues.Add(bom, firstValue);

        lastBom = bom;
        return new KeyValuePair<int, int>(bom, keyValues[bom]);
    }

    var valueList = new List<int> { 1, 2, 3, 3, 2, 3, 4, 3, 4, 4, 2, 3, 1, 1 };
    var valueSequenceList = valueList.Aggregate(
        new List<KeyValuePair<int, int>>(),
        (source, item) =>
        {
            source.Add(GetValueWithSequenceResetIfLowerThanLast(item));
            return source;
        }
    );

    foreach (var element in valueSequenceList)
        System.Diagnostics.Debug.WriteLine($"{element.Key}: {element.Value}");
}
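With that reset logic, the Debug output is 1: 10, 2: 10, 3: 10, 3: 20, 2: 20, 3: 10, 4: 10, 3: 20, 4: 10, 4: 20, 2: 30, 3: 10, 1: 20, 1: 30, which matches the Sequence column in the question's table.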
Prime Number Generator Code
I know that this question should be quite basic, but I have spent hours trying to figure out why my code is stuck in the loop below. I added a Console.WriteLine($"{counttemp} , {count1} "); inside the if block to check the two numbers, and it seems it is not breaking out of the if condition when the condition is true.
This is the console output from that WriteLine:
5 , 5
6 , 2
7 , 7
8 , 2
9 , 3
10 , 2
11 , 11
12 , 2
13 , 13
14 , 2
15 , 3
16 , 2
17 , 17
18 , 2
19 , 19
Problematic Loop??
for (count1 = 2; count1 <= counttemp; ++count1)
{
    if (counttemp % count1 == 0)
    {
        Console.WriteLine($"{counttemp} , {count1} ");
        Console.ReadKey();
        primetest1 = 0;
        break;
    }
}
Full code:
static void Main(string[] args)
{
    int prime1 = 10000, count1, primetest1, counttemp;

    for (counttemp = 5; counttemp <= prime1; counttemp++)
    {
        primetest1 = 1;
        for (count1 = 2; count1 <= counttemp; ++count1)
        {
            if (counttemp % count1 == 0)
            {
                Console.WriteLine($"{counttemp} , {count1} ");
                Console.ReadKey();
                primetest1 = 0;
                break;
            }
        }
        if (primetest1 == 1)
        {
            Console.Write($"{counttemp}");
        }
    }
}
You're almost there. The problem is that you're checking whether your candidate number is prime by getting the remainder when divided by each number up to and including the number itself.
I think you'll find that N is a factor of N for all values of N. To fix this, you should only check up to, but excluding, the number.
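Concretely, that is a one-character change to the inner loop's bound:

for (count1 = 2; count1 < counttemp; ++count1) // was <=, which let count1 reach counttemp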
And, as an aside, you don't really need to check all the way up to N - 1. You only need to go to the square root of N, adjusted up to the nearest integer. That's because, if it has a factor above the square root, you would already have found a factor below it.
Consider 24 as an example. It has 6, 8, and 12 as factors above the square root, but the matching values below the square root are 4, 3, and 2 respectively.
And there's another trick you can use, by realising that if a number is a multiple of a non-prime, it's also a multiple of every prime factor of that non-prime. In other words, every multiple of 12 is also a multiple of 2 and 3.
So you only need to check prime numbers up to the square root, to see if there's a factor. And prime numbers, other than two or three, are guaranteed to be of the form 6x-1 or 6x+1, so it's quite easy to filter out a large chunk of candidates very quickly, by checking only for those values.
In other words, check two and three as special cases. Then start at 5 and alternately add 2 and 4: 5, 7, 11, 13, 17, 19, .... Not every number in that set is prime (e.g., 25), but every prime is guaranteed to be in that set.
You can check out an earlier answer of mine for more detail on why this is so, and how to do this sequence efficiently.
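Putting all of that together, a minimal sketch might look like this (IsPrime is a hypothetical helper, one way of writing it rather than a fix-up of your exact code):

static bool IsPrime(int n)
{
    if (n < 2) return false;
    if (n == 2 || n == 3) return true;          // special cases
    if (n % 2 == 0 || n % 3 == 0) return false; // strip multiples of 2 and 3
    // Only test candidates of the form 6x - 1 / 6x + 1, up to sqrt(n).
    for (int f = 5; (long)f * f <= n; f += 6)
    {
        if (n % f == 0 || n % (f + 2) == 0)
            return false;
    }
    return true;
}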
I have an array of integers. The value of each element represents the time taken to process a file. The processing of files consists of merging two files at a time. What is the algorithm to find the minimum time that can be taken to process all the files? E.g. {3,5,9,12,14,18}.
The time of processing can be calculated as follows:
Case 1) -
a) [8],9,12,14,18
b) [17],12,14,18
c) [26],17,18
d) 26,[35]
e) 61
So total time for processing is 61 + 35 + 26 + 17 + 8 = 147
Case 2) -
a) [21],5,9,12,14
b) [17],[21],9,14
c) [21],[17],[23]
d) [40],[21]
e) 61
This time the total time is 61 + 40 + 23 + 17 + 21 = 162
Seems to me that continuously sorting the array and adding the two smallest elements is the best bet for the minimum, as in Case 1. Is my logic right? If not, what is the right and easiest way to achieve this with the best performance?
Once you have the sorted list, since you are only removing the two minimum items and replacing them with one, it makes more sense to do a sorted insert and place the new item in the correct place instead of re-sorting the entire list. However, this only saves a fractional amount of time - about 1% faster.
My method CostOfMerge doesn't assume the input is a List, but if it is, you can remove the ToList conversion step.
public static class IEnumerableExt {
    public static long CostOfMerge(this IEnumerable<int> psrc) {
        long total = 0;
        var src = psrc.ToList();
        src.Sort();
        while (src.Count > 1) {
            // Merge the two smallest items and accumulate the cost.
            var sum = src[0] + src[1];
            src.RemoveRange(0, 2);
            // Sorted insert keeps the list ordered without a full re-sort.
            var index = src.BinarySearch(sum);
            if (index < 0)
                index = ~index;
            src.Insert(index, sum);
            total += sum;
        }
        return total;
    }
}
As already discussed in other answers, the best strategy will be to always work on the two items with minimal cost for each iteration. So the only remaining question is how to efficiently take the two smallest items each time.
Since you asked for best performance, I shamelessly took the algorithm from NetMage and modified it to speed it up roughly 40% for my test case (thanks and +1 to NetMage).
The idea is to work mostly in place on a single array.
Each iteration increases the starting index by 1 and moves the elements within the array to make space for the sum from the current iteration.
public static long CostOfMerge2(this IEnumerable<int> psrc)
{
    long total = 0;
    var src = psrc.ToArray();
    Array.Sort(src);

    var i = 1;
    int length = src.Length;
    while (i < length)
    {
        var sum = src[i - 1] + src[i];
        total += sum;

        // find insert position for sum
        var index = Array.BinarySearch(src, i + 1, length - i - 1, sum);
        if (index < 0)
            index = ~index;
        --index;

        // shift items that come before insert position one place to the left
        if (i < index)
            Array.Copy(src, i + 1, src, i, index - i);
        src[index] = sum;

        ++i;
    }
    return total;
}
I tested with the following calling code (switching between CostOfMerge and CostOfMerge2), with a few different values for the random seed, the element count, and the max value of the initial items.
static void Main(string[] args)
{
    var r = new Random(10);
    var testcase = Enumerable.Range(0, 400000).Select(x => r.Next(1000)).ToList();

    var sw = Stopwatch.StartNew();
    long resultCost = testcase.CostOfMerge();
    sw.Stop();

    Console.WriteLine($"Cost of Merge: {resultCost}");
    Console.WriteLine($"Time of Merge: {sw.Elapsed}");
    Console.ReadLine();
}
Result for the shown configuration with NetMage's CostOfMerge:
Cost of Merge: 3670570720
Time of Merge: 00:00:15.4472251
My CostOfMerge2:
Cost of Merge: 3670570720
Time of Merge: 00:00:08.7193612
Of course, the detailed numbers are hardware-dependent, and the difference might be bigger or smaller depending on a load of factors.
No, that's the minimum for a polyphase merge: where N is the bandwidth (the number of files you can merge simultaneously), you want to merge the smallest (N-1) files at each step. However, with this more general problem, you want to delay the larger files as long as possible; you may want an early step or two that merges fewer than (N-1) files, somewhat like having a "bye" in an elimination tourney. You want all the later steps to involve the full (N-1) files.
For instance, given N=4 and files 1, 6, 7, 8, 14, 22:
Early merge:
[22], 14, 22
[58]
total = 80
Late merge:
[14], 8, 14, 22
[58]
total = 72
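One way to get that early "bye" automatically is the standard k-ary Huffman trick (a sketch, assuming .NET 6's PriorityQueue; this generalizes the two-at-a-time code above rather than reproducing the answer's method): pad the input with zero-size dummy files until (count - 1) is a multiple of (N - 1), then repeatedly merge the N smallest. The dummies make only the first merge involve fewer real files.

static long CostOfKWayMerge(IEnumerable<int> sizes, int n)
{
    var heap = new PriorityQueue<long, long>();
    foreach (long s in sizes)
        heap.Enqueue(s, s);
    // Pad with zero-size dummies so only the first merge is short.
    while (heap.Count > 1 && (heap.Count - 1) % (n - 1) != 0)
        heap.Enqueue(0, 0);
    long total = 0;
    while (heap.Count > 1)
    {
        long sum = 0;
        for (int i = 0; i < n && heap.Count > 0; i++)
            sum += heap.Dequeue();
        total += sum; // cost of this merge step
        heap.Enqueue(sum, sum);
    }
    return total;
}

For the example above (N = 4 and files 1, 6, 7, 8, 14, 22), this returns 72, the same as the late merge.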
Here, you can apply the following logic to get the desired output.
Get the first two minimum values from the list.
Remove the first two minimum values from the list.
Append the sum of those two minimum values to the list.
Continue until the list becomes of size 1.
Return the only element in the list; this will be your minimum time taken to process every item.
You can follow my Java code below, if you find it helpful :)
import java.util.ArrayList;
import java.util.Arrays;

public class MinimumSums {

    private static Integer getFirstMinimum(ArrayList<Integer> list) {
        Integer min = Integer.MAX_VALUE;
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i) <= min)
                min = list.get(i);
        }
        return min;
    }

    private static Integer getSecondMinimum(ArrayList<Integer> list, Integer firstItem) {
        // Skip one occurrence of firstItem so duplicates (two equal minimums)
        // are handled correctly.
        Integer min = Integer.MAX_VALUE;
        boolean skippedFirst = false;
        for (int i = 0; i < list.size(); i++) {
            if (!skippedFirst && list.get(i).equals(firstItem)) {
                skippedFirst = true;
                continue;
            }
            if (list.get(i) <= min)
                min = list.get(i);
        }
        return min;
    }

    public static void main(String[] args) {
        Integer[] processes = {5, 9, 3, 14, 12, 18};
        ArrayList<Integer> list = new ArrayList<Integer>();
        ArrayList<Integer> temp = new ArrayList<Integer>();
        list.addAll(Arrays.asList(processes));

        while (list.size() != 1) {
            Integer firstMin = getFirstMinimum(list);             // getting first min value
            Integer secondMin = getSecondMinimum(list, firstMin); // getting second min
            list.remove(firstMin);   // removes by value, not by index
            list.remove(secondMin);
            list.add(firstMin + secondMin);
            temp.add(firstMin + secondMin);
        }

        System.out.println(temp);        // prints all the minimum pairs
        System.out.println(list.get(0)); // prints the output
    }
}
I have a list of strings in C#, such as:
List<string> myList;
Let's say I populate it by adding 20 strings, starting from "1" up to "20":
myList.Add("1"); // And so on...
How can I, in the most efficient and elegant way, randomly shuffle this list of strings while restricting how far each item can end up from its original index to, say, 4?
Example of what I want:
I want the order:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
For example, to be shuffled to the following:
2 5 4 3 6 1 8 7 10 9 12 11 15 14 13 16 18 17 20 19
One (straightforward) way of doing it would be to split the list into four parts and shuffle the parts separately. But I am still not sure how efficient this would be.
Efficiency in my case means not doing something that is overcomplicated or just stupid.
The following LINQ will create a new sort key, where the key is limited to distance places away from the original index:
List<string> list = Enumerable.Range(1, 20).Select(i => i.ToString()).ToList();
Random rng = new Random();
int distance = 4;

List<string> newList = list
    .Select((s, i) => new { OrigIndex = i, NewIndex = i + rng.Next(-distance, distance + 1), Val = s })
    .OrderBy(a => a.NewIndex)
    .ThenBy(a => a.OrigIndex)
    .Select(a => a.Val)
    .ToList();
You can use the following code:
List<int> myList = Enumerable.Range(1, 20).ToList(); // Feed the list from 1 to 20
int numberByGroups = 4;

List<int> result = new List<int>();
for (int skip = 0; skip < myList.Count; skip = skip + numberByGroups)
{
    result.AddRange(myList.Skip(skip)                 // skip the already used numbers
                          .Take(numberByGroups)       // create a group
                          .OrderBy(a => Guid.NewGuid()) // "Shuffle"
                          .ToList());
}
Console.WriteLine(String.Join(", ", result));
This will shuffle the list in groups of numberByGroups.
I have a site where users can post and vote on suggestions. On the front page I initially list 10 suggestions, and the header fetches a new random suggestion every 7 seconds.
I want the votes to influence the probability a suggestion will show up, both on the 10-suggestion list and in the header-suggestion. To that end I have a small algorithm to calculate popularity, taking into account votes, age and a couple other things (needs lots of tweaking).
Anyway, after running the algorithm I have a dictionary of suggestions and popularity index, sorted by popularity:
{ S = Suggestion1, P = 0.86 }
{ S = Suggestion2, P = 0.643 }
{ S = Suggestion3, P = 0.134 }
{ S = Suggestion4, P = 0.07 }
{ S = Suggestion5, P = 0.0 }
{ . . .}
I don't want this to be a glorified sort, so I'd like to introduce some random element to the selection process.
In short, I'd like the popularity to be the probability a suggestion gets picked out of the list.
Having a full list of suggestion/popularity, how do I go about picking 10 out based on probabilities? How can I apply the same to the looping header suggestion?
I'm afraid I don't know how to do this very fast, but if you have the collection in memory you can do it like this:
Note that you do not need to sort the list for this algorithm to work.
First sum up all the probabilities (if the probability is linked to popularity, just sum the popularity numbers; I assume here that higher values mean higher probability)
Calculate a random number in the range of 0 up to but not including that sum
Start at one end of the list and iterate through it
For each element, if the random number you generated is less than the popularity, pick that element
If not, subtract the popularity of the element from the random number, and continue to the next
If the list is static, you could build ranges and do some binary searches, but if the list keeps changing, then I don't know a better way.
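A direct implementation of those steps might look like this (a sketch; PickWeighted and the weight delegate are hypothetical names, not from the question):

static T PickWeighted<T>(IReadOnlyList<T> items, Func<T, double> weight, Random rng)
{
    // 1. Sum all the weights (here, the popularity numbers).
    double sum = 0;
    foreach (var item in items)
        sum += weight(item);
    // 2. Roll a random number in the range [0, sum).
    double roll = rng.NextDouble() * sum;
    // 3-5. Walk the list, subtracting each weight until the roll lands inside an item.
    foreach (var item in items)
    {
        double w = weight(item);
        if (roll < w)
            return item;
        roll -= w;
    }
    // Floating-point rounding can leave the roll just past the end.
    return items[items.Count - 1];
}

For the 10-suggestion list you could call this repeatedly, removing each winner before the next draw so the same suggestion is not picked twice; the header suggestion is just a single draw every 7 seconds.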
Here is a sample LINQPad program that demonstrates this:
void Main()
{
    var list = Enumerable.Range(1, 9)
        .Select(i => new { V = i, P = i })
        .ToArray();
    list.Dump("list");

    var sum =
        (from element in list
         select element.P).Sum();

    Dictionary<int, int> selected = new Dictionary<int, int>();
    foreach (var value in Enumerable.Range(0, sum))
    {
        var temp = value;
        var v = 0;
        foreach (var element in list)
        {
            if (temp < element.P)
            {
                v = element.V;
                break;
            }
            temp -= element.P;
        }
        Debug.Assert(v > 0);
        if (!selected.ContainsKey(v))
            selected[v] = 1;
        else
            selected[v] += 1;
    }
    selected.Dump("how many times was each value selected?");
}
Output:
list
[] (9 items)
V P
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
45 45 <-- sum
how many times was each value selected?
Dictionary<Int32,Int32> (9 items)
Key Value
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
45 <-- again, sum