"less than(<)" doesn't work like it's supposed to - c#

I'm writing a piece of code that takes the Fibonacci sequence below 4,000,000 and sums up the even numbers. To do this I made a simple piece of code that should work, but the C variable goes over 4,000,000 when it shouldn't (it ends on the number 5,702,887), as you can see here:
int amount = 4000000;
int A = 1;
int B = 2;
int C = 0;
int answer = 0;
while (C < amount)
{
    C = A + B;
    if (C % 2 == 0)
    {
        answer = answer + C;
    }
    A = B;
    B = C;
}

You're modifying C after you check its value. The operator is working as expected.
Your while condition is evaluated when C = 3524578; then you advance C to 5702887, well past the limit, use it, and only then check again.
Remember that while loops only exit when the condition is actually evaluated and found to be false.
You should probably adjust the order of your operations. For instance,
int amount = 4000000;
int A = 1;
int B = 2;
int C = 3; // I've changed this to give an appropriate start value
int answer = 0;
while (C < amount)
{
    if (C % 2 == 0)
    {
        answer = answer + C;
    }
    A = B;
    B = C;
    C = A + B; // I've moved this so that nothing uses C between setting it and checking it.
}
You could also use a for loop, which expresses this in a slightly more idiomatic way.
int amount = 4000000;
int A = 1;
int B = 2;
int answer = 0;
for (int C = 3; C < amount; C = A + B)
{
    if (C % 2 == 0)
    {
        answer = answer + C;
    }
    A = B;
    B = C;
}
The difference here is that the predicate expression is evaluated every time C is set.
I'm in an analogy mood, and I don't think I really explained the actual bug here as much as I just gave the proper code to resolve the issue (teach a man to fish, as they say), so here's a real world example of a flaw in logic like this.
Let's say you're eating orange slices and you absolutely hate seeds and want nothing to do with any oranges that contain them. Regularly, you'd pick one up, check it for seeds, and eat it if it's clear. If you come across one with seeds in it, then gross, and throw away all your remaining orange slices. In pseudo-code, while (the next one doesn't have any seeds) { eat it and grab another. }. Simple enough, right?
However, the way you've written your while loop here, you'd be eating a slice, finding seeds in your mouth, then throwing them away. And as you can see, these are very different situations.
This latter one houses folds of regret because you checked the slice (the variable) after eating (using) it, rather than before. You know not to eat any more, sure, but you've already eaten a seed. You would have been much better off to check it before you ate it, since then you'd have known it was contaminated before it came anywhere near your mouth.

I suspect that you have a false belief common to beginners. The while statement does not terminate the loop immediately the moment that the condition is violated.
The correct way to think about a while loop is that
while(condition)
statement
is logically the same thing as:
continue_label:
if (!condition) goto break_label;
statement
goto continue_label;
break_label:
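To make that equivalence concrete, here is the loop from the question written out in the expanded form. This is a sketch of my own for illustration; C# does support labels and goto, so it compiles and runs as-is.
int amount = 4000000;
int A = 1, B = 2, C = 0, answer = 0;

continue_label:
if (!(C < amount)) goto break_label;  // the only place the condition is ever tested
C = A + B;                            // C can move past 'amount' right here...
if (C % 2 == 0)
{
    answer = answer + C;
}
A = B;
B = C;
goto continue_label;                  // ...and is not tested again until control jumps back up
break_label:
Console.WriteLine(C);                 // prints 5702887, the value reported in the question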

The condition of a while loop is evaluated only on entry to each iteration. If you look at your code, C is assigned after this check. Try printing C just before your C = ... assignment - you will see that it does not go beyond the limit.
The next assignment pushes it past the limit, but the loop body is not entered again after that.

When C is checked it's still below 4,000,000, but then inside the iteration it gets raised above the limit (C = A + B), and on the next check it is above that value and the loop exits. Instead, try this "very dirty" implementation:
int amount = 4000000;
int A = 1;
int B = 2;
int C = 0;
int answer = 0;
for (int i = 0; i < amount; i = A + B)
{
    C = A + B;
    if (C % 2 == 0)
    {
        answer = answer + C;
    }
    A = B;
    B = C;
}


C# - How to place a given number of random mines in an array

I'm new to c#. I have a task to make a type of minesweeper, but which immediately opens a solution.
static void Main(string[] args)
{
    Console.Write("Enter the width of the field: ");
    int q = Convert.ToInt32(Console.ReadLine());
    Console.Write("Enter the length of the field: ");
    int w = Convert.ToInt32(Console.ReadLine());
    Console.Write("Enter the number of bombs: ");
    int c = Convert.ToInt32(Console.ReadLine());
    Random rand = new Random();
    var charArray = new char[q, w];
    var intArray = new int[q, w];
    for (int i = 0; i < q; i++)
    {
        for (int j = 0; j < w; j++)
        {
            intArray[i, j] = rand.Next(2);
            charArray[i, j] = intArray[i, j] == 0 ? '_' : '*';
            Console.Write(charArray[i, j]);
        }
        Console.WriteLine();
    }
}
Two arrays should be output. On the first one everything should be closed, that is, it should contain only the characters _ and *.
0 means a place without a mine; I replaced those with the _ symbol.
1 means a place with a mine; I replaced those with an asterisk, but the code does not respect the number of mines entered by the user, and there should be exactly as many * characters as there are mines.
The second array should show the already-opened solution of the game. That is, each cell next to a mine should contain the number of mines adjacent to it.
Please help me..
Building on the current code, you can place exactly the requested number of mines like this:
Random random = new Random();
while (c > 0)
{
    var rq = random.Next(q);
    var rw = random.Next(w);
    if (intArray[rq, rw] == 0)
    {
        intArray[rq, rw] = 1;
        c--;
    }
}
I would suggest dividing the problem into smaller, manageable chunks. For instance, you can place the bombs in an initial step, and build the solution in a second step. You can build the solution at the same time you place the bombs, although for clarity you can do it afterwards.
Naming of variables is also important. If you prefer single-letter variable names, I believe that's fine for a problem this size, but I would use meaningful letters that are easier to remember, e.g. W and H for the width and height of the board, and B for the total number of bombs.
The first part of the problem then can be described as placing B bombs in a WxH board. So instead of having nested for statements that enumerate WxH times, it's better to have a while loop that repeats the bomb placing logic as long as you have remaining bombs.
Once you generate a new random location on the board, you have to check you haven't placed a bomb there already. You can have an auxiliary function HasBomb that checks that:
bool HasBomb(char[,] charArray, int x, int y)
{
    return charArray[x, y] == '*';
}
I'll leave error checking out; since this function is private, it can rely on the caller sending valid coordinates.
Then the bomb placing procedure can be something like:
int remainingBombs = B;
while (remainingBombs > 0)
{
    int x = rand.Next(W);
    int y = rand.Next(H);
    if (!HasBomb(charArray, x, y))
    {
        charArray[x, y] = '*';
        remainingBombs--;
    }
}
At this point you may notice another concern: if the number of bombs B is larger than the number of available positions on the board (W x H), you won't be able to place all the bombs. You'll have to check for that restriction when requesting the values for W, H and B.
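For example, a minimal validation sketch (the re-prompt wording is mine, and W and H are assumed to have been read already):
// Sketch: keep asking until the requested number of bombs fits on the W x H board.
int B;
do
{
    Console.Write("Enter the number of bombs: ");
    B = Convert.ToInt32(Console.ReadLine());
    if (B < 0 || B > W * H)
        Console.WriteLine("The board only has " + (W * H) + " cells - please enter a number between 0 and " + (W * H) + ".");
} while (B < 0 || B > W * H);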
Then in order to create the array with the number of bombs next to each position, you'll need some way to check for all the neighbouring positions to a given one. If the position is in the middle of the board it has 8 neighbour positions, if it's on an edge it has 5, and if it's on a corner it has 3. Having a helper function return all the valid neighbour positions can be handy.
IEnumerable<(int X, int Y)> NeighbourPositions(int x, int y, int W, int H)
{
    bool leftEdge = x == 0;
    bool topEdge = y == 0;
    bool rightEdge = x == W - 1;
    bool bottomEdge = y == H - 1;

    if (!leftEdge && !topEdge)
        yield return (x - 1, y - 1);
    if (!topEdge)
        yield return (x, y - 1);
    if (!rightEdge && !topEdge)
        yield return (x + 1, y - 1);
    if (!leftEdge)
        yield return (x - 1, y);
    if (!rightEdge)
        yield return (x + 1, y);
    if (!leftEdge && !bottomEdge)
        yield return (x - 1, y + 1);
    if (!bottomEdge)
        yield return (x, y + 1);
    if (!rightEdge && !bottomEdge)
        yield return (x + 1, y + 1);
}
This function uses iterators and tuples. If you feel those concepts are too complex, since you said you are new to C#, you can make the function return a list of coordinates instead.
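For example, an iterator-free variant could look like this (a sketch; the name NeighbourPositionsList and the int[] { x, y } representation are my own choices):
// Sketch: same neighbour enumeration, collected into a List of int[] { x, y } pairs.
List<int[]> NeighbourPositionsList(int x, int y, int W, int H)
{
    var result = new List<int[]>();
    for (int dx = -1; dx <= 1; dx++)
    {
        for (int dy = -1; dy <= 1; dy++)
        {
            if (dx == 0 && dy == 0) continue;           // skip the cell itself
            int nx = x + dx;
            int ny = y + dy;
            if (nx >= 0 && nx < W && ny >= 0 && ny < H) // stay inside the board
                result.Add(new[] { nx, ny });
        }
    }
    return result;
}
The counting loop below then uses n[0] and n[1] instead of n.X and n.Y.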
Now the only thing left is to iterate over the whole intArray and increment the value on each position for each neighbour bomb you find.
for (int x = 0; x < W; x++)
{
    for (int y = 0; y < H; y++)
    {
        foreach (var n in NeighbourPositions(x, y, W, H))
        {
            if (HasBomb(charArray, n.X, n.Y))
                intArray[x, y]++;
        }
    }
}
The answers here mostly generate a random x and a random y in a loop and try to put each mine into an empty cell. That works, but if you think about it, it is not that efficient: every time you pick a new random cell, there is a chance it is already a mine. That is fine if you don't put too many mines into the field, but with a larger number of mines the collision happens quite often, and the loop can take noticeably longer. In the extreme, if you wanted to put 999 mines into a 1000-cell field, the loop would struggle to fill all the necessary cells, especially for the last mine. I'm not saying the other solutions are bad; they are perfectly fine for many people. But for anyone who wants a somewhat more efficient solution, I have tried to create one.
Solution
In this solution, you iterate over each cell and calculate the probability that a mine should be placed there. I came up with a simple formula that calculates this probability.
Every time you try to place a mine into a cell, you evaluate the formula and compare the result to a randomly generated number.
bool isMine = random.NextDouble() < calcProbability();
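The formula itself isn't reproduced above, so as an illustration only, here is the idea with one common choice of probability - remaining mines divided by remaining cells - which ends up placing exactly the requested number of mines in a single pass. Treat the formula as my assumption, not necessarily the author's exact one.
// Sketch of the single-pass placement, using p = remaining mines / remaining cells.
Random random = new Random();
int minesLeft = c;          // c = number of bombs requested, as in the question
int cellsLeft = q * w;      // q, w = field dimensions, as in the question
for (int i = 0; i < q; i++)
{
    for (int j = 0; j < w; j++)
    {
        double probability = (double)minesLeft / cellsLeft;
        bool isMine = random.NextDouble() < probability;
        if (isMine)
        {
            intArray[i, j] = 1;
            minesLeft--;
        }
        cellsLeft--;
    }
}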

C# Trial division - first n primes. Error in logic?

In a course, a problem was to list the first n primes. Apparently we should implement trial division while saving primes in an array to reduce the number of divisions required. Initially I misunderstood and got a working, if slower, solution using a separate function to test for primality, but I would now like to implement it the way I should have done.
Below is my attempt, with irrelevant code removed, such as the input test.
using System;
namespace PrimeNumbers
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            Console.Write("How many primes?\n");
            string s = Console.ReadLine();
            uint N;
            UInt32.TryParse(s, out N);
            uint[] PrimeTable = new uint[N];
            PrimeTable[0] = 2;
            for (uint i = 1; i < N; i++)//loop n spaces in array, [0] set already so i starts from 1
            {
                uint j = PrimeTable[i - 1] + 1;//sets j bigger than biggest prime so far
                bool isPrime = false;// Just a condition to allow the loop to break???(Is that right?)
                while (!isPrime)//so loop continues until a break is hit
                {
                    isPrime = true;//to ensure that the loop executes
                    for (uint k = 0; k < i; k++)//want to divide by first i primes
                    {
                        if (PrimeTable[k] == 0) break;//try to avoid divide by zero - unnecessary
                        if (j % PrimeTable[k] == 0)//zero remainder means not prime so break and increment j
                        {
                            isPrime = false;
                            break;
                        }
                    }
                    j++;//j increment mentioned above
                }
                PrimeTable[i] = j; //not different if this is enclosed in brace above
            }
            for (uint i = 0; i < N; i++)
                Console.Write(PrimeTable[i] + " ");
            Console.ReadLine();
        }
    }
}
My comments are my attempt to describe what I think the code is doing. I have tried very many small changes; often they led to divide-by-zero errors when running, so I added in a test, but I don't think it should be necessary. (I also got several out-of-range errors when trying to change the loop conditions.)
I have looked at several questions on stack exchange, in particular:
Program to find prime numbers
The first answer uses a different method, the second is close to what I want, but the exact thing is in this comment from Nick Larsson:
You could make this faster by keeping track of the primes and only trying to divide by those.
C# is not shown on here: http://rosettacode.org/wiki/Sequence_of_primes_by_Trial_Division#Python
I have seen plenty of other methods and algorithms, such as the sieve of Eratosthenes and GNF, but I really only want to implement it this way, as I think my problem is with the program logic and I don't understand why it doesn't work. Thanks.
The following should solve your problem:
for (uint i = 1; i < numberOfPrimes; i++)//loop n spaces in array, [0] set already so i starts from 1
{
    uint j = PrimeTable[i - 1] + 1;//sets j bigger than biggest prime so far
    bool isPrime = false;// Just a condition to allow the loop to break???(Is that right?)
    while (!isPrime)//so loop continues until a break is hit
    {
        isPrime = true;//to ensure that the loop executes
        for (uint k = 0; k < i; k++)//want to divide by first i primes
        {
            if (PrimeTable[k] == 0) break;//try to avoid divide by zero - unnecessary
            if (j % PrimeTable[k] == 0)//zero remainder means not prime so break and increment j
            {
                isPrime = false;
                j++;
                break;
            }
        }
    }
    PrimeTable[i] = j;
}
The major change I made was to move the incrementing of the variable j inside the conditional prime check. This is because the current value is not prime, so we must move on to the next candidate before breaking out of the loop.
Your code was incrementing after the check was made, which means that when you found a prime, you would still increment to the next candidate and assign that as your prime. For example, when j = 3 it passes the check and isPrime stays true, but then j++ increments it to 4, and 4 is what gets added to PrimeTable.
Make sense?
This might not be a very good answer to your question, but you might want to look at this implementation and see if you can spot where yours differs.
int primesCount = 10;
List<uint> primes = new List<uint>() { 2u };
for (uint n = 3u; ; n += 2u)
{
    if (primes.TakeWhile(u => u * u <= n).All(u => n % u != 0))
    {
        primes.Add(n);
    }
    if (primes.Count() >= primesCount)
    {
        break;
    }
}
This correctly and efficiently computes the first primesCount primes.
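As a quick sanity check (just a usage sketch), printing the list for primesCount = 10 should give the familiar first ten primes:
Console.WriteLine(string.Join(" ", primes));
// expected output: 2 3 5 7 11 13 17 19 23 29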

Enumerable.Range(...).Any(...) outperforms a basic loop: Why?

I was adapting a simple prime-number generation one-liner from Scala to C# (mentioned in a comment on this blog by its author). I came up with the following:
int NextPrime(int from)
{
    int n = from; // n was undeclared in the original snippet; presumably it starts from the argument
    while (true)
    {
        n++;
        if (!Enumerable.Range(2, (int)Math.Sqrt(n) - 1).Any((i) => n % i == 0))
            return n;
    }
}
It works, returning the same results I'd get from running the code referenced in the blog. In fact, it works fairly quickly. In LinqPad, it generated the 100,000th prime in about 1 second. Out of curiosity, I rewrote it without Enumerable.Range() and Any():
int NextPrimeB(int from)
{
    int n = from; // same assumption as above
    while (true)
    {
        n++;
        bool hasFactor = false;
        for (int i = 2; i <= (int)Math.Sqrt(n); i++)
        {
            if (n % i == 0) hasFactor = true;
        }
        if (!hasFactor) return n;
    }
}
Intuitively, I'd expect them to either run at the same speed, or even for the latter to run a little faster. In actuality, computing the same value (the 100,000th prime) with the second method takes 12 seconds - a staggering difference.
So what's going on here? There must be something fundamentally extra happening in the second approach that's eating up CPU cycles, or some optimization going on in the background of the LINQ example. Anybody know why?
For every iteration of the for loop, you are finding the square root of n. Cache it instead.
int root = (int)Math.Sqrt(n);
for (int i = 2; i <= root; i++)
And as other have mentioned, break the for loop as soon as you find a factor.
The LINQ version short-circuits; your loop does not. By this I mean that as soon as it has determined that a particular integer is in fact a factor, the LINQ code stops and moves on. Your code keeps looping until it's done.
If you change the for loop to include that short circuit, you should see similar performance:
int NextPrimeB(int from)
{
    int n = from; // same assumption as above
    while (true)
    {
        n++;
        bool hasFactor = false;
        for (int i = 2; i <= (int)Math.Sqrt(n); i++)
        {
            if (n % i == 0) { hasFactor = true; break; } // stop at the first factor
        }
        if (!hasFactor) return n;
    }
}
It looks like this is the culprit:
for (int i = 2; i <= (int)Math.Sqrt(n); i++)
{
    if (n % i == 0) hasFactor = true;
}
You should exit the loop once you find a factor:
if (n % i == 0)
{
    hasFactor = true;
    break;
}
And as others have pointed out, move the Math.Sqrt call outside the loop to avoid calling it every cycle.
Enumerable.Any takes an early out if the condition is successful while your loop does not.
The enumeration of source is stopped as soon as the result can be determined.
This is an example of a bad benchmark. Try modifying your loop so that it breaks out as soon as a factor is found, and see the difference:
if (n % i == 0) { hasFactor = true; break; }
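If you want to see the short-circuiting directly, here is a small sketch of my own that counts how many candidates Any() actually examines:
int n = 1000000;            // even, so 2 is a factor
int tested = 0;
bool hasFactor = Enumerable.Range(2, (int)Math.Sqrt(n) - 1)
    .Any(i => { tested++; return n % i == 0; });
Console.WriteLine("{0} candidate(s) tested, hasFactor = {1}", tested, hasFactor);
// prints "1 candidate(s) tested, hasFactor = True" - Any() stops at the first hit,
// whereas the original for loop keeps dividing all the way up to Sqrt(n).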
In the name of optimization, you can be a little more clever about this by avoiding even numbers after 2:
if (n % 2 != 0)
{
    int quux = (int)Math.Sqrt(n);
    for (int i = 3; i <= quux; i += 2)
    {
        if (n % i == 0) return n;
    }
}
There are some other ways to optimize prime searches, but this is one of the easier to do and has a large payoff.
Edit: you may want to consider using (int)Math.Sqrt(n) + 1. FP functions + round-down could potentially cause you to miss a square of a large prime number.
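Putting those last two points together (handle 2 separately, cache a slightly padded square root, and test only odd divisors), a primality check might look like this. This is a sketch, and the name IsPrime is mine, not from the original code:
// Sketch combining the suggestions above.
static bool IsPrime(int n)
{
    if (n < 2) return false;
    if (n % 2 == 0) return n == 2;          // even numbers: only 2 is prime
    int limit = (int)Math.Sqrt(n) + 1;      // +1 guards against FP round-down
    for (int i = 3; i <= limit; i += 2)     // odd divisors only
    {
        if (n % i == 0) return false;
    }
    return true;
}
NextPrime then just increments n until IsPrime(n) returns true.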
At least part of the problem is the number of times Math.Sqrt is executed. In the LINQ query it is executed once per candidate, but in the loop example it's executed on every iteration of the inner loop. Try pulling it out into a local and re-profiling the application; that will give you a more representative breakdown:
int limit = (int)Math.Sqrt(n);
for (int i = 2; i <= limit; i++)

Is Multiplication Faster Than Comparison in .NET?

Good morning, afternoon or night,
Up until today, I thought comparison was one of the basic processor instructions, and therefore one of the fastest operations one can do on a computer... On the other hand, I know multiplication is sometimes trickier and involves lots of bit operations. However, I was a little shocked to look at the results of the following code:
Stopwatch Test = new Stopwatch();
int a = 0;
int i = 0, j = 0, l = 0;
double c = 0, d = 0;
for (i = 0; i < 32; i++)
{
    Test.Start();
    for (j = Int32.MaxValue, l = 1; j != 0; j = -j + ((j < 0) ? -1 : 1), l = -l)
    {
        a = l * j;
    }
    Test.Stop();
    Console.WriteLine("Product: {0}", Test.Elapsed.TotalMilliseconds);
    c += Test.Elapsed.TotalMilliseconds;
    Test.Reset();
    Test.Start();
    for (j = Int32.MaxValue, l = 1; j != 0; j = -j + ((j < 0) ? -1 : 1), l = -l)
    {
        a = (j < 0) ? -j : j;
    }
    Test.Stop();
    Console.WriteLine("Comparison: {0}", Test.Elapsed.TotalMilliseconds);
    d += Test.Elapsed.TotalMilliseconds;
    Test.Reset();
}
Console.WriteLine("Product: {0}", c / 32);
Console.WriteLine("Comparison: {0}", d / 32);
Console.ReadKey();
Result:
Product: 8558.6
Comparison: 9799.7
Quick explanation: j is an auxiliary alternating variable which goes like (...), 11, -10, 9, -8, 7, (...) until it reaches zero; l is a variable which stores j's sign; and a is the test variable, which I always want to equal the absolute value of j. The goal of the test was to check whether it is faster to set a to this value using a multiplication or using the conditional operator.
Can anyone please comment on these results?
Thank you very much.
Your second test isn't a mere comparison; it's effectively an if statement.
That's probably translated into a jump/branch instruction on the CPU, which involves branch prediction (with possible pipeline stalls), and is therefore likely to be slower than a simple multiplication (even if not by much).
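As a side note, if the goal is just |j| without a branch, there are purely arithmetic ways to get it too; a sketch of my own using the usual two's-complement trick (which overflows on int.MinValue, just like Math.Abs):
int mask = j >> 31;          // 0 when j >= 0, -1 (all ones) when j < 0
a = (j ^ mask) - mask;       // leaves j unchanged, or flips the bits and adds one, i.e. -j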
It can often be very difficult to make such assertions about an optimising compiler. They do lots of tricks that make simple cases different from real code. That said, you aren't just doing a comparison, you're doing a compare/assign in a very tight loop. The thread you're working on may have to pause many times at the branch; the multiplication can assign as many times as it likes, as long as the last assignment is still last, so many multiplies can be going on at once.
As is the general rule, make your code clear and ignore minor timing issues unless they become a problem.
If you do have a speed problem a good tracing/timing tool will guide you much better than knowing if one operation is faster than other in a specific case.
I guess one comment I would make is that you are doing a lot more in the second operation:
a = (j < 0) ? -j : j;
Not only are you doing a comparison, but also effectively an "if..else.." with the ? operator, plus a negation of j.
You should try to run this test a thousand or so times and compare the averages - you never know what the CLR is doing in the background.
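For example, a rough harness for that (a sketch; RunProductTest is a hypothetical method wrapping one timed body, and a dedicated benchmarking tool is still the better choice):
var sw = new System.Diagnostics.Stopwatch();
const int runs = 1000;
double totalMs = 0;
RunProductTest();                         // warm-up run so JIT compilation isn't timed
for (int run = 0; run < runs; run++)
{
    sw.Restart();
    RunProductTest();
    sw.Stop();
    totalMs += sw.Elapsed.TotalMilliseconds;
}
Console.WriteLine("Average: {0} ms", totalMs / runs);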

Interpolation in c# - performance problem

I need to resample big sets of data (a few hundred spectra, each containing a few thousand points) using simple linear interpolation.
I have created an interpolation method in C#, but it seems to be really slow for huge datasets.
How can I improve the performance of this code?
public static List<double> interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    double[] interpolated = new double[breaks.Count];
    int id = 1;
    int x = 0;
    // left border case - uphold the value
    while (breaks[x] < xItems[0])
    {
        interpolated[x] = yItems[0];
        x++;
    }
    double p, w;
    for (int i = x; i < breaks.Count; i++)
    {
        while (breaks[i] > xItems[id])
        {
            id++;
            if (id > xItems.Count - 1)
            {
                id = xItems.Count - 1;
                break;
            }
        }
        System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
        if (id <= xItems.Count - 1)
        {
            if (id == xItems.Count - 1 && breaks[i] > xItems[id])
            {
                interpolated[i] = yItems[yItems.Count - 1];
            }
            else
            {
                w = xItems[id] - xItems[id - 1];
                p = (breaks[i] - xItems[id - 1]) / w;
                interpolated[i] = yItems[id - 1] + p * (yItems[id] - yItems[id - 1]);
            }
        }
        else // right border case - uphold the value
        {
            interpolated[i] = yItems[yItems.Count - 1];
        }
    }
    return interpolated.ToList();
}
Edit
Thanks, guys, for all your responses. What I wanted when I wrote this question was some general ideas about where I could find areas to improve the performance. I wasn't expecting ready-made solutions, only ideas - and you gave me what I wanted, thanks!
Before writing this question I thought about rewriting this code in C++, but after reading the comments on Will's answer it seems the gain could be smaller than I expected.
Also, the code is so simple that there are no mighty code tricks to use here. Thanks to Petar for his attempt to optimize the code.
It seems it all comes down to finding a good profiler, checking every line and subroutine, and trying to optimize that.
Thank you again for all the responses and for taking part in this discussion!
public static List<double> Interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    var a = xItems.ToArray();
    var b = yItems.ToArray();
    var aLimit = a.Length - 1;
    var bLimit = b.Length - 1;
    var interpolated = new double[breaks.Count];
    var total = 0;
    var initialValue = a[0];
    while (breaks[total] < initialValue)
    {
        total++;
    }
    Array.Copy(b, 0, interpolated, 0, total);
    int id = 1;
    for (int i = total; i < breaks.Count; i++)
    {
        var breakValue = breaks[i];
        while (breakValue > a[id])
        {
            id++;
            if (id > aLimit)
            {
                id = aLimit;
                break;
            }
        }
        double value = b[bLimit];
        if (id <= aLimit)
        {
            var currentValue = a[id];
            var previousValue = a[id - 1];
            if (id != aLimit || breakValue <= currentValue)
            {
                var w = currentValue - previousValue;
                var p = (breakValue - previousValue) / w;
                value = b[id - 1] + p * (b[id] - b[id - 1]);
            }
        }
        interpolated[i] = value;
    }
    return interpolated.ToList();
}
I've cached some (constant) values and used Array.Copy, but I think these are micro-optimizations that the compiler already makes in Release mode. However, you can try this version and see if it beats the original version of the code.
Instead of
interpolated.ToList()
which copies the whole array, you could compute the interpolated values directly in the final list (or return the array instead). This matters especially if the array/list is big enough to qualify for the large object heap.
Unlike the ordinary heap, the LOH is not compacted by the GC, which means that short-lived large objects are far more harmful than small ones.
Then again: 7000 doubles are approximately 56,000 bytes, which is below the large object threshold of 85,000 bytes.
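For instance, a stripped-down sketch of that idea (the border handling from the code above is omitted, and the array-based signature and the name InterpolateToArray are my changes): fill the final array once and hand it back, so there is no ToList() copy and no extra short-lived large object.
public static double[] InterpolateToArray(double[] x, double[] y, double[] breaks)
{
    var result = new double[breaks.Length];   // the only buffer we allocate
    int id = 1;
    for (int i = 0; i < breaks.Length; i++)
    {
        while (id < x.Length - 1 && breaks[i] > x[id]) id++;
        double w = x[id] - x[id - 1];
        double p = (breaks[i] - x[id - 1]) / w;
        result[i] = y[id - 1] + p * (y[id] - y[id - 1]);
    }
    return result;                             // returned as-is, no second copy
}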
Looks to me like you've created an O(n^2) algorithm: you are searching for the interval, which is O(n), and you probably apply that n times. You'll get a quick and cheap speed-up by taking advantage of the fact that the items are already ordered in the list - use BinarySearch(), which is O(log n).
If that is still not enough, you should be able to do something speedier with the outer loop: whatever interval you found previously should make it easier to find the next one. But that code isn't in your snippet.
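A sketch of the BinarySearch() idea (assuming xItems has been copied into a double[] called x, as in the answer above, and with the border cases left aside):
double breakValue = breaks[i];                      // the value being interpolated
int idx = Array.BinarySearch(x, breakValue);        // exact index, or ~(index of first larger element)
if (idx < 0) idx = ~idx;                            // first element greater than breakValue
int id = Math.Max(1, Math.Min(idx, x.Length - 1));  // clamp so [id - 1, id] is a valid pair
// x[id - 1] and x[id] now bracket breakValue, found in O(log n) instead of a linear scan.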
I'd say profile the code and see where it spends its time, then you have somewhere to focus on.
ANTS is popular, but Equatec is free I think.
A few suggestions.
As others suggested, use a profiler to better understand where the time is spent.
the loop
while (breaks[x] < xItems[0])
could throw an exception if x grows bigger than the number of items in the breaks list. You should use something like
while (x < breaks.Count && breaks[x] < xItems[0])
But you might not need that loop at all. Why treat the first item as a special case? Just start with id = 0 and handle the first point inside the for(i) loop. I understand that id might start from 0 in this case, and [id - 1] would be a negative index, but see if you can do something there.
If you optimize for speed, you usually sacrifice memory, and vice versa; you cannot normally have both unless you come up with a really clever algorithm. In this case it would mean calculating as much as you can outside the loops, storing those values in variables (extra memory) and using them later. For example, instead of always saying:
id = xItems.Count - 1;
You could say:
int lastXItemsIndex = xItems.Count-1;
...
id = lastXItemsIndex;
This is the same suggestion Petar Petrov made with aLimit, bLimit, and so on.
The next point: your loop (or the one Petar Petrov suggested):
while (breaks[i] > xItems[id])
{
    id++;
    if (id > xItems.Count - 1)
    {
        id = xItems.Count - 1;
        break;
    }
}
could probably be reduced to:
double currentBreak = breaks[i];
while (id <= lastXIndex && currentBreak > xItems[id]) id++;
And the last point I would add: check whether there is some property of your samples that is special to your problem. For example, if xItems represent time and you are sampling at regular intervals, then
w = xItems[id] - xItems[id - 1];
is constant, and you do not have to calculate it every time in the loop.
This is probably not often the case, but maybe your problem has some other property which you could use to improve performance.
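If the samples really are uniformly spaced, here is a sketch of what that buys (the x0/dx names and the uniform-grid assumption are mine, not from the question):
// Sketch assuming xItems[k] == x0 + k * dx for a constant dx:
// the bracketing index becomes a direct computation, with no search loop at all.
double x0 = xItems[0];
double dx = xItems[1] - xItems[0];
double t = (breaks[i] - x0) / dx;
int id = Math.Min(xItems.Count - 1, Math.Max(1, (int)Math.Ceiling(t)));
double p = (breaks[i] - xItems[id - 1]) / dx;      // w == dx, no per-iteration subtraction
interpolated[i] = yItems[id - 1] + p * (yItems[id] - yItems[id - 1]);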
Another idea is this: maybe you do not need double precision, "float" is probably faster because it is smaller.
Good luck
System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
I hope this is a Release build without DEBUG defined?
Otherwise, it might also depend on what exactly those IList parameters are. It may be useful to store the Count value instead of accessing the property every time.
This is the kind of problem where you need to move over to native code.
