Most probable bits in random integer - C#

I ran an experiment: I generated 10 million random numbers in both C and C#, and then counted how many times each of the 15 bits in the random integer was set. (I chose 15 bits because C supports random integers only up to 0x7fff.)
Here's what I got:
I have two questions:
Why are there 3 most probable bits? In the C case, bits 8, 10 and 12 are the most probable, and
in C# bits 6, 8 and 11 are the most probable.
Also, the most probable C# bits seem to be shifted by about 2 positions compared to the most probable C bits. Why is this? Is it because C# uses a different RAND_MAX constant, or something else?
My test code for C:
#include <stdio.h>
#include <stdlib.h>

void accumulateResults(int random, int bitSet[15]) {
    int i;
    int isBitSet;
    for (i = 0; i < 15; i++) {
        isBitSet = ((random & (1 << i)) != 0);
        bitSet[i] += isBitSet;
    }
}

int main() {
    int i;
    int bitSet[15] = {0};
    int times = 10000000;
    srand(0);
    for (i = 0; i < times; i++) {
        accumulateResults(rand(), bitSet);
    }
    for (i = 0; i < 15; i++) {
        printf("%d : %d\n", i, bitSet[i]);
    }
    system("pause");
    return 0;
}
And test code for C#:
static void accumulateResults(int random, int[] bitSet)
{
    int i;
    int isBitSet;
    for (i = 0; i < 15; i++)
    {
        isBitSet = ((random & (1 << i)) != 0) ? 1 : 0;
        bitSet[i] += isBitSet;
    }
}

static void Main(string[] args)
{
    int i;
    int[] bitSet = new int[15];
    int times = 10000000;
    Random r = new Random();
    for (i = 0; i < times; i++)
    {
        accumulateResults(r.Next(), bitSet);
    }
    for (i = 0; i < 15; i++)
    {
        Console.WriteLine("{0} : {1}", i, bitSet[i]);
    }
    Console.ReadKey();
}
Thanks very much! BTW, the OS is Windows 7, 64-bit architecture, with Visual Studio 2010.
EDIT
Many thanks to @David Heffernan. I made several mistakes here:
The seeds in the C and C# programs were different (C was using zero, and C# the current time).
I didn't try the experiment with different values of the times variable to check the reproducibility of the results.
Here's what I got when I analyzed how the probability that the first bit is set depends on the number of times rand() was called:
So, as many noticed, the results are not reproducible and shouldn't be taken seriously
(except as some form of confirmation that the C/C# PRNGs are good enough :-) ).

This is just common or garden sampling variation.
Imagine an experiment where you toss a coin ten times, repeatedly. You would not expect to get five heads every single time. That's down to sampling variation.
In just the same way, your experiment will be subject to sampling variation. Each bit follows the same statistical distribution. But sampling variation means that you would not expect an exact 50/50 split between 0 and 1.
Now, your plot is misleading you into thinking the variation is somehow significant or carries meaning. You'd get a much better understanding of this if you plotted the Y axis of the graph starting at 0. That graph looks like this:
If the RNG behaves as it should, then each bit will follow the binomial distribution with probability 0.5. This distribution has variance np(1 − p). For your experiment this gives a variance of 2.5 million. Take the square root to get the standard deviation of around 1,500. So you can see simply from inspecting your results, that the variation you see is not obviously out of the ordinary. You have 15 samples and none are more than 1.6 standard deviations from the true mean. That's nothing to worry about.
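To make this concrete, here is a small sketch (mine, not part of the original answer) that turns each observed bit count into a z-score under the Binomial(10,000,000, 0.5) model; values within about ±2 are unremarkable:

// Hypothetical helper: compare observed bit counts against Binomial(n, 0.5).
static void CheckBitCounts(int[] bitSet, int times)
{
    double mean = times * 0.5;
    double sd = Math.Sqrt(times * 0.25); // sqrt(n*p*(1-p)), ~1581 for 10 million draws
    for (int i = 0; i < bitSet.Length; i++)
    {
        double z = (bitSet[i] - mean) / sd;
        Console.WriteLine("bit {0}: count {1}, z = {2:F2}", i, bitSet[i], z);
    }
}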
You have attempted to discern trends in the results. You have said that there are "3 most probable bits". That's only your particular interpretation of this sample. Try running your programs again with different seeds for your RNGs and you will have graphs that look a little different. They will still have the same quality to them. Some bits are set more than others. But there won't be any discernible patterns, and when you plot them on a graph that includes 0, you will see horizontal lines.
For example, here's what your C program outputs for a random seed of 98723498734.
I think this should be enough to persuade you to run some more trials. When you do so you will see that there are no special bits that are given favoured treatment.

You know that the deviation is about 2,500 out of 5,000,000, which comes down to 0.05%?

Note that the difference of frequency of each bit varies by only about 0.08% (-0.03% to +0.05%). I don't think I would consider that significant. If every bit were exactly equally probable, I would find the PRNG very questionable instead of just somewhat questionable. You should expect some level of variance in processes that are supposed to be more or less modelling randomness...


Something about the Binet formula

Why does the Binet formula (O(log N), though not exactly) perform worse in time than the iterative method (O(n))?
static double SQRT5 = Math.Sqrt(5);
static double PHI = (SQRT5 + 1) / 2;

public static int Bine(int n)
{
    return (int)(Math.Pow(PHI, n) / SQRT5 + 0.5);
}

static long[] NumbersFibonacci = new long[35];

public static void Iteracii(int n)
{
    NumbersFibonacci[0] = 0;
    NumbersFibonacci[1] = 1;
    for (int i = 1; i < n - 1; i++)
    {
        NumbersFibonacci[i + 1] = NumbersFibonacci[i] + NumbersFibonacci[i - 1];
    }
}
The running times of the algorithms:
If arithmetic operations are assumed to be O(1) then using Binet's formula is O(1) and the typical iterative implementation is O(n).
However, if we assume arithmetic operations are O(1) then, even though fibo(n) is a common interview and phone-screen topic, it actually makes little sense to implement it in the typical way -- barring being told to ignore the finiteness of standard programming-language integers and floating-point numbers. The Fibonacci numbers grow exponentially. They overflow standard programming-language types long before the particular algorithm chosen matters, as long as one did not choose the naive recursive implementation.
To get specific, here are two implementations of returning the nth Fibonacci number in C#. The first implements Binet's closed-form solution on doubles and casts to a long, which in C# is 64 bits wide. The second is the iterative version:
static long constant_time_fibo(long n)
{
    double sqrt_of_five = Math.Sqrt(5.0);
    return (long) (
        (Math.Pow(1.0 + sqrt_of_five, n) - Math.Pow(1.0 - sqrt_of_five, n)) /
        (sqrt_of_five * Math.Pow(2.0, n))
    );
}

static long linear_time_fibo(long n)
{
    long previous = 0;
    long current = 1;
    for (int i = 1; i < n; i++)
    {
        long temp = current;
        current = previous + current;
        previous = temp;
    }
    return current;
}

static void Main(string[] args)
{
    for (int i = 1; i < 100; i++)
        Console.WriteLine("{0} => {1} {2}", i,
            constant_time_fibo(i), linear_time_fibo(i));
}
When I run this code I get the constant-time algorithm failing to match the iterative implementation at around n = 72 due to floating-point error, and the iterative approach failing at n = 92 due to overflow. If I had used 32-bit types instead of 64 bits, this would have happened even sooner.
Ninety-two items is nothing. If you need the nth Fibonacci number in practice and only care about Fibonacci numbers that fit in 64 bits, in a non-contrived situation -- not for a homework assignment or a whiteboard question -- it should take O(1) time, not because of the existence of Binet's formula but because you should use a lookup table with 92 items in it. In C++ you could even generate the 92 items at compile time with a constexpr function.
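A minimal sketch of that lookup-table approach in C# (the names are mine, not from the answer):

// Precompute every Fibonacci number that fits in a signed 64-bit long: F(0)..F(92).
static readonly long[] FibTable = BuildFibTable();

static long[] BuildFibTable()
{
    var table = new long[93]; // F(92) = 7540113804746346429 is the last to fit in a long
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= 92; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table;
}

static long table_fibo(int n) => FibTable[n]; // O(1) and exact for 0 <= n <= 92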
If, on the other hand, we are talking about arbitrarily large integer arithmetic, then the question is somewhat more interesting. The exponents in Binet's formula are all integers. You can implement Binet's formula using only arbitrarily large integer arithmetic: you do not need to compute any square roots of 5, you just need to keep track of "where the square roots of five are", because they cancel out in the end. You calculate in terms of a binomial form like (a + b√5)/c, but because of the algebraic properties of φ all of the irrationality and all of the non-integer math cancels out by magic. You never need to actually calculate a √5 while finding φ^n. If you use exponentiation by squaring, this leads to an O(log n) implementation -- O(log n) arithmetic operations, anyway; the time complexity of the whole thing depends on the complexity of the arbitrary-precision arithmetic library you are using.
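For illustration, here is a sketch of an O(log n) integer-only Fibonacci in C# (my code, not the answer's). It uses the fast-doubling identities F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which drop out of exactly this kind of squaring:

using System.Numerics;

// Returns the pair (F(n), F(n+1)) using O(log n) big-integer multiplications.
static (BigInteger, BigInteger) FibPair(long n)
{
    if (n == 0) return (BigInteger.Zero, BigInteger.One);
    var (a, b) = FibPair(n / 2);      // (F(k), F(k+1)) with k = n / 2
    BigInteger c = a * (2 * b - a);   // F(2k)
    BigInteger d = a * a + b * b;     // F(2k+1)
    return (n % 2 == 0) ? (c, d) : (d, c + d);
}

static BigInteger log_time_fibo(long n) => FibPair(n).Item1;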

What is wrong with my Fourier Transformation (Convolution) in MathNet.Numerics? - C#

I am trying to do a simple convolution between two audio files using MathNet.Numerics's FFT (fast Fourier transform), but I get some weird background sounds after the IFFT.
I tested whether it's the convolution or the transformations that's causing the problem, and I found that the problem already shows up in the plain FFT -> IFFT (inverse FFT) conversion.
My code for a simple FFT and IFFT:
float[] sound; // here are stored my samples
Complex[] complexInput = new Complex[sound.Length];
for (int i = 0; i < complexInput.Length; i++)
{
    Complex tmp = new Complex(sound[i], 0);
    complexInput[i] = tmp;
}

MathNet.Numerics.IntegralTransforms.Fourier.Forward(complexInput);

// do some stuff

MathNet.Numerics.IntegralTransforms.Fourier.Inverse(complexInput);

float[] outSamples = new float[complexInput.Length];
for (int i = 0; i < outSamples.Length; i++)
    outSamples[i] = (float)complexInput[i].Real;
After this, the outSamples are corrupted with some weird background sound/noise, even though I'm not doing anything between the FFT and IFFT.
What am I missing?
The current implementation of MathNet.Numerics.IntegralTransforms.Fourier (see Fourier.cs and Fourier.Bluestein.cs)
uses Bluestein's algorithm for any FFT lengths that are not powers of 2.
This algorithm involves the creation of a Bluestein sequence (which includes terms proportional to k²), which up to version 3.6.0 was computed with the following code:
static Complex[] BluesteinSequence(int n)
{
    double s = Constants.Pi / n;
    var sequence = new Complex[n];
    for (int k = 0; k < sequence.Length; k++)
    {
        double t = s * (k * k); // <--------------------- (k*k) 32-bit int expression
        sequence[k] = new Complex(Math.Cos(t), Math.Sin(t));
    }
    return sequence;
}
For any size n greater than 46341, the intermediate expression (k*k) in this implementation is computed using int arithmetic (a 32-bit type, as per the MSDN integral-type reference table), which results in numeric overflow for the largest values of k. As such, the current implementation of MathNet.Numerics.IntegralTransforms.Fourier only supports input array sizes which are either powers of 2, or non-powers of 2 up to 46341 (inclusive).
Thus for large input arrays, a workaround could be to pad your input to the next power of 2.
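A minimal sketch of that workaround (my code; it assumes MathNet.Numerics 3.x and System.Numerics.Complex):

using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

// Zero-pad to the next power of two so the FFT takes the radix-2 path
// instead of Bluestein's algorithm.
static Complex[] PadToNextPowerOfTwo(Complex[] input)
{
    int n = 1;
    while (n < input.Length) n <<= 1; // smallest power of 2 >= input.Length
    var padded = new Complex[n];      // trailing entries stay (0, 0)
    Array.Copy(input, padded, input.Length);
    return padded;
}

// Usage:
// var padded = PadToNextPowerOfTwo(complexInput);
// Fourier.Forward(padded);
// Fourier.Inverse(padded);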
Note: this observation is based on version 3.6.0 of MathNet.Numerics, although the limitation appears to have been present in earlier releases (the Bluestein sequence code has not changed significantly going as far back as version 2.1.1).
Update 2015/04/26:
After I posted this answer and commented on a similar issue in the GitHub bug tracker, the issue was quickly fixed in MathNet.Numerics. The fix should be available from version 3.7.0. Note, however, that you may still want to pad to a power of two for performance reasons, especially since you already need to zero-pad for the linear convolution.

How to best implement K-nearest neighbours in C# for a large number of dimensions?

I'm implementing the K-nearest-neighbours classification algorithm in C# for training and testing sets of about 20,000 samples each, with 25 dimensions.
There are only two classes, represented by '0' and '1' in my implementation. For now, I have the following simple implementation:
// testSamples and trainSamples consist of about 20k vectors each, with 25 dimensions
// trainClasses contains 0 or 1, signifying the corresponding class for each sample in trainSamples
static int[] TestKnnCase(IList<double[]> trainSamples, IList<double[]> testSamples, IList<int> trainClasses, int K)
{
    Console.WriteLine("Performing KNN with K = " + K);

    var testResults = new int[testSamples.Count()];
    var testNumber = testSamples.Count();
    var trainNumber = trainSamples.Count();

    // Declaring these here so that I don't have to 'new' them over and over again in the main loop,
    // just to save some overhead
    var distances = new double[trainNumber][];
    for (var i = 0; i < trainNumber; i++)
    {
        distances[i] = new double[2]; // Will store both distance and index in here
    }

    // Performing KNN ...
    for (var tst = 0; tst < testNumber; tst++)
    {
        // For every test sample, calculate distance from every training sample
        Parallel.For(0, trainNumber, trn =>
        {
            var dist = GetDistance(testSamples[tst], trainSamples[trn]);
            // Storing distance as well as index
            distances[trn][0] = dist;
            distances[trn][1] = trn;
        });

        // Sort distances and take top K (?What happens in case of multiple points at the same distance?)
        var votingDistances = distances.AsParallel().OrderBy(t => t[0]).Take(K);

        // Do a 'majority vote' to classify the test sample
        var yea = 0.0;
        var nay = 0.0;
        foreach (var voter in votingDistances)
        {
            if (trainClasses[(int)voter[1]] == 1)
                yea++;
            else
                nay++;
        }
        if (yea > nay)
            testResults[tst] = 1;
        else
            testResults[tst] = 0;
    }

    return testResults;
}

// Calculates and returns the square of the Euclidean distance between two vectors
static double GetDistance(IList<double> sample1, IList<double> sample2)
{
    var distance = 0.0;
    // assume sample1 and sample2 are valid, i.e. the same length
    for (var i = 0; i < sample1.Count; i++)
    {
        var temp = sample1[i] - sample2[i];
        distance += temp * temp;
    }
    return distance;
}
This takes quite a bit of time to execute: on my system it takes about 80 seconds to complete. How can I optimize this, while ensuring that it will also scale to a larger number of data samples? As you can see, I've tried using PLINQ and parallel for loops, which did help (without these, it was taking about 120 seconds). What else can I do?
I've read that KD-trees are efficient for KNN in general, but every source I read stated that they're not efficient for higher dimensions.
I also found this stackoverflow discussion about this, but it is 3 years old, and I was hoping that someone would know about better solutions to this problem by now.
I've looked at machine-learning libraries in C#, but for various reasons I don't want to call R or C code from my C# program, and some other libraries I saw were no more efficient than the code I've written. Now I'm just trying to figure out how to write the most optimized code for this myself.
Edited to add: I cannot reduce the number of dimensions using PCA or the like; for this particular model, 25 dimensions are required.
Whenever you are attempting to improve the performance of code, the first step is to analyze the current performance to see exactly where it is spending its time. A good profiler is crucial for this. In my previous job I was able to use the dotTrace profiler to good effect; Visual Studio also has a built-in profiler. A good profiler will tell you exactly where your code is spending time, method by method or even line by line.
That being said, a few things come to mind in reading your implementation:
You are parallelizing some inner loops. Could you parallelize the outer loop instead? There is a small but nonzero cost associated with a delegate call (see here or here) which may be hitting you in the Parallel.For callback.
Similarly, there is a small performance penalty for indexing into an array through its IList interface. You might consider declaring the parameters of GetDistance() as plain arrays.
How large is K compared to the size of the training array? You are completely sorting the distances array and taking the top K, but if K is much smaller than the array size it might make sense to use a partial sort / selection algorithm, for instance by using a SortedSet and replacing the largest element when the set size exceeds K; see the sketch after this list.
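Here is a minimal sketch of that selection idea (my code, with hypothetical names, not the answer's): keep the K best (distance, index) pairs seen so far and evict the current worst whenever a closer sample shows up, which is O(n log K) instead of a full O(n log n) sort:

using System.Collections.Generic;

// Hypothetical partial-selection helper: the K smallest (distance, index) pairs.
// The index is part of the key so two samples at the same distance stay distinct
// (SortedSet silently discards items that compare as equal).
static IEnumerable<(double Distance, int Index)> TopK(double[][] distances, int K)
{
    var best = new SortedSet<(double Distance, int Index)>();
    foreach (var d in distances)
    {
        var candidate = (Distance: d[0], Index: (int)d[1]);
        if (best.Count < K)
        {
            best.Add(candidate);
        }
        else if (candidate.CompareTo(best.Max) < 0)
        {
            best.Remove(best.Max); // evict the current K-th nearest
            best.Add(candidate);
        }
    }
    return best; // ascending by distance
}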

What is the sum of the digits of the number 2^1000?

This is a problem from Project Euler, and this question includes some source code, so consider this your spoiler alert, in case you are interested in solving it yourself. Distributing solutions to the problems is discouraged, and that isn't what I want; I just need a little nudge and guidance in the right direction, in good faith.
The problem reads as follows:
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
I understand the premise and math of the problem, but I've only started practicing C# a week ago, so my programming is shaky at best.
I know that int, long and double are hopelessly inadequate for holding the 300+ (base 10) digits of 2^1000 precisely, so some strategy is needed. My strategy was to set up a calculation that gets the digits one by one, and to hope that the runtime could figure out how to calculate each digit without errors like overflow:
using System;
using System.IO;
using System.Windows.Forms;

namespace euler016
{
    class DigitSum
    {
        // sum all the (base 10) digits of 2^powerOfTwo
        [STAThread]
        static void Main(string[] args)
        {
            int powerOfTwo = 1000;
            int sum = 0;

            // iterate through each (base 10) digit of 2^powerOfTwo, from right to left
            for (int digit = 0; Math.Pow(10, digit) < Math.Pow(2, powerOfTwo); digit++)
            {
                // add next rightmost digit to sum
                sum += (int)((Math.Pow(2, powerOfTwo) / Math.Pow(10, digit) % 10));
            }

            // write output to console, and save solution to clipboard
            Console.Write("Power of two: {0} Sum of digits: {1}\n", powerOfTwo, sum);
            Clipboard.SetText(sum.ToString());
            Console.WriteLine("Answer copied to clipboard. Press any key to exit.");
            Console.ReadKey();
        }
    }
}
It seems to work perfectly for powerOfTwo < 34. My calculator ran out of significant digits above that, so I couldn't test higher powers. But tracing the program, it looks like no overflow is occurring: the number of digits calculated gradually increases as powerOfTwo increases, and the sum of digits also (on average) increases with increasing powerOfTwo.
For the actual calculation I am supposed to perform, I get the output:
Power of two: 1000 Sum of digits: 1189
But 1189 isn't the right answer. What is wrong with my program? I am open to any and all constructive criticisms.
For calculating the values of such big numbers you not only need to be a good programmer but also a good mathematician. Here is a hint for you:
there's the familiar formula a^x = e^(x ln a), or if you prefer, a^x = 10^(x log a).
More specific to your problem:
2^1000. Find the common (base 10) log of 2 and multiply it by 1000; this is the power of 10. Since 1000 × log10(2) ≈ 301.03, you get 10^301.03, which is 10^301 × 10^0.03; just evaluate 10^0.03 and you will get a number between 1 and 10, then multiply that by 10^301. But this 10^301 part will not help you with the digit sum, as all its zeros contribute nothing.
For log calculations in C#:
Math.Log(num, base);
For more accuracy you can use the Log and Pow functions of BigInteger.
The rest of the programming I believe you can handle on your own.
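A small illustration of that log trick (my sketch, not the answerer's code):

using System;
using System.Numerics;

// Count the decimal digits of 2^1000 without materializing the number.
double power = 1000 * Math.Log10(2.0);                    // ~301.03
int digitCount = (int)Math.Floor(power) + 1;              // 302 digits
double leading = Math.Pow(10, power - Math.Floor(power)); // ~1.07, the leading digits
Console.WriteLine("{0} digits, starting with {1:F2}...", digitCount, leading);

// For exact work, BigInteger has Pow (and Log for the reverse direction):
Console.WriteLine(BigInteger.Pow(2, 1000)); // all 302 digits, exactly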
A normal int can't help you with such a large number; not even a long can. They were never designed to handle numbers this huge. An int can store around 10 digits (exact max: 2,147,483,647) and a long around 19 digits (exact max: 9,223,372,036,854,775,807). However, a quick calculation in the built-in Windows calculator shows that 2^1000 is a number of more than 300 digits.
(side note: the exact values can be obtained from int.MaxValue and long.MaxValue respectively)
As you want the precise sum of digits, even the float or double types won't work, because they only keep a limited number of significant digits (7 digits for float, 15-16 digits for double). Read here for more information about floating-point representation and double precision.
However, C# provides built-in arbitrary-precision arithmetic:
BigInteger, which should suit your (testing) needs. That is, it can do arithmetic on any number of digits (theoretically, of course; in practice it is limited by the memory of your physical machine, and takes time depending on your CPU power).
Back to your code, I think the problem is here:
Math.Pow(2, powerOfTwo)
This doesn't overflow, exactly, but the double precision cannot represent the actual value of the result, as I said.
A solution without using the BigInteger class is to store each digit in its own int and then do the multiplication manually.
static void Problem16()
{
    int[] digits = new int[350];
    // we're doing multiplication so start with a value of 1
    digits[0] = 1;

    // 2^1000 so we'll be multiplying 1000 times
    for (int i = 0; i < 1000; i++)
    {
        // run down the entire array multiplying each digit by 2
        for (int j = digits.Length - 2; j >= 0; j--)
        {
            // multiply
            digits[j] *= 2;
            // carry
            digits[j + 1] += digits[j] / 10;
            // reduce
            digits[j] %= 10;
        }
    }

    // now just collect the result
    long result = 0;
    for (int i = 0; i < digits.Length; i++)
    {
        result += digits[i];
    }
    Console.WriteLine(result);
    Console.ReadKey();
}
I used a bitwise shift to the left, then converted the number to a char array and summed its elements. My end result is 1366. Do not forget to add a reference to System.Numerics.
BigInteger i = 1;
i = i << 1000;

char[] myBigInt = i.ToString().ToCharArray();
long sum = long.Parse(myBigInt[0].ToString());
for (int a = 0; a < myBigInt.Length - 1; a++)
{
    sum += long.Parse(myBigInt[a + 1].ToString());
}
Console.WriteLine(sum);
Since the question is C#-specific, using a BigInteger will do the job. It works in Java and Python too, but in languages like C and C++, where the facility is not available, you have to take an array and do the multiplication yourself: store the big number's digits in an array and multiply by 2 repeatedly. That is simple and will help improve your logical skill. And as for Project Euler: there is a problem in which you have to find 100!; you might want to apply the same logic to that too.
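For what it's worth, here is a sketch (mine, not the answerer's) of that same digit-array technique applied to 100!, which has 158 digits:

using System;
using System.Linq;

// Multiply a digit array (least significant digit first) by 2, 3, ..., 100.
int[] digits = new int[200]; // 100! has 158 digits, so 200 is plenty
digits[0] = 1;
for (int factor = 2; factor <= 100; factor++)
{
    int carry = 0;
    for (int j = 0; j < digits.Length; j++)
    {
        int value = digits[j] * factor + carry;
        digits[j] = value % 10;
        carry = value / 10;
    }
}
Console.WriteLine(digits.Sum()); // sum of the digits of 100!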
Try using the BigInteger type; 2^1000 will end up being a very large number, too large for even a double to handle.
BigInteger bi = new BigInteger("2");
bi = bi.pow(1000);
// System.out.println("Val:" + bi.toString());
String[] stringArr = bi.toString().split("");
int sum = 0;
for (String string : stringArr)
{
    if (!string.isEmpty()) sum += Integer.parseInt(string);
}
System.out.println("Sum:" + sum);
Output: Sum:1366
Here's my solution in JavaScript:
(function (exponent) {
    // Math.pow(2, 1000) is a power of two, so it is represented exactly as a double
    const num = BigInt(Math.pow(2, exponent))
    let arr = num.toString().split('')
    const result = arr.reduce((r, c) => parseInt(r) + parseInt(c))
    console.log(result)
})(1000)
This is not a serious answer—just an observation.
Although it is a good challenge to try to beat Project Euler using only one programming language, I believe the site aims to further the horizons of all programmers who attempt it. In other words, consider using a different programming language.
A Common Lisp solution to the problem could be as simple as
(defun sum_digits (x)
  (if (= x 0)
      0
      (+ (mod x 10) (sum_digits (truncate (/ x 10))))))

(print (sum_digits (expt 2 1000)))
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char c[61];
    int k = 0;
    while (k <= 59)
    {
        c[k] = '0';
        k++;
    }
    c[59] = '2';
    c[60] = '\0'; /* terminate so printf("%s", c) is safe */

    int n = 1;
    while (n <= 999)
    {
        k = 0;
        while (k <= 59)
        {
            c[k] = (c[k] * 2) - 48; /* double each ASCII digit in place */
            k++;
        }
        k = 0;
        while (k <= 59)
        {
            if (c[k] > 57) { c[k - 1] += 1; c[k] -= 10; } /* propagate carries */
            k++;
        }
        if (c[0] > 57)
        {
            k = 0;
            while (k <= 59)
            {
                c[k] = c[k] / 2;
                k++;
            }
            printf("%s", c);
            exit(0);
        }
        n++;
    }
    printf("%s", c);
    return 0;
}
Python makes it very simple to compute this with a one-liner:
print(sum(int(digit) for digit in str(2**1000)))
or alternatively with map:
print(sum(map(int, str(2**1000))))

Modular Cubes in C#

I am having difficulty solving this problem:
For a positive number n, define C(n) as the number of integers x for which 1 < x < n and x^3 ≡ 1 (mod n).
When n = 91, there are 8 possible values for x, namely: 9, 16, 22, 29, 53, 74, 79, 81. Thus, C(91) = 8.
Find the sum of the positive numbers n <= 10^11 for which C(n) = 242.
My Code:
double intCount2 = 91;
double intHolder = 0;
for (int i = 0; i <= intCount2; i++)
{
    if ((Math.Pow(i, 3) - 1) % intCount2 == 0)
    {
        if ((Math.Pow(i, 3) - 1) != 0)
        {
            Console.WriteLine(i);
            intHolder += i;
        }
    }
}
Console.WriteLine("Answer = " + intHolder);
Console.ReadLine();
This works for 91, but when I put in any large number with a lot of zeros, it gives me a lot of answers I know are false. I think this is because the computation is so close to 0 that it just rounds to 0. Is there any way to check whether something is precisely 0? Or is my logic wrong?
I know I need some optimization to get this to provide a timely answer, but right now I am just trying to get it to produce correct answers.
Let me generalize your questions to two questions:
1) What specifically is wrong with this program?
2) How do I figure out where a problem is in a program?
Others have already answered the first part, but to sum up:
Problem #1: Math.Pow uses double-precision floating point numbers, which are only accurate to about 15 decimal places. They are unsuitable for doing problems that require perfect accuracy involving large integers. If you try to compute, say, 1000000000000000000 - 1, in doubles, you'll get 1000000000000000000, which is an accurate answer to 15 decimal places; that's all we guarantee. If you need a perfectly accurate answer for working on large numbers, use longs for results less than about 10 billion billion, or the large integer mathematics class in System.Numerics that will ship with the next version of the framework.
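To see the precision loss concretely (my illustration, not part of the original answer):

double big = 1000000000000000000.0; // 1e18 needs more than a double's 15-16 significant digits
Console.WriteLine(big - 1.0 == big); // True: the subtracted 1 is lost to rounding
long exact = 1000000000000000000;
Console.WriteLine(exact - 1);        // 999999999999999999, exact in a long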
Problem #2: There are far more efficient ways to compute modular exponents that do not involve generating huge numbers; use them.
However, what we've got here is a "give a man a fish" situation. What would be better is to teach you how to fish; learn how to debug a program using the debugger.
If I had to debug this program the first thing I would do is rewrite it so that every step along the way was stored in a local variable:
double intCount2 = 91;
double intHolder = 0;
for (int i = 0; i <= intCount2; i++)
{
    double cube = Math.Pow(i, 3) - 1;
    double remainder = cube % intCount2;
    if (remainder == 0)
    {
        if (cube != 0)
        {
            Console.WriteLine(i);
            intHolder += i;
        }
    }
}
Now step through it in the debugger with an example where you know the answer is wrong, and look for places where your assumptions are violated. If you do so, you'll quickly discover that 1000000 cubed minus 1 is computed not as 999999999999999999, but rather as 1000000000000000000.
So that's advice #1: write the code so that it is easy to step through in the debugger, and examine every step looking for the one that seems wrong.
Advice #2: Pay attention to quiet nagging doubts. When something looks dodgy or there's a bit you don't understand, investigate it until you do understand it.
Wikipedia has an article on Modular exponentiation that you may find informative. IIRC, Python has it built in. C# does not, so you'll need to implement it yourself.
Don't compute powers modulo n using Math.Pow; you are likely to experience overflow issues, among other possible problems. Instead, compute them from first principles. Thus, to compute the cube of an integer i modulo n, first reduce i modulo n to some integer j, so that i is congruent to j modulo n and 0 <= j < n. Then iteratively multiply by j and reduce modulo n after each multiplication; to compute a cube you would perform this step twice. Of course, that's the naive approach, but you can make it more efficient by following the classic algorithm of exponentiation by squaring.
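A minimal sketch of that first-principles cube (my code; it leans on BigInteger for the intermediate products, because for n near 10^11 even the reduced product j·j can reach about 10^22, which overflows a long):

using System;
using System.Numerics;

// Computes (i^3) mod n, reducing after every multiplication.
static long CubeMod(long i, long n)
{
    BigInteger j = ((i % n) + n) % n; // reduce i so that 0 <= j < n
    BigInteger result = (j * j) % n;  // j^2 mod n
    result = (result * j) % n;        // j^3 mod n
    return (long)result;
}

// For general exponents, .NET 4+ also provides BigInteger.ModPow(value, exponent, modulus).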
Also, as far as efficiency goes, I note that you are unnecessarily computing Math.Pow(i, 3) - 1 twice. Thus, at a minimum, replace
if ((Math.Pow(i, 3) - 1) % intCount2 == 0) {
    if ((Math.Pow(i, 3) - 1) != 0) {
        Console.WriteLine(i);
        intHolder += i;
    }
}
with
double cubed = Math.Pow(i, 3) - 1;
if ((cubed % intCount2 == 0) && (cubed != 0)) {
    Console.WriteLine(i);
    intHolder += i;
}
Well, there's something missing or a typo...
"intHolder1" should presumably be "intHolder" and for intCount2=91 to result in 8 the increment line should be:-
intHolder ++;
I don't have a solution to your problem, but here's just a piece of advice:
Don't use floating-point numbers for calculations that only involve integers. Type int (Int32) is clearly not big enough for your needs, and even long (Int64) is not quite enough on its own: with n up to 10^11, even a single reduced product x·x can reach about 10^22, which exceeds Int64.MaxValue (about 9.2 × 10^18). So reduce modulo n after every multiplication and carry out each product in a wider type, such as BigInteger. Benefits:
all the results of your calculations are exact, since there are no approximations due to the internal representation of doubles
reducing modulo n at every step keeps the operands small, so the arithmetic stays fast
Don't use Math.Pow to calculate the cube of an integer: x*x*x is just as simple, and more efficient since it doesn't need a conversion to/from double. Anyway, I'm not very good at math, but you probably don't need to calculate x^3 in full... check the links about modular exponentiation in the other answers.
