I'm currently studying and trying to implement some algorithms. I'm trying to understand Big O notation and I can't figure out the Big O complexity for the algorithm below:
while (a != 0 && b != 0)
{
if (a > b)
a %= b;
else
b %= a;
}
if (a == 0)
common=b;
else
common=a;
It's easy to see that after two iterations the smaller of the two numbers is at least halved. If it was m at the beginning, then after 2K iterations it will be at most m/2^K. If we put K = [log_2(m)] + 1 here (where [x] denotes the integer part), we see that after 2K iterations the smaller number becomes zero and the loop terminates. Hence the number of iterations is at most 2(log_2(m) + 1) = O(log m).
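For a concrete trace, take a = 1071 and b = 462: 1071 mod 462 = 147, then 462 mod 147 = 21, then 147 mod 21 = 0, so the loop stops after three iterations and the GCD is 21. Notice how quickly the smaller number shrinks.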
That is the Euclidean algorithm for computing the greatest common divisor of two integers. I'll leave it to you to research the complexity of this algorithm, but the Fibonacci numbers play an important role.
Most people (who are not mathematicians) never need to find out that stuff, it's already documented: http://en.wikipedia.org/wiki/Euclidean_algorithm#Algorithmic_efficiency
int factorial(int n)
{
if(n < 0) {return -1;}
if(n == 0) {return 1;}
return n*factorial(n-1);
}
To find the time complexity I set up a recurrence relation:
T(n) = T(n-1) + c = T(n-2) + 2c = ... = T(n-k) + kc = T(0) + nc => O(n)
with the base case T(0) = c (a constant).
What is the general way to find the space complexity for this kind of algorithm (like Fibonacci)? Do we need to find the depth of the call stack?
The space required by a recursive algorithm can be approximated by three components: the space required to store
the recursion stack
the parameters you feed into the function
the output of the function
Take the factorial as an example. The recurrence is T(n) = T(n-1) + c; unrolling it gives T(n) = T(n-2) + 2c = ... = T(0) + nc, so the recursion stack requires O(n) space (n stack frames).
The output of the function is n!. To store a number m you need about log(m) bits, so to store the result you need log(n!) = O(n log n) space.
At each step of the recursion you store one parameter (n). It is stored n times (once per stack frame), and each copy takes log(n) space, so in total that is O(n log n).
So you end up with O(n) + O(n log n) + O(n log n) = O(n log n). That is how much space is required to compute the factorial recursively.
Doing this kind of analysis, you can see why chess programs that do alpha-beta pruning require a lot of RAM to evaluate positions properly.
After writing the time complexity as a recurrence, in a form like
T(n) = T(n - 1) + n
or
T(n) = 2T(n/2) + n log n
and so on, you have two options:
1) Use the repeated substitution method, which needs some math skill in some cases, because you follow the expansion down to T(1) or T(0) and then evaluate a summation (see the worked example after this list).
or
2) Use the Master Theorem, which is essentially a ready-made formula and works for many practical cases.
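A quick worked sketch of each option (my own illustration, not the original poster's). For option 1, take T(n) = T(n - 1) + n with T(0) = c:
T(n) = T(n - 1) + n
     = T(n - 2) + (n - 1) + n
     = ...
     = T(0) + 1 + 2 + ... + n
     = c + n(n + 1)/2
so T(n) = O(n^2).
For option 2, T(n) = 2T(n/2) + n log n has the shape T(n) = aT(n/b) + f(n) with a = b = 2, so n^(log_b a) = n; since f(n) = Θ(n log n), the extended version of the Master Theorem's second case gives T(n) = Θ(n log^2 n).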
I have tried to write code for the Fermat primality test, but apparently I failed.
So if I understood correctly: if p is prime then ((a^p) - a) % p = 0, where p % a != 0.
My code seems to be OK, therefore most likely I misunderstood the basics. What am I missing here?
private bool IsPrime(int candidate)
{
//checking if candidate = 0 || 1 || 2
int a = candidate + 1; //candidate can't be divisor of candidate+1
if ((Math.Pow(a, candidate) - a) % candidate == 0) return true;
return false;
}
Reading the Wikipedia article on the Fermat primality test: you must choose an a that is less than the candidate you are testing, not greater.
Furthermore, as MattW commented, testing only a single a won't give you a conclusive answer as to whether the candidate is prime. You must test many possible values of a before you can decide that a number is probably prime. And even then, some numbers may appear to be prime but actually be composite.
Your basic algorithm is correct, though you will have to use a larger data type than int if you want to do this for non-trivial numbers.
You should not implement the modular exponentiation in the way that you did, because the intermediate result is huge. Here is the square-and-multiply algorithm for modular exponentiation:
function powerMod(b, e, m)
x := 1
while e > 0
if e%2 == 1
x, e := (x*b)%m, e-1
else b, e := (b*b)%m, e//2
return x
As an example, 437^13 (mod 1741) = 819. If you use the algorithm shown above, no intermediate result will be greater than 1740 * 1740 = 3027600. But if you perform the exponentiation first, the intermediate result of 437^13 is 21196232792890476235164446315006597, which you probably want to avoid.
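Since the question is in C#, here is a minimal C# sketch of the same square-and-multiply loop (my translation, not part of the original pseudocode); it assumes the modulus is small enough that m*m fits in a long, otherwise use System.Numerics.BigInteger.ModPow:
static long PowerMod(long b, long e, long m)
{
    long x = 1;
    b %= m;                   // keep the base below the modulus
    while (e > 0)
    {
        if (e % 2 == 1)
            x = (x * b) % m;  // multiply in the current base when the exponent bit is set
        b = (b * b) % m;      // square the base
        e /= 2;
    }
    return x;
}
For instance, PowerMod(437, 13, 1741) returns 819, matching the worked example above.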
Even with all of that, the Fermat test is imperfect. There are some composite numbers, the Carmichael numbers, that will always report prime no matter what witness you choose. Look for the Miller-Rabin test if you want something that will work better. I modestly recommend this essay on Programming with Prime Numbers at my blog.
You are dealing with very large numbers and trying to store them in a double, which is only 64 bits.
The double will do the best it can to hold your number, but you are going to lose some accuracy.
An alternative approach:
Remember that the mod operator can be applied multiple times, and still give the same result.
So, to avoid getting massive numbers you could apply the mod operator during the calculation of your power.
Something like:
private bool IsPrime(int candidate)
{
//checking if candidate = 0 || 1 || 2
int a = candidate - 1; //candidate can't be divisor of candidate - 1
long result = 1; // long, so that result * a cannot overflow even for large int candidates
for(int i = 0; i < candidate; i++)
{
result = result * a;
//Notice that without the following line,
//this method is essentially the same as your own.
//All this line does is keeps the numbers small and manageable.
result = result % candidate;
}
result -= a;
return result == 0;
}
This is fairly 'math-y', but I'm posting it here because it's a Project Euler problem, and I have working code that presumably has bugs in it.
The question Determining longest repeating cycle in a decimal expansion solves the problem using logarithms, but I'm interested in solving it with simple brute force. More accurately, I'm interested in understanding why my algorithm and code are not returning the correct solution.
The algorithm is simple:
replicate a 'long division',
at each step record the divisor and the remainder
when a divisor / remainder tuple is repeated, infer that the decimal representation will repeat.
Here are private fields, as requested
private int numerator;
private int recurrence;
private int result;
private int resultRecurrence;
private List<dynamic> digits;
and here is the code:
private void Go()
{
foreach (var i in primes)
{
digits = new List<dynamic>();
numerator = 1;
recurrence = 0;
while (numerator != 0)
{
numerator *= 10;
// quotient
var q = numerator / i;
// remainder
var r = numerator % i;
digits.Add(new { Divisor = q, Remainder = r });
// if we've found a repetition then break out
var m = digits.Where(p => p.Divisor == q && p.Remainder == r).ToList();
if (m.Count > 1)
{
recurrence = digits.LastIndexOf(m[0]) - digits.IndexOf(m[0]);
break;
}
numerator = r;
}
if (recurrence > resultRecurrence)
{
resultRecurrence = recurrence;
result = i;
}
}
}
When testing integers < 10 and < 20 I get the correct result, and I correctly identify the value of i as well. However the length of the recurring cycle that I get is incorrect - I calculate i-1 whereas the correct result is far less (something like i-250).
So presumably I either have a programming bug - which I can't find - or a logic bug.
I'm confused because it feels like a multiplicative group over p to me, in which there would be p-1 elements. I'm sure I'm missing something, can anyone provide suggestions?
edit
I'm not going to include my prime number code - it's not relevant, as I explain above I correctly identify the value of i (from memory it is 983) but I'm having problems getting the correct value for resultRecurrence.
I'm confused because it feels like a multiplicative group over p to me, in which there would be p-1 elements. I'm sure I'm missing something, can anyone provide suggestions?
Close.
For all primes except 2 and 5 (which divide 10), the sequence of remainders is formed by starting with 1 and transforming by
remainder = (10 * remainder) % prime
thus the k-th remainder is 10^k (mod prime) and the set of remainders forms a subgroup of the group of nonzero remainders modulo prime[1]. The length of the recurring cycle is the order of that subgroup, which is also known as the order of 10 modulo prime.
The order of the group of nonzero remainders modulo prime is prime-1, and there's a theorem by Lagrange:
Let G be a finite group of order g and H be a subgroup of G. Then the order h of H divides g.
So the length of the cycle is always a divisor of prime-1, and sometimes it's prime-1, e.g. for 7 or 19.
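For instance, with prime = 7 the remainders cycle through 1, 3, 2, 6, 4, 5 and then return to 1, so the order of 10 modulo 7 is 6 = 7 - 1, matching the period of 1/7 = 0.(142857).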
[1] For composite numbers n coprime to 10, that would be the group of remainders modulo n that are coprime to n.
First off, you don’t need the divisors, you only need the remainders.
Secondly, I would split the function into multiple independent parts instead of having everything in one big method: The long division / finding of the cycle length is independent of the rest (= finding the longest cycle).
Your break on Where coupled with Count is unintuitive. Why not just use a while loop with the condition (! digits.Contains(r))? (This would require putting 0 as a remainder into the digits list before the loop start.)
This leaves us with much cleaner code that should be straightforward to debug.
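A minimal sketch of that cleaner shape (my illustration of the suggestion, with made-up names, assuming System.Collections.Generic is in scope):
static int CycleLength(int divisor)
{
    // seed with 0 so that terminating decimals (remainder 0) report a cycle length of 0
    var digits = new List<int> { 0 };
    int remainder = 1 % divisor;
    while (!digits.Contains(remainder))
    {
        digits.Add(remainder);
        remainder = (remainder * 10) % divisor;
    }
    return remainder == 0 ? 0 : digits.Count - digits.IndexOf(remainder);
}
The outer search for the longest cycle then stays exactly as in Go(): loop over the primes and keep the maximum.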
recurrence = digits.LastIndexOf(m[0]) - digits.IndexOf(m[0]);
Surely the value of resultRecurrence is always going to be i-1? Since for a fraction of the form 1/n, the decimal starts repeating exactly when the division-in-progress (the i-th digit) gives the same quotient-remainder pair as the very first trial division (the 1st, hence i-1).
(as a side note, may I introduce you to Math.DivRem).
I was wondering how it is possible to generate a 512-bit (155 decimal digit) prime number whose last five decimal digits are specified/fixed (e.g. ***28071)?
The principles of generating primes without any such constraint are quite understandable, but my case goes further.
Any hints for, at least, where should I start?
Java or C# is preferable.
Thanks!
I guess the only way would be to first generate a random number of 150 decimal digits, then append 28071 to it by doing number = randomnumber * 100000 + 28071, and then just brute-force it out with something like
while (!IsPrime(number))
number += 100000;
Of course this could take awhile to compute ;-)
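A hedged C# sketch of that idea (the helper name and the bare base-2 Fermat check are my own illustration; a real implementation should follow up with a stronger test such as Miller-Rabin):
using System.Numerics;
using System.Security.Cryptography;

static BigInteger FindPrimeEndingIn(int lastDigits)    // lastDigits must not be divisible by 2 or 5, e.g. 28071
{
    var bytes = new byte[64];                           // 512 bits of randomness
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(bytes);
    bytes[63] &= 0x7F;                                  // clear the top bit so the BigInteger is positive
    BigInteger n = new BigInteger(bytes);
    n = n - (n % 100000) + lastDigits;                  // force the last five decimal digits
    while (BigInteger.ModPow(2, n - 1, n) != 1)         // quick Fermat filter to base 2
        n += 100000;                                    // stepping by 100000 keeps the ending fixed
    return n;
}
On average you would expect to test a few hundred candidates before one passes; the next answer estimates the density.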
Did you try just generating such numbers and checking them? I would expect that to be acceptably fast. The prime density decreases only as the logarithm of the number, so I'd expect you to try a few hundred numbers until you hit a prime. ln(2^512) = 354 so about one number in 350 will be prime.
Roughly speaking, the prime number theorem states that if a random number nearby some large number N is selected, the chance of it being prime is about 1 / ln(N), where ln(N) denotes the natural logarithm of N. For example, near N = 10,000, about one in nine numbers is prime, whereas near N = 1,000,000,000, only one in every 21 numbers is prime. In other words, the average gap between prime numbers near N is roughly ln(N)
(from http://en.wikipedia.org/wiki/Prime_number_theorem)
You just need to take care that a number exists for your final digits. But I think that's as easy as checking that the last digit isn't divisible by 2 or 5 (i.e. it is 1, 3, 7 or 9).
According to this performance data you can do about 2000 ModPow operations on 512 bit data per second, and since a simple prime-test is checking 2^(p-1) mod p=1 which is one ModPow operation, you should be able to generate several primes with your properties per second.
So you could do (pseudocode):
BigInteger FindPrimeCandidate(int lastDigits)
{
BigInteger i=Random512BitInt;
int remainder = i % 100000;
int increment = lastDigits-remainder;
i += increment;
BigInteger test = BigInteger.ModPow(2, i - 1, i);
if(test == 1)
return i;
else
return null;
}
And do more extensive prime checks on the result of that function.
As @Doggot said, but start from the least possible 155-digit number which ends with 28071, i.e. 100000....0028071, then add 100000 to it each time, and for the primality test use Miller-Rabin, like the code I provided here (it needs some customization). If the return value is true, check it for exact primality.
You can use a sieve which contains only numbers satisfying your special condition to filter out numbers divisible by small primes.
For each small prime p you need to find the correct starting point and step, taking into account that only every 100000th number is present in the sieve.
For the numbers that survive the sieve you can use BigInteger.isProbablePrime() to check whether it is prime with sufficient probability.
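A rough C# sketch of such a sieve (my own illustration; smallPrimes would be the primes up to some bound, n0 the first candidate with the required ending, and the candidates are n0, n0 + 100000, n0 + 2*100000, ...; it assumes System.Numerics is referenced):
static bool[] SieveCandidates(BigInteger n0, int count, int[] smallPrimes)
{
    var composite = new bool[count];
    foreach (int p in smallPrimes)
    {
        int r = (int)(n0 % p);          // candidate k is divisible by p iff (r + k * step) % p == 0
        int step = 100000 % p;
        int k = 0;
        while (k < p && (r + (long)k * step) % p != 0)
            k++;
        if (k == p)
            continue;                   // p never divides any candidate (e.g. p = 2 or 5 when the ending is coprime to 10)
        for (; k < count; k += p)
            composite[k] = true;        // cross off every p-th candidate from the first hit onwards
    }
    return composite;
}
Candidates left unmarked then go to BigInteger.isProbablePrime() in Java, or an equivalent Miller-Rabin test in C#.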
Let ABCDE be the five-digit number in base ten which you are considering. Based on Dirichlet's theorem on arithmetic progressions, if ABCDE and 100000 are coprime, then there are infinitely many primes of the form 100000*k + ABCDE. Since you are looking for prime numbers, neither 2 nor 5 would divide ABCDE anyway, thus ABCDE and 100000 are coprime. So there are infinitely many primes of the form you are considering.
You could extend one of the standard methods for generating large primes by adding an extra constraint, i.e. that the last 5 decimal digits must be correct. Naively, you can just add this as an extra test, but it will increase the time to find a suitable prime by a factor of 10^5.
Not-so-naively: generate a random 512-bit number then set sufficient low-order bits so that the decimal representation ends with the required sequence. Then continue with the normal primality tests.
I rewrote the brute-force algorithm from the int world to the BigDecimal one with the help of the BigSquareRoot class from http://www.merriampark.com/bigsqrt.htm. (Note that from 1 to 1000 there are said to be exactly 168 primes.)
Sorry, but if you put in your range, i.e. <10^154; 10^155 - 1>, you can let your computer work and by the time you have retired, you may have the result... it is damn slow!
However, you can somehow find at least a part of this useful in combination with the other answers in this thread.
package edu.eli.test.primes;
import java.math.BigDecimal;
public class PrimeNumbersGenerator {
public static void main(String[] args) {
// BigDecimal lowerLimit = BigDecimal.valueOf(10).pow(154); /* 155 digits */
// BigDecimal upperLimit = BigDecimal.valueOf(10).pow(155).subtract(BigDecimal.ONE);
BigDecimal lowerLimit = BigDecimal.ONE;
BigDecimal upperLimit = new BigDecimal("1000");
BigDecimal prime = lowerLimit;
int i = 1;
/* http://www.merriampark.com/bigsqrt.htm */
BigSquareRoot bsr = new BigSquareRoot();
upperLimit = upperLimit.add(BigDecimal.ONE);
while (prime.compareTo(upperLimit) == -1) {
bsr.setScale(0);
BigDecimal roundedSqrt = bsr.get(prime);
boolean isPrimeNumber = false;
BigDecimal upper = roundedSqrt;
while (upper.compareTo(BigDecimal.ONE) == 1) {
BigDecimal div = prime.remainder(upper);
if ((prime.compareTo(upper) != 0) && (div.compareTo(BigDecimal.ZERO) == 0)) {
isPrimeNumber = false;
break;
} else if (!isPrimeNumber) {
isPrimeNumber = true;
}
upper = upper.subtract(BigDecimal.ONE);
}
if (isPrimeNumber) {
System.out.println("\n" + i + " -> " + prime + " is a prime!");
i++;
} else {
System.out.print(".");
}
prime = prime.add(BigDecimal.ONE);
}
}
}
Let's consider brute-force. Take a look at this very interesting text called "The prime number lottery":
http://plus.maths.org/content/prime-number-lottery
Given the last entry in the last table, there are ~2.79*10^14 primes less than 10^16. Thus, approximately every 35th number is a prime in that range.
EDIT: See the comment by CodeInChaos - if you just walk a few thousand 512bit numbers with last 5 digits fixed, you'll find one quickly.
I am having difficulties solving this problem:
For a positive number n, define C(n) as the number of integers x for which 1 < x < n and x^3 = 1 mod n.
When n = 91, there are 8 possible values for x, namely: 9, 16, 22, 29, 53, 74, 79, 81. Thus C(91) = 8.
Find the sum of the positive numbers n <= 10^11 for which C(n) = 242.
My Code:
double intCount2 = 91;
double intHolder = 0;
for (int i = 0; i <= intCount2; i++)
{
if ((Math.Pow(i, 3) - 1) % intCount2 == 0)
{
if ((Math.Pow(i, 3) - 1) != 0)
{
Console.WriteLine(i);
intHolder += i;
}
}
}
Console.WriteLine("Answer = " + intHolder);
Console.ReadLine();
This works for 91 but when I put in any large number with a lot of 0's, it gives me a lot of answers I know are false. I think this is because it is so close to 0 that it just rounds to 0. Is there any way to see if something is precisely 0? Or is my logic wrong?
I know I need some optimization to get this to provide a timely answer but I am just trying to get it to produce correct answers.
Let me generalize your questions to two questions:
1) What specifically is wrong with this program?
2) How do I figure out where a problem is in a program?
Others have already answered the first part, but to sum up:
Problem #1: Math.Pow uses double-precision floating point numbers, which are only accurate to about 15 decimal places. They are unsuitable for doing problems that require perfect accuracy involving large integers. If you try to compute, say, 1000000000000000000 - 1, in doubles, you'll get 1000000000000000000, which is an accurate answer to 15 decimal places; that's all we guarantee. If you need a perfectly accurate answer for working on large numbers, use longs for results less than about 10 billion billion, or the large integer mathematics class in System.Numerics that will ship with the next version of the framework.
Problem #2: There are far more efficient ways to compute modular exponents that do not involve generating huge numbers; use them.
However, what we've got here is a "give a man a fish" situation. What would be better is to teach you how to fish; learn how to debug a program using the debugger.
If I had to debug this program the first thing I would do is rewrite it so that every step along the way was stored in a local variable:
double intCount2 = 91;
double intHolder = 0;
for (int i = 0; i <= intCount2; i++)
{
double cube = Math.Pow(i, 3) - 1;
double remainder = cube % intCount2;
if (remainder == 0)
{
if (cube != 0)
{
Console.WriteLine(i);
intHolder += i;
}
}
}
Now step through it in the debugger with an example where you know the answer is wrong, and look for places where your assumptions are violated. If you do so, you'll quickly discover that 1000000 cubed minus 1 is not 999999999999999999, but rather 1000000000000000000.
So that's advice #1: write the code so that it is easy to step through in the debugger, and examine every step looking for the one that seems wrong.
Advice #2: Pay attention to quiet nagging doubts. When something looks dodgy or there's a bit you don't understand, investigate it until you do understand it.
Wikipedia has an article on Modular exponentiation that you may find informative. IIRC, Python has it built in. C# does not, so you'll need to implement it yourself.
Don't compute powers modulo n using Math.Pow; you are likely to experience overflow issues among other possible issues. Instead, you should compute them from first principles. Thus, to compute the cube of an integer i modulo n, first reduce i modulo n to some integer j so that i is congruent to j modulo n and 0 <= j < n. Then iteratively multiply by j and reduce modulo n after each multiplication; to compute a cube you would perform this step twice. Of course, that's the naive approach, but you can make it more efficient by following the classic algorithm of exponentiation by squaring.
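A minimal sketch of that first-principles cube, using BigInteger so the intermediate products cannot overflow (the helper name is mine, not from the answer; requires System.Numerics):
static BigInteger CubeMod(BigInteger i, BigInteger n)
{
    BigInteger j = i % n;             // reduce i modulo n first, so 0 <= j < n
    BigInteger square = (j * j) % n;  // first multiply-and-reduce step: j^2 mod n
    return (square * j) % n;          // second step: j^3 mod n
}
BigInteger.ModPow(i, 3, n) computes the same thing directly if the System.Numerics types are available to you.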
Also, as far as efficiency, I note that you are unnecessarily computing Math.Pow(i, 3) - 1 twice. Thus, at a minimum, replace
if ((Math.Pow(i, 3) - 1) % intCount2 == 0) {
if ((Math.Pow(i, 3) - 1) != 0) {
Console.WriteLine(i);
intHolder += i;
}
}
with
double cubed = Math.Pow(i, 3) - 1; // double, since Math.Pow returns double (see the accuracy caveats above)
if((cubed % intCount2 == 0) && (cubed != 0)) {
Console.WriteLine(i);
intHolder += i;
}
Well, there's something missing or a typo...
"intHolder1" should presumably be "intHolder" and for intCount2=91 to result in 8 the increment line should be:-
intHolder ++;
I don't have a solution to your problem, but here's a piece of advice:
Don't use floating point numbers for calculations that only involve integers. Type int (Int32) is clearly not big enough for your needs, and be careful even with long (Int64): the biggest number you would have to manipulate is (10^11 - 1)^3, which is about 10^33 and far exceeds Int64.MaxValue (about 9.2 * 10^18). Reduce modulo n after each multiplication to keep intermediate values below n^2, and use BigInteger (System.Numerics) for the largest n, since n^2 can still reach about 10^22. Benefits:
calculations on native 64-bit integers are pretty efficient on a 64-bit processor, whenever the values fit
all the results of your calculations are exact, since there are no approximations due to the internal representation of doubles
Don't use Math.Pow to calculate the cube of an integer... x*x*x is just as simple, and more efficient since it doesn't need a conversion to/from double. Anyway, I'm not very good at math, but you probably don't need to calculate x^3 at all... check the links about modular exponentiation in the other answers.