I am trying to solve Project Euler Challenge 5: what is the smallest number that is evenly divisible by all of the numbers from 1 to 20?
My problem is that my isMultiple() method doesn't return true when it should.
My Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Challenge_5
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(isMultiple(2520, 10)); //should return True
            Console.WriteLine(smallestMultiple(20)); //should return 232792560
            Console.ReadLine();
        }

        static int factorial(int n)
        {
            int product = 1;
            for (int i = 1; i <= n; i++)
            {
                product *= i;
            }
            return product; //returns the factorial of n, (n * n-1 * n-2 ... * 1)
        }

        static bool isMultiple(int number, int currentFactor)
        {
            bool returnBool = false;
            if (currentFactor == 1)
            {
                returnBool = true; // if all factors below largestFactor can divide into the number, returns true
            }
            else
            {
                if (number % currentFactor == 0)
                {
                    currentFactor--;
                    isMultiple(number, currentFactor);
                }
            }
            return returnBool;
        }

        static int smallestMultiple(int largestFactor)
        {
            for (int i = largestFactor; i < factorial(largestFactor); i += largestFactor) //goes through all multiples of largestFactor up to largestFactor factorial
            {
                if (isMultiple(i, largestFactor))
                {
                    return i; // if the current number can be evenly divided by all factors, it gets returned
                }
            }
            return factorial(largestFactor); // if no numbers get returned, the factorial is the smallest multiple
        }
    }
}
I know there are much easier ways to solve this, but I want the program to be usable to check the lowest multiple of the numbers from 1 up to any number, not just 20.
Help would be much appreciated.
EDIT
Thanks to the help, I have fixed my code by changing the recursive call in isMultiple from
isMultiple(number, currentFactor);
to
returnBool = isMultiple(number, currentFactor);
I also fixed the problem of not getting an accurate return value from smallestMultiple(20) by changing some of the variables from int to long.
Your problem is that you forgot to use the output of isMultiple in your recursive part:
if (number % currentFactor == 0)
{
    currentFactor--;
    returnBool = isMultiple(number, currentFactor); // you need to save the value here
}
Without assigning returnBool there is no way of knowing if the inner isMultiple returned true or not.
Scott's answer is perfectly valid. Forgive me if I'm wrong, but it sounds like you're a student, so in the interest of education, I thought I'd give you some pointers for cleaning up your code.
When writing recursive functions, it's (in my opinion) usually cleaner to return the recursive call directly if possible, as well as returning base cases directly, rather than storing a value that you return at the end. (It's not always possible, in complicated cases where you have to make a recursive call, modify the return value, and then make another recursive call, but these are uncommon.)
This practice:
makes base cases very obvious, which increases readability and forces you to consider all of your base cases
prevents you from forgetting to assign the return value, as you did in your original code
prevents potential bugs where you might accidentally do some erroneous additional processing that alters the result before you return it
reduces the number of nested if statements, increasing readability by reducing preceding whitespace
makes code flow much more obvious, increasing readability and the ability to debug
usually results in tail recursion, which is best for performance: nearly as performant as iterative code
caveat: nowadays most compilers' optimizers will rejigger production code to produce tail-recursive functions, but getting into the habit of writing tail-recursive code in the first place is good practice, especially as interpreted scripting languages (e.g. JavaScript), where code optimization is less possible by nature, are taking over the world
The other change I'd make is to remove currentFactor--; and move the subtraction into the recursive call itself. It increases readability, reduces the chance of side effects, and, in the case where you don't use tail-recursion, prevents you from altering a value that you later expect to be unaltered. In general, if you can avoid altering values passed into a function (as opposed to a procedure/void), you should.
Also, in this particular case, making this change removes up to 3 assembly instructions and possibly an additional value on the stack depending on how the optimizer handles it. In long running loops with large depths, this can make a difference*.
static bool isMultiple(int number, int currentFactor)
{
    if (currentFactor == 1)
    {
        // if all factors below largestFactor can divide into the number, return true
        return true;
    }

    if (number % currentFactor != 0)
    {
        return false;
    }

    return isMultiple(number, currentFactor - 1);
}
* A personal anecdote regarding deep recursive calls and performance...
A while back, I was writing a program in C++ to enumerate the best moves for all possible Connect-4 games. The maximum depth of the recursive search function was 42 and each depth had up to 7 recursive calls. Initial versions of the code had an estimated running time of 2 million years, and that was using parallelism. Those 3 additional instructions can make a HUGE difference, both for the sheer number of additional instructions and the number of L1 and L2 cache misses.
This algorithm just came to mind, so please correct me if I'm wrong.
Composite numbers are made by multiplying prime numbers (and prime numbers cannot be generated by multiplying any other numbers), so our running product must be multiplied by every prime up to the limit. Some numbers, like 6, are already covered once the product has been multiplied by 2 and 3. For those that are not, multiplying by one more prime (which is necessarily less than the number itself, so it is already in our prime list) makes the product divisible by that number too. So, for example, when we get to 4, multiplying by 2 (a prime less than 4) is enough; 8 and 9 work the same way, and so on.
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 2; i <= Math.Sqrt(x); i++)
        if (x % i == 0) return false;
    return true;
}
int smallest = 1;
List<int> primes = new List<int>();

for (int i = 1; i <= 20; i++)
    if (IsPrime(i))
    {
        smallest *= i;
        primes.Add(i);
    }
    else if (smallest % i != 0)
        for (int j = 0; j < primes.Count; j++)
            if ((primes[j] * smallest) % i == 0)
            {
                smallest *= primes[j];
                break;
            }
Edit:
Since we already have a list of prime numbers, a better way to find out whether a number is prime would be:
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 0; i < primes.Count; i++)
        if (x % primes[i] == 0) return false;
    return true;
}
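For reference, this is roughly how the snippet above could be wrapped into a single method so the result can be checked (the method name and the use of long are my additions; for 20 it should produce 232792560, the expected answer):

static long SmallestMultiple(int upTo)
{
    long smallest = 1;
    List<long> primes = new List<long>();

    for (int i = 2; i <= upTo; i++)
        if (IsPrime(i)) // the trial-division IsPrime from above
        {
            smallest *= i;
            primes.Add(i);
        }
        else if (smallest % i != 0)
            foreach (long p in primes)
                if ((p * smallest) % i == 0)
                {
                    smallest *= p;
                    break;
                }

    return smallest; // SmallestMultiple(20) == 232792560
}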
Related
I tried to use recursion for the problem at hand as follows,
int newlevelgen()
{
    int exampleno = Random.Range(1, 4);

    if (exampleno != lastlevelno)
    {
        lastlevelno = exampleno;
        return exampleno;
    }
    else
    {
        newlevelgen();
    }
    return exampleno;
}
This is my code above; what I want to do is generate a new number without repeating the previous one, but this simply does not work. Help!
The idea is that you get a value between a and b. If that value is greater than or equal to the previous value, then you increase it by one and return it; otherwise, you return it as is. Think it through: it does work.
public int GetUniqueLevelInclusiveOrdinal(int a, int b, int previous)
{
    // given the ordinal numbers from a to b INCLUSIVE,
    // so including a and b,
    // (*** NOTE, no random int calls in Unity or any system
    // work inclusively so take care ***)
    // choose one of the numbers from a to b inclusive
    // but do not choose "previous"

    // which top index to use with Random.Range, which is exclusive?
    int top = b + 1;
    // everyday algorithm for excluding one item from a random run
    int topExcludeOne = top - 1;
    int value = Random.Range(a, topExcludeOne);
    if (value >= previous) return value + 1;
    else return value;
}
This is an extremely well-known, basic, pattern in programming...
int result = UnityEngine.Random.Range(0, highest - 1);
if (result >= exclude)
    return result + 1;
else
    return result;
In Unity you must use extensions:
public static int RandomIndexButNotThis(this int highest, int exclude)
{
    if (highest < 2) Debug.Break();
    int result = UnityEngine.Random.Range(0, highest - 1);
    if (result >= exclude)
        return result + 1;
    else
        return result;
}
To get a random index 0 to 99
Random.Range(0,100)
To get a random index 0 to 99, but excluding 61
100.RandomIndexButNotThis(61)
To get a random index 0 to 9
Random.Range(0,10)
To get a random index 0 to 9, but excluding 8
10.RandomIndexButNotThis(8)
If new to Unity, intro to extensions
Let me preface this by saying that the following is an answer to the problem with the original method that was posted. It is far from the best way to get the desired result, but that's not the focus of this answer. Yes, there is a chance that the recursion will go on for a few calls: a 33% chance of needing another recursive call. That does not mean a 33% chance of an infinite loop, though, since the likelihood of needing X recursive calls is 0.33^X, so the likelihood of reaching 3 recursive calls is only 0.33^3 ~= 0.036.
Still, a different method is advised. See Joe Blow's answer for example.
There's a problem with your recursion. You aren't using the new value that the recursive call to newlevelgen() would give you when exampleno is the same as lastlevelno, so you always return the (possibly duplicate) exampleno value. Change it to:
int newlevelgen()
{
    int exampleno = Random.Range(1, 4);

    if (exampleno != lastlevelno)
    {
        lastlevelno = exampleno;
        return exampleno;
    }
    else
    {
        return newlevelgen();
    }
}
I created a utility to easily handle random without repetitions, flat-distributed random, and weighted lists (universal property drawer included!). You can find it on GitHub and use it freely:
https://github.com/khadzhynov/RandomUtils
Currently I am working on a program that processes extremely large integer numbers.
To avoid hitting int.MaxValue, I wrote a script that processes strings as numbers and splits them up into a List<int>, with entry 0 holding the most significant group. For example, for 123,321,777:
list entry 0: 123 (one hundred twenty-three million)
list entry 1: 321 (three hundred twenty-one thousand)
list entry 2: 777 (seven hundred seventy-seven)
Now my question is: how would one check whether an incoming string value is subtractable from these values?
The start I currently have for subtraction is as follows, but I am getting stuck on the actual subtracting part.
public bool Subtract(string value)
{
    string cleanedNumeric = NumericAndSpaces(value);
    List<string> input = new List<string>(cleanedNumeric.Split(' '));

    // In case 1) the amount is bigger 2) biggest value exceeded by a 10 fold
    // 3) biggest value exceeds the value
    if (input.Count > values.Count ||
        input[input.Count - 1].Length > values[0].ToString().Length ||
        FastParseInt(input[input.Count - 1]) > values[0])
        return false;

    // Flip the array for ease of comparison
    input.Reverse();

    return true;
}
EDIT
The current target for the highest achievable number in this program is a googolplex, and I am limited to .NET 3.5 (Mono).
You should do some testing on this because I haven't run extensive tests but it has worked on the cases I've put it through. Also, it might be worth ensuring that each character in the string is truly a valid integer as this procedure would bomb given a non-integer character. Finally, it expects positive numbers for both subtrahend and minuend.
static void Main(string[] args)
{
    // In subtraction, a subtrahend is subtracted from a minuend to find a difference.
    string minuend = "900000";
    string subtrahend = "900001";
    var isSubtractable = IsSubtractable(subtrahend, minuend);
}
public static bool IsSubtractable(string subtrahend, string minuend)
{
    minuend = minuend.Trim();
    subtrahend = subtrahend.Trim();

    // maybe loop through characters and ensure all are valid integers

    // check if the original number is longer - clearly subtractable
    if (minuend.Length > subtrahend.Length) return true;

    // check if original number is shorter - not subtractable
    if (minuend.Length < subtrahend.Length) return false;

    // at this point we know the strings are the same length, so we'll
    // loop through the characters, one by one, from the start, to determine
    // if the minuend has a higher value character in a column of the number.
    int numberIndex = 0;
    while (numberIndex < minuend.Length)
    {
        Int16 minuendCharValue = Convert.ToInt16(minuend[numberIndex]);
        Int16 subtrahendCharValue = Convert.ToInt16(subtrahend[numberIndex]);
        if (minuendCharValue > subtrahendCharValue) return true;
        if (minuendCharValue < subtrahendCharValue) return false;
        numberIndex++;
    }

    // numbers are the same
    return true;
}
BigInteger (https://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx) is of arbitrary size.
Run this code if you don't believe me
var foo = new BigInteger(2);
while (true)
{
    foo = foo * foo;
}
Things get crazy. My debugger (VS2013) becomes unable to represent the number before it's done. I ran it for a short time and got a number with 1.2 million digits in base 10 from ToString. It is big enough. There is a 2GB limit on objects, which can be overridden in .NET 4.5 with the setting gcAllowVeryLargeObjects.
Now what to do if you are using .NET 3.5? You basically need to reimplement BigInteger (obviously only taking what you need, there is a lot in there).
public class MyBigInteger
{
    uint[] _bits; // you need somewhere to store the value to an arbitrary length.
    ....
You also need to perform maths on these arrays. Here is the Equals method from BigInteger:
public bool Equals(BigInteger other)
{
    AssertValid();
    other.AssertValid();

    if (_sign != other._sign)
        return false;
    if (_bits == other._bits)
        // _sign == other._sign && _bits == null && other._bits == null
        return true;

    if (_bits == null || other._bits == null)
        return false;

    int cu = Length(_bits);
    if (cu != Length(other._bits))
        return false;

    int cuDiff = GetDiffLength(_bits, other._bits, cu);
    return cuDiff == 0;
}
It basically does cheap length and sign comparisons of the backing arrays, then, if that doesn't produce a difference, hands off to GetDiffLength.
internal static int GetDiffLength(uint[] rgu1, uint[] rgu2, int cu)
{
    for (int iv = cu; --iv >= 0; )
    {
        if (rgu1[iv] != rgu2[iv])
            return iv + 1;
    }
    return 0;
}
Which does the expensive check of looping through the arrays looking for a difference.
All your math will have to follow this pattern and can largely be ripped off from the .NET source code.
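For instance, here is a rough sketch (mine, not taken from the BigInteger source) of what addition over such uint[] limbs could look like, assuming the least significant limb is stored first:

// Sketch only: adds two unsigned magnitudes stored least-significant uint first.
static uint[] Add(uint[] left, uint[] right)
{
    uint[] longer = left.Length >= right.Length ? left : right;
    uint[] shorter = left.Length >= right.Length ? right : left;
    uint[] result = new uint[longer.Length + 1]; // +1 for a possible final carry

    ulong carry = 0;
    for (int i = 0; i < longer.Length; i++)
    {
        ulong sum = carry + longer[i] + (i < shorter.Length ? shorter[i] : 0u);
        result[i] = (uint)sum; // keep the low 32 bits
        carry = sum >> 32;     // anything above becomes the carry
    }
    result[longer.Length] = (uint)carry; // may leave a leading zero limb

    return result;
}

Subtraction, multiplication and division follow the same limb-by-limb pattern, just with more bookkeeping.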
Googolplex and 2GB:
Here the 2GB limit becomes a problem, because you would need an object of about 3.867×10^90 gigabytes. This is the point where you give up, or get clever and store numbers as powers at the cost of not being able to represent most of them. *2
If you moderate your expectations, it doesn't actually change the maths of BigInteger to split _bits into a jagged array *1. You change the cheap checks a bit: rather than checking the size of the array, you check the number of sub-arrays and then the size of the last one. The loop then needs to be a bit (but not much) more complex, in that it does an element-wise comparison for each sub-array. There are other changes as well, but it's by no means impossible, and it gets you out of the 2GB limit.
*1 Note: use jagged arrays [][], not multidimensional arrays [,], which are still subject to the same limit.
*2 I.e. give up on precision and store a mantissa and an exponent. If you look at how floating-point numbers are implemented, they can't represent all numbers between their max and min (as the number of real numbers in a range is 'bigger' than infinite). They make a complex trade-off between precision and range. If you want to do this, looking at float implementations will be a lot more useful than thinking about integer representations like BigInteger.
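Coming back to *1, a rough sketch (the class and member names are mine, not from the BigInteger source) of what the cheap checks and the comparison loop might look like over jagged storage:

// Sketch only: magnitude stored as uint[][], least significant limb first;
// every sub-array except the last is assumed to be filled to a fixed chunk size.
class HugeInteger
{
    public uint[][] Chunks;

    bool SameLength(HugeInteger other)
    {
        if (Chunks.Length != other.Chunks.Length)
            return false;
        // only the last sub-array can differ in size
        return Chunks[Chunks.Length - 1].Length == other.Chunks[other.Chunks.Length - 1].Length;
    }

    public bool IsEqualTo(HugeInteger other)
    {
        if (!SameLength(other))
            return false;

        // element-wise comparison, one sub-array at a time
        for (int i = 0; i < Chunks.Length; i++)
            for (int j = 0; j < Chunks[i].Length; j++)
                if (Chunks[i][j] != other.Chunks[i][j])
                    return false;

        return true;
    }
}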
I have tried to write code for the Fermat primality test, but apparently failed.
So if I understood it well: if p is prime, then ((a^p) - a) % p = 0 where p % a != 0.
My code seems to be OK, therefore most likely I misunderstood the basics. What am I missing here?
private bool IsPrime(int candidate)
{
    //checking if candidate = 0 || 1 || 2

    int a = candidate + 1; //candidate can't be divisor of candidate+1

    if ((Math.Pow(a, candidate) - a) % candidate == 0) return true;

    return false;
}
Reading the Wikipedia article on the Fermat primality test, you must choose an a that is less than the candidate you are testing, not greater.
Furthermore, as MattW commented, testing only a single a won't give you a conclusive answer as to whether the candidate is prime. You must test many possible values of a before you can decide that a number is probably prime. And even then, some numbers may appear to be prime but actually be composite.
Your basic algorithm is correct, though you will have to use a larger data type than int if you want to do this for non-trivial numbers.
You should not implement the modular exponentiation in the way that you did, because the intermediate result is huge. Here is the square-and-multiply algorithm for modular exponentiation:
function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x, e := (x*b) % m, e-1
        else
            b, e := (b*b) % m, e//2
    return x
As an example, 437^13 (mod 1741) = 819. If you use the algorithm shown above, no intermediate result will be greater than 1740 * 1740 = 3027600. But if you perform the exponentiation first, the intermediate result of 437^13 is 21196232792890476235164446315006597, which you probably want to avoid.
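Translated to C#, a sketch of the same algorithm using long (note that the products x*b and b*b can still overflow a long if m is larger than about 3 billion; past that you would want BigInteger):

static long PowerMod(long b, long e, long m)
{
    long x = 1;
    b %= m;
    while (e > 0)
    {
        if (e % 2 == 1) // odd exponent: fold one factor of b into the result
        {
            x = (x * b) % m;
            e -= 1;
        }
        else            // even exponent: square the base, halve the exponent
        {
            b = (b * b) % m;
            e /= 2;
        }
    }
    return x;
}

// PowerMod(437, 13, 1741) == 819, matching the example above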
Even with all of that, the Fermat test is imperfect. There are some composite numbers, the Carmichael numbers, that will always report prime no matter what witness you choose. Look for the Miller-Rabin test if you want something that will work better. I modestly recommend this essay on Programming with Prime Numbers at my blog.
You are dealing with very large numbers and trying to store them in doubles, which are only 64 bits.
The double will do the best it can to hold your number, but you are going to lose some accuracy.
An alternative approach:
Remember that the mod operator can be applied multiple times, and still give the same result.
So, to avoid getting massive numbers you could apply the mod operator during the calculation of your power.
Something like:
private bool IsPrime(int candidate)
{
    //checking if candidate = 0 || 1 || 2

    int a = candidate - 1; //candidate can't be divisor of candidate - 1

    int result = 1;
    for (int i = 0; i < candidate; i++)
    {
        result = result * a;
        //Notice that without the following line,
        //this method is essentially the same as your own.
        //All this line does is keeps the numbers small and manageable.
        result = result % candidate;
    }
    result -= a;

    return result == 0;
}
I'm trying to nail down some interview questions, so I started with a simple one.
Design the factorial function.
This function is a leaf (no dependencies, easily testable), so I made it static inside a helper class.
public static class MathHelper
{
    public static int Factorial(int n)
    {
        Debug.Assert(n >= 0);
        if (n < 0)
        {
            throw new ArgumentException("n cannot be lower than 0");
        }

        Debug.Assert(n <= 12);
        if (n > 12)
        {
            throw new OverflowException("Overflow occurs above 12 factorial");
        }

        int factorialOfN = 1;
        for (int i = 1; i <= n; ++i)
        {
            //checked
            //{
            factorialOfN *= i;
            //}
        }

        return factorialOfN;
    }
}
Testing:
[TestMethod]
[ExpectedException(typeof(OverflowException))]
public void Overflow()
{
    int temp = FactorialHelper.MathHelper.Factorial(40);
}

[TestMethod]
public void ZeroTest()
{
    int factorialOfZero = FactorialHelper.MathHelper.Factorial(0);
    Assert.AreEqual(1, factorialOfZero);
}

[TestMethod]
public void FactorialOf5()
{
    int factOf5 = FactorialHelper.MathHelper.Factorial(5);
    Assert.AreEqual(5 * 4 * 3 * 2 * 1, factOf5);
}

[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void NegativeTest()
{
    int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5);
}
I have a few questions:
Is it correct? (I hope so ;) )
Does it throw right exceptions?
Should I use a checked context, or is this trick (n > 12) OK?
Is it better to use uint instead of checking for negative values?
Future improvement: overloads for long, decimal, BigInteger, or maybe a generic method?
Thank you
It looks right to me, but it would be inefficient with larger numbers. If you're allowing for big integers, the number will keep growing with each multiply, so you would see a tremendous (asymptotically better) increase in speed if you multiplied them hierarchically. For example:
BigInteger myFactorial(uint first, uint last)
{
    if (first == last) return first;
    uint mid = first + (last - first) / 2;
    return myFactorial(first, mid) * myFactorial(1 + mid, last);
}

BigInteger factorial(uint n) // assumes n >= 2
{
    return myFactorial(2, n);
}
If you really want a fast factorial method, you also might consider something like this:
Factor the factorial with a modified Sieve of Eratosthenes
Compute the powers of each prime factor using a fast exponentiation algorithm (and fast multiplication and square algorithms)
Multiply all the powers of primes together hierarchically
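A rough sketch of that approach (the method name is mine; the exponent of each prime in n! comes from Legendre's formula, n/p + n/p^2 + ..., BigInteger.Pow stands in for the fast exponentiation step, and the prime powers are multiplied sequentially rather than hierarchically for brevity):

// Sketch only; requires System.Numerics (.NET 4+) for BigInteger.
static BigInteger FactorialByPrimes(int n)
{
    // 1. sieve the primes up to n
    bool[] isComposite = new bool[n + 1];
    var primes = new List<int>();
    for (int i = 2; i <= n; i++)
    {
        if (isComposite[i]) continue;
        primes.Add(i);
        for (long j = (long)i * i; j <= n; j += i)
            isComposite[j] = true;
    }

    // 2. and 3. compute each prime's exponent in n! and multiply the prime powers
    BigInteger result = BigInteger.One;
    foreach (int p in primes)
    {
        int exponent = 0;
        for (long pk = p; pk <= n; pk *= p) // Legendre's formula
            exponent += n / (int)pk;
        result *= BigInteger.Pow(p, exponent);
    }
    return result;
}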
Yes, it looks right
The exceptions seem OK to me, and also as an interviewer, I can't see myself being concerned there
Checked. Also, in an interview, you'd never know that 12 just happened to be the right number.
Uint. If you can enforce something with a signature instead of an exception, do it.
You should just make it long (or BigInteger) and be done with it (int is a silly choice of return type here)
Here are some follow-up questions I'd ask if I were your interviewer:
Why didn't you solve this recursively? Factorial is a naturally recursive problem.
Can you add memoization to this so that it does a faster job computing 12! if it's already done 11!?
Do you need the n==0 case here?
As an interviewer, I'd definitely have some curveballs like that to throw at you. In general, I like the approach of practicing with a whiteboard and a mock interviewer, because so much of it is being nimble and thinking on your feet.
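For the memoization follow-up, a minimal sketch (my own layout; it uses long to sidestep the 12! ceiling of the int version, and omits the overflow check for brevity, since long itself overflows past 20!):

private static readonly List<long> cache = new List<long> { 1 }; // cache[0] = 0! = 1

public static long Factorial(int n)
{
    if (n < 0) throw new ArgumentException("n cannot be lower than 0");
    // extend the cache only as far as needed, so 11! is reused when computing 12!
    for (int i = cache.Count; i <= n; i++)
        cache.Add(cache[i - 1] * i);
    return cache[n];
}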
In the for loop you can start with for (int i = 2; ...); multiplying by 1 is quite useless.
I would throw a single ArgumentOutOfRangeException for both < 0 and > 12. The Debug.Assert will mask the exception when you are running your unit tests (you would have to test it in Release mode).
I was wondering if it is possible to find the largest prime factor of a number by using modulos in C#. In other words, if i % x == 0 then we could break a for loop or something like that, where x is equal to all natural numbers below our i value.
How would I specify all the natural numbers below our i value as values for the x variable? It becomes a little tedious to write out conditionals for every single integer, if you know what I'm saying.
By the way, I'm sure there is a far easier way to do this in C#, so please let me know about it if you have an idea, but I'd also like to try and solve it this way, just to see if I can do it with my beginner knowledge.
Here is my current code if you want to see what I have so far:
static void Main()
{
    int largestPrimeFactor = 0;

    for (long i = 98739853; i <= 98739853; i--)
    {
        if (true)
        {
            largestPrimeFactor += (int) i;
            break;
        }
    }

    Console.WriteLine(largestPrimeFactor);
    Console.ReadLine();
}
If I were to do this using a loop and modulos, I would do:
long number = 98739853;
long biggestdiv = 1;

while (number % 2 == 0) //get rid of even numbers
{
    number /= 2;
    biggestdiv = 2;
}

long divisor = 3;
while (number != 1)
{
    while (number % divisor == 0)
    {
        number /= divisor;
        biggestdiv = divisor;
    }
    divisor += 2;
}
In the end, biggestdiv would be the largest prime factor.
Note: this code is written directly in the browser. I didn't try to compile or run it; it is only for showing my concept. There might be algorithm mistakes; if there are, let me know. I'm aware of the fact that it is not optimized at all (I think a sieve is the best for this).
EDIT:
fixed: previous code would return 1 when the number was prime.
fixed: previous code would end in a loop, leading to overflow of divisor, when the number was a power of 2.
Ooh, this sounds like a fun use for iterator blocks. Don't turn this in to your professor, though:
private static List<int> primes = new List<int>() { 2 };

public static IEnumerable<int> Primes()
{
    int p = 2;
    foreach (int i in primes) { p = i; yield return p; }
    while (p < int.MaxValue)
    {
        p++;
        if (!primes.Any(i => p % i == 0))
        {
            primes.Add(p);
            yield return p;
        }
    }
}
public int LargestPrimeFactor(int n)
{
    return Primes().TakeWhile(p => p <= Math.Sqrt(n)).Where(p => n % p == 0).Last();
}
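A quick sanity check of the generator part on its own (usage sketch; Take needs System.Linq, which the code above already uses):

// prints the first ten primes: 2 3 5 7 11 13 17 19 23 29
foreach (int p in Primes().Take(10))
    Console.Write(p + " ");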
I'm not quite sure what your question is: perhaps you need a loop over the numbers? However, there are two clear problems with your code:
Your for loop has the same start and end value, i.e. it will run once and once only.
You have a break before the largestPrimeFactor sum. This sum will NEVER execute, because break will stop the for loop (and hence execution of that block). The compiler should be giving a warning that this sum is unreachable.