I learned about quicksort and how it can be implemented both recursively and iteratively.
In the iterative method:
Push the range (0...n) onto the stack.
Pop the top range from the stack.
Partition that range of the array around a pivot.
Push the resulting partitions (index ranges) onto the stack if the range has more than one element.
Repeat the last three steps until the stack is empty.
And the recursive version is the normal one defined on the wiki.
I learned that recursive algorithms are always slower than their iterative counterparts.
So, which method is preferred in terms of time complexity (memory is not a concern)?
Which one is fast enough to use in a programming contest?
Is the C++ STL sort() using a recursive approach?
In terms of (asymptotic) time complexity, they are both the same.
"Recursive is slower than iterative" - the rationale behind this statement is the overhead of the recursion stack (saving and restoring the environment between calls).
However, this is a constant number of operations per call, and it does not change the number of "iterations".
Both recursive and iterative quicksort are O(n log n) average case and O(n^2) worst case.
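To spell out why (my sketch of the standard argument, not part of the original answer): if a partition step splits a range of n elements into pieces of size k and n-k-1, the running time of either version satisfies

T(n) = T(k) + T(n-k-1) + O(n)

With balanced splits (k about n/2) this gives T(n) = 2T(n/2) + O(n) = O(n log n); with the worst split (k = 0 every time) it gives T(n) = T(n-1) + O(n) = O(n^2). Whether the two sub-ranges are processed by recursive calls or by pushing them on an explicit stack does not change this count, only the constant cost per "call".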
EDIT:
Just for the fun of it, I ran a benchmark with the (Java) code attached to the post, and then ran a Wilcoxon statistical test to check the probability that the running times are indeed distinct.
The results are conclusive (P_VALUE=2.6e-34, https://en.wikipedia.org/wiki/P-value. Remember that the P_VALUE is P(T >= t | H), where T is the test statistic and H is the null hypothesis). But the answer is not what you expected.
The average of the iterative solution was 408.86 ms, while the recursive one averaged 236.81 ms.
(Note - I used Integer and not int as the argument type of recursiveQsort() - otherwise the recursive version would have done much better, because it would not have to box a lot of integers, which is also time consuming. I did it because the iterative solution has no choice but to do so.)
Thus, your assumption is not true: the recursive solution is faster (on my machine and JVM, at the very least) than the iterative one, with P_VALUE=2.6e-34.
public static void recursiveQsort(int[] arr,Integer start, Integer end) {
if (end - start < 2) return; //stop clause
int p = start + ((end-start)/2);
p = partition(arr,p,start,end);
recursiveQsort(arr, start, p);
recursiveQsort(arr, p+1, end);
}
public static void iterativeQsort(int[] arr) {
Stack<Integer> stack = new Stack<Integer>();
stack.push(0);
stack.push(arr.length);
while (!stack.isEmpty()) {
int end = stack.pop();
int start = stack.pop();
if (end - start < 2) continue;
int p = start + ((end-start)/2);
p = partition(arr,p,start,end);
stack.push(p+1);
stack.push(end);
stack.push(start);
stack.push(p);
}
}
private static int partition(int[] arr, int p, int start, int end) {
int l = start;
int h = end - 2;
int piv = arr[p];
swap(arr,p,end-1);
while (l < h) {
if (arr[l] < piv) {
l++;
} else if (arr[h] >= piv) {
h--;
} else {
swap(arr,l,h);
}
}
int idx = h;
if (arr[h] < piv) idx++;
swap(arr,end-1,idx);
return idx;
}
private static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
public static void main(String... args) throws Exception {
Random r = new Random(1);
int SIZE = 1000000;
int N = 100;
int[] arr = new int[SIZE];
int[] millisRecursive = new int[N];
int[] millisIterative = new int[N];
for (int t = 0; t < N; t++) {
for (int i = 0; i < SIZE; i++) {
arr[i] = r.nextInt(SIZE);
}
int[] tempArr = Arrays.copyOf(arr, arr.length);
long start = System.currentTimeMillis();
iterativeQsort(tempArr);
millisIterative[t] = (int)(System.currentTimeMillis()-start);
tempArr = Arrays.copyOf(arr, arr.length);
start = System.currentTimeMillis();
recursiveQsort(tempArr,0,arr.length);
millisRecursive[t] = (int)(System.currentTimeMillis()-start);
}
int sum = 0;
for (int x : millisRecursive) {
System.out.println(x);
sum += x;
}
System.out.println("end of recursive. AVG = " + ((double)sum)/millisRecursive.length);
sum = 0;
for (int x : millisIterative) {
System.out.println(x);
sum += x;
}
System.out.println("end of iterative. AVG = " + ((double)sum)/millisIterative.length);
}
Recursion is NOT always slower than iteration. Quicksort is a perfect example of it. The only way to do it iteratively is to create an explicit stack structure - in other words, to do the same thing the compiler does when you use recursion, and you will probably do it worse than the compiler. There will also be more jumps if you don't use recursion (to push and pop values on that stack).
That's the solution I came up with in JavaScript. I think it works.
const myArr = [33, 103, 3, 726, 200, 984, 198, 764, 9]
document.write('initial order :', JSON.stringify(myArr), '<br><br>')
qs_iter(myArr)
document.write('_Final order :', JSON.stringify(myArr))
function qs_iter(items) {
if (!items || items.length <= 1) {
return items
}
var stack = []
var low = 0
var high = items.length - 1
stack.push([low, high])
while (stack.length) {
var range = stack.pop()
low = range[0]
high = range[1]
if (low < high) {
var pivot = Math.floor((low + high) / 2)
stack.push([low, pivot])
stack.push([pivot + 1, high])
while (low < high) {
while (low < pivot && items[low] <= items[pivot]) low++
while (high > pivot && items[high] > items[pivot]) high--
if (low < high) {
var tmp = items[low]
items[low] = items[high]
items[high] = tmp
}
}
}
}
return items
}
Let me know if you find a mistake :)
Mister Jojo UPDATE:
this code just shuffles values around; it could lead to a sorted array only in rare cases - in other words, never.
For those who have a doubt, I put it in a snippet.
Given two arrays as parameters (x and y), find the starting index of the first occurrence of y in x. I am wondering what the simplest or the fastest implementation would be.
Example:
when x = {1,2,4,2,3,4,5,6}
and y = {2,3},
the resulting starting index should be 3
Update: Since my code is wrong I removed it from the question.
Simplest to write?
return (from i in Enumerable.Range(0, 1 + x.Length - y.Length)
where x.Skip(i).Take(y.Length).SequenceEqual(y)
select (int?)i).FirstOrDefault().GetValueOrDefault(-1);
Not quite as efficient, of course... a bit more like it:
private static bool IsSubArrayEqual(int[] x, int[] y, int start) {
for (int i = 0; i < y.Length; i++) {
if (x[start++] != y[i]) return false;
}
return true;
}
public static int StartingIndex(this int[] x, int[] y) {
int max = 1 + x.Length - y.Length;
for(int i = 0 ; i < max ; i++) {
if(IsSubArrayEqual(x,y,i)) return i;
}
return -1;
}
Here is a simple (yet fairly efficient) implementation that finds all occurrences of the array, not just the first one:
static class ArrayExtensions {
public static IEnumerable<int> StartingIndex(this int[] x, int[] y) {
IEnumerable<int> index = Enumerable.Range(0, x.Length - y.Length + 1);
for (int i = 0; i < y.Length; i++) {
index = index.Where(n => x[n + i] == y[i]).ToArray();
}
return index;
}
}
Example:
int[] x = { 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 };
int[] y = { 2, 3 };
foreach (int i in x.StartingIndex(y)) {
Console.WriteLine(i);
}
Output:
1
5
9
The method first loops through the x array to find all occurrences of the first item in the y array, and places their indexes in the index sequence. Then it goes on to reduce the matches by checking which of those also match the second item in the y array. When all items in the y array have been checked, the index sequence contains only the full matches.
Edit:
An alternative implementation would be to remove the ToArray call from the statement in the loop, making it just:
index = index.Where(n => x[n + i] == y[i]);
This would totally change how the method works. Instead of looping through the items level by level, it would return an enumerator with nested expressions, deferring the search to the time when the enumerator was iterated. That means that you could get only the first match if you wanted:
int index = x.StartingIndex(y).First();
This would not find all matches and then return the first, it would just search until the first was found and then return it.
The simplest way is probably this:
public static class ArrayExtensions
{
private static bool isMatch(int[] x, int[] y, int index)
{
for (int j = 0; j < y.Length; ++j)
if (x[j + index] != y[j]) return false;
return true;
}
public static int IndexOf(this int[] x, int[] y)
{
for (int i = 0; i < x.Length - y.Length + 1; ++i)
if (isMatch(x, y, i)) return i;
return -1;
}
}
But it's definitely not the fastest way.
This is based on Mark Gravell's answer, but I made it generic and added some simple bounds checking to keep exceptions from being thrown:
private static bool IsSubArrayEqual<T>(T[] source, T[] compare, int start) where T:IEquatable<T>
{
if (compare.Length > source.Length - start)
{
//If the compare array is longer than the remaining test area it cannot be a match.
return false;
}
for (int i = 0; i < compare.Length; i++)
{
if (source[start++].Equals(compare[i]) == false) return false;
}
return true;
}
This could be improved further by implementing Boyer-Moore, but for short patterns it works fine.
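For reference, here is a minimal sketch of the Boyer-Moore-Horspool variant adapted to int arrays - my own illustration of the idea mentioned above, not code from this answer, and the method name HorspoolIndexOf is made up (it assumes using System.Collections.Generic):

static int HorspoolIndexOf(int[] source, int[] pattern)
{
    if (pattern.Length == 0) return 0;
    if (pattern.Length > source.Length) return -1;
    // Bad-character table: how far we may shift when the source element
    // aligned with the end of the pattern has a given value.
    var shift = new Dictionary<int, int>();
    for (int i = 0; i < pattern.Length - 1; i++)
        shift[pattern[i]] = pattern.Length - 1 - i;
    int pos = 0;
    while (pos <= source.Length - pattern.Length)
    {
        int j = pattern.Length - 1;
        while (j >= 0 && source[pos + j] == pattern[j]) j--;
        if (j < 0) return pos; // full match at pos
        int last = source[pos + pattern.Length - 1];
        pos += shift.TryGetValue(last, out int s) ? s : pattern.Length;
    }
    return -1;
}

The skip table only pays off for longer patterns, where a mismatch lets you jump several positions at a time; for the short patterns discussed here the straightforward loop in the answer above is usually just as fast.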
"Simplest" and "fastest" are opposites in this case, and besides, in order to describe fast algorithms we need to know lots of things about how the source array and the search array are related to each other.
This is essentially the same problem as finding a substring inside a string. Suppose you are looking for "fox" in "the quick brown fox jumps over the lazy dog". The naive string matching algorithm is extremely good in this case. If you are searching for "bananananananananananananananana" inside a million-character string that is of the form "banananananabanananabananabananabanananananbananana..." then the naive substring matching algorithm is terrible -- far faster results can be obtained by using more complex and sophisticated string matching algorithms. Basically, the naive algorithm is O(nm) where n and m are the lengths of the source and search strings. There are O(n+m) algorithms but they are far more complex.
Can you tell us more about the data you're searching? How big is it, how redundant is it, how long are the search arrays, and what is the likelihood of a bad match?
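For completeness, here is a rough sketch of one of those O(n+m) approaches (Knuth-Morris-Pratt) adapted to int arrays - my own illustrative adaptation, not code from this answer, and usually only worth the extra complexity for pathological inputs like the banana example:

static int KmpIndexOf(int[] x, int[] y)
{
    if (y.Length == 0) return 0;
    // failure[i] = length of the longest proper prefix of y that is also
    // a suffix of y[0..i]; this is what lets us avoid re-scanning x.
    var failure = new int[y.Length];
    for (int i = 1, k = 0; i < y.Length; i++)
    {
        while (k > 0 && y[i] != y[k]) k = failure[k - 1];
        if (y[i] == y[k]) k++;
        failure[i] = k;
    }
    for (int i = 0, k = 0; i < x.Length; i++)
    {
        while (k > 0 && x[i] != y[k]) k = failure[k - 1];
        if (x[i] == y[k]) k++;
        if (k == y.Length) return i - y.Length + 1; // match ends at position i
    }
    return -1;
}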
I find something along the following lines more intuitive, but that may be a matter of taste.
public static class ArrayExtensions
{
public static int StartingIndex(this int[] x, int[] y)
{
var xIndex = 0;
while(xIndex < x.Length)
{
var found = xIndex;
var yIndex = 0;
while(yIndex < y.Length && xIndex < x.Length && x[xIndex] == y[yIndex])
{
xIndex++;
yIndex++;
}
if(yIndex == y.Length)
{
return found;
}
xIndex = found + 1;
}
return -1;
}
}
This code also addresses an issue I believe your implementation may have in cases like x = {3, 3, 7}, y = {3, 7}. I think what would happen with your code is that it matches the first number, then resets itself on the second, but starts matching again on the third, rather than stepping back to the index just after where it started matching. I may be missing something, but it's definitely something to consider and should be easily fixable in your code.
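A quick check of that case with the extension method above (a hypothetical usage example I added, not from the original post):

int[] x = { 3, 3, 7 };
int[] y = { 3, 7 };
Console.WriteLine(x.StartingIndex(y)); // prints 1: the match starts at the second 3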
//this is the best in C#
//bool contains(array,subarray)
// when find (subarray[0])
// while subarray[next] IS OK
// subarray.end then Return True
public static bool ContainSubArray<T>(T[] findIn, out int found_index,
params T[] toFind)
{
found_index = -1;
if (toFind.Length < findIn.Length)
{
int index = 0;
Func<int, bool> NextOk = (i) =>
{
if(index < findIn.Length-1)
return findIn[++index].Equals(toFind[i]);
return false;
};
//----------
int n=0;
for (; index < findIn.Length; index++)
{
if (findIn[index].Equals(toFind[0]))
{
found_index=index;n=1;
while (n < toFind.Length && NextOk(n))
n++;
}
if (n == toFind.Length)
{
return true;
}
}
}
return false;
}
using System;
using System.Linq;
public class Test
{
public static void Main()
{
int[] x = {1,2,4,2,3,4,5,6};
int[] y = {2,3};
int? index = null;
for(int i=0; i<x.Length; ++i)
{
if (y.SequenceEqual(x.Skip(i).Take(y.Length)))
{
index = i;
break;
}
}
Console.WriteLine($"{index}");
}
}
Output
3
I want to find the prime numbers between 0 and a long variable, but I am not able to get any output.
The program is:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication16
{
class Program
{
void prime_num(long num)
{
bool isPrime = true;
for (int i = 0; i <= num; i++)
{
for (int j = 2; j <= num; j++)
{
if (i != j && i % j == 0)
{
isPrime = false;
break;
}
}
if (isPrime)
{
Console.WriteLine ( "Prime:" + i );
}
isPrime = true;
}
}
static void Main(string[] args)
{
Program p = new Program();
p.prime_num (999999999999999L);
Console.ReadLine();
}
}
}
Can anyone help me out and find the possible error in the program?
You can do this faster using a nearly optimal trial division sieve in one (long) line like this:
Enumerable.Range(0, Math.Floor(2.52*Math.Sqrt(num)/Math.Log(num))).Aggregate(
Enumerable.Range(2, num-1).ToList(),
(result, index) => {
var bp = result[index]; var sqr = bp * bp;
result.RemoveAll(i => i >= sqr && i % bp == 0);
return result;
}
);
The approximation formula for number of primes used here is π(x) < 1.26 x / ln(x). We only need to test by primes not greater than x = sqrt(num).
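As a quick sanity check of that bound (my own numbers, not from the answer): for num = 1,000,000 we have x = sqrt(num) = 1000, the formula gives 1.26 * 1000 / ln(1000) ≈ 182, and the actual count is π(1000) = 168, so taking that many entries from the first Range always gives the Aggregate enough base primes to work with.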
Note that the sieve of Eratosthenes has much better run time complexity than trial division (should run much faster for bigger num values, when properly implemented).
Try this:
void prime_num(long num)
{
// bool isPrime = true;
for (long i = 2; i <= num; i++) // start at 2: 0 and 1 are not prime
{
bool isPrime = true; // Move initialization to here
for (long j = 2; j < i; j++) // you actually only need to check up to sqrt(i)
{
if (i % j == 0) // you don't need the first condition
{
isPrime = false;
break;
}
}
if (isPrime)
{
Console.WriteLine ( "Prime:" + i );
}
// isPrime = true;
}
}
You only need to check odd divisors up to the square root of the number. In other words your inner loop needs to start:
for (int j = 3; j <= Math.Sqrt(i); j+=2) { ... }
You can also break out of the function as soon as you find the number is not prime, you don't need to check any more divisors (I see you're already doing that!).
This will only work if num is bigger than two.
No Sqrt
You can avoid the Sqrt altogether by keeping a running sum. For example:
int square_sum=1;
for (int j=3; square_sum<i; square_sum+=4*(j-1), j+=2) {...}
This is because the sums 1, 1+(3+5), 1+(3+5)+(7+9), ... give the sequence of odd squares (1, 9, 25, etc.), so square_sum is always the square of the last odd divisor already tested. As long as square_sum is less than i, that divisor is still below the square root of i, so the loop needs to keep testing.
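Putting the pieces of this answer together, a minimal sketch of such a primality test might look like this (my own illustration, not the original poster's code; I start the running sum at 9 = 3*3 so the loop condition is exactly j*j <= i, and I assume even numbers and numbers below 3 are handled separately):

// trial division by odd divisors only, tracking j*j with a running sum instead of calling Math.Sqrt
static bool IsOddNumberPrime(long i) // assumes i is odd and i >= 3
{
    long square = 9; // 3 * 3
    for (long j = 3; square <= i; square += 4 * (j + 1), j += 2)
    {
        if (i % j == 0) return false; // found an odd divisor no larger than sqrt(i)
    }
    return true;
}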
People have mentioned a couple of the building blocks toward doing this efficiently, but nobody's really put the pieces together. The sieve of Eratosthenes is a good start, but with it you'll run out of memory long before you reach the limit you've set. That doesn't mean it's useless though -- when you're doing your loop, what you really care about are prime divisors. As such, you can start by using the sieve to create a base of prime divisors, then use those in the loop to test numbers for primality.
When you write the loop, however, you really do NOT want to use sqrt(i) in the loop condition as a couple of answers have suggested. You and I know that sqrt is a "pure" function that always gives the same answer if given the same input parameter. Unfortunately, the compiler does NOT know that, so if you use something like '<=Math.sqrt(x)' in the loop condition, it'll re-compute the sqrt of the number every iteration of the loop.
You can avoid that a couple of different ways. You can either pre-compute the sqrt before the loop, and use the pre-computed value in the loop condition, or you can work in the other direction, and change i<Math.sqrt(x) to i*i<x. Personally, I'd pre-compute the square root though -- I think it's clearer and probably a bit faster--but that depends on the number of iterations of the loop (the i*i means it's still doing a multiplication in the loop). With only a few iterations, i*i will typically be faster. With enough iterations, the loss from i*i every iteration outweighs the time for executing sqrt once outside the loop.
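As a tiny illustration of the two options (my own snippet, not from the answer; x stands for the candidate being tested):

// option 1: hoist the square root out of the loop
int limit = (int)Math.Sqrt(x);
for (int i = 3; i <= limit; i += 2) { /* test x % i here */ }

// option 2: multiply instead of taking a root
for (int i = 3; (long)i * i <= x; i += 2) { /* test x % i here */ }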
That's probably adequate for the size of numbers you're dealing with -- a 15 digit limit means the square root is 7 or 8 digits, which fits in a pretty reasonable amount of memory. On the other hand, if you want to deal with numbers in this range a lot, you might want to look at some of the more sophisticated prime-checking algorithms, such as Pollard's or Brent's algorithms. These are more complex (to put it mildly) but a lot faster for large numbers.
There are other algorithms for even bigger numbers (quadratic sieve, general number field sieve) but we won't get into them for the moment -- they're a lot more complex, and really only useful for dealing with really big numbers (the GNFS starts to be useful in the 100+ digit range).
First step: write an extension method to find out if an input is prime
public static bool isPrime(this int number ) {
if (number < 2) return false; // 0 and 1 are not prime
for (int i = 2; i < number; i++) {
if (number % i == 0) {
return false;
}
}
return true;
}
Step 2: write the method that will print all prime numbers that are between 0 and the input number
public static void getAllPrimes(int number)
{
for (int i = 0; i < number; i++)
{
if (i.isPrime()) Console.WriteLine(i);
}
}
It may just be my opinion, but there's another serious error in your program (setting aside the given 'prime number' question, which has been thoroughly answered).
Like the rest of the responders, I'm assuming this is homework, which indicates you want to become a developer (presumably).
You need to learn to compartmentalize your code. It's not something you'll always need to do in a project, but it's good to know how to do it.
Your method prime_num(long num) could stand a better, more descriptive name. And if it is supposed to find all prime numbers less than a given number, it should return them as a list. This makes it easier to separate your display and your functionality.
If it simply returned an IList containing prime numbers you could then display them in your main function (perhaps calling another outside function to pretty print them) or use them in further calculations down the line.
So my best recommendation to you is to do something like this:
public static void Main(string[] args)
{
//Get the number you want to use as input
long x = number; //'number' can be hard coded or retrieved from ReadLine() or from the given arguments
IList<long> primes = FindSmallerPrimes(x);
DisplayPrimes(primes);
}
public IList<long> FindSmallerPrimes(long largestNumber)
{
List<long> returnList = new List<long>();
//Find the primes, using a method as described by another answer, add them to returnList
return returnList;
}
public void DisplayPrimes(IList<long> primes)
{
foreach(long l in primes)
{
Console.WriteLine ( "Prime:" + l.ToString() );
}
}
Even if you end up working somewhere where separation like this isn't needed, it's good to know how to do it.
EDIT_ADD: If Will Ness is correct that the question's purpose is just to output a continuous stream of primes for as long as the program is run (pressing Pause/Break to pause and any key to start again), with no serious hope of ever getting to that upper limit, then the code should be written with no upper limit argument and a range check of "true" for the first 'i' for loop. On the other hand, if the question wanted to actually print the primes up to a limit, then the following code will do the job much more efficiently using Trial Division only for odd numbers, with the advantage that it doesn't use memory at all (it could also be converted to a continuous loop as per the above):
static void primesttt(ulong top_number) {
Console.WriteLine("Prime: 2");
for (var i = 3UL; i <= top_number; i += 2) {
var isPrime = true;
for (uint j = 3u, lim = (uint)Math.Sqrt((double)i); j <= lim; j += 2) {
if (i % j == 0) {
isPrime = false;
break;
}
}
if (isPrime) Console.WriteLine("Prime: {0} ", i);
}
}
First, the question's code produces no output because its loop variables are integers while the limit tested is a huge long integer, meaning that it is impossible for the loop to reach the limit, producing an inner loop EDITED: whereby the variable 'j' loops back around to negative numbers; when the 'j' variable comes back around to -1, the tested number fails the prime test because all numbers are evenly divisible by -1 END_EDIT. Even if this were corrected, the question's code produces very slow output because it gets bound up doing 64-bit divisions of very large quantities of composite numbers (all the even numbers plus the odd composites) by the whole range of numbers up to that top number of ten raised to the sixteenth power for each prime that it can possibly produce. The above code works because it limits the computation to only the odd numbers and only does modulo divisions up to the square root of the current number being tested.
This takes an hour or so to display the primes up to a billion, so one can imagine the amount of time it would take to show all the primes up to ten thousand trillion (10 raised to the sixteenth power), especially as the calculation gets slower with increasing range. END_EDIT_ADD
Although the one-liner (kind of) answer by @SLaks using Linq works, it isn't really the Sieve of Eratosthenes, as it is just an unoptimised version of Trial Division: unoptimised in that it does not restrict the work to odd numbers, doesn't start at the square of the found base prime, and doesn't stop culling for base primes larger than the square root of the top number to sieve. It is also quite slow due to the multiple nested enumeration operations.
It is actually an abuse of the Linq Aggregate method and doesn't effectively use the first of the two Linq Ranges generated. It can become an optimized Trial Division with less enumeration overhead as follows:
static IEnumerable<int> primes(uint top_number) {
var cullbf = Enumerable.Range(2, (int)top_number).ToList();
for (int i = 0; i < cullbf.Count; i++) {
var bp = cullbf[i]; var sqr = bp * bp; if (sqr > top_number) break;
cullbf.RemoveAll(c => c >= sqr && c % bp == 0);
} return cullbf; }
which runs many times faster than the SLaks answer. However, it is still slow and memory intensive due to the List generation and the multiple enumerations as well as the multiple divide (implied by the modulo) operations.
The following true Sieve of Eratosthenes implementation runs about 30 times faster and takes much less memory, as it only uses a one-bit representation per number sieved and limits its enumeration to the final iterator sequence output, as well as having the optimisations of only treating odd composites and only culling from the squares of the base primes for base primes up to the square root of the maximum number, as follows:
static IEnumerable<uint> primes(uint top_number) {
if (top_number < 2u) yield break;
yield return 2u; if (top_number < 3u) yield break;
var BFLMT = (top_number - 3u) / 2u;
var SQRTLMT = ((uint)(Math.Sqrt((double)top_number)) - 3u) / 2u;
var buf = new BitArray((int)BFLMT + 1,true);
for (var i = 0u; i <= BFLMT; ++i) if (buf[(int)i]) {
var p = 3u + i + i; if (i <= SQRTLMT) {
for (var j = (p * p - 3u) / 2u; j <= BFLMT; j += p)
buf[(int)j] = false; } yield return p; } }
The above code calculates all the primes to ten million range in about 77 milliseconds on an Intel i7-2700K (3.5 GHz).
Either of the two static methods can be called and tested with the using statements and with the static Main method as follows:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
static void Main(string[] args) {
Console.WriteLine("This program generates prime sequences.\r\n");
var n = 10000000u;
var elpsd = -DateTime.Now.Ticks;
var count = 0; var lastp = 0u;
foreach (var p in primes(n)) { if (p > n) break; ++count; lastp = (uint)p; }
elpsd += DateTime.Now.Ticks;
Console.WriteLine(
"{0} primes found <= {1}; the last one is {2} in {3} milliseconds.",
count, n, lastp,elpsd / 10000);
Console.Write("\r\nPress any key to exit:");
Console.ReadKey(true);
Console.WriteLine();
}
which will show the number of primes in the sequence up to the limit, the last prime found, and the time expended in enumerating that far.
EDIT_ADD: However, in order to produce an enumeration of the number of primes less than ten thousand trillion (ten to the sixteenth power) as the question asks, a segmented paged approach using multi-core processing is required, but even with C++ and the very highly optimized PrimeSieve this would require something over 400 hours just to produce the number of primes found, and tens of times that long to enumerate all of them, so over a year to do what the question asks. To do it using the un-optimized Trial Division algorithm attempted would take super eons, and it would still take a very, very long time even using an optimized Trial Division algorithm - something like ten to the two millionth power years (that's two million zeros of years!!!).
It isn't much wonder that his desktop machine just sat and stalled when he tried it!!!! If he had tried a smaller range such as one million, he still would have found it takes in the range of seconds as implemented.
The solutions I post here won't cut it either as even the last Sieve of Eratosthenes one will require about 640 Terabytes of memory for that range.
That is why only a page segmented approach such as that of PrimeSieve can handle this sort of problem for the range as specified at all, and even that requires a very long time, as in weeks to years unless one has access to a super computer with hundreds of thousands of cores. END_EDIT_ADD
Smells like more homework. My very, very old graphing calculator had an "is prime" program like this. Technically the inner division-checking loop only needs to run up to i^(1/2). Do you need to find "all" prime numbers between 0 and L? The other major problem is that your loop variables are "int" while your input data is "long"; this will be causing an overflow, making your loops misbehave. Fix the loop variables.
One-line code in C#:
Console.WriteLine(String.Join(Environment.NewLine,
Enumerable.Range(2, 300)
.Where(n => Enumerable.Range(2, (int)Math.Sqrt(n) - 1)
.All(nn => n % nn != 0)).ToArray()));
The Sieve of Eratosthenes answer above is not quite correct. As written it will find all the primes between 1 and 1000000. To find all the primes between 1 and num use:
private static IEnumerable Primes01(int num)
{
return Enumerable.Range(1, Convert.ToInt32(Math.Floor(Math.Sqrt(num))))
.Aggregate(Enumerable.Range(1, num).ToList(),
(result, index) =>
{
result.RemoveAll(i => i > result[index] && i%result[index] == 0);
return result;
}
);
}
The seed of the Aggregate should be range 1 to num since this list will contain the final list of primes. The Enumerable.Range(1, Convert.ToInt32(Math.Floor(Math.Sqrt(num)))) is the number of times the seed is purged.
The ExchangeCore Forums have a good console application listed that writes found primes to a file; it looks like you can also use that same file as a starting point so you don't have to restart finding primes from 2, and they provide a download of that file with all found primes up to 100 million, so it would be a good start.
The algorithm on the page also takes a couple of shortcuts (odd numbers only, and only checking up to the square root), which makes it extremely efficient, and it will allow you to calculate long numbers.
So this is basically just two typos. One, the most unfortunate, is for (int j = 2; j <= num; j++), which is the reason for the unproductive testing of 1%2, 1%3 ... 1%(10^15-1) that goes on for a very long time, so the OP didn't get "any output". It should have been j < i; instead. The other, minor one in comparison, is that i should start from 2, not from 0:
for( i=2; i <= num; i++ )
{
for( j=2; j < i; j++ ) // j <= sqrt(i) is really enough
....
Surely a console print-out of 28 trillion primes or so can't reasonably be expected to complete in any sensible time-frame. So, the original intent of the problem was obviously to print out a steady stream of primes, indefinitely. Hence all the solutions proposing simple use of the sieve of Eratosthenes are totally without merit here, because the simple sieve of Eratosthenes is bounded - a limit must be set in advance.
What could work here is the optimized trial division which would save the primes as it finds them, and test against the primes, not just all numbers below the candidate.
Second alternative, with much better complexity (i.e. much faster) is to use a segmented sieve of Eratosthenes. Which is incremental and unbounded.
Both these schemes would use double-staged production of primes: one would produce and save the primes, to be used by the other stage in testing (or sieving), much above the limit of the first stage (below its square of course - automatically extending the first stage, as the second stage would go further and further up).
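A minimal sketch of the first scheme - an unbounded generator that tests each candidate only against the primes already found, up to its square root - might look like this (my own illustration of the idea, not code from this answer; it assumes using System.Collections.Generic):

static IEnumerable<long> PrimesUnbounded()
{
    var known = new List<long>(); // primes found so far, in increasing order
    for (long candidate = 2; ; candidate++)
    {
        bool isPrime = true;
        foreach (long p in known)
        {
            if (p * p > candidate) break; // only primes up to sqrt(candidate) matter
            if (candidate % p == 0) { isPrime = false; break; }
        }
        if (isPrime)
        {
            known.Add(candidate);
            yield return candidate; // stream primes indefinitely
        }
    }
}

A caller just foreach-es over it and stops whenever it has seen enough (for example with TakeWhile from System.Linq), which matches the "steady stream of primes" reading of the question.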
To be quite frank, some of the suggested solutions are really slow, and therefore are bad suggestions. For testing a single number to be prime you need some dividing/modulo operator, but for calculating a range you don't have to.
Basically you just exclude numbers that are multiples of earlier found primes, as they are (by definition) not primes themselves.
I will not give the full implementation, as that would be too easy; this is the approach in pseudo code. (On my machine, the actual implementation calculates all primes in a System.Int32 (2 billion) within 8 seconds.)
public IEnumerable<long> GetPrimes(long max)
{
    // we save the result set in an array of bytes (one bit per odd number).
    var buffer = new byte[(max >> 4) + 1];
    // 1 is not a prime.
    buffer[0] = 1;
    var iMax = (long)Math.Sqrt(max);
    for (long i = 3; i <= iMax; i += 2)
    {
        // find the index in the buffer
        var index = i >> 4;
        // find the bit within that byte.
        var bit = (int)(i >> 1) & 7;
        // A bit that is not set means: prime
        if ((buffer[index] & (1 << bit)) == 0)
        {
            // mark the odd multiples of i, starting at i * i, as not prime
            var step = i << 1; // i * 2, so even multiples are skipped
            for (var multiple = i * i; multiple < max; multiple += step)
            {
                // find the position in the buffer and set the bit for 'multiple'.
            }
        }
    }
    // 2 is not in the buffer.
    yield return 2;
    // loop through the buffer and yield return the odd primes too.
}
The solution requires a good understanding of bitwise operations, but it is way, way faster. You can also save the outcome to disk if you need it for later use. The result for 17 * 10^9 numbers can be saved with 1 GB, and the calculation of that result set takes about 2 minutes max.
I know this is a quite old question, but after reading here:
Sieve of Eratosthenes Wiki
this is the way I wrote it from understanding the algorithm:
void SieveOfEratosthenes(int n)
{
bool[] primes = new bool[n + 1];
for (int i = 0; i <= n; i++)
primes[i] = true;
for (int i = 2; i * i <= n; i++)
if (primes[i])
for (int j = i * 2; j <= n; j += i)
primes[j] = false;
for (int i = 2; i <= n; i++)
if (primes[i]) Console.Write(i + " ");
}
In the first loop we fill the array of booleans with true.
The second for loop starts from 2, since 1 is not a prime number, checks whether the number is still marked as prime, and then assigns false to every multiple of it (the indexes j).
In the last loop we just print the numbers that are still marked as prime.
Very similar - from an exercise to implement Sieve of Eratosthenes in C#:
public class PrimeFinder
{
readonly List<long> _primes = new List<long>();
public PrimeFinder(long seed)
{
CalcPrimes(seed);
}
public List<long> Primes { get { return _primes; } }
private void CalcPrimes(long maxValue)
{
for (int checkValue = 3; checkValue <= maxValue; checkValue += 2)
{
if (IsPrime(checkValue))
{
_primes.Add(checkValue);
}
}
}
private bool IsPrime(long checkValue)
{
bool isPrime = true;
foreach (long prime in _primes)
{
if ((checkValue % prime) == 0 && prime <= Math.Sqrt(checkValue))
{
isPrime = false;
break;
}
}
return isPrime;
}
}
Prime Helper very fast calculation
public static class PrimeHelper
{
public static IEnumerable<Int32> FindPrimes(Int32 maxNumber)
{
return (new PrimesInt32(maxNumber));
}
public static IEnumerable<Int32> FindPrimes(Int32 minNumber, Int32 maxNumber)
{
return FindPrimes(maxNumber).Where(pn => pn >= minNumber);
}
public static bool IsPrime(this Int64 number)
{
if (number < 2)
return false;
else if (number < 4 )
return true;
var limit = (Int32)System.Math.Sqrt(number) + 1;
var foundPrimes = new PrimesInt32(limit);
return !foundPrimes.IsDivisible(number);
}
public static bool IsPrime(this Int32 number)
{
return IsPrime(Convert.ToInt64(number));
}
public static bool IsPrime(this Int16 number)
{
return IsPrime(Convert.ToInt64(number));
}
public static bool IsPrime(this byte number)
{
return IsPrime(Convert.ToInt64(number));
}
}
public class PrimesInt32 : IEnumerable<Int32>
{
private Int32 limit;
private BitArray numbers;
public PrimesInt32(Int32 limit)
{
if (limit < 2)
throw new Exception("Prime numbers not found.");
startTime = DateTime.Now;
calculateTime = startTime - startTime;
this.limit = limit;
try { findPrimes(); } catch{/*Overflows or Out of Memory*/}
calculateTime = DateTime.Now - startTime;
}
private void findPrimes()
{
/*
The Sieve Algorithm
http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
*/
numbers = new BitArray(limit, true);
for (Int32 i = 2; i < limit; i++)
if (numbers[i])
for (Int32 j = i * 2; j < limit; j += i)
numbers[j] = false;
}
public IEnumerator<Int32> GetEnumerator()
{
for (Int32 i = 2; i < 3; i++)
if (numbers[i])
yield return i;
if (limit > 2)
for (Int32 i = 3; i < limit; i += 2)
if (numbers[i])
yield return i;
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
// Extended for Int64
public bool IsDivisible(Int64 number)
{
var sqrt = System.Math.Sqrt(number);
foreach (var prime in this)
{
if (prime > sqrt)
break;
if (number % prime == 0)
{
DivisibleBy = prime;
return true;
}
}
return false;
}
private static DateTime startTime;
private static TimeSpan calculateTime;
public static TimeSpan CalculateTime { get { return calculateTime; } }
public Int32 DivisibleBy { get; set; }
}
public static void Main()
{
Console.WriteLine("enter the number");
int i = int.Parse(Console.ReadLine());
for (int j = 2; j <= i; j++)
{
for (int k = 2; k <= i; k++)
{
if (j == k)
{
Console.WriteLine("{0}is prime", j);
break;
}
else if (j % k == 0)
{
break;
}
}
}
Console.ReadLine();
}
static void Main(string[] args)
{ int i,j;
Console.WriteLine("prime no between 1 to 100");
for (i = 2; i <= 100; i++)
{
int count = 0;
for (j = 1; j <= i; j++)
{
if (i % j == 0)
{ count=count+1; }
}
if ( count <= 2)
{ Console.WriteLine(i); }
}
Console.ReadKey();
}
You can use the normal prime number concept: a prime must have only two factors (one and itself).
So do it like this, the easy way:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace PrimeNUmber
{
class Program
{
static void FindPrimeNumber(long num)
{
for (long i = 1; i <= num; i++)
{
int totalFactors = 0;
for (int j = 1; j <= i; j++)
{
if (i % j == 0)
{
totalFactors = totalFactors + 1;
}
}
if (totalFactors == 2)
{
Console.WriteLine(i);
}
}
}
static void Main(string[] args)
{
long num;
Console.WriteLine("Enter any value");
num = Convert.ToInt64(Console.ReadLine());
FindPrimeNumber(num);
Console.ReadLine();
}
}
}
This solution displays all prime numbers between 0 and 100.
int counter = 0;
for (int c = 0; c <= 100; c++)
{
counter = 0;
for (int i = 1; i <= c; i++)
{
if (c % i == 0)
{ counter++; }
}
if (counter == 2)
{ Console.Write(c + " "); }
}
This is the fastest way to calculate prime numbers in C#.
void PrimeNumber(long number)
{
bool IsprimeNumber = true;
long value = Convert.ToInt32(Math.Sqrt(number));
if (number < 2 || (number % 2 == 0 && number != 2))
{
IsprimeNumber = false;
}
for (long i = 3; i <= value; i=i+2)
{
if (number % i == 0)
{
// MessageBox.Show("It is divisible by" + i);
IsprimeNumber = false;
break;
}
}
if (IsprimeNumber)
{
MessageBox.Show("Yes Prime Number");
}
else
{
MessageBox.Show("No It is not a Prime NUmber");
}
}
class CheckIfPrime
{
static void Main()
{
while (true)
{
Console.Write("Enter a number: ");
decimal a = decimal.Parse(Console.ReadLine());
decimal[] k = new decimal[int.Parse(a.ToString())];
decimal p = 0;
for (int i = 2; i < a; i++)
{
if (a % i != 0)
{
p += i;
k[i] = i;
}
else
p += i;
}
if (p == k.Sum())
{ Console.WriteLine ("{0} is prime!", a);}
else
{Console.WriteLine("{0} is NOT prime", a);}
}
}
}
There are some very optimal ways to implement the algorithm. But if you don't know much about maths and you simply follow the definition of prime as the requirement:
a number that is only divisible by 1 and by itself (and nothing else), here's simple-to-understand code for positive numbers.
public bool IsPrime(int candidateNumber)
{
int fromNumber = 2;
int toNumber = candidateNumber - 1;
while(fromNumber <= toNumber)
{
bool isDivisible = candidateNumber % fromNumber == 0;
if (isDivisible)
{
return false;
}
fromNumber++;
}
return true;
}
Since every number is divisible by 1 and by itself, we start checking from 2 onwards until the number immediately before itself. That's the basic reasoning.
You can also do this:
class Program
{
static void Main(string[] args)
{
long numberToTest = 350124;
bool isPrime = NumberIsPrime(numberToTest);
Console.WriteLine(string.Format("Number {0} is prime? {1}", numberToTest, isPrime));
Console.ReadLine();
}
private static bool NumberIsPrime(long n)
{
bool retVal = true;
if (n <= 3)
{
retVal = n > 1;
} else if (n % 2 == 0 || n % 3 == 0)
{
retVal = false;
}
int i = 5;
while (i * i <= n)
{
if (n % i == 0 || n % (i + 2) == 0)
{
retVal = false;
}
i += 6;
}
return retVal;
}
}
An easier approach: what I did is check if a number has exactly two division factors, which is the essence of prime numbers.
List<int> factorList = new List<int>();
int[] numArray = new int[] { 1, 0, 6, 9, 7, 5, 3, 6, 0, 8, 1 };
foreach (int item in numArray)
{
for (int x = 1; x <= item; x++)
{
//check for the remainder after dividing for each number less that number
if (item % x == 0)
{
factorList.Add(x);
}
}
if (factorList.Count == 2) // has only 2 division factors ; prime number
{
Console.WriteLine(item + " is a prime number ");
}
else
{Console.WriteLine(item + " is not a prime number ");}
factorList = new List<int>(); // reinitialize list
}
Here is a solution with unit test:
The solution:
public class PrimeNumbersKata
{
public int CountPrimeNumbers(int n)
{
if (n < 0) throw new ArgumentException("Not valide numbre");
if (n == 0 || n == 1) return 0;
int cpt = 0;
for (int i = 2; i <= n; i++)
{
if (IsPrimaire(i)) cpt++;
}
return cpt;
}
private bool IsPrimaire(int number)
{
for (int i = 2; i <= number / 2; i++)
{
if (number % i == 0) return false;
}
return true;
}
}
The tests:
[TestFixture]
class PrimeNumbersKataTest
{
private PrimeNumbersKata primeNumbersKata;
[SetUp]
public void Init()
{
primeNumbersKata = new PrimeNumbersKata();
}
[TestCase(1,0)]
[TestCase(0,0)]
[TestCase(2,1)]
[TestCase(3,2)]
[TestCase(5,3)]
[TestCase(7,4)]
[TestCase(9,4)]
[TestCase(11,5)]
[TestCase(13,6)]
public void CountPrimeNumbers_N_AsArgument_returnCountPrimes(int n, int expected)
{
//arrange
//act
var actual = primeNumbersKata.CountPrimeNumbers(n);
//assert
Assert.AreEqual(expected,actual);
}
[Test]
public void CountPrimairs_N_IsNegative_RaiseAnException()
{
var ex = Assert.Throws<ArgumentException>(()=> { primeNumbersKata.CountPrimeNumbers(-1); });
//Assert.That(ex.Message == "Not valide numbre");
Assert.That(ex.Message, Is.EqualTo("Not valide numbre"));
}
}
At university it was necessary to count the prime numbers up to 10,000. I did it this way; the teacher was a little surprised, but I passed the test. Language: C#
void Main()
{
int number=1;
for(long i=2;i<10000;i++)
{
if(PrimeTest(i))
{
Console.WriteLine(number+++" " +i);
}
}
}
List<long> KnownPrime = new List<long>();
private bool PrimeTest(long i)
{
if (i == 1) return false;
if (i == 2)
{
KnownPrime.Add(i);
return true;
}
foreach(int k in KnownPrime)
{
if(i%k==0)
return false;
}
KnownPrime.Add(i);
return true;
}
for (int i = 2; i < 100; i++)
{
bool isPrimeNumber = true;
for (int j = 2; j <= i && j <= 100; j++)
{
if (i != j && i % j == 0)
{
isPrimeNumber = false; break;
}
}
if (isPrimeNumber)
{
Console.WriteLine(i);
}
}
The .Net framework has an Array.Sort overload that allows one to specify the starting and ending indices for the sort to act upon. However, these parameters are only 32 bit. So I don't see a way to sort a part of a large array when the indices that describe the sort range can only be specified using a 64-bit number. I suppose I could copy and modify the framework's sort implementation, but that is not ideal.
Update:
I've created two classes to help me around these and other large-array issues. One other such issue was that long before I got to my memory limit, I started getting OutOfMemoryExceptions. I'm assuming this is because the requested memory may be available but not contiguous. So for that, I created the class BigArray, which is a generic, dynamically sizable list of arrays. It has a smaller memory footprint than the framework's generic list class, and does not require that the entire array be contiguous. I haven't tested the performance hit, but I'm sure it's there.
public class BigArray<T> : IEnumerable<T>
{
private long capacity;
private int itemsPerBlock;
private int shift;
private List<T[]> blocks = new List<T[]>();
public BigArray(int itemsPerBlock)
{
shift = (int)Math.Ceiling(Math.Log(itemsPerBlock) / Math.Log(2));
this.itemsPerBlock = 1 << shift;
}
public long Capacity
{
get
{
return capacity;
}
set
{
var requiredBlockCount = (value - 1) / itemsPerBlock + 1;
while (blocks.Count > requiredBlockCount)
{
blocks.RemoveAt(blocks.Count - 1);
}
while (blocks.Count < requiredBlockCount)
{
blocks.Add(new T[itemsPerBlock]);
}
capacity = (long)itemsPerBlock * blocks.Count;
}
}
public T this[long index]
{
get
{
Debug.Assert(index < capacity);
var blockNumber = (int)(index >> shift);
var itemNumber = index & (itemsPerBlock - 1);
return blocks[blockNumber][itemNumber];
}
set
{
Debug.Assert(index < capacity);
var blockNumber = (int)(index >> shift);
var itemNumber = index & (itemsPerBlock - 1);
blocks[blockNumber][itemNumber] = value;
}
}
public IEnumerator<T> GetEnumerator()
{
for (long i = 0; i < capacity; i++)
{
yield return this[i];
}
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
}
And getting back to the original issue of sorting... What I really needed was a way to act on each element of an array, in order. But with such large arrays, it is prohibitive to copy the data, sort it, act on it, and then discard the sorted copy (the original order must be maintained). So I created the static class OrderedOperation, which allows you to perform an arbitrary operation on each element of an unsorted array in sorted order, and to do so with a low memory footprint (trading memory for execution time here).
public static class OrderedOperation
{
public delegate void WorkerDelegate(int index, float progress);
public static void Process(WorkerDelegate worker, IEnumerable<int> items, int count, int maxItem, int maxChunkSize)
{
// create a histogram such that a single bin is never bigger than a chunk
int binCount = 1000;
int[] bins;
double binScale;
bool ok;
do
{
ok = true;
bins = new int[binCount];
binScale = (double)(binCount - 1) / maxItem;
int i = 0;
foreach (int item in items)
{
bins[(int)(binScale * item)]++;
if (++i == count)
{
break;
}
}
for (int b = 0; b < binCount; b++)
{
if (bins[b] > maxChunkSize)
{
ok = false;
binCount *= 2;
break;
}
}
} while (!ok);
var chunkData = new int[maxChunkSize];
var chunkIndex = new int[maxChunkSize];
var done = new System.Collections.BitArray(count);
var processed = 0;
var binsCompleted = 0;
while (binsCompleted < binCount)
{
var chunkMax = 0;
var sum = 0;
do
{
sum += bins[binsCompleted];
binsCompleted++;
} while (binsCompleted < binCount - 1 && sum + bins[binsCompleted] <= maxChunkSize);
Debug.Assert(sum <= maxChunkSize);
chunkMax = (int)Math.Ceiling((double)binsCompleted / binScale);
var chunkCount = 0;
int i = 0;
foreach (int item in items)
{
if (item < chunkMax && !done[i])
{
chunkData[chunkCount] = item;
chunkIndex[chunkCount] = i;
chunkCount++;
done[i] = true;
}
if (++i == count)
{
break;
}
}
Debug.Assert(sum == chunkCount);
Array.Sort(chunkData, chunkIndex, 0, chunkCount);
for (i = 0; i < chunkCount; i++)
{
worker(chunkIndex[i], (float)processed / count);
processed++;
}
}
Debug.Assert(processed == count);
}
}
The two classes can work together (that's how I use them), but they don't have to. I hope someone else finds them useful. But I'll admit, they are fringe case classes. Questions welcome. And if my code sucks, I'd like to hear tips, too.
One final thought: As you can see in OrderedOperation, I'm using ints and not longs. Currently that is sufficient for me despite the original question I had (the application is in flux, in case you can't tell). But the class should be able to handle longs as well, should the need arise.
You'll find that even on the 64-bit framework, the maximum number of elements in an array is int.MaxValue.
The existing methods that take or return Int64 just cast the long values to Int32 internally and, in the case of parameters, will throw an ArgumentOutOfRangeException if a long parameter isn't between int.MinValue and int.MaxValue.
For example the LongLength property, which returns an Int64, just casts and returns the value of the Length property:
public long LongLength
{
get { return (long)this.Length; } // Length is an Int32
}
So my suggestion would be to cast your Int64 indicies to Int32 and then call one of the existing Sort overloads.
Since Array.Copy takes Int64 params, you could pull out the section you need to sort, sort it, then put it back. Assuming you're sorting less than 2^32 elements, of course.
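A minimal sketch of that copy-out / sort / copy-back idea (my own illustration, not from the answer; it assumes the length of the range to sort fits in a single array and that T is comparable):

static void SortRange<T>(T[] bigArray, long start, long length)
{
    var section = new T[length];
    Array.Copy(bigArray, start, section, 0, length); // the Int64-based overload
    Array.Sort(section); // assumes T implements IComparable
    Array.Copy(section, 0, bigArray, start, length);
}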
Seems like if you are sorting more than 2^32 elements then it would be best to write your own, more efficient, sort algorithm anyway.