Does anyone know if the multiply operator is faster than using the Math.Pow method? Like:
n * n * n
vs
Math.Pow(n, 3)
I just reinstalled Windows, so Visual Studio is not installed and the code is ugly:
using System;
using System.Diagnostics;

public static class test
{
    public static void Main(string[] args)
    {
        MyTest();
        PowTest();
    }

    static void PowTest()
    {
        var sw = Stopwatch.StartNew();
        double res = 0;
        for (int i = 0; i < 333333333; i++)
        {
            res = Math.Pow(i, 30);
        }
        Console.WriteLine("Math.Pow: " + sw.ElapsedMilliseconds + " ms: " + res);
    }

    static void MyTest()
    {
        var sw = Stopwatch.StartNew();
        double res = 0;
        for (int i = 0; i < 333333333; i++)
        {
            res = MyPow(i, 30);
        }
        Console.WriteLine("MyPow: " + sw.ElapsedMilliseconds + " ms: " + res);
    }

    // Exponentiation by squaring: O(log exp) multiplications
    static double MyPow(double num, int exp)
    {
        double result = 1.0;
        while (exp > 0)
        {
            if (exp % 2 == 1)
                result *= num;
            exp >>= 1;
            num *= num;
        }
        return result;
    }
}
The results:
csc /o test.cs
test.exe
MyPow: 6224 ms: 4.8569351667866E+255
Math.Pow: 43350 ms: 4.8569351667866E+255
Exponentiation by squaring (see https://stackoverflow.com/questions/101439/the-most-efficient-way-to-implement-an-integer-based-power-function-powint-int) is much faster than Math.Pow in my test (my CPU is a Pentium T3200 at 2 GHz).
EDIT: .NET version is 3.5 SP1, OS is Vista SP1 and power plan is high performance.
Basically, you should benchmark to see.
Educated Guesswork (unreliable):
In case it's not optimized to the same thing by some compiler...
It's very likely that x * x * x is faster than Math.Pow(x, 3). Math.Pow has to handle the general case, dealing with fractional powers and other issues, while x * x * x just takes a couple of multiply instructions.
A few rules of thumb from 10+ years of optimization in image processing & scientific computing:
Optimizations at an algorithmic level beat any amount of optimization at a low level. Despite the "write the obvious, then optimize" conventional wisdom, this must be done at the start, not after.
Hand-coded math operations (especially SIMD SSE+ types) will generally outperform the fully error-checked, generalized built-in ones.
Any operation where the compiler knows beforehand what needs to be done is optimized by the compiler. These include:
1. Memory operations such as Array.Copy()
2. For loops over arrays where the array length is given as the bound, as in for (..; i < array.Length; ..) (see the sketch below)
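For instance, a minimal sketch of the second point; whether the per-access bounds check is actually elided depends on the JIT version:
// The JIT can remove array bounds checks when it can prove the loop
// bound is the array's own Length.
static double SumKnownLength(double[] data)
{
    double sum = 0;
    for (int i = 0; i < data.Length; i++)   // bounds check can be elided
        sum += data[i];
    return sum;
}

static double SumUnknownBound(double[] data, int count)
{
    double sum = 0;
    for (int i = 0; i < count; i++)         // bounds check likely on every access
        sum += data[i];
    return sum;
}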
Always set unrealistic goals (if you want to).
I just happened to have tested this yesterday, then saw your question now.
On my machine, a Core 2 Duo running 1 test thread, it is faster to use multiplication up to an exponent of 9. At an exponent of 10, Math.Pow(b, e) is faster.
However, even at an exponent of 2, the results are often not identical. There are rounding errors.
Some algorithms are highly sensitive to rounding errors. I literally had to run over a million random tests before I discovered this.
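If you want to see the rounding differences for yourself, here is a sketch of such a test (not the original one; the mismatch rate will vary by runtime and CPU):
using System;

class RoundingCheck
{
    static void Main()
    {
        var rng = new Random(42);
        int mismatches = 0;
        for (int i = 0; i < 1000000; i++)
        {
            double b = rng.NextDouble() * 1000;
            if (b * b != Math.Pow(b, 2))   // repeated multiply vs Pow
                mismatches++;
        }
        Console.WriteLine(mismatches + " mismatches out of 1000000");
    }
}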
This is so micro that you should probably benchmark it for specific platforms; I don't think the results for a Pentium Pro will necessarily be the same as for an ARM or a Pentium II.
All in all, it's most likely to be totally irrelevant.
I checked, and Math.Pow() is defined to take two doubles. This means that it can't do repeated multiplications, but has to use a more general approach. If there were a Math.Pow(double, int), it could probably be more efficient.
That being said, the performance difference is almost certainly absolutely trivial, and so you should use whichever is clearer. Micro-optimizations like this are almost always pointless, can be introduced at virtually any time, and should be left for the end of the development process. At that point, you can check if the software is too slow, where the hot spots are, and spend your micro-optimization effort where it will actually make a difference.
Let's use the convention x^n. Let's assume n is always an integer.
For small values of n, boring multiplication will be faster, because Math.Pow (likely, implementation dependent) uses fancy algorithms to allow for n to be non-integral and/or negative.
For large values of n, Math.Pow will likely be faster, but if your library isn't very smart it will use the same algorithm, which is not ideal if you know that n is always an integer. For that you could code up an implementation of exponentiation by squaring or some other fancy algorithm.
Of course modern computers are very fast and you should probably stick to the simplest, easiest to read, least likely to be buggy method until you benchmark your program and are sure that you will get a significant speedup by using a different algorithm.
Math.Pow(x, y) is typically calculated internally as Math.Exp(Math.Log(x) * y). Every power equation requires finding a natural log, a multiplication, and raising e to a power.
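In other words, a sketch of the identity (not the actual framework source; valid for x > 0):
// pow(x, y) via the exp/log identity
static double PowViaExpLog(double x, double y)
{
    return Math.Exp(Math.Log(x) * y);
}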
As I mentioned in my previous answer, only at a power of 10 does Math.Pow() become faster, but accuracy will be compromised if using a series of multiplications.
I disagree that hand-built functions are always faster. The cosine functions are way faster and more accurate than anything I could write. As for pow(), I did a quick test to see how slow Math.pow() was in JavaScript, because Mehrdad cautioned against guesswork:
for (i3 = 0; i3 < 50000; ++i3) {
    for (n = 0; n < 9000; n++) {
        x = x * Math.cos(i3);
    }
}
Here are the results:
Each function run 50000 times
time for 50000 Math.cos(i) calls = 8 ms
time for 50000 Math.pow(Math.cos(i),9000) calls = 21 ms
time for 50000 Math.pow(Math.cos(i),9000000) calls = 16 ms
time for 50000 homemade for loop calls 1065 ms
If you don't agree, try the program at http://www.m0ose.com/javascripts/speedtests/powSpeedTest.html
Related
I have a newbie question, guys.
Let's suppose that I have the following simple nested loop, where m and n are not necessarily the same, but are really big numbers:
x = 0;
for (i = 0; i < m; i++)
{
    for (j = 0; j < n; j++)
    {
        delta = CalculateDelta(i, j);
        x = x + j + i + delta;
    }
}
And now I have this:
x = 0;
for (i = 0; i < m; i++)
{
    for (j = 0; j < n; j++)
    {
        delta = CalculateDelta(i, j);
        x = x + j + i + delta;
        j++;
        delta = CalculateDelta(i, j);
        x = x + j + i + delta;
    }
}
Rule: I do need to go through all the elements of the loop, because of this delta calculation.
My questions are:
1) Is the second algorithm faster than the first one, or is it the same?
I have this doubt because for me the first algorithm has a complexity of O(m * n) and the second one O(m * n/2). Or does lower complexity not necessarily make it faster?
2) Is there any other way to make this faster without something like a Parallel.For?
3) If I make use of a Parallel.For, would it really make it faster, since I would probably need a synchronization lock on the x variable?
Thanks!
Definitely not. Since the time complexity is presumably dominated by the number of times CalculateDelta() is called, it doesn't matter whether you make the calls inline, within a single loop, or in any number of nested loops; the call gets made m*n times either way.
And now you have a bug (which is the reason I decided to add an answer after #Peter-Duniho had already done so quite comprehensively):
If n is odd, you do more iterations than intended, almost certainly getting the wrong answer or crashing your program...
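A sketch of one safe way to unroll it, in the question's own pseudocode style, guarding the trailing iteration so an odd n doesn't run past the end:
x = 0;
for (i = 0; i < m; i++)
{
    for (j = 0; j + 1 < n; j += 2)   // two iterations per pass
    {
        x = x + j + i + CalculateDelta(i, j);
        x = x + (j + 1) + i + CalculateDelta(i, j + 1);
    }
    if (n % 2 == 1)                  // leftover iteration when n is odd
        x = x + (n - 1) + i + CalculateDelta(i, n - 1);
}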
Asking three questions in a single post is pushing the limits of "too broad". But in your case, the questions are reasonably simple, so…
1) Is the second algorithm faster than the first one, or is it the same? I have this doubt because for me the first algorithm has a complexity of O(m * n) and the second one O(m * n/2). Or does lower complexity not necessarily make it faster?
Complexity ignores coefficients like 1/2. So there's no difference between O(m * n) and O(m * n/2). The latter should have been reduced to O(m * n), which is obviously the same as the former.
And the second isn't really O(m * n/2) anyway, because you didn't really remove work. You just partially unrolled your loop. These kinds of meaningless transformations are one of the reasons we ignore coefficients in big-O notation in the first place. It's too easy to fiddle with the coefficients without really changing the actual computational work.
2) Is there any other way to make this faster without something like a Parallel.For?
That's definitely too broad a question. "Any other way"? Probably. But you haven't provided enough context.
The only obvious potential improvement I can see in the code you posted is that you are computing j + i repeatedly, when you could instead observe that that component of the computation increases by 1 with each iteration, and so keep a separate incrementing variable instead of adding i and j each time. But a) it's far from clear that making that change would speed anything up (that depends a lot on the specifics of the CPU's own optimization logic), and b) if it did so reliably, it's possible that a good optimizing JIT compiler would make that transformation for you anyway.
But beyond that, the CalculateDelta() method is a complete unknown here. It could be a simple one-liner that the compiler inlines for you, or it could be some enormous computation that dominates the whole loop.
There's no way for any of us to tell you if there is "any other way" to make the loop faster. For that matter, it's not even clear that the change you made makes the loop faster. Maybe it did, maybe it didn't.
3) If I make use of a Parallel.For, would it really make it faster, since I would probably need a synchronization lock on the x variable?
That at least depends on what CalculateDelta() is doing. If it's expensive enough, then the synchronization on x might not matter. The bigger issue is that each calculation of x depends on the previous one. It's impossible to parallelize the computation, because it's inherently a serial computation.
What you could do is compute all the deltas in parallel, since they don't depend on x (at least, they don't in the code you posted). The other element of the sum is constant (i + j for known m and n), so in the end it's just the sum of the deltas and that constant. Again, whether this is worth doing depends somewhat on how costly CalculateDelta() is. The less costly that method is, the less likely you're going to see much if any improvement by parallelizing execution of it.
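For illustration, a sketch of that idea with PLINQ, assuming CalculateDelta is a pure function returning a double (the types and names here are guesses):
using System.Linq;

// The j + i contributions form a constant for known m and n (see the
// arithmetic-series answer below), so only the deltas need summing,
// and those can be computed in parallel without touching x.
double baseSum = (double)m * n * (m + n - 2) / 2;
double deltaSum = ParallelEnumerable.Range(0, m)
    .Sum(i => Enumerable.Range(0, n).Sum(j => CalculateDelta(i, j)));
double x = baseSum + deltaSum;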
One advantageous transformation is to extract the arithmetic sum of the contributions of the i and j terms using the double arithmetic series formula. This saves quite a bit of work, reducing the complexity of that portion of the calculation to O(1) from O(m*n).
x = 0;
for (i = 0; i < m; i++)
{
    for (j = 0; j < n; j++)
    {
        delta = CalculateDelta(i, j);
        x = x + j + i + delta;
    }
}
can become
x = n * m * (n + m - 2) / 2;
for (i = 0; i < m; i++)
{
    for (j = 0; j < n; j++)
    {
        x += CalculateDelta(i, j);
    }
}
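For reference, the closed form follows from splitting the double sum into two arithmetic series:
\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(i+j) = n\sum_{i=0}^{m-1}i + m\sum_{j=0}^{n-1}j = \frac{nm(m-1)}{2} + \frac{mn(n-1)}{2} = \frac{mn(m+n-2)}{2}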
Optimizing further depends entirely on what CalculateDelta does, which you have not disclosed. If it has side effects, then that's a problem. But if it's a pure function (where its result is dependent only on the inputs i and j) then there's a good chance it can be computed directly as well.
The first for() will send you to the second for(); the second for() will loop till j < n. So the first version does m*n iterations and the second m*n/2.
I've been warned by numerous programmers not to use the square root function, and instead to raise numbers to the half power. My question is twofold:
What is the perceived/real performance benefit to doing this? Why is it faster?
If it really is faster, why does the square root function even exist?
I've performed a simple test:
Stopwatch sw = new Stopwatch();
sw.Start();
Double s = 0.0;
// compute 1e8 times either Sqrt(x) or its emulation as Pow(x, 0.5)
for (Double d = 0; d < 1e8; d += 1)
// s += Math.Sqrt(d); // <- uncomment it to test Sqrt
s += Math.Pow(d, 0.5); // <- uncomment it to test Pow
sw.Stop();
Console.Out.Write(sw.ElapsedMilliseconds);
The (averaged) outcome at my workstation (x64) is
Sqrt: 950 ms
Pow: 5500 ms
As you can see, the more specific Sqrt(x) is about 5.5 times faster than its emulation Pow(x, 0.5). So it's just one more legend (at least in C#) that Sqrt is so slow that one should prefer the Pow substitution.
You would have to know something about how each function is implemented to answer the question.
The square root function uses Newton's method to iteratively calculate the square root. It converges quadratically. Nothing will speed that up.
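For illustration, here is a minimal Newton iteration for square roots (a sketch only; real library implementations use the hardware instruction or a carefully chosen starting guess):
// Newton's method for f(x) = x^2 - a: each step roughly doubles
// the number of correct digits.
static double NewtonSqrt(double a)
{
    if (a < 0) return double.NaN;
    if (a == 0) return 0;
    double x = a >= 1 ? a : 1;           // start at or above the root
    for (int i = 0; i < 60; i++)         // cap in case of last-ulp oscillation
    {
        double next = 0.5 * (x + a / x);
        if (next == x) break;            // converged
        x = next;
    }
    return x;
}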
The other functions, exp() and ln(x), have implementations with their own convergence/complexity issues. For example, both can be implemented as series sums, and a certain number of terms is required to maintain sufficient accuracy.
All bets are off if those functions happen to be implemented in native code. Those might be faster than anything you'll write.
Knowing those would let you make an informed decision. I would not take it on faith because those programmers "know" the answer.
Unless you're doing intensive numerical work, I'd say that the choice won't affect your overall program performance. It's micro-optimization that's best avoided, unless you're doing serious large-scale scientific programming.
This is a mathematical problem, not programming something useful!
I want to compute factorials of very big numbers (10^n where n > 6).
I got arbitrary precision working, which is very helpful for tasks like 1000!. But it obviously dies (StackOverflowException :) ) at much higher values. I'm not looking for a direct answer, but for some clues on how to proceed further.
static BigInteger factorial(BigInteger i)
{
    if (i < 1)
        return 1;
    else
        return i * factorial(i - 1);
}
static void Main(string[] args)
{
    long z = (long)Math.Pow(10, 12);
    Console.WriteLine(factorial(z));
    Console.Read();
}
Would I have to give up on System.Numerics.BigInteger? I was thinking of some way of storing the necessary data in files, since RAM will obviously run out. Optimization is very important at this point. So what would you recommend?
Also, I need the values to be as precise as possible. I forgot to mention that I don't need all of the digits, just about the last 20.
As other answers have shown, the recursion is easily removed. Now the question is: can you store the result in a BigInteger, or are you going to have to go to some sort of external storage?
The number of bits you need to store n! is roughly proportional to n log n. (This is a weak form of Stirling's Approximation.) So let's look at some sizes: (Note that I made some arithmetic errors in an earlier version of this post, which I am correcting here.)
(10^6)! takes order of 2 x 10^6 bytes = a few megabytes
(10^12)! takes order of 3 x 10^12 bytes = a few terabytes
(10^21)! takes order of 10^22 bytes = ten billion terabytes
A few megs will fit into memory. A few terabytes is easily within your grasp but you'll need to write a memory manager probably. Ten billion terabytes will take the combined resources of all the technology companies in the world, but it is doable.
Now consider the computation time. Suppose we can perform a million multiplications per second per machine and that we can parallelize the work out to multiple machines somehow.
(10^6)! takes order of one second on one machine
(10^12)! takes order of 10^6 seconds on one machine =
10 days on one machine =
a few minutes on a thousand machines.
(10^21)! takes order of 10^15 seconds on one machine =
30 million years on one machine =
3 years on 10 million machines
1 day on 10 billion machines (each with a TB drive.)
So (10^6)! is within your grasp. (10^12)! you are going to have to write your own memory manager and math library, and it will take you some time to get an answer. (10^21)! you will need to organize all the resources of the world to solve this problem, but it is doable.
Or you could find another approach.
The solution is easy: Calculate the factorials without using recursion, and you won't blow out your stack.
I.e. you're not getting this error because the numbers are too large, but because you have too many levels of function calls. And fortunately, for factorials there's no reason to calculate them recursively.
Once you've solved your stack problem, you can worry about whether your number format can handle your "very big" factorials. Since you don't need the exact values, use one of the many efficient numeric approximations (which you can count on to get all of the most significant digits right). The most common one is Stirling's approximation:
n! ≈ n^n e^(−n) √(2πn)
The formula comes from a page that also gives a second, more accurate version (although "in most cases the difference is quite small", they say). Of course this number is still too large for you to store, but now you can work with logarithms and drop the unimportant digits before you extract the number. Or use the Wikipedia version of the approximation, which is already expressed as a logarithm.
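For instance, a sketch of that logarithmic approach in C# (the function name is mine):
// log10(n!) via Stirling: ln n! ≈ n ln n - n + 0.5 ln(2*pi*n)
static double Log10Factorial(double n)
{
    double lnFactorial = n * Math.Log(n) - n + 0.5 * Math.Log(2 * Math.PI * n);
    return lnFactorial / Math.Log(10);
}
Log10Factorial(1e6) comes out around 5565708.9, i.e. (10^6)! has about 5,565,709 digits, and the fractional part of the result gives you the leading digits.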
Unroll recursion:
static BigInteger factorial(BigInteger n)
{
    BigInteger res = 1;
    for (BigInteger i = 2; i <= n; ++i)
        res *= i;
    return res;
}
C#'s Math class does roots and powers in double only. Various things may go a bit faster if I add float-based square root and power functions to my Math2 class. (Today is a relaxation day and I find optimization relaxing.)
So - Fast square-root and power functions that I don't have to worry about licensing for, plskthx. Or a link that'll get me there.
I'm going to take it as axiomatic that no software method will compete with the hardware instruction for square roots. The only difficulty is that .NET doesn't give us direct control of the hardware as in the days of inline assembler for C code.
Let's first discuss a generic x86 hardware prospect.
The floating point x86 instruction FSQRT does come in three precisions: single, double, and extended (the native precision of the 80-bit FP registers), and there is a 25-40% shorter timing for single vs. double precision. See here for 32-bit x86 instructions.
That may sound like a big opportunity, but it's only a dozen clocks or so. That sort of economization will easily get lost in the overhead unless you are able to carefully manage the code from function call to return value. Managed C++ sounds (as Marcelo Cantos suggests) like a more practical base for this than C#.
Note: Timings for FSQRT are identical to those of FDIV, with which it shares an execution unit in the Intel architecture, and thus a common latency.
A better opportunity for specialized C# code probably exists in the direction of SSE SIMD instructions, where hardware allows for up to 4 single precision square roots to be done in parallel. JIT compiler support for this has been missing for years, but here are some leads on current development.
Intel has jumped in (Dec. 15, 2010), seeing that .NET Framework 4 wasn't doing anything with SIMD:
[Intel Performance Libraries allow... SIMD instructions in C#]
Even before that the Mono project added JIT support for SIMD in Mono 2.2:
[Mono: Release Note Mono 2.2]
The possibility of calling Mono's SIMD support from MS C# was recently raised here:
[Calling mono c# code from Microsoft .net ? -- Stackoverflow]
An earlier question also addresses (though without much love shown!) how to install Mono's SIMD support:
[how to enable Mono.Simd -- Stackoverflow]
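To give a flavor of it, here is a minimal sketch using the System.Numerics vector types (an assumption on my part: this requires a runtime and JIT that actually accelerate these types, and support has varied across .NET versions):
using System;
using System.Numerics;

// Square-roots a float array in SIMD-sized chunks, with a scalar tail.
static void SqrtInPlace(float[] values)
{
    int width = Vector<float>.Count;   // lanes per hardware vector
    int i = 0;
    for (; i + width <= values.Length; i += width)
    {
        var v = new Vector<float>(values, i);
        Vector.SquareRoot(v).CopyTo(values, i);
    }
    for (; i < values.Length; i++)     // leftover elements
        values[i] = (float)Math.Sqrt(values[i]);
}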
You should check out this link:
http://www.codecodex.com/wiki/Calculate_an_integer_square_root
It has lots of speedy algorithms in a bunch of different languages. For example:
// Finds the integer square root of a positive number
public static int Isqrt(int num)
{
    if (0 == num) { return 0; }     // avoid zero divide
    int n = (num / 2) + 1;          // initial estimate, never low
    int n1 = (n + (num / n)) / 2;
    while (n1 < n)
    {
        n = n1;
        n1 = (n + (num / n)) / 2;
    }
    return n;
}
But there are a lot more; some C/C++ ones are supposed to be the fastest, or so they claim.
For the pow algorithm, I found this one HERE, along with an explanation of how to get to that algorithm, starting from simpler ones:
private double Power(double a, int b)
{
    if (b < 0)
    {
        throw new ApplicationException("B must be a positive integer or zero");
    }
    if (b == 0) return 1;
    if (a == 0) return 0;
    if (b % 2 == 0)
    {
        return Power(a * a, b / 2);
    }
    else
    {
        return a * Power(a * a, b / 2);
    }
}
Wikipedia has an extensive article on calculation of square roots:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
Calculating x to the power of y is simpler:
http://www.osix.net/modules/article/?id=696
I liked this pocket calculator way of doing it (presumably x^y = e^(y * ln x)):
... but I honestly have no idea whether it is fast.
Probably the easiest way is to implement the float versions in Managed C++. Whether that will go faster that the baked-in double versions or not, I can't say.
I actually have an answer to my question, but it is not parallelized, so I am interested in ways to improve the algorithm. Anyway, it might be useful as-is for some people.
int Until = 20000000;
BitArray PrimeBits = new BitArray(Until, true);

/*
 * Sieve of Eratosthenes
 * PrimeBits is a simple BitArray where every bit represents an integer,
 * and we mark composite numbers as false
 */
PrimeBits.Set(0, false); // You don't actually need this, just
PrimeBits.Set(1, false); // reminding you that 2 is the smallest prime

for (int P = 2; P < (int)Math.Sqrt(Until) + 1; P++)
    if (PrimeBits.Get(P))
        // These are going to be the multiples of P if it is a prime
        for (int PMultiply = P * 2; PMultiply < Until; PMultiply += P)
            PrimeBits.Set(PMultiply, false);

// We use this to store the actual prime numbers
List<int> Primes = new List<int>();
for (int i = 2; i < Until; i++)
    if (PrimeBits.Get(i))
        Primes.Add(i);
Maybe I could use multiple BitArrays and BitArray.And() them together?
You might save some time by cross-referencing your bit array with a doubly-linked list, so you can more quickly advance to the next prime.
Also, when eliminating later composites: once you hit a new prime p for the first time, the first remaining composite multiple of p will be p*p, since everything before that has already been eliminated. In fact, you only need to multiply p by the remaining potential primes that are left after it in the list, stopping as soon as the product is out of range (larger than Until).
There are also some good probabilistic algorithms out there, such as the Miller-Rabin test. The wikipedia page is a good introduction.
Parallelisation aside, you don't want to be calculating sqrt(Until) on every iteration. You can also skip multiples of 2 and 3 by only testing N%6 in {1,5}, or multiples of 2, 3 and 5 by only testing N%30 in {1,7,11,13,17,19,23,29}.
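Concretely, a sketch of the hoisted square root combined with the p*p start from another answer, using the question's own variable names:
int sqrtUntil = (int)Math.Sqrt(Until);   // computed once, not per iteration
for (int P = 2; P <= sqrtUntil; P++)
    if (PrimeBits.Get(P))
        // multiples below P*P were already crossed out by smaller primes
        for (long PMultiply = (long)P * P; PMultiply < Until; PMultiply += P)
            PrimeBits.Set((int)PMultiply, false);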
You should be able to parallelize the factoring algorithm quite easily, since the Nth stage only depends on the sqrt(n)th result, so after a while there won't be any conflicts. But that's not a good algorithm, since it requires lots of division.
You should also be able to parallelize the sieve algorithms, if you have writer work packets which are guaranteed to complete before a read. Mostly the writers shouldn't conflict with the reader - at least once you've done a few entries, they should be working at least N above the reader, so you only need a synchronized read fairly occasionally (when N exceeds the last synchronized read value). You shouldn't need to synchronize the bool array across any number of writer threads, since write conflicts don't arise (at worst, more than one thread will write a true to the same place).
The main issue would be to ensure that any worker being waited on to write has completed. In C++ you'd use a compare-and-set to switch to the worker being waited for at any point. I'm not a C# wonk, so I don't know how to do it in that language, but the Win32 InterlockedCompareExchange function should be available.
You might also try an actor-based approach, since that way you can schedule the actors working with the lowest values, which may make it easier to guarantee that you're reading valid parts of the sieve without having to lock the bus on each increment of N.
Either way, you have to ensure that all workers have got above entry N before you read it, and the cost of doing that is where the trade-off between parallel and serial is made.
Without profiling we cannot tell which bit of the program needs optimizing.
If you were in a large system, then one would use a profiler to find that the prime number generator is the part that needs optimizing.
Profiling a loop with a dozen or so instructions in it is not usually worthwhile; the overhead of the profiler is significant compared to the loop body, and about the only way to improve a loop that small is to change the algorithm to do fewer iterations. So IME, once you've eliminated any expensive functions and have a known target of a few lines of simple code, you're better off changing the algorithm and timing an end-to-end run than trying to improve the code by instruction-level profiling.
#DrPizza Profiling only really helps improve an implementation; it doesn't reveal opportunities for parallel execution or suggest better algorithms (unless you have experience to the contrary, in which case I'd really like to see your profiler).
I've only single core machines at home, but ran a Java equivalent of your BitArray sieve, and a single threaded version of the inversion of the sieve - holding the marking primes in an array, and using a wheel to reduce the search space by a factor of five, then marking a bit array in increments of the wheel using each marking prime. It also reduces storage to O(sqrt(N)) instead of O(N), which helps both in terms of the largest N, paging, and bandwidth.
For medium values of N (1e8 to 1e12), the primes up to sqrt(N) can be found quite quickly, and after that you should be able to parallelise the subsequent search on the CPU quite easily. On my single core machine, the wheel approach finds primes up to 1e9 in 28s, whereas your sieve (after moving the sqrt out of the loop) takes 86s - the improvement is due to the wheel; the inversion means you can handle N larger than 2^32 but makes it slower. Code can be found here. You could parallelise the output of the results from the naive sieve after you go past sqrt(N) too, as the bit array is not modified after that point; but once you are dealing with N large enough for it to matter the array size is too big for ints.
You also should consider a possible change of algorithms.
Consider that it may be cheaper to simply add the elements to your list as you find them.
Perhaps preallocating space for your list will make it cheaper to build/populate.
Are you trying to find new primes? This may sound stupid, but you might be able to load up some sort of data structure with known primes. I am sure someone out there has a list. It might be a much easier problem to look up existing numbers than to calculate new ones.
You might also look at Microsoft's Parallel FX Library for making your existing code multi-threaded to take advantage of multi-core systems. With minimal code changes you can make your for loops multi-threaded.
There's a very good article about the Sieve of Eratosthenes: The Genuine Sieve of Eratosthenes
It's in a functional setting, but most of the optimizations also apply to a procedural implementation in C#.
The two most important optimizations are to start crossing out at P^2 instead of 2*P and to use a wheel for the next prime numbers.
For concurrency, you can process all numbers up to P^2 in parallel to P without doing any unnecessary work.
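As a minimal illustration of the wheel idea, here is the smallest wheel, storing and sieving only odd numbers, which halves both the memory and the work (larger wheels over {2, 3, 5} follow the same pattern):
using System;
using System.Collections;

// Bit k represents the odd number 2k + 1; 2 is handled separately.
int until = 20000000;
BitArray odds = new BitArray(until / 2, true);
odds.Set(0, false);                                          // 1 is not prime
for (int p = 3; (long)p * p < until; p += 2)
    if (odds.Get(p / 2))
        for (long m = (long)p * p; m < until; m += 2L * p)   // odd multiples only
            odds.Set((int)(m / 2), false);
// n > 2 is prime iff n is odd and odds.Get(n / 2); 2 itself is prime.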
void PrimeNumber(long number)
{
    if (number < 2)
    {
        MessageBox.Show("No It is not a Prime Number");
        return;
    }
    if (number == 2)
    {
        MessageBox.Show("Yes Prime Number");
        return;
    }
    if (number % 2 == 0)
    {
        MessageBox.Show("No It is not a Prime Number");
        return;
    }
    bool isPrimeNumber = true;
    long value = Convert.ToInt64(Math.Sqrt(number));
    for (long i = 3; i <= value; i = i + 2)   // only odd divisors up to sqrt(number)
    {
        if (number % i == 0)
        {
            MessageBox.Show("It is divisible by " + i);
            isPrimeNumber = false;
            break;
        }
    }
    if (isPrimeNumber)
    {
        MessageBox.Show("Yes Prime Number");
    }
    else
    {
        MessageBox.Show("No It is not a Prime Number");
    }
}