C# TimeSpan gives 0 as TotalMilliseconds [duplicate] - c#

This question already has answers here:
Is DateTime.Now the best way to measure a function's performance? [closed]
(16 answers)
Closed 6 years ago.
I was trying to "benchmark" two nearly identical loops. When I run them one at a time, the subtraction works without any hiccups and the TimeSpan gives back the correct TotalMilliseconds. When I benchmark the two loops together, however, the first subtraction gives back a number between 3 and 4 (which seems right), but the second one always gives back 0. What am I doing wrong? Thanks!
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(":: Simple for ::");
        var a = 0;
        var start = DateTime.Now;
        for (int i = 0; i < 10; i++)
        {
            a += i;
        }
        Console.WriteLine("Elapsed time: {0} ms", (DateTime.Now - start).TotalMilliseconds); // 3 - 4 ms
        Console.WriteLine(":: Fancy for ::");
        a = 0;
        start = DateTime.Now;
        foreach (var i in Enumerable.Range(0, 9))
        {
            a += i;
        }
        Console.WriteLine("Elapsed time: {0} ms", (DateTime.Now - start).TotalMilliseconds); // 0 ms
    }
}

From the documentation:
The resolution of this property depends on the system timer, which is approximately 15 milliseconds on Windows systems. As a result, repeated calls to the Now property in a short time interval, such as in a loop, may return the same value.
The Now property is frequently used to measure performance. However, because of its low resolution, it is not suitable for use as a benchmarking tool. A better alternative is to use the Stopwatch class.

Try using the Stopwatch class, which is specifically designed for benchmarking:
Stopwatch timer = new Stopwatch();
timer.Start();
for (int i = 0; i < 10; i++)
{
    a += i;
}
timer.Stop();
Console.WriteLine($"Elapsed time: {timer.Elapsed}");
Outcome (.NET 4.6, IA-64, Core i7 3.2 GHz):
00:00:00.0000003
Notice that the time is about 300 nanoseconds.
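You can check what timer backs your own machine: Stopwatch exposes the static IsHighResolution and Frequency members describing the underlying counter. A minimal sketch:

```csharp
using System;
using System.Diagnostics;

class StopwatchInfo
{
    static void Main()
    {
        // Static members describing the underlying timer.
        Console.WriteLine($"High resolution: {Stopwatch.IsHighResolution}");
        Console.WriteLine($"Frequency: {Stopwatch.Frequency} ticks/second");

        // Nanoseconds represented by a single Stopwatch tick.
        double nsPerTick = 1_000_000_000.0 / Stopwatch.Frequency;
        Console.WriteLine($"Resolution: {nsPerTick:F1} ns per tick");
    }
}
```

On most modern machines this reports a resolution well under a microsecond, which is why Stopwatch can resolve a sub-microsecond loop while DateTime.Now cannot.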

Related

Why Seconds show small value when actual elapsed time is in minutes? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 2 years ago.
I have a parallel loop in a console application with a StopWatch.
If I input 10,000 loops it says it finishes in 13 seconds, but in the real world the console doesn't stop printing until 15 minutes later. So how can I properly time this loop so the stopwatch stops after the last line has finished writing to the console?
My expected result is "Process took 903 Seconds"
int N = 10000;
Stopwatch sw = new Stopwatch();
sw.Start();
Parallel.For(0, N, i =>
{
    //Do some logic task here then write to the console
    Console.WriteLine("Thread " + i + " has finished");
});
sw.Stop();
Console.WriteLine("Process took " + sw.Elapsed.Seconds + " Seconds");
Is it similar to C# Timespan Milliseconds vs TotalMilliseconds, where one used Milliseconds instead of TotalMilliseconds?
From the documentation of the TimeSpan.Seconds property:
public int Seconds { get; }
Gets the seconds component of the time interval represented by the current TimeSpan structure.
So Seconds returns a value between 0 and 59. To get the total duration of the TimeSpan in seconds you need to query the TimeSpan.TotalSeconds property:
public double TotalSeconds { get; }
Gets the value of the current TimeSpan structure expressed in whole and fractional seconds.
You need to use sw.Elapsed.TotalSeconds, which returns a double, instead of .Seconds, which returns only the integer seconds component of the interval (alongside Hours, Minutes, etc.).
The rest of the code is correct.
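The difference is easy to see in isolation; this sketch builds a 903-second interval, matching the question's expected result, and prints both properties:

```csharp
using System;

class SecondsVsTotalSeconds
{
    static void Main()
    {
        // 903 seconds is 15 minutes and 3 seconds.
        TimeSpan span = TimeSpan.FromSeconds(903);

        // Seconds is only the seconds *component* of the interval (0-59).
        Console.WriteLine(span.Seconds);      // 3
        // TotalSeconds is the whole interval expressed in seconds.
        Console.WriteLine(span.TotalSeconds); // 903
    }
}
```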
example code:
static class Program
{
    static void Main(string[] args)
    {
        int N = 10000;
        Stopwatch sw = new Stopwatch();
        sw.Start();
        Parallel.For(0, N, (i) =>
        {
            //Do some logic task here then write to the console
            LongTask();
            Console.WriteLine($"Iteration {i} has finished in thread {Thread.CurrentThread.ManagedThreadId}");
        });
        sw.Stop();
        Console.WriteLine($"Process took {sw.Elapsed.TotalSeconds} Seconds");
    }

    static double LongTask()
    {
        double x = 1;
        for (int i = 1; i <= 2 << 23; i++) // 2 << 55 would overflow the int shift count and actually mean 2 << 23
        {
            x += Math.Pow(1.0 + 1.0 / i, i); // double division; 1 + 1 / i would be integer division
        }
        return x;
    }
}
NOTE: Always run timing on Release builds, and if running from Visual Studio, use the Start Without Debugging command.

How to determine efficiency?

How do I compare these two iterations to determine which is most efficient?
Process.GetProcessesByName("EXCEL")
.Where(p => p.StartTime >= _createdOn)
.ToList()
.ForEach(p => p.Kill());
vs
foreach (var proc in Process.GetProcessesByName("EXCEL").Where(p => p.StartTime >= _createdOn))
proc.Kill();
The actual difference can only be determined by running it both ways in exact situations and measuring by using Stopwatch or other profiling tools, however here are a few observations:
The only real difference is the call to ToList() which is necessary to use .ForEach(). Other than that they are equivalent.
Since I would assume that this is not run very often (how often do you need to kill every Excel process?) the performance difference should be immaterial in this scenario.
You may be fixing the symptom rather than the problem. I would be curious why you have multiple Excel processes that you need to kill.
They are almost exactly the same. The time-consuming operations here are Process.GetProcessesByName("EXCEL"), p.StartTime and proc.Kill(), and you perform the same number of them in both cases. Everything else takes very little time. If you want real optimization here, you can experiment with the WinAPI for those long operations; it is sometimes faster.
EDIT:
I measured the speed of these operations before, for my own project, but I didn't have exact numbers, so I rechecked.
These are my results:
Process.GetProcessesByName():
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    Process.GetProcessesByName("chrome");
}
TimeSpan duration = DateTime.Now - start;
It takes 5 seconds per 1000 operations.
p.Kill():
TimeSpan duration = TimeSpan.Zero;
for (int i = 0; i < 1000; i++) {
    Process p = Process.Start("c:/windows/notepad");
    DateTime start = DateTime.Now;
    p.Kill();
    duration += DateTime.Now - start;
}
It takes 300 milliseconds per 1000 operations. Still a big number.
And for StartTime, just compare the numbers.
Without p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    ps.Where(p => true).ToList();
}
TimeSpan duration = DateTime.Now - start;
6 milliseconds
With p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    ps.Where(p => p.StartTime < DateTime.Now).ToList();
}
TimeSpan duration = DateTime.Now - start;
408 milliseconds
So, these numbers tell me there is no point optimizing .Where(), ToList() or foreach: the operations on Process take dozens of times longer anyway. Also, I know about profilers, and I use them to measure and optimize, but I made these examples to get exact numbers and to show the point.

Slow While loop in C#

I have a while loop and all it does is a method call. I have a timer outside the loop and another timer that incrementally adds up the time the method call takes inside the loop. The outer timer reads about 17 seconds and the inner timer totals 40 ms. The loop executes 50,000 times. Here is an example of the code:
long InnerTime = 0;
long OutterTime = 0;
Stopw1.Start();
int count = 1;
while (count <= TestCollection.Count) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedMilliseconds;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OutterTime = Stopw1.ElapsedMilliseconds;
Stopw1.Reset();
Any help would be much appreciated.
Massimo
You are comparing apples and oranges. Your outer timer measures the total time taken. Your inner timer measures the number of whole milliseconds taken by the call to Method1.
The ElapsedMilliseconds property "represents elapsed time rounded down to the nearest whole millisecond value." So, you are rounding down to the nearest millisecond about 50,000 times.
If your call to Method1 takes, on average, less than 1 ms, then most of the time the `ElapsedMilliseconds` property will return 0, and your inner count will be much, much less than the actual time. In fact, your method takes about 0.3 ms on average, so you're lucky to get it over 1 ms even 40 times.
Use the Elapsed.TotalMilliseconds or ElapsedTicks property instead of ElapsedMilliseconds. One millisecond is equivalent to 10,000 ticks.
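To make the rounding concrete: a TimeSpan tick is 100 ns, so a 0.3 ms interval holds 3,000 ticks yet truncates to 0 whole milliseconds. (Note that Stopwatch.ElapsedTicks is a different unit, raw counter ticks at Stopwatch.Frequency per second, whereas Elapsed.Ticks uses the 100 ns TimeSpan unit shown here.) A short sketch:

```csharp
using System;

class TicksDemo
{
    static void Main()
    {
        // One millisecond is 10,000 TimeSpan ticks of 100 ns each.
        Console.WriteLine(TimeSpan.TicksPerMillisecond); // 10000

        // A 0.3 ms interval survives in ticks and TotalMilliseconds,
        // but rounds down to 0 whole milliseconds.
        TimeSpan t = TimeSpan.FromTicks(3000);
        Console.WriteLine(t.Milliseconds);      // 0
        Console.WriteLine(t.TotalMilliseconds); // 0.3
    }
}
```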
What is this doing: TestCollection.Count ?
I suspect your 17 seconds are being spent counting your 50,000 items over and over again.
Try changing this:
while (count <= TestCollection.Count) {
...
}
to this:
int total = TestCollection.Count;
while (count <= total) {
...
}
To add to what the others have already said, in general the C# compiler must re-evaluate any property, including
TestCollection.Count
for every single loop iteration. The property's value could change from iteration to iteration.
Assigning the value to a local variable removes the compiler's need to re-evaluate for every loop iteration.
The one exception that I'm aware of is for Array.Length, which benefits from an optimization specifically for arrays. This is referred to as Array Bounds Check Elimination.
To get a correct measurement of the time your calls take, you should use ticks.
Please try the following:
long InnerTime = 0;
long OutterTime = 0;
Stopwatch Stopw1 = new Stopwatch();
Stopwatch Stopw2 = new Stopwatch();
Stopw1.Start();
int count = 1;
int run = TestCollection.Count;
while (count <= run) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedTicks;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OutterTime = Stopw1.ElapsedTicks;
Stopw1.Reset();
You should not measure such a tiny method individually. But if you really want to, try this (note that GetTimestamp and Frequency are static members of Stopwatch):
long innertime = 0;
while (count <= TestCollection.Count)
{
    innertime -= Stopwatch.GetTimestamp();
    Method1();
    innertime += Stopwatch.GetTimestamp();
    count++;
}
Console.WriteLine("{0} ms", innertime * 1000.0 / Stopwatch.Frequency);

Timing C# code using Timer

Even though it is good to analyze performance in terms of algorithmic analysis and Big-O notation, I wanted to see how long the code takes to execute on my PC. I initialized a List with 9999 elements and removed the even ones. Sadly the timespan to execute this seems to be 0:0:0. Since that result surprised me, there must be something wrong in the way I time the execution. Could someone help me time the code correctly?
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
TimeSpan startTime, duration;
startTime = Process.GetCurrentProcess().Threads[0].UserProcessorTime;
RemoveEven(ref source);
duration = Process.GetCurrentProcess().Threads[0].UserProcessorTime.Subtract(startTime);
Console.WriteLine(duration.Milliseconds);
Console.Read();
The most appropriate thing to use here would be Stopwatch; the processor-time approach above has nowhere near enough precision for this:
var watch = Stopwatch.StartNew();
// something to time
watch.Stop();
Console.WriteLine(watch.ElapsedMilliseconds);
However, a modern CPU is very fast, and it would not surprise me if it can remove them in that time. Normally, for timing, you need to repeat an operation a large number of times to get a reasonable measurement.
Aside: the ref in RemoveEven(ref source) is almost certainly not needed.
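Following that advice, here is a hedged sketch of the repetition approach, using List&lt;int&gt;.RemoveAll as a stand-in for the question's RemoveEven (whose body isn't shown):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class RepeatedTiming
{
    const int Iterations = 1000;

    static void Main()
    {
        var watch = Stopwatch.StartNew();
        for (int rep = 0; rep < Iterations; rep++)
        {
            // Rebuild the list each repetition so every pass does identical work.
            List<int> source = Enumerable.Range(0, 9999).ToList();
            source.RemoveAll(n => n % 2 == 0); // stand-in for RemoveEven
        }
        watch.Stop();

        // Average cost of one pass, in microseconds.
        double usPerOp = watch.Elapsed.TotalMilliseconds * 1000.0 / Iterations;
        Console.WriteLine($"~{usPerOp:F1} us per removal pass");
    }
}
```

Averaging over a thousand passes turns an interval too small for a single measurement into something the timer can resolve comfortably.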
In .NET 2.0 you can use the Stopwatch class:
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
Stopwatch watch = new Stopwatch();
watch.Start();
RemoveEven(ref source);
watch.Stop();
// watch.ElapsedMilliseconds contains the execution time in ms
Adding to previous answers:
var sw = Stopwatch.StartNew();
// instructions to time
sw.Stop();
sw.ElapsedMilliseconds returns a long and has a resolution of:
1 millisecond = 1,000,000 nanoseconds
sw.Elapsed.TotalMilliseconds returns a double and has a resolution equal to the inverse of Stopwatch.Frequency. On my PC, for example, Stopwatch.Frequency is 2,939,541 ticks per second, which gives sw.Elapsed.TotalMilliseconds a resolution of:
1/2,939,541 seconds ≈ 3.4e-7 seconds = 340 nanoseconds
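Both properties are derived from the same underlying tick count, so the long is always the double rounded down; a quick sketch of the difference:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ElapsedComparison
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Thread.Sleep(5); // any short piece of work
        sw.Stop();

        // Whole milliseconds, rounded down - a long.
        Console.WriteLine(sw.ElapsedMilliseconds);
        // Fractional milliseconds at full Stopwatch resolution - a double.
        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
    }
}
```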

Math.Max vs inline if - what are the differences?

I was working on a project today, and found myself using Math.Max in several places and inline if statements in other places. So, I was wondering if anybody knew which is "better"... or rather, what the real differences are.
For example, in the following, c1 == c2:
Random rand = new Random();
int a = rand.Next(0, 10000);
int b = rand.Next(0, 10000);
int c1 = Math.Max(a, b);
int c2 = a > b ? a : b;
I'm asking specifically about C#, but I suppose the answer could be different in different languages, though I'm not sure which ones have similar concepts.
One of the major differences I would notice right away is readability; as far as I know, in terms of implementation and performance they are nearly equivalent.
Math.Max(a, b) is very simple to understand, regardless of previous coding knowledge.
a > b ? a : b requires the reader to have at least some knowledge of the ternary operator.
"When in doubt - go for readability"
I thought it would be fun to throw some numbers into this discussion, so I wrote some code to profile it. As expected, they are almost identical for all practical purposes.
The code does a billion loops (yes, 1 billion). Subtracting the overhead of the loop you get:
Math.Max() took 0.0044 seconds to run 1 billion times
The inline if took 0.0055 seconds to run 1 billion times
I subtracted the overhead, which I calculated by running an empty loop 1 billion times; the overhead was 1.2 seconds.
I ran this on a laptop with 64-bit Windows 7 and a 1.3 GHz Intel Core i5 (U470). The code was compiled in Release mode and ran without a debugger attached.
Here's the code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace TestMathMax {
    class Program {
        static int Main(string[] args) {
            var num1 = 10;
            var num2 = 100;
            var maxValue = 0;
            var LoopCount = 1000000000;
            double controlTotalSeconds;
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (var i = 0; i < LoopCount; i++) {
                    // do nothing
                }
                stopwatch.Stop();
                controlTotalSeconds = stopwatch.Elapsed.TotalSeconds;
                Console.WriteLine("Control - Empty Loop - " + controlTotalSeconds + " seconds");
            }
            Console.WriteLine();
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (int i = 0; i < LoopCount; i++) {
                    maxValue = Math.Max(num1, num2);
                }
                stopwatch.Stop();
                Console.WriteLine("Math.Max() - " + stopwatch.Elapsed.TotalSeconds + " seconds");
                Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
            }
            Console.WriteLine();
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (int i = 0; i < LoopCount; i++) {
                    maxValue = num1 > num2 ? num1 : num2;
                }
                stopwatch.Stop();
                Console.WriteLine("Inline Max: " + stopwatch.Elapsed.TotalSeconds + " seconds");
                Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
            }
            Console.ReadLine();
            return maxValue;
        }
    }
}
UPDATED Results 2/7/2015
On Windows 8.1, Surface Pro 3, i7 4650U 2.3 GHz
Ran as a console application in release mode without the debugger attached.
Math.Max() - 0.3194749 seconds
Inline Max: 0.3465041 seconds
if statement considered beneficial
Summary
A statement of the form if (a > max) max = a is the fastest way to determine the maximum of a set of numbers. However, the loop infrastructure itself takes most of the CPU time, so this optimization is questionable in the end.
Details
The answer by luisperezphd is interesting because it provides numbers, however I believe the method is flawed: the compiler will most likely move the comparison out of the loop, so the answer doesn't measure what it wants to measure. This explains the negligible timing difference between control loop and measurement loops.
To avoid this loop optimization, I added an operation that depends on the loop variable, to the empty control loop as well as to all measurement loops. I simulate the common use case of finding the maximum in a list of numbers, and used three data sets:
best case: the first number is the maximum, all numbers after it are smaller
worst case: every number is bigger than the previous, so the max changes each iteration
average case: a set of random numbers
See below for the code.
The result was rather surprising to me. On my Core i5 2520M laptop I got the following for 1 billion iterations (the empty control took about 2.6 sec in all cases):
max = Math.Max(max, a): 2.0 sec best case / 1.3 sec worst case / 2.0 sec average case
max = Math.Max(a, max): 1.6 sec best case / 2.0 sec worst case / 1.5 sec average case
max = max > a ? max : a: 1.2 sec best case / 1.2 sec worst case / 1.2 sec average case
if (a > max) max = a: 0.2 sec best case / 0.9 sec worst case / 0.3 sec average case
So despite long CPU pipelines and the resulting penalties for branching, the good old if statement is the clear winner for all simulated data sets; in the best case it is 10 times faster than Math.Max, and in the worst case still more than 30% faster.
Another surprise is that the order of the arguments to Math.Max matters. Presumably this is because of CPU branch prediction logic working differently for the two cases, and mispredicting branches more or less depending on the order of arguments.
However, the majority of the CPU time is spent in the loop infrastructure, so in the end this optimization is questionable at best. It provides a measurable but minor reduction in overall execution time.
UPDATED by luisperezphd
I couldn't fit this as a comment and it made more sense to write it here instead of as part of my answer so that it was in context.
Your theory makes sense, but I was not able to reproduce the results. First, for some reason, using your code my control loop was taking longer than the loops containing work.
For that reason I made the numbers here relative to the lowest time instead of the control loop. The seconds in the results are how much longer it took than the fastest time. For example in the results immediately below the fastest time was for Math.Max(a, max) best case, so every other result represents how much longer they took than that.
Below are the results I got:
max = Math.Max(max, a): 0.012 sec best case / 0.007 sec worst case / 0.028 sec average case
max = Math.Max(a, max): 0.000 best case / 0.021 worst case / 0.019 sec average case
max = max > a ? max : a: 0.022 sec best case / 0.02 sec worst case / 0.01 sec average case
if (a > max) max = a: 0.015 sec best case / 0.024 sec worst case / 0.019 sec average case
The second time I ran it I got:
max = Math.Max(max, a): 0.024 sec best case / 0.010 sec worst case / 0.009 sec average case
max = Math.Max(a, max): 0.001 sec best case / 0.000 sec worst case / 0.018 sec average case
max = max > a ? max : a: 0.011 sec best case / 0.005 sec worst case / 0.018 sec average case
if (a > max) max = a: 0.000 sec best case / 0.005 sec worst case / 0.039 sec average case
There is enough volume in these tests that any anomalies should have been wiped out. Yet despite that the results are pretty different. Maybe the large memory allocation for the array has something to do with it. Or possibly the difference is so small that anything else happening on the computer at the time is the true cause of the variation.
Note the fastest time, represented in the results above by 0.000 is about 8 seconds. So if you consider that the longest run then was 8.039, the variation in time is about half a percent (0.5%) - aka too small to matter.
The computer
The code was ran on Windows 8.1, i7 4810MQ 2.8Ghz and compiled in .NET 4.0.
Code modifications
I modified your code a bit to output the results in the format shown above. I also added additional code to wait 1 second after starting to account for any additional loading time .NET might need when running the assembly.
Also I ran all the tests twice to account for any CPU optimizations. Finally I changed the int for i to a uint so I could run the loop 4 billion times instead of 1 billion to get a longer timespan.
That's probably all overkill, but it's all to make sure as much as possible that the tests are not affected by any of those factors.
You can find the code at: http://pastebin.com/84qi2cbD
Code
using System;
using System.Diagnostics;
namespace ProfileMathMax
{
class Program
{
static double controlTotalSeconds;
const int InnerLoopCount = 100000;
const int OuterLoopCount = 1000000000 / InnerLoopCount;
static int[] values = new int[InnerLoopCount];
static int total = 0;
static void ProfileBase()
{
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
int maxValue;
for (int j = 0; j < OuterLoopCount; j++)
{
maxValue = 0;
for (int i = 0; i < InnerLoopCount; i++)
{
// baseline
total += values[i];
}
}
stopwatch.Stop();
controlTotalSeconds = stopwatch.Elapsed.TotalSeconds;
Console.WriteLine("Control - Empty Loop - " + controlTotalSeconds + " seconds");
}
static void ProfileMathMax()
{
int maxValue;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (int j = 0; j < OuterLoopCount; j++)
{
maxValue = 0;
for (int i = 0; i < InnerLoopCount; i++)
{
maxValue = Math.Max(values[i], maxValue);
total += values[i];
}
}
stopwatch.Stop();
Console.WriteLine("Math.Max(a, max) - " + stopwatch.Elapsed.TotalSeconds + " seconds");
Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
}
static void ProfileMathMaxReverse()
{
int maxValue;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (int j = 0; j < OuterLoopCount; j++)
{
maxValue = 0;
for (int i = 0; i < InnerLoopCount; i++)
{
maxValue = Math.Max(maxValue, values[i]);
total += values[i];
}
}
stopwatch.Stop();
Console.WriteLine("Math.Max(max, a) - " + stopwatch.Elapsed.TotalSeconds + " seconds");
Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
}
static void ProfileInline()
{
int maxValue = 0;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (int j = 0; j < OuterLoopCount; j++)
{
maxValue = 0;
for (int i = 0; i < InnerLoopCount; i++)
{
maxValue = maxValue > values[i] ? maxValue : values[i]; // the original had the branches swapped, which computes the minimum
total += values[i];
}
}
stopwatch.Stop();
Console.WriteLine("max = max > a ? max : a: " + stopwatch.Elapsed.TotalSeconds + " seconds");
Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
}
static void ProfileIf()
{
int maxValue = 0;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
for (int j = 0; j < OuterLoopCount; j++)
{
maxValue = 0;
for (int i = 0; i < InnerLoopCount; i++)
{
if (values[i] > maxValue)
maxValue = values[i];
total += values[i];
}
}
stopwatch.Stop();
Console.WriteLine("if (a > max) max = a: " + stopwatch.Elapsed.TotalSeconds + " seconds");
Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
}
static void Main(string[] args)
{
Random rnd = new Random();
for (int i = 0; i < InnerLoopCount; i++)
{
//values[i] = i; // worst case: every new number bigger than the previous
//values[i] = i == 0 ? 1 : 0; // best case: first number is the maximum
values[i] = rnd.Next(int.MaxValue); // average case: random numbers
}
ProfileBase();
Console.WriteLine();
ProfileMathMax();
Console.WriteLine();
ProfileMathMaxReverse();
Console.WriteLine();
ProfileInline();
Console.WriteLine();
ProfileIf();
Console.ReadLine();
}
}
}
I'd say it is quicker to understand what Math.Max is doing, and that should really be the only deciding factor here.
But as an indulgence, it's interesting to consider that Math.Max(a, b) evaluates each argument once, whilst a > b ? a : b evaluates one of them twice. That is not a problem with local variables, but for properties with side effects, the side effect may happen twice.
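The double evaluation is easy to demonstrate with a hypothetical property whose getter counts its own calls (the names here are illustrative, not from the question):

```csharp
using System;

class DoubleEvaluation
{
    static int getterCalls = 0;

    // Hypothetical property with an observable side effect.
    static int A { get { getterCalls++; return 5; } }

    static void Main()
    {
        int b = 3;

        getterCalls = 0;
        int viaTernary = A > b ? A : b; // A's getter runs for the test AND for the result
        Console.WriteLine(getterCalls); // 2

        getterCalls = 0;
        int viaMax = Math.Max(A, b);    // A is evaluated once, as an argument
        Console.WriteLine(getterCalls); // 1
    }
}
```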
If the JITer chooses to inline the Math.Max function, the executable code will be identical to the if statement. If Math.Max isn't inlined, it will execute as a function call with call and return overhead not present in the if statement. So, the if statement will give identical performance to Math.Max() in the inlining case or the if statement may be a few clock cycles faster in the non-inlined case, but the difference won't be noticeable unless you are running tens of millions of comparisons.
Since the performance difference between the two is small enough to be negligible in most situations, I'd prefer the Math.Max(a,b) because it's easier to read.
Regarding performance: modern CPUs have an internal instruction pipeline, such that every instruction is executed in several internal steps (e.g. fetch, decode, compute, store).
In most cases the CPU is smart enough to run these steps in parallel for sequential instructions, so the overall throughput is very high.
This is fine until a branch comes along (if, ?:, etc.).
The branch may break the sequence and force the CPU to discard the pipeline.
This costs many clock cycles.
In theory, if the compiler is smart enough, Math.Max can be implemented using a built-in CPU instruction and the branch can be avoided.
In that case Math.Max would actually be faster than the if, but it depends on the compiler.
For more complicated cases of Max, like working on vectors (double[] v; v.Max()), the compiler can use highly optimized library code that can be much faster than regularly compiled code.
So it's best to go with Math.Max, but it is also recommended to test on your particular target system and compiler if it matters enough.
Math.Max(a, b)
is NOT equivalent to a > b ? a : b in all cases.
Math.Max returns the greater of the two arguments, that is:
if (a == b) return a; // or b, doesn't matter since they're identical
else if (a > b) return a;
else if (b > a) return b;
else return undefined; // neither argument compares greater, e.g. with NaN
The undefined is mapped to double.NaN in the case of the double overload of Math.Max, for example.
a > b ? a : b
evaluates to a if a is greater than b; when that test fails it falls back to b, even when b is not actually the greater value.
A simple example demonstrates that they are not equivalent:
var a = 0.0 / 0.0; // double.NaN
var b = 1.0;
Console.WriteLine(a > b ? a : b);  // prints 1 (NaN > 1.0 is false, so b is chosen)
Console.WriteLine(Math.Max(a, b)); // prints NaN
Take an operation:
N must be >= 0
General solutions:
A) N = Math.Max(0, N)
B) if (N < 0) { N = 0; }
Sorted by speed: Math.Max (A) is the slowest, and the if-then statement (B) is faster (about 3% faster than solution A).
But my solution is 4% faster than solution B:
N *= Math.Sign(1 + Math.Sign(N));
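For what it's worth, the trick works because Math.Sign(N) is -1, 0 or 1: then 1 + Math.Sign(N) is 0, 1 or 2, and taking Math.Sign again yields 0 for negative N and 1 otherwise, so the multiply zeroes out exactly the negative case. A quick check (ClampToZero is a name chosen here for illustration):

```csharp
using System;

class SignClamp
{
    // Branch-free clamp of n to a minimum of 0, per the trick above.
    static int ClampToZero(int n) => n * Math.Sign(1 + Math.Sign(n));

    static void Main()
    {
        Console.WriteLine(ClampToZero(-5)); // 0
        Console.WriteLine(ClampToZero(0));  // 0
        Console.WriteLine(ClampToZero(7));  // 7
    }
}
```

Whether it really beats a plain if will depend on the JIT and the hardware, so it is worth re-measuring on your own target before adopting it.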
