I have a program, in which an event happens randomly. I'm trying to write some code to calculate and store the average time it takes for these events to happen. Here is the code I use to calculate the mean:
int EventCount = 0;
var s = Stopwatch.StartNew();
while (true)
{
    if (EventTriggered)
    {
        Console.WriteLine("Event detected");
        EventCount++;
        s.Stop();
        AverageMS += s.ElapsedMilliseconds;
        AverageMS /= EventCount;
        Console.WriteLine("Current average ms: " + AverageMS);
        s.Restart();
    }
}
The "average" it displays is much closer to the individual event times than to a true running average.
Here is a sample of 100 events:
http://pastebin.com/cmwQPqfR
You are dividing the running "average" by the event count, meaning you are dividing over and over again. You need to accumulate the total, then recompute the average each time.
More like this:
long TotalMS = 0;
long AverageMS = 0;
int EventCount = 0;
var s = Stopwatch.StartNew();
while (true)
{
    if (EventTriggered)
    {
        s.Stop();
        Console.WriteLine("Event detected");
        EventCount++;
        TotalMS += s.ElapsedMilliseconds;
        AverageMS = TotalMS / EventCount;
        Console.WriteLine("Current average ms: " + AverageMS);
        s.Restart();
    }
}
Note that you should stop your timer before the WriteLine, otherwise you are timing the console operation as well.
Your calculations are wrong. You should keep the count and the total elapsed time separately, and do the division only when you need the average. Look at what you are doing if your elapsed times are 1, 3, 5: clearly the averages should go 1, 2, 3. What you will get is EventCount = 1, AverageMS = 1. Then EventCount = 2, AverageMS has 3 added and is divided by 2, so AverageMS = (1 + 3) / 2 = 2. Then EventCount goes to 3 and AverageMS = (2 + 5) / 3 = 2.33 (going wrong now; it should be 3). If the next time is 3, the "average" would be (2.33 + 3) / 4 = 1.33!
Just keep the count and running total separate to fix it.
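A minimal sketch of that fix, replaying the sample times above (1, 3, 5) from an array instead of a live Stopwatch:

```csharp
using System;

class RunningAverage
{
    static void Main()
    {
        long totalMs = 0;      // running total of all elapsed times
        int eventCount = 0;    // number of events seen so far
        long[] samples = { 1, 3, 5 };   // elapsed times from the example

        foreach (long elapsed in samples)
        {
            eventCount++;
            totalMs += elapsed;
            long average = totalMs / eventCount;  // integer division, as in the question
            Console.WriteLine(average);           // prints 1, 2, 3
        }
    }
}
```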
So, I'm working on a project where I display two amounts. The first is a minimum amount, which is constant. The second is the progress, which should increment every 60 seconds but must not exceed a maximum amount within a set duration.
An example:
I have a min amount of 10,000.
Max amount = 50,000.
Savings duration = 93 days.
Now, I want the progress count (a label showing how the savings is growing) to keep increasing at a given interval (probably every 60 seconds) until the 93rd day, without exceeding the maximum amount.
My question is: how do I achieve this? What method can best give a good result?
Here is my current implementation:
public string TotalBalance
{
    get
    {
        //string newBal;
        double min;
        double max;
        double dys;
        dys = double.Parse(days);
        double calc = dys * 1440;
        min = double.Parse(amount);
        max = double.Parse(totalReturn);
        double costpermin = max / calc;
        if (dys > 0)
        {
            string kems;
            Device.StartTimer(new TimeSpan(0, 0, 60), () =>
            {
                // do something every 60 seconds
                Device.BeginInvokeOnMainThread(() =>
                {
                    double pel = min + costpermin++;
                    string polo = pel.ToString();
                    string Bal = Math.Round(Convert.ToDouble(polo), 2).ToString("C", System.Globalization.CultureInfo.GetCultureInfo("en-us")).Replace("$", "N");
                    kems = Bal;
                    var updAmt = Bal;
                    MessagingCenter.Send<object, string>(this, "timer", updAmt);
                });
                return true; // runs again, or false to stop
            });
            return kems;
        }
        else
        {
            string meeBal = Math.Round(Convert.ToDouble(this.totalReturn), 2).ToString("C", System.Globalization.CultureInfo.GetCultureInfo("en-us")).Replace("$", "N");
            return meeBal;
        }
    }
    set
    {
        TotalBalance = value;
        OnPropertyChanged(nameof(TotalBalance));
    }
}
I added Device.StartTimer in my model to be able to update the view in real time. However, there are some issues:
1. The calculation is right, but it only runs after the first 60 seconds, so in OnAppearing the amount label shows empty, and the view still doesn't get updated.
2. The calculation doesn't continue the next day: if today your total growth shows 10,200.12, when you open the app again tomorrow it starts counting from the beginning instead of from where it left off.
Try using Device.StartTimer() to increment the count periodically:
https://learn.microsoft.com/en-us/dotnet/api/xamarin.forms.device.starttimer?view=xamarin-forms
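One way to address both issues is to make the balance a pure function of wall-clock time and a persisted start date, instead of an in-memory counter. A sketch of the idea (the method and parameter names are mine, not from your code; note it uses (max - min) / duration as the rate so the balance lands exactly on the maximum at day 93):

```csharp
using System;

class SavingsProgress
{
    // The balance depends only on how much wall-clock time has elapsed,
    // so it survives app restarts as long as startUtc is persisted
    // (e.g. in Xamarin.Essentials Preferences).
    public static double CurrentBalance(double min, double max, double days,
                                        DateTime startUtc, DateTime nowUtc)
    {
        double totalMinutes = days * 1440;                        // plan duration in minutes
        double elapsedMinutes = (nowUtc - startUtc).TotalMinutes;
        if (elapsedMinutes < 0) elapsedMinutes = 0;
        double fraction = elapsedMinutes / totalMinutes;          // how far into the plan we are
        double balance = min + (max - min) * fraction;
        return Math.Min(balance, max);                            // never exceed the maximum
    }

    static void Main()
    {
        var start = new DateTime(2020, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        // Halfway through a 93-day plan from 10,000 to 50,000:
        double halfway = CurrentBalance(10000, 50000, 93, start, start.AddDays(46.5));
        Console.WriteLine(halfway);  // 30000
    }
}
```

On startup you would read the saved start date, compute the balance immediately so the label is never empty, and let the 60-second timer simply recompute and re-publish this value.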
I have an array of integers. The value of each element represents the time taken to process a file. The processing of files consists of merging two files at a time. What is the algorithm to find the minimum total time to process all the files? E.g. {3,5,9,12,14,18}.
The processing time can be calculated as:
Case 1) -
a) [8],9,12,14,18
b) [17],12,14,18
c) [26],17,18
d) 26,[35]
e) 61
So total time for processing is 61 + 35 + 26 + 17 + 8 = 147
Case 2) -
a) [21],5,9,12,14
b) [17],[21],9,14
c) [21],[17],[23]
d) [40],[21]
e) 61
This time the total time is 61 + 40 + 23 + 17 + 21 = 162
Seems to me that continuously sorting the array and adding the least two elements is the best bet for the minimum as in Case 1. Is my logic right? If not what is the right and easiest way to achieve this with best performance?
Once you have the sorted list, since you are only removing the two minimum items and replacing them with one, it makes more sense to do a sorted insert and place the new item in the correct place instead of re-sorting the entire list. However, this only saves a fractional amount of time - about 1% faster.
My method CostOfMerge doesn't assume the input is a List, but if it is, you can remove the ToList conversion step.
public static class IEnumerableExt {
    public static long CostOfMerge(this IEnumerable<int> psrc) {
        long total = 0;
        var src = psrc.ToList();
        src.Sort();
        while (src.Count > 1) {
            var sum = src[0] + src[1];
            src.RemoveRange(0, 2);
            var index = src.BinarySearch(sum);
            if (index < 0)
                index = ~index;
            src.Insert(index, sum);
            total += sum;
        }
        return total;
    }
}
As already discussed in other answers, the best strategy will be to always work on the two items with minimal cost for each iteration. So the only remaining question is how to efficiently take the two smallest items each time.
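For reference, the standard structure for "take the two smallest each time" is a min-heap. A sketch using .NET 6's built-in PriorityQueue&lt;TElement, TPriority&gt; (not tuned for raw speed like the array version that follows, but short and O(n log n)):

```csharp
using System;
using System.Collections.Generic;

static class MergeCost
{
    public static long CostOfMergeHeap(IEnumerable<int> items)
    {
        // Min-heap keyed on the item value itself (requires .NET 6+).
        var heap = new PriorityQueue<long, long>();
        foreach (int x in items)
            heap.Enqueue(x, x);

        long total = 0;
        while (heap.Count > 1)
        {
            long sum = heap.Dequeue() + heap.Dequeue(); // two smallest items
            total += sum;
            heap.Enqueue(sum, sum);                     // merged file goes back in
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(CostOfMergeHeap(new[] { 3, 5, 9, 12, 14, 18 })); // 147, matching Case 1 above
    }
}
```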
Since you asked for best performance, I shamelessly took the algorithm from NetMage and modified it to speed it up by roughly 40% for my test case (thanks and +1 to NetMage).
The idea is to work mostly in place on a single array.
Each iteration increases the starting index by 1 and moves the elements within the array to make space for the sum from the current iteration.
public static long CostOfMerge2(this IEnumerable<int> psrc)
{
    long total = 0;
    var src = psrc.ToArray();
    Array.Sort(src);
    var i = 1;
    int length = src.Length;
    while (i < length)
    {
        var sum = src[i - 1] + src[i];
        total += sum;
        // find insert position for sum
        var index = Array.BinarySearch(src, i + 1, length - i - 1, sum);
        if (index < 0)
            index = ~index;
        --index;
        // shift items that come before insert position one place to the left
        if (i < index)
            Array.Copy(src, i + 1, src, i, index - i);
        src[index] = sum;
        ++i;
    }
    return total;
}
I tested with the following calling code (switching between CostOfMerge and CostOfMerge2), with a few different values for random-seed, count of elements and max value of initial items.
static void Main(string[] args)
{
    var r = new Random(10);
    var testcase = Enumerable.Range(0, 400000).Select(x => r.Next(1000)).ToList();
    var sw = Stopwatch.StartNew();
    long resultCost = testcase.CostOfMerge();
    sw.Stop();
    Console.WriteLine($"Cost of Merge: {resultCost}");
    Console.WriteLine($"Time of Merge: {sw.Elapsed}");
    Console.ReadLine();
}
Result for shown configuration for NetMage CostOfMerge:
Cost of Merge: 3670570720
Time of Merge: 00:00:15.4472251
My CostOfMerge2:
Cost of Merge: 3670570720
Time of Merge: 00:00:08.7193612
Of course the detailed numbers are hardware dependent, and the difference might be bigger or smaller depending on many factors.
No, that's the minimum for a polyphase merge: where N is the bandwidth (number of files you can merge simultaneously), then you want to merge the smallest (N-1) files at each step. However, with this more general problem, you want to delay the larger files as long as possible -- you may want an early step or two to merge fewer than (N-1) files, somewhat like having a "bye" in an elimination tourney. You want all the latter steps to involve the full (N-1) files.
For instance, given N=4 and files 1, 6, 7, 8, 14, 22:
Early merge:
[22], 14, 22
[58]
total = 80
Late merge:
[14], 8, 14, 22
[58]
total = 72
Here you can apply the following logic to get the desired output:
Get the two minimum values from the list.
Remove those two values from the list.
Append their sum to the list.
Continue until the list is of size 1.
The minimum total processing time is the sum of all the intermediate sums produced along the way.
Here is my Java code, if you find it helpful. :)
import java.util.ArrayList;
import java.util.Arrays;

public class MinimumSums {
    // Returns the index of the smallest element, skipping skipIndex.
    // Working with indices instead of values handles duplicate values correctly.
    private static int getMinIndex(ArrayList<Integer> list, int skipIndex) {
        int minIndex = -1;
        for (int i = 0; i < list.size(); i++) {
            if (i == skipIndex)
                continue;
            if (minIndex == -1 || list.get(i) < list.get(minIndex))
                minIndex = i;
        }
        return minIndex;
    }

    public static void main(String[] args) {
        Integer[] processes = {5, 9, 3, 14, 12, 18};
        ArrayList<Integer> list = new ArrayList<Integer>();
        ArrayList<Integer> temp = new ArrayList<Integer>();
        list.addAll(Arrays.asList(processes));
        int totalTime = 0;
        while (list.size() != 1) {
            int firstIndex = getMinIndex(list, -1);          // index of first minimum
            int secondIndex = getMinIndex(list, firstIndex); // index of second minimum
            int sum = list.get(firstIndex) + list.get(secondIndex);
            // remove the larger index first so the smaller one stays valid
            list.remove(Math.max(firstIndex, secondIndex));
            list.remove(Math.min(firstIndex, secondIndex));
            list.add(sum);
            temp.add(sum);
            totalTime += sum;
        }
        System.out.println(temp);      // prints all the intermediate sums
        System.out.println(totalTime); // prints the minimum total processing time
    }
}
I was surprised that getting a value by index from the first array needs more time than from the second one. It does not depend on the arrays' lengths; in my tests it is true for any combination. I guess it depends on some low-level optimization. Can somebody explain it?
A code example is below:
var a1 = new int[10];
var a2 = new int[1000000];

#region init
var random = new Random(12345);
for (int i = 0; i < a1.Length; i++)
    a1[i] = random.Next(1000000000);
for (int i = 0; i < a2.Length; i++)
    a2[i] = random.Next(1000000000);
#endregion

Console.WriteLine("a1 Length = " + a1.Length);
var watcher = Stopwatch.StartNew();
var t1 = a1[a1.Length / 2];
watcher.Stop();
Console.WriteLine("a1 timestamp = " + watcher.ElapsedTicks); // average value 130-150 ticks

Console.WriteLine("a2 Length = " + a2.Length);
watcher = Stopwatch.StartNew();
var t2 = a2[a2.Length / 2];
watcher.Stop();
Console.WriteLine("a2 timestamp = " + watcher.ElapsedTicks); // average value 10-15 ticks
Console.ReadLine();
My result is:
- getting a value by index from the array of length 10 takes ~130-150 ticks
- getting a value by index from the array of length 1000000 takes ~10-15 ticks
I would suggest you change how you measure the performance, but let's assume your measurement is correct. There could be a few reasons here, and one of them is branch prediction. In short, modern processors use branch prediction in their computations.
As it says on Wikipedia:
The purpose of the branch predictor is to improve the flow in the
instruction pipeline. Branch predictors play a critical role in
achieving high effective performance in many modern pipelined
microprocessor architectures such as x86.
So the digital circuit tries to identify a pattern and follow it. If it guesses right every time, execution never has to stall, and things go fast; if it guesses wrong too often, a lot of time is spent rolling back and restarting. For the same reason, processing a sorted array is faster than processing an unsorted one.
I have a while loop and all it does is a method call. I have a timer on the outside of the loop and another timer that incrementally adds up the time the method call takes inside the loop. The outer time takes about 17 seconds and the total on the inner timer is 40 ms. The loop is executing 50,000 times. Here is an example of the code:
long InnerTime = 0;
long OuterTime = 0;
Stopw1.Start();
int count = 1;
while (count <= TestCollection.Count) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedMilliseconds;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedMilliseconds;
Stopw1.Reset();
Any help would be much appreciated.
Massimo
You are comparing apples and oranges. Your outer timer measures the total time taken. Your inner timer measures the number of whole milliseconds taken by the call to Method1.
The ElapsedMilliseconds property "represents elapsed time rounded down to the nearest whole millisecond value." So, you are rounding down to the nearest millisecond about 50,000 times.
If your call to Method1 takes, on average, less than 1 ms, then most of the time the ElapsedMilliseconds property will return 0 and your inner count will be much, much less than the actual time. In fact, your method takes about 0.3 ms on average, so you're lucky even to get it over 1 ms 40 times.
Use the Elapsed.TotalMilliseconds or ElapsedTicks property instead of ElapsedMilliseconds. One millisecond is equivalent to 10,000 ticks.
What is this doing: TestCollection.Count ?
I suspect your 17 seconds are being spent counting your 50,000 items over and over again.
Try changing this:
while (count <= TestCollection.Count) {
    ...
}
to this:
int total = TestCollection.Count;
while (count <= total) {
    ...
}
To add to what the others have already said, in general the C# compiler must re-evaluate any property, including
TestCollection.Count
for every single loop iteration. The property's value could change from iteration to iteration.
Assigning the value to a local variable removes the compiler's need to re-evaluate for every loop iteration.
The one exception that I'm aware of is for Array.Length, which benefits from an optimization specifically for arrays. This is referred to as Array Bounds Check Elimination.
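For illustration, this is the loop shape that optimization applies to (a hypothetical sketch; comparing i directly against arr.Length is the pattern the JIT recognizes):

```csharp
using System;

class BoundsCheck
{
    public static long SumDirect(int[] arr)
    {
        long sum = 0;
        // Comparing i directly against arr.Length lets the JIT prove the
        // index is always in range and drop the per-access bounds check.
        for (int i = 0; i < arr.Length; i++)
            sum += arr[i];
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumDirect(new[] { 1, 2, 3, 4 })); // 10
    }
}
```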
To get a correct measurement of the time your calls take, you should use ticks.
Please try the following:
long InnerTime = 0;
long OuterTime = 0;
Stopwatch Stopw1 = new Stopwatch();
Stopwatch Stopw2 = new Stopwatch();
Stopw1.Start();
int count = 1;
int run = TestCollection.Count;
while (count <= run) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedTicks;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedTicks;
Stopw1.Reset();
You should not measure such a tiny method individually. But if you really want to, try this (note that GetTimestamp and Frequency are static members of Stopwatch):
long innertime = 0;
while (count <= TestCollection.Count)
{
    innertime -= Stopwatch.GetTimestamp();
    Method1();
    innertime += Stopwatch.GetTimestamp();
    count++;
}
Console.WriteLine("{0} ms", innertime * 1000.0 / Stopwatch.Frequency);
I was working on a project today, and found myself using Math.Max in several places and inline if statements in other places. So, I was wondering if anybody knew which is "better"... or rather, what the real differences are.
For example, in the following, c1 = c2:
Random rand = new Random();
int a = rand.Next(0, 10000);
int b = rand.Next(0, 10000);
int c1 = Math.Max(a, b);
int c2 = a > b ? a : b;
I'm asking specifically about C#, but I suppose the answer could be different in different languages, though I'm not sure which ones have similar concepts.
One of the major differences I would notice right away is readability; as far as implementation and performance go, they should be nearly equivalent.
Math.Max(a,b) is very simple to understand, regardless of previous coding knowledge.
a>b ? a : b would require the user to have some knowledge of the ternary operator, at least.
"When in doubt - go for readability"
I thought it would be fun to throw in some numbers into this discussion so I wrote some code to profile it. As expected they are almost identical for all practical purposes.
The code does a billion loops (yep 1 billion). Subtracting the overhead of the loop you get:
Math.Max() took .0044 seconds to run 1 billion times
The inline if took .0055 seconds to run 1 billion times
I subtracted the overhead which I calculated by running an empty loop 1 billion times, the overhead was 1.2 seconds.
I ran this on a laptop, 64-bit Windows 7, 1.3 Ghz Intel Core i5 (U470). The code was compiled in release mode and ran without a debugger attached.
Here's the code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace TestMathMax {
    class Program {
        static int Main(string[] args) {
            var num1 = 10;
            var num2 = 100;
            var maxValue = 0;
            var LoopCount = 1000000000;
            double controlTotalSeconds;
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (var i = 0; i < LoopCount; i++) {
                    // do nothing
                }
                stopwatch.Stop();
                controlTotalSeconds = stopwatch.Elapsed.TotalSeconds;
                Console.WriteLine("Control - Empty Loop - " + controlTotalSeconds + " seconds");
            }
            Console.WriteLine();
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (int i = 0; i < LoopCount; i++) {
                    maxValue = Math.Max(num1, num2);
                }
                stopwatch.Stop();
                Console.WriteLine("Math.Max() - " + stopwatch.Elapsed.TotalSeconds + " seconds");
                Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
            }
            Console.WriteLine();
            {
                var stopwatch = new Stopwatch();
                stopwatch.Start();
                for (int i = 0; i < LoopCount; i++) {
                    maxValue = num1 > num2 ? num1 : num2;
                }
                stopwatch.Stop();
                Console.WriteLine("Inline Max: " + stopwatch.Elapsed.TotalSeconds + " seconds");
                Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
            }
            Console.ReadLine();
            return maxValue;
        }
    }
}
UPDATED Results 2/7/2015
On a Windows 8.1, Surface 3 Pro, i7 4650U 2.3Ghz
Ran as a console application in release mode without the debugger attached.
Math.Max() - 0.3194749 seconds
Inline Max: 0.3465041 seconds
if statement considered beneficial
Summary
A statement of the form if (a > max) max = a is the fastest way to determine the maximum of a set of numbers. However, the loop infrastructure itself takes most of the CPU time, so this optimization is questionable in the end.
Details
The answer by luisperezphd is interesting because it provides numbers; however, I believe the method is flawed: the compiler will most likely move the comparison out of the loop, so the answer doesn't measure what it wants to measure. This explains the negligible timing difference between the control loop and the measurement loops.
To avoid this loop optimization, I added an operation that depends on the loop variable, to the empty control loop as well as to all measurement loops. I simulate the common use case of finding the maximum in a list of numbers, and used three data sets:
best case: the first number is the maximum, all numbers after it are smaller
worst case: every number is bigger than the previous, so the max changes each iteration
average case: a set of random numbers
See below for the code.
The result was rather surprising to me. On my Core i5 2520M laptop I got the following for 1 billion iterations (the empty control took about 2.6 sec in all cases):
max = Math.Max(max, a): 2.0 sec best case / 1.3 sec worst case / 2.0 sec average case
max = Math.Max(a, max): 1.6 sec best case / 2.0 sec worst case / 1.5 sec average case
max = max > a ? max : a: 1.2 sec best case / 1.2 sec worst case / 1.2 sec average case
if (a > max) max = a: 0.2 sec best case / 0.9 sec worst case / 0.3 sec average case
So despite long CPU pipelines and the resulting penalties for branching, the good old if statement is the clear winner for all simulated data sets; in the best case it is 10 times faster than Math.Max, and in the worst case still more than 30% faster.
Another surprise is that the order of the arguments to Math.Max matters. Presumably this is because of CPU branch prediction logic working differently for the two cases, and mispredicting branches more or less depending on the order of arguments.
However, the majority of the CPU time is spent in the loop infrastructure, so in the end this optimization is questionable at best. It provides a measurable but minor reduction in overall execution time.
UPDATED by luisperezphd
I couldn't fit this as a comment and it made more sense to write it here instead of as part of my answer so that it was in context.
Your theory makes sense, but I was not able to reproduce the results. First, for some reason, with your code my control loop took longer than the loops containing work.
For that reason I made the numbers here relative to the lowest time instead of the control loop. The seconds in the results are how much longer it took than the fastest time. For example in the results immediately below the fastest time was for Math.Max(a, max) best case, so every other result represents how much longer they took than that.
Below are the results I got:
max = Math.Max(max, a): 0.012 sec best case / 0.007 sec worst case / 0.028 sec average case
max = Math.Max(a, max): 0.000 best case / 0.021 worst case / 0.019 sec average case
max = max > a ? max : a: 0.022 sec best case / 0.02 sec worst case / 0.01 sec average case
if (a > max) max = a: 0.015 sec best case / 0.024 sec worst case / 0.019 sec average case
The second time I ran it I got:
max = Math.Max(max, a): 0.024 sec best case / 0.010 sec worst case / 0.009 sec average case
max = Math.Max(a, max): 0.001 sec best case / 0.000 sec worst case / 0.018 sec average case
max = max > a ? max : a: 0.011 sec best case / 0.005 sec worst case / 0.018 sec average case
if (a > max) max = a: 0.000 sec best case / 0.005 sec worst case / 0.039 sec average case
There is enough volume in these tests that any anomalies should have been wiped out. Yet despite that the results are pretty different. Maybe the large memory allocation for the array has something to do with it. Or possibly the difference is so small that anything else happening on the computer at the time is the true cause of the variation.
Note the fastest time, represented in the results above by 0.000 is about 8 seconds. So if you consider that the longest run then was 8.039, the variation in time is about half a percent (0.5%) - aka too small to matter.
The computer
The code was ran on Windows 8.1, i7 4810MQ 2.8Ghz and compiled in .NET 4.0.
Code modifications
I modified your code a bit to output the results in the format shown above. I also added additional code to wait 1 second after starting to account for any additional loading time .NET might need when running the assembly.
Also, I ran all the tests twice to account for any CPU optimizations. Finally, I changed the int for i to a uint so I could run the loop 4 billion times instead of 1 billion, to get a longer timespan.
That's probably all overkill, but it's all to make sure as much as possible that the tests are not affected by any of those factors.
You can find the code at: http://pastebin.com/84qi2cbD
Code
using System;
using System.Diagnostics;

namespace ProfileMathMax
{
    class Program
    {
        static double controlTotalSeconds;
        const int InnerLoopCount = 100000;
        const int OuterLoopCount = 1000000000 / InnerLoopCount;
        static int[] values = new int[InnerLoopCount];
        static int total = 0;

        static void ProfileBase()
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            int maxValue;
            for (int j = 0; j < OuterLoopCount; j++)
            {
                maxValue = 0;
                for (int i = 0; i < InnerLoopCount; i++)
                {
                    // baseline
                    total += values[i];
                }
            }
            stopwatch.Stop();
            controlTotalSeconds = stopwatch.Elapsed.TotalSeconds;
            Console.WriteLine("Control - Empty Loop - " + controlTotalSeconds + " seconds");
        }

        static void ProfileMathMax()
        {
            int maxValue;
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            for (int j = 0; j < OuterLoopCount; j++)
            {
                maxValue = 0;
                for (int i = 0; i < InnerLoopCount; i++)
                {
                    maxValue = Math.Max(values[i], maxValue);
                    total += values[i];
                }
            }
            stopwatch.Stop();
            Console.WriteLine("Math.Max(a, max) - " + stopwatch.Elapsed.TotalSeconds + " seconds");
            Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
        }

        static void ProfileMathMaxReverse()
        {
            int maxValue;
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            for (int j = 0; j < OuterLoopCount; j++)
            {
                maxValue = 0;
                for (int i = 0; i < InnerLoopCount; i++)
                {
                    maxValue = Math.Max(maxValue, values[i]);
                    total += values[i];
                }
            }
            stopwatch.Stop();
            Console.WriteLine("Math.Max(max, a) - " + stopwatch.Elapsed.TotalSeconds + " seconds");
            Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
        }

        static void ProfileInline()
        {
            int maxValue = 0;
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            for (int j = 0; j < OuterLoopCount; j++)
            {
                maxValue = 0;
                for (int i = 0; i < InnerLoopCount; i++)
                {
                    maxValue = maxValue > values[i] ? maxValue : values[i];
                    total += values[i];
                }
            }
            stopwatch.Stop();
            Console.WriteLine("max = max > a ? max : a: " + stopwatch.Elapsed.TotalSeconds + " seconds");
            Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
        }

        static void ProfileIf()
        {
            int maxValue = 0;
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            for (int j = 0; j < OuterLoopCount; j++)
            {
                maxValue = 0;
                for (int i = 0; i < InnerLoopCount; i++)
                {
                    if (values[i] > maxValue)
                        maxValue = values[i];
                    total += values[i];
                }
            }
            stopwatch.Stop();
            Console.WriteLine("if (a > max) max = a: " + stopwatch.Elapsed.TotalSeconds + " seconds");
            Console.WriteLine("Relative: " + (stopwatch.Elapsed.TotalSeconds - controlTotalSeconds) + " seconds");
        }

        static void Main(string[] args)
        {
            Random rnd = new Random();
            for (int i = 0; i < InnerLoopCount; i++)
            {
                //values[i] = i;                      // worst case: every number bigger than the previous
                //values[i] = i == 0 ? 1 : 0;         // best case: first number is the maximum
                values[i] = rnd.Next(int.MaxValue);   // average case: random numbers
            }
            ProfileBase();
            Console.WriteLine();
            ProfileMathMax();
            Console.WriteLine();
            ProfileMathMaxReverse();
            Console.WriteLine();
            ProfileInline();
            Console.WriteLine();
            ProfileIf();
            Console.ReadLine();
        }
    }
}
I'd say it is quicker to understand what Math.Max is doing, and that should really be the only deciding factor here.
But as an indulgence, it's interesting to consider that Math.Max(a, b) evaluates each argument exactly once, whilst a > b ? a : b evaluates one of them twice. That's not a problem with local variables, but for properties with side effects, the side effect may happen twice.
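A contrived sketch of that double evaluation (the property here is hypothetical):

```csharp
using System;

class SideEffects
{
    static int calls = 0;

    // A property with a visible side effect: it counts how often it is read.
    static int A { get { calls++; return 5; } }

    static void Main()
    {
        int b = 3;

        calls = 0;
        int viaTernary = A > b ? A : b;   // A is read in the test and again in the result
        Console.WriteLine(calls);          // 2

        calls = 0;
        int viaMax = Math.Max(A, b);       // A is read exactly once
        Console.WriteLine(calls);          // 1
    }
}
```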
If the JITer chooses to inline the Math.Max function, the executable code will be identical to the if statement. If Math.Max isn't inlined, it will execute as a function call with call and return overhead not present in the if statement. So, the if statement will give identical performance to Math.Max() in the inlining case or the if statement may be a few clock cycles faster in the non-inlined case, but the difference won't be noticeable unless you are running tens of millions of comparisons.
Since the performance difference between the two is small enough to be negligible in most situations, I'd prefer the Math.Max(a,b) because it's easier to read.
Regarding performance: modern CPUs have an internal instruction pipeline, so every assembly instruction is executed in several internal steps (e.g. fetch, decode, calculate, store).
In most cases the CPU is smart enough to run these steps in parallel for sequential instructions, so the overall throughput is very high.
This is fine until a branch comes along (if, ?:, etc.).
The branch may break the sequence and force the CPU to discard the pipeline, which costs a lot of clock cycles.
In theory, if the compiler is smart enough, Math.Max can be implemented using a built-in CPU instruction and the branching can be avoided.
In that case Math.Max would actually be faster than the if; but it depends on the compiler.
For more complicated maxima, such as working on vectors (double[] v; v.Max()), the compiler can use highly optimized library code that can be much faster than regularly compiled code.
So it's best to go with Math.Max, but it is also recommended to check on your particular target system and compiler if it matters enough.
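For the vector case mentioned above, a minimal sketch using LINQ's Enumerable.Max:

```csharp
using System;
using System.Linq;

class VectorMax
{
    static void Main()
    {
        double[] v = { 1.5, 9.25, -3.0, 4.0 };
        Console.WriteLine(v.Max());  // 9.25
    }
}
```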
Math.Max(a,b)
is NOT equivalent to a > b ? a : b in all cases.
Math.Max returns the greater value of the two arguments, that is:
if (a == b) return a;           // or b; doesn't matter since they're identical
else if (a > b && b < a) return a;
else if (b > a && a < b) return b;
else return undefined;
The undefined case maps to double.NaN in the double overload of Math.Max, for example.
a > b ? a : b
evaluates to a if a is greater than b, which does not necessarily mean that b is less than a.
A simple example that demonstrates that they are not equivalent:
var a = 0.0/0.0; // or double.NaN
var b = 1.0;
a > b ? a : b // evaluates to 1.0
Math.Max(a, b) // returns double.NaN
Take this operation: N must be >= 0.
General solutions:
A) N = Math.Max(0, N)
B) if (N < 0) { N = 0; }
Sorted by speed: Math.Max (A) is the slower of the two; the if statement (B) is faster (about 3% faster than solution A).
But my solution is 4% faster than solution B:
N *= Math.Sign(1 + Math.Sign(N));
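Why that expression clamps negative values to zero, case by case (a sketch with a hypothetical helper name):

```csharp
using System;

class BranchlessClamp
{
    // Clamp negative values to zero without a branch:
    // N > 0:  Sign(N) = 1  -> Sign(1 + 1) = 1 -> N unchanged
    // N == 0: Sign(N) = 0  -> Sign(1 + 0) = 1 -> N unchanged (still 0)
    // N < 0:  Sign(N) = -1 -> Sign(1 - 1) = 0 -> N becomes 0
    public static int ClampToZero(int n) => n * Math.Sign(1 + Math.Sign(n));

    static void Main()
    {
        Console.WriteLine(ClampToZero(7));   // 7
        Console.WriteLine(ClampToZero(0));   // 0
        Console.WriteLine(ClampToZero(-5));  // 0
    }
}
```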