Even though it is good to check the performance of code in terms of algorithmic analysis and Big-O notation, I wanted to see how long the code actually takes to execute on my PC. I initialized a List with 9999 elements and removed the even ones from it. Sadly, the time span to execute this comes out as 0:0:0. Surprised by the result, I figure there must be something wrong in the way I time the execution. Could someone help me time the code correctly?
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
TimeSpan startTime, duration;
startTime = Process.GetCurrentProcess().Threads[0].UserProcessorTime;
RemoveEven(ref source);
duration = Process.GetCurrentProcess().Threads[0].UserProcessorTime.Subtract(startTime);
Console.WriteLine(duration.Milliseconds);
Console.Read();
The most appropriate thing to use here is Stopwatch; the processor-time values you are reading have nowhere near enough resolution for this:
var watch = Stopwatch.StartNew();
// something to time
watch.Stop();
Console.WriteLine(watch.ElapsedMilliseconds);
However, a modern CPU is very fast, and it would not surprise me if it can remove the elements in less time than the timer can register. Normally, for timing, you need to repeat an operation a large number of times to get a reasonable measurement.
Aside: the ref in RemoveEven(ref source) is almost certainly not needed.
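For example, a rough sketch of that idea applied to the code above; the repeat count and the fresh copy of the list on every pass are assumptions for illustration:
// Hedged sketch: repeat the operation many times and time the whole batch.
const int iterations = 1000; // assumed repeat count
var watch = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    IList<int> data = new List<int>(source); // fresh copy so every pass does the same work
    RemoveEven(ref data);
}
watch.Stop();
Console.WriteLine("Total: {0} ms, per call: {1} ms",
    watch.ElapsedMilliseconds, watch.Elapsed.TotalMilliseconds / iterations);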
In .NET 2.0 and later you can use the Stopwatch class:
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
Stopwatch watch = new Stopwatch();
watch.Start();
RemoveEven(ref source);
watch.Stop();
// watch.ElapsedMilliseconds now contains the execution time in ms
Adding to previous answers:
var sw = Stopwatch.StartNew();
// instructions to time
sw.Stop();
sw.ElapsedMilliseconds returns a long and has a resolution of 1 millisecond (1,000,000 nanoseconds).
sw.Elapsed.TotalMilliseconds returns a double and has a resolution equal to the inverse of Stopwatch.Frequency. On my PC, for example, Stopwatch.Frequency has a value of 2939541 ticks per second, which gives sw.Elapsed.TotalMilliseconds a resolution of:
1/2939541 seconds ≈ 3.4e-7 seconds ≈ 340 nanoseconds
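If you want to check the resolution on your own machine, a small snippet like this prints the relevant values:
// Print the Stopwatch resolution on the current machine.
Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
Console.WriteLine("Frequency: {0} ticks per second", Stopwatch.Frequency);
Console.WriteLine("Resolution: {0} ns per tick", 1e9 / Stopwatch.Frequency);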
I need to generate a random number between 0 and 1 in C#. It doesn't need to be more accurate than to a single decimal place but it's not a problem if it is.
I can either do Random.Next(0, 10) / 10.0 or Random.NextDouble().
I could not find any concrete information on the time complexity of either method. I assume Random.Next() will be more efficient, as it is in Java; however, the added division (whose cost depends on how C# implements it) complicates things.
Is it possible to find out which is more efficient purely from a theoretical standpoint? I realise I can time both over a series of tests, but want to understand why one has better complexity than the other.
Looking at the implementation source code, NextDouble() will be more efficient.
NextDouble() simply calls the Sample() method:
public virtual double NextDouble() {
    return Sample();
}
Next(maxValue) validates maxValue, calls Sample(), multiplies the result by maxValue, converts it to int, and returns it:
public virtual int Next(int maxValue) {
    if (maxValue < 0) {
        throw new ArgumentOutOfRangeException("maxValue", Environment.GetResourceString("ArgumentOutOfRange_MustBePositive", "maxValue"));
    }
    Contract.EndContractBlock();
    return (int)(Sample()*maxValue);
}
So, as you can see, Next(maxValue) is doing the same work as NextDouble() and then doing some more, so NextDouble() will be more efficient in returning a number between 0 and 1.
For Mono users, you can see NextDouble() and Next(maxValue) implementations here. Mono does it a little differently, but it basically involves the same steps as the official implementation.
As Zoran says, you would need to be generating a huge amount of random numbers to notice a difference.
Either way, you'll be able to generate many many millions, if not billions, of random numbers every second. Do you really need that many?
On a more concrete level, both variants have time complexity O(1); the only real difference is the constant factor, which you can simply measure:
Random generator = new Random();
int count = 1_000_000;
Stopwatch sw = new Stopwatch();
sw.Start();
double res;
for (int i = 0; i < count; i++)
    res = generator.Next(0, 10) / 10.0;
sw.Stop();
Stopwatch sw1 = new Stopwatch();
sw1.Start();
for (int i = 0; i < count; i++)
    res = generator.NextDouble();
sw1.Stop();
Console.WriteLine($"{sw.ElapsedMilliseconds} - {sw1.ElapsedMilliseconds}");
This code prints 44 - 29 (milliseconds) on my computer. And again, I don't think you should optimize an operation that takes 44 milliseconds per million executions.
If 15 nanoseconds per execution still makes a difference, then the second method is a tiny bit faster.
How do I compare these two iterations to determine which is most efficient?
Process.GetProcessesByName("EXCEL")
.Where(p => p.StartTime >= _createdOn)
.ToList()
.ForEach(p => p.Kill());
vs
foreach (var proc in Process.GetProcessesByName("EXCEL").Where(p => p.StartTime >= _createdOn))
proc.Kill();
The actual difference can only be determined by running it both ways in exact situations and measuring by using Stopwatch or other profiling tools, however here are a few observations:
The only real difference is the call to ToList() which is necessary to use .ForEach(). Other than that they are equivalent.
Since I would assume that this is not run very often (how often do you need to kill every Excel process?) the performance difference should be immaterial in this scenario.
You may be fixing the symptom rather than the problem. I would be curious why you have multiple Excel processes that you need to kill.
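If you do want numbers anyway, a minimal Stopwatch wrapper along these lines would do:
// Hedged sketch: wrap the variant under test in a Stopwatch; swap in the foreach variant
// for a separate run, since whichever runs first will already have killed the processes.
var sw = Stopwatch.StartNew();
Process.GetProcessesByName("EXCEL")
    .Where(p => p.StartTime >= _createdOn)
    .ToList()
    .ForEach(p => p.Kill());
sw.Stop();
Console.WriteLine("Elapsed: {0} ms", sw.Elapsed.TotalMilliseconds);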
They are practically identical. The time-consuming operations here are Process.GetProcessesByName("EXCEL"), p.StartTime and proc.Kill(), and you perform the same number of them in both cases. Everything else takes very little time. If you want some real optimization here, you can experiment with WinAPI for those long operations; sometimes it works faster.
EDIT:
I had measured the speed of these operations before for my own project, but I didn't have exact numbers, so I rechecked.
These are my results:
Process.GetProcessesByName():
DateTime start = DateTime.Now;
for (int i = 0 ; i < 1000 ; i++) {
Process.GetProcessesByName("chrome");
}
TimeSpan duration = DateTime.Now - start;
It takes 5 seconds for every 1000 operations.
p.Kill():
TimeSpan duration = TimeSpan.Zero;
for (int i = 0 ; i < 1000 ; i++) {
    Process p = Process.Start("c:/windows/notepad");
    DateTime start = DateTime.Now;
    p.Kill();
    duration += DateTime.Now - start;
}
It takes 300 milliseconds for every 1000 operations. Still a big number.
And for StartTime, just compare the numbers:
Without p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0 ; i < 1000 ; i++) {
    ps.Where(p => true).ToList();
}
TimeSpan duration = DateTime.Now - start;
6 milliseconds
With p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0 ; i < 1000 ; i++) {
    ps.Where(p => p.StartTime < DateTime.Now).ToList();
}
TimeSpan duration = DateTime.Now - start;
408 milliseconds
So, these numbers tell me there is no point in optimizing .Where(), ToList() or foreach; operations on Process take dozens of times longer anyway. Also, I know about profilers and use them to measure and optimize, but I made these examples to get concrete numbers and to show the point.
I have a while loop and all it does is a method call. I have a timer on the outside of the loop and another timer that incrementally adds up the time the method call takes inside the loop. The outer time takes about 17 seconds and the total on the inner timer is 40 ms. The loop is executing 50,000 times. Here is an example of the code:
long InnerTime = 0;
long OuterTime = 0;
Stopw1.Start();
int count = 1;
while (count <= TestCollection.Count) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedMilliseconds;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedMilliseconds;
Stopw1.Reset();
Any help would be much appreciated.
Massimo
You are comparing apples and oranges. Your outer timer measures the total time taken. Your inner timer measures the number of whole milliseconds taken by the call to Method1.
The ElapsedMilliseconds property "represents elapsed time rounded down to the nearest whole millisecond value." So, you are rounding down to the nearest millisecond about 50,000 times.
If your call to Method1 takes, on average, less than 1 ms, then most of the time the ElapsedMilliseconds property will return 0 and your inner count will be much, much less than the actual time. In fact, your method takes about 0.3 ms on average, so you're lucky to get it over 1 ms even 40 times.
Use the Elapsed.TotalMilliseconds or Elapsed.Ticks property instead of ElapsedMilliseconds. One millisecond is equivalent to 10,000 TimeSpan ticks.
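For example, a quick illustration using the Method1 call from the question (the printed values are representative, not measured):
// For a sub-millisecond call, the rounded value reads 0 while the fractional value and ticks still register.
Stopwatch sw = Stopwatch.StartNew();
Method1();
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);       // 0  (rounded down to whole milliseconds)
Console.WriteLine(sw.Elapsed.TotalMilliseconds); // e.g. 0.31
Console.WriteLine(sw.Elapsed.Ticks);             // TimeSpan ticks: 10,000 per millisecond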
What is this doing: TestCollection.Count ?
I suspect your 17 seconds are being spent counting your 50,000 items over and over again.
Try changing this:
while (count <= TestCollection.Count) {
...
}
to this:
int total = TestCollection.Count;
while (count <= total) {
...
}
To add to what the others have already said, in general the C# compiler must re-evaluate any property, including
TestCollection.Count
for every single loop iteration. The property's value could change from iteration to iteration.
Assigning the value to a local variable removes the compiler's need to re-evaluate for every loop iteration.
The one exception that I'm aware of is for Array.Length, which benefits from an optimization specifically for arrays. This is referred to as Array Bounds Check Elimination.
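For illustration, a minimal sketch of the hoisting pattern next to the array case (the collection contents are an assumption standing in for TestCollection):
// Hedged sketch: hoist a property into a local vs. rely on the array Length optimization.
List<int> items = Enumerable.Range(0, 50000).ToList(); // assumed stand-in for TestCollection
int[] array = items.ToArray();

int total = items.Count;                // Count is evaluated once, outside the loop
for (int i = 0; i < total; i++)
{
    // work on items[i]
}

for (int i = 0; i < array.Length; i++)  // for arrays, the JIT recognizes this pattern
{                                       // and can eliminate per-iteration bounds checks
    // work on array[i]
}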
To get a correct measurement of the time your calls take, you should use ticks. Please try the following:
long InnerTime = 0;
long OuterTime = 0;
Stopwatch Stopw1 = new Stopwatch();
Stopwatch Stopw2 = new Stopwatch();
Stopw1.Start();
int count = 1;
int run = TestCollection.Count;
while (count <= run) {
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedTicks;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedTicks;
Stopw1.Reset();
You should not measure such a tiny method individually. But if you really want to, try this:
long innertime = 0;
while (count <= TestCollection.Count)
{
    innertime -= Stopwatch.GetTimestamp();
    Method1();
    innertime += Stopwatch.GetTimestamp();
    count++;
}
Console.WriteLine("{0} ms", innertime * 1000.0 / Stopwatch.Frequency);
Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting:
class PerformanceTest
{
    static double last = 0.0;
    static List<object> numericGenericData = new List<object>();
    static List<double> numericTypedData = new List<double>();
    static void Main(string[] args)
    {
        double totalWithCasting = 0.0;
        double totalWithoutCasting = 0.0;
        for (double d = 0.0; d < 1000000.0; ++d)
        {
            numericGenericData.Add(d);
            numericTypedData.Add(d);
        }
        Stopwatch stopwatch = new Stopwatch();
        for (int i = 0; i < 10; ++i)
        {
            stopwatch.Start();
            testWithTypecasting();
            stopwatch.Stop();
            totalWithCasting += stopwatch.ElapsedTicks;
            stopwatch.Start();
            testWithoutTypeCasting();
            stopwatch.Stop();
            totalWithoutCasting += stopwatch.ElapsedTicks;
        }
        Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/10));
        Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting/10));
        Console.ReadKey();
    }
    static void testWithTypecasting()
    {
        foreach (object o in numericGenericData)
        {
            last = ((double)o*(double)o)/200;
        }
    }
    static void testWithoutTypeCasting()
    {
        foreach (double d in numericTypedData)
        {
            last = (d * d)/200;
        }
    }
}
The output is:
Avg with typecasting = 468872.3
Avg without typecasting = 501157.9
I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?
Update:
class PerformanceTest
{
    static double last = 0.0;
    static object[] numericGenericData = new object[100000];
    static double[] numericTypedData = new double[100000];
    static Stopwatch stopwatch = new Stopwatch();
    static double totalWithCasting = 0.0;
    static double totalWithoutCasting = 0.0;
    static void Main(string[] args)
    {
        for (int i = 0; i < 100000; ++i)
        {
            numericGenericData[i] = (double)i;
            numericTypedData[i] = (double)i;
        }
        for (int i = 0; i < 10; ++i)
        {
            stopwatch.Start();
            testWithTypecasting();
            stopwatch.Stop();
            totalWithCasting += stopwatch.ElapsedTicks;
            stopwatch.Reset();
            stopwatch.Start();
            testWithoutTypeCasting();
            stopwatch.Stop();
            totalWithoutCasting += stopwatch.ElapsedTicks;
            stopwatch.Reset();
        }
        Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/(10.0)));
        Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting / (10.0)));
        Console.ReadKey();
    }
    static void testWithTypecasting()
    {
        foreach (object o in numericGenericData)
        {
            last = ((double)o * (double)o) / 200;
        }
    }
    static void testWithoutTypeCasting()
    {
        foreach (double d in numericTypedData)
        {
            last = (d * d) / 200;
        }
    }
}
The output is:
Avg with typecasting = 4791
Avg without typecasting = 3303.9
Note that it's not typecasting that you are measuring, it's unboxing. The values are doubles all along, there is no type casting going on.
You forgot to reset the stopwatch between tests, so you are adding the accumulated time of all previous tests over and over. If you convert the ticks to actual time, you see that it adds up to much more than the time it took to run the test.
If you add a stopwatch.Reset(); before each stopwatch.Start();, you get a much more reasonable result like:
Avg with typecasting = 41027.1
Avg without typecasting = 20594.3
Unboxing a value is not that expensive: it only has to check that the data type in the object is correct and then get the value. Still, it's a lot more work than when the type is already known. Remember that you are also measuring the looping, the calculation, and the assignment of the result, which is the same for both tests.
Boxing a value is more expensive than unboxing it, as that allocates an object on the heap.
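To make the boxing/unboxing distinction concrete, a tiny example:
double d = 3.14;
object boxed = d;                  // boxing: the double is copied into a new object on the heap
double unboxed = (double)boxed;    // unboxing: type check, then the value is copied back out
// double wrong = (int)boxed;      // would throw InvalidCastException: the boxed type must match exactly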
1) Yes, casting is usually (very) cheap.
2) You are not going to get nanosecond accuracy in a managed language. Or in an unmanaged language under most operating systems.
Consider
other processes
garbage collection
different JITters
different CPUs
Also, your measurement includes the foreach loop itself, which looks like 50% or more of the total to me. Maybe 90%.
When you call Stopwatch.Start it is letting the timer continue to run from wherever it left off. You need to call Stopwatch.Reset() to set the timers back to zero before starting again. Personally I just use stopwatch = Stopwatch.StartNew() whenever I want to start a timer to avoid this sort of confusion.
Furthermore, you probably want to call both of your test methods before starting the "timing loop" so that they get a fair chance to "warm up", ensuring that the JIT has had a chance to run and to level the playing field.
When I do that on my machine, I see that testWithTypecasting runs in approximately half the time as testWithoutTypeCasting.
That being said, however, the cast itself is not likely to be the most significant part of the performance penalty. The testWithTypecasting method operates on a list of boxed doubles, which means there is an additional level of indirection required to retrieve each value (following a reference to the value somewhere else in memory), in addition to increasing the total amount of memory consumed. This increases the time spent on memory access and is likely to be a bigger effect than the CPU time spent "in the cast" itself.
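Putting both fixes together, the measurement loop from the question might be restructured roughly like this (a sketch based on the question's code, not a drop-in replacement):
// Hedged sketch: warm up both methods, then time each pass with a freshly started stopwatch.
testWithTypecasting();       // warm-up so the JIT compiles both methods before timing
testWithoutTypeCasting();

double totalWithCasting = 0.0;
double totalWithoutCasting = 0.0;
for (int i = 0; i < 10; ++i)
{
    var sw = Stopwatch.StartNew();   // StartNew avoids forgetting to Reset
    testWithTypecasting();
    sw.Stop();
    totalWithCasting += sw.ElapsedTicks;

    sw = Stopwatch.StartNew();
    testWithoutTypeCasting();
    sw.Stop();
    totalWithoutCasting += sw.ElapsedTicks;
}
Console.WriteLine("Avg with typecasting = {0}", totalWithCasting / 10);
Console.WriteLine("Avg without typecasting = {0}", totalWithoutCasting / 10);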
Look into performance counters in the System.Diagnostics namespace. When you create a new counter, you first create a category and then specify one or more counters to be placed in it.
// Create a collection of type CounterCreationDataCollection.
System.Diagnostics.CounterCreationDataCollection CounterDatas =
    new System.Diagnostics.CounterCreationDataCollection();
// Create the counters and set their properties.
System.Diagnostics.CounterCreationData cdCounter1 =
    new System.Diagnostics.CounterCreationData();
System.Diagnostics.CounterCreationData cdCounter2 =
    new System.Diagnostics.CounterCreationData();
cdCounter1.CounterName = "Counter1";
cdCounter1.CounterHelp = "help string1";
cdCounter1.CounterType = System.Diagnostics.PerformanceCounterType.NumberOfItems64;
cdCounter2.CounterName = "Counter2";
cdCounter2.CounterHelp = "help string 2";
cdCounter2.CounterType = System.Diagnostics.PerformanceCounterType.NumberOfItems64;
// Add both counters to the collection.
CounterDatas.Add(cdCounter1);
CounterDatas.Add(cdCounter2);
// Create the category and pass the collection to it.
System.Diagnostics.PerformanceCounterCategory.Create(
"Multi Counter Category", "Category help", CounterDatas);
see MSDN docs
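Once the category exists, a counter can be opened for writing and updated, for example (using the names from the snippet above):
// Open Counter1 in the category created above as a writable (non-read-only) counter.
System.Diagnostics.PerformanceCounter counter1 =
    new System.Diagnostics.PerformanceCounter("Multi Counter Category", "Counter1", false);
counter1.Increment();        // add one to the raw value
counter1.IncrementBy(10);    // add ten
Console.WriteLine(counter1.RawValue);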
Just a thought, but sometimes identical machine code can take a different number of cycles to execute depending on its alignment in memory, so you might want to add a control or controls.
Don't "do" C# myself but in C for x86-32 and later the rdtsc instruction is usually available which is much more accurate than OS ticks. More info on rdtsc can be found by searching stackoverflow. Under C it is usually available as an intrinsic or built-in function and returns the number of clock cycles (in an 8 byte - long long/__int64 - unsigned integer) since the computer was powered up. So if the CPU has a clock speed of 3 Ghz the underlying counter is incremented 3 billion times per second. Save for a few early AMD processors, all multi-core CPUs will have their counters synchronized.
If C# does not have it, you might consider writing a VERY short C function to access it from C#. There is a great deal of overhead if you access the instruction through a function versus inline. The difference between two back-to-back calls to the function will be the basic measurement overhead. If you're thinking of metering your application, you'll have to determine several more complex overhead values.
You might also consider shutting off the CPU's energy-saving mode (and restarting the PC), since it lowers the clock frequency fed to the CPU during periods of low activity, which causes the time stamp counters of the different cores to become unsynchronized.
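If you do go the native route from C#, the interop side might look roughly like this; note that TscReader.dll and ReadTsc are hypothetical names for a tiny C wrapper around rdtsc, and that the managed alternative is simply Stopwatch.GetTimestamp():
// Hypothetical P/Invoke declaration for a tiny native DLL wrapping the rdtsc instruction.
// "TscReader.dll" and "ReadTsc" are made-up names for illustration only.
using System;
using System.Runtime.InteropServices;

static class Tsc
{
    [DllImport("TscReader.dll")]
    public static extern ulong ReadTsc();

    static void Main()
    {
        // The difference between two back-to-back calls approximates the call overhead.
        ulong t0 = ReadTsc();
        ulong t1 = ReadTsc();
        Console.WriteLine("Overhead: {0} cycles", t1 - t0);
    }
}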
I'm trying to profile my code to check how long some parts of it take to execute.
I've wrapped the most time-consuming part of the code in something like this:
DateTime start = DateTime.Now;
...
... // Here comes the time-consuming part
...
Console.WriteLine((DateTime.Now - start).Milliseconds);
The program executes this part of the code for a couple of seconds (about 20 s), but in the console I get a result of about 800 milliseconds. Why is that? What am I doing wrong?
Try using the Stopwatch class for this. It was intended for this exact purpose.
Stopwatch sw = Stopwatch.StartNew();
// ...
// Here comes the time-consuming part
// ...
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
Are you actually wanting the TotalMilliseconds property? Milliseconds returns the milliseconds component of the timespan, not the actual length of the timespan in milliseconds.
That said, you probably want to use Stopwatch (as the others said) since it will be more accurate.
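A quick illustration of the difference:
// Milliseconds is just the milliseconds component; TotalMilliseconds is the whole span.
TimeSpan span = TimeSpan.FromSeconds(20.8);
Console.WriteLine(span.Milliseconds);      // 800   - only the component
Console.WriteLine(span.TotalMilliseconds); // 20800 - the whole span in milliseconds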
This is a much better way to profile your code.
var result = CallMethod(); // This will JIT the method
var sw = Stopwatch.StartNew();
for (int i = 0; i < 5; i++)
{
result = CallMethod();
}
sw.Stop();
Console.WriteLine(result);
Console.WriteLine(TimeSpan.FromTicks(sw.Elapsed.Ticks / 5)); // average time per call
If you instead reference the TotalMilliseconds property, you will get the result you were looking for. But I think the other answers' recommendation of Stopwatch is the better practice.