Slow while loop in C#

I have a while loop whose body is just a method call. I have a timer around the outside of the loop and another timer that incrementally adds up the time the method call takes inside the loop. The outer timer reports about 17 seconds, yet the inner timer totals only 40 ms. The loop executes 50,000 times. Here is an example of the code:
long InnerTime = 0;
long OuterTime = 0;
Stopw1.Start();
int count = 1;
while (count <= TestCollection.Count)
{
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedMilliseconds;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedMilliseconds;
Stopw1.Reset();
Any help would be much appreciated.
Massimo

You are comparing apples and oranges. Your outer timer measures the total time taken. Your inner timer measures the number of whole milliseconds taken by the call to Method1.
The ElapsedMilliseconds property "represents elapsed time rounded down to the nearest whole millisecond value." So, you are rounding down to the nearest millisecond about 50,000 times.
If your call to Method1 takes, on average, less than 1 ms, then most of the time the ElapsedMilliseconds property will return 0 and your inner total will be much, much less than the actual time. In fact, your method takes about 0.3 ms on average, so you're lucky it even exceeded 1 ms 40 times.
Use the Elapsed.TotalMilliseconds or ElapsedTicks property instead of ElapsedMilliseconds. One millisecond is 10,000 TimeSpan ticks; note that ElapsedTicks is measured in raw Stopwatch ticks, which you convert to time using Stopwatch.Frequency.
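For illustration, a minimal sketch of the inner timing using fractional milliseconds (reusing the Stopw2 and Method1 from the question) might look like this:
double innerMs = 0;
int count = 1;
while (count <= TestCollection.Count)
{
    Stopw2.Reset();
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    innerMs += Stopw2.Elapsed.TotalMilliseconds; // fractional ms, nothing rounded away
    count++;
}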

What is this doing: TestCollection.Count ?
I suspect your 17 seconds are being spent counting your 50,000 items over and over again.

Try changing this:
while (count <= TestCollection.Count) {
...
}
to this:
int total = TestCollection.Count;
while (count <= total) {
...
}

To add to what the others have already said: in general, the compiled code must re-evaluate any property, including
TestCollection.Count
on every single loop iteration, because the property's value could change from one iteration to the next.
Assigning the value to a local variable removes the need to re-evaluate it on every iteration.
The one exception that I'm aware of is Array.Length, which benefits from a JIT optimization specific to arrays, known as array bounds check elimination.
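As a small illustration (reusing TestCollection and Method1 from the question; the array is just an example), hoisting the property pays off, while an array length in this exact pattern does not need hoisting:
// Property: the getter would otherwise run on every iteration.
int total = TestCollection.Count;   // evaluated once
for (int i = 0; i < total; i++)
{
    Method1();
}

// Array: the JIT recognizes this pattern and can elide the per-access
// bounds check, so hoisting arr.Length gains nothing.
int[] arr = new int[50000];
long sum = 0;
for (int i = 0; i < arr.Length; i++)
{
    sum += arr[i];
}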

To get a correct measurement of the time your calls take, you should use ticks rather than whole milliseconds.
Please try the following:
long InnerTime = 0;
long OuterTime = 0;
Stopwatch Stopw1 = new Stopwatch();
Stopwatch Stopw2 = new Stopwatch();
Stopw1.Start();
int count = 1;
int run = TestCollection.Count;
while (count <= run)
{
    Stopw2.Start();
    Method1();
    Stopw2.Stop();
    InnerTime = InnerTime + Stopw2.ElapsedTicks;
    Stopw2.Reset();
    count++;
}
Stopw1.Stop();
OuterTime = Stopw1.ElapsedTicks;
Stopw1.Reset();
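Since ElapsedTicks is measured in raw Stopwatch ticks rather than TimeSpan ticks, you would convert the totals to milliseconds through Stopwatch.Frequency, roughly like this:
// Frequency is the number of Stopwatch ticks per second
double innerMs = InnerTime * 1000.0 / Stopwatch.Frequency;
double outerMs = OuterTime * 1000.0 / Stopwatch.Frequency;
Console.WriteLine("Inner: {0:F1} ms, Outer: {1:F1} ms", innerMs, outerMs);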

You should not measure such a tiny method individually. But if you really want to, try this:
long innertime = 0;
while (count <= TestCollection.Count)
{
    innertime -= Stopwatch.GetTimestamp();
    Method1();
    innertime += Stopwatch.GetTimestamp();
    count++;
}
Console.WriteLine("{0} ms", innertime * 1000.0 / Stopwatch.Frequency);

Related

Why is there such big time difference in searching for element with higher index in ConcurrentBag?

I was comparing the time it takes to find a specific element in a ConcurrentBag using .ElementAt() and found a strange difference between looking up the element at index 950,000 and the element at index 1,000,000.
Finding the element at index 950,000 took between 62 and 68 milliseconds.
Finding the element at index 1,000,000 took between 20 and 23 milliseconds.
And I'm not sure why that is.
The code looks like this:
ConcurrentBag<int?> concurrentBag = new ConcurrentBag<int?>();
int n = 1000000;
int? n1 = n;
for (int i = 0; i <= n1; i++)
{
    concurrentBag.Add(i);
}
DateTime before = DateTime.Now;
int? a = concurrentBag.ElementAt(n);
DateTime after = DateTime.Now;
TimeSpan time = after - before;
Console.WriteLine(time.TotalMilliseconds);
You should not be using DateTime to benchmark an interval this small. When I run this code I get anywhere from 11 ms to 70 ms each time; it's not going to be consistent.
You are doing one single lookup. Your machine could be doing any number of other operations that affect the speed of a single lookup. You should run this code many thousands of times and take the average to get any sort of valid data.
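A minimal sketch of that suggestion, assuming the concurrentBag and n from the question and using Stopwatch instead of DateTime:
const int iterations = 1000;
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    int? value = concurrentBag.ElementAt(n); // the lookup under test, repeated
}
sw.Stop();
Console.WriteLine("Average: {0:F3} ms per lookup", sw.Elapsed.TotalMilliseconds / iterations);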

How to determine efficiency?

How do I compare these two iterations to determine which is most efficient?
Process.GetProcessesByName("EXCEL")
.Where(p => p.StartTime >= _createdOn)
.ToList()
.ForEach(p => p.Kill());
vs
foreach (var proc in Process.GetProcessesByName("EXCEL").Where(p => p.StartTime >= _createdOn))
proc.Kill();
The actual difference can only be determined by running both versions in identical situations and measuring with Stopwatch or other profiling tools; however, here are a few observations:
The only real difference is the call to ToList() which is necessary to use .ForEach(). Other than that they are equivalent.
Since I would assume that this is not run very often (how often do you need to kill every Excel process?) the performance difference should be immaterial in this scenario.
You may be fixing the symptom rather than the problem. I would be curious why you have multiple Excel processes that you need to kill.
They are almost exactly the same. The time-consuming operations here are Process.GetProcessesByName("EXCEL"), p.StartTime, and proc.Kill(), and you perform the same number of them in both cases. Everything else takes very little time. If you want real optimization here, you can experiment with WinAPI for those expensive operations; sometimes that is faster.
EDIT:
I measured the speed of these operations before, for my own project, but I didn't have exact numbers, so I rechecked.
These are my results:
Process.GetProcessesByName():
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    Process.GetProcessesByName("chrome");
}
TimeSpan duration = DateTime.Now - start;
It takes 5 seconds for every 1000 operations.
p.Kill():
TimeSpan duration = TimeSpan.Zero;
for (int i = 0; i < 1000; i++) {
    Process p = Process.Start("c:/windows/notepad");
    DateTime start = DateTime.Now;
    p.Kill();
    duration += DateTime.Now - start;
}
It takes 300 milliseconds for every 1000 operations. Still a big number.
And StartTime, just compare numbers:
Without p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    ps.Where(p => true).ToList();
}
TimeSpan duration = DateTime.Now - start;
6 milliseconds
With p.StartTime:
Process[] ps = Process.GetProcessesByName("chrome");
DateTime start = DateTime.Now;
for (int i = 0; i < 1000; i++) {
    ps.Where(p => p.StartTime < DateTime.Now).ToList();
}
TimeSpan duration = DateTime.Now - start;
408 milliseconds
So, these numbers tell me there is no point in optimizing .Where(), ToList(), or the foreach: the Process operations take dozens of times longer anyway. Also, I know about profilers, and I use them to measure and optimize, but I made these examples to get exact numbers and to illustrate the point.
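As an aside, DateTime.Now only updates every 10 to 15 ms on Windows, so for measurements like the ones above a Stopwatch harness is more trustworthy; a minimal sketch of the first measurement redone with Stopwatch (the process name is just an example):
var sw = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
    Process.GetProcessesByName("chrome");
}
sw.Stop();
Console.WriteLine("{0} ms for 1000 calls", sw.ElapsedMilliseconds);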

Algorithm to update average transfer rate on-the-go with C#

I have a lengthy method that writes data into a database. It is called repeatedly. I also maintain a counter of the records written so far, as well as the total number of records that need to be written:
private int currentCount;
private int totalCount;
private double fAverageTransferRate;

bool processingMethod()
{
    // Processes one record at a time
    DateTime dtNow = DateTime.Now; // time now
    fAverageTransferRate = // ?
}
I know that to calculate a transfer rate I need to take the number of records written in one second, right, but here come two questions:
How would I time my calculation at exactly the 1-second mark?
And, most of all, how do I calculate an average transfer rate?
PS. I need this done, on the go, so to speak, while this method is running (and not after it is finished.)
You could think about it a different way, since what you're really interested in is the rate at which records are processed. Therefore, you don't need the calculation to happen at precisely 1-second intervals. Rather, you need it to happen about every second, and then you need to know exactly when it happened.
To calculate the average transfer rate, just keep a running count of the number of records you are transferring. If more than one second has elapsed since the last time you computed the average, it's time to compute the average anew. Zero out the running count when you're done, in preparation for the next round.
Pseudo-code follows:
// somewhere outside:
DateTime lastDoneTime = DateTime.MinValue;
int numProcessed = 0;

bool processingMethod()
{
    DateTime dtNow = DateTime.Now; // time now
    if (lastDoneTime == DateTime.MinValue) lastDoneTime = dtNow;
    numProcessed++; // count the record handled by this call
    if ((dtNow - lastDoneTime).TotalSeconds > 1)
    {
        fAverageTransferRate = numProcessed / (dtNow - lastDoneTime).TotalSeconds;
        // Do what you want with fAverageTransferRate
        lastDoneTime = dtNow;
        numProcessed = 0;
    }
    return true; // plus whatever the method already does
}
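If finer resolution matters, the same pattern can be driven by a Stopwatch instead of DateTime.Now (which updates only every 10 to 15 ms); a minimal sketch under that assumption, reusing the fAverageTransferRate field from the question:
// somewhere outside:
Stopwatch rateWatch = Stopwatch.StartNew();
int numProcessed = 0;

bool processingMethod()
{
    numProcessed++; // count the record handled by this call
    if (rateWatch.Elapsed.TotalSeconds > 1)
    {
        fAverageTransferRate = numProcessed / rateWatch.Elapsed.TotalSeconds;
        // Do what you want with fAverageTransferRate
        rateWatch.Restart(); // start the next measurement window
        numProcessed = 0;
    }
    return true; // plus whatever the method already does
}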

How come this algorithm in Ruby runs faster than in Parallel'd C#?

The following Ruby code runs in ~15 s. It barely uses any CPU/memory (about 25% of one CPU):
def collatz(num)
  num.even? ? num/2 : 3*num + 1
end

start_time = Time.now
max_chain_count = 0
max_starter_num = 0
(1..1000000).each do |i|
  count = 0
  current = i
  current = collatz(current) and count += 1 until (current == 1)
  max_chain_count = count and max_starter_num = i if (count > max_chain_count)
end
puts "Max starter num: #{max_starter_num} -> chain of #{max_chain_count} elements. Found in: #{Time.now - start_time}s"
And the following TPL C# code puts all 4 of my cores at 100% usage and is orders of magnitude slower than the Ruby version:
static void Euler14Test()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    int max_chain_count = 0;
    int max_starter_num = 0;
    object locker = new object();
    Parallel.For(1, 1000000, i =>
    {
        int count = 0;
        int current = i;
        while (current != 1)
        {
            current = collatz(current);
            count++;
        }
        if (count > max_chain_count)
        {
            lock (locker)
            {
                max_chain_count = count;
                max_starter_num = i;
            }
        }
        if (i % 1000 == 0)
            Console.WriteLine(i);
    });
    sw.Stop();
    Console.WriteLine("Max starter i: {0} -> chain of {1} elements. Found in: {2}s", max_starter_num, max_chain_count, sw.Elapsed.ToString());
}

static int collatz(int num)
{
    return num % 2 == 0 ? num / 2 : 3 * num + 1;
}
How come Ruby runs faster than C#? I've been told that Ruby is slow. Is that not true when it comes to algorithms?
Perf AFTER correction:
Ruby (Non parallel): 14.62s
C# (Non parallel): 2.22s
C# (With TPL): 0.64s
Actually, the bug is quite subtle, and has nothing to do with threading. The reason that your C# version takes so long is that the intermediate values computed by the collatz method eventually start to overflow the int type, resulting in negative numbers which may then take ages to converge.
This first happens when i is 134,379, for which the 129th term (assuming one-based counting) is 2,482,111,348. This exceeds the maximum value of 2,147,483,647 and therefore gets stored as -1,812,855,948.
To get good performance (and correct results) on the C# version, just change:
int current = i;
…to:
long current = i;
…and:
static int collatz(int num)
…to:
static long collatz(long num)
That will bring the runtime down to a respectable 1.5 seconds.
Edit: CodesInChaos raises a very valid point about enabling overflow checking when debugging math-oriented applications. Doing so would have allowed the bug to be immediately identified, since the runtime would throw an OverflowException.
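For illustration, wrapping the arithmetic in a checked block (or compiling with the /checked option) would have surfaced the bug immediately; a minimal sketch:
static int collatz(int num)
{
    checked
    {
        // 3 * num + 1 now throws OverflowException instead of wrapping to a negative value
        return num % 2 == 0 ? num / 2 : 3 * num + 1;
    }
}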
Should be:
Parallel.For(1L, 1000000L, i =>
{
Otherwise, you get integer overflow and start processing negative values. The collatz method itself should also operate on long values.
I experienced something like that, and I figured out it was because each loop iteration had to start another thread, which takes time comparable to (I think more than) the work actually done in the loop body.
There is an alternative: find out how many CPU cores you have, then run a parallel loop with exactly that many iterations; each iteration evaluates part of the actual loop you want via an inner for loop whose bounds depend on the parallel loop index.
EDIT: EXAMPLE
int N_CORES = Environment.ProcessorCount;
int start = 1, end = 1000000;
Parallel.For(0, N_CORES, n =>
{
    int s = start + (end - start) * n / N_CORES;
    int e = n == N_CORES - 1 ? end : start + (end - start) * (n + 1) / N_CORES;
    for (int i = s; i < e; i++)
    {
        // Your code
    }
});
You should try this code, I'm pretty sure this will do the job faster.
EDIT: ELUCIDATION
Well, it has been quite a long time since I answered this question, but I faced the problem again and finally understood what was going on.
I had been using the AForge implementation of the parallel for loop, and it seems to fire a thread for each iteration of the loop; that's why, if each iteration takes a relatively small amount of time to execute, you end up with inefficient parallelism.
So, as some of you pointed out, the System.Threading.Tasks.Parallel methods are based on Tasks, which are a higher-level abstraction over threads:
"Behind the scenes, tasks are queued to the ThreadPool, which has been enhanced with algorithms that determine and adjust to the number of threads and that provide load balancing to maximize throughput. This makes tasks relatively lightweight, and you can create many of them to enable fine-grained parallelism."
So yeah, if you use the standard library's implementation, you won't need this kind of workaround.
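For completeness, the TPL also ships a built-in way to get that chunking: Partitioner.Create(fromInclusive, toExclusive) hands each worker a contiguous range of indices, so you don't have to compute the per-core bounds yourself. A minimal sketch (the loop body is a placeholder):
using System.Collections.Concurrent;
using System.Threading.Tasks;

Parallel.ForEach(Partitioner.Create(1, 1000000), range =>
{
    // Each worker receives a [range.Item1, range.Item2) chunk of indices
    for (int i = range.Item1; i < range.Item2; i++)
    {
        // Your code
    }
});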

Timing C# code using Timer

Even though it is good to analyze code performance in terms of algorithmic analysis and Big-O notation, I wanted to see how long the code actually takes to execute on my PC. I initialized a List with 9999 elements and removed the even elements from it. Sadly, the reported timespan is 0:0:0. Given that surprising result, there must be something wrong with the way I time the execution. Could someone help me time the code correctly?
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
TimeSpan startTime, duration;
startTime = Process.GetCurrentProcess().Threads[0].UserProcessorTime;
RemoveEven(ref source);
duration = Process.GetCurrentProcess().Threads[0].UserProcessorTime.Subtract(startTime);
Console.WriteLine(duration.Milliseconds);
Console.Read();
The most appropriate thing to use here would be Stopwatch; the per-thread processor-time counters you are reading have nowhere near enough resolution for this:
var watch = Stopwatch.StartNew();
// something to time
watch.Stop();
Console.WriteLine(watch.ElapsedMilliseconds);
However, a modern CPU is very fast, and it would not surprise me if it can remove them in that time. Normally, for timing, you need to repeat an operation a large number of times to get a reasonable measurement.
Aside: the ref in RemoveEven(ref source) is almost certainly not needed.
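A minimal sketch of that repetition idea, assuming the RemoveEven method and list setup from the question (the list is rebuilt each round so that only RemoveEven is timed):
const int rounds = 1000;
var watch = new Stopwatch();
for (int r = 0; r < rounds; r++)
{
    IList<int> data = new List<int>(100);
    for (int i = 0; i < 9999; i++) data.Add(i);

    watch.Start(); // time only the operation under test
    RemoveEven(ref data);
    watch.Stop();
}
Console.WriteLine("{0:F4} ms per call", watch.Elapsed.TotalMilliseconds / rounds);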
In .NET 2.0 and later you can use the Stopwatch class:
IList<int> source = new List<int>(100);
for (int i = 0; i < 9999; i++)
{
    source.Add(i);
}
Stopwatch watch = new Stopwatch();
watch.Start();
RemoveEven(ref source);
watch.Stop();
// watch.ElapsedMilliseconds contains the execution time in ms
Adding to previous answers:
var sw = Stopwatch.StartNew();
// instructions to time
sw.Stop();
sw.ElapsedMilliseconds returns a long and has a resolution of one millisecond (1,000,000 nanoseconds).
sw.Elapsed.TotalMilliseconds returns a double and has a resolution equal to the inverse of Stopwatch.Frequency. On my PC, for example, Stopwatch.Frequency is 2,939,541 ticks per second, which gives sw.Elapsed.TotalMilliseconds a resolution of 1/2,939,541 s ≈ 3.4e-7 s ≈ 340 nanoseconds.
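You can check what your own machine reports with something like:
Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
Console.WriteLine("Frequency: {0} ticks per second", Stopwatch.Frequency);
Console.WriteLine("Resolution: {0:F0} ns per tick", 1e9 / Stopwatch.Frequency);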
