C# crashing after modifying static variable during benchmarking

I have implemented a B-Tree and now I am trying to find the best size per node, using time benchmarking to measure the speed.
The problem is that it crashes on the second tested number in the benchmarking method.
For the example below, the console output is
Benchmarking 10
Benchmarking 11
The crash is in the insert method of the Node class, but that should not matter, because when I tested with any single number as SIZE it worked fine. Maybe I do not understand how class objects are created, or something like that.
I also tried calling the benchmarking method with various numbers from Main, but with the same result: it crashed during the second call.
Can somebody please look at it and explain what I am doing wrong? Thanks in advance!
I put part of the code here, and the whole thing is at http://pastebin.com/AcihW1Qk
public static void benchmark(String filename, int d, int h)
{
    using (System.IO.StreamWriter file = new System.IO.StreamWriter(filename))
    {
        Stopwatch sw = new Stopwatch();
        for (int i = d; i <= h; i++)
        {
            Console.WriteLine("Benchmarking SIZE = " + i.ToString());
            file.WriteLine("SIZE = " + i.ToString());
            sw.Start();
            // code here
            Node.setSize(i);
            Tree tree = new Tree();
            for (int k = 0; k < 10000000; k++)
                tree.insert(k);
            Random r = new Random(10);
            for (int k = 0; k < 10000; k++)
            {
                int x = r.Next(10000000);
                tree.contains(x);
            }
            file.WriteLine("Depth of tree is " + tree.depth.ToString());
            // end of code
            sw.Stop();
            file.WriteLine("TIME = " + sw.ElapsedMilliseconds.ToString());
            file.WriteLine();
            sw.Reset();
        }
    }
}

static void Main(string[] args)
{
    benchmark("benchmark 10-11.txt", 10, 11);
}

Related

How to create multiple threads that write to the same file in C#

I want to write 10^5 lines of 10^5 randomly generated numbers to a file, so that each line contains 10^5 numbers, and I want to know the best approach for doing this quickly. I thought of creating 10^5 threads that are launched concurrently, each writing one line, so that the file is filled in the time it takes to write only 1 line.
public static void GenerateNumbers(string path)
{
    using (StreamWriter sw = new StreamWriter(path))
    {
        for (int i = 0; i < 100000; i++)
        {
            for (int j = 0; j < 100000; j++)
            {
                Random rnd = new Random();
                int number = rnd.Next(1, 101);
                sw.Write(number + " ");
            }
            sw.Write('\n');
        }
    }
}
Currently I am doing it like this, is there a faster way?
Now that there's a code snippet, some optimization can be applied.
static void Main(string[] args)
{
    var sw = new Stopwatch();
    const int pow = 5;
    sw.Start();
    GenerateNumbers("test.txt", pow);
    sw.Stop();
    Console.WriteLine($"Wrote 10^{pow} lines of 10^{pow} numbers in {sw.Elapsed}");
}

public static void GenerateNumbers(string path, int pow)
{
    var rnd = new Random();
    using var sw = new StreamWriter(path, false);
    var max = Math.Pow(10, pow);
    var sb = new StringBuilder();
    for (long i = 0; i < max; i++)
    {
        for (long j = 0; j < max; j++)
        {
            sb.Append(rnd.Next(1, 101));
            sb.Append(' ');
        }
        sw.WriteLine(sb.ToString());
        sb.Clear();
        if (i % 100 == 0)
            Console.WriteLine((i / max).ToString("P"));
    }
}
The above code does IO writes at a fairly decent pace (remember the limit is the IO speed, not CPU / number generation). Also note that I'm running the code from inside a VM, so I'm likely not getting the best IO results.
As mentioned by Neil Moss in the comments, you don't need to instantiate the Random class on each run.
I'm generating a single line to write in-memory using a StringBuilder, then I write this to the disk.
Since this does take a bit of time I've added a progress tracker (this adds a miniscule amount of overhead).
A file of 10^4 lines of 10^4 numbers is already 285 MB in size and was generated in 4.6767592 seconds.
The 10^5 case like the above yields a 25.5 GB file and takes 5:54.2580683 to generate.
I haven't tried this, but I'm wondering if you couldn't save time by writing the data to a ZIP file, assuming you're more interested in just getting the data onto the disk than in the format itself. A compressed TXT file of numbers should be a fair bit smaller, and as such should be much faster to write.
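That idea might look roughly like this: a sketch using GZipStream (my own, untested against the original benchmark, and scaled down for brevity; the file name is arbitrary):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipWriteSketch
{
    static void Main()
    {
        var rnd = new Random();
        var sb = new StringBuilder();

        // Wrap the FileStream in a GZipStream so the bytes hitting the disk
        // are compressed; for highly repetitive number text the on-disk size
        // (and hence the IO time) can shrink considerably.
        using var fs = new FileStream("test.txt.gz", FileMode.Create);
        using var gz = new GZipStream(fs, CompressionLevel.Fastest);
        using var sw = new StreamWriter(gz);

        for (int i = 0; i < 1000; i++)      // scaled down from 10^5 for the sketch
        {
            sb.Clear();
            for (int j = 0; j < 1000; j++)
            {
                sb.Append(rnd.Next(1, 101));
                sb.Append(' ');
            }
            sw.WriteLine(sb);
        }
    }
}
```

The trade-off is that the reader now has to decompress, so this only wins if raw disk throughput, not CPU, is the bottleneck.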

Weird Protobuf speedup by serializing list of objects

I have found a very weird issue with protobuf-net performance regarding serialization of a large number of complex objects. Let's take these two scenarios:
A) Serialize a list of objects one by one
B) Serialize the list as a whole
Intuitively, these should have similar performance. However, in my application there is a 10x difference just from putting the objects in a list and serializing the list.
Below you can find code to test this. The results in this example vary between a 2x and 5x speedup, but in my code it's a pretty consistent 10x speedup.
What is causing this? I have an app where I need to serialize objects one by one, and it's really hurting performance. Is there any way to speed up one-by-one serialization?
Thanks
Output of the code below:
One by one serialization = 329204 ; deserialization = 41342
List serialization = 19531 ; deserialization = 27716
Code
[ProtoContract]
class TestObject
{
    [ProtoMember(1)] public string str1;
    [ProtoMember(2)] public string str2;
    [ProtoMember(3)] public int i1;
    [ProtoMember(4)] public int i2;
    [ProtoMember(5)] public double d1;
    [ProtoMember(6)] public double d2;

    public TestObject(int cnt)
    {
        str1 = $"Hello World {cnt}";
        str2 = $"Lorem ipsum {cnt}";
        for (int i = 0; i < 2; i++) str1 = str1 + str1;
        d1 = i1 = cnt;
        d2 = i2 = cnt * 2;
    }

    public TestObject() { }
}

private void ProtoBufTest()
{
    // init test data
    List<TestObject> objects = new List<TestObject>();
    int numObjects = 1000;
    for (int i = 0; i < numObjects; i++)
    {
        objects.Add(new TestObject(i));
    }
    Stopwatch sw = new Stopwatch();
    MemoryStream memStream = new MemoryStream();
    // test 1
    sw.Restart();
    for (int i = 0; i < numObjects; i++)
    {
        ProtoBuf.Serializer.SerializeWithLengthPrefix<TestObject>(memStream, objects[i], ProtoBuf.PrefixStyle.Base128);
    }
    long timeToSerializeSeparately = sw.ElapsedTicks;
    memStream.Position = 0;
    sw.Restart();
    for (int i = 0; i < numObjects; i++)
    {
        ProtoBuf.Serializer.DeserializeWithLengthPrefix<TestObject>(memStream, ProtoBuf.PrefixStyle.Base128);
    }
    long timeToDeserializeSeparately = sw.ElapsedTicks;
    // test 2
    memStream.Position = 0;
    sw.Restart();
    ProtoBuf.Serializer.SerializeWithLengthPrefix<List<TestObject>>(memStream, objects, ProtoBuf.PrefixStyle.Base128);
    long timeToSerializeList = sw.ElapsedTicks;
    memStream.Position = 0;
    sw.Restart();
    ProtoBuf.Serializer.DeserializeWithLengthPrefix<List<TestObject>>(memStream, ProtoBuf.PrefixStyle.Base128);
    long timeToDeserializeList = sw.ElapsedTicks;
    Console.WriteLine($"One by one serialization = {timeToSerializeSeparately} ; deserialization = {timeToDeserializeSeparately}");
    Console.WriteLine($"List serialization = {timeToSerializeList} ; deserialization = {timeToDeserializeList}");
}
I think you are measuring the initial reflection pre-processing and the JIT costs; if we change it so it runs the test multiple times:
static void Main()
{
    ProtoBufTest(1);
    for (int i = 0; i < 10; i++)
    {
        ProtoBufTest(1000);
    }
}

private static void ProtoBufTest(int numObjects)
{
    ...
then I get the results I would expect, where the single-object code is faster.
Basically, the library does a lot of work the first time a type is needed; essentially exactly what you ask here:
is there some way to force ProtoBuf to cache reflection data between calls ? That would help a lot probably
already happens. As a side note, you can also do:
Serializer.PrepareSerializer<TestObject>();
once right at the start of your app, and it will do as much as possible then. I can't force JIT to happen, though - to do that, you need to invoke the code once.
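Putting the two suggestions above together, a warm-up at application start might look like this (my own sketch; it assumes the TestObject type from the question and the protobuf-net Serializer API):

```csharp
using System.IO;
using ProtoBuf;

static class SerializerWarmup
{
    public static void Warmup()
    {
        // Build the serialization model up front instead of on first use.
        Serializer.PrepareSerializer<TestObject>();

        // PrepareSerializer can't force the JIT to compile the actual
        // (de)serialization paths, so run one throwaway round-trip
        // before timing anything.
        using var ms = new MemoryStream();
        Serializer.SerializeWithLengthPrefix(ms, new TestObject(0), PrefixStyle.Base128);
        ms.Position = 0;
        Serializer.DeserializeWithLengthPrefix<TestObject>(ms, PrefixStyle.Base128);
    }
}
```

Call `SerializerWarmup.Warmup();` once before the benchmark loop so neither measured pass pays the first-use cost.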

Initiate a float list with zeros in C#

I want to initialize a list of N objects with zeros (0.0). I thought of doing it like this:
var TempList = new List<float>(new float[(int)(N)]);
Is there any better (more efficient) way to do that?
Your current solution creates an array with the sole purpose of initialising a list with zeros, and then throws that array away. This might appear to be not efficient. However, as we shall see, it is in fact very efficient!
Here's a method that doesn't create an intermediary array:
int n = 100;
var list = new List<float>(n);
for (int i = 0; i < n; ++i)
    list.Add(0f);
Alternatively, you can use Enumerable.Repeat() to provide 0f "n" times, like so:
var list = new List<float>(n);
list.AddRange(Enumerable.Repeat(0f, n));
But both of these methods turn out to be slower!
Here's a little test app to do some timings.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace Demo
{
    public class Program
    {
        private static void Main()
        {
            var sw = new Stopwatch();
            int n = 1024 * 1024 * 16;
            int count = 10;
            int dummy = 0;
            for (int trial = 0; trial < 4; ++trial)
            {
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method1(n).Count;
                Console.WriteLine("Enumerable.Repeat() took " + sw.Elapsed);
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method2(n).Count;
                Console.WriteLine("list.Add() took " + sw.Elapsed);
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method3(n).Count;
                Console.WriteLine("(new float[n]) took " + sw.Elapsed);
                Console.WriteLine("\n");
            }
        }

        private static List<float> method1(int n)
        {
            var list = new List<float>(n);
            list.AddRange(Enumerable.Repeat(0f, n));
            return list;
        }

        private static List<float> method2(int n)
        {
            var list = new List<float>(n);
            for (int i = 0; i < n; ++i)
                list.Add(0f);
            return list;
        }

        private static List<float> method3(int n)
        {
            return new List<float>(new float[n]);
        }
    }
}
Here's my results for a RELEASE build:
Enumerable.Repeat() took 00:00:02.9508207
list.Add() took 00:00:01.1986594
(new float[n]) took 00:00:00.5318123
So it turns out that creating an intermediary array is quite a lot faster. However, be aware that this testing code is flawed because it doesn't account for garbage collection overhead caused by allocating the intermediary array (which is very hard to time properly).
Finally, there is a REALLY EVIL, NASTY way you can optimise this using reflection. But this is brittle, will probably fail to work in the future, and should never, ever be used in production code.
I present it here only as a curiosity:
private static List<float> method4(int n)
{
    var list = new List<float>(n);
    list.GetType().GetField("_size", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(list, n);
    return list;
}
Doing this reduces the time to less than a tenth of a second, compared to the next fastest method which takes half a second. But don't do it.
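As an aside (my own addition, not part of the original answer): on .NET 8 and later, the same effect is available through a supported API, CollectionsMarshal.SetCount, so the private-field hack is no longer needed there:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        int n = 100;
        var list = new List<float>(n);

        // Supported (.NET 8+) equivalent of poking the private _size field:
        // grows Count without adding items one at a time. The backing array
        // of this freshly created list is zero-initialized, so every exposed
        // element is 0f.
        CollectionsMarshal.SetCount(list, n);

        Console.WriteLine(list.Count);   // 100
        Console.WriteLine(list[0]);      // 0
    }
}
```

Note that SetCount carries the same caveat as the reflection trick: on a recycled list it can expose stale elements, so only rely on zeros when the backing array is fresh.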
What is wrong with
float[] A = new float[N];
or
List<float> A = new List<float>(N);
Note that trying to micromanage the compiler is not optimization. Start with the cleanest code that does what you want and let the compiler do its thing.
Edit 1
The List<float> solution produces an empty list: only the internal array of N items is allocated. So we can trick it with some reflection:
static void Main(string[] args)
{
    int N = 100;
    float[] array = new float[N];
    List<float> list = new List<float>(N);
    var size = typeof(List<float>).GetField("_size", BindingFlags.Instance | BindingFlags.NonPublic);
    size.SetValue(list, N);
    // Now list has 100 zero items
}
Why not:
var itemsWithZeros = new float[length];

Why is List<string>.Sort() slow?

So I noticed that a TreeView took unusually long to sort. At first I figured most of the time was spent repainting the control after adding each sorted item, but either way I had a gut feeling that List<T>.Sort() was taking longer than reasonable, so I benchmarked it against a custom sort method. The results were interesting: List<T>.Sort() took ~20 times longer. That's the biggest performance disappointment I've ever encountered in .NET for such a simple task.
My question is, what could be the reason for this? My guess is the overhead of invoking the comparison delegate, which in turn has to call String.Compare() (in the case of string sorting). Increasing the size of the list appears to widen the performance gap. Any ideas? I'm trying to use .NET classes as much as possible, but in cases like this I just can't.
Edit:
static List<string> Sort(List<string> list)
{
    if (list.Count == 0)
    {
        return new List<string>();
    }
    List<string> _list = new List<string>(list.Count);
    _list.Add(list[0]);
    int length = list.Count;
    for (int i = 1; i < length; i++)
    {
        string item = list[i];
        int j;
        for (j = _list.Count - 1; j >= 0; j--)
        {
            if (String.Compare(item, _list[j]) > 0)
            {
                _list.Insert(j + 1, item);
                break;
            }
        }
        if (j == -1)
        {
            _list.Insert(0, item);
        }
    }
    return _list;
}
Answer: It's not.
I ran the following benchmark in a simple console app and your code was slower:
static void Main(string[] args)
{
    long totalListSortTime = 0;
    long totalCustomSortTime = 0;
    for (int c = 0; c < 100; c++)
    {
        List<string> list1 = new List<string>();
        List<string> list2 = new List<string>();
        for (int i = 0; i < 5000; i++)
        {
            // RandomString is a helper (not shown) returning a random 15-character string.
            var rando = RandomString(15);
            list1.Add(rando);
            list2.Add(rando);
        }
        Stopwatch watch1 = new Stopwatch();
        Stopwatch watch2 = new Stopwatch();
        watch2.Start();
        list2 = Sort(list2);
        watch2.Stop();
        totalCustomSortTime += watch2.ElapsedMilliseconds;
        watch1.Start();
        list1.Sort();
        watch1.Stop();
        totalListSortTime += watch1.ElapsedMilliseconds;
    }
    Console.WriteLine("totalListSortTime = " + totalListSortTime);
    Console.WriteLine("totalCustomSortTime = " + totalCustomSortTime);
    Console.ReadLine();
}
Result:
I haven't had time to fully test it because I had a blackout (writing from my phone now), but it seems your code (from Pastebin) sorts an already ordered list several times, so your algorithm may simply be faster at sorting an already sorted list. If the standard .NET implementation is a quicksort, this would be natural, since quicksort has its worst case on already sorted input.

C# FindAll VS Where Speed

Does anyone know of any speed differences between Where and FindAll on a List? I know Where is part of IEnumerable and FindAll is part of List; I'm just curious which is faster.
The FindAll method of the List<T> class actually constructs a new list object and adds results to it. The Where extension method for IEnumerable<T> will simply iterate over an existing list and yield an enumeration of the matching results without creating or adding anything (other than the enumerator itself).
Given a small set, the two would likely perform comparably. However, given a larger set, Where should outperform FindAll, as the new List created to contain the results will have to grow dynamically to hold additional results. Memory usage of FindAll will also grow as the result list repeatedly reallocates its backing array, whereas Where has constant, minimal memory usage of its own (excluding whatever you do with the results).
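The eager-versus-deferred distinction described above can be seen directly. A minimal sketch (my own, not from the thread):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // FindAll runs immediately and snapshots its results into a new list...
        List<int> eager = numbers.FindAll(x => x > 1);

        // ...while Where only builds a query that executes when enumerated.
        IEnumerable<int> lazy = numbers.Where(x => x > 1);

        numbers.Add(4);

        Console.WriteLine(eager.Count);   // 2 - unaffected by the later Add
        Console.WriteLine(lazy.Count());  // 3 - sees the element added afterwards
    }
}
```

This is also why Where "costs nothing" until you actually consume the results, while FindAll pays for the list up front.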
FindAll is obviously slower than Where, because it needs to create a new list.
Anyway, I think you really should consider Jon Hanna's comment - you'll probably need to do some operations on your results, and a List would be more useful than an IEnumerable in many cases.
I wrote a small test; just paste it into a console app project. It measures the time/ticks of the function execution and of operations on the results collection (to get the performance of "real" usage, and to be sure the compiler won't optimize away unused data, etc. - I'm new to C# and don't know how it works yet, sorry).
Notice: every measured function except WhereIEnumerable() creates a new List of elements. I might be doing something wrong, but clearly iterating an IEnumerable takes much more time than iterating a List.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace Tests
{
    public class Dummy
    {
        public int Val;
        public Dummy(int val)
        {
            Val = val;
        }
    }

    public class WhereOrFindAll
    {
        const int ElCount = 20000000;
        const int FilterVal = 1000;
        const int MaxVal = 2000;
        const bool CheckSum = true; // Checks sum of elements in list of results
        static List<Dummy> list = new List<Dummy>();

        public delegate void FuncToTest();

        public static long TestTicks(FuncToTest function, string msg)
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            function();
            watch.Stop();
            Console.Write("\r\n" + msg + "\t ticks: " + (watch.ElapsedTicks));
            return watch.ElapsedTicks;
        }

        static void Check(List<Dummy> list)
        {
            if (!CheckSum) return;
            Stopwatch watch = new Stopwatch();
            watch.Start();
            long res = 0;
            int count = list.Count;
            for (int i = 0; i < count; i++) res += list[i].Val;
            for (int i = 0; i < count; i++) res -= (long)(list[i].Val * 0.3);
            watch.Stop();
            Console.Write("\r\n\nCheck sum: " + res.ToString() + "\t iteration ticks: " + watch.ElapsedTicks);
        }

        static void Check(IEnumerable<Dummy> ieNumerable)
        {
            if (!CheckSum) return;
            Stopwatch watch = new Stopwatch();
            watch.Start();
            IEnumerator<Dummy> ieNumerator = ieNumerable.GetEnumerator();
            long res = 0;
            while (ieNumerator.MoveNext()) res += ieNumerator.Current.Val;
            ieNumerator = ieNumerable.GetEnumerator();
            while (ieNumerator.MoveNext()) res -= (long)(ieNumerator.Current.Val * 0.3);
            watch.Stop();
            Console.Write("\r\n\nCheck sum: " + res.ToString() + "\t iteration ticks :" + watch.ElapsedTicks);
        }

        static void Generate()
        {
            if (list.Count > 0)
                return;
            var rand = new Random();
            for (int i = 0; i < ElCount; i++)
                list.Add(new Dummy(rand.Next(MaxVal)));
        }

        static void For()
        {
            List<Dummy> resList = new List<Dummy>();
            int count = list.Count;
            for (int i = 0; i < count; i++)
            {
                if (list[i].Val < FilterVal)
                    resList.Add(list[i]);
            }
            Check(resList);
        }

        static void Foreach()
        {
            List<Dummy> resList = new List<Dummy>();
            int count = list.Count;
            foreach (Dummy dummy in list)
            {
                if (dummy.Val < FilterVal)
                    resList.Add(dummy);
            }
            Check(resList);
        }

        static void WhereToList()
        {
            List<Dummy> resList = list.Where(x => x.Val < FilterVal).ToList<Dummy>();
            Check(resList);
        }

        static void WhereIEnumerable()
        {
            Stopwatch watch = new Stopwatch();
            IEnumerable<Dummy> iEnumerable = list.Where(x => x.Val < FilterVal);
            Check(iEnumerable);
        }

        static void FindAll()
        {
            List<Dummy> resList = list.FindAll(x => x.Val < FilterVal);
            Check(resList);
        }

        public static void Run()
        {
            Generate();
            long[] ticks = { 0, 0, 0, 0, 0 };
            for (int i = 0; i < 10; i++)
            {
                ticks[0] += TestTicks(For, "For \t\t");
                ticks[1] += TestTicks(Foreach, "Foreach \t");
                ticks[2] += TestTicks(WhereToList, "Where to list \t");
                ticks[3] += TestTicks(WhereIEnumerable, "Where Ienum \t");
                ticks[4] += TestTicks(FindAll, "FindAll \t");
                Console.Write("\r\n---------------");
            }
            for (int i = 0; i < 5; i++)
                Console.Write("\r\n" + ticks[i].ToString());
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            WhereOrFindAll.Run();
            Console.Read();
        }
    }
}
Results(ticks) - CheckSum enabled(some operations on results), mode: release without debugging(CTRL+F5):
- 16,222,276 (for ->list)
- 17,151,121 (foreach -> list)
- 4,741,494 (where ->list)
- 27,122,285 (where ->ienum)
- 18,821,571 (findall ->list)
CheckSum disabled (not using returned list at all):
- 10,885,004 (for ->list)
- 11,221,888 (foreach ->list)
- 18,688,433 (where ->list)
- 1,075 (where ->ienum)
- 13,720,243 (findall ->list)
Your results may differ slightly; to get reliable results you need more iterations.
UPDATE (from a comment): Looking through that code, I agree: .Where should have, at worst, equal performance, and almost always better.
Original answer:
.FindAll() should be faster, since it takes advantage of already knowing the List's size and loops over the internal array with a simple for loop. .Where() has to fire up an enumerator (a sealed framework class called WhereIterator in this case) and do the same job in a less specific way.
Keep in mind, though, that .Where() is enumerable: it does not actively create a List in memory and fill it. It's more like a stream, so memory use on something very large can differ significantly. Also, you could start consuming the results in a parallel fashion much sooner with the .Where() approach in 4.0.
Where is much, much faster than FindAll. No matter how big the list is, Where takes exactly the same amount of time.
Of course: Where just creates a query. It doesn't actually do anything, unlike FindAll, which does create a list.
The answer from jrista makes sense. However, the new list adds the same objects, thus only growing with references to existing objects, which should not be that slow.
As long as 3.5 / LINQ extensions are available, Where is better anyway.
FindAll makes much more sense when you are limited to 2.0.
