Weird Protobuf speedup by serializing list of objects - c#

I have found a very weird issue with ProtoBuf performance when serializing a large number of complex objects. Consider these two scenarios:
A) Serialize a list of objects one by one
B) Serialize the list as a whole
Intuitively, these should have similar performance. However, in my application there is a 10x difference in deserialization just from putting the objects in a list and serializing the list.
Below you can find code to test this. The results in this example vary between a 2x and 5x speedup; in my code, however, it is a pretty consistent 10x speedup.
What is causing this? I have an app where I need to serialize objects one by one and it is really hurting performance. Is there any way to increase the performance of one-by-one serialization?
Thanks
Output of the code below:
One by one serialization = 329204 ; deserialization = 41342
List serialization = 19531 ; deserialization = 27716
Code
[ProtoContract]
class TestObject
{
    [ProtoMember(1)] public string str1;
    [ProtoMember(2)] public string str2;
    [ProtoMember(3)] public int i1;
    [ProtoMember(4)] public int i2;
    [ProtoMember(5)] public double d1;
    [ProtoMember(6)] public double d2;

    public TestObject(int cnt)
    {
        str1 = $"Hello World {cnt}";
        str2 = $"Lorem ipsum {cnt}";
        for (int i = 0; i < 2; i++) str1 = str1 + str1;
        d1 = i1 = cnt;
        d2 = i2 = cnt * 2;
    }

    public TestObject() { }
}
private void ProtoBufTest()
{
    //init test data
    List<TestObject> objects = new List<TestObject>();
    int numObjects = 1000;
    for (int i = 0; i < numObjects; i++)
    {
        objects.Add(new TestObject(i));
    }
    Stopwatch sw = new Stopwatch();
    MemoryStream memStream = new MemoryStream();
    //test 1
    sw.Restart();
    for (int i = 0; i < numObjects; i++)
    {
        ProtoBuf.Serializer.SerializeWithLengthPrefix<TestObject>(memStream, objects[i], ProtoBuf.PrefixStyle.Base128);
    }
    long timeToSerializeSeparately = sw.ElapsedTicks;
    memStream.Position = 0;
    sw.Restart();
    for (int i = 0; i < numObjects; i++)
    {
        ProtoBuf.Serializer.DeserializeWithLengthPrefix<TestObject>(memStream, ProtoBuf.PrefixStyle.Base128);
    }
    long timeToDeserializeSeparately = sw.ElapsedTicks;
    //test 2
    memStream.Position = 0;
    sw.Restart();
    ProtoBuf.Serializer.SerializeWithLengthPrefix<List<TestObject>>(memStream, objects, ProtoBuf.PrefixStyle.Base128);
    long timeToSerializeList = sw.ElapsedTicks;
    memStream.Position = 0;
    sw.Restart();
    ProtoBuf.Serializer.DeserializeWithLengthPrefix<List<TestObject>>(memStream, ProtoBuf.PrefixStyle.Base128);
    long timeToDeserializeList = sw.ElapsedTicks;
    Console.WriteLine($"One by one serialization = {timeToSerializeSeparately} ; deserialization = {timeToDeserializeSeparately}");
    Console.WriteLine($"List serialization = {timeToSerializeList} ; deserialization = {timeToDeserializeList}");
}

I think you are mostly measuring the initial reflection pre-processing and the JIT costs; if we change it so it runs the test multiple times:
static void Main()
{
    ProtoBufTest(1);
    for (int i = 0; i < 10; i++)
    {
        ProtoBufTest(1000);
    }
}
private static void ProtoBufTest(int numObjects)
{
    ...
then I get the results I would expect, where the single-object code is faster.
Basically, it does a lot of work the first time it is needed; essentially, exactly what you ask here:
is there some way to force ProtoBuf to cache reflection data between calls? That would probably help a lot
already happens. As a side note, you can also do:
Serializer.PrepareSerializer<TestObject>();
once right at the start of your app, and it will do as much as possible then. I can't force JIT to happen, though - to do that, you need to invoke the code once.
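To make that concrete, here is a minimal warm-up sketch (the dummy TestObject instance and the warm-up stream are mine, not from the question) that you could run once at startup, before any timing, so the one-time reflection and JIT work is paid up front:
// Prepare the model, then exercise the serialize/deserialize path once so the
// one-time reflection and JIT costs are not charged to the first measured call.
ProtoBuf.Serializer.PrepareSerializer<TestObject>();
using (var warmupStream = new MemoryStream())
{
    ProtoBuf.Serializer.SerializeWithLengthPrefix<TestObject>(warmupStream, new TestObject(0), ProtoBuf.PrefixStyle.Base128);
    warmupStream.Position = 0;
    ProtoBuf.Serializer.DeserializeWithLengthPrefix<TestObject>(warmupStream, ProtoBuf.PrefixStyle.Base128);
}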

Related

c# performance changes rapidly by looping just 0,001% more often

I'm working on a UI project that has to work with huge datasets (35 new values every second), which are then displayed in a graph. The user shall be able to change the view from a 10-minute view up to a month view. To achieve this I wrote a helper function which truncates a lot of data down to a 600-element array, which is then displayed on a LiveView chart.
I found out that at the beginning the software works very well and fast, but the longer the software runs (e.g. for a month) and the more the memory usage rises (to ca. 600 MB), the slower the function gets (up to 8x).
So I made some tests to find the source of this. Quite surprised, I found that there is something like a magic number where the function gets 2x slower, just by changing the loop count from 71494 to 71495: the runtime goes from 19 ms to 39 ms.
I'm really confused. Even when you comment out the second for loop (where the arrays are getting truncated) it is a lot slower.
Maybe this has something to do with the garbage collector? Or does C# compress memory automatically?
Using Visual Studio 2017 with the newest updates.
The Code
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace TempoaryTest
{
    class ProductNameStream
    {
        public struct FileValue
        {
            public DateTime Time;
            public ushort[] Value;
            public ushort[] Avg1;
            public ushort[] Avg2;
            public ushort[] DAvg;
            public ushort AlarmDelta;
            public ushort AlarmAverage;
            public ushort AlarmSum;
        }
    }

    public static class Program
    {
        private const int MAX_MEASURE_MODEL = 600;
        private const int TEST = 71494;
        //private const int TEST = 71495;//this one doubles the consuming time!

        public static void Main(string[] bleg)
        {
            List<ProductNameStream.FileValue> fileValues = new List<ProductNameStream.FileValue>();
            ProductNameStream.FileValue fil = new ProductNameStream.FileValue();
            DateTime testTime = DateTime.Now;
            Console.WriteLine("TEST: {0} {1:X}", TEST, TEST);
            //Creating example List
            for (int n = 0; n < TEST; n++)
            {
                fil = new ProductNameStream.FileValue
                {
                    Time = testTime = testTime.AddSeconds(1),
                    Value = new ushort[8],
                    Avg1 = new ushort[8],
                    Avg2 = new ushort[8],
                    DAvg = new ushort[8]
                };
                for (int i = 0; i < 8; i++)
                {
                    fil.Value[i] = (ushort)(n + i);
                    fil.Avg1[i] = (ushort)(TEST - n - i);
                    fil.Avg2[i] = (ushort)(n / (i + 1));
                    fil.DAvg[i] = (ushort)(n * (i + 1));
                }
                fil.AlarmDelta = (ushort)DateTime.Now.Ticks;
                fil.AlarmAverage = (ushort)(fil.AlarmDelta / 2);
                fil.AlarmSum = (ushort)(n);
                fileValues.Add(fil);
            }
            var sw = Stopwatch.StartNew();
            /* May look like the same as MAX_MEASURE_MODEL but since we use int
             * as counter we must be aware of the int round down.*/
            int cnt = (fileValues.Count / (fileValues.Count / MAX_MEASURE_MODEL)) + 1;
            ProductNameStream.FileValue[] newFileValues = new ProductNameStream.FileValue[cnt];
            ProductNameStream.FileValue[] fileValuesArray = fileValues.ToArray();
            //Truncate the big list to a 600 Array
            for (int n = 0; n < fileValues.Count; n++)
            {
                if ((n % (fileValues.Count / MAX_MEASURE_MODEL)) == 0)
                {
                    cnt = n / (fileValues.Count / MAX_MEASURE_MODEL);
                    newFileValues[cnt] = fileValuesArray[n];
                    newFileValues[cnt].Value = new ushort[8];
                    newFileValues[cnt].Avg1 = new ushort[8];
                    newFileValues[cnt].Avg2 = new ushort[8];
                    newFileValues[cnt].DAvg = new ushort[8];
                }
                else
                {
                    for (int i = 0; i < 8; i++)
                    {
                        if (newFileValues[cnt].Value[i] < fileValuesArray[n].Value[i])
                            newFileValues[cnt].Value[i] = fileValuesArray[n].Value[i];
                        if (newFileValues[cnt].Avg1[i] < fileValuesArray[n].Avg1[i])
                            newFileValues[cnt].Avg1[i] = fileValuesArray[n].Avg1[i];
                        if (newFileValues[cnt].Avg2[i] < fileValuesArray[n].Avg2[i])
                            newFileValues[cnt].Avg2[i] = fileValuesArray[n].Avg2[i];
                        if (newFileValues[cnt].DAvg[i] < fileValuesArray[n].DAvg[i])
                            newFileValues[cnt].DAvg[i] = fileValuesArray[n].DAvg[i];
                    }
                    if (newFileValues[cnt].AlarmSum < fileValuesArray[n].AlarmSum)
                        newFileValues[cnt].AlarmSum = fileValuesArray[n].AlarmSum;
                    if (newFileValues[cnt].AlarmDelta < fileValuesArray[n].AlarmDelta)
                        newFileValues[cnt].AlarmDelta = fileValuesArray[n].AlarmDelta;
                    if (newFileValues[cnt].AlarmAverage < fileValuesArray[n].AlarmAverage)
                        newFileValues[cnt].AlarmAverage = fileValuesArray[n].AlarmAverage;
                }
            }
            Console.WriteLine(sw.ElapsedMilliseconds);
        }
    }
}
This is most likely being caused by the garbage collector, as you suggested.
I can offer two pieces of evidence to indicate that this is so:
If you put GC.Collect() just before you start the stopwatch, the difference in times goes away.
If you instead change the initialisation of the list to new List<ProductNameStream.FileValue>(TEST);, the difference in time also goes away.
(Initialising the list's capacity to the final size in its constructor prevents multiple reallocations of its internal array while items are being added to it, which will reduce pressure on the garbage collector.)
Therefore, I assert based on this evidence that it is indeed the garbage collector that is impacting your timings.
Incidentally, the threshold value was slightly different for me, and for at least one other person too (which isn't surprising if the timing differences are being caused by the garbage collector).
Your data structure is inefficient and is forcing you to do a lot of allocations during the computation. Have a look at using a fixed-size array inside a struct.
Also, preallocate the list. Don't rely on the list constantly adjusting its size, which also creates garbage.
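Preallocating is as simple as new List<ProductNameStream.FileValue>(TEST). For the fixed-size-buffer part, here is a minimal sketch; FileValueFixed is an illustrative name (not from the question), and fixed buffers require an unsafe context (compile with /unsafe):
// Fixed-size buffers keep the eight ushorts inline in the struct instead of in four
// separately heap-allocated arrays, removing those allocations entirely.
public unsafe struct FileValueFixed
{
    public DateTime Time;
    public fixed ushort Value[8];
    public fixed ushort Avg1[8];
    public fixed ushort Avg2[8];
    public fixed ushort DAvg[8];
    public ushort AlarmDelta;
    public ushort AlarmAverage;
    public ushort AlarmSum;
}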

Initiate a float list with zeros in C#

I want to initialize a list of N objects with zeros (0.0). I thought of doing it like this:
var TempList = new List<float>(new float[(int)(N)]);
Is there any better (more efficient) way to do that?
Your current solution creates an array with the sole purpose of initialising a list with zeros, and then throws that array away. This might appear to be not efficient. However, as we shall see, it is in fact very efficient!
Here's a method that doesn't create an intermediary array:
int n = 100;
var list = new List<float>(n);
for (int i = 0; i < n; ++i)
    list.Add(0f);
Alternatively, you can use Enumerable.Repeat() to provide 0f "n" times, like so:
var list = new List<float>(n);
list.AddRange(Enumerable.Repeat(0f, n));
But both of these methods turn out to be slower!
Here's a little test app to do some timings.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace Demo
{
    public class Program
    {
        private static void Main()
        {
            var sw = new Stopwatch();
            int n = 1024 * 1024 * 16;
            int count = 10;
            int dummy = 0;
            for (int trial = 0; trial < 4; ++trial)
            {
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method1(n).Count;
                Console.WriteLine("Enumerable.Repeat() took " + sw.Elapsed);
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method2(n).Count;
                Console.WriteLine("list.Add() took " + sw.Elapsed);
                sw.Restart();
                for (int i = 0; i < count; ++i)
                    dummy += method3(n).Count;
                Console.WriteLine("(new float[n]) took " + sw.Elapsed);
                Console.WriteLine("\n");
            }
        }

        private static List<float> method1(int n)
        {
            var list = new List<float>(n);
            list.AddRange(Enumerable.Repeat(0f, n));
            return list;
        }

        private static List<float> method2(int n)
        {
            var list = new List<float>(n);
            for (int i = 0; i < n; ++i)
                list.Add(0f);
            return list;
        }

        private static List<float> method3(int n)
        {
            return new List<float>(new float[n]);
        }
    }
}
Here's my results for a RELEASE build:
Enumerable.Repeat() took 00:00:02.9508207
list.Add() took 00:00:01.1986594
(new float[n]) took 00:00:00.5318123
So it turns out that creating an intermediary array is quite a lot faster. However, be aware that this testing code is flawed because it doesn't account for garbage collection overhead caused by allocating the intermediary array (which is very hard to time properly).
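Forcing a collection between timed blocks does not charge the collection cost to the method that produced the garbage, but it is one hedged way to stop one method's leftover garbage from skewing the next method's timing. A sketch (not a complete benchmarking harness) using the existing stopwatch:
// Settle the heap before the next timed region so garbage from the previous
// method is not collected inside this method's measurement.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
sw.Restart();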
Finally, there is a REALLY EVIL, NASTY way you can optimise this using reflection. But this is brittle, will probably fail to work in the future, and should never, ever be used in production code.
I present it here only as a curiosity:
private static List<float> method4(int n)
{
    var list = new List<float>(n);
    list.GetType().GetField("_size", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(list, n);
    return list;
}
Doing this reduces the time to less than a tenth of a second, compared to the next fastest method which takes half a second. But don't do it.
What is wrong with
float[] A = new float[N];
or
List<float> A = new List<float>(N);
Note that trying to micromanage the compiler is not optimization. Start with the cleanest code that does what you want and let the compiler do its thing.
Edit 1
The solution with List<float> produces an empty list; only its internal backing array has N (zero-valued) items allocated. So we can trick it with some reflection:
static void Main(string[] args)
{
    int N = 100;
    float[] array = new float[N];
    List<float> list = new List<float>(N);
    var size = typeof(List<float>).GetField("_size", BindingFlags.Instance | BindingFlags.NonPublic);
    size.SetValue(list, N);
    // Now list has 100 zero items
}
Why not:
var itemsWithZeros = new float[length];

C# Crashing after modifying static variable during benchmarking

I have implemented a B-tree and now I am trying to find the best size per node. I am using time benchmarking to measure the speed.
The problem is that it crashes on the second tested number in the benchmarking method.
For the example below, the console output is
Benchmarking 10
Benchmarking 11
The crash is in the insert method of the Node class, but that does not matter, because when I tested with any single number as SIZE it worked well. Maybe I do not understand how the class objects are created, or something like that.
I also tried calling the benchmarking method with various numbers from Main, but with the same result: it crashed during the second call.
Can somebody please look at it and explain to me what I am doing wrong? Thanks in advance!
I have put part of the code here, and here is the whole thing: http://pastebin.com/AcihW1Qk
public static void benchmark(String filename, int d, int h)
{
    using (System.IO.StreamWriter file = new System.IO.StreamWriter(filename))
    {
        Stopwatch sw = new Stopwatch();
        for (int i = d; i <= h; i++)
        {
            Console.WriteLine("Benchmarking SIZE = " + i.ToString());
            file.WriteLine("SIZE = " + i.ToString());
            sw.Start();
            // code here
            Node.setSize(i);
            Tree tree = new Tree();
            for (int k = 0; k < 10000000; k++)
                tree.insert(k);
            Random r = new Random(10);
            for (int k = 0; k < 10000; k++)
            {
                int x = r.Next(10000000);
                tree.contains(x);
            }
            file.WriteLine("Depth of tree is " + tree.depth.ToString());
            // end of code
            sw.Stop();
            file.WriteLine("TIME = " + sw.ElapsedMilliseconds.ToString());
            file.WriteLine();
            sw.Reset();
        }
    }
}

static void Main(string[] args)
{
    benchmark("benchmark 10-11.txt", 10, 11);
}
}

Why is List<string>.Sort() slow?

So I noticed that a treeview took unusually long to sort. At first I figured that most of the time was spent repainting the control after adding each sorted item, but either way I had a gut feeling that List<T>.Sort() was taking longer than reasonable, so I used a custom sort method to benchmark it against. The results were interesting: List<T>.Sort() took ~20 times longer. That's the biggest performance disappointment I've ever encountered in .NET for such a simple task.
My question is, what could be the reason for this? My guess is the overhead of invoking the comparison delegate, which in turn has to call String.Compare() (in the case of string sorting). Increasing the size of the list appears to increase the performance gap. Any ideas? I try to use the .NET classes as much as possible, but in cases like this I just can't.
Edit:
static List<string> Sort(List<string> list)
{
    if (list.Count == 0)
    {
        return new List<string>();
    }
    List<string> _list = new List<string>(list.Count);
    _list.Add(list[0]);
    int length = list.Count;
    for (int i = 1; i < length; i++)
    {
        string item = list[i];
        int j;
        for (j = _list.Count - 1; j >= 0; j--)
        {
            if (String.Compare(item, _list[j]) > 0)
            {
                _list.Insert(j + 1, item);
                break;
            }
        }
        if (j == -1)
        {
            _list.Insert(0, item);
        }
    }
    return _list;
}
Answer: It's not.
I ran the following benchmark in a simple console app and your code was slower:
static void Main(string[] args)
{
    long totalListSortTime = 0;
    long totalCustomSortTime = 0;
    for (int c = 0; c < 100; c++)
    {
        List<string> list1 = new List<string>();
        List<string> list2 = new List<string>();
        for (int i = 0; i < 5000; i++)
        {
            var rando = RandomString(15);
            list1.Add(rando);
            list2.Add(rando);
        }
        Stopwatch watch1 = new Stopwatch();
        Stopwatch watch2 = new Stopwatch();
        watch2.Start();
        list2 = Sort(list2);
        watch2.Stop();
        totalCustomSortTime += watch2.ElapsedMilliseconds;
        watch1.Start();
        list1.Sort();
        watch1.Stop();
        totalListSortTime += watch1.ElapsedMilliseconds;
    }
    Console.WriteLine("totalListSortTime = " + totalListSortTime);
    Console.WriteLine("totalCustomSortTime = " + totalCustomSortTime);
    Console.ReadLine();
}
Result:
I haven't had the time to fully test it because I had a blackout (writing from my phone now), but it would seem your code (from Pastebin) is sorting an already ordered list several times, so your algorithm could simply be faster at sorting an already sorted list. If the standard .NET implementation is a quicksort, this would be natural, since quicksort has its worst-case behaviour on already sorted input.
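If the culture-aware String.Compare that the OP suspects really is a factor in your own scenario, one thing to try, sketched here under the assumption that ordinal ordering is acceptable for the treeview, is to pass an ordinal comparer to List<T>.Sort (using list1 from the benchmark above):
// Ordinal comparison skips the culture-sensitive rules the default string comparer applies;
// the resulting order can differ, so measure it and verify it fits your use case.
list1.Sort(StringComparer.Ordinal);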

How to initialize integer array in C# [duplicate]

Possible Duplicate:
c# Leaner way of initializing int array
Basically I would like to know if there is more efficient code than the one shown below
private static int[] GetDefaultSeriesArray(int size, int value)
{
int[] result = new int[size];
for (int i = 0; i < size; i++)
{
result[i] = value;
}
return result;
}
where size can vary from 10 to 150000. For small arrays it is not an issue, but there should be a better way to do the above.
I am using VS2010(.NET 4.0)
C#/the CLR does not have a built-in way to initialize an array with non-default values.
Your code is about as efficient as it can get if you measure in operations per item.
You can potentially get faster initialization if you initialize chunks of a huge array in parallel. This approach will need careful tuning due to the non-trivial cost of multithreaded operations.
Much better results can be obtained by analyzing your needs and potentially removing the whole initialization altogether: e.g. if the array normally contains a constant value, you can implement some sort of COW (copy-on-write) approach, where your object initially has no backing array and simply returns the constant value, and on a write to an element it creates a (potentially partial) backing array for the modified segment.
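To make the copy-on-write idea concrete, here is a minimal sketch; the ConstantBackedArray type is hypothetical (not a library type), and it materializes a real backing array only on the first write:
using System;

// Hypothetical copy-on-write wrapper: reads return the constant until the first
// write forces allocation (and one-time fill) of a real backing array.
public sealed class ConstantBackedArray
{
    private readonly int _length;
    private readonly int _constant;
    private int[] _backing;   // stays null until the first write

    public ConstantBackedArray(int length, int constant)
    {
        _length = length;
        _constant = constant;
    }

    public int Length { get { return _length; } }

    public int this[int index]
    {
        get
        {
            if ((uint)index >= (uint)_length) throw new IndexOutOfRangeException();
            return _backing == null ? _constant : _backing[index];
        }
        set
        {
            if ((uint)index >= (uint)_length) throw new IndexOutOfRangeException();
            if (_backing == null)
            {
                _backing = new int[_length];
                for (int i = 0; i < _length; i++) _backing[i] = _constant;
            }
            _backing[index] = value;
        }
    }
}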
Slower but more compact code (which is potentially easier to read) would be to use Enumerable.Repeat. Note that ToArray will cause a significant amount of memory to be allocated for large arrays (which may also end up as allocations on the LOH) - see High memory consumption with Enumerable.Range?.
var result = Enumerable.Repeat(value, size).ToArray();
One way you can improve speed is by utilizing Array.Copy. It works at a lower level, bulk-copying larger sections of memory.
By batching the assignments, you end up copying the array from one section of itself to another.
On top of that, the batches themselves can be quite effectively parallelized.
Here is my initial version of the code. On my machine (which only has two cores), with a sample array of 10 million items, I was getting a 15% or so speedup. You'll need to play around with the batch size (try to stay in multiples of your page size to keep it efficient) to tune it to the size of items that you have. For smaller arrays it will end up almost identical to your code, as it won't get past filling the first batch, but it also won't be (noticeably) worse in those cases.
private const int batchSize = 1048576;

private static int[] GetDefaultSeriesArray2(int size, int value)
{
    int[] result = new int[size];
    //fill the first batch normally
    int end = Math.Min(batchSize, size);
    for (int i = 0; i < end; i++)
    {
        result[i] = value;
    }
    int numBatches = size / batchSize;
    Parallel.For(1, numBatches, batch =>
    {
        Array.Copy(result, 0, result, batch * batchSize, batchSize);
    });
    //handle partial leftover batch
    for (int i = numBatches * batchSize; i < size; i++)
    {
        result[i] = value;
    }
    return result;
}
Another way to improve performance is with a pretty basic technique: loop unrolling.
I have written some code that initializes an array with 20 million items; this is done repeatedly 100 times and an average is calculated. Without unrolling the loop, this takes about 44 ms. With loop unrolling by a factor of 10, the process finishes in 23 ms.
private void Looper()
{
    int repeats = 100;
    float avg = 0;
    ArrayList times = new ArrayList();
    for (int i = 0; i < repeats; i++)
        times.Add(Time());
    Console.WriteLine(GetAverage(times)); //44
    times.Clear();
    for (int i = 0; i < repeats; i++)
        times.Add(TimeUnrolled());
    Console.WriteLine(GetAverage(times)); //22
}

private float GetAverage(ArrayList times)
{
    long total = 0;
    foreach (var item in times)
    {
        total += (long)item;
    }
    return total / times.Count;
}

private long Time()
{
    Stopwatch sw = new Stopwatch();
    int size = 20000000;
    int[] result = new int[size];
    sw.Start();
    for (int i = 0; i < size; i++)
    {
        result[i] = 5;
    }
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
    return sw.ElapsedMilliseconds;
}

private long TimeUnrolled()
{
    Stopwatch sw = new Stopwatch();
    int size = 20000000;
    int[] result = new int[size];
    sw.Start();
    for (int i = 0; i < size; i += 10)
    {
        result[i] = 5;
        result[i + 1] = 5;
        result[i + 2] = 5;
        result[i + 3] = 5;
        result[i + 4] = 5;
        result[i + 5] = 5;
        result[i + 6] = 5;
        result[i + 7] = 5;
        result[i + 8] = 5;
        result[i + 9] = 5;
    }
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
    return sw.ElapsedMilliseconds;
}
Enumerable.Repeat(value, size).ToArray();
Reading up on Enumerable.Repeat, it is 20 times slower than the OP's standard for loop, and the only thing I found which might improve its speed is:
private static int[] GetDefaultSeriesArray(int size, int value)
{
    int[] result = new int[size];
    for (int i = 0; i < size; ++i)
    {
        result[i] = value;
    }
    return result;
}
NOTE: the i++ is changed to ++i. i++ copies i, increments i, and returns the original value; ++i just returns the incremented value.
As someone already mentioned, you can leverage parallel processing like this:
int[] result = new int[size];
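// Note: assigning to the lambda parameter x does not write back into the array (see the edit below).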
Parallel.ForEach(result, x => x = value);
return result;
Sorry I had no time to do performance testing on this (don't have VS installed on this machine) but if you can do it and share the results it would be great.
EDIT: As per comment, while I still think that in terms of performance they are equivalent, you can try the parallel for loop:
Parallel.For(0, size, i => result[i] = value);
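Per-element delegate invocation tends to dominate a parallel loop this fine-grained. A sketch using range partitioning (this assumes System.Collections.Concurrent and System.Threading.Tasks are available) hands each worker a contiguous slice instead, so the delegate runs once per range rather than once per element:
// Partitioner.Create(0, size) yields (fromInclusive, toExclusive) ranges;
// each worker fills its own contiguous slice of the array.
Parallel.ForEach(Partitioner.Create(0, size), range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
        result[i] = value;
});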
