My application receives data from a serial port, which is sent in packets. The packets are defined as follows:
1 byte - Identifier
2 bytes - length of data
n bytes - data
1 byte - Checksum
For example, if the length is specified as 508, there will be 508 bytes of data, which would be 127 uint32_t values.
Currently I use the following code to assemble the uint32_t values from the data that is sent in bytes:
private UInt32[] number_array = new UInt32[16384];
private void decodePacket(int startpos, byte[] data, int lenght)
{
/* Starting position */
int pos = startpos;
for(int i=0; i<lenght; i++)
{
/* Convert 4 bytes to one uint32_t value */
int value = data[i] | data[i + 1]<<8 | data[i + 2]<<16 | data[i + 3]<<24;
/* Write to array */
number_array[pos] = Convert.ToUInt32(value);
/* Advance i by 4 (bytes) */
i += 4;
/* Advance pos */
pos++;
}
}
It does work fine, but I'm thinking it's very inefficient. There are usually 16384 uint32_t values to process, so this function is called a lot of times.
Is there a more efficient / faster way to do this?
Look at this simple code:
static void Main(string[] args)
{
byte[] data = new byte[]
{
1, 10, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 5
};
byte id = data[0];
byte[] len = new byte[4];
Array.Copy(data, 1, len, 0, 2);
int dataLen = BitConverter.ToInt32(len, 0);
byte[] dataRead = new byte[dataLen];
Array.Copy(data, 3, dataRead, 0, dataLen);
byte checksum = data[data.Length - 1];
Console.ReadKey();
}
Now, first you get the identifier, which is at the first position.
Next, you get the length of the data. You have to create a 4-byte array and copy the two length bytes from the data array into it. It should be 4 bytes because you are converting those bytes into an Int32. You could make the array 2 bytes long and convert it to an Int16, but Int32 should have better performance.
So, once you have the length bytes in this len array, you can convert them into an Int32. But be careful with endianness; it may be different.
At the end you create an array that will contain all the REAL data, and then copy the real data into that new array.
This solution should be faster than yours. But... be careful of endianness.
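If the byte order ever becomes a concern, assembling the length with explicit shifts sidesteps it entirely; a minimal sketch, assuming the length is transmitted low byte first (little-endian), as in the sample data above:
int dataLen = data[1] | (data[2] << 8);
This reads the same on any platform because it never reinterprets raw memory.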
Here is the best possible way using C#
private void decodePacket(int startpos, byte[] data, int lenght)
{
/* Starting position */
int pos = startpos;
for (int i = 0; i < lenght; i += 4)
{
/* Convert 4 bytes to one uint32_t value */
int value = BitConverter.ToInt32(data, i);
/* Write to array */
number_array[pos] = Convert.ToUInt32(value);
/* Advance pos */
pos++;
}
}
This is the same code as in the question, but with two changes.
The index i was being incremented in two places, which resulted in increments of 5 instead of 4. It is now incremented in just one place, by 4.
BitConverter is used instead of bitwise logic. Although it might not provide any significant performance boost, BitConverter is better for platform independence.
UPDATE
If your bytes are stored at 32-bit-aligned memory addresses, BitConverter gives you the fastest conversion. But in C# you cannot guarantee memory alignment, and in that case bit shifting is the only way.
When it falls back to bit shifting, BitConverter uses the same logic for little-endian systems as shown in the question, but it helps keep your code platform independent by using a different bit-shifting pattern on big-endian systems.
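For illustration, a minimal sketch of a platform-independent decode of the little-endian wire format, using the question's data, i, pos and number_array; the shift pattern itself is byte-order agnostic, so BitConverter is only used when the host is little-endian:
uint value;
if (BitConverter.IsLittleEndian)
    value = BitConverter.ToUInt32(data, i);
else
    value = (uint)(data[i] | data[i + 1] << 8 | data[i + 2] << 16 | data[i + 3] << 24);
number_array[pos] = value;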
I need to get UInt16 and UInt64 values as Byte[]. At the moment I am using BitConverter.GetBytes, but this method gives me a new array instance each time.
I would like to use a method that allows me to "copy" those values into already existing arrays, something like:
.ToBytes(UInt64 value, Byte[] array, Int32 offset);
.ToBytes(UInt16 value, Byte[] array, Int32 offset);
I have been taking a look at the .NET source code with ILSpy, but I'm not very sure how this code works and how I can safely modify it to fit my requirement:
public unsafe static byte[] GetBytes(long value)
{
byte[] array = new byte[8];
fixed (byte* ptr = array)
{
*(long*)ptr = value;
}
return array;
}
Which would be the right way of accomplishing this?
Updated: I cannot use unsafe code. It should not create new array instances.
You can do it like this:
static unsafe void ToBytes(ulong value, byte[] array, int offset)
{
fixed (byte* ptr = &array[offset])
*(ulong*)ptr = value;
}
Usage:
byte[] array = new byte[9];
ToBytes(0x1122334455667788, array, 1);
The offset can only be specified in bytes.
If you want a managed way to do it:
static void ToBytes(ulong value, byte[] array, int offset)
{
byte[] valueBytes = BitConverter.GetBytes(value);
Array.Copy(valueBytes, 0, array, offset, valueBytes.Length);
}
Or you can fill in the values yourself:
static void ToBytes(ulong value, byte[] array, int offset)
{
for (int i = 0; i < 8; i++)
{
array[offset + i] = (byte)value;
value >>= 8;
}
}
Now that .NET has added Span<T> support for working with arrays, unmanaged memory, etc. without excess allocations, it has also added System.Buffers.Binary.BinaryPrimitives.
This works as you would want, e.g. WriteUInt64BigEndian has this signature:
public static void WriteUInt64BigEndian (Span<byte> destination, ulong value);
which avoids allocating.
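For example, a minimal usage sketch (the buffer, offsets and values here are only illustrative):
using System;
using System.Buffers.Binary;

byte[] buffer = new byte[16];
// write a big-endian ulong at offset 0 and a little-endian ushort right after it
BinaryPrimitives.WriteUInt64BigEndian(buffer.AsSpan(0, 8), 0x1122334455667788UL);
BinaryPrimitives.WriteUInt16LittleEndian(buffer.AsSpan(8, 2), 0xABCD);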
You say you want to avoid creating new arrays and you cannot use unsafe. Either use Ulugbek Umirov's answer with cached arrays (be careful with threading issues) or:
static void ToBytes(ulong value, byte[] array, int offset) {
unchecked {
array[offset + 0] = (byte)(value >> (8*7));
array[offset + 1] = (byte)(value >> (8*6));
array[offset + 2] = (byte)(value >> (8*5));
array[offset + 3] = (byte)(value >> (8*4));
//...
}
}
It seems that you wish to avoid, for some reason, creating any temporary new arrays. And you also want to avoid unsafe code.
You could pin the object and then copy to the array.
public static void ToBytes(ulong value, byte[] array, int offset)
{
GCHandle handle = GCHandle.Alloc(value, GCHandleType.Pinned);
try
{
Marshal.Copy(handle.AddrOfPinnedObject(), array, offset, 8);
}
finally
{
handle.Free();
}
}
BinaryWriter can be a good solution.
var writer = new BinaryWriter(new MemoryStream(yourbuffer, youroffset, yourbuffer.Length-youroffset));
writer.Write(someuint64);
It's useful when you need to write a lot of data into a buffer continuously:
var writer = new BinaryWriter(new MemoryStream(yourbuffer));
foreach(var value in yourints){
writer.Write(value);
}
or when you just want to write to a file, which is the best case for using BinaryWriter:
var writer = new BinaryWriter(yourFileStream);
foreach(var value in yourints){
writer.Write(value);
}
For anyone else who stumbles across this:
If big-endian support is NOT required and unsafe code is allowed, I've written a library specifically to minimize GC allocations during serialization for single-threaded serializers, such as when serializing across TCP sockets.
Note: it supports little-endian ONLY, e.g. x86 / x64 (big-endian support might be added some day).
https://github.com/tcwicks/ChillX/tree/master/src/ChillX.Serialization
In the process I also rewrote BitConverter so that it works similarly to BinaryPrimitives, but with a lot of extras such as support for arrays.
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Serialization/BitConverterExtended.cs
Random rnd = new Random();
RentedBuffer<byte> buffer = RentedBuffer<byte>.Shared.Rent(BitConverterExtended.SizeOfUInt64
+ (20 * BitConverterExtended.SizeOfUInt16)
+ (20 * BitConverterExtended.SizeOfTimeSpan)
+ (10 * BitConverterExtended.SizeOfSingle));
UInt64 exampleLong = long.MaxValue;
int startIndex = 0;
startIndex += BitConverterExtended.GetBytes(exampleLong, buffer.BufferSpan, startIndex);
UInt16[] shortArray = new UInt16[20];
for (int I = 0; I < shortArray.Length; I++) { shortArray[I] = (ushort)rnd.Next(0, UInt16.MaxValue); }
//When using reflection / expression trees the CLR cannot distinguish between UInt16 and Int16, or UInt64 and Int64, etc...
//Therefore the UInt methods are renamed.
startIndex += BitConverterExtended.GetBytesUShortArray(shortArray, buffer.BufferSpan, startIndex);
TimeSpan[] timespanArray = new TimeSpan[20];
for (int I = 0; I < timespanArray.Length; I++) { timespanArray[I] = TimeSpan.FromSeconds(rnd.Next(0, int.MaxValue)); }
startIndex += BitConverterExtended.GetBytes(timespanArray, buffer.BufferSpan, startIndex);
float[] floatArray = new float[10];
for (int I = 0; I < floatArray.Length; I++) { floatArray[I] = MathF.PI * rnd.Next(short.MinValue, short.MaxValue); }
startIndex += BitConverterExtended.GetBytes(floatArray, buffer.BufferSpan, startIndex);
//Do stuff with buffer and then
buffer.Return();
//Or
buffer = null;
//and let RentedBufferContract do this automatically
The underlying problem is actually twofold. Even if you use BinaryPrimitives, we still have the issue of the byte array buffers that we are writing to / reading from. The issue is further compounded if we have lots of array fields T[], for example int[].
Using the standard BitConverter leads to situations like 80% of the time being spent in GC.
BinaryPrimitives is much better, but we still need to manage the byte[] buffers. If we create one byte[] buffer for each int in an array of 1000 ints, that is 1000 byte[4] arrays to garbage collect. So instead we rent one buffer big enough for all 1000 ints (4000 bytes) from ArrayPool, but then we have to manage returning these buffers.
Further complicating this is the fact that we have to track how much of each buffer is actually used, since ArrayPool will return arrays that are larger than requested: System.Buffers.ArrayPool<byte>.Shared.Rent(4000) will usually return a byte[4096] or maybe even a byte[8192].
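For reference, the bare System.Buffers pattern the library wraps looks roughly like this (a minimal sketch, without any ChillX types):
using System.Buffers;

byte[] rented = ArrayPool<byte>.Shared.Rent(4000); // may actually be byte[4096] or larger
try
{
    int used = 0;
    // ... serialize into rented[0..used) and track 'used' yourself,
    // since rented.Length is not the requested size ...
}
finally
{
    ArrayPool<byte>.Shared.Return(rented);
}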
This serialization code:
uses wrappers around ArrayPool in order to make its usage transparent:
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Core/Structures/ManagedPool.cs
For managing renting and returning of ArrayPool buffers:
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Core/Structures/RentedBuffer.cs
It has an overload of the + operator to assign from a Span automatically.
For preventing memory leaks and for guaranteed, use-and-forget-style return of rented buffers:
https://github.com/tcwicks/ChillX/blob/master/src/ChillX.Core/Structures/RentedBufferContract.cs
Example:
//Assume ArrayProperty_Long is a long[] array.
RentedBuffer<long> ClonedArray = RentedBuffer<long>.Shared.Rent(ArrayProperty_Long.Length);
ClonedArray += ArrayProperty_Long;
ClonedArray will be returned to ArrayPool.Shared automatically when ClonedArray goes out of scope (handled by RentedBufferContract).
Or call ClonedArray.Return();
p.s. Any feedback is much appreciated.
The following benchmarks compare performance using rented buffers and this serializer implementation versus MessagePack without rented buffers. MessagePack is the faster serializer if micro-benchmarked; the performance difference here is purely due to reduced GC overhead from pooling / renting buffers.
----------------------------------------------------------------------------------
Benchmarking ChillX Serializer: Test Object: Data class with 31 properties / fields of different types including multiple arrays of different types
Num Reps: 50000 - Array Size: 256
----------------------------------------------------------------------------------
Entity Size bytes: 20,678 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,025,523 - Mbps: 503.32 - Time: 00:00:01.9589880
Entity Size bytes: 20,678 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,026,129 - Mbps: 515.26 - Time: 00:00:01.9135886
Entity Size bytes: 20,678 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,026,291 - Mbps: 518.46 - Time: 00:00:01.9018027
----------------------------------------------------------------------------------
Benchmarking MessagePack Serializer: Test Object: Data class with 31 properties / fields of different types including multiple arrays of different types
Num Reps: 50000 - Array Size: 256
----------------------------------------------------------------------------------
Entity Size bytes: 19,811 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,012,386 - Mbps: 234.01 - Time: 00:00:04.0668261
Entity Size bytes: 19,811 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,012,285 - Mbps: 232.10 - Time: 00:00:04.0523329
Entity Size bytes: 19,811 - Count: 00,050,000 - Threads: 01 - Entities Per Second: 00,012,642 - Mbps: 238.84 - Time: 00:00:03.9811276
I have a byte array. It contains 24 bit signed integers stored lsb to msb. The array could hold up to 4mb of data. The integers will be converted to 32 bit signed integers to be used in the application. I would like to hear about possible strategies for conversion and sampling of this data.
One thing I need to do with the data is graph it. With sequential sampling, I am worried about losing some of the important peaks and valleys in the data. I also want to do some calculations to determine the highest and lowest values.
Given what I need to do, are there any algorithms or ways of doing things that will help me achieve my goal quickly and efficiently?
If your input has to be 3-byte ints, then you can convert them to 4-byte ints as follows:
byte[] input = new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9}; //sample data
byte[] buffer = new byte[4]; //4 byte buffer for conversion from 3 -> 4 byte int
int[] output = new int[input.Length / 3];
for (int i = 0, j = 0; i < input.Length; i += 3, j++)
{
Buffer.BlockCopy(input, i, buffer, 0, 3);
int signed32 = BitConverter.ToInt32(buffer, 0);
//sign-extend from 24 to 32 bits so negative samples convert correctly
if ((signed32 & 0x800000) != 0)
signed32 |= unchecked((int)0xFF000000);
output[j] = signed32;
}
Edit
Fixed block copy for little endian.
I would suggest converting the byte array to an int[]. That way you can work with it easily, and today's computers handle 32-bit integers much better than bytes that represent 24-bit integers.
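A minimal sketch of that conversion, assuming the samples are little-endian (lsb first, as described) and two's-complement, where data is the raw byte[]:
int[] samples = new int[data.Length / 3];
for (int i = 0, j = 0; i + 2 < data.Length; i += 3, j++)
{
    // place the 3 bytes in the top 3 bytes of an int, then arithmetic-shift
    // right by 8 so the sign of the 24-bit value is extended correctly
    samples[j] = (data[i] << 8 | data[i + 1] << 16 | data[i + 2] << 24) >> 8;
}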
You should use the regular sized ints.
Storage is cheap (especially if you only need ~4MB of data) and if you are going to convert them to int32's for manipulation it's better if they're in that format from the beginning.
If the conversion will actually produce another array of int32s then you've just doubled the memory footprint. If you convert individual elements you've just increased execution time.
Best use the native int size.
It might be easier to implement and for future developers to understand if you use the bytes directly (3 at a time).
// If you're reading from a file, you don't have to read the whole array.
// Just read a large chunk (like 3 * 1024) bytes at a time (so it's divisible by 3).
byte[] data = new byte[]{1,2,3, 4,5,6, 7,8,9};
int[] values = new int[data.Length / 3];
int min = int.MaxValue;
int max = int.MinValue;
for (int i = 0, j = 0; i < data.Length - 2; i += 3, j++)
{
byte b1 = data[i];
byte b2 = data[i+1];
byte b3 = data[i+2];
// Are we dealing with 2's complement or a sign bit? Let's assume sign bit.
int sign = b3 >> 7 == 1 ? -1 : 1;
int value = sign * (((b3 & 0x7F) << 16) | (b2 << 8) | b1);
values[j] = value;
max = max > value ? max : value;
min = min < value ? min : value;
}
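For the graphing concern in the question (not losing peaks and valleys when sampling down for display), one common approach is to keep the minimum and maximum of each bucket instead of taking every Nth sample. A minimal sketch, assuming values is the int[] built above and buckets is the number of points you want to plot:
static (int min, int max)[] DownsampleMinMax(int[] values, int buckets)
{
    var result = new (int min, int max)[buckets];
    int perBucket = (values.Length + buckets - 1) / buckets; // ceiling division
    for (int b = 0; b < buckets; b++)
    {
        int start = b * perBucket;
        int end = Math.Min(start + perBucket, values.Length);
        if (start >= end) break; // fewer samples than buckets; remaining buckets stay (0, 0)
        int lo = int.MaxValue, hi = int.MinValue;
        for (int i = start; i < end; i++)
        {
            if (values[i] < lo) lo = values[i];
            if (values[i] > hi) hi = values[i];
        }
        result[b] = (lo, hi);
    }
    return result;
}
Plotting both the min and max of each bucket preserves the extremes that a simple every-Nth-sample approach can miss.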
I need to fill a byte[] with a single non-zero value. How can I do this in C# without looping through each byte in the array?
Update: The comments seem to have split this into two questions -
Is there a Framework method to fill a byte[] that might be akin to memset
What is the most efficient way to do it when we are dealing with a very large array?
I totally agree that using a simple loop works just fine, as Eric and others have pointed out. The point of the question was to see if I could learn something new about C# :) I think Juliet's method for a Parallel operation should be even faster than a simple loop.
Benchmarks:
Thanks to Mikael Svenson: http://techmikael.blogspot.com/2009/12/filling-array-with-default-value.html
It turns out the simple for loop is the way to go unless you want to use unsafe code.
Apologies for not being clearer in my original post. Eric and Mark are both correct in their comments; need to have more focused questions for sure. Thanks for everyone's suggestions and responses.
You could use Enumerable.Repeat:
byte[] a = Enumerable.Repeat((byte)10, 100).ToArray();
The first parameter is the element you want repeated, and the second parameter is the number of times to repeat it.
This is OK for small arrays but you should use the looping method if you are dealing with very large arrays and performance is a concern.
Actually, there is a little-known IL instruction called Initblk which does exactly that. So, let's use it from a method that doesn't require "unsafe". Here's the helper class:
public static class Util
{
static Util()
{
var dynamicMethod = new DynamicMethod("Memset", MethodAttributes.Public | MethodAttributes.Static, CallingConventions.Standard,
null, new [] { typeof(IntPtr), typeof(byte), typeof(int) }, typeof(Util), true);
var generator = dynamicMethod.GetILGenerator();
generator.Emit(OpCodes.Ldarg_0);
generator.Emit(OpCodes.Ldarg_1);
generator.Emit(OpCodes.Ldarg_2);
generator.Emit(OpCodes.Initblk);
generator.Emit(OpCodes.Ret);
MemsetDelegate = (Action<IntPtr, byte, int>)dynamicMethod.CreateDelegate(typeof(Action<IntPtr, byte, int>));
}
public static void Memset(byte[] array, byte what, int length)
{
var gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
MemsetDelegate(gcHandle.AddrOfPinnedObject(), what, length);
gcHandle.Free();
}
public static void ForMemset(byte[] array, byte what, int length)
{
for(var i = 0; i < length; i++)
{
array[i] = what;
}
}
private static Action<IntPtr, byte, int> MemsetDelegate;
}
And what is the performance? Here's my result for Windows/.NET and Linux/Mono (different PCs).
Mono/for: 00:00:01.1356610
Mono/initblk: 00:00:00.2385835
.NET/for: 00:00:01.7463579
.NET/initblk: 00:00:00.5953503
So it's worth considering. Note that the resulting IL will not be verifiable.
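Usage is then a one-liner (the buffer here is just for illustration):
byte[] buffer = new byte[65536];
Util.Memset(buffer, 0x42, buffer.Length);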
Building on Lucero's answer, here is a faster version. It doubles the number of bytes copied with Buffer.BlockCopy on every iteration. Interestingly, it outperforms that version by a factor of 10 on relatively small arrays (1000 elements); the difference is not as large for bigger arrays (1,000,000 elements), but it is always faster. The nice thing is that it performs well even down to small arrays: it becomes faster than the naive approach at around length = 100, and for a one-million-element byte array it was 43 times faster.
(tested on Intel i7, .Net 2.0)
public static void MemSet(byte[] array, byte value) {
if (array == null) {
throw new ArgumentNullException("array");
}
int block = 32, index = 0;
int length = Math.Min(block, array.Length);
//Fill the initial array
while (index < length) {
array[index++] = value;
}
length = array.Length;
while (index < length) {
Buffer.BlockCopy(array, 0, array, index, Math.Min(block, length-index));
index += block;
block *= 2;
}
}
A little bit late, but the following approach might be a good compromise without resorting to unsafe code. Basically, it initializes the beginning of the array using a conventional loop and then switches to Buffer.BlockCopy(), which should be as fast as you can get with a managed call.
public static void MemSet(byte[] array, byte value) {
if (array == null) {
throw new ArgumentNullException("array");
}
const int blockSize = 4096; // bigger may be better to a certain extent
int index = 0;
int length = Math.Min(blockSize, array.Length);
while (index < length) {
array[index++] = value;
}
length = array.Length;
while (index < length) {
Buffer.BlockCopy(array, 0, array, index, Math.Min(blockSize, length-index));
index += blockSize;
}
}
Looks like System.Runtime.CompilerServices.Unsafe.InitBlock now does the same thing as the OpCodes.Initblk instruction that Konrad's answer mentions (he also mentioned a source link).
The code to fill in the array is as follows:
byte[] a = new byte[N];
byte valueToFill = 255;
System.Runtime.CompilerServices.Unsafe.InitBlock(ref a[0], valueToFill, (uint) a.Length);
This simple implementation uses successive doubling, and performs quite well (about 3-4 times faster than the naive version according to my benchmarks):
public static void Memset<T>(T[] array, T elem)
{
int length = array.Length;
if (length == 0) return;
array[0] = elem;
int count;
for (count = 1; count <= length/2; count*=2)
Array.Copy(array, 0, array, count, count);
Array.Copy(array, 0, array, count, length - count);
}
Edit: upon reading the other answers, it seems I'm not the only one with this idea. Still, I'm leaving this here, since it's a bit cleaner and it performs on par with the others.
If performance is critical, you could consider using unsafe code and working directly with a pointer to the array.
Another option could be importing memset from msvcrt.dll and using that. However, the overhead of invoking it might easily be larger than the gain in speed.
Or use the P/Invoke way:
[DllImport("msvcrt.dll",
EntryPoint = "memset",
CallingConvention = CallingConvention.Cdecl,
SetLastError = false)]
public static extern IntPtr MemSet(IntPtr dest, int c, int count);
static void Main(string[] args)
{
byte[] arr = new byte[3];
GCHandle gch = GCHandle.Alloc(arr, GCHandleType.Pinned);
MemSet(gch.AddrOfPinnedObject(), 0x7, arr.Length);
gch.Free(); // release the pinned handle once done
}
With the advent of Span<T> (which is dotnet core only, but it is the future of dotnet) you have yet another way of solving this problem:
var array = new byte[100];
var span = new Span<byte>(array);
span.Fill(255);
If performance is absolutely critical, then Enumerable.Repeat(n, m).ToArray() will be too slow for your needs. You might be able to crank out faster performance using PLINQ or Task Parallel Library:
using System.Threading.Tasks;
// ...
byte initialValue = 20;
byte[] data = new byte[size];
Parallel.For(0, size, index => data[index] = initialValue);
All the answers write single bytes only - what if you want to fill a byte array with words? Or floats? I find a use for that now and then. So after having written code similar to 'memset' in a non-generic way a few times, and arriving at this page to find good code for single bytes, I went about writing the method below.
I think PInvoke and C++/CLI each have their drawbacks. And why not have the runtime 'PInvoke' for you into mscorxxx? Array.Copy and Buffer.BlockCopy are native code certainly. BlockCopy isn't even 'safe' - you can copy a long halfway over another, or over a DateTime as long as they're in arrays.
At least I wouldn't go and start a new C++ project for things like this - it's almost certainly a waste of time.
So here's basically an extended version of the solutions presented by Lucero and TowerOfBricks that can be used to memset longs, ints, etc. as well as single bytes.
public static class MemsetExtensions
{
static void MemsetPrivate(this byte[] buffer, byte[] value, int offset, int length) {
var shift = 0;
for (; shift < 32; shift++)
if (value.Length == 1 << shift)
break;
if (shift == 32 || value.Length != 1 << shift)
throw new ArgumentException(
"The source array must have a length that is a power of two and be shorter than 4GB.", "value");
int remainder;
int count = Math.DivRem(length, value.Length, out remainder);
var si = 0;
var di = offset;
int cx;
if (count < 1)
cx = remainder;
else
cx = value.Length;
Buffer.BlockCopy(value, si, buffer, di, cx);
if (cx == remainder)
return;
var cachetrash = Math.Max(12, shift); // 1 << 12 == 4096
si = di;
di += cx;
var dx = offset + length;
// doubling up to 1 << cachetrash bytes i.e. 2^12 or value.Length whichever is larger
for (var al = shift; al <= cachetrash && di + (cx = 1 << al) < dx; al++) {
Buffer.BlockCopy(buffer, si, buffer, di, cx);
di += cx;
}
// cx bytes as long as it fits
for (; di + cx <= dx; di += cx)
Buffer.BlockCopy(buffer, si, buffer, di, cx);
// tail part if less than cx bytes
if (di < dx)
Buffer.BlockCopy(buffer, si, buffer, di, dx - di);
}
}
Having this, you can simply add short methods that take the value type you need to memset with and call the private method; e.g. just find-and-replace ulong in this method:
public static void Memset(this byte[] buffer, ulong value, int offset, int count) {
var sourceArray = BitConverter.GetBytes(value);
MemsetPrivate(buffer, sourceArray, offset, sizeof(ulong) * count);
}
Or go silly and do it with any type of struct (although the MemsetPrivate above only works for structs that marshal to a size that is a power of two):
public static void Memset<T>(this byte[] buffer, T value, int offset, int count) where T : struct {
var size = Marshal.SizeOf<T>();
var ptr = Marshal.AllocHGlobal(size);
var sourceArray = new byte[size];
try {
Marshal.StructureToPtr<T>(value, ptr, false);
Marshal.Copy(ptr, sourceArray, 0, size);
} finally {
Marshal.FreeHGlobal(ptr);
}
MemsetPrivate(buffer, sourceArray, offset, count * size);
}
I changed the initblk mentioned before to take ulongs to compare performance with my code and that silently fails - the code runs but the resulting buffer contains the least significant byte of the ulong only.
Nevertheless, I compared the performance of writing buffers of increasing size with a for loop, initblk, and my memset method. The times are total milliseconds over 100 repetitions, writing 8-byte ulongs as many times as fit in the buffer length. The for version is manually loop-unrolled for the 8 bytes of a single ulong.
Buffer Len #repeat For millisec Initblk millisec Memset millisec
0x00000008 100 For 0,0032 Initblk 0,0107 Memset 0,0052
0x00000010 100 For 0,0037 Initblk 0,0102 Memset 0,0039
0x00000020 100 For 0,0032 Initblk 0,0106 Memset 0,0050
0x00000040 100 For 0,0053 Initblk 0,0121 Memset 0,0106
0x00000080 100 For 0,0097 Initblk 0,0121 Memset 0,0091
0x00000100 100 For 0,0179 Initblk 0,0122 Memset 0,0102
0x00000200 100 For 0,0384 Initblk 0,0123 Memset 0,0126
0x00000400 100 For 0,0789 Initblk 0,0130 Memset 0,0189
0x00000800 100 For 0,1357 Initblk 0,0153 Memset 0,0170
0x00001000 100 For 0,2811 Initblk 0,0167 Memset 0,0221
0x00002000 100 For 0,5519 Initblk 0,0278 Memset 0,0274
0x00004000 100 For 1,1100 Initblk 0,0329 Memset 0,0383
0x00008000 100 For 2,2332 Initblk 0,0827 Memset 0,0864
0x00010000 100 For 4,4407 Initblk 0,1551 Memset 0,1602
0x00020000 100 For 9,1331 Initblk 0,2768 Memset 0,3044
0x00040000 100 For 18,2497 Initblk 0,5500 Memset 0,5901
0x00080000 100 For 35,8650 Initblk 1,1236 Memset 1,5762
0x00100000 100 For 71,6806 Initblk 2,2836 Memset 3,2323
0x00200000 100 For 77,8086 Initblk 2,1991 Memset 3,0144
0x00400000 100 For 131,2923 Initblk 4,7837 Memset 6,8505
0x00800000 100 For 263,2917 Initblk 16,1354 Memset 33,3719
I excluded the first call every time, since both initblk and memset take a hit of, I believe, about 0.22 ms on the first call. Slightly surprisingly, my code is faster than initblk at filling short buffers, even though it has half a page of setup code.
If anybody feels like optimizing this, go ahead really. It's possible.
I tested several ways, described in the different answers.
See the sources of the test in the C# test class.
You could do it when you initialize the array but I don't think that's what you are asking for:
byte[] myBytes = new byte[5] { 1, 1, 1, 1, 1};
.NET Core has a built-in Array.Fill() function, but sadly .NET Framework is missing it. .NET Core has two variations: fill the entire array and fill a portion of the array starting at an index.
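For example, on .NET Core these are the two built-in overloads in action:
byte[] buffer = new byte[1024];
Array.Fill(buffer, (byte)255);      // fill the entire array
Array.Fill(buffer, (byte)0, 0, 16); // fill 16 elements starting at index 0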
Building on the ideas above, here is a more generic Fill function that will fill an entire array of several data types. This is the fastest function when benchmarked against the other methods discussed in this post.
This function, along with the version that fills a portion of an array, is available in a free, open-source NuGet package (HPCsharp on nuget.org). Also included is a slightly faster version of Fill using SIMD/SSE instructions that performs only memory writes, whereas BlockCopy-based methods perform both memory reads and writes.
public static void FillUsingBlockCopy<T>(this T[] array, T value) where T : struct
{
int numBytesInItem = 0;
if (typeof(T) == typeof(byte) || typeof(T) == typeof(sbyte))
numBytesInItem = 1;
else if (typeof(T) == typeof(ushort) || typeof(T) == typeof(short))
numBytesInItem = 2;
else if (typeof(T) == typeof(uint) || typeof(T) == typeof(int))
numBytesInItem = 4;
else if (typeof(T) == typeof(ulong) || typeof(T) == typeof(long))
numBytesInItem = 8;
else
throw new ArgumentException(string.Format("Type '{0}' is unsupported.", typeof(T).ToString()));
int block = 32, index = 0;
int endIndex = Math.Min(block, array.Length);
while (index < endIndex) // Fill the initial block
array[index++] = value;
endIndex = array.Length;
for (; index < endIndex; index += block, block *= 2)
{
int actualBlockSize = Math.Min(block, endIndex - index);
Buffer.BlockCopy(array, 0, array, index * numBytesInItem, actualBlockSize * numBytesInItem);
}
}
Most of the answers are for a byte memset, but if you want to use this for float or any other struct, you should multiply the index by the size of your data, because Buffer.BlockCopy copies based on bytes.
This code will work for float values:
public static void MemSet(float[] array, float value) {
if (array == null) {
throw new ArgumentNullException("array");
}
int block = 32, index = 0;
int length = Math.Min(block, array.Length);
//Fill the initial array
while (index < length) {
array[index++] = value;
}
length = array.Length;
while (index < length) {
Buffer.BlockCopy(array, 0, array, index * sizeof(float), Math.Min(block, length-index)* sizeof(float));
index += block;
block *= 2;
}
}
The Array object has a method called Clear. I'm willing to bet that the Clear method is faster than any code you can write in C#.
You can try the following code; it will work:
byte[] test = new byte[65536];
Array.Clear(test,0,test.Length);