In C#, there are several ways to copy the elements of one array to another. To the best of my knowledge, they are a "for" loop, Array.CopyTo, Span<T>.CopyTo, T[].CopyTo, and Buffer.BlockCopy.
Since looping to copy the elements is always the slowest way, I skipped it and ran benchmarks for the other four methods. However, it seems that their speed depends on the length of the array, which really confused me.
My benchmark code is shown below. My test environment is Windows 11, .NET 6, an Intel 12700 CPU, 64-bit, using BenchmarkDotNet as the benchmark framework.
public class UnitTest1
{
    static readonly int times = 1000;
    static readonly int arrayLength = 8;

    int[] src = GetRandomArray(arrayLength);
    int[] dst = new int[arrayLength];

    public static int[] GetRandomArray(int length)
    {
        int[] array = new int[length];
        for (int i = 0; i < length; i++)
        {
            array[i] = new Random(DateTime.Now.Millisecond).Next(int.MinValue, int.MaxValue);
        }
        System.Threading.Thread.Sleep(2000);
        return array;
    }

    [Benchmark]
    public void TestArrayCopy()
    {
        for (var j = 0; j < times; j++)
        {
            src.CopyTo(dst, 0);
        }
    }

    [Benchmark]
    public void TestSingleSpanCopy()
    {
        var dstSpan = dst.AsSpan();
        for (var j = 0; j < times; j++)
        {
            src.CopyTo(dstSpan);
        }
    }

    [Benchmark]
    public void TestDoubleSpanCopy()
    {
        var srcSpan = src.AsSpan();
        var dstSpan = dst.AsSpan();
        for (var j = 0; j < times; j++)
        {
            srcSpan.CopyTo(dstSpan);
        }
    }

    [Benchmark]
    public void BufferCopy()
    {
        for (var j = 0; j < times; j++)
        {
            System.Buffer.BlockCopy(src, 0, dst, 0, sizeof(int) * src.Length);
        }
    }
}
Here are the test results.
times = 1000, arrayLength = 8
| Method | Mean | Error | StdDev |
|------------------- |---------:|----------:|----------:|
| TestArrayCopy | 3.061 us | 0.0370 us | 0.0543 us |
| TestSingleSpanCopy | 1.297 us | 0.0041 us | 0.0038 us |
| TestDoubleSpanCopy | 1.113 us | 0.0190 us | 0.0203 us |
| BufferCopy | 7.162 us | 0.1250 us | 0.1044 us |
times = 1000, arrayLength = 16
| Method | Mean | Error | StdDev |
|------------------- |---------:|----------:|----------:|
| TestArrayCopy | 3.426 us | 0.0677 us | 0.0806 us |
| TestSingleSpanCopy | 1.609 us | 0.0264 us | 0.0206 us |
| TestDoubleSpanCopy | 1.478 us | 0.0228 us | 0.0202 us |
| BufferCopy | 7.465 us | 0.0866 us | 0.0723 us |
times = 1000, arrayLength = 32
| Method | Mean | Error | StdDev | Median |
|------------------- |----------:|----------:|----------:|----------:|
| TestArrayCopy | 4.063 us | 0.0417 us | 0.0390 us | 4.076 us |
| TestSingleSpanCopy | 4.115 us | 0.3552 us | 1.0473 us | 4.334 us |
| TestDoubleSpanCopy | 3.576 us | 0.3391 us | 0.9998 us | 3.601 us |
| BufferCopy | 12.922 us | 0.7339 us | 2.1640 us | 13.814 us |
times = 1000, arrayLength = 128
| Method | Mean | Error | StdDev | Median |
|------------------- |----------:|----------:|----------:|----------:|
| TestArrayCopy | 7.865 us | 0.0919 us | 0.0815 us | 7.842 us |
| TestSingleSpanCopy | 7.036 us | 0.2694 us | 0.7900 us | 7.256 us |
| TestDoubleSpanCopy | 7.351 us | 0.0914 us | 0.0855 us | 7.382 us |
| BufferCopy | 10.955 us | 0.1157 us | 0.1083 us | 10.947 us |
times = 1000, arrayLength = 1024
| Method | Mean | Error | StdDev | Median |
|------------------- |---------:|---------:|----------:|---------:|
| TestArrayCopy | 45.16 us | 3.619 us | 10.670 us | 48.95 us |
| TestSingleSpanCopy | 36.85 us | 3.608 us | 10.638 us | 34.77 us |
| TestDoubleSpanCopy | 38.88 us | 3.378 us | 9.960 us | 39.91 us |
| BufferCopy | 48.83 us | 4.352 us | 12.833 us | 53.65 us |
times = 1000, arrayLength = 16384
| Method | Mean | Error | StdDev |
|------------------- |---------:|----------:|----------:|
| TestArrayCopy | 1.417 ms | 0.1096 ms | 0.3233 ms |
| TestSingleSpanCopy | 1.487 ms | 0.1012 ms | 0.2983 ms |
| TestDoubleSpanCopy | 1.438 ms | 0.1115 ms | 0.3287 ms |
| BufferCopy | 1.423 ms | 0.1147 ms | 0.3383 ms |
times = 100, arrayLength = 65536
| Method | Mean | Error | StdDev |
|------------------- |---------:|---------:|----------:|
| TestArrayCopy | 630.9 us | 47.01 us | 138.61 us |
| TestSingleSpanCopy | 629.5 us | 46.83 us | 138.08 us |
| TestDoubleSpanCopy | 655.4 us | 47.23 us | 139.25 us |
| BufferCopy | 419.0 us | 3.31 us | 2.93 us |
When the arrayLength is 8 or 16, Span<T>.CopyTo() is the fastest. When the arrayLength is 32 or 128, the first three methods are almost the same and all faster than Buffer.BlockCopy. When the arrayLength is 1024, however, Span<T>.CopyTo and T[].CopyTo are again faster than the other two ways. When the arrayLength is 16384, all four methods are almost the same. But when the arrayLength is 65536, Buffer.BlockCopy is the fastest! Besides, Span<T>.CopyTo is here a bit slower than the first two methods.
I really can't understand these results. At first I guessed it was the CPU cache that mattered. However, the L1 cache of my CPU is 960 KB, which is larger than the memory footprint of the array in any test case. Maybe the different implementations cause this?
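(For reference, the largest case above, 65536 ints, is 65536 × 4 bytes = 256 KB per array, so about 512 KB for source plus destination.)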
I would appreciate it if you could explain this to me or discuss it with me. I will also keep thinking about it and update the question if I get an idea.
As @Ralf mentioned, the source and destination arrays are the same in every iteration, which could affect the results. I modified my code and ran the test again, as shown below. To avoid the extra time cost, I just allocate a new array each time instead of randomizing it manually.
using System.Buffers;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        var summary = BenchmarkRunner.Run(typeof(Program).Assembly);
        Console.WriteLine(summary);
    }
}

public class UnitTest1
{
    static readonly int times = 1000;
    static readonly int arrayLength = 8;

    public static int[] GetRandomArray(int length)
    {
        int[] array = new int[length];
        //for (int i = 0; i < length; i++)
        //{
        //    array[i] = new Random(DateTime.Now.Millisecond).Next(int.MinValue, int.MaxValue);
        //}
        return array;
    }

    [Benchmark]
    public void ArrayCopy()
    {
        for (var j = 0; j < times; j++)
        {
            int[] src = GetRandomArray(arrayLength);
            int[] dst = new int[arrayLength];
            src.CopyTo(dst, 0);
        }
    }

    [Benchmark]
    public void SingleSpanCopy()
    {
        for (var j = 0; j < times; j++)
        {
            int[] src = GetRandomArray(arrayLength);
            int[] dst = new int[arrayLength];
            src.CopyTo(dst.AsSpan());
        }
    }

    [Benchmark]
    public void DoubleSpanCopy()
    {
        for (var j = 0; j < times; j++)
        {
            int[] src = GetRandomArray(arrayLength);
            int[] dst = new int[arrayLength];
            src.AsSpan().CopyTo(dst.AsSpan());
        }
    }

    [Benchmark]
    public void BufferCopy()
    {
        for (var j = 0; j < times; j++)
        {
            int[] src = GetRandomArray(arrayLength);
            int[] dst = new int[arrayLength];
            System.Buffer.BlockCopy(src, 0, dst, 0, sizeof(int) * src.Length);
        }
    }
}
times = 1000, arrayLength = 8
| Method | Mean | Error | StdDev | Median |
|--------------- |----------:|----------:|----------:|----------:|
| ArrayCopy | 8.843 us | 0.1762 us | 0.3040 us | 8.843 us |
| SingleSpanCopy | 6.864 us | 0.1366 us | 0.1519 us | 6.880 us |
| DoubleSpanCopy | 10.543 us | 0.9496 us | 2.7999 us | 10.689 us |
| BufferCopy | 21.270 us | 1.3477 us | 3.9738 us | 22.630 us |
times = 1000, arrayLength = 16
| Method | Mean | Error | StdDev | Median |
|--------------- |---------:|---------:|---------:|---------:|
| ArrayCopy | 16.94 us | 0.952 us | 2.808 us | 17.27 us |
| SingleSpanCopy | 12.54 us | 1.054 us | 3.109 us | 12.32 us |
| DoubleSpanCopy | 13.23 us | 0.930 us | 2.741 us | 13.25 us |
| BufferCopy | 23.43 us | 1.218 us | 3.591 us | 24.99 us |
times = 1000, arrayLength = 32
| Method | Mean | Error | StdDev | Median |
|--------------- |---------:|---------:|---------:|---------:|
| ArrayCopy | 24.35 us | 1.774 us | 5.229 us | 26.23 us |
| SingleSpanCopy | 20.64 us | 1.726 us | 5.089 us | 21.09 us |
| DoubleSpanCopy | 19.97 us | 1.915 us | 5.646 us | 20.08 us |
| BufferCopy | 26.24 us | 2.547 us | 7.511 us | 24.59 us |
times = 1000, arrayLength = 128
| Method | Mean | Error | StdDev |
|--------------- |---------:|---------:|---------:|
| ArrayCopy | 39.11 us | 0.529 us | 0.495 us |
| SingleSpanCopy | 39.14 us | 0.782 us | 1.070 us |
| DoubleSpanCopy | 40.24 us | 0.798 us | 1.398 us |
| BufferCopy | 42.20 us | 0.480 us | 0.426 us |
times = 1000, arrayLength = 1024
| Method | Mean | Error | StdDev |
|--------------- |---------:|--------:|--------:|
| ArrayCopy | 254.6 us | 4.92 us | 8.87 us |
| SingleSpanCopy | 241.4 us | 2.98 us | 2.78 us |
| DoubleSpanCopy | 243.7 us | 4.75 us | 4.66 us |
| BufferCopy | 243.0 us | 2.85 us | 2.66 us |
times = 1000, arrayLength = 16384
| Method | Mean | Error | StdDev |
|--------------- |---------:|----------:|----------:|
| ArrayCopy | 4.325 ms | 0.0268 ms | 0.0250 ms |
| SingleSpanCopy | 4.300 ms | 0.0120 ms | 0.0112 ms |
| DoubleSpanCopy | 4.307 ms | 0.0348 ms | 0.0325 ms |
| BufferCopy | 4.293 ms | 0.0238 ms | 0.0222 ms |
times = 100, arrayLength = 65536
| Method | Mean | Error | StdDev | Median |
|--------------- |---------:|---------:|---------:|---------:|
| ArrayCopy | 153.6 ms | 1.46 ms | 1.29 ms | 153.1 ms |
| SingleSpanCopy | 213.4 ms | 8.78 ms | 25.87 ms | 218.2 ms |
| DoubleSpanCopy | 221.2 ms | 9.51 ms | 28.04 ms | 229.7 ms |
| BufferCopy | 203.1 ms | 10.92 ms | 32.18 ms | 205.6 ms |
@Ralf is right, there are indeed some differences. The most significant one is that when arrayLength = 65536, Array.Copy instead of Buffer.BlockCopy is now the fastest.
But still, the results are very confusing.
Are you sure you can repeat the same benchmark and get the same results? Perhaps it was just a one-time occurrence, maybe caused by thermal throttling or another app taking processor time. When I try it on my machine, the values I get are more in line with what you'd expect.
It says Windows 10 for some reason; I'm using 11 too.
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
11th Gen Intel Core i9-11980HK 2.60GHz, 1 CPU, 16 logical and 8 physical cores
.NET SDK=6.0.201
[Host] : .NET 6.0.3 (6.0.322.12309), X64 RyuJIT
DefaultJob : .NET 6.0.3 (6.0.322.12309), X64 RyuJIT
| Method | Mean | Error | StdDev |
|------------------- |---------:|--------:|--------:|
| TestArrayCopy | 466.6 us | 0.69 us | 0.61 us |
| TestSingleSpanCopy | 444.7 us | 1.07 us | 1.00 us |
| TestDoubleSpanCopy | 443.8 us | 0.62 us | 0.52 us |
| BufferCopy | 447.1 us | 7.28 us | 6.08 us |
Just before posting this, I realized: your CPU, the 12700, has performance and efficiency cores. What if it ran most of the benchmark on efficiency cores and just happened to run the BufferCopy part on performance cores? Can you try disabling your efficiency cores in the BIOS?
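If a BIOS change is inconvenient, one rough way to test this hypothesis is to pin the process to specific logical processors before running. This is only a sketch on my part: which affinity bits map to P-cores depends on the machine (check Task Manager or CPU-Z), and since BenchmarkDotNet runs benchmarks in a child process, it relies on the child inheriting the parent's affinity mask (which Windows should do by default).
using System;
using System.Diagnostics;

// Allow only logical processors 0-7; on an i7-12700 these are typically
// hyper-threads of the first four P-cores (verify on your own machine).
Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0xFF;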
I have several methods where I need to convert data (arrays) from one type to another.
Sometimes I can work with generics, sometimes not, because the type is loaded from a configuration after the object is created. Since I need many different type conversions, I created an ArrayConvert class that handles this for me. My data is extremely large and I have to do this very often, so of course I try to avoid the conversion as much as possible, but in my situation that is not always possible.
The ArrayConvert class looks like following:
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

public static class ArrayConvert
{
    public delegate void Converter(Array a1, Array a2);

    static readonly Dictionary<(Type srcType, Type tgtType), Converter> converters =
        new Dictionary<(Type srcType, Type tgtType), Converter>();

    static ArrayConvert()
    {
        converters.Add((typeof(float), typeof(int)), FloatToInt);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static void FloatToInt(Array a1, Array a2)
    {
        int N = a1.Length;
        var srcArray = (float[])a1;
        var tgtArray = (int[])a2;
        for (int i = 0; i < N; i++)
            tgtArray[i] = (int)srcArray[i];
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static void FloatToInt(float[] a1, int[] a2)
    {
        int N = a1.Length;
        var srcArray = a1;
        var tgtArray = a2;
        for (int i = 0; i < N; i++)
            tgtArray[i] = (int)srcArray[i];
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static void Convert(Type srcType, Array srcArray, Type tgtType, Array tgtArray)
    {
        if (converters.TryGetValue((srcType, tgtType), out var converter))
        {
            converter(srcArray, tgtArray);
            return;
        }
        throw new NotImplementedException();
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static void ConvertGenericFast<TSrcType, TTgtType>(TSrcType[] srcArray, TTgtType[] tgtArray)
    {
        if (converters.TryGetValue((typeof(TSrcType), typeof(TTgtType)), out var converter))
        {
            converter(srcArray, tgtArray);
            return;
        }
        throw new NotImplementedException();
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining | MethodImplOptions.AggressiveOptimization)]
    public static void ConvertGenericSlow<TSrcType, TTgtType>(TSrcType[] srcArray, TTgtType[] tgtArray)
    {
        Convert(typeof(TSrcType), srcArray, typeof(TTgtType), tgtArray);
    }
}
When I now write a benchmark around these conversion methods, I see pretty weird results.
Here's the benchmark class:
public class Tester
{
    public readonly static int N = 100000;
    public readonly static float[] SrcData;
    public readonly static int[] TgtData;

    static Tester()
    {
        SrcData = new float[N];
        TgtData = new int[N];
        for (int i = 0; i < N; i++)
        {
            SrcData[i] = i;
            TgtData[i] = i;
        }
    }

    [Benchmark]
    public void ConvertWithType() => ArrayConvert.Convert(typeof(float), SrcData, typeof(int), TgtData);

    [Benchmark]
    public void ConvertWithGenericFast() => ArrayConvert.ConvertGenericFast<float, int>(SrcData, TgtData);

    [Benchmark]
    public void ConvertWithGenericSlow() => ArrayConvert.ConvertGenericSlow<float, int>(SrcData, TgtData);

    [Benchmark]
    public void ConvertWithKnownDirectWithType() => ArrayConvert.FloatToInt(SrcData, TgtData);

    [Benchmark]
    public void ConvertWithKnownDirectWithArray() => ArrayConvert.FloatToInt((Array)SrcData, (Array)TgtData);
}
Benchmarks:
Runtime = .NET Core 3.1.2 (CoreCLR 4.700.20.6602, CoreFX 4.700.20.6702), X64 RyuJIT; GC = Concurrent Workstation
Array Size: 10.000
| Method | Mean | Error | StdDev |
|-------------------------------- |---------:|----------:|----------:|
| ConvertWithType | 8.518 us | 0.0080 us | 0.0071 us |
| ConvertWithGenericFast | 8.684 us | 0.1163 us | 0.1088 us |
| ConvertWithGenericSlow | 8.482 us | 0.0028 us | 0.0023 us |
| ConvertWithKnownDirectWithType | 8.334 us | 0.0027 us | 0.0024 us |
| ConvertWithKnownDirectWithArray | 8.562 us | 0.0893 us | 0.0746 us |
Array Size: 100.000
| Method | Mean | Error | StdDev | Median |
|-------------------------------- |---------:|---------:|---------:|---------:|
| ConvertWithType | 68.40 us | 0.772 us | 1.372 us | 67.77 us |
| ConvertWithGenericFast | 68.03 us | 0.627 us | 0.770 us | 67.83 us |
| ConvertWithGenericSlow | 69.11 us | 0.944 us | 0.883 us | 68.90 us |
| ConvertWithKnownDirectWithType | 67.45 us | 0.689 us | 0.611 us | 67.34 us |
| ConvertWithKnownDirectWithArray | 67.24 us | 0.425 us | 0.398 us | 67.20 us |
Array Size: 1.000.000
| Method | Mean | Error | StdDev | Median |
|-------------------------------- |---------:|---------:|---------:|---------:|
| ConvertWithType | 693.9 us | 8.06 us | 7.54 us | 693.4 us |
| ConvertWithGenericFast | 800.2 us | 26.99 us | 79.58 us | 865.8 us |
| ConvertWithGenericSlow | 872.7 us | 6.27 us | 5.86 us | 870.1 us |
| ConvertWithKnownDirectWithType | 743.3 us | 24.66 us | 71.94 us | 704.1 us |
| ConvertWithKnownDirectWithArray | 870.9 us | 7.82 us | 7.32 us | 866.5 us |
Array Size: 10.000.000
| Method | Mean | Error | StdDev |
|-------------------------------- |----------:|----------:|----------:|
| ConvertWithType | 8.739 ms | 0.1120 ms | 0.0993 ms |
| ConvertWithGenericFast | 10.052 ms | 0.0918 ms | 0.0859 ms |
| ConvertWithGenericSlow | 10.015 ms | 0.0563 ms | 0.0439 ms |
| ConvertWithKnownDirectWithType | 10.070 ms | 0.0058 ms | 0.0045 ms |
| ConvertWithKnownDirectWithArray | 10.096 ms | 0.0996 ms | 0.0931 ms |
Why is ConvertWithType always the fastest, except in the first two runs with 10.000 and 100.000 elements?
Why is ConvertWithKnownDirectWithType not the fastest?
Why is there almost no difference between ConvertWithGenericFast and ConvertWithGenericSlow?
Why is there such a high standard deviation and error with 1.000.000 elements?
Furthermore, with Span and Memory I no longer have a common "typeless" interface as I had with Array, since there is no common "base". So would there be a way to use Span together with Array as in the example above, or is there an even better and faster solution?
This question is more theoretical than practical, but still.
I've been looking for a way to improve the following code from the standpoint of string memory allocation:
/* Output for n = 3:
 *
 * '  #'
 * ' ##'
 * '###'
 *
 */
public static string[] staircase(int n) {
    string[] result = new string[n];
    for (var i = 0; i < result.Length; i++) {
        var spaces = string.Empty.PadLeft(n - i - 1, ' ');
        var sharpes = string.Empty.PadRight(i + 1, '#');
        result[i] = spaces + sharpes;
    }
    return result;
}
PadHelper is the method that is eventually called under the hood, twice per iteration.
So, correct me if I'm wrong, but it seems like memory is allocated at least 3 times per iteration (the spaces string, the sharpes string, and their concatenation).
Any code improvements will be highly appreciated.
How about:
result[i] = new string('#', i + 1).PadLeft(n);
?
Note that this still allocates two strings internally, but I honestly don't see that as a problem. The garbage collector will take care of it for you.
StringBuilder is always an answer when it comes to string allocations; I'm sure you know that, so apparently you want something else. Well, since your strings are all the same length, you can declare a single char[] buffer, update it on each iteration (only one array element changes per iteration), and then use the string(char[]) constructor:
public static string[] staircase(int n)
{
    char[] buf = new char[n];
    string[] result = new string[n];
    for (var i = 0; i < n - 1; i++)
    {
        buf[i] = ' ';
    }
    for (var i = 0; i < n; i++)
    {
        buf[n - i - 1] = '#';
        result[i] = new string(buf);
    }
    return result;
}
You can save on both allocations and speed by starting with a string that contains all the Spaces and all the Sharpes you're ever going to need, and then taking substrings from that, as follows:
public string[] Staircase2()
{
    string allChars = new string(' ', n - 1) + new string('#', n); // n-1 spaces + n sharpes
    string[] result = new string[n];
    for (var i = 0; i < result.Length; i++)
        result[i] = allChars.Substring(i, n);
    return result;
}
I used BenchmarkDotNet to compare Staircase1 (your original approach) with Staircase2 (my approach above) for n = 2 up to n = 8; see the results below.
They show that Staircase2 is always faster (see the Mean column) and allocates fewer bytes from n = 3 onward.
| Method | n | Mean | Error | StdDev | Allocated |
|----------- |-- |------------:|-----------:|-----------:|----------:|
| Staircase1 | 2 | 229.36 ns | 4.3320 ns | 4.0522 ns | 92 B |
| Staircase2 | 2 | 92.00 ns | 0.7200 ns | 0.6735 ns | 116 B |
| Staircase1 | 3 | 375.06 ns | 3.3043 ns | 3.0908 ns | 156 B |
| Staircase2 | 3 | 114.12 ns | 2.8933 ns | 3.2159 ns | 148 B |
| Staircase1 | 4 | 507.32 ns | 3.8995 ns | 3.2562 ns | 236 B |
| Staircase2 | 4 | 142.78 ns | 1.4575 ns | 1.3634 ns | 196 B |
| Staircase1 | 5 | 650.03 ns | 15.1515 ns | 25.7284 ns | 312 B |
| Staircase2 | 5 | 169.25 ns | 1.9076 ns | 1.6911 ns | 232 B |
| Staircase1 | 6 | 785.75 ns | 16.9353 ns | 15.8413 ns | 412 B |
| Staircase2 | 6 | 195.91 ns | 2.9852 ns | 2.4928 ns | 292 B |
| Staircase1 | 7 | 919.15 ns | 11.4145 ns | 10.6771 ns | 500 B |
| Staircase2 | 7 | 237.55 ns | 4.6380 ns | 4.9627 ns | 332 B |
| Staircase1 | 8 | 1,075.66 ns | 26.7013 ns | 40.7756 ns | 620 B |
| Staircase2 | 8 | 255.50 ns | 2.6894 ns | 2.3841 ns | 404 B |
This doesn't mean that Staircase2 is the absolute best possible, but it certainly shows there is a way that is better than the original.
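For completeness, here is a sketch of one more variant (my own addition, not part of the benchmark above) using string.Create, available since .NET Core 2.1, which fills each row directly in the new string's buffer without the intermediate allChars string:
public static string[] Staircase3(int n)
{
    string[] result = new string[n];
    for (int i = 0; i < n; i++)
    {
        result[i] = string.Create(n, i, (span, row) =>
        {
            // The first n - row - 1 characters are spaces, the rest are '#'.
            span.Slice(0, span.Length - row - 1).Fill(' ');
            span.Slice(span.Length - row - 1).Fill('#');
        });
    }
    return result;
}
It still allocates one string per row plus the result array, so the only saving over Staircase2 is the allChars helper string; whether that is measurably faster would need benchmarking.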
You can project your desired results using the LINQ Select method. For example, something like this:
public static string[] staircase(int n) {
    return Enumerable.Range(1, n).Select(i => new string('#', i).PadLeft(n)).ToArray();
}
Alternate approach using an int array:
public static string[] staircase(int n) {
    return (new int[n]).Select((x, i) => new string('#', i + 1).PadLeft(n)).ToArray();
}
HTH
A partial digit subsequence of an array A is a subsequence of integers in which every two consecutive integers have at least one digit in common.
I keep a dictionary with the characters 0 to 9 and the count of consecutive occurrences of each. Then I loop through all the values in the integer array, take each digit, and check my dictionary for the count of that digit.
public static void Main(string[] args)
{
    Dictionary<char, int> dct = new Dictionary<char, int>
    {
        { '0', 0 },
        { '1', 0 },
        { '2', 0 },
        { '3', 0 },
        { '4', 0 },
        { '5', 0 },
        { '6', 0 },
        { '7', 0 },
        { '8', 0 },
        { '9', 0 }
    };
    string[] arr = Console.ReadLine().Split(' ');
    for (int i = 0; i < arr.Length; i++)
    {
        string str = string.Join("", arr[i].Distinct());
        for (int j = 0; j < str.Length; j++)
        {
            int count = dct[str[j]];
            if (count == i || (i > 0 && arr[i - 1].Contains(str[j])))
            {
                count++;
                dct[str[j]] = count;
            }
            else dct[str[j]] = 1;
        }
    }
    string s = dct.Aggregate((l, r) => l.Value > r.Value ? l : r).Key.ToString();
    Console.WriteLine(s);
}
For example, for the input
12 23 231
the answer would be 2 because it occurs 3 times.
The array can contain 10^18 elements.
Can someone help me with an optimal solution? This algorithm is not fit to handle that much data in an array.
All the posted answers are wrong because all of them have ignored the most important part of the question:
The array can contain 10^18 elements.
This array is being read from disk? Supposing each element is two bytes, that's two million terabyte drives just for the array. I don't think that's going to fit into memory. You'll have to go with a streaming solution.
How long will the streaming solution take? If you can process a billion array items a second, which seems within reason, your program will take 32 years to execute.
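For scale: 10^18 items at 10^9 items per second is 10^9 seconds, which is roughly 31.7 years.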
Your requirements are not realistic, and so the problem cannot feasibly be solved with the resources of a single person. You'll need the resources of a large corporation or nation to attack this problem, and you'll need a lot of funding for hardware acquisition and management.
The linear algorithm is trivial; it's the size of the data that is the entire problem. Start building your data center somewhere with cheap power and friendly tax laws, because you are going to be importing a lot of disks.
You shouldn't need to go through the array elements one by one; you can simply merge the entire string array into one string and go through the characters:
12 23 231 -> "1223231", then loop through and count.
It should be fast enough, O(n), and requires only 10 entries in your dictionary. How fast exactly do you need it to be?
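A sketch of that idea (my illustration only, reusing arr from the question; note that, as described, it counts overall digit frequency rather than consecutive runs):
string raw = string.Concat(arr);          // e.g. {"12", "23", "231"} -> "1223231"
var counts = new int[10];
foreach (char ch in raw)
    counts[ch - '0']++;
int best = 0;
for (int d = 1; d < 10; d++)
    if (counts[d] > counts[best]) best = d;
Console.WriteLine((char)('0' + best));    // prints 2 for the example input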
I didn't use arrays. I'm not sure if you must use arrays; if not, check this solution.
static void Main(string[] args)
{
    List<char> numbers = new List<char>();
    Dictionary<char, int> dct = new Dictionary<char, int>()
    {
        { '0', 0 },
        { '1', 0 },
        { '2', 0 },
        { '3', 0 },
        { '4', 0 },
        { '5', 0 },
        { '6', 0 },
        { '7', 0 },
        { '8', 0 },
        { '9', 0 },
    };
    string option;
    do
    {
        Console.Write("Enter number: ");
        string number = Console.ReadLine();
        numbers.AddRange(number);
        Console.Write("Enter 'X' if u want to finish work: ");
        option = Console.ReadLine();
    } while (option.ToLower() != "x");

    foreach (char c in numbers)
    {
        if (dct.ContainsKey(c))
        {
            dct[c]++;
        }
    }

    foreach (var keyValue in dct)
    {
        Console.WriteLine($"Char {keyValue.Key} was used {keyValue.Value} times");
    }
    Console.ReadKey(true);
}
Certainly not an efficient solution but this will work.
public class Program
{
    public static int arrLength = 0;
    public static string[] arr;
    public static Dictionary<char, int> dct = new Dictionary<char, int>();

    public static void Main(string[] args)
    {
        dct.Add('0', 0);
        dct.Add('1', 0);
        dct.Add('2', 0);
        dct.Add('3', 0);
        dct.Add('4', 0);
        dct.Add('5', 0);
        dct.Add('6', 0);
        dct.Add('7', 0);
        dct.Add('8', 0);
        dct.Add('9', 0);
        arr = Console.ReadLine().Split(' ');
        arrLength = arr.Length;
        foreach (string str in arr)
        {
            char[] ch = str.ToCharArray();
            ch = ch.Distinct<char>().ToArray();
            foreach (char c in ch)
            {
                Exists(c, Array.IndexOf(arr, str));
            }
        }
        int val = dct.Values.Max();
        foreach (KeyValuePair<char, int> v in dct.Where(x => x.Value == val))
        {
            Console.WriteLine("Common digit {0} with frequency {1} ", v.Key, v.Value + 1);
        }
        Console.ReadLine();
    }

    public static bool Exists(char c, int pos)
    {
        int count = 0;
        if (pos == arrLength - 1)
            return false;
        for (int i = pos; i < arrLength - 1; i++)
        {
            if (arr[i + 1].ToCharArray().Contains(c))
            {
                count++;
                if (count > dct[c])
                    dct[c] = count;
            }
            else
                break;
        }
        return true;
    }
}
As somebody else pointed out, if you have 10^18 numbers then this is going to be a lot more data than you can fit into memory. So you need a streaming solution. You also don't want to spend a lot of time on memory allocation or converting strings to character arrays, calling functions to de-duplicate digits, etc. Ideally, you need a solution that looks at each character once.
The memory requirement of the program below is very small: just two small arrays of long integers.
The algorithm I developed maintains two arrays of counts per digit. One is the maximum number of consecutive occurrences of a digit, and the other is the most recent count of consecutive occurrences.
The code itself reads the file character-by-character, accumulating digits until it encounters a character that is not a digit, then it updates the current counts array for each digit encountered. If the current count exceeds the maximum count, then the max count for that digit is updated. If a digit doesn't appear in a number, then its current count is reset to 0.
The occurrence of individual digits in a number is maintained by setting bits in the digits variable. That way, a number like 1221 will not count the digits twice.
using (var input = File.OpenText("filename"))
{
    var maxCounts = new long[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    var latestCounts = new long[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    char prevChar = ' ';
    int digits = 0;
    while (!input.EndOfStream)
    {
        var c = (char)input.Read();
        // If the character is a digit, set the corresponding bit
        if (char.IsDigit(c))
        {
            digits |= (1 << (c - '0'));
            prevChar = c;
            continue;
        }
        // test here to prevent resetting counts when there are multiple non-digit
        // characters between numbers.
        if (!char.IsDigit(prevChar))
        {
            continue;
        }
        prevChar = c;
        // digits has a bit set for every digit
        // that occurred in the number.
        // Update the counts arrays.
        // For each of the first 10 bits, update the corresponding count.
        for (int i = 0; i < 10; ++i)
        {
            if ((digits & 1) == 1)
            {
                ++latestCounts[i];
                if (latestCounts[i] > maxCounts[i])
                {
                    maxCounts[i] = latestCounts[i];
                }
            }
            else
            {
                latestCounts[i] = 0;
            }
            // Shift the next bit into place.
            digits >>= 1;
        }
        digits = 0;
    }
}
This code minimizes the processing required, but the program's running time will be dominated by the speed at which you can read the file. There are optimizations you can make to increase the input speed, but ultimately you're limited to your system's data transfer speed.
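One such optimization would be reading the file in large chunks instead of one character at a time and then scanning the buffer. This is only a sketch of the idea, not something that was benchmarked; the per-character logic stays the same as above.
using (var input = File.OpenText("filename"))
{
    var buffer = new char[64 * 1024];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < read; i++)
        {
            char c = buffer[i];
            // ...same digit/bit-mask handling as in the loop above...
        }
    }
}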
I'll give you three versions.
Basically, I just loaded up a list of random ints as strings (the scale is how many) and ran it on both Core and Framework to see. Each test was run 10 times and averaged.
Mine1
Uses Distinct
public unsafe class Mine : Benchmark<List<string>, char>
{
    protected override char InternalRun()
    {
        var result = new int[10];
        var asd = Input.Select(x => new string(x.Distinct().ToArray())).ToList();
        var raw = string.Join("", asd);
        fixed (char* pInput = raw)
        {
            var len = pInput + raw.Length;
            for (var p = pInput; p < len; p++)
            {
                result[*p - 48]++;
            }
        }
        return (char)(result.ToList().IndexOf(result.Max()) + '0');
    }
}
Mine2
Basically this uses a second array to work things out
public unsafe class Mine2 : Benchmark<List<string>, char>
{
    protected override char InternalRun()
    {
        var result = new int[10];
        var current = new int[10];
        var raw = string.Join(" ", Input);
        fixed (char* pInput = raw)
        {
            var len = pInput + raw.Length;
            for (var p = pInput; p < len; p++)
                if (*p != ' ')
                    current[*p - 48] = 1;
                else
                    for (var i = 0; i < 10; i++)
                    {
                        result[i] += current[i];
                        current[i] = 0;
                    }
        }
        // flush the final number (the joined string has no trailing separator)
        for (var i = 0; i < 10; i++)
            result[i] += current[i];
        return (char)(result.ToList().IndexOf(result.Max()) + '0');
    }
}
Mine3
No Joins or string allocation
public unsafe class Mine3 : Benchmark<List<string>, char>
{
    protected override char InternalRun()
    {
        var result = new int[10];
        foreach (var item in Input)
            fixed (char* pInput = item)
            {
                var current = new int[10];
                var len = pInput + item.Length;
                for (var p = pInput; p < len; p++)
                    current[*p - 48] = 1;
                for (var i = 0; i < 10; i++)
                {
                    result[i] += current[i];
                    current[i] = 0;
                }
            }
        return (char)(result.ToList().IndexOf(result.Max()) + '0');
    }
}
# Results: .NET Framework 4.7.1
Mode : Release
Test Framework : .Net Framework 4.7.1
Benchmarks runs : 10 times (averaged)
Scale : 10,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
--------------------------------------------------------------------------
Mine3 | 0.533 ms | 0.431 ms | 0.10 | 1,751,372 | Base | 0.00 %
Mine2 | 0.994 ms | 0.773 ms | 0.38 | 3,100,896 | Yes | -86.63 %
Mine | 8.122 ms | 7.012 ms | 1.29 | 27,480,083 | Yes | -1,424.78 %
Original | 20.729 ms | 16.044 ms | 4.56 | 65,316,558 | No | -3,791.47 %
Scale : 100,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
------------------------------------------------------------------------------
Mine3 | 4.766 ms | 4.475 ms | 0.34 | 16,140,716 | Base | 0.00 %
Mine2 | 8.424 ms | 7.890 ms | 0.33 | 28,577,416 | Yes | -76.76 %
Mine | 96.650 ms | 93.066 ms | 3.35 | 327,615,266 | Yes | -1,927.94 %
Original | 163.342 ms | 154.070 ms | 12.61 | 550,038,934 | No | -3,327.32 %
Scale : 1,000,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
------------------------------------------------------------------------------------
Mine3 | 49.827 ms | 48.600 ms | 1.19 | 169,162,589 | Base | 0.00 %
Mine2 | 106.334 ms | 97.641 ms | 6.53 | 359,773,719 | Yes | -113.41 %
Mine | 1,051.600 ms | 1,000.731 ms | 35.75 | 3,511,515,189 | Yes | -2,010.51 %
Original | 1,640.385 ms | 1,588.431 ms | 65.50 | 5,538,915,638 | No | -3,192.18 %
# Results: .NET Core 2.0
Mode : Release
Test Framework : Core 2.0
Benchmarks runs : 10 times (averaged)
Scale : 10,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
--------------------------------------------------------------------------
Mine3 | 0.476 ms | 0.353 ms | 0.12 | 1,545,995 | Base | 0.00 %
Mine2 | 0.554 ms | 0.551 ms | 0.00 | 1,883,570 | Yes | -16.23 %
Mine | 7.585 ms | 5.875 ms | 1.27 | 25,580,339 | Yes | -1,492.28 %
Original | 21.380 ms | 16.263 ms | 6.46 | 65,741,909 | No | -4,388.14 %
Scale : 100,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
------------------------------------------------------------------------------
Mine3 | 3.946 ms | 3.685 ms | 0.25 | 13,409,181 | Base | 0.00 %
Mine2 | 6.203 ms | 5.796 ms | 0.33 | 21,042,340 | Yes | -57.21 %
Mine | 72.975 ms | 68.599 ms | 4.13 | 246,471,960 | Yes | -1,749.41 %
Original | 161.400 ms | 145.664 ms | 19.37 | 544,703,761 | Yes | -3,990.40 %
Scale : 1,000,000
Name | Average | Fastest | StDv | Cycles | Pass | Gain
------------------------------------------------------------------------------------
Mine3 | 41.036 ms | 38.928 ms | 2.47 | 139,045,736 | Base | 0.00 %
Mine2 | 71.283 ms | 68.777 ms | 2.49 | 241,525,269 | Yes | -73.71 %
Mine | 749.250 ms | 720.809 ms | 27.79 | 2,479,171,863 | Yes | -1,725.84 %
Original | 1,517.240 ms | 1,477.321 ms | 48.94 | 5,142,422,700 | No | -3,597.35 %
# Summary
String allocation, Join, and Distinct are terrible for performance. If you need more performance, you could probably break the list up into workloads and crunch them in parallel.
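As a rough sketch of that parallel idea (my illustration, not benchmarked): give each worker its own int[10] of counts and merge them at the end. The helper below assumes the same List<string> input as the benchmarks above and uses System.Threading and System.Threading.Tasks.
static char MostCommonDigit(List<string> input)
{
    var totals = new int[10];
    Parallel.ForEach(
        input,
        () => new int[10],                      // per-thread counts
        (item, _, local) =>
        {
            var seen = new bool[10];
            foreach (var ch in item)
                seen[ch - '0'] = true;          // count each digit once per number
            for (int d = 0; d < 10; d++)
                if (seen[d]) local[d]++;
            return local;
        },
        local =>
        {
            for (int d = 0; d < 10; d++)
                Interlocked.Add(ref totals[d], local[d]);
        });
    int best = 0;
    for (int d = 1; d < 10; d++)
        if (totals[d] > totals[best]) best = d;
    return (char)('0' + best);
}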