I am required to run this for loop in under a SECOND, without making any changes to the loop itself or to the for loop as a whole.
var someLongDataString = "";
const int sLen = 30, loops = 50000;
var source = new string('X', sLen);
Console.WriteLine();
for (var i = 0; i < loops; i++)
{
someLongDataString += source;
}
EDIT: the restriction doesn't apply to someLongDataString += source;. As I indicated in the comment, this means the statement within the loop may be changed.
This code:
class Program
{
static void Main(string[] args)
{
var someLongDataString = "";
var builder = new StringBuilder(); // <---- use a StringBuilder instead
const int sLen = 30, loops = 50000;
var source = new string('X', sLen);
var stopwatch = Stopwatch.StartNew();
Console.WriteLine();
for (var i = 0; i < loops; i++)
{
//someLongDataString += source;
builder.Append(source);
}
someLongDataString = builder.ToString();
Console.WriteLine("Elapsed: {0}", stopwatch.Elapsed);
Console.WriteLine("Press any key");
Console.ReadLine();
}
}
Why so fast?
Performing mass concatenation onto a string is sadly terribly inefficient, as a new string object is created for every concatenation; remember, string is immutable. StringBuilder isn't, and in addition it uses a pre-allocated buffer to accommodate potential additions. So instead of allocating on each and every call, it only regrows its buffer once the buffer becomes full.
MSDN has this to say on StringBuilder:
For routines that perform extensive string manipulation (such as apps that modify a string numerous times in a loop), modifying a string repeatedly can exact a significant performance penalty. The alternative is to use StringBuilder, which is a mutable string class. Mutability means that once an instance of the class has been created, it can be modified by appending, removing, replacing, or inserting characters. A StringBuilder object maintains a buffer to accommodate expansions to the string. New data is appended to the buffer if room is available; otherwise, a new, larger buffer is allocated, data from the original buffer is copied to the new buffer, and the new data is then appended to the new buffer. MSDN
Pay particular note to this too:
Although the StringBuilder class generally offers better performance than the String class, you should not automatically replace String with StringBuilder whenever you want to manipulate strings. Performance depends on the size of the string, the amount of memory to be allocated for the new string, the system on which your app is executing, and the type of operation. You should be prepared to test your app to determine whether StringBuilder actually offers a significant performance improvement. MSDN
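To see the buffer behaviour described above, here is a minimal sketch (the exact capacity values are an implementation detail and may differ between runtimes):
var demo = new StringBuilder();
for (var i = 0; i < 5; i++)
{
    demo.Append(new string('X', 30));
    // Capacity only jumps when the buffer fills, not on every Append
    Console.WriteLine("Length: {0}, Capacity: {1}", demo.Length, demo.Capacity);
}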
One way is to use a StringBuilder. But this requires a change within the for loop:
var starttime = DateTime.Now;
var someLongDataString = new StringBuilder(100000);
const int sLen = 30, loops = 50000;
var source = new string('X', sLen);
Console.WriteLine();
for (var i = 0; i < loops; i++)
{
someLongDataString.Append(source);
}
Console.WriteLine((DateTime.Now - starttime).TotalMilliseconds); // Milliseconds gives only the milliseconds component; TotalMilliseconds gives the full duration
Console.ReadLine();
Related
I want to write 10^5 lines of 10^5 randomly generated numbers to a file, so that each line
contains 10^5 numbers. Therefore I wanted to know what the best approach would be
for doing this quickly. I thought of creating 10^5 threads that are launched concurrently and
each of them writes one line, so that the file is filled in the time it takes to write
only 1 line.
public static void GenerateNumbers(string path)
{
using(StreamWriter sw = new StreamWriter(path))
{
for (int i = 0; i < 100000; i++)
{
for (int j = 0; j < 100000; j++)
{
Random rnd = new Random();
int number = rnd.Next(1, 101);
sw.Write(number + " ");
}
sw.Write('\n');
}
}
}
Currently I am doing it like this, is there a faster way?
Now that there's a code snippet, some optimization can be applied.
static void Main(string[] args)
{
var sw = new Stopwatch();
const int pow = 5;
sw.Start();
GenerateNumbers("test.txt", pow);
sw.Stop();
Console.WriteLine($"Wrote 10^{pow} lines of 10^{pow} numbers in {sw.Elapsed}");
}
public static void GenerateNumbers(string path, int pow)
{
var rnd = new Random();
using var sw = new StreamWriter(path, false);
var max = Math.Pow(10, pow);
var sb = new StringBuilder();
for (long i = 0; i < max; i++)
{
for (long j = 0; j < max; j++)
{
sb.Append(rnd.Next(1, 101));
sb.Append(' ');
}
sw.WriteLine(sb.ToString());
sb.Clear();
if (i % 100 == 0)
Console.WriteLine((i / max).ToString("P"));
}
}
The above code does IO writes at a fairly decent pace (remember the limit is the IO speed, not CPU / number generation). Also note that I'm running the code from inside a VM, so I'm likely not getting the best IO results.
As mentioned by Neil Moss in the comments, you don't need to instantiate the Random class on each run.
I'm generating a single line to write in-memory using a StringBuilder, then I write this to the disk.
Since this does take a bit of time, I've added a progress tracker (this adds a minuscule amount of overhead).
A file of 10^4 lines of 10^4 numbers is already 285 MB in size and was generated in 4.6767592 seconds.
The 10^5 case like the above yields a 25.5 GB file and takes 5:54.2580683 to generate.
I haven't tried this, but I'm wondering if you couldn't save time by writing the data to a ZIP file, assuming you're more interested in just getting the data onto the disk, and not the format itself. A compressed TXT file of numbers should be a fair-bit smaller and as such should be much faster to write.
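As a rough sketch of that idea, assuming GZip is an acceptable container (untested here; the output path is made up):
using System.IO;
using System.IO.Compression;

using var fileStream = File.Create("test.txt.gz"); // hypothetical output path
using var zipStream = new GZipStream(fileStream, CompressionLevel.Fastest);
using var writer = new StreamWriter(zipStream);
// writer can now be used exactly like the StreamWriter in GenerateNumbers above;
// fewer bytes hit the disk, at the cost of some CPU for compression.
writer.WriteLine("1 2 3");
Whether this wins overall depends on whether the disk or the CPU is the bottleneck on your machine.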
I'm working on a UI project which has to work with huge datasets (35 new values every second) which will then be displayed in a graph. The user shall be able to change the view from 10 minutes up to a month view. To achieve this I wrote myself a helper function which truncates a lot of data down to a 600-element array, which is then displayed on a LiveView chart.
I found out that at the beginning the software works very well and fast, but the longer the software runs (e.g. for a month) and the more the memory usage rises (to ca. 600 MB), the slower the function gets (up to 8x).
So I made some tests to find the source of this. Quite surprised, I found out that there is something like a magic number where the function gets 2x slower, just by changing 71494 loops to 71495: from 19 ms to 39 ms runtime.
I'm really confused. Even when you comment out the second for loop (where the arrays are getting truncated) it is a lot slower.
Maybe this has something to do with the Garbage Collector? Or does C# compress memory automatically?
Using Visual Studio 2017 with newest updates.
The Code
using System;
using System.Collections.Generic;
using System.Diagnostics;
namespace TempoaryTest
{
class ProductNameStream
{
public struct FileValue
{
public DateTime Time;
public ushort[] Value;
public ushort[] Avg1;
public ushort[] Avg2;
public ushort[] DAvg;
public ushort AlarmDelta;
public ushort AlarmAverage;
public ushort AlarmSum;
}
}
public static class Program
{
private const int MAX_MEASURE_MODEL = 600;
private const int TEST = 71494;
//private const int TEST = 71495;//this one doubles the consuming time!
public static void Main(string[] bleg)
{
List<ProductNameStream.FileValue> fileValues = new List<ProductNameStream.FileValue>();
ProductNameStream.FileValue fil = new ProductNameStream.FileValue();
DateTime testTime = DateTime.Now;
Console.WriteLine("TEST: {0} {1:X}", TEST, TEST);
//Creating example List
for (int n = 0; n < TEST; n++)
{
fil = new ProductNameStream.FileValue
{
Time = testTime = testTime.AddSeconds(1),
Value = new ushort[8],
Avg1 = new ushort[8],
Avg2 = new ushort[8],
DAvg = new ushort[8]
};
for (int i = 0; i < 8; i++)
{
fil.Value[i] = (ushort)(n + i);
fil.Avg1[i] = (ushort)(TEST - n - i);
fil.Avg2[i] = (ushort)(n / (i + 1));
fil.DAvg[i] = (ushort)(n * (i + 1));
}
fil.AlarmDelta = (ushort)DateTime.Now.Ticks;
fil.AlarmAverage = (ushort)(fil.AlarmDelta / 2);
fil.AlarmSum = (ushort)(n);
fileValues.Add(fil);
}
var sw = Stopwatch.StartNew();
/* May look like the same as MAX_MEASURE_MODEL but since we use int
* as counter we must be aware of the int round down.*/
int cnt = (fileValues.Count / (fileValues.Count / MAX_MEASURE_MODEL)) + 1;
ProductNameStream.FileValue[] newFileValues = new ProductNameStream.FileValue[cnt];
ProductNameStream.FileValue[] fileValuesArray = fileValues.ToArray();
//Truncate the big list to a 600 Array
for (int n = 0; n < fileValues.Count; n++)
{
if ((n % (fileValues.Count / MAX_MEASURE_MODEL)) == 0)
{
cnt = n / (fileValues.Count / MAX_MEASURE_MODEL);
newFileValues[cnt] = fileValuesArray[n];
newFileValues[cnt].Value = new ushort[8];
newFileValues[cnt].Avg1 = new ushort[8];
newFileValues[cnt].Avg2 = new ushort[8];
newFileValues[cnt].DAvg = new ushort[8];
}
else
{
for (int i = 0; i < 8; i++)
{
if (newFileValues[cnt].Value[i] < fileValuesArray[n].Value[i])
newFileValues[cnt].Value[i] = fileValuesArray[n].Value[i];
if (newFileValues[cnt].Avg1[i] < fileValuesArray[n].Avg1[i])
newFileValues[cnt].Avg1[i] = fileValuesArray[n].Avg1[i];
if (newFileValues[cnt].Avg2[i] < fileValuesArray[n].Avg2[i])
newFileValues[cnt].Avg2[i] = fileValuesArray[n].Avg2[i];
if (newFileValues[cnt].DAvg[i] < fileValuesArray[n].DAvg[i])
newFileValues[cnt].DAvg[i] = fileValuesArray[n].DAvg[i];
}
if (newFileValues[cnt].AlarmSum < fileValuesArray[n].AlarmSum)
newFileValues[cnt].AlarmSum = fileValuesArray[n].AlarmSum;
if (newFileValues[cnt].AlarmDelta < fileValuesArray[n].AlarmDelta)
newFileValues[cnt].AlarmDelta = fileValuesArray[n].AlarmDelta;
if (newFileValues[cnt].AlarmAverage < fileValuesArray[n].AlarmAverage)
newFileValues[cnt].AlarmAverage = fileValuesArray[n].AlarmAverage;
}
}
Console.WriteLine(sw.ElapsedMilliseconds);
}
}
}
This is most likely being caused by the garbage collector, as you suggested.
I can offer two pieces of evidence to indicate that this is so:
- If you put GC.Collect() just before you start the stopwatch, the difference in times goes away.
- If you instead change the initialisation of the list to new List<ProductNameStream.FileValue>(TEST);, the difference in time also goes away.
(Initialising the list's capacity to the final size in its constructor prevents multiple reallocations of its internal array while items are being added to it, which will reduce pressure on the garbage collector.)
Therefore, I assert based on this evidence that it is indeed the garbage collector that is impacting your timings.
Incidentally, the threshold value was slightly different for me, and for at least one other person too (which isn't surprising if the timing differences are being caused by the garbage collector).
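For reference, the two changes described above look roughly like this against the question's Main (a sketch, not a drop-in patch):
// Evidence 1: collect the garbage left over from building the list, before timing
GC.Collect();
var sw = Stopwatch.StartNew();

// Evidence 2: pre-size the list so Add() never reallocates its backing array
List<ProductNameStream.FileValue> fileValues = new List<ProductNameStream.FileValue>(TEST);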
Your data structure is inefficient and is forcing you to do a lot of allocations during computation. Have a look at using a fixed-size array inside a struct (a fixed buffer). Also preallocate the list; don't rely on the list to constantly adjust its size, which also creates garbage.
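A sketch of that fixed-size-buffer idea, reusing the question's field names (this requires compiling with the unsafe option, and fixed buffers only allow primitive element types):
public unsafe struct FileValue
{
    public DateTime Time;
    public fixed ushort Value[8]; // stored inline in the struct: no separate array allocations
    public fixed ushort Avg1[8];
    public fixed ushort Avg2[8];
    public fixed ushort DAvg[8];
    public ushort AlarmDelta;
    public ushort AlarmAverage;
    public ushort AlarmSum;
}
With this layout, creating a FileValue allocates nothing on the heap, so the example list no longer produces four small arrays per element for the GC to track.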
I spent the last few days creating a parallel version of a code (college work), but I came to a dead end (at least for me): the parallel version is nearly twice as slow as the sequential one, and I have no clue why. Here is the code:
Variables.GetMatrix();
int ThreadNumber = Environment.ProcessorCount/2;
int SS = Variables.PopSize / ThreadNumber;
//GeneticAlgorithm GA = new GeneticAlgorithm();
Stopwatch stopwatch = new Stopwatch(), st = new Stopwatch(), st1 = new Stopwatch();
List<Thread> ThreadList = new List<Thread>();
//List<Task> TaskList = new List<Task>();
GeneticAlgorithm[] SubPop = new GeneticAlgorithm[ThreadNumber];
Thread t;
//Task t;
ThreadVariables Instance = new ThreadVariables();
stopwatch.Start();
st.Start();
PopSettings();
InitialPopulation();
st.Stop();
//Lots of attributions...
int SPos = 0, EPos = SS;
for (int i = 0; i < ThreadNumber; i++)
{
int temp = i, StartPos = SPos, EndPos = EPos;
t = new Thread(() =>
{
SubPop[temp] = new GeneticAlgorithm(Population, NumSeq, SeqSize, MaxOffset, PopFit, Child, Instance, StartPos, EndPos);
SubPop[temp].RunGA();
SubPop[temp].ShowPopulation();
});
t.Start();
ThreadList.Add(t);
SPos = EPos;
EPos += SS;
}
foreach (Thread a in ThreadList)
a.Join();
double BestFit = SubPop[0].BestSol;
string BestAlign = SubPop[0].TV.Debug;
for (int i = 1; i < ThreadNumber; i++)
{
if (BestFit < SubPop[i].BestSol)
{
BestFit = SubPop[i].BestSol;
BestAlign = SubPop[i].TV.Debug;
Variables.ResSave = SubPop[i].TV.ResSave;
Variables.NumSeq = SubPop[i].TV.NumSeq;
}
}
Basically the code creates an array of objects, instantiates and runs the algorithm in each position of the array, and collects the best value from the object array at the end. This type of algorithm works on a three-dimensional data array, and in the parallel version I assign each thread to process one range of the array, avoiding concurrency on data. Still, I'm getting the slow timing... Any ideas?
I'm using a Core i5, which has four logical cores (two physical plus hyper-threading), but any number of threads greater than one makes the code run slower.
What I can explain of the code I'm running in parallel is:
The second method called in the code I posted makes about 10,000 iterations, and in each iteration it calls one function. This function may or may not call others (spread across two different objects for each thread) and do lots of calculations; it depends on a bunch of factors particular to the algorithm. And all these methods for one thread work in an area of a data array that isn't accessed by the other threads.
With System.Linq there is a lot that can be made simpler:
int ThreadNumber = Environment.ProcessorCount/2;
int SS = Variables.PopSize / ThreadNumber;
int numberOfTotalIterations = ThreadNumber; // the original code ran one work item per thread
var doneAlgorithms = Enumerable.Range(0, numberOfTotalIterations)
.AsParallel() // Makes the whole thing running in parallel
.WithDegreeOfParallelism(ThreadNumber) // We don't need this line if you want the system to manage the number of parallel processings.
.Select(index=> _runAlgorithmAndReturn(index,SS))
.ToArray(); // This is obsolete if you only need the collection of doneAlgorithms to determine the best one.
// If not, keep it to prevent multiple enumerations.
// So we sort algorithms by BestSol descending and take the first one to determine the "best"
// (the original loop kept the maximum BestSol).
// OrderByDescending causes a full enumeration, hence the above-mentioned obsoletion of the ToArray() statement.
GeneticAlgorithm best = doneAlgorithms.OrderByDescending(algo => algo.BestSol).First();
BestFit = best.BestSol;
BestAlign = best.TV.Debug;
Variables.ResSave = best.TV.ResSave;
Variables.NumSeq = best.TV.NumSeq;
And declare a method to make it a bit more readable
/// <summary>
/// Runs a single algorithm and returns it
/// </summary>
private GeneticAlgorithm _runAlgorithmAndReturn(int index, int SS)
{
int startPos = index * SS;
int endPos = startPos + SS;
var algo = new GeneticAlgorithm(Population, NumSeq, SeqSize, MaxOffset, PopFit, Child, Instance, startPos, endPos);
algo.RunGA();
algo.ShowPopulation();
return algo;
}
There is a big overhead in creating threads.
Instead of creating new threads, use the ThreadPool, as shown below:
Variables.GetMatrix();
int ThreadNumber = Environment.ProcessorCount / 2;
int SS = Variables.PopSize / ThreadNumber;
//GeneticAlgorithm GA = new GeneticAlgorithm();
Stopwatch stopwatch = new Stopwatch(), st = new Stopwatch(), st1 = new Stopwatch();
List<WaitHandle> WaitList = new List<WaitHandle>();
//List<Task> TaskList = new List<Task>();
GeneticAlgorithm[] SubPop = new GeneticAlgorithm[ThreadNumber];
//Task t;
ThreadVariables Instance = new ThreadVariables();
stopwatch.Start();
st.Start();
PopSettings();
InitialPopulation();
st.Stop();
//lots of attributions...
int SPos = 0, EPos = SS;
for (int i = 0; i < ThreadNumber; i++)
{
int temp = i, StartPos = SPos, EndPos = EPos;
ManualResetEvent wg = new ManualResetEvent(false);
WaitList.Add(wg);
ThreadPool.QueueUserWorkItem((unused) =>
{
SubPop[temp] = new GeneticAlgorithm(Population, NumSeq, SeqSize, MaxOffset, PopFit, Child, Instance, StartPos, EndPos);
SubPop[temp].RunGA();
SubPop[temp].ShowPopulation();
wg.Set();
});
SPos = EPos;
EPos += SS;
}
ManualResetEvent.WaitAll(WaitList.ToArray());
double BestFit = SubPop[0].BestSol;
string BestAlign = SubPop[0].TV.Debug;
for (int i = 1; i < ThreadNumber; i++)
{
if (BestFit < SubPop[i].BestSol)
{
BestFit = SubPop[i].BestSol;
BestAlign = SubPop[i].TV.Debug;
Variables.ResSave = SubPop[i].TV.ResSave;
Variables.NumSeq = SubPop[i].TV.NumSeq;
}
}
Note that instead of using Join to wait the thread execution, I'm using WaitHandles.
You're creating the threads yourself, so there's some extreme overhead there. Parallelise like the comments suggested. Also make sure the time a single work-unit takes is long enough. A single thread/workunit should be alive for at least ~20 ms.
Pretty basic things really. I'd suggest you really read up on how multi-threading in .NET works.
I see you don't create too many threads. But the optimal thread count can't be determined just from the processor count. The built-in Parallel class has advanced algorithms to reduce the overall time.
Partitioning and threading are some pretty complex things that require a lot of knowledge to get right, so unless you REALLY know what you're doing, rely on the Parallel class to handle it for you.
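As a sketch of what that looks like with the question's variables (Parallel.For partitions and schedules the work itself):
Parallel.For(0, ThreadNumber, i =>
{
    int startPos = i * SS, endPos = startPos + SS;
    SubPop[i] = new GeneticAlgorithm(Population, NumSeq, SeqSize, MaxOffset, PopFit, Child, Instance, startPos, endPos);
    SubPop[i].RunGA();
    SubPop[i].ShowPopulation();
});
// Parallel.For blocks until all iterations have completed, so no Join/WaitAll is needed.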
How can I take 1 million substrings from a string with more than 3 million characters efficiently in C#? I have written a program which involves reading random DNA reads (substrings from random positions) of length, say, 100 from a string with 3 million characters. There are 1 million such reads. Currently I run a while loop that runs 1 million times and reads a substring of 100 characters from the 3-million-character string. This is taking a long time. What can I do to complete this faster?
Here's my code. len is the length of the original string, 3 million in this case; it may be as low as 50, which is why there's a check in the while loop.
while(i < 1000000 && len-100> 0) //len is 3000000
{
int randomPos = _random.Next()%(len - ReadLength);
readString += all.Substring(randomPos, ReadLength) + Environment.NewLine;
i++;
}
Using a StringBuilder to assemble the string will get you roughly a 600-fold speedup, as it avoids repeated object creation every time you append to the string.
Before the loop (initialising the capacity avoids re-growing the backing array in the StringBuilder):
StringBuilder sb = new StringBuilder(1000000 * ReadLength);
in loop:
sb.Append(all.Substring(randomPos, ReadLength) + Environment.NewLine);
after loop:
readString = sb.ToString();
Using a char array instead of a string to extract the values yields another 30% improvement, as you avoid the object creation incurred by calling Substring():
before loop:
char[] chars = all.ToCharArray();
in loop:
sb.Append(chars, randomPos, ReadLength);
sb.AppendLine();
Edit (final version which does not use StringBuilder and executes in 300ms):
char[] chars = all.ToCharArray();
var iterations = 1000000;
char[] results = new char[iterations * (ReadLength + 1)];
GetRandomStrings(len, iterations, ReadLength, chars, results, 0);
string s = new string(results);
private static void GetRandomStrings(int len, int iterations, int ReadLength, char[] chars, char[] result, int resultIndex)
{
Random random = new Random();
int i = 0, index = resultIndex;
while (i < iterations && len - 100 > 0) //len is 3000000
{
var i1 = len - ReadLength;
int randomPos = random.Next() % i1;
Array.Copy(chars, randomPos, result, index, ReadLength);
index += ReadLength;
result[index] = '\n'; // note: Environment.NewLine[0] would be '\r' on Windows, silently dropping the '\n'
index++;
i++;
}
}
I think better solutions will come, but .NET StringBuilder instances are faster than repeated String concatenation because StringBuilder appends into a mutable internal buffer instead of allocating a new string object each time.
You can split the data into pieces and use the .NET Task Parallel Library for multithreading and parallelism.
Edit: assign fixed values to a variable outside the loop to avoid recalculation:
int x = len - 100;
int y = len - ReadLength;
Use:
StringBuilder readString = new StringBuilder(ReadLength * numberOfSubStrings);
readString.AppendLine(all.Substring(randomPos, ReadLength));
For parallelism you should split your input into pieces, then run these operations on the pieces in separate threads, and combine the results at the end.
Important: as my previous experience showed, these operations ran faster on .NET v2.0 than on v4.0, so you could change your project's target framework version; but you can't use the Task Parallel Library with .NET v2.0, so you'd have to do multithreading the old-school way, like
Thread newThread ......
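A minimal sketch of that old-school approach (BuildChunk is a hypothetical helper that runs the substring loop for its share of the iterations and returns the result; each thread should also use its own Random instance, seeded differently):
int threadCount = 4;
int perThread = 1000000 / threadCount;
string[] pieces = new string[threadCount];
Thread[] threads = new Thread[threadCount];
for (int t = 0; t < threadCount; t++)
{
    int chunk = t; // copy for the anonymous delegate (avoids the captured-loop-variable bug)
    threads[chunk] = new Thread(delegate()
    {
        pieces[chunk] = BuildChunk(all, perThread, ReadLength);
    });
    threads[chunk].Start();
}
foreach (Thread th in threads)
    th.Join();
string readString = string.Concat(pieces);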
How long is a long time? It shouldn't be that long.
var file = new StreamReader(@"E:\Temp\temp.txt");
var s = file.ReadToEnd();
var r = new Random();
var sw = new Stopwatch();
sw.Start();
var range = Enumerable.Range(0,1000000);
var results = range.Select( i => s.Substring(r.Next(s.Length - 100),100)).ToList();
sw.Stop();
sw.ElapsedMilliseconds.Dump(); // Dump() is a LINQPad extension method
s.Length.Dump();
So on my machine the results were 807ms and the string is 4,055,442 chars.
Edit: I just noticed that you want a string as a result, so my above solution just changes to...
var results = string.Join(Environment.NewLine,range.Select( i => s.Substring(r.Next(s.Length - 100),100)).ToArray());
And adds about 100ms, so still under a second in total.
Edit: I abandoned the idea of using memcpy, and I think the result is super great.
I've broken a 3m length string into 30k strings of 100 length each in 43 milliseconds.
private static unsafe string[] Scan(string hugeString, int subStringSize)
{
var results = new string[hugeString.Length / subStringSize];
var gcHandle = GCHandle.Alloc(hugeString, GCHandleType.Pinned);
var currAddress = (char*)gcHandle.AddrOfPinnedObject();
for (var i = 0; i < results.Length; i++)
{
results[i] = new string(currAddress, 0, subStringSize);
currAddress += subStringSize;
}
gcHandle.Free(); // unpin the string so the GC can move it again
return results;
}
To use the method for the case shown in the question:
const int size = 3000000;
const int subSize = 100;
var stringBuilder = new StringBuilder(size);
var random = new Random();
for (var i = 0; i < size; i++)
{
stringBuilder.Append((char)random.Next(30, 80));
}
var hugeString = stringBuilder.ToString();
var stopwatch = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
var strings = Scan(hugeString, subSize);
}
stopwatch.Stop();
Console.WriteLine(stopwatch.ElapsedMilliseconds / 1000); // 43
Does anyone know of any speed differences between Where and FindAll on List? I know Where is part of IEnumerable and FindAll is part of List; I'm just curious what's faster.
The FindAll method of the List<T> class actually constructs a new list object, and adds results to it. The Where extension method for IEnumerable<T> will simply iterate over an existing list and yield an enumeration of the matching results without creating or adding anything (other than the enumerator itself.)
Given a small set, the two would likely perform comparably. However, given a larger set, Where should outperform FindAll, as the new List created to contain the results will have to dynamically grow to contain additional results. Memory usage of FindAll will also grow with the number of matching results (with periodic reallocation of the backing array), whereas Where should have constant, minimal memory usage (in and of itself... excluding whatever you do with the results.)
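To make the difference concrete, a trivial sketch (not from the original post):
List<int> numbers = new List<int> { 1, 2, 3, 4, 5 };
List<int> evensNow = numbers.FindAll(n => n % 2 == 0); // builds and fills a new List<int> immediately
IEnumerable<int> evensLazy = numbers.Where(n => n % 2 == 0); // only creates an iterator; nothing is filtered yet
foreach (int n in evensLazy) // the filtering happens here, one element at a time
    Console.WriteLine(n);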
FindAll is obviously slower than Where, because it needs to create a new list.
Anyway, I think you really should consider Jon Hanna's comment: you'll probably need to do some operations on your results, and a list would be more useful than an IEnumerable in many cases.
I wrote a small test; just paste it into a Console App project. It measures the time/ticks of function execution and of operations on the results collection (to get the performance of 'real' usage, and to be sure that the compiler won't optimize away unused data, etc.; I'm new to C# and don't know how it works yet, sorry).
Notice: every measured function except WhereIEnumerable() creates a new List of elements. I might be doing something wrong, but clearly iterating an IEnumerable takes much more time than iterating a list.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace Tests
{
public class Dummy
{
public int Val;
public Dummy(int val)
{
Val = val;
}
}
public class WhereOrFindAll
{
const int ElCount = 20000000;
const int FilterVal =1000;
const int MaxVal = 2000;
const bool CheckSum = true; // Checks sum of elements in list of resutls
static List<Dummy> list = new List<Dummy>();
public delegate void FuncToTest();
public static long TestTicks(FuncToTest function, string msg)
{
Stopwatch watch = new Stopwatch();
watch.Start();
function();
watch.Stop();
Console.Write("\r\n"+msg + "\t ticks: " + (watch.ElapsedTicks));
return watch.ElapsedTicks;
}
static void Check(List<Dummy> list)
{
if (!CheckSum) return;
Stopwatch watch = new Stopwatch();
watch.Start();
long res=0;
int count = list.Count;
for (int i = 0; i < count; i++) res += list[i].Val;
for (int i = 0; i < count; i++) res -= (long)(list[i].Val * 0.3);
watch.Stop();
Console.Write("\r\n\nCheck sum: " + res.ToString() + "\t iteration ticks: " + watch.ElapsedTicks);
}
static void Check(IEnumerable<Dummy> ieNumerable)
{
if (!CheckSum) return;
Stopwatch watch = new Stopwatch();
watch.Start();
IEnumerator<Dummy> ieNumerator = ieNumerable.GetEnumerator();
long res = 0;
while (ieNumerator.MoveNext()) res += ieNumerator.Current.Val;
ieNumerator=ieNumerable.GetEnumerator();
while (ieNumerator.MoveNext()) res -= (long)(ieNumerator.Current.Val * 0.3);
watch.Stop();
Console.Write("\r\n\nCheck sum: " + res.ToString() + "\t iteration ticks :" + watch.ElapsedTicks);
}
static void Generate()
{
if (list.Count > 0)
return;
var rand = new Random();
for (int i = 0; i < ElCount; i++)
list.Add(new Dummy(rand.Next(MaxVal)));
}
static void For()
{
List<Dummy> resList = new List<Dummy>();
int count = list.Count;
for (int i = 0; i < count; i++)
{
if (list[i].Val < FilterVal)
resList.Add(list[i]);
}
Check(resList);
}
static void Foreach()
{
List<Dummy> resList = new List<Dummy>();
int count = list.Count;
foreach (Dummy dummy in list)
{
if (dummy.Val < FilterVal)
resList.Add(dummy);
}
Check(resList);
}
static void WhereToList()
{
List<Dummy> resList = list.Where(x => x.Val < FilterVal).ToList<Dummy>();
Check(resList);
}
static void WhereIEnumerable()
{
Stopwatch watch = new Stopwatch();
IEnumerable<Dummy> iEnumerable = list.Where(x => x.Val < FilterVal);
Check(iEnumerable);
}
static void FindAll()
{
List<Dummy> resList = list.FindAll(x => x.Val < FilterVal);
Check(resList);
}
public static void Run()
{
Generate();
long[] ticks = { 0, 0, 0, 0, 0 };
for (int i = 0; i < 10; i++)
{
ticks[0] += TestTicks(For, "For \t\t");
ticks[1] += TestTicks(Foreach, "Foreach \t");
ticks[2] += TestTicks(WhereToList, "Where to list \t");
ticks[3] += TestTicks(WhereIEnumerable, "Where Ienum \t");
ticks[4] += TestTicks(FindAll, "FindAll \t");
Console.Write("\r\n---------------");
}
for (int i = 0; i < 5; i++)
Console.Write("\r\n"+ticks[i].ToString());
}
}
class Program
{
static void Main(string[] args)
{
WhereOrFindAll.Run();
Console.Read();
}
}
}
Results (ticks), CheckSum enabled (some operations on results), mode: Release without debugging (Ctrl+F5):
- 16,222,276 (for ->list)
- 17,151,121 (foreach -> list)
- 4,741,494 (where ->list)
- 27,122,285 (where ->ienum)
- 18,821,571 (findall ->list)
CheckSum disabled (not using returned list at all):
- 10,885,004 (for ->list)
- 11,221,888 (foreach ->list)
- 18,688,433 (where ->list)
- 1,075 (where ->ienum)
- 13,720,243 (findall ->list)
Your results can be slightly different, to get real results you need more iterations.
UPDATE (from comment): Looking through that code I agree; .Where should have, at worst, equal performance, but almost always better.
Original answer:
.FindAll() should be faster: it takes advantage of already knowing the List's size and loops through the internal array with a simple for loop. .Where() has to fire up an enumerator (a sealed framework class called WhereIterator in this case) and do the same job in a less specific way.
Keep in mind, though, that .Where() is enumerable, not actively creating a List in memory and filling it. It's more like a stream, so the memory use on something very large can make a significant difference. Also, you can start using the results in a parallel fashion much faster using the .Where() approach in .NET 4.0.
Where is much, much faster than FindAll. No matter how big the list is, Where takes exactly the same amount of time.
Of course Where just creates a query. It doesn't actually do anything, unlike FindAll which does create a list.
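A quick sketch that demonstrates this deferred execution (timings are illustrative):
var big = Enumerable.Range(0, 10000000).ToList();
var sw = Stopwatch.StartNew();
var query = big.Where(x => x > 5000000); // returns almost instantly: it only builds the query
Console.WriteLine(sw.ElapsedTicks); // tiny, no matter how big the list is
sw.Restart();
var copied = big.FindAll(x => x > 5000000); // filters and copies right now
Console.WriteLine(sw.ElapsedMilliseconds); // scales with the size of the list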
The answer from jrista makes sense. However, the new list adds the same objects, so it only grows an array of references to the existing objects, which should not be that slow.
As long as .NET 3.5 / LINQ extensions are available, Where stays better anyway.
FindAll makes much more sense when you're limited to .NET 2.0.