I tried to make a queue that would be as fast as possible, and my plan was to strip out many features, since I know everything from the beginning. That means I will never try to add more elements than the array was allocated for.
Even though I only implemented what I need, I lose to the built-in queue once I get past roughly 2,000 read and write operations.
I got curious: what is it that makes the built-in queue faster than my own, which is stripped to the bare bones?
As you can see, the queue is based on a circular array, so I never have to move any elements. I also just overwrite the data instead of creating a new node, to save some time. (Even though in my tests it didn't make any big difference.)
class Queue<T> {
    private class Node {
        public T data;

        public Node(T data) {
            this.data = data;
        }

        public Node() {
        }
    }

    Node[] nodes;
    int current;
    int emptySpot;

    public Queue(int size) {
        nodes = new Node[size];
        for (int i = 0; i < size; i++) {
            nodes[i] = new Node();
        }
        this.current = 0;
        this.emptySpot = 0;
    }

    public void Enqueue(T value) {
        nodes[emptySpot].data = value;
        emptySpot++;
        if (emptySpot >= nodes.Length) {
            emptySpot = 0;
        }
    }

    public T Dequeue() {
        int ret = current;
        current++;
        if (current >= nodes.Length) {
            current = 0;
        }
        return nodes[ret].data;
    }
}
My testing code uses the built-in Stopwatch, and everything is written out in ticks.
static void Main(string[] args) {
    MinimalCollections.Queue<char> queue = new MinimalCollections.Queue<char>(5500);
    Queue<char> CQueue = new Queue<char>(5500);
    Stopwatch sw = new Stopwatch();

    sw.Start();
    for (int y = 0; y < 4; y++) {
        for (int i = 0; i < 5500; i++) {
            queue.Enqueue('f');
        }
        for (int i = 0; i < 5500; i++) {
            queue.Dequeue();
        }
    }
    sw.Stop();
    Console.WriteLine("My queue method ticks is = {0}", sw.ElapsedTicks);

    sw.Reset();
    sw.Start();
    for (int y = 0; y < 4; y++) {
        for (int i = 0; i < 5500; i++) {
            CQueue.Enqueue('f');
        }
        for (int i = 0; i < 5500; i++) {
            CQueue.Dequeue();
        }
    }
    sw.Stop();
    Console.WriteLine("C# queue method ticks is = {0}", sw.ElapsedTicks);
    Console.ReadKey();
}
The output is:
My queue method ticks is = 2416
C# queue method ticks is = 2320
One obvious overhead that I can see is the introduction of Node objects. This is especially noticeable when you're using this as a queue of value types such as char, because the built-in implementation doesn't wrap the values in a reference type.
Here is how I would change your implementation:
class Queue<T>
{
    T[] nodes;
    int current;
    int emptySpot;

    public Queue(int size)
    {
        nodes = new T[size];
        this.current = 0;
        this.emptySpot = 0;
    }

    public void Enqueue(T value)
    {
        nodes[emptySpot] = value;
        emptySpot++;
        if (emptySpot >= nodes.Length)
        {
            emptySpot = 0;
        }
    }

    public T Dequeue()
    {
        int ret = current;
        current++;
        if (current >= nodes.Length)
        {
            current = 0;
        }
        return nodes[ret];
    }
}
This seems to fare much better (Release build, x64, win 8.1):
My queue method ticks is = 582
C# queue method ticks is = 2166
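One detail worth knowing if you reuse this with reference-type elements: Dequeue leaves the old reference in the array, keeping that object alive until the slot is overwritten. A minimal variant that clears the slot (my addition, not part of the original answer; it costs one extra write per dequeue):

public T Dequeue()
{
    int ret = current;
    current++;
    if (current >= nodes.Length)
    {
        current = 0;
    }
    T value = nodes[ret];
    nodes[ret] = default(T); // drop the stale reference so the GC can reclaim it
    return value;
}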
While optimizing a site, I tried to benchmark the code with BenchmarkDotNet, and I was surprised to find that some benchmarked code used 40,000 times more memory.
After much benchmarking, I found that the memory allocation was caused by a foreach over a SortedList<int, int>.
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

namespace NetCollectionsBenchmarks
{
    [MemoryDiagnoser]
    public class CollectionsBenchmarks
    {
        private Dictionary<int, int> DictionaryData = new();
        private SortedList<int, int> SortedListData = new();
        private Dictionary<int, int> DictionaryCheck = new();
        private SortedList<int, int> SortedListCheck = new();

        [GlobalSetup]
        public void Setup()
        {
            for (int x = 0; x < 15; x++)
                this.DictionaryData.Add(x, x);
            this.SortedListData = new SortedList<int, int>(this.DictionaryData);
            this.DictionaryCheck = new Dictionary<int, int>(this.DictionaryData);
            this.SortedListCheck = new SortedList<int, int>(this.DictionaryData);
        }

        [Benchmark(Baseline = true)]
        public long ForLoopDictionaryBenchmark()
        {
            var count = 0L;
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                for (int i = 0; i < 15; i++)
                {
                    if (this.DictionaryCheck.TryGetValue(x, out var value) || value < x)
                        res += value;
                    count++;
                }
            }
            return res;
        }

        [Benchmark]
        public long ForLoopSortedListBenchmark()
        {
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                for (int i = 0; i < 15; i++)
                {
                    if (this.SortedListCheck.TryGetValue(x, out var value) || value < x)
                        res += value;
                }
            }
            return res;
        }

        [Benchmark]
        public long ForeachDictionaryBenchmark()
        {
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                foreach (var needle in this.DictionaryData)
                {
                    if (this.DictionaryCheck.TryGetValue(needle.Key, out var value) || value < needle.Value)
                        res += value;
                }
            }
            return res;
        }

        [Benchmark]
        public long ForeachSortedListBenchmark()
        {
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                foreach (var needle in this.SortedListData)
                {
                    if (this.SortedListCheck.TryGetValue(needle.Key, out var value) || value < needle.Value)
                        res += value;
                }
            }
            return res;
        }

        [Benchmark]
        public long ForeachNoTryGetValueDictionaryBenchmark()
        {
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                foreach (var needle in this.DictionaryData)
                {
                }
            }
            return res;
        }

        [Benchmark]
        public long ForeachNoTryGetValueSortedListBenchmark()
        {
            var res = 0L;
            for (int x = 0; x < 1_000_000; x++)
            {
                foreach (var needle in this.SortedListData)
                {
                }
            }
            return res;
        }
    }
}
The benchmark methods that foreach over the SortedList use 40,000 times more memory than the other methods, even when there is no TryGetValue() in the loop.
Why is SortedList so memory-expensive when looping over its enumerator?
The benchmarks have been run on .NET 6.0 and .NET 7.0, with the same result.
Since no one seemed to know the answer to this, I continued the investigation to try to find the reason.
As Hans Passant pointed out, the Enumerator in SortedList<> in .NET Framework 4.8 was a class. But I had already looked at that: the Enumerator in .NET 6 has been changed to a struct, and it has been a struct since at least 2016 according to git. Yet it is still allocated on the heap.
Since the SortedList<> Enumerator has been a struct since 2016, there is no way that change is missing from .NET 6 (or .NET 7). But to be sure, I created my own copies of Dictionary<> and SortedList<> from the git repository. Still the same result.
When I was about to change the code in the classes to find what caused the difference, I found the diverging code. It was in GetEnumerator() in the two classes.
GetEnumerator() in SortedList<>:
public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
GetEnumerator() in Dictionary<>:
public Enumerator GetEnumerator()
The return type IEnumerator<KeyValuePair<TKey, TValue>> causes the enumerator to be allocated on the heap because of interface boxing. Changing the return type to Enumerator removed all the extra allocation reported by BenchmarkDotNet.
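A minimal sketch of the boxing effect outside the BCL (these types are illustrative stand-ins, not the real SortedList internals):

using System.Collections;
using System.Collections.Generic;

struct ValueEnumerator : IEnumerator<int>
{
    public int Current => 0;
    object IEnumerator.Current => Current;
    public bool MoveNext() => false;
    public void Reset() { }
    public void Dispose() { }
}

class Holder
{
    // Returned as the concrete struct: foreach uses it directly, no heap allocation.
    public ValueEnumerator GetEnumerator() => new ValueEnumerator();

    // Returned through the interface: the struct is boxed, one allocation per call.
    public IEnumerator<int> GetEnumeratorBoxed() => new ValueEnumerator();
}

The foreach statement binds to the public GetEnumerator() it finds on the type, so Dictionary<,> gets the unboxed struct path while an interface-typed return always boxes.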
As for Hans Passant's second note, about it being cheap gen 0 memory: that may be so, but the benchmarking I have done shows that the current implementation of GetEnumerator() takes twice as long as one whose return type is Enumerator.
And the benchmarking I've done is quite close to the production code I'm currently running.
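Until that changes, a workaround worth considering (my suggestion, not part of the investigation above) is index-based iteration, which never calls GetEnumerator():

// SortedList<TKey,TValue> exposes IList-style Keys/Values views;
// indexing into them allocates nothing per element.
static long SumValues(SortedList<int, int> sorted)
{
    long res = 0;
    IList<int> keys = sorted.Keys;
    IList<int> values = sorted.Values;
    for (int i = 0; i < sorted.Count; i++)
    {
        res += keys[i] + values[i];
    }
    return res;
}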
I am a complete beginner in programming, trying to implement selection sort. Everything seems to be OK, except for one caveat: only elements up to index 24 are filled in the new array. I can't understand what the problem is.
int[] Fillin(int[] mass)
{
    Random r = new Random();
    for (int i = 0; i < mass.Length; i++)
    {
        mass[i] = r.Next(1, 101);
    }
    return mass;
}

int SearchSmall(int[] mass)
{
    int smallest = mass[0];
    int small_index = 0;
    for (int i = 1; i < mass.Length; i++)
    {
        if (mass[i] < smallest)
        {
            smallest = mass[i];
            small_index = i;
        }
    }
    return small_index;
}

int[] Remove(int[] massiv, int remind)
{
    List<int> tmp = new List<int>(massiv);
    tmp.RemoveAt(remind);
    massiv = tmp.ToArray();
    return massiv;
}

public int[] SortMass(int[] mass)
{
    mass = Fillin(mass);
    Print(mass);
    Console.WriteLine("________________________________");
    int[] newmass = new int[mass.Length];
    int small;
    for (int i = 0; i < mass.Length; i++)
    {
        small = SearchSmall(mass);
        newmass[i] = mass[small];
        mass = Remove(mass, small);
    }
    return newmass;
}
I think your main issue is that when you remove an element in the Remove function, the main loop for (int i = 0; i < mass.Length; i++) will not check all elements of the initial array, because mass.Length shrinks with every removal. A simple (and ugly) way to fix that would be not to remove the elements but to assign a very high value:
public static int[] Remove(int[] massiv, int remind)
{
    massiv[remind] = 999999;
    return massiv;
}
Or, as Legacy suggested, simply replace mass.Length with newmass.Length in the main loop, as sketched below.
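A sketch of that second fix; only the loop changes relative to the question's SortMass:

public int[] SortMass(int[] mass)
{
    mass = Fillin(mass);
    Print(mass);
    Console.WriteLine("________________________________");
    int[] newmass = new int[mass.Length];
    // newmass.Length never changes, so the loop fills every output slot
    // even though mass shrinks on each Remove().
    for (int i = 0; i < newmass.Length; i++)
    {
        int small = SearchSmall(mass);
        newmass[i] = mass[small];
        mass = Remove(mass, small);
    }
    return newmass;
}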
As some others have mentioned, this is not the best way to sort an array, but it is an interesting exercise.
I have the following object set up:
public class DrawingInstance
{
    public string DrawingNum;
    public string Rev;
    public string Title;
    public int LevelNum;
    public string RefDesc;
    public string DateRelease;
    public string DrawingType;
    public DrawingInstance ParentMember;
    public int PageInstance;
    public List<DrawingInstance> ChildMembers = new List<DrawingInstance>();
}
After gathering all of the data, I am currently accessing each child member one level at a time, like so:
for (int i = 0; i < drawingInstance.ChildMembers.Count; i++)
{
    for (int j = 0; j < drawingInstance.ChildMembers[i].ChildMembers.Count; j++)
    {
        ....
        ....
    }
}
The number of levels in the file being processed can be different each time.
Is there a way, through recursion, to loop through and traverse an arbitrary number of levels? I need to process them one level at a time: all of the i's are processed, then all of the j's for each i, and so on. Currently I have 10 blocks of code for handling up to 10 levels, but I feel like there has to be a better way to go about this.
EDIT
Thanks for the quick responses.
Here is a more detailed look, straight from my code, that gives a little more insight into how I am currently processing the objects.
//Level 0 Pages
int _pageNum = PageNum;
int startIdx = 0;
int pageCount = 0;

pageCount = GetVisioPageCount(_treeArray.ChildMembers.Count);
for (int i = 0; i < pageCount; i++)
{
    VisioSheetOutline tempSheet = new VisioSheetOutline();
    tempSheet = GetSingleSheet(_treeArray, startIdx, _pageNum, (i + 1));
    for (int cMember = 0; cMember < tempSheet.ChildPairs.Length; cMember++)
    {
        ParentDictionary.Add(tempSheet.ChildPairs[cMember].SingleInstance, tempSheet.SheetName);
    }
    SheetList.Add(tempSheet);
    _pageNum++;
    startIdx += 15;
}

//Level 1 Pages
for (int i = 0; i < _treeArray.ChildMembers.Count; i++)
{
    pageCount = 0;
    pageCount = GetVisioPageCount(_treeArray.ChildMembers[i].ChildMembers.Count);
    startIdx = 0;
    for (int j = 0; j < pageCount; j++)
    {
        VisioSheetOutline tempSheet = new VisioSheetOutline();
        tempSheet = GetSingleSheet(_treeArray.ChildMembers[i], startIdx, _pageNum, (i + 1));
        for (int cMember = 0; cMember < tempSheet.ChildPairs.Length; cMember++)
        {
            ParentDictionary.Add(tempSheet.ChildPairs[cMember].SingleInstance, tempSheet.SheetName);
        }
        SheetList.Add(tempSheet);
        _pageNum++;
        startIdx += 15;
    }
}

//Level 2 Pages
for (int i = 0; i < _treeArray.ChildMembers.Count; i++)
{
    for (int j = 0; j < _treeArray.ChildMembers[i].ChildMembers.Count; j++)
    {
        pageCount = 0;
        pageCount = GetVisioPageCount(_treeArray.ChildMembers[i].ChildMembers[j].ChildMembers.Count);
        startIdx = 0;
        for (int k = 0; k < pageCount; k++)
        {
            VisioSheetOutline tempSheet = new VisioSheetOutline();
            tempSheet = GetSingleSheet(_treeArray.ChildMembers[i].ChildMembers[j], startIdx, _pageNum, (i + 1));
            for (int cMember = 0; cMember < tempSheet.ChildPairs.Length; cMember++)
            {
                ParentDictionary.Add(tempSheet.ChildPairs[cMember].SingleInstance, tempSheet.SheetName);
            }
            SheetList.Add(tempSheet);
            _pageNum++;
            startIdx += 15;
        }
    }
}
I am currently looking into a few of the suggestions that were made to see which one fits my particular need.
Yes, as you suggested, you can easily deal with this using recursion; you just need a recursive function:
public void ProcessDrawingData(DrawingInstance instance)
{
    // Do processing on instance here...
    foreach (DrawingInstance d in instance.ChildMembers)
        ProcessDrawingData(d);
}
Call it with the parent instance. This won't do a true breadth-first traversal, though: the first child will process its own first child's children (going all the way down first) and then slowly unwind.
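Since the question asks for one level at a time, a queue-based breadth-first version may be closer to what's needed; a minimal sketch against the DrawingInstance class above:

public void ProcessBreadthFirst(DrawingInstance root)
{
    var queue = new Queue<DrawingInstance>();
    queue.Enqueue(root);
    while (queue.Count > 0)
    {
        DrawingInstance current = queue.Dequeue();
        // Do processing on current here...
        foreach (DrawingInstance child in current.ChildMembers)
            queue.Enqueue(child); // a node's children run only after every node on its own level
    }
}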
Microsoft's Ix-Main package contains a number of LINQ extensions, including the Expand method which will flatten a hierarchical layout:
IEnumerable<DrawingInstance> rootList = ...;
IEnumerable<DrawingInstance> flattened = rootList.Expand(x => x.ChildMembers);
You can use the foreach statement. It will iterate through the collection you need, assuming the object implements IEnumerable.
In your case, try this:
foreach (DrawingInstance di in drawingInstance.ChildMembers)
{
    // Do something with di.
}
EDIT
If you need to do this repeatedly, you should have some sort of a recursive method that takes a DrawingInstance, like this:
public void RecursiveMethod(DrawingInstance d)
{
    foreach (DrawingInstance di in d.ChildMembers)
    {
        RecursiveMethod(di);
    }
}
I don't know your project, so it is up to you to figure out the base case, or if this recursive edit is what you actually want.
I've been running a lot of tests comparing an array of structs with an array of classes and a list of classes. Here's the test I've been running:
struct AStruct {
    public int val;
}

class AClass {
    public int val;
}

static void TestCacheCoherence()
{
    int num = 10000;
    int iterations = 1000;
    int padding = 64;
    List<Object> paddingL = new List<Object>();

    AStruct[] structArray = new AStruct[num];
    AClass[] classArray = new AClass[num];
    List<AClass> classList = new List<AClass>();

    for (int i = 0; i < num; i++) {
        classArray[i] = new AClass();
        if (padding > 0) paddingL.Add(new byte[padding]);
    }
    for (int i = 0; i < num; i++)
    {
        classList.Add(new AClass());
        if (padding > 0) paddingL.Add(new byte[padding]);
    }

    Console.WriteLine("\n");

    stopwatch("StructArray", iterations, () =>
    {
        for (int i = 0; i < num; i++)
        {
            structArray[i].val *= 3;
        }
    });
    stopwatch("ClassArray ", iterations, () =>
    {
        for (int i = 0; i < num; i++)
        {
            classArray[i].val *= 3;
        }
    });
    stopwatch("ClassList ", iterations, () =>
    {
        for (int i = 0; i < num; i++)
        {
            classList[i].val *= 3;
        }
    });
}
static Stopwatch watch = new Stopwatch();

public static long stopwatch(string msg, int iterations, Action c)
{
    watch.Restart();
    for (int i = 0; i < iterations; i++)
    {
        c();
    }
    watch.Stop();
    Console.WriteLine(msg + ": " + watch.ElapsedTicks);
    return watch.ElapsedTicks;
}
I'm running this in release mode with the following:
Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2); // Use only the second core
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
Thread.CurrentThread.Priority = ThreadPriority.Highest;
RESULTS:
With padding=0 I get:
StructArray: 21517
ClassArray: 42637
ClassList: 80679
With padding=64 I get:
StructArray: 21871
ClassArray: 82139
ClassList: 105309
With padding=128 I get:
StructArray: 21694
ClassArray: 76455
ClassList: 107330
I am a bit confused by these results, since I was expecting the difference to be bigger.
After all, the structs are tiny and laid out one after the other in memory, while the class instances are separated by up to 128 bytes of garbage.
Does this mean that I shouldn't even worry about cache friendliness? Or is my test flawed?
There are a number of things going on here. The first is that your tests don't take GCs into account: it is distinctly possible that the arrays are being collected during the loop over the list (because the arrays are no longer used while you are iterating the list, they are eligible for collection).
The second is that you need to keep in mind that List<T> is backed by an array anyway. The only reading overhead is the additional function calls to go through List<T>.
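One cheap way to rule the GC effect out is to keep everything reachable until the last measurement finishes; GC.KeepAlive is the standard tool for that (where to place the calls is my suggestion, not part of the original answer):

// At the end of TestCacheCoherence(), after the last stopwatch(...) call:
GC.KeepAlive(structArray);
GC.KeepAlive(classArray);
GC.KeepAlive(classList);
GC.KeepAlive(paddingL);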
I'm trying to test which built-in collections perform best for certain operations, such as intersection. To do so, I built the following test:
private static void Main(string[] args)
{
    LoadTest<HashSet<object>>();
    ClearEverythingHere(); // <<-- what can go here?
    LoadTest<LinkedList<object>>();
    Console.ReadKey(true);
}

private static void LoadTest<T>() where T : ICollection<object>, new()
{
    const int n = 1 << 16;
    const int c = 1 << 3;

    var objs = new object[n << 1];
    for (int i = 0; i < n << 1; i++)
        objs[i] = new object();

    var array = new T[c];
    var r = new Random(123);
    for (int s = 0; s < c; s++)
    {
        array[s] = new T();
        for (int i = 0; i < n; i++)
            array[s].Add(objs[r.Next(n << 1)]);
    }

    var sw = Stopwatch.StartNew();
    IEnumerable<object> final = array[0];
    for (int s = 1; s < c; s++)
        final = final.Intersect(array[s]);
    sw.Stop();
    Console.WriteLine("Ticks elapsed: {0}", sw.ElapsedTicks);
}
If I uncomment both test methods in Main, the second test always completes much faster than the first, no matter which order I test the structures in. Generally, the first intersection runs in a few hundred ticks, and the second finishes in fewer than ten. I would have thought that having the tests in completely separate scopes would have prevented at least some of the (what I'm presuming is) caching that leads to such different results.
Is there an easy way to reset the application so that I don't have to worry about caching or optimization while testing? I would like to be able to run one test, print the results, clear it out, and run another test. Yes, I could comment and uncomment, or possibly spawn two separate applications, but that's a lot of work for simple console tests.
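For the ClearEverythingHere() placeholder, a common partial answer is to force a full collection between tests; this addresses GC pressure and finalizers, though not JIT warm-up or CPU caches (my sketch, not from the original post):

private static void ClearEverythingHere()
{
    // Push the previous test's garbage out before the next test starts.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}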
Edit: I've modified the tests as per the suggestions in the answers.
private static void Main(string[] args)
{
    const int n = 1 << 17;
    const int c = 1 << 4;

    var objs = new Item[n << 1];
    for (int i = 0; i < (n << 1); i++)
        objs[i] = new Item(i);

    var items = new Item[c][];
    var hash = new HashSet<Item>[c];
    var list = new LinkedList<Item>[c];
    var r = new Random();
    for (int s = 0; s < c; s++)
    {
        items[s] = new Item[n];
        for (int i = 0; i < n; i++)
            items[s][i] = objs[r.Next(n << 1)];
        hash[s] = new HashSet<Item>(items[s]);
        list[s] = new LinkedList<Item>(items[s]);
    }

    Stopwatch stopwatch = Stopwatch.StartNew();
    HashSet<Item> fHash = hash[0];
    for (int s = 1; s < hash.Length; s++)
        fHash.IntersectWith(hash[s]);
    stopwatch.Stop();
    Console.WriteLine("Intersecting values: {0}", fHash.Count);
    Console.WriteLine("Ticks elapsed: {0}", stopwatch.ElapsedTicks);

    stopwatch = Stopwatch.StartNew();
    IEnumerable<Item> iEnum = list[0];
    for (int s = 1; s < list.Length; s++)
        iEnum = iEnum.Intersect(list[s]);
    Item[] array = iEnum.ToArray();
    stopwatch.Stop();
    Console.WriteLine("Intersecting values: {0}", array.Length);
    Console.WriteLine("Ticks elapsed: {0}", stopwatch.ElapsedTicks);

    Console.ReadKey(true);
}
[DebuggerDisplay("Value = {_value}")]
private class Item
{
    private readonly int _value;

    public Item(int value)
    {
        _value = value;
    }

    private bool Equals(Item other)
    {
        return _value == other._value;
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj))
            return false;
        if (ReferenceEquals(this, obj))
            return true;
        if (obj.GetType() != typeof(Item))
            return false;
        return Equals((Item)obj); // "return Equals(obj);" would recurse forever
    }

    public override int GetHashCode()
    {
        return _value;
    }

    public override string ToString()
    {
        return _value.ToString();
    }
}
This solved most of my problems. (And if you're wondering, HashSet.IntersectWith appears much faster than IEnumerable.Intersect.)
There are a few errors in your code.
Intersect is a LINQ function, which means it is lazily evaluated: it only executes when the data is accessed, either by looping over it or by calling ToList or ToArray on the enumerable. Adding that changes the results (see the sketch below).
Testing must always be done on the same data. Try creating your data outside your test method and passing it in as a parameter.
The first pass over a piece of code is usually considered unreliable because of JIT compilation and similar warm-up effects.
Try creating your own object and overriding Equals and GetHashCode. As it stands, testing with plain object might not be measuring what you intend.
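A minimal sketch of the first point: the first timing measures almost nothing because the deferred query has not run yet, while ToArray() forces the real work (the collections here are illustrative):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class LazyIntersectDemo
{
    static void Main()
    {
        var setA = new HashSet<int>(Enumerable.Range(0, 100_000));
        var setB = new HashSet<int>(Enumerable.Range(50_000, 100_000));

        var sw = Stopwatch.StartNew();
        IEnumerable<int> query = setA.Intersect(setB); // builds the query; nothing executes yet
        sw.Stop();
        Console.WriteLine("Deferred query: {0} ticks", sw.ElapsedTicks);

        sw = Stopwatch.StartNew();
        int[] result = query.ToArray(); // forces the intersection to actually run
        sw.Stop();
        Console.WriteLine("Materialized: {0} ticks ({1} items)", sw.ElapsedTicks, result.Length);
    }
}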