I just had some code handed over to me. It is written in C# and inserts real-time data into a database every second. The data is accumulated over time, which makes the numbers large.
The data is updated many times within the second; at the end of the second the result is taken and inserted.
We used to address the DataSet rows directly within the second through their properties, so many operations like 'datavaluerow.meanvalue += mean;' could take place.
After running the profiler we figured out that this was degrading performance because of the internal casting involved, so we created a 2D array of decimals on which the updates are carried out; the values are assigned to the DataRows only at the end of the second.
I ran the profiler again and found that this step still takes a lot of time (although less than the time previously spent accessing the DataRows many times per second).
The code that is executed at the end of the second is as follows:
public void UpdateDataRows(int tick)
{
    // _table1Values is of type decimal[][]
    for (int i = 0; i < _table1Values.Length; i++)
    {
        _table1Values[i][(int)table1Enum.barDateTime] = tick;
        table1Row[i].ItemArray = _table1Values[i].Cast<object>().ToArray();
    }
    // this process is repeated for the other 10 tables
}
Is there a way to further improve this approach?
One obvious question: why do you have a 2D array of decimals when you're only updating them with integers? Could you get away with an int[][] instead?
Next, why are you accessing (int)table1Enum.barDateTime on each iteration? Given that there's a conversion involved there, you may find it helps if you extract that out of the loop.
However, I suspect the majority of the time is going to be spent in _table1Values[i].Cast<object>().ToArray(). Do you really need to do that? Taking a copy of the decimal[] (or int[]) would be faster than boxing every value on every iteration on every call - and then creating another array.
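Putting those two suggestions together, a minimal sketch might look like this. It keeps the names from the question (_table1Values, table1Row, table1Enum) and assumes a hypothetical pre-allocated _rowBuffers field of type object[][], one buffer per row, which isn't in the original code. The ItemArray setter copies the values into the row, so reusing the buffer should be safe; each decimal still gets boxed, but the LINQ overhead and the per-call array allocation go away.

public void UpdateDataRows(int tick)
{
    int dateTimeIndex = (int)table1Enum.barDateTime;    // hoisted out of the loop

    for (int i = 0; i < _table1Values.Length; i++)
    {
        decimal[] source = _table1Values[i];
        source[dateTimeIndex] = tick;

        // Reuse one object[] per row instead of Cast<object>().ToArray(),
        // which boxes every value and also allocates a fresh array each call.
        object[] buffer = _rowBuffers[i];               // hypothetical pre-allocated field
        for (int j = 0; j < source.Length; j++)
        {
            buffer[j] = source[j];                      // boxing still occurs here
        }
        table1Row[i].ItemArray = buffer;
    }
}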
Given the task of improving the performance of a piece of code, I came across the following phenomenon. I have a large collection of reference types in a generic Queue, and I'm removing and processing the elements one by one, then adding them to another generic collection.
It seems the larger the elements are, the more time it takes to add them to the collection.
Trying to narrow down the problem to the relevant part of the code, I've written a test (omitting the processing of elements, just doing the insert):
class Small
{
    public Small()
    {
        this.s001 = "001";
        this.s002 = "002";
    }

    string s001;
    string s002;
}

class Large
{
    public Large()
    {
        this.s001 = "001";
        this.s002 = "002";
        ...
        this.s050 = "050";
    }

    string s001;
    string s002;
    ...
    string s050;
}
static void Main(string[] args)
{
    const int N = 1000000;
    var storage = new List<object>(N);
    for (int i = 0; i < N; ++i)
    {
        //storage.Add(new Small());
        storage.Add(new Large());
    }

    List<object> outCollection = new List<object>();
    Stopwatch sw = new Stopwatch();
    sw.Start();
    for (int i = N - 1; i > 0; --i)
    {
        outCollection.Add(storage[i]);
    }
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
}
On the test machine, using the Small class, it takes about 25-30 ms to run, while it takes 40-45 ms with Large.
I know that outCollection has to grow from time to time to be able to store all the items, so there is some dynamic memory allocation. But giving the collection an initial size makes the difference even more obvious: 11-12 ms with Small and 35-38 ms with Large objects.
I am somewhat surprised, as these are reference types, so I was expecting the collections to work only with references to the Small/Large instances. I have read Eric Lippert's relevant article and know that references should not be treated as pointers. At the same time, AFAIK they are currently implemented as pointers, so their size and the collection's performance should be independent of element size.
I've decided to put up a question here hoping that someone could explain or help me understand what's happening. Aside from the performance improvement, I'm really curious what is happening behind the scenes.
Update:
Profiling data using the diagnostic tools didn't help me much, although I have to admit I'm not an expert using the profiler. I'll collect more data later today to find where the bottleneck is.
The pressure on the GC is quite high, of course, especially with the Large instances. But once the instances are created and stored in the storage collection and the program enters the loop, no further collections are triggered, and memory usage doesn't increase significantly (outCollection is already pre-allocated).
Most of the CPU time is of course spent on memory allocation (JIT_New), around 62% of inclusive samples; the only other significant entry in the profiler output is System.Collections.Generic.List`1[System.__Canon].Add, with about 7%.
With 1 million items, the preallocated outCollection size is 8 million bytes (the same as the size of storage); one can suspect 64-bit addresses being stored in the collections.
Probably I'm not using the tools properly or don't have the experience to interpret the results correctly, but the profiler didn't help me to get closer to the cause.
If the loop is not triggering collections and it only copies pointers between two pre-allocated collections, how could the item size cause any difference? The cache hit/miss ratio is supposed to be more or less the same in both cases, as the loop is iterating over a list of "addresses" in both cases.
Thanks for all the help so far. I will collect more data and put an update here if anything is found.
I suspect that at least one action in the above (maybe some type checks) will require a dereference. Then the fact that many Small instances probably sit close together on the heap, and thus share cache lines, could account for some of the difference (certainly many more of them can share a single cache line than Large instances can).
Added to which, you are also accessing them in the reverse of the order in which they were allocated, which maximises such an effect.
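One way to test that hypothesis with the benchmark above (a sketch, assuming the rest of the program stays unchanged): shuffle storage before the timed loop so the references no longer follow allocation order, and see whether the Small/Large gap changes.

// Hypothetical addition just before sw.Start(): break the allocation-order
// locality, then re-run the timings for both Small and Large.
var rng = new Random(12345);
for (int i = storage.Count - 1; i > 0; --i)
{
    int j = rng.Next(i + 1);            // Fisher-Yates shuffle
    object tmp = storage[i];
    storage[i] = storage[j];
    storage[j] = tmp;
}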
This method gets:
- IEnumerable<object[]> - where every array has a fixed size (it represents a relational data structure).
- DataEnumerable.Column[] - some metadata columns; mostly they will have the same value for all rows.
Expected outcome:
each "row" should get a value for each of these columns (so the data structure remains relational).
private IEnumerable<object[]> BindExtraColumns(IEnumerable<object[]> baseData, int dataSize, DataEnumerable.Column[] columnsToAdd)
{
    int extraColumnsLength = columnsToAdd.Length;
    object[] row = new object[dataSize + extraColumnsLength];
    string columnName;
    int rowNumberColumnIndex = -1;

    for (int i = 0; i < extraColumnsLength; i++)
    {
        // Assign values that don't change between rows...
        // Assign rowNumberColumnIndex if a row number column exists
    }

    // Assign values that change per row here; since we currently support only
    // the row number, it's not generic enough
    if (rowNumberColumnIndex != -1)
    {
        int rowNumber = 1;
        foreach (var baseRow in baseData)
        {
            row[rowNumberColumnIndex] = rowNumber;
            Array.Copy(baseRow, 0, row, extraColumnsLength, dataSize);
            yield return row;
            rowNumber++;
        }
    }
    else
    {
        foreach (var baseRow in baseData)
        {
            Array.Copy(baseRow, 0, row, extraColumnsLength, dataSize);
            yield return row;
        }
    }
}
This method can be called from hundreds of threads with relatively big data sets, so performance here is critical, and I tried to create as few new objects as possible.
Please note - this is a private method, used ONLY by a DataReader, which reads each line and passes it to another array immediately, prior to reading the next line.
So - can the array copying here be optimized somehow, and should I (carefully) trade memory for speed here?
Thanks
Your code is fundamentally broken. You're just returning a reference to the same array every time, which means that unless the caller uses the data within each item immediately, it effectively gets lost. For example, suppose I use:
List<object[]> rows = BindExtraColumns(data, size, toAdd).ToList();
Then when I iterate over the rows, I find the same data in every row. That's really not a good experience.
I think it would make much more sense to create a new array for each iteration. Yes, that's a lot of extra memory being used - but it doesn't surprise callers nearly as much.
If you really don't want to do that, I suggest you change the approach so that the caller has to pass in an Action<object[]> to be executed on each row, with the documented proviso that if the caller stashes a reference to the array, they may well be surprised by the results.
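A sketch of that callback-based shape (the Action parameter is the suggested addition, not part of the original API; the metadata assignment is elided just as it is in the question):

private void BindExtraColumns(IEnumerable<object[]> baseData,
                              int dataSize,
                              DataEnumerable.Column[] columnsToAdd,
                              Action<object[]> processRow)
{
    int extraColumnsLength = columnsToAdd.Length;
    object[] row = new object[dataSize + extraColumnsLength];
    int rowNumberColumnIndex = -1;
    // ... assign the per-call metadata and rowNumberColumnIndex as in the original ...

    int rowNumber = 1;
    foreach (var baseRow in baseData)
    {
        if (rowNumberColumnIndex != -1)
        {
            row[rowNumberColumnIndex] = rowNumber;
        }
        Array.Copy(baseRow, 0, row, extraColumnsLength, dataSize);
        processRow(row);    // the buffer is reused; the caller must not keep a reference
        rowNumber++;
    }
}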
You're obviously very concerned about performance, but if your data is coming from a database I'd expect the array creation/copying performance to be insignificant. You should write the simplest (and most reliable) code that works first, and then benchmark it to see whether it performs well enough. Unless you've got evidence that you need to make this surprising design choice, it feels like you're optimizing way too early.
EDIT: Now we know that it's a private method only used in one specific place, I would still avoid this reuse. It's simply fragile. I really would change to passing in an Action<object[]> or simply copying the data to a new array every time. I certainly wouldn't keep the current approach without strong evidence that it's a bottleneck: as I said before, I'd expect the database communication to be much more important. Leaving timebombs in your code like this very rarely works out well.
If you really, really want to keep doing this, you should document it very strongly, giving severe warnings that the result is non-idiomatic.
In terms of whether there's more optimization you could do - well... one alternative would be to avoid having to work with a single array in the first place. You could create a class which held references to both arrays (the current base row and the fixed data) and exposed an indexer which returned the value from one array or the other based on which index was being requested. We don't know what you're doing with the data ("passes it to another array" doesn't really mean anything) so we don't know whether that's feasible, but it would be efficient and could be implemented without the odd behaviour.
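For example, a rough sketch of such a wrapper - the class and member names here are invented for illustration:

// A read-only view over "extra columns + current base row" that avoids
// copying the base row at all.
public sealed class CompositeRow
{
    private readonly object[] _extraColumns;   // fixed for the whole call
    private object[] _baseRow;                 // swapped on every row

    public CompositeRow(object[] extraColumns)
    {
        _extraColumns = extraColumns;
    }

    public void SetBaseRow(object[] baseRow) => _baseRow = baseRow;

    public int Length => _extraColumns.Length + _baseRow.Length;

    public object this[int index] =>
        index < _extraColumns.Length
            ? _extraColumns[index]
            : _baseRow[index - _extraColumns.Length];
}

The reading code would then index into this view instead of a flat object[]; whether that's feasible depends on what the DataReader actually does with each row.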
I have a variable that has a current value, but when I change the value it first needs to store the past value in some data structure that will show me the past X values.
This is to do all kinds of calculations on past values, like an average of the most recent values and such.
My only idea was to use a queue for this, and since I only need the past X values, I implemented a FixedSizedQueue that automatically dequeues older values.
Since then I've found out that I can't really access a random value in it, at least not in the default queue implementations. And even if that could be made to work, it would be slow and would need to iterate over all the values.
So I'm left wondering: is there any way at all to do this efficiently? The only other way I can think of would be to have an array and implement some push operation that moves every element along by one index position. But that seems overly wasteful. If these are the only two options, which one would be better if I need to access each value in the data structure 20 times each time I change it, and the size is 50 stored values?
This is a place where performance will matter a great deal since each variable being "recorded" will change at least a million times when iterating over the data I have so don't worry about me doing premature optimization. Thank you, I appreciate it!
You are looking for a ring buffer / circular buffer.
You can find a C# implementation here.
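In case the link goes stale, here is a minimal sketch of such a fixed-size ring buffer (not the linked implementation) that keeps the last N values and gives O(1) indexed access to them, newest first:

using System;

public sealed class RingBuffer<T>
{
    private readonly T[] _items;
    private int _next;                  // where the next value will be written
    public int Count { get; private set; }

    public RingBuffer(int capacity)
    {
        _items = new T[capacity];
    }

    public void Add(T value)
    {
        _items[_next] = value;
        _next = (_next + 1) % _items.Length;
        if (Count < _items.Length) Count++;
    }

    // index 0 = most recently added value, 1 = the one before it, and so on
    public T this[int index]
    {
        get
        {
            if (index < 0 || index >= Count)
                throw new ArgumentOutOfRangeException(nameof(index));
            int pos = (_next - 1 - index + 2 * _items.Length) % _items.Length;
            return _items[pos];
        }
    }
}

With 50 stored values and ~20 reads per update, both Add and the indexer stay O(1), so a million updates per variable is not a problem.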
I was told that using a temp object was not the most effective way to swap elements in an array.
Such as:
Object[] objects = new Object[10];
// -- Assign the 10 objects some values
var Temp = objects[2];
objects[2] = objects[4];
objects[4] = Temp;
Is it really possible to swap the elements of the array without using another object?
I know that with numeric types you can, but I cannot figure out how this would be done with any other object type.
Swapping objects with a temporary is the most correct way of doing it. That should rank way higher in your priorities than speed. It's pretty easy to write fast software that outputs garbage.
When dealing with objects, you just cannot do it differently. And this is not at all inefficient. An extra reference variable that points to an already existing object is hardly going to be a problem.
But even with numerical values, most clever techniques fail to produce correct results at some point.
It's possible the person who told you this was thinking of something like this:
objects[2] = Interlocked.Exchange(ref objects[4], objects[2]);
Of course, just because this is one line doesn't mean it isn't also using a temporary variable. It's just hidden in the form of a method parameter (the reference to objects[2] is copied and passed to the Exchange method), making it less obvious.
You could do it with Interlocked.Exchange, but that won't be faster than using a temp variable... not that speed is likely to matter for this sort of problem outside of an interview.
Every example I can find online uses exactly this method to swap 2 elements in an array. In fact, if this were something I were doing often, I would definitely consider a generic extension method, akin to this example:
http://w3mentor.com/learn/asp-dot-net-c-sharp/c-collections-and-generics/generically-swapping-two-elements-in-array-using-ccsharp/
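Something along these lines - a sketch in the spirit of the linked example, not a copy of it:

public static class ArrayExtensions
{
    // Swap two elements of an array in place, using a temporary local.
    public static void Swap<T>(this T[] array, int first, int second)
    {
        T temp = array[first];
        array[first] = array[second];
        array[second] = temp;
    }
}

// usage:
objects.Swap(2, 4);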
The only time you should worry about creating a temporary object is when it's massive and the time taken to copy it is significant - for example, if you're sorting 10k items and moving one from position 9999 to position 1 one step at a time, testing at each step whether it should move, then swapping it every single time would be a drain. A more efficient approach would be to do all the tests first and move it once.
Swapping via deconstruction:
(first, second) = (second, first);
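Applied to the array from the question, that would be:

(objects[2], objects[4]) = (objects[4], objects[2]);

It is more concise, but under the covers the compiler still uses temporaries, so it isn't avoiding the extra storage - it just hides it.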
I was working on some code recently and came across a method that had 3 for-loops that worked on 2 different arrays.
Basically, what was happening was that a foreach loop would walk through a vector and convert a DateTime from each object, and then another foreach loop would convert a long value from each object. Each of these loops stored the converted values into lists.
The final loop would go through these two lists and store the values into yet another list, because one final conversion needed to be done for the date.
Then, after all that is said and done, the final two lists are converted to arrays using ToArray().
Ok, bear with me, I'm finally getting to my question.
So, I decided to make a single for loop to replace the first two foreach loops and convert the values in one fell swoop (the third loop is quasi-necessary, although I'm sure with some work I could also fold it into the single loop).
But then I read the article "What your computer does while you wait" by Gustav Duarte and started thinking about memory management and what the data was doing while it's being accessed in the for-loop where two lists are being accessed simultaneously.
So my question is: what is the best approach for something like this? Try to condense the for-loops so everything happens in as few loops as possible, causing multiple data accesses for the different lists? Or allow the multiple loops and let the system bring in the data it's anticipating? These lists and arrays can be potentially large, and looping through 3 lists, perhaps 4 depending on how ToArray() is implemented, can get very costly (O(n^3)??). But from what I understood in said article and from my CS classes, having to fetch data can be expensive too.
Would anyone like to provide any insight? Or have I completely gone off my rocker and need to relearn what I have unlearned?
Thank you
The best approach? Write the most readable code, work out its complexity, and work out if that's actually a problem.
If each of your loops is O(n), then you've still only got an O(n) operation.
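In other words, three sequential passes cost O(n) + O(n) + O(n) = O(3n), and constant factors are dropped, so the whole thing is still O(n) - nowhere near the O(n^3) you would get from three nested loops.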
Having said that, it does sound like a LINQ approach would be more readable... and quite possibly more efficient as well. Admittedly we haven't seen the code, but I suspect it's the kind of thing which is ideal for LINQ.
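For example (purely a sketch - the source collection and the conversion methods here are invented, since we haven't seen the actual code):

// Hypothetical LINQ version of the loops described above.
// Requires: using System.Linq;
DateTime[] xArray = sourceItems
    .Select(item => ConvertToDateTime(item))    // first foreach loop
    .Select(date => FinalDateConversion(date))  // the final per-date conversion
    .ToArray();

long[] yArray = sourceItems
    .Select(item => ConvertToLong(item))        // second foreach loop
    .ToArray();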
For reference, the article is "What your computer does while you wait" by Gustav Duarte. There is also a guide to big-O notation.
It's impossible to answer the question without being able to see code/pseudocode. The only reliable answer is "use a profiler". Assuming what your loops are doing is a disservice to you and anyone who reads this question.
Well, you've got complications if the two vectors are of different sizes. As has already been pointed out, this doesn't increase the overall complexity of the issue, so I'd stick with the simplest code - which is probably 2 loops, rather than 1 loop with complicated test conditions re the two different lengths.
Actually, these length tests could easily make the two loops quicker than a single loop. You might also get better memory fetch performance with 2 loops - i.e. you are looking at contiguous memory - i.e. A[0],A[1],A[2]... B[0],B[1],B[2]..., rather than A[0],B[0],A[1],B[1],A[2],B[2]...
So in every way, I'd go with 2 separate loops ;-p
Am I understanding you correctly in this?
You have these loops:
for (...){
// Do A
}
for (...){
// Do B
}
for (...){
// Do C
}
And you converted it into
for (...){
// Do A
// Do B
}
for (...){
// Do C
}
and you're wondering which is faster?
If not, some pseudocode would be nice, so we could see what you meant. :)
Impossible to say. It could go either way. You're right, fetching data is expensive, but locality is also important. The first version may be better for data locality, but on the other hand, the second has bigger blocks with no branches, allowing more efficient instruction scheduling.
If the extra performance really matters (as Jon Skeet says, it probably doesn't, and you should pick whatever is most readable), you really need to measure both options, to see which is fastest.
My gut feeling says the second, with more work being done between jump instructions, would be more efficient, but it's just a hunch, and it can easily be wrong.
Aside from cache thrashing on large functions, there may be benefits on tiny functions as well. This applies on any auto-vectorizing compiler (not sure if Java JIT will do this yet, but you can count on it eventually).
Suppose this is your code:
// if this compiles down to a raw memory copy with a bitmask...
Date morningOf(Date d) { return Date(d.year, d.month, d.day, 0, 0, 0); }
Date timestamps[N];
Date mornings[N];
// ... then this can be parallelized using SSE or other SIMD instructions
for (int i = 0; i != N; ++i)
mornings[i] = morningOf(timestamps[i]);
// ... and this will just run like normal
for (int i = 0; i != N; ++i)
doOtherCrap(mornings[i]);
For large data sets, splitting the vectorizable code out into a separate loop can be a big win (provided caching doesn't become a problem). If it was all left as a single loop, no vectorization would occur.
This is something that Intel recommends in their C/C++ optimization manual, and it really can make a big difference.
... working on one piece of data but with two functions can sometimes make it so that code to act on that data doesn't fit in the processor's low level caches.
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig1(); // pushes functionreallybig2 out of cache
    obj.functionreallybig2(); // pushes functionreallybig1 out of cache
}
vs
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig1(); // this stays in the cache next time through the loop
}
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig2(); // this stays in the cache next time through the loop
}
But it was probably a mistake (usually this type of trick is commented)
When data is cyclically loaded and unloaded like this, it is called cache thrashing, by the way.
This is a separate issue from the data these functions are working on, as the processor typically caches that separately.
I apologize for not responding sooner and not providing any kind of code. I got sidetracked on my project and had to work on something else.
To answer anyone still monitoring this question:
Yes, like jalf said, the function is something like this:
PrepareData(vectorA, vectorB, xArray, yArray):
    listA
    listB
    foreach (value in vectorA)
        convert value, insert into listA
    foreach (value in vectorB)
        convert value, insert into listB
    listC
    listD
    for (int i = 0; i < listB.count; i++)
        listC[i] = listB[i] converted to something
        listD[i] = listA[i]
    xArray = listC.ToArray()
    yArray = listD.ToArray()
I changed it to:
PrepareData(vectorA, vectorB, ref xArray, ref yArray):
    listA
    listB
    for (int i = 0; i < vectorA.count && i < vectorB.count; i++)
        convert value, insert into listA
        convert value, insert into listB
    listC
    listD
    for (int i = 0; i < listB.count; i++)
        listC[i] = listB[i] converted to something
        listD[i] = listA[i]
    xArray = listC.ToArray()
    yArray = listD.ToArray()
Keeping in mind that the vectors can potentially have a large number of items, I figured the second one would be better, so that the program wouldn't have to loop over n items 2 or 3 separate times. But then I started to wonder about the effects of memory fetching, or prefetching, or what have you.
So, I hope this helps to clear up the question, although a good number of you have provided excellent answers.
Thank you everyone for the information. Thinking in terms of Big-O and how to optimize has never been my strong point. I believe I am going to put the code back the way it was; I should have trusted the way it was written before instead of jumping on my novice instincts. Also, in the future I will provide more context so everyone can understand what the heck I'm talking about (clarity is also not a strong point of mine :-/).
Thank you again.