I'm making a small game that has a lot of loops, all of which use a certain variable adjacentSquares. After every loop, however, this variable should be reset to 0. What would be faster: declaring the variable again every time, or just setting it to 0? Is there maybe some 'exotic' approach that will perform even better?
The associated (unfinished) code:
void Update()
{
    int adjacentSquares = 0;

    for (int x = 0; x <= gridX; x++)
    {
        for (int y = 0; y <= gridY; y++)
        {
            if (grid[x - 1, y - 1] == true)
                adjacentSquares += 1;
            //and some more logic
        }
    }
}
Why not experiment and measure the elapsed time using the System.Diagnostics.Stopwatch class? http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
Set up a Stopwatch object before the loop, measure the elapsed time after it, and then report back with your findings :D
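For illustration, a minimal sketch of such a measurement (the loop bodies here are placeholders, and a Release build is assumed; note that the JIT may optimize a trivial body away entirely, so measure your real loop if possible):

using System;
using System.Diagnostics;

class ResetBenchmark
{
    static void Main()
    {
        const int iterations = 100000000;

        // Variant 1: declare the variable fresh on every iteration.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            int adjacentSquares = 0;
            adjacentSquares += i % 3; // stand-in for the real loop body
        }
        sw.Stop();
        Console.WriteLine("Declare each time: {0} ms", sw.ElapsedMilliseconds);

        // Variant 2: declare once, reset to 0 on every iteration.
        int reused = 0;
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            reused = 0;
            reused += i % 3; // stand-in for the real loop body
        }
        sw.Stop();
        Console.WriteLine("Reset each time:   {0} ms", sw.ElapsedMilliseconds);
    }
}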
The real answer here is: try it out and see!
But I would not expect there to be a difference in speed. If anything, your stack will use 4 bytes more memory (per variable), but even that is not guaranteed. There's a good chance that (if there is a performance difference here at all) either the C# compiler or the JIT compiler will recognize that the first variable is no longer used and simply reuse that same memory for the subsequent variables. But I'll echo what I said before: run some tests - that's the only true answer to your question.
If you really want to improve performance here, you could look at a parallel solution, depending on whether each individual calculation relies on all the previous ones.
You can probably even do this with LINQ, depending on the "some more logic" you are doing.
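As a rough sketch of the parallel idea (hedged: grid, gridX and gridY are assumed from the question, the indexing is adjusted to [x, y] so it stays in bounds, and this is only valid if the "some more logic" has no cross-iteration dependencies):

using System.Threading;
using System.Threading.Tasks;

int adjacentSquares = 0;
Parallel.For(0, gridX + 1,
    () => 0,                          // per-thread subtotal
    (x, state, local) =>
    {
        for (int y = 0; y <= gridY; y++)
        {
            if (grid[x, y])
                local++;
        }
        return local;
    },
    local => Interlocked.Add(ref adjacentSquares, local)); // merge subtotals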
Just for improving the performance a little bit more:
void Update()
{
    int adjacentSquares = 0;

    for (int x = -1; x < gridX; x++)
    {
        for (int y = -1; y < gridY; y++)
        {
            if (grid[x, y])
                adjacentSquares++;
            //and some more logic
        }
    }
}
I don't know exactly why you need to start from -1 (0 - 1), but if you do, then put that offset in the for statement instead of recomputing it on every iteration.
This question already has answers here:
What is the difference between prefix and postfix operators?
(13 answers)
Closed 9 years ago.
For example, i++ can be written as i = i + 1.
Similarly, how can ++i be written in the form shown above for i++? Is it written the same way as i++, and if not, what is the right way to write it?
You actually have it the wrong way around:
i=i+1 is closer to ++i than to i++
The reason for this is that it only makes a difference in combination with a surrounding expression, like:
j=++i gives j the same value as j=(i+=1) and j=(i=i+1)
There's a whole lot more to the story though:
1) To be on the same page, let's look at the difference between pre-increment and post-increment:
j=i++ vs j=++i
The first one is post-increment (the value is returned and "post" = afterwards incremented), the latter is pre-increment (the value is "pre" = before incremented, and then returned)
So, (multi statement) equivalents would be:
j=i; i=i+1; and i=i+1; j=i; respectively
This makes the whole thing very interesting with (theoretical) expressions like j=++i++ (which are illegal in standard C though, as you may not modify one variable multiple times within a single expression)
It's also interesting with regard to memory fences, as modern processors can do so-called out-of-order execution, which means you might write code in a specific order and it might be executed in a totally different order (though there are certain rules to this, of course). Compilers may also reorder your code, so the two statements may actually end up being the same in the end.
-
2) Depending on your compiler, the expressions will most likely be really the same after compilation/optimization etc.
If you have a standalone expression of i++; or ++i; the compiler will most likely transform it to the intermediate 'ADD' asm command. You can try it yourself with most compilers, e.g. with gcc it's -S to get the intermediate asm output.
Also, for many years now, compilers tend to optimize the very common construct of
for (int i = 0; i < whatever; i++)
as directly translating i++ in this case would waste an additional register (and possibly lead to register spilling) and produce unneeded instructions (we're talking about stuff so minor here that it really won't matter unless the loop runs trillions of times with a single asm instruction as its body, or the like)
-
3) Back to your original question: the whole thing is a precedence question, as the two statements can be expressed as (this time a bit more exactly than in the explanation above):
something = i++ => temp = i; i = i + 1; something = temp
and
something = ++i => i = i + 1; something = i
(here you can also see why the first variant would theoretically lead to more registers spilled and more instructions)
As you can see here, the first expression cannot easily be altered in a way that I think would satisfy your question, as it's not possible to express it using precedence symbols (e.g. parentheses) or, simpler to understand, a block. For the second one, though, that's easy:
++i => (++i) => { i = i + 1; return i } (pseudo code)
For the first one, that would be
i++ => { return i; i = i + 1 } (pseudo code again)
As you can see, this won't work.
Hope I helped you clear up your question; if anything needs clarification or I made an error, feel free to point it out.
Technically j = ++i is the same as
j = (i = i + 1);
and j = i++; is the same as
j = i;
i = i + 1;
You can avoid writing this as two lines with the following trick (though the ++ operator itself doesn't work this way):
j = (i = i + 1) - 1;
++i and i++ both have identical results if written as a stand-alone statement (as opposed to chaining the result into other operations).
That result is also identical to i += 1 and to i = i + 1.
The difference only comes in if you start using the ++i and i++ inside a larger expression.
The real effect of pre- and post-increment only matters where you are using the value.
The main difference is:
pre-increment increments the value first, before the statement executes.
Post-increment increments the value after the statement has executed.
Here is a simple example:
void main()
{
    int i = 0, value;

    value = i++; // Post-increment: the value of i is assigned first, then i is incremented.
                 // After this statement, value holds 0 and i holds 1.

    // Now using pre-increment:
    value = ++i; // Pre-increment: i is incremented first, then assigned to value.
                 // After this statement, value holds 2 and i holds 2.
}
This may be done using anonymous methods:
int i = 5;
int j = i++;
is equivalent to:
int i = 5;
int j = new Func<int>(() => { int temp = i; i = i + 1; return temp; })();
However, it would make more sense if it were expanded as a named method with a ref parameter (which mimics the underlying implementation of the i++ operator):
static void Main(string[] args)
{
    int i = 5;
    int j = PostIncrement(ref i);
}

static int PostIncrement(ref int x)
{
    int temp = x;
    x = x + 1;
    return temp;
}
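A corresponding sketch for ++i in the same style (hypothetical, following the same ref-parameter pattern):

static int PreIncrement(ref int x)
{
    // Increment first, then return the new value - the ++i behaviour.
    x = x + 1;
    return x;
}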
There is no 'equivalent' for i++.
In i++ the value of i is incremented after the surrounding expression has been evaluated.
In ++i and i = i + 1 the value of i is incremented before the surrounding expression is evaluated.
++i will increment the value of 'i', and then return the incremented value.
Example:
int i=1;
System.out.print(++i);
//2
System.out.print(i);
//2
i++ will increment the value of 'i', but return the original value that 'i' held before being incremented.
int i=1;
System.out.print(i++);
//1
System.out.print(i);
//2
I am trying to create a list of tasks whose number depends on the number of processors available. I have a for loop which seems to be behaving strangely. I am aware of the concept of closures in JavaScript, and it seems like something similar could be happening here:
var tasks = new Task[Environment.ProcessorCount];
for (int x = 0; x < Environment.ProcessorCount; x++)
{
    tasks[x] = Task.Run(() => new Segment(SizeOfSegment, x * SizeOfSegment, listOfNumbers).generateNewList());
}
What I am finding is that when I break on the line in the for loop, the variable x seems to be correct: it starts at 0 and ends at 3 (the number of processors is 4). But when I put the breakpoint within the constructor for Segment, I find that x was actually 4 when stepping back up the call stack.
Any help would be greatly appreciated.
You're capturing x within your lambda expression - but you've got a single x variable which changes values across the course of the loop, so by the time your task actually runs, it may well have a different value. You need to create a copy of the variable inside the loop, creating a new "variable instance" on each iteration. Then you can capture that variable safely:
for (int x = 0; x < Environment.ProcessorCount; x++)
{
    int copy = x;
    tasks[x] = Task.Run(() => new Segment(SizeOfSegment,
                                          copy * SizeOfSegment,
                                          listOfNumbers).generateNewList());
}
(I'd also advise you to rename generateNewList to GenerateNewList to comply with .NET naming conventions.)
Does the compiler treat the two cases below identically, or does case 2 offer a performance increase because x/2 is not constantly re-evaluated? I have always assumed the latter but it would be great if someone could confirm this.
Case 1:
double result = 0;
for (int i = 0; i < 10000000; i++) {
    result += variables[i] * (x / 2);
}
return result;
Case 2:
double result = 0;
double xOverTwo = x / 2;
for (int i = 0; i < 10000000; i++) {
    result += variables[i] * xOverTwo;
}
return result;
That depends on what x is.
If it is a constant, then the calculation is done at compile time, so the two cases perform identically. If it is a volatile variable, then the compiler is forced to do the calculation each time, so you would definitely benefit from calculating it outside the loop.
For any other case, it depends on whether the compiler itself can hoist the calculation out of the loop. To be on the safe side, calculate the value outside the loop.
Of course, in your example you don't need to use x inside the loop at all, which is an example of changing the method rather than just micro-optimising it:
double result = 0;
for (int i = 0; i < 10000000; i++) {
    result += variables[i];
}
return result * (x / 2);
In the process of writing an "Off By One" mutation tester for my favourite mutation testing framework (NinjaTurtles), I wrote the following code to provide an opportunity to check the correctness of my implementation:
public int SumTo(int max)
{
    int sum = 0;
    for (var i = 1; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
Now, this seemed simple enough, and it didn't strike me that there would be a problem trying to mutate all the literal integer constants in the IL. After all, there are only 3 (the 0, the 1, and the ++).
WRONG!
It became very obvious on the first run that it was never going to work in this particular instance. Why? Because changing the code to
public int SumTo(int max)
{
    int sum = 0;
    for (var i = 0; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
only adds 0 (zero) to the sum, which obviously has no effect. It would be a different story if it were a product, but in this instance it was not.
Now there's a fairly simple formula for the sum of the first max integers:
sum = max * (max + 1) / 2;
which would fail the mutations easily, since adding or subtracting 1 from either of the constants changes the result (given that max >= 0).
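For reference, the method rewritten around that formula might look like this (a sketch; integer overflow for large max is ignored, as in the original):

public int SumTo(int max)
{
    // Any off-by-one mutation of these constants changes the result
    // for some max >= 0, so the mutant is killed by the test suite.
    return max * (max + 1) / 2;
}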
So, problem solved for this particular case. Although it did not do what I wanted for the mutation test, which was to check what would happen when I lost the ++ (effectively an infinite loop). But that's another problem.
So - my question: are there any trivial or non-trivial cases where a loop starting from 0 or 1 may result in a "mutation off by one" test failure that cannot be refactored away (in either the code under test or the test) in a similar way? (Examples please.)
Note: Mutation tests fail when the test suite passes after a mutation has been applied.
Update: an example of something less trivial, but where the test could still be refactored so that it fails under the mutation, would be the following:
public int SumArray(int[] array)
{
    int sum = 0;
    for (var i = 0; i < array.Length; i++)
    {
        sum += array[i];
    }
    return sum;
}
Mutation testing against this code would fail when changing var i = 0 to var i = 1 if the test input you gave it was new[] {0,1,2,3,4,5,6,7,8,9}, since skipping index 0 only skips the value 0. However, change the test input to new[] {9,8,7,6,5,4,3,2,1,0}, and the mutation testing will pass, because the mutant now changes the sum and is caught. So a successful refactoring of the test proves the testing.
I think with this particular method, there are two choices. You either admit that it's not suitable for mutation testing because of this mathematical anomaly, or you try to write it in a way that makes it safe for mutation testing, either by refactoring to the form you give, or some other way (possibly recursive?).
Your question really boils down to this: is there a real-life situation where we care about whether element 0 is included in or excluded from the operation of a loop, and for which we cannot write a test around that specific aspect? My instinct is to say no.
Your trivial example may be an example of a lack of what I referred to as test-drivenness in my blog post about NinjaTurtles, meaning that you have not refactored this method as far as you should.
One natural case of "mutation test failure" is an algorithm for matrix transposition. To make it more suitable for a single for loop, add some constraints to the task: let the matrix be non-square and require the transposition to be in-place. These constraints make a one-dimensional array the most suitable place to store the matrix, and a for loop (starting, usually, from index 1) may be used to process it. If you start it from index 0, nothing changes, because the top-left element of the matrix always transposes to itself.
For an example of such code, see the answer to another question (not in C#, sorry).
Here "mutation off by one" test fails, refactoring the test does not change it. I don't know if the code itself may be refactored to avoid this. In theory it may be possible, but should be too difficult.
The code snippet I referenced earlier is not a perfect example. It still may be refactored if the for loop is substituted by two nested loops (as if for rows and columns) and then these rows and columns are recalculated back to one-dimensional index. Still it gives an idea how to make some algorithm, which cannot be refactored (though not very meaningful).
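For what it's worth, a minimal sketch of that row/column mapping (rows, cols and buffer are hypothetical names):

// Two nested loops "as if for rows and columns", recalculated back
// to the one-dimensional index the original single loop would use.
for (int row = 0; row < rows; row++)
{
    for (int col = 0; col < cols; col++)
    {
        int i = row * cols + col; // flat index into the buffer
        // ... process buffer[i] exactly as the single for loop would ...
    }
}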
Iterate through an array of positive integers in order of increasing indexes; for each index, compute its pair as i + i % a[i], and if that is not outside the bounds, swap these elements:
for (var i = 1; i < a.Length; i++)
{
    var j = i + i % a[i];
    if (j < a.Length)
        Swap(ref a[i], ref a[j]); // Swap is assumed to exchange its two arguments
}
Here again a[0] is "unmovable", refactoring the test does not change this, and refactoring the code itself is practically impossible.
One more "meaningful" example. Let's implement an implicit Binary Heap. It is usually placed to some array, starting from index '1' (this simplifies many Binary Heap computations, compared to starting from index '0'). Now implement a copy method for this heap. "Off-by-one" problem in this copy method is undetectable because index zero is unused and C# zero-initializes all arrays. This is similar to OP's array summation, but cannot be refactored.
Strictly speaking, you can refactor the whole class and start everything from '0'. But changing only 'copy' method or the test does not prevent "mutation off by one" test failure. Binary Heap class may be treated just as a motivation to copy an array with unused first element.
int[] dst = new int[src.Length];
for (var i = 1; i < src.Length; i++)
{
    dst[i] = src[i];
}
Yes, there are many, assuming I have understood your question.
One similar to your case is:
public int MultiplyTo(int max)
{
    int product = 1;
    for (var i = 1; i <= max; i++)
    {
        product *= i;
    }
    return product;
}
Here, if it starts from 0, the result will be 0, but if it starts from 1 the result will be correct. (Although it won't detect a mutation from 1 to 2, since multiplying by 1 changes nothing!)
Not quite sure what you are looking for exactly, but it seems to me that if you change/mutate the initial value of sum from 0 to 1, you should fail the test:
public int SumTo(int max)
{
    int sum = 1; // Now we are off by one from the beginning!
    for (var i = 0; i <= max; i++)
    {
        sum += i;
    }
    return sum;
}
Update based on comments:
The mutated loop will only escape detection when processing index 0 (or omitting it) does not violate the loop invariant, i.e. leaves the result unchanged. Most such special cases can be refactored out of the loop, but consider a summation of 1/x:
for (var i = 1; i <= max; i++) {
    sum += 1.0 / i; // note: 1/i would be integer division
}
This works fine, but if you mutate the initial boundary from 1 to 0, the test will fail, because the first term becomes a division by zero.
I need to resample big sets of data (a few hundred spectra, each containing a few thousand points) using simple linear interpolation.
I have created an interpolation method in C#, but it seems to be really slow for huge datasets.
How can I improve the performance of this code?
public static List<double> interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    double[] interpolated = new double[breaks.Count];
    int id = 1;
    int x = 0;

    // left border case - uphold the value
    while (breaks[x] < xItems[0])
    {
        interpolated[x] = yItems[0];
        x++;
    }

    double p, w;
    for (int i = x; i < breaks.Count; i++)
    {
        while (breaks[i] > xItems[id])
        {
            id++;
            if (id > xItems.Count - 1)
            {
                id = xItems.Count - 1;
                break;
            }
        }
        System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
        if (id <= xItems.Count - 1)
        {
            if (id == xItems.Count - 1 && breaks[i] > xItems[id])
            {
                interpolated[i] = yItems[yItems.Count - 1];
            }
            else
            {
                w = xItems[id] - xItems[id - 1];
                p = (breaks[i] - xItems[id - 1]) / w;
                interpolated[i] = yItems[id - 1] + p * (yItems[id] - yItems[id - 1]);
            }
        }
        else // right border case - uphold the value
        {
            interpolated[i] = yItems[yItems.Count - 1];
        }
    }
    return interpolated.ToList();
}
Edit
Thanks, guys, for all your responses. What I wanted when I wrote this question were some general ideas about where I could improve the performance. I wasn't expecting ready-made solutions, only some ideas, and you gave me what I wanted, thanks!
Before writing this question I thought about rewriting the code in C++, but after reading the comments on Will's answer it seems that the gain would be smaller than I expected.
Also, the code is so simple that there are no mighty code tricks to use here. Thanks to Petar for his attempt to optimize the code.
It seems that it all comes down to finding a good profiler, checking every line and subroutine, and trying to optimize from there.
Thank you again for all the responses and for taking part in this discussion!
public static List<double> Interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    var a = xItems.ToArray();
    var b = yItems.ToArray();
    var aLimit = a.Length - 1;
    var bLimit = b.Length - 1;
    var interpolated = new double[breaks.Count];
    var total = 0;
    var initialValue = a[0];
    while (breaks[total] < initialValue)
    {
        total++;
    }
    // left border case - repeat the first y value
    // (note that Array.Copy would copy successive elements here, not repeat b[0])
    for (int k = 0; k < total; k++)
    {
        interpolated[k] = b[0];
    }
    int id = 1;
    for (int i = total; i < breaks.Count; i++)
    {
        var breakValue = breaks[i];
        while (breakValue > a[id])
        {
            id++;
            if (id > aLimit)
            {
                id = aLimit;
                break;
            }
        }
        double value = b[bLimit];
        if (id <= aLimit)
        {
            var currentValue = a[id];
            var previousValue = a[id - 1];
            if (id != aLimit || breakValue <= currentValue)
            {
                var w = currentValue - previousValue;
                var p = (breakValue - previousValue) / w;
                value = b[id - 1] + p * (b[id] - b[id - 1]);
            }
        }
        interpolated[i] = value;
    }
    return interpolated.ToList();
}
I've cached some (const) values and hoisted work out of the loops, but I think these are micro-optimizations that are already made by the compiler in Release mode. However, you can try this version and see if it will beat the original version of the code.
Instead of
interpolated.ToList()
which copies the whole array, compute the interpolated values directly in the final list (or return the array itself). This matters especially if the array/List is big enough to qualify for the large object heap.
Unlike the ordinary heap, the LOH is not compacted by the GC, which means that short-lived large objects are far more harmful than small ones.
Then again: 7000 doubles are approx. 56,000 bytes, which is below the large object threshold of 85,000 bytes.
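A minimal sketch of that idea, keeping the method shape from the question:

// Fill a pre-sized List directly instead of an array plus ToList(),
// which avoids the extra full copy at the end.
var interpolated = new List<double>(breaks.Count); // pre-size to avoid regrowth
for (int i = 0; i < breaks.Count; i++)
{
    double value = 0; // ... compute the interpolated value as before ...
    interpolated.Add(value);
}
return interpolated;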
Looks to me like you've created an O(n^2) algorithm: you search for the interval, which is O(n), and you probably do that n times. You'll get a quick and cheap speed-up by taking advantage of the fact that the items are already ordered in the list. Use BinarySearch(); that's O(log n).
If that's still not fast enough, you should be able to do something speedier with the outer loop: whatever interval you found previously should make it easier to find the next one. But that code isn't in your snippet.
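A rough sketch of the binary-search idea, assuming xItems is sorted ascending (a fragment meant to replace the inner while loop):

// Array.BinarySearch returns the index of the match, or the bitwise
// complement of the index of the first larger element when there is none.
double[] xs = xItems.ToArray(); // do this once, outside the loop
int idx = Array.BinarySearch(xs, breaks[i]);
if (idx < 0)
    idx = ~idx; // index of the first element greater than breaks[i]
// After clamping idx to [1, xs.Length - 1] for the border cases,
// xs[idx - 1] and xs[idx] bound the interpolation interval.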
I'd say profile the code and see where it spends its time; then you have somewhere to focus on.
ANTS is popular, but EQATEC is free, I think.
A few suggestions:
As others suggested, use a profiler to understand better where the time is spent.
The loop
while (breaks[x] < xItems[0])
could throw an exception if x grows bigger than the number of items in the breaks list. You should use something like
while (x < breaks.Count && breaks[x] < xItems[0])
But you might not need that loop at all. Why treat the first item as a special case? Just start with id = 0 and handle the first point in the for (i) loop. I understand that id might start from 0 in this case and [id - 1] would be a negative index, but see if you can do something there.
If you want to optimize for speed, you usually sacrifice memory size, and vice versa; you cannot have both unless you find a really clever algorithm. In this case, that means calculating as much as you can outside the loops, storing those values in variables (extra memory), and using them later. For example, instead of always writing:
id = xItems.Count - 1;
You could write:
int lastXItemsIndex = xItems.Count - 1;
...
id = lastXItemsIndex;
This is the same suggestion Petar Petrov made with aLimit, bLimit, and so on.
Next point: your loop (or the one Petar Petrov suggested):
while (breaks[i] > xItems[id])
{
    id++;
    if (id > xItems.Count - 1)
    {
        id = xItems.Count - 1;
        break;
    }
}
could probably be reduced to:
double currentBreak = breaks[i];
while (id <= lastXItemsIndex && currentBreak > xItems[id]) id++;
And the last point I would add: check whether your samples have some special property you can exploit. For example, if xItems represents time and you are sampling at regular intervals, then
w = xItems[id] - xItems[id - 1];
is constant, and you do not have to calculate it every time in the loop.
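A tiny sketch of that idea (assuming uniform spacing, which the question doesn't guarantee):

// If the x samples are uniformly spaced, the interval width is constant
// and can be computed once, outside the loop.
double w = xItems[1] - xItems[0];
// ... inside the loop: p = (breaks[i] - xItems[id - 1]) / w; as before ...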
This is probably not often the case, but maybe your problem has some other property which you could use to improve performance.
Another idea: maybe you do not need double precision; float is half the size, so it may be faster simply because less memory has to be moved.
Good luck
System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
I hope it's a Release build without DEBUG defined?
Otherwise, it might depend on what exactly those IList parameters are. It may also be useful to store the Count value in a local instead of accessing the property every time.
This is the kind of problem where you need to move over to native code.