I understand nested FOR loops. I understand what they do, and how they do it. But my problem is that they seem horribly unreadable to me.
Take this example:
for (int i = 0, y = 0; y <= ySize; y++) {
    for (int x = 0; x <= xSize; x++, i++) {
        vertices[i] = new Vector3(x, y);
    }
}
Now, this loop is pretty straightforward. It's just an x/y "2 dimensional" loop. But as I add more and more "dimensions" to this nested loop, is there a way to make the code not a horrible mess of nests within nests and stupid amounts of backtracing counter variables (i, x, y, z, etc.)?
Also, does additional nesting affect performance in a linear way, or do additonal FORs make things more and more inefficient as you nest more of them?
I think that the issue you have here is less the nested for loops, and more an unusual use of variables within the loops.
Newlines before the opening braces can help with readability too (although this is subjective).
How about this instead:
int i = 0;
for (int y = 0; y <= ySize; y++)
{
    for (int x = 0; x <= xSize; x++)
    {
        vertices[i++] = new Vector3(x, y);
    }
}
This approach should remain relatively readable for additional dimensions too (in this example I've moved the incrementing of i out to its own line, as suggested by usr).
int i = 0;
for (int y = 0; y <= ySize; y++)
{
    for (int x = 0; x <= xSize; x++)
    {
        for (int a = 0; a <= aSize; a++)
        {
            for (int b = 0; b <= bSize; b++)
            {
                vertices[i] = new Vector3(x, y, a, b);
                i++;
            }
        }
    }
}
Regarding performance, I would suggest focusing on making sure that the code is readable and understandable by a human first, and then measuring the run-time performance, possibly with a tool such as RedGate ANTS.
The usual solution is to refactor into methods which contain one or two for loops, and keep refactoring until each method is clear and not too large.
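For instance, a rough sketch of that kind of refactoring, using the example from the question (the method names here are just made up for illustration):

void FillVertices(Vector3[] vertices, int xSize, int ySize)
{
    int i = 0;
    for (int y = 0; y <= ySize; y++)
    {
        i = FillRow(vertices, i, y, xSize);
    }
}

int FillRow(Vector3[] vertices, int i, int y, int xSize)
{
    for (int x = 0; x <= xSize; x++)
    {
        vertices[i++] = new Vector3(x, y);
    }
    return i;
}

Each method now contains a single loop, and adding another dimension means adding another small method rather than another level of indentation.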
Another solution to stop the indenting, and to separate the loop-resulting-data from the applying-logic, is to use Linq.
int i = 0;
var coordinates = from y in Enumerable.Range(0, ySize + 1)
                  from x in Enumerable.Range(0, xSize + 1)
                  select new { x, y, i = i++ };
foreach (var coordinate in coordinates) {
    vertices[coordinate.i] = new Vector3(coordinate.x, coordinate.y);
}
This is only if the vertices array is already declared. If you can just create a new array, then you can simply do this:
var vertices = (from y in Enumerable.Range(0, ySize + 1)
                from x in Enumerable.Range(0, xSize + 1)
                select new Vector3(x, y)
               ).ToArray();
var vertices =
    (from y in Enumerable.Range(0, ySize + 1)
     from x in Enumerable.Range(0, xSize + 1)
     select new Vector3(x, y)).ToList();
Loops are overused. Most loops can be expressed as queries. That makes them easier to write and maintain and it makes them expressions as opposed to statements which are easier to move around.
Performance is much worse, like 3-10x here. Whether that matters to your specific case depends on how much time is spent here and what your performance goals are.
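If in doubt, measure it for your own sizes; a quick-and-dirty harness along these lines works (the Point struct and the sizes are placeholders, and a proper benchmarking tool would give more reliable numbers):

using System;
using System.Diagnostics;
using System.Linq;

class LoopVsQuery
{
    struct Point { public int X, Y; public Point(int x, int y) { X = x; Y = y; } }

    static void Main()
    {
        const int xSize = 1000, ySize = 1000;

        // fill via a query
        var sw = Stopwatch.StartNew();
        var fromQuery = (from y in Enumerable.Range(0, ySize)
                         from x in Enumerable.Range(0, xSize)
                         select new Point(x, y)).ToArray();
        Console.WriteLine("query: " + sw.ElapsedMilliseconds + " ms");

        // fill via nested loops
        sw.Restart();
        var fromLoops = new Point[xSize * ySize];
        int i = 0;
        for (int y = 0; y < ySize; y++)
            for (int x = 0; x < xSize; x++)
                fromLoops[i++] = new Point(x, y);
        Console.WriteLine("loops: " + sw.ElapsedMilliseconds + " ms");
    }
}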
a) Usually you will find that you're not going to need very deep nesting, so it won't be a problem.
b) You can make the nested loops into a separate method. (i.e. if you're nesting d in c in b in a - you can make a method that accepts a and b as parameters and does c and d. You can even let VS do this for you by selecting the c loop and clicking Edit->Refactor->Extract Method.)
As for performance - obviously more nesting means more iterations, but if you have them - you need them. Just changing a nested loop to be included in the original loop (and calculating where you are inside the "actual" code) will IMHO usually not help in any noticeable way.
An N-deep loop nest is likely to be more readable than any alternative expressible in C#. Instead, consider the use of a higher-level language that has vector arithmetic as a primitive: for instance, in NumPy the direct equivalent of your code is
import numpy as np

xSize = 10
ySize = 20
vertices = np.meshgrid(
    np.linspace(0, xSize, xSize),
    np.linspace(0, ySize, ySize))
and the N-dimensional generalization is
sizes = [10, 20, 30, 40, 10]  # as many array entries as you like
vertices = np.meshgrid(*(
    np.linspace(0, kSize, kSize)
    for kSize in sizes))
Just messing around here, but how about padding out spaces so that the loop conditions line up? Here is @Richard Everett's code reformatted a bit:
int i = 0;
for (int y = 0;             y <= ySize; y++) {
    for (int x = 0;         x <= xSize; x++) {
        for (int a = 0;     a <= aSize; a++) {
            for (int b = 0; b <= bSize; b++) {
                vertices[i++] = new Vector3(x, y, a, b);
            }
        }
    }
}
I'm writing a graphics program in C# and I couldn't figure out a good way to run a for loop between two values, where either one may be larger or smaller than the other.
To demonstrate, the following code works perfectly when X2>X1:
for (int x = X1; x<=X2; x++) {
//code
}
However, it fails when X2<X1. What I want to happen in this situation is that the loop starts at X1 and goes backwards until X2.
Since I'm writing a graphics program, I can't simply swap X1 and X2 when X2<X1, as this would mean swapping their associated Y values, which could produce the same problem, just for the Y values. The loop must always start at X1; it's the direction (+/-) that needs to change, not the order of the values.
I've thought of a few solutions, but they all have flaws. It's worth noting that X1 will never equal X2.
#1: Replicate loop
if (X2<X1) {
for (int x = X1; x>=X2; x--) {/*code*/}
} else {
for (int x = X1; x<=X2; x++) {/*code*/}
}
Unsuitable because of the replicated code, especially if the "//code" section is particularly long.
#2: Lots of ternaries
for (int x = X1; x!=X2+(X2<X1?-1:1); x+=(X2<X1?-1:1)) {/*code*/}
While this code works and is concise, its readability is terrible. Also, I've seen in various places that using "not equal to" as your loop condition is bad practice (source).
#3: Use a while loop
int x = X1;
while(true) {
//code
if (X2<X1) {
x--;
if (x<X2) break;
} else {
x++;
if (x>X2) break;
}
}
This solution seems very long and convoluted for such a simple task; in addition, the use of "while(true)" is also considered bad practice (source).
I think the most readable option is to simply create/extract a method from the repeating code (the first proposed version):
void ComputeRenderedStuff(int x)
{
// do your computations for x
}
if (X2<X1)
for (int x = X1; x>=X2; x--)
ComputeRenderedStuff(x);
else
for (int x = X1; x<=X2; x++)
ComputeRenderedStuff(x);
A simple solution would be to use one variable for the loop itself, and another variable for the steps:
int length = Math.Abs(x1-x2);
for(int i=0; i <= length; i++)
{
// step will go either from x1 to x2 or from x2 to x1.
int step = (x1 < x2) ? x1 + i : x2 + (length-i);
}
Of course, you can wrap the entire loop in a method, so you wouldn't have to repeat the code:
void ForLoopUnknownDirection(int start, int stop, Action<int> action)
{
int length = Math.Abs(start-stop);
for(int i=0; i <= length; i++)
{
int step = (start < stop) ? start + i : stop + (length-i);
action(step);
}
}
This way you can do whatever you want between the numbers while only writing the loop code once.
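A call might then look something like this (with whatever was in your //code block as the body of the lambda):

ForLoopUnknownDirection(X1, X2, x =>
{
    //code
});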
See a live demo on rextester
Simply use Math.Min() and Math.Max() to choose lower and upper boundaries.
Something like this:
int MinX = Math.Min(X1, X2);
int MaxX = Math.Max(X1, X2);
for (int x = MinX; x <= MaxX; x++) {
//code
}
Maybe extract a method like this
private static IEnumerable<int> Step(int start, int end)
{
if (start < end)
{
for (int x = start; x <= end; x++)
yield return x;
}
else
{
for (int x = start; x >= end; x--)
yield return x;
}
}
Then you can do
foreach (int x in Step(X1, X2))
{
/*code*/
}
Use a directional increment ( d in the code below )
var d = (x1 > x2) ? -1 : 1;
var i = x1;
while (i != x2 + d) // stop one step past x2 so x2 itself is included, matching the original <= loop
{
    //
    // insert your code here
    //
    i = i + d;
}
Am I purdy? 😜
Why so complicated?
// Assuming Count = Math.Abs(X2 - X1) + 1 and ascending = (X2 > X1),
// Math.Min(X1, X2) + Index then walks from X1 towards X2 in either direction.
for (int n = 0; n < Count; n++)
{
    int Index = (ascending ? n : Count - 1 - n);
}
I have a program with a nested nested nested loop (four loops altogether). I have a Boolean variable which I want to affect the code in the deepest loop, and the first nested loop, a small amount. My dilemma is that I don't really want to put the if/else statement inside the loops, as I would have thought that would check the Boolean's state every iteration, using extra time, and I know that the Boolean's state will not change once the loops start. This led me to think it would be better to place the if/else statement outside the loops and just have my loop code slightly changed; however, this also looks messy, as there is a lot of repeated code.
One thing I thought might work, but which I have little experience using, is a delegate: I could simply put some of the code into methods and then, depending on the state of betterColor, assign a delegate to the method with the appropriate code beforehand. But this also seems messy.
Below is what I am trying to avoid, as I thought it might slow down my algorithm:
for (short y = 0; y < effectImage.Height; y++)
{
int vCalc = (y <= radius) ? 0 : y - radius;
for (short x = 0; x < effectImage.Width; x++)
{
int red = 0, green = 0, blue = 0;
short kArea = 0;
for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++)
{
int calc = calcs[(y - v) + radius];
for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++)
{
if (betterColor == true)
{
red += colorImage[h, v].R * colorImage[h, v].R;
green += colorImage[h, v].G * colorImage[h, v].G;
blue += colorImage[h, v].B * colorImage[h, v].G;
kArea++;
}
}
}
if (betterColor == true)
effectImage.SetPixel(x, y, Color.FromArgb(red / kArea, green / kArea, blue / kArea));
else
effectImage.SetPixel(x, y, Color.FromArgb(Convert.ToInt32(Math.Sqrt(red / kArea)), Convert.ToInt32(Math.Sqrt(green / kArea)), Convert.ToInt32(Math.Sqrt(blue / kArea))));
}
if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated.
{
image.Image = effectImage;
image.Update();
}
}
And here is what my code now looks like:
if (betterColor == true)
{
for (short y = 0; y < effectImage.Height; y++)
{
int vCalc = (y <= radius) ? 0 : y - radius;
for (short x = 0; x < effectImage.Width; x++)
{
int red = 0, green = 0, blue = 0;
short kArea = 0;
for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++)
{
int calc = calcs[(y - v) + radius];
for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++)
{
red += colorImage[h, v].R * colorImage[h, v].R;
green += colorImage[h, v].G * colorImage[h, v].G;
blue += colorImage[h, v].B * colorImage[h, v].G;
kArea++;
}
}
effectImage.SetPixel(x, y, Color.FromArgb(Convert.ToInt32(Math.Sqrt(red / kArea)), Convert.ToInt32(Math.Sqrt(green / kArea)), Convert.ToInt32(Math.Sqrt(blue / kArea))));
}
if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated.
{
image.Image = effectImage;
image.Update();
}
}
}
else
{
for (short y = 0; y < effectImage.Height; y++)
{
int vCalc = (y <= radius) ? 0 : y - radius;
for (short x = 0; x < effectImage.Width; x++)
{
int red = 0, green = 0, blue = 0;
short kArea = 0;
for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++)
{
int calc = calcs[(y - v) + radius];
for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++)
{
red += colorImage[h, v].R;
green += colorImage[h, v].G;
blue += colorImage[h, v].B;
kArea++;
}
}
effectImage.SetPixel(x, y, Color.FromArgb(red / kArea, green / kArea, blue / kArea));
}
if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated.
{
image.Image = effectImage;
image.Update();
}
}
}
In terms of what the code does, it is a box blur that uses a circular kernel.
Moving the if out of the loop, and effectively duplicating the whole looping code is not really worth it. So if you have code like this:
for (i …)
{
if (something)
DoX(i);
else
DoY(i);
}
Then you should not replace it by this:
if (something)
{
for (i …)
DoX(i);
}
else
{
for (i …)
DoY(i);
}
Doing so will only make the code a lot more difficult to read and to maintain. One first needs to figure out that this is actually the same code being executed in each case (except for that one tiny difference), and it's super difficult to maintain once you need to change anything about the loop, since you have to make sure that you edit both cases properly.
While in theory, performing a single check vs. performing that check N times is obviously faster, in practice, this rarely matters. If-branches that rely on a constant boolean are super fast, so if you calculate the condition outside of the loop (in your case betterColor is set outside the loop), then the performance difference will not be noticeable at all. In addition, branch prediction will usually make sure that there is no difference at all in these cases.
So no, don’t rewrite that code like that. Keep it the way that is more understandable.
In general, you should avoid these kind of micro optimizations anyway. It is very likely that you algorithm has much slower parts that are much more relevant to the overall performance than small constructs like that. So focusing on those small things which are already very fast will not help you make the total execution faster. You should only optimize things that are an actual performance bottleneck in your code, where a profiler showed that there is a performance issue or that optimizing that code will actively improve your performance. And stay away from optimizations that make code less readable unless you really need it (in most cases you won’t).
The way I see it, the code is not equivalent (in your first example, if betterColor is false, nothing happens in the innermost loop).
But isn't this micro-optimizing?
You could probably do something with creating a function with a Func<> as an argument for the innermost loop, and then pass the correct func depending on the betterColor value.
i.e. Blur(betterColor ? FuncA : FuncB);
Although I don't think it will be faster than a boolean check... but that's my feeling.
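To sketch what that might look like (the Blur name and the delegate signatures are made up for illustration, and it assumes the colorImage, effectImage, calcs and radius members from the question): the two steps that actually differ, sampling a color and combining the totals back into a color, are passed in as delegates, so the loop structure is written only once.

void Blur(Func<Color, (int r, int g, int b)> sample,
          Func<int, int, int, int, Color> combine)
{
    for (short y = 0; y < effectImage.Height; y++)
    {
        int vCalc = (y <= radius) ? 0 : y - radius;
        for (short x = 0; x < effectImage.Width; x++)
        {
            int red = 0, green = 0, blue = 0;
            short kArea = 0;
            for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++)
            {
                int calc = calcs[(y - v) + radius];
                for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++)
                {
                    var (r, g, b) = sample(colorImage[h, v]);
                    red += r;
                    green += g;
                    blue += b;
                    kArea++;
                }
            }
            effectImage.SetPixel(x, y, combine(red, green, blue, kArea));
        }
    }
}

// Called as, for example:
// Blur(c => (c.R * c.R, c.G * c.G, c.B * c.B),
//      (r, g, b, n) => Color.FromArgb((int)Math.Sqrt(r / n), (int)Math.Sqrt(g / n), (int)Math.Sqrt(b / n)));
// or
// Blur(c => (c.R, c.G, c.B),
//      (r, g, b, n) => Color.FromArgb(r / n, g / n, b / n));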
I feel like I'm missing something terribly obvious, but I cannot seem to find the array pair with the lowest value.
I have an int[,] worldMapXY where a 2D map is stored, say worldMapXY[0,0] through worldMapXY[120,120]. All values of the map's array are 1 (wall/invalid) or 0 (path/valid).
I'm writing a method that will find coordinates in one of the eight cardinal directions to create a spawn point. So I also have an int[,] validSpotArr, which covers the subset of the map's bounds closest to the direction I'm setting the spawn in. The values for wall/invalid locations are set to 9999; the values for path/valid locations are set to (x + y). This is all specific to the bottom-left corner, nearest to [0,0], hence "BL" or "Bottom Left".
case "BL":
for (int x = (int)border + 1; x < worldX + (int)border / 4; x++)
{
for (int y = (int)border + 1; y < worldY + (int)border / 4; y++)
{
if (worldMapXY[x,y] = 0)
{
validSpotArr[x,y] = x + y;
}
else
{
validSpotArr[x,y] = 9999;
}
}
}
What I can't quite wrap my head around is how to determine the coordinates/index of validSpotArr with the lowest value in such a way that I could pass those as separate x and y coordinates to another function (to set the spawn point). I suspect there's a lambda operator that may help, but I literally don't understand lambdas. Clearly that needs to be my next point of study.
E.g. - if validSpotArr[23, 45] = 68, and 68 is the lowest value, how do I set x=23 and y=45?
Edit: I tried messing around with something like this, but it isn't right:
Array.IndexOf(validSpotArr, validSpotArr.Min());
While not precisely an answer to your question, in a situation as narrowly defined as this one I'd probably go for finding those from within the loops, i.e.
int minValidSpot = int.MaxValue, minX = -1, minY = -1;
for (int x = (int)border + 1; x < worldX + (int)border / 4; x++)
{
    for (int y = (int)border + 1; y < worldY + (int)border / 4; y++)
    {
        if (worldMapXY[x, y] == 0)
        {
            validSpotArr[x, y] = x + y;
        }
        else
        {
            validSpotArr[x, y] = 9999;
        }

        if (minValidSpot > validSpotArr[x, y])
        {
            minValidSpot = validSpotArr[x, y];
            minX = x;
            minY = y;
        }
    }
}
Other than that, if looking for some kind of more universal solution, I'd probably just flatten that array, the maths for index conversion (nD<=>1D) are pretty simple.
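For example, a sketch of that flattened approach for this case, assuming validSpotArr has been filled everywhere (otherwise restrict k to the filled range): it walks the array once, tracks the flat index of the smallest value, and converts it back to x/y at the end.

int w = validSpotArr.GetLength(0);
int h = validSpotArr.GetLength(1);

int minValue = int.MaxValue;
int minIndex = 0;
for (int k = 0; k < w * h; k++)
{
    int value = validSpotArr[k / h, k % h]; // flat index k maps to (x, y) = (k / h, k % h)
    if (value < minValue)
    {
        minValue = value;
        minIndex = k;
    }
}

int minX = minIndex / h;
int minY = minIndex % h;
// pass minX and minY to the spawn-point function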
I have a 2D array of a custom Vector class, around 250 x 250 in dimensions. The Vector class just stores the x and y float components of the vector. My project requires that I perform a smoothing function on the array, so that a new array is created by taking the local average of the i indices around each vector in the array. My problem is that my current solution does not compute fast enough, and I was wondering if there is a better way of computing this.
Pseudo code for my current solution can be seen below. I am implementing this in C#; any help would be much appreciated. My actual solution uses 1D arrays for the speed-up, but I didn't include that here.
function smoothVectorArray(Vector[,] myVectorArray, int averagingDistance) {
newVectorArray = new Vector[250,250];
for (x = 0; x < 250; x++)
{
for (y = 0; y < 250; y++)
{
vectorCount = 0;
vectorXTotal = 0;
vectorYTotal = 0;
for (i = -averagingDistance; i < averagingDistance + 1; i++)
{
for (j = -averagingDistance; j < averagingDistance + 1; j++)
{
tempX = x + i;
tempY = y + j;
if (inArrayBounds(tempX, tempY)) {
vectorCount++;
vectorXTotal += myVectorArray[tempX, tempY].x;
vectorYTotal += myVectorArray[tempX, tempY].y;
}
}
}
newVectorArray[x, y] = new Vector(vectorXTotal / vectorCount, vectorYTotal / vectorCount);
}
}
return newVectorArray;
}
What your inner loops do is calculate a sum over a rectangular area:
for (i = -averagingDistance; i < averagingDistance + 1; i++)
    for (j = -averagingDistance; j < averagingDistance + 1; j++)
You can pre-calculate those efficiently in O(n^2). Let's introduce array S[N][N] (where N = 250 in your case).
To make it simpler I will assume there is only one coordinate. You can easily adapt it to pair (x, y) by building 2 arrays.
S[i, j] will be the sum of the sub-rectangle (0, 0)-(i, j).
We can build this array efficiently:
S[0, 0] = myVectorArray[0, 0]; // rectangle (0, 0)-(0, 0) has only one cell: (0, 0)
for (int i = 1; i < N; ++i){
    S[0, i] = S[0, i-1] + myVectorArray[0, i]; // rectangle (0, 0)-(0, i) is built from the previous rectangle (0, 0)-(0, i-1) plus the new cell (0, i)
    S[i, 0] = S[i - 1, 0] + myVectorArray[i, 0]; // same for (0, 0)-(i, 0)
}
for (int i = 1; i < N; ++i){
    var currentRowSum = myVectorArray[i, 0];
    for (int j = 1; j < N; ++j){
        currentRowSum += myVectorArray[i, j]; // keep track of the sum in the current row
        S[i, j] = S[i - 1, j] + currentRowSum; // rectangle (0, 0)-(i, j) is constructed as rectangle (0, 0)-(i-1, j), which is already calculated, plus the current row sum
    }
}
Once we have this partial-sums array calculated, we can get any sub-rectangle sum in O(1). Let's say we want the sum over the rectangle (a, b)-(c, d).
To get it we start with the big rectangle (0, 0)-(c, d), from which we subtract (0, 0)-(a-1, d) and (0, 0)-(c, b-1), and then add back the rectangle (0, 0)-(a-1, b-1), since it was subtracted twice.
This way you can get rid of your inner loops.
https://en.wikipedia.org/wiki/Summed_area_table
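A small helper for that O(1) query might look like this (assuming 0 <= a <= c and 0 <= b <= d, and using double for the single-coordinate case described above):

double RectSum(double[,] S, int a, int b, int c, int d)
{
    // sum over the rectangle (a, b)-(c, d), inclusive
    double total = S[c, d];
    if (a > 0) total -= S[a - 1, d];
    if (b > 0) total -= S[c, b - 1];
    if (a > 0 && b > 0) total += S[a - 1, b - 1];
    return total;
}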
You will definitely want to take advantage of CPU cache for the solution, it sounds like you have that in mind with your 1D array solution. Try to arrange the algorithm to work on chunks of contiguous memory at a time, rather than hopping around the array. To this point you should either use a Vector struct, rather than a class, or use two arrays of floats, one for the x values and one for the y values. By using a class, your array is storing pointers to various spots in the heap. So even if you iterate over the array in order, you are still missing the cache all the time as you hop to the location of the Vector object. Every cache miss is ~200 cpu cycles wasted. This would be the main thing to work out first.
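For instance, something along these lines (a sketch, not the actual Vector class from the question):

// A small struct keeps the two floats inline in the array, so iterating the
// array walks contiguous memory instead of chasing object references.
public struct Vector
{
    public float X;
    public float Y;

    public Vector(float x, float y)
    {
        X = x;
        Y = y;
    }
}

// Vector[,] grid = new Vector[250, 250]; // 250 * 250 * 8 bytes, one contiguous block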
After that, some micro-optimizations you can consider are
using an inlining hint on the inArrayBounds method: [MethodImpl(MethodImplOptions.AggressiveInlining)]
using unsafe code and iterating with pointer arithmetic to avoid array bounds-checking overhead
These last two ideas may or may not have any significant impact, you should test.
I have read the question for Performance of 2-dimensional array vs 1-dimensional array
But its conclusion seems to be that they could be the same (depending on each one's own mapping function; C does this automatically)...
I have a matrix which has 1,000 columns and 440,000,000 rows, where each element is a double, in C#...
If I am doing some computations in memory, which one would be better from a performance point of view? (Note that I have the memory needed to hold such a monstrous quantity of information.)
If what you're asking is which is better, a 2D array of size 1000x44000 or a 1D array of size 44000000, well what's the difference as far as memory goes? You still have the same number of elements! In the case of performance and understandability, the 2D is probably better. Imagine having to manually find each column or row in a 1D array, when you know exactly where they are in a 2D array.
It depends on how many operations you are performing. In the below example, I'm setting the values of the array 2500 times. Size of the array is (1000 * 1000 * 3). The 1D array took 40 seconds and the 3D array took 1:39 mins.
var startTime = DateTime.Now;
Test1D(new byte[1000 * 1000 * 3]);
Console.WriteLine("Total Time taken 1d = " + (DateTime.Now - startTime));
startTime = DateTime.Now;
Test3D(new byte[1000,1000,3], 1000, 1000);
Console.WriteLine("Total Time taken 3D = " + (DateTime.Now - startTime));
public static void Test1D(byte[] array)
{
for (int c = 0; c < 2500; c++)
{
for (int i = 0; i < array.Length; i++)
{
array[i] = 10;
}
}
}
public static void Test3D(byte[,,] array, int w, int h)
{
for (int c = 0; c < 2500; c++)
{
for (int i = 0; i < h; i++)
{
for (int j = 0; j < w; j++)
{
array[i, j, 0] = 10;
array[i, j, 1] = 10;
array[i, j, 2] = 10;
}
}
}
}
The difference between double[1000,44000] and double[44000000] will not be significant.
You're probably better off with the [,] version (letting the compiler(s) figure out the addressing). But the pattern of your calculations is likely to have more impact (locality and cache use).
Also consider the array-of-array variant, double[1000][]. It is a known 'feature' of the Jitter that it cannot eliminate range-checking in the [,] arrays.
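For reference, the three layouts side by side (sizes and indices here are just placeholders):

const int rows = 1000, cols = 1000;
int r = 3, c = 7;

// Rectangular array: one contiguous block; the runtime does the addressing.
double[,] rect = new double[rows, cols];
rect[r, c] = 1.0;

// Flat array: same memory layout, manual index computation.
double[] flat = new double[rows * cols];
flat[r * cols + c] = 1.0;

// Jagged (array-of-arrays): each row is its own array; the JIT can often
// eliminate the bounds check on the inner arrays.
double[][] jagged = new double[rows][];
for (int i = 0; i < rows; i++)
    jagged[i] = new double[cols];
jagged[r][c] = 1.0;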