Optimizing this enormous for loop - C#

So I have this for loop:
for (int i = 0; i < meshes.Count; i++)
{
    for (int j = 0; j < meshes.Count; j++)
    {
        for (int m = 0; m < meshes[i].vertices.Length; m++)
        {
            for (int n = 0; n < meshes[i].vertices.Length; n++)
            {
                if ((meshes[i].vertices[m].x == meshes[j].vertices[n].x) && (meshes[i].vertices[m].z == meshes[j].vertices[n].z))
                {
                    if (meshes[i].vertices[m] != meshes[j].vertices[n])
                    {
                        meshes[i].vertices[m].y = meshes[j].vertices[n].y;
                    }
                }
            }
        }
    }
}
Which goes through a few million vectors and compares them to all other vectors, to then modify some of their y values. I think it works, however after hitting play it takes an unbelievably long time to load (currently been waiting for 15 minutes, and still going). Is there a way to make it more efficient? Thanks for the help!

As I read this, what you're basically doing is: for all vertices with the same x and z, you set their y values to the same value.
A more optimized way would be to use the LINQ method GroupBy, which internally uses hashing and avoids the quadratic time complexity of your current approach:
var vGroups = meshes.SelectMany(mesh => mesh.vertices)
    .GroupBy(vertex => new { vertex.x, vertex.z });
foreach (var vGroup in vGroups)
{
    vGroup.Aggregate((prev, curr) =>
    {
        // If prev is null (i.e. first iteration of the "loop")
        // don't change the 'y' value
        curr.y = prev?.y ?? curr.y;
        return curr;
    });
}
// All vertices should now be updated in the 'meshes'
Note that the final y value of the vertices depends on the order of the meshes and vertices in your original list. The first vertex in each vGroup is the deciding vertex. I believe that's the opposite of your approach, where the last vertex is the deciding one, but it doesn't sound like that's important for you.
Furthermore, be aware that in this (and your) approach you are possibly merging two vertices in the same mesh if two vertices have the same x and z values. I don't know if that's intended but I wanted to point it out.
An additional performance optimization would be to parallelize this. Just start out with a call to AsParallel:
var vGroups = meshes.AsParallel()
    .SelectMany(mesh => mesh.vertices)
    .GroupBy(vertex => new { vertex.x, vertex.z });
// ...
Be aware that parallelization does not always speed things up: if the computation you are trying to parallelize is not computationally expensive enough, the overhead from parallelizing it may outweigh the benefits. I'm not sure whether the GroupBy operation is heavy enough for it to be beneficial, so you'll have to test that out for yourself. Try it without first.
For a simplified example, see this fiddle.

You want to make y equal for all vertices with the same x and z. Let's do just that:
var yForXZDict = new Dictionary<(int, int), int>();
foreach (var mesh in meshes)
{
    foreach (var vertex in mesh.vertices)
    {
        var xz = (vertex.x, vertex.z);
        if (yForXZDict.TryGetValue(xz, out var y))
        {
            vertex.y = y;
        }
        else
        {
            yForXZDict[xz] = vertex.y;
        }
    }
}
You should replace int with the exact type you use for coordinates.
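If these are Unity meshes, keep in mind that mesh.vertices returns a copy of a Vector3[] and Vector3 is a struct, so the dictionary approach would have to modify the copied array by index and assign it back. A sketch under that assumption:
var yForXZ = new Dictionary<(float, float), float>();
foreach (var mesh in meshes)
{
    var vertices = mesh.vertices;            // Unity returns a copy of the array
    for (int i = 0; i < vertices.Length; i++)
    {
        var xz = (vertices[i].x, vertices[i].z);
        if (yForXZ.TryGetValue(xz, out var y))
            vertices[i].y = y;               // reuse the first y seen for this (x, z)
        else
            yForXZ[xz] = vertices[i].y;      // remember the first y for this (x, z)
    }
    mesh.vertices = vertices;                // write the modified copy back to the mesh
}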

You are comparing twice unnecessarily.
Here a short example of what I mean:
Let's say we have meshes A, B, C.
You are comparing
A, A
A, B
A, C
B, A
B, B
B, C
C, A
C, B
C, C
which checks, for example, the combination A and B twice.
A first easy improvement would be to use e.g.
for (int i = 0; i < meshes.Count; i++)
{
    // only check the current and following meshes
    for (int j = i; j < meshes.Count; j++)
    {
        ...
Do you even want to compare a mesh with itself? If not, you can actually use j = i + 1 and only compare the current mesh to the following meshes.
Then for the vertices it depends: if you do also want to check a mesh against itself, you at least want int n = m + 1 in the case that i == j.
It makes no sense to check a vertex with itself since the condition will always be true.
The next point is to minimize accesses.
You are accessing e.g.
meshes[i].vertices
five times!
Rather, get and store it once, e.g.
// To minimize GC it sometimes makes sense to reuse variables outside of a loop
Mesh meshA;
Mesh meshB;
Vector3[] vertsA;
Vector3[] vertsB;
Vector3 vA;
Vector3 vB;
for (int i = 0; i < meshes.Count; i++)
{
    meshA = meshes[i];
    vertsA = meshA.vertices;
    for (int j = i; j < meshes.Count; j++)
    {
        meshB = meshes[j];
        vertsB = meshB.vertices;
        for (int m = 0; m < vertsA.Length; m++)
        {
            vA = vertsA[m];
            ...
Also note that a line like
meshes[i].vertices[m].y = meshes[j].vertices[n].y;
actually has no effect on the mesh!
In Unity, Mesh.vertices is a property that returns a copy of the Vector3[] array (and Vector3 is a struct), so assigning to
meshes[i].vertices[m].y
only changes an element of that temporary copy and doesn't change the mesh's actual vertex data in any way.
You would rather work with the vA as mentioned before and at the end assign it back via
vertsA[m] = vA;
and then at the end of the loop assign the entire array back once via
meshA.vertices = vertsA;
And finally: I would put this into a Thread or use Unity's Job System and the Burst compiler, and meanwhile display a progress bar or some user feedback instead of freezing the entire application.
Yet another point is floating-point precision:
you are directly comparing two float values using ==. Due to floating-point precision this might fail even when it shouldn't, e.g.
10f * 0.1f == 1f
is not necessarily true. It might be 0.99999999 or 1.0000000001.
Therefore Unity uses only a precision of 0.00001 for Vector3 == Vector3.
You should either do the same and use
if (Mathf.Abs(vA.x - vB.x) <= 0.00001f)
or use
if (Mathf.Approximately(vA.x, vB.x))
which is roughly equivalent to
if (Mathf.Abs(vA.x - vB.x) <= Mathf.Epsilon)
where Epsilon is the smallest value by which two floats can differ.
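Putting those points together, a minimal sketch of what the combined loop could look like (assuming Unity's Mesh and Vector3, that you do want to compare a mesh with itself, and reusing the 0.00001 tolerance from above):
const float tolerance = 0.00001f;
for (int i = 0; i < meshes.Count; i++)
{
    var meshA = meshes[i];
    var vertsA = meshA.vertices;                     // fetch the array copy once
    for (int j = i; j < meshes.Count; j++)
    {
        // when j == i, work on the same array instead of a second copy
        var vertsB = (j == i) ? vertsA : meshes[j].vertices;
        for (int m = 0; m < vertsA.Length; m++)
        {
            var vA = vertsA[m];
            // when comparing a mesh with itself, skip pairs that were already checked
            for (int n = (j == i) ? m + 1 : 0; n < vertsB.Length; n++)
            {
                var vB = vertsB[n];
                if (Mathf.Abs(vA.x - vB.x) <= tolerance &&
                    Mathf.Abs(vA.z - vB.z) <= tolerance)
                {
                    vA.y = vB.y;                     // structs: modify the local copy...
                }
            }
            vertsA[m] = vA;                          // ...and write it back into the array
        }
    }
    meshA.vertices = vertsA;                         // assign the modified array back to the mesh
}
This is still O(n²) over all vertex pairs; combining it with the dictionary or GroupBy idea from the other answers removes that quadratic cost entirely.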

Related

How to multiply 2 matrices using Parallel.ForEach?

There is a function that multiplies two matrices in the usual way:
public IMatrix Multiply(IMatrix m1, IMatrix m2)
{
    var resultMatrix = new Matrix(m1.RowCount, m2.ColCount);
    for (long i = 0; i < m1.RowCount; i++)
    {
        for (byte j = 0; j < m2.ColCount; j++)
        {
            long sum = 0;
            for (byte k = 0; k < m1.ColCount; k++)
            {
                sum += m1.GetElement(i, k) * m2.GetElement(k, j);
            }
            resultMatrix.SetElement(i, j, sum);
        }
    }
    return resultMatrix;
}
This function should be rewritten using Parallel.ForEach threading. I tried this way:
public IMatrix Multiply(IMatrix m1, IMatrix m2)
{
    // todo: feel free to add your code here
    var resultMatrix = new Matrix(m1.RowCount, m2.ColCount);
    Parallel.ForEach(m1.RowCount, row =>
    {
        for (byte j = 0; j < m2.ColCount; j++)
        {
            long sum = 0;
            for (byte k = 0; k < m1.ColCount; k++)
            {
                sum += m1.GetElement(row, k) * m2.GetElement(k, j);
            }
            resultMatrix.SetElement(row, j, sum);
        }
    });
    return resultMatrix;
}
But there is an error with the type argument in the loop. How can I fix it?
Just use Parallel.For instead of Parallel.ForEach; that should let you keep the exact same body as the non-parallel version:
Parallel.For(0, m1.RowCount, i =>
{
    ...
});
Note that only fairly large matrices will benefit from parallelization, so if you are working with 4x4 matrices for graphics, this is not the approach to take.
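For reference, a sketch of the complete method using that overload (it needs using System.Threading.Tasks, and assumes SetElement can safely be called for distinct cells from different threads, which holds for a plain array-backed Matrix):
public IMatrix Multiply(IMatrix m1, IMatrix m2)
{
    var resultMatrix = new Matrix(m1.RowCount, m2.ColCount);
    // each parallel iteration writes only to its own row of the result
    Parallel.For(0L, m1.RowCount, i =>
    {
        for (byte j = 0; j < m2.ColCount; j++)
        {
            long sum = 0;
            for (byte k = 0; k < m1.ColCount; k++)
            {
                sum += m1.GetElement(i, k) * m2.GetElement(k, j);
            }
            resultMatrix.SetElement(i, j, sum);
        }
    });
    return resultMatrix;
}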
One problem with multiplying matrices is that in your innermost loop you need to access one value per row of one of the matrices. This access pattern is hard for your processor to cache, causing lots of cache misses. So a fairly easy optimization is to copy an entire column to a temporary array and do all computations that need this column before reading the next; this makes all memory access nice and linear and easy to cache. It does more work overall, but the better cache utilization easily makes it a win. There are even more cache-efficient methods, but the complexity also tends to increase.
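A sketch of that column-caching idea, reusing the IMatrix/Matrix types from the question and assuming GetElement returns an integral value that fits in a long:
public IMatrix MultiplyColumnCached(IMatrix m1, IMatrix m2)
{
    var resultMatrix = new Matrix(m1.RowCount, m2.ColCount);
    var column = new long[m1.ColCount];              // temporary buffer, reused for each column of m2
    for (byte j = 0; j < m2.ColCount; j++)
    {
        // copy column j of m2 once, so the innermost loop reads linear memory
        for (byte k = 0; k < m1.ColCount; k++)
            column[k] = m2.GetElement(k, j);

        for (long i = 0; i < m1.RowCount; i++)
        {
            long sum = 0;
            for (byte k = 0; k < m1.ColCount; k++)
                sum += m1.GetElement(i, k) * column[k];   // both reads are now sequential
            resultMatrix.SetElement(i, j, sum);
        }
    }
    return resultMatrix;
}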
Another optimization would be to use SIMD, but this might require platform specific code for best performance, and will likely involve more work. But you might be able to find libraries that are already optimized.
But perhaps most importantly: profile your code. It is quite easy for simple things to consume a lot of time. You are, for example, using an interface, so you may have a virtual method call for each element access that cannot be inlined, potentially causing a severe performance penalty compared to direct array access.
ForEach receives a collection (an IEnumerable) as its first argument, and m1.RowCount is a number.
Probably Parallel.For() is what you wanted.

C# - How to place a given number of random mines in an array

I'm new to C#. I have a task to make a kind of Minesweeper, but one that immediately opens the solution.
static void Main(string[] args)
{
    Console.Write("Enter the width of the field: ");
    int q = Convert.ToInt32(Console.ReadLine());
    Console.Write("Enter the length of the field: ");
    int w = Convert.ToInt32(Console.ReadLine());
    Console.Write("Enter the number of bombs: ");
    int c = Convert.ToInt32(Console.ReadLine());
    Random rand = new Random();
    var charArray = new char[q, w];
    var intArray = new int[q, w];
    for (int i = 0; i < q; i++)
    {
        for (int j = 0; j < w; j++)
        {
            intArray[i, j] = rand.Next(2);
            charArray[i, j] = intArray[i, j] == 0 ? '_' : '*';
            Console.Write(charArray[i, j]);
        }
        Console.WriteLine();
    }
}
Two arrays should be output. On the first one everything should be closed, i.e. it should contain only the characters _ and *.
0 means a place without a mine; I replaced it with the symbol _.
1 means a place with a mine; I replaced it with the asterisk symbol. But the code does not respect the number of mines entered by the user, and there must be exactly as many * characters as there are mines.
The second array should show the already opened solution of the game, i.e. the cells next to mines should contain the number of mines adjacent to that cell.
Please help me.
Completing your current code, you can place exactly the requested number of mines like this:
Random random = new Random();
while (c > 0)
{
    var rq = random.Next(q);
    var rw = random.Next(w);
    if (intArray[rq, rw] == 0)
    {
        intArray[rq, rw] = 1;
        c--;
    }
}
I would suggest dividing the problem into smaller, manageable chunks. For instance, you can place the bombs in an initial step and build the solution in a second step. You can build the solution at the same time you place the bombs, although for clarity you can do it afterwards.
Naming of variables is also important. If you prefer single-letter variable names, I believe that's fine for the size of this problem, but I would use meaningful letters that are easier to remember, e.g. W and H for the width and height of the board, and B for the total number of bombs.
The first part of the problem then can be described as placing B bombs in a WxH board. So instead of having nested for statements that enumerate WxH times, it's better to have a while loop that repeats the bomb placing logic as long as you have remaining bombs.
Once you generate a new random location on the board, you have to check you haven't placed a bomb there already. You can have an auxiliary function HasBomb that checks that:
bool HasBomb(char[,] charArray, int x, int y)
{
    return charArray[x, y] == '*';
}
I'll leave error checking out; being private, this function can rely on the caller sending valid coordinates.
Then the bomb placing procedure can be something like:
int remainingBombs = B;
while (remainingBombs > 0)
{
    int x = rand.Next(W);
    int y = rand.Next(H);
    if (!HasBomb(charArray, x, y))
    {
        charArray[x, y] = '*';
        remainingBombs--;
    }
}
At this point you may figure out another concern: if the number B of bombs to place is larger than the number of available positions on the board, W x H, then you won't be able to place all the bombs. You'll have to check that restriction when requesting the values for W, H and B.
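For instance, a minimal version of that check, using the W, H and B names from above:
// reject impossible input before trying to place the bombs
if (B > W * H)
{
    Console.WriteLine("Too many bombs: a {0}x{1} board has only {2} cells.", W, H, W * H);
    return;
}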
Then in order to create the array with the number of bombs next to each position, you'll need some way to check for all the neighbouring positions to a given one. If the position is in the middle of the board it has 8 neighbour positions, if it's on an edge it has 5, and if it's on a corner it has 3. Having a helper function return all the valid neighbour positions can be handy.
IEnumerable<(int X, int Y)> NeighbourPositions(int x, int y, int W, int H)
{
    bool leftEdge = x == 0;
    bool topEdge = y == 0;
    bool rightEdge = x == W - 1;
    bool bottomEdge = y == H - 1;
    if (!leftEdge && !topEdge)
        yield return (x - 1, y - 1);
    if (!topEdge)
        yield return (x, y - 1);
    if (!rightEdge && !topEdge)
        yield return (x + 1, y - 1);
    if (!leftEdge)
        yield return (x - 1, y);
    if (!rightEdge)
        yield return (x + 1, y);
    if (!leftEdge && !bottomEdge)
        yield return (x - 1, y + 1);
    if (!bottomEdge)
        yield return (x, y + 1);
    if (!rightEdge && !bottomEdge)
        yield return (x + 1, y + 1);
}
This function uses iterators and tuples. If you feel those concepts are too complex, since you said you are new to C#, you can make the function return a list of coordinates instead.
Now the only thing left is to iterate over the whole intArray and increment the value on each position for each neighbour bomb you find.
for (int x = 0; x < W; x++)
{
    for (int y = 0; y < H; y++)
    {
        foreach (var n in NeighbourPositions(x, y, W, H))
        {
            if (HasBomb(charArray, n.X, n.Y))
                intArray[x, y]++;
        }
    }
}
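To print the opened solution afterwards, you could keep '*' where a bomb sits and print the neighbour count everywhere else; a small sketch:
for (int y = 0; y < H; y++)
{
    for (int x = 0; x < W; x++)
    {
        // bombs stay as '*', other cells show how many bombs surround them
        Console.Write(HasBomb(charArray, x, y) ? '*' : (char)('0' + intArray[x, y]));
    }
    Console.WriteLine();
}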
The answers here are mostly about generating a random x and a random y in a loop and trying to put the mine into an empty cell. That is an OK solution, but if you think about it, it is not that efficient: every time you pick a new random cell, there is a chance that the cell is already a mine. That is fine if you don't put too many mines into your field, but with a larger number of mines this happens quite often, which means the loop can take longer than usual. Theoretically, if you wanted to put 999 mines into a 1000-cell field, it would be really hard for the loop to fill all the necessary cells, especially for the last mine. I am not saying the solutions here are bad; they are perfectly alright for many people. But for anyone who wants a slightly more efficient solution, I have tried to create one.
Solution
In this solution, you iterate over each cell and calculate the probability of a mine being placed there. I have come up with a simple formula that calculates this probability.
Every time you try to place a mine into a cell, you evaluate the formula and compare it to a randomly generated number:
bool isMine = random.NextDouble() < calcProbability();
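Assuming the intended formula is the classic selection-sampling one, i.e. the number of mines still to place divided by the number of cells still to visit (which ends up placing exactly c mines), here is a sketch using the q, w, c and intArray names from the question:
var random = new Random();
int minesLeft = c;                  // c = number of bombs entered by the user
int cellsLeft = q * w;              // q, w = board dimensions
for (int i = 0; i < q; i++)
{
    for (int j = 0; j < w; j++)
    {
        // probability that this cell gets one of the remaining mines
        double probability = (double)minesLeft / cellsLeft;
        bool isMine = random.NextDouble() < probability;
        if (isMine)
        {
            intArray[i, j] = 1;
            minesLeft--;
        }
        cellsLeft--;
    }
}
This visits every cell exactly once and never has to retry an already-mined cell.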

Start array search on the END of the (2D) array

I'd like to start the search for an int in my array at the end of the array, not at the beginning. How do I do this?
Please keep in mind that it's a 2D array.
Edit: someone asked how I search from the start.
In my code I'm printing an 8x8 matrix displaying random numbers (1 - 10). The user gives a number to search for in the matrix, and the code gives back the position at which the number is FIRST found:
Position position = new Position();
Position LookForNumber(int[,] matrix, int findNumber)
{
    for (int r = 0; r < matrix.GetLength(0); r++)
    {
        for (int c = 0; c < matrix.GetLength(1); c++)
        {
            if (matrix[r, c] == findNumber)
            {
                Console.WriteLine("Your number {0} first appears on position {1},{2}", findNumber, r, c);
                position.row = r;
                position.column = c;
                return position;
            }
        }
    }
    return position;
}
Now I'd like to find the last position of the number that the user entered.
To search a 2D array you need 2 loops; however, you can run the loops in reverse order:
Position LookForNumber(int[,] matrix, int findNumber)
{
    Position position = new Position();
    for (int r = matrix.GetLength(0) - 1; r >= 0; r--)
    {
        for (int c = matrix.GetLength(1) - 1; c >= 0; c--)
        {
            if (matrix[r, c] == findNumber)
            {
                Console.WriteLine("Your number {0} last appears on position {1},{2}", findNumber, r, c);
                position.row = r;
                position.column = c;
                return position;
            }
        }
    }
    return null;
}
The fact that the array is 2D is irrelevant to your issue.
See how the most regular basic loop works?
for (int i = 0; i < array.Length; i++)
{
    ...
}
You're essentially doing this, if you had to translate the code to English:
For all elements, starting with 0 (which we assign to i), stopping before we reach array.Length, and adding 1 to i between each iteration, do the following (...)
So all you have to do really, is change the parameters of the loop.
For example, you could say your starting point (which is i), could be something other than 0. What would you use instead of zero if you want to start at the end of your array?
Then, if you're starting at the end, can you really ADD values to i between iterations? Not really, you'd go out of bounds, so what should you do instead of i++?
At that point, you need a stopping point if you're iterating from end to start. And the answer lies in the question: my stopping point is the start of the array. So the middle parameter of your loop (which is the stopping condition) should be something that represents the start of your array. How would you write that down?
Note that you could also iterate every second element now that you understand the logic, for example.
And the fact you're in a 2D loop just means you need one inner loop inside your main loop, which will work the same way as the outer loop. Just imagine the indexes as a canvas with coordinates, instead of just a line with points.
If you don't understand all I meant I'll just write down the answer, but I think the explanation is clear enough that you will understand :)

Sudoku Backtracking - Order of walking through fields by amount of possible values

I've created a backtracking algorithm for solving Sudoku puzzles which basically walks through all the empty fields from left to right, top down. Now I need to make an extended version in which the order in which the algorithm walks through the fields is defined by the number of possibilities (calculated once, at initialization) for each of the fields. E.g. empty fields which initially have the fewest possible values should be visited first, and only the initially possible values should be checked (both to reduce the number of iterations needed). I'm not sure how to implement this without increasing the number of iterations needed to actually determine these values for each field and then obtain the next field with the fewest possibilities.
For my backtracking algorithm I have a nextPosition method which determines the next empty field in the sudoku to visit. Right now it looks like this:
protected virtual int[] nextPosition(int[] position)
{
    int[] nextPosition = new int[2];
    if (position[1] == (n * n) - 1)
    {
        nextPosition[0] = position[0] + 1;
        nextPosition[1] = 0;
    }
    else
    {
        nextPosition[0] = position[0];
        nextPosition[1] = position[1] + 1;
    }
    return nextPosition;
}
So it basically walks through the sudoku left to right, top down. Now I need to alter this for my new version to walk through the fields ordered by the fewest number of possible values (and to try only the possible values for each field in my backtracking algorithm). I figured I'd try to keep a list of invalid values for each field:
public void getInvalidValues(int x, int y)
{
    // values already present in row y
    for (int i = 0; i < n * n; i++)
        if (grid[y, i] != 0)
            this.invalidValues[y, x].Add(grid[y, i]);
    // values already present in column x
    for (int i = 0; i < n * n; i++)
        if (grid[i, x] != 0)
            this.invalidValues[y, x].Add(grid[i, x]);
    // values already present in the n x n block containing (x, y)
    int nX = (int)Math.Floor((double)x / n);
    int nY = (int)Math.Floor((double)y / n);
    for (int bx = 0; bx < n; bx++)
        for (int by = 0; by < n; by++)
            if (grid[nY * n + by, nX * n + bx] != 0)
                this.invalidValues[y, x].Add(grid[nY * n + by, nX * n + bx]);
}
I call this method for every empty field in the sudoku (represented in this.grid as a 2D array of size [n * n, n * n]). However, this causes even more iterations, since in order to determine the number of different invalid values for each field it'll have to walk through each list again.
So my question is whether someone knows a way to efficiently walk through the fields of the sudoku ordered by the number of possible values for each field (while at the same time keeping track of these possible values for each field, since they are needed for the backtracking algorithm). If anyone could help me out with this it'd be much appreciated.
Thanks in advance!
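One possible direction (a hedged sketch, not a drop-in replacement for your classes): compute each empty cell's candidate set once up front, then sort the empty cells by candidate count and let the backtracker visit them in that order, trying only those candidates.
// assumes using System.Linq and System.Collections.Generic;
// 'grid' is the (n*n) x (n*n) puzzle with 0 meaning empty, as in the question
var candidates = new Dictionary<(int row, int col), HashSet<int>>();
for (int r = 0; r < n * n; r++)
{
    for (int c = 0; c < n * n; c++)
    {
        if (grid[r, c] != 0) continue;
        var possible = new HashSet<int>(Enumerable.Range(1, n * n));
        for (int i = 0; i < n * n; i++)
        {
            possible.Remove(grid[r, i]);                        // same row
            possible.Remove(grid[i, c]);                        // same column
            possible.Remove(grid[(r / n) * n + i / n,           // same block
                                 (c / n) * n + i % n]);
        }
        candidates[(r, c)] = possible;
    }
}
// visit the most constrained cells first, trying only their initial candidates
var order = candidates.OrderBy(kv => kv.Value.Count).Select(kv => kv.Key).ToList();
The backtracker would then step through 'order' instead of calling nextPosition, and for each cell only try the values in candidates[cell].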

Calculate average function of several functions

I have several ordered Lists of X/Y Pairs and I want to calculate an ordered List of X/Y Pairs representing the average of these Lists.
All these Lists (including the "average list") will then be drawn onto a chart (see example picture below).
I have several problems:
The different lists don't have the same amount of values
The X and Y values can increase and decrease and increase (and so on) (see example picture below)
I need to implement this in C#, although I guess that's not really important for the algorithm itself.
Sorry that I can't explain my problem in a more formal or mathematical way.
EDIT: I replaced the term "function" with "List of X/Y Pairs" which is less confusing.
I would use the method Justin proposes, with one adjustment. He suggests using a mapping table with fractional indices, though I would suggest integer indices. This might sound a little mathematical, but it's no shame to have to read the following twice (I'd have to, too). Suppose the point at index i in a list of pairs A has searched for the closest point in another list B, and that closest point is at index j. To find the closest point in B to A[i+1] you should only consider points in B with an index equal to or larger than j. It will probably be j + 1, but could be j, or j + 2, j + 3 etc., but never below j. Even if the point closest to A[i+1] has an index smaller than j, you still shouldn't use that point to interpolate with, since that would result in an unexpected average and graph. I'll take a moment now to create some sample code for you. I hope you see that this optimization makes sense.
EDIT: While implementing this, I realised that j is not only bounded from below (by the method described above), but also bounded from above. When you try the distance from A[i+1] to B[j], B[j+1], B[j+2] etc., you can stop comparing once the distance from A[i+1] to B[j+...] stops decreasing; there's no point in searching further in B. The same reasoning applies as when j was bounded from below: even if some point elsewhere in B were closer, that's probably not the point you want to interpolate with. Doing so would result in an unexpected graph, probably less smooth than you'd expect. An extra bonus of this second bound is the improved performance. I've created the following code:
IEnumerable<Tuple<double, double>> Average(List<Tuple<double, double>> A, List<Tuple<double, double>> B)
{
    if (A == null || B == null || A.Any(p => p == null) || B.Any(p => p == null)) throw new ArgumentException();
    Func<double, double> square = d => d * d; // squares its argument
    Func<int, int, double> euclidianDistance = (a, b) => Math.Sqrt(square(A[a].Item1 - B[b].Item1) + square(A[a].Item2 - B[b].Item2)); // computes the distance from A[first argument] to B[second argument]

    int previousIndexInB = 0;
    for (int i = 0; i < A.Count; i++)
    {
        double distance = euclidianDistance(i, previousIndexInB); // distance between A[i] and B[j - 1], initially
        for (int j = previousIndexInB + 1; j < B.Count; j++)
        {
            var distance2 = euclidianDistance(i, j); // distance between A[i] and B[j]
            if (distance2 < distance) // if it's closer than the previously checked point, keep searching. Otherwise stop the search and return an interpolated point.
            {
                distance = distance2;
                previousIndexInB = j;
            }
            else
            {
                break; // don't place the yield return statement here, because that could go wrong at the end of B.
            }
        }
        yield return LinearInterpolation(A[i], B[previousIndexInB]);
    }
}

Tuple<double, double> LinearInterpolation(Tuple<double, double> a, Tuple<double, double> b)
{
    return new Tuple<double, double>((a.Item1 + b.Item1) / 2, (a.Item2 + b.Item2) / 2);
}
For your information, the function Average returns the same number of interpolated points as list A contains, which is probably fine, but you should think about this for your specific application. I've added some comments to clarify a few details, and I've described all aspects of this code in the text above. I hope it's clear; otherwise feel free to ask questions.
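For example, it could be consumed like this (listA and listB standing in for your own ordered lists):
// listA and listB are your ordered List<Tuple<double, double>> inputs
var averaged = Average(listA, listB).ToList();
foreach (var point in averaged)
    Console.WriteLine($"{point.Item1}, {point.Item2}");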
SECOND EDIT: I misread and thought you had only two lists of points. I have created a generalised version of the function above that accepts multiple lists. It still uses only the principles explained above.
IEnumerable<Tuple<double, double>> Average(List<List<Tuple<double, double>>> data)
{
    if (data == null || data.Count < 2 || data.Any(list => list == null || list.Any(p => p == null))) throw new ArgumentException();
    Func<double, double> square = d => d * d;
    Func<Tuple<double, double>, Tuple<double, double>, double> euclidianDistance = (a, b) => Math.Sqrt(square(a.Item1 - b.Item1) + square(a.Item2 - b.Item2));

    var firstList = data[0];
    // the indices of the points which were closest to the previous point firstList[i - 1]
    // (or zero if i == 0). This is kept track of per list, except the first list.
    int[] previousIndices = new int[data.Count];
    for (int i = 0; i < firstList.Count; i++)
    {
        var closests = new Tuple<double, double>[data.Count]; // an array of points used for caching, of which the average will be yielded.
        closests[0] = firstList[i];
        for (int listIndex = 1; listIndex < data.Count; listIndex++)
        {
            var list = data[listIndex];
            double distance = euclidianDistance(firstList[i], list[previousIndices[listIndex]]);
            for (int j = previousIndices[listIndex] + 1; j < list.Count; j++)
            {
                var distance2 = euclidianDistance(firstList[i], list[j]);
                if (distance2 < distance) // if it's closer than the previously checked point, keep searching. Otherwise stop the search and interpolate.
                {
                    distance = distance2;
                    previousIndices[listIndex] = j;
                }
                else
                {
                    break;
                }
            }
            closests[listIndex] = list[previousIndices[listIndex]];
        }
        yield return new Tuple<double, double>(closests.Select(p => p.Item1).Average(), closests.Select(p => p.Item2).Average());
    }
}
Actually, doing the specific case for 2 lists separately might have been a good thing: it is easily explained and offers a stepping stone towards understanding the generalised version. Furthermore, the square root could be taken out, since it doesn't change the order of the distances when sorted, just their lengths.
THIRD EDIT: In the comments it became clear there might be a bug. I think there are none, aside from the small bug mentioned, which shouldn't make any difference except at the very end of the graphs. As proof that it actually works, this is the result (the dotted line is the average):
I'll use a metaphor of your functions being cars racing down a curvy racetrack, where you want to extract the center-line of the track given the cars' positions. Each car's position can be described as a function of time:
p1(t) = (x1(t), y1(t))
p2(t) = (x2(t), y2(t))
p3(t) = (x3(t), y3(t))
The crucial problem is that the cars are racing at different speeds, which means that p1(10) could be twice as far down the race track as p2(10). If you took a naive average of these two points, and there was a sharp curve in the track between the cars, the average may be far from the track.
If you could just transform your functions to no longer be a function of time, but a function of the distance along the track, then you would be able to do what you want.
One way you could do this would be to choose the slowest car (i.e., the one with the greatest number of samples). Then, for each sample of the slowest car's position, look at all of the other cars' paths, find the two closest points, and choose the point on the interpolated line which is closest to the slowest car's position. Then average these points together. Once you do this for all of the slow car's samples, you have an average path.
I'm assuming that all of the cars start and end in roughly the same places; if any of the cars just race a small portion of the track, you will need to add some more logic to detect that.
A possible improvement (for both performance and accuracy), is to keep track of the most recent sample you are using for each car and the speed of each car (the relative sampling rate). For your slowest car, it would be a simple map: 1 => 1, 2 => 2, 3 => 3, ... For the other cars, though, it could be more like: 1 => 0.3, 2 => 0.7, 3 => 1.6 (fractional values are due to interpolation). The speed would be the inverse of the change in sample number (e.g., the slow car would have speed 1, and the other car would have speed 1/(1.6-0.7)=1.11). You could then ensure that you don't accidentally backtrack on any of the cars. You could also improve the calculation speed because you don't have to search through the whole set of all points on each path; instead, you can assume that the next sample will be somewhere close to the current sample plus 1/speed.
As these are not y=f(x) functions, are they perhaps something like (x,y)=f(t)?
If so, you could interpolate along t, and calculate avg(x) and avg(y) for each t.
EDIT This of course assumes that t can be made available to your code - so that you have an ordered list of T/X/Y triples.
There are several ways this can be done. One is to combine all of your data into one single set of points, and do a best-fit curve through the combined set.
Say you have e.g. 2 "functions" with
fc1 = { {1, 0.3}, {2, 0.5}, {3, 0.1} }
fc2 = { {1, 0.1}, {2, 0.8}, {3, 0.4} }
You want the arithmetic mean (slang: "average") of the two functions. To do this you just calculate the pointwise arithmetic mean:
fc3 = { {1, (0.3+0.1)/2} ... }
Optimization:
If you have large numbers of points you should first convert your "ordered List of X/Y Pairs" into a Matrix OR at least store the points column-wise like so:
{0.3, 0.1}, {0.5, 0.8}, {0.1, 0.4}
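As a small sketch of that pointwise mean, assuming both lists are sampled at the same x values as in the example:
using System.Collections.Generic;
using System.Linq;

var fc1 = new List<(double x, double y)> { (1, 0.3), (2, 0.5), (3, 0.1) };
var fc2 = new List<(double x, double y)> { (1, 0.1), (2, 0.8), (3, 0.4) };
// pointwise arithmetic mean of the y values, keeping the shared x values
var fc3 = fc1.Zip(fc2, (a, b) => (a.x, (a.y + b.y) / 2)).ToList();
// fc3: { (1, 0.2), (2, 0.65), (3, 0.25) }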
