I am working on a program and it has a pretty long execution time. I'm trying to improve performance where I can, but my knowledge is limited in this area. Can anyone recommend a way to speed up the method below?
public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    double sum = 0;
    for (int i = 0; i < patchSize; i++)
    {
        for (int j = 0; j < patchSize; j++)
        {
            sum += Math.Sqrt(Math.Pow(p1[i, j] - p2[i, j], 2));
        }
    }
    return sum;
}
The method calculates the distance between two images by summing the distances between each pair of corresponding points in the two images.
Think about your algorithm. A per-pixel distance probably isn't the best way to get an accurate image distance.
Replace sqrt(x^2) with abs(x), or even faster:
if (x < 0) x = -x;
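Applied to the method above, a minimal safe rewrite of just that change (same signature as the original, nothing else assumed) could look like:
public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    double sum = 0;
    for (int i = 0; i < patchSize; i++)
    {
        for (int j = 0; j < patchSize; j++)
        {
            // Math.Abs avoids the costly Math.Pow/Math.Sqrt round-trip.
            sum += Math.Abs(p1[i, j] - p2[i, j]);
        }
    }
    return sum;
}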
Rename your routine to OverallImageDistance or similar (this will not improve performance). ;)
Use unsafe pointers, and calculate your distance in a single loop using these pointers:
unsafe
{
    sum = 0.0;
    int numPixels = patchSize * patchSize;
    fixed (double* pointer1 = &p1[0, 0])
    {
        fixed (double* pointer2 = &p2[0, 0])
        {
            while (numPixels-- > 0)
            {
                double dist = *pointer1++ - *pointer2++;
                if (dist < 0) dist = -dist;
                sum += dist;
            }
        }
    }
}
This should be several times faster than your original.
Well, this method is odd and does not really look like a distance between pixels at all. But you would certainly want to use linear algebra instead of straightforward array calculations.
Image recognition, natural language processing and machine learning algorithms all use matrices, because matrix libraries are highly optimized for this kind of situation, where you need batch processing.
There is a plethora of matrix libraries in the wild; look here: Recommendation for C# Matrix Library
EDIT: Ok, thanks for feedback, trying to improve the answer...
You can use the Math.NET Numerics open source library (install the MathNet.Numerics NuGet package) and rewrite your method like this:
using MathNet.Numerics.LinearAlgebra;

public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    var A = Matrix<double>.Build.DenseOfArray(p1).SubMatrix(0, patchSize, 0, patchSize);
    var B = Matrix<double>.Build.DenseOfArray(p2).SubMatrix(0, patchSize, 0, patchSize);
    return (A - B).RowAbsoluteSums().Sum();
}
Essentially, loops slow down your code. When doing batch processing you should ideally avoid explicit loops altogether.
I'm reading data from a sensor. The sensor gives an array of points (x, y), but as you can see in the image below, there is a lot of noise:
(image: noisy sensor points)
I need to clean the data so that the filtered data gives just a few points, using something like a median, adjacent averaging, a mean of the xy points, or an algorithm that removes the noise. I know there are a bunch of libraries in Python that do the work automatically, but all the automatic libraries I found are based on image analysis, and I think they do not work for this case because this is different: these are (x, y) points in a dataset.
(image: point cloud with the noise cleaned)
PS: I wanted to take the median of the points, but I got confused when I tried it with a two-dimensional array (meaning ListOfPoints[[x,y],[x,y],[x,y],[x,y],[x,y],[x,y]]); I didn't know how to write the calculation with a for or while loop. I prefer C#, but if there is a solution in another language without libraries, I would be open to it.
One of the methods you can use is the k-means algorithm. This picture briefly explains the algorithm: k_means
This link explains the k-means algorithm fully, including how to loop over the input data; I don't think any additional explanation is needed: k_means algorithm
The k-means algorithm is very simple, and you will understand it from the first Google search.
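For illustration, here is a minimal k-means sketch over (x, y) points; the Point2 struct, the choice of k and the naive initialization are all assumptions to adapt:
using System;
using System.Collections.Generic;
using System.Linq;

struct Point2
{
    public double X, Y;
    public Point2(double x, double y) { X = x; Y = y; }
}

static class Clustering
{
    // Returns the k cluster centers after a fixed number of iterations.
    public static List<Point2> KMeans(List<Point2> points, int k, int iterations = 20)
    {
        // Naive initialization: take the first k points as starting centroids.
        var centroids = points.Take(k).ToList();
        for (int iter = 0; iter < iterations; iter++)
        {
            var clusters = new List<Point2>[k];
            for (int c = 0; c < k; c++) clusters[c] = new List<Point2>();

            // Assignment step: each point joins its nearest centroid.
            foreach (var p in points)
            {
                int best = 0;
                double bestDist = double.MaxValue;
                for (int c = 0; c < k; c++)
                {
                    double dx = p.X - centroids[c].X, dy = p.Y - centroids[c].Y;
                    double d = dx * dx + dy * dy;
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                clusters[best].Add(p);
            }

            // Update step: move each centroid to the mean of its cluster.
            for (int c = 0; c < k; c++)
            {
                if (clusters[c].Count == 0) continue;
                centroids[c] = new Point2(
                    clusters[c].Average(p => p.X),
                    clusters[c].Average(p => p.Y));
            }
        }
        return centroids;
    }
}
The returned centroids are the "few points" you are after; each one summarizes a cluster of noisy samples.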
You could try doing a weighted average of the Y-value at sampled X-positions. Something like this:
List<Point2> filtered_points = new List<Point2>();
for (int x = xmin; x <= xmax; x++)
{
    double weight_sum = 0;
    List<double> weights = new List<double>();
    foreach (Point2 p in point_list)
    {
        double w = 1.0 / ((p.x - x) * (p.x - x) + 1e-3);
        weights.Add(w);
        weight_sum += w;
    }
    double y = 0;
    for (int i = 0; i < point_list.Count; i++)
    {
        y += weights[i] * point_list[i].y / weight_sum;
    }
    filtered_points.Add(new Point2(x, y));
}
You would probably need to tune the weights to get nice results. Also, in the example I am using an inverse-quadratic decay, but other weighting functions can be used (linear decay, a Gaussian function...).
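For example, a Gaussian weight only changes the line that computes w inside the foreach; the bandwidth sigma below is an assumed value you would tune:
// Gaussian falloff: points near the sampled x dominate, far points fade out.
double sigma = 5.0; // assumed bandwidth, tune for your data
double w = Math.Exp(-((p.x - x) * (p.x - x)) / (2 * sigma * sigma));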
I'm a fan of Minecraft's old terrain generation, with amazing overhangs, mountains and generally interesting worlds. My problem is that right now I'm using Perlin noise, which, while good for smooth terrain, doesn't really give the sporadic jumps that would allow mountains in a mostly flat area.
On top of that, the method I'm using samples 2D Perlin noise, puts it in an array and then sets every Y value under it to a block; this prevents generation of overhangs like this: Old Minecraft Terrain Image
Right now I have this:
public class GenerateIdMap : MonoBehaviour
{
    [Serializable] public class IBSerDict : SerializableDictionaryBase<int, byte> {};

    public int size = 60;
    public int worldHeight = 3;
    public float perlinScale = 15f;
    public int seed;
    public int heightScale = 10;
    public int maxHeight = 256;
    public IBSerDict defaultBlocks = new IBSerDict();

    void Start()
    {
        if (seed == 0) seed = (int)Network.time * 10; // only randomize when no seed was set
        CreateMap();
    }

    byte[,,] CreateMap()
    {
        byte[,,] map = new byte[size, maxHeight, size];
        for (int x = 0; x < size; x++)
        {
            for (int z = 0; z < size; z++)
            {
                int y = (int)(Mathf.PerlinNoise((x + seed) / perlinScale, (z + seed) / perlinScale) * heightScale) + worldHeight;
                y = Mathf.Clamp(y, 0, maxHeight - 1);
                while (y > 0)
                {
                    map[x, y, z] = GetBlockType(y);
                    y--;
                }
            }
        }
        return map;
    }

    byte GetBlockType(int y)
    {
        SortedDictionary<int, byte> s_defaultBlocks = new SortedDictionary<int, byte>(defaultBlocks);
        foreach (var item in s_defaultBlocks.OrderBy(key => key.Key))
        {
            if (y <= item.Key)
            {
                print(item.Value);
                return item.Value;
            }
        }
        return 0;
    }
}
The GetBlockType function is new; it gets the default block for a given height. I'll fix it up later, but it works for now. If you instantiate a prefab at each resulting Vector3 you will see terrain. Can someone help me figure out how to make better terrain? Thanks in advance!
Both of your problems should be tackled individually.
The first issue, the lack of variation in the generated values, can usually be fixed in one of two ways: the first is to modify the input to the Perlin noise, i.e. the octaves and persistence, and the second is to mix the output of multiple functions, or even use the output of one function as the input to another. By functions, I mean Perlin/Simplex/Voronoi etc.
With the former method, as you mentioned, it can be pretty difficult to get terrain with interesting features over a large area (the generated values are homogeneous), but by playing with the coordinate range and octaves/persistence it is possible (see the sketch below). The second approach is probably the one to recommend, however, because by mixing the inputs and outputs of different functions you can get some really interesting shapes (Voronoi produces circular crater-like shapes).
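To illustrate the octaves/persistence idea, here is a minimal fractal-noise (fBm) sketch that layers several Mathf.PerlinNoise samples; the octave count and persistence value are assumptions to tune:
// Sums several octaves of Perlin noise; each octave doubles the frequency
// and scales the amplitude by persistence, adding detail at smaller scales.
float FractalNoise(float x, float z, int octaves = 4, float persistence = 0.5f)
{
    float total = 0f, amplitude = 1f, frequency = 1f, maxValue = 0f;
    for (int o = 0; o < octaves; o++)
    {
        total += Mathf.PerlinNoise(x * frequency, z * frequency) * amplitude;
        maxValue += amplitude;   // track the largest possible sum
        amplitude *= persistence;
        frequency *= 2f;
    }
    return total / maxValue;     // normalize back to roughly 0..1
}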
In order to fix the problem you are having with the overhangs, you would need to change your approach to generating the world slightly. Currently you are only generating height values and assigning blocks up to each height, which gives you the terrain surface only. What you ideally want to do is generate a pseudo-random value to use as a pass flag for each block in the 3D space (including those underground). The flag indicates whether a block should be placed at that position in the 3D world.
This is slower, but would generate caves and overhangs as you need.
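As a minimal sketch of that per-block flag, assuming a Perlin3D helper (Unity has no built-in 3D Perlin noise, so this one is approximated by averaging the three 2D axis-plane samples) and a height-based threshold you would tune:
// Approximate 3D Perlin noise from Unity's 2D Mathf.PerlinNoise.
float Perlin3D(float x, float y, float z)
{
    float xy = Mathf.PerlinNoise(x, y);
    float yz = Mathf.PerlinNoise(y, z);
    float xz = Mathf.PerlinNoise(x, z);
    return (xy + yz + xz) / 3f;
}

// Pass flag: place a block only where the density beats a threshold that
// grows with height, so the world thins out (and overhangs appear) upward.
bool ShouldPlaceBlock(int x, int y, int z)
{
    float density = Perlin3D((x + seed) / perlinScale,
                             (y + seed) / perlinScale,
                             (z + seed) / perlinScale);
    float threshold = (float)y / maxHeight; // assumed falloff, tune it
    return density > threshold;
}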
I am trying to optimize the performance of my small program, whose functionality relies on detecting the image most similar to a given example. The problem is that the method I use is really slow and could use a bit of reworking.
I also find that I cannot use Parallel.For to compute the similarity value, because the function you'll see below is already being called from a Parallel.ForEach loop. Eh.
My similarity method:
public static double isItSame(Bitmap source, Color[,] example)
{
    double rez = 0;
    for (int x = 20; x < 130; x += 3)
    {
        for (int y = 10; y < 140; y += 3)
        {
            Color color1 = source.GetPixel(x, y);
            rez += Math.Abs(color1.R - example[x, y].R) + Math.Abs(color1.G - example[x, y].G) + Math.Abs(color1.B - example[x, y].B);
        }
    }
    return rez;
}
I will greatly appreciate any help optimizing this solution. My own attempt was to step by x += 3 instead of x++ (and the same for y), but it gives poor overall results. Eh.
I wrote some code today and can't figure out how to reduce its length. Although it seems repetitive, every part is different.
try {
    totalVerts.Add(verts[i]);
    if (verts[i].x > maxXvert)
    {
        maxXvert = verts[i].x;
    }
    if (verts[i].x < minXvert)
    {
        minXvert = verts[i].x;
    }
    if (verts[i].y > maxYvert)
    {
        maxYvert = verts[i].y;
    }
    if (verts[i].y < minYvert)
    {
        minYvert = verts[i].y;
    }
    if (verts[i].z > maxZvert)
    {
        maxZvert = verts[i].z;
    }
    if (verts[i].z < minZvert)
    {
        minZvert = verts[i].z;
    }
}
In this code I am adding the Vector3 position vertices (x, y, z) to the totalVerts array. I am also testing whether each x, y, z position is the maximum or minimum of all vertices; if it is, I set the variables maxXvert, maxYvert, etc. to the higher or lower value.
If anyone can think of a way to reduce this, that would be great. Thank you.
You could use Math.Min and Math.Max.
minXvert = Math.Min(verts[i].x, minXvert);
maxXvert = Math.Max(verts[i].x, maxXvert);
That would make your code more concise and readable, but won't make it any faster.
To make it somewhat faster, you could store the x, y, z values in local variables so they only have to be looked up once instead of 2-4 times (e.g. float x = verts[i].x;). But the compiler is probably doing this for you anyway.
You could remove all of the braces (no refactoring, just fewer lines!):
try {
    totalVerts.Add(verts[i]);
    if (verts[i].x > maxXvert)
        maxXvert = verts[i].x;
    if (verts[i].x < minXvert)
        minXvert = verts[i].x;
    if (verts[i].y > maxYvert)
        maxYvert = verts[i].y;
    if (verts[i].y < minYvert)
        minYvert = verts[i].y;
    if (verts[i].z > maxZvert)
        maxZvert = verts[i].z;
    if (verts[i].z < minZvert)
        minZvert = verts[i].z;
}
Performance-wise, this is fine. The code just looks ugly.
Unfortunately the Array.Max() extension in LINQ requires .NET 3.5, and Unity targets .NET 2.0, so I cannot think of a better way.
If LINQ were available (which it is not), you could do:
float minX = totalVerts.Min(v => v.x);
or similar, which would be a lot neater (I would have to check the performance).
It's hard to guess the context of this, but in my experience having separate floats represent a vector is an unnecessary pain. If you create two Vector3 instead of six floats, you can still access the individual values (e.g. myVector3.x += 1f;). And by using the higher abstraction you both make the code more readable and gain Vector3 functionality, like the Max and Min methods, which serve the very purpose of simplifying code:
Vector3 upperBound = Vector3.one * Mathf.NegativeInfinity,
        lowerBound = Vector3.one * Mathf.Infinity;

foreach (Vector3 vert in verts)
{
    totalVerts.Add(vert);
    upperBound = Vector3.Max(upperBound, vert);
    lowerBound = Vector3.Min(lowerBound, vert);
}
As a side note, if you are doing procedural meshes and this is to calculate the bounds, be aware of the RecalculateBounds() method. Most of the time, when I need the bounds of a procedural mesh, I read them from mesh.bounds after creating and recalculating the mesh, because I had to do that anyway, and just reading it afterwards saves me the trouble.
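A quick sketch of that pattern, assuming mesh is the Mesh you have just built:
mesh.RecalculateBounds();          // Unity recomputes min/max over all vertices
Bounds bounds = mesh.bounds;       // axis-aligned bounding box of the mesh
Vector3 lowerBound = bounds.min;
Vector3 upperBound = bounds.max;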
I have an 8 x 8 matrix of floating point numbers and need to calculate eigenvectors and eigenvalues from it. This is for feature reduction using PCA (Principal Component Analysis), and it is one hell of a time-consuming job if done by traditional methods. I tried to use the power method as Y = C*X, where C is my 8 x 8 matrix.
float[,] XMatrix = new float[8, 1];
float[,] YMatrix = new float[8, 1];
float max = 0;
XMatrix[0, 0] = 1;
for (int i = 0; i < 8; i++)
{
    for (int j = 0; j < 1; j++)
    {
        for (int k = 0; k < 8; k++)
        {
            YMatrix[i, j] += C[i, k] * XMatrix[k, j];
            if (YMatrix[i, j] > max)
                max = YMatrix[i, j];
        }
    }
}
I know it is incorrect but cannot figure out why. I need help using the power method, or perhaps a more effective way of calculating this.
Thanks in advance.
Retrieving the eigenvalues/eigenvectors of a (dense) matrix of any size in an efficient manner (i.e. fast!) is not entirely trivial. I would suggest you use something like the QR algorithm (although this may be overkill for a one-off calculation on a single 8x8 matrix).
The QR algorithm computes a Schur decomposition of a matrix. It is certainly one of the most important algorithms in eigenvalue computation. However, it applies to dense matrices only (as stated above).
The QR algorithm consists of two separate stages. First, by means of a similarity transformation, the original matrix is transformed in a finite number of steps to Hessenberg form or, in the Hermitian/symmetric case, to real tridiagonal form. This first stage prepares the second stage: the actual QR iterations, which are applied to the Hessenberg or tridiagonal matrix.
The overall complexity (in floating-point operations) of the algorithm is O(n³). For a good explanation of this algorithm see here, or a Google search for "eigenvalue algorithm" should provide you with many alternative ways of calculating the required eigenvalues/vectors.
Also, I have not looked into this in detail, but Math.NET, a free library, may help you here...
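As a sketch of the Math.NET route, assuming the MathNet.Numerics package is installed and your 8x8 matrix lives in a double[,] named C:
using MathNet.Numerics.LinearAlgebra;

// Build a dense matrix from the 8x8 array and take its eigenvalue decomposition.
var matrix = Matrix<double>.Build.DenseOfArray(C);
var evd = matrix.Evd();

// Eigenvalues come back as complex numbers; for a symmetric covariance
// matrix (the PCA case) the imaginary parts are zero.
var eigenValues = evd.EigenValues;   // Vector<System.Numerics.Complex>
var eigenVectors = evd.EigenVectors; // one eigenvector per column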