I'm trying to reduce the length of this code - C#

I wrote some code today, and I can't figure out how to reduce its length: although it seems repetitive, every part is slightly different.
try {
    totalVerts.Add(verts[i]);
    if (verts[i].x > maxXvert)
    {
        maxXvert = verts[i].x;
    }
    if (verts[i].x < minXvert)
    {
        minXvert = verts[i].x;
    }
    if (verts[i].y > maxYvert)
    {
        maxYvert = verts[i].y;
    }
    if (verts[i].y < minYvert)
    {
        minYvert = verts[i].y;
    }
    if (verts[i].z > maxZvert)
    {
        maxZvert = verts[i].z;
    }
    if (verts[i].z < minZvert)
    {
        minZvert = verts[i].z;
    }
}
In this code I am adding the Vector3 vertex positions (x, y, z) to the totalVerts collection. I am also testing whether each x, y, z component is the maximum or minimum of all vertices so far; if it is, I set the corresponding variable (maxXvert, maxYvert, etc.) to that higher or lower value.
If anyone can think of a way to shorten it, that would be great. Thank you.

You could use Math.Min and Math.Max.
minXvert = Math.Min(verts[i].x, minXvert);
maxXvert = Math.Max(verts[i].x, maxXvert);
That would make your code more concise and readable, but won't make it any faster.
To make it somewhat faster, you could store the x, y, z values in local variables so they only have to be looked up once instead of two to four times (float x = verts[i].x; and so on). But the compiler is probably doing this for you anyway.
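Putting both suggestions together, the body of the loop might look something like this (just a sketch, assuming verts holds Vector3 values and the min/max variables are floats; v is only a cached local):
Vector3 v = verts[i];
totalVerts.Add(v);
minXvert = Math.Min(v.x, minXvert);
maxXvert = Math.Max(v.x, maxXvert);
minYvert = Math.Min(v.y, minYvert);
maxYvert = Math.Max(v.y, maxYvert);
minZvert = Math.Min(v.z, minZvert);
maxZvert = Math.Max(v.z, maxZvert);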

You could remove all of the braces (no refactoring, just fewer lines!):
try {
    totalVerts.Add(verts[i]);
    if (verts[i].x > maxXvert)
        maxXvert = verts[i].x;
    if (verts[i].x < minXvert)
        minXvert = verts[i].x;
    if (verts[i].y > maxYvert)
        maxYvert = verts[i].y;
    if (verts[i].y < minYvert)
        minYvert = verts[i].y;
    if (verts[i].z > maxZvert)
        maxZvert = verts[i].z;
    if (verts[i].z < minZvert)
        minZvert = verts[i].z;
}

Performance-wise this is fine; the code just looks ugly.
Unfortunately the LINQ Max()/Min() extension methods are only available from .NET 3.5, and Unity targets .NET 2.0, so I cannot think of a better way.
If LINQ were available (which it is not), you could do:
float minX = totalVerts.Min(v => v.x);
or similar, which would be a lot neater (I would have to check the performance).
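For completeness, the full set of bounds under .NET 3.5+ would be something like this (a sketch, assuming using System.Linq; and that totalVerts is a List<Vector3>):
float minX = totalVerts.Min(v => v.x);
float maxX = totalVerts.Max(v => v.x);
float minY = totalVerts.Min(v => v.y);
float maxY = totalVerts.Max(v => v.y);
float minZ = totalVerts.Min(v => v.z);
float maxZ = totalVerts.Max(v => v.z);
Note that each call is a separate pass over the list, which is part of why the performance would need checking.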

It's hard to guess the context of this, but in my experience having separate floats represent a vector is an unnecessary pain. If you create two Vector3 values instead of six floats, you can still access the individual components (e.g. myVector3.x += 1f;). And by using the higher abstraction, you both make the code more readable and gain access to Vector3 functionality, like the Max and Min methods, that serves the very purpose of simplifying code:
Vector3 upperBound = Vector3.one * Mathf.NegativeInfinity,
        lowerBound = Vector3.one * Mathf.Infinity;
foreach (Vector3 vert in verts) {
    totalVerts.Add(vert);
    upperBound = Vector3.Max(upperBound, vert);
    lowerBound = Vector3.Min(lowerBound, vert);
}
As a side note, if you are building procedural meshes and this is to calculate the bounds, be aware of the RecalculateBounds() method. Most of the time, when I need the bounds of a procedural mesh, I read them from mesh.bounds after creating and recalculating the mesh, because I had to do that anyway and just reading it afterwards saves me the trouble.
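For reference, that pattern is just the following (a sketch, where mesh stands for the procedural Mesh you already built):
mesh.RecalculateBounds();      // Unity recomputes the axis-aligned bounds from the mesh's vertices
Bounds bounds = mesh.bounds;   // bounds.min / bounds.max give the lower and upper corners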

Related

Minecraft like terrain in unity3d

I'm a fan of Minecraft's old terrain generation, with amazing overhangs, mountains and generally interesting worlds. My problem is that right now I'm using Perlin noise, which, while good for smooth terrain, doesn't really give the sporadic jumps that would allow mountains in a mostly flat area.
On top of that, the method I'm using samples 2D Perlin noise, puts it in an array, and then sets every Y value under the sampled height to a block; this prevents the generation of overhangs like this: Old Minecraft Terrain Image
Right now I have this:
public class GenerateIdMap : MonoBehaviour {

    [Serializable] public class IBSerDict : SerializableDictionaryBase<int, byte> {};

    public int size = 60;
    public int worldHeight = 3;
    public float perlinScale = 15f;
    public int seed;
    public int heightScale = 10;
    public int maxHeight = 256;
    public IBSerDict defaultBlocks = new IBSerDict();

    void Start()
    {
        if (seed != 0) seed = (int)Network.time * 10;
        CreateMap();
    }

    byte[,,] CreateMap()
    {
        byte[,,] map = new byte[size, maxHeight, size];
        for (int x = 0; x < size; x++)
        {
            for (int z = 0; z < size; z++)
            {
                int y = (int)(Mathf.PerlinNoise((x + seed) / perlinScale, (z + seed) / perlinScale) * heightScale) + worldHeight;
                y = Mathf.Clamp(y, 0, maxHeight - 1);
                while (y > 0)
                {
                    map[x, y, z] = GetBlockType(y);
                    y--;
                }
            }
        }
        return map;
    }

    byte GetBlockType(int y)
    {
        SortedDictionary<int, byte> s_defaultBlocks = new SortedDictionary<int, byte>(defaultBlocks);
        foreach (var item in s_defaultBlocks.OrderBy(key => key.Key))
        {
            if (y <= item.Key)
            {
                print(item.Value);
                return item.Value;
            }
        }
        return 0;
    }
}
The GetBlockType function is new and is for getting the default block at a given height; I'll fix it up later, but it works for now. If you instantiate a prefab at each resulting Vector3 you would see terrain. Can someone help me figure out how to make better terrain? Thanks in advance!
Both of your problems should be tackled individually.
The first issue, the lack of variation in the generated values, can usually be fixed in one of two ways: the first is to modify the input to the Perlin noise, i.e. the octaves and persistence, and the second is to mix the output of multiple functions, or even use the output of one function as the input to another. By functions, I mean Perlin/Simplex/Voronoi etc.
With the former method, as you mentioned, it can be pretty difficult to get terrain with interesting features over a large area (the generated values are homogeneous), but by playing with the coordinate range and octaves/persistence it can be possible. The second approach is probably recommended, however, because by mixing the inputs and outputs of different functions you can get some really interesting shapes (Voronoi produces circular crater-like shapes).
To fix the problem you are having with the overhangs, you need to change your approach to generating the world slightly. Currently, you are only generating the height values of the terrain and filling blocks up to each of those values, which gives you the terrain surface only. What you ideally want is to generate a pseudo-random value to use as a pass flag for each block in the 3D space (including those underground). The flag indicates whether a block should be placed at that position in the 3D world.
This is slower, but would generate caves and overhangs as you need.
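As a rough illustration of the pass-flag idea (not a drop-in replacement for your CreateMap: the method name and the 0.35f threshold are made up, and since Unity's Mathf.PerlinNoise is 2D only, the 3D sample here is faked by averaging three 2D samples):
byte[,,] CreateDensityMap()
{
    byte[,,] map = new byte[size, maxHeight, size];
    for (int x = 0; x < size; x++)
    for (int y = 0; y < maxHeight; y++)
    for (int z = 0; z < size; z++)
    {
        // Base surface height from 2D noise, exactly as before.
        float surface = Mathf.PerlinNoise((x + seed) / perlinScale, (z + seed) / perlinScale) * heightScale + worldHeight;

        // Pseudo-3D noise: average three 2D samples so the value also varies with y.
        float density = (Mathf.PerlinNoise((x + seed) / perlinScale, (y + seed) / perlinScale)
                       + Mathf.PerlinNoise((y + seed) / perlinScale, (z + seed) / perlinScale)
                       + Mathf.PerlinNoise((x + seed) / perlinScale, (z + seed) / perlinScale)) / 3f;

        // Place a block only below the surface AND where the density passes the threshold;
        // carving out the low-density cells is what creates caves and overhangs.
        if (y <= surface && density > 0.35f)
            map[x, y, z] = GetBlockType(y);
    }
    return map;
}
Swapping the faked 3D sample for a real 3D noise implementation (Simplex, for example) will give much better results, but the structure stays the same.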

Getting Number of Cubes with Integer Coordinates with Certain Distance from Origin

I am working on a voxel system for my game that uses dynamic loading of chunks. To optimize it, I have a pool of chunks and a render distance, and I want to fill the pool with the right number of chunks, so I need a way to find that number. I have tried the following, but it seems very inefficient.
private void CreatePool()
{
    int poolSize = 0;
    for (int x = -m_RenderDistance; x <= m_RenderDistance; x++) {
        for (int y = -m_RenderDistance; y <= m_RenderDistance; y++) {
            for (int z = -m_RenderDistance; z <= m_RenderDistance; z++) {
                if (Position3Int.DistanceFromOrigin(new Position3Int(x, y, z)) <= m_RenderDistance)
                    poolSize++;
            }
        }
    }
}
More formally, the question involves finding the number of unique cubes with integer coordinates within a certain distance of the origin.
If you think there is a better way to approach this or I am doing something fundamentally wrong, let me know.
Thanks,
Quintin
I assume it's the distance check that you think is inefficient? What you've got shouldn't be too bad if you're just getting the count once in Start() or Awake().
Draco18s's solution is fine if you are okay with a cubic result. If you want a spherical result without a distance check, you can try some formulation of the volume of a sphere: 4/3 * PI * r^3.
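In code that could be as simple as the following sketch (the names are just illustrative; the continuous volume only approximates the lattice count, so treat it as an estimate and round up):
int r = m_RenderDistance;
int poolSize = Mathf.CeilToInt((4f / 3f) * Mathf.PI * r * r * r);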
Check out Bresenham's circle.
Here's an approximation algorithm for a filled 3D Bresenham circle that I have. It is very similar to what you have already, just with a more efficient squared-distance check and a minor adjustment to get a more attractive Bresenham-looking circle:
public static List<Vector3> Get3DCircleKeys(int radius){
    List<Vector3> keys = new List<Vector3>();
    for(int y = -radius; y <= radius; y++){
        for(int x = -radius; x <= radius; x++){
            for(int z = -radius; z <= radius; z++){
                // (+ radius*.08f) = minor modification to match the Bresenham result
                if(x*x + y*y + z*z <= radius*radius + radius*.08f){
                    keys.Add(new Vector3(x, y, z));
                }
            }
        }
    }
    return keys;
}
This will deliver a different count than the volume of a sphere would give you, but with some tweaking to it, or to the sphere-volume calculation, it could be good enough, or at least more efficient than instantiating a full cube volume where many of the voxels would be outside the render distance.

Logarithmic Fade In and Out over Game Frames

I have created a static function within Unity3D that fades out music over a certain number of frames. The function is called from Unity3D's FixedUpdate() so that it updates according to real-world time (or at least close to it, hopefully).
My current math for a linear fade out looks like this:
if (CurrentFrames > 0) {
    AudioSourceObject.volume = ((CurrentFrames / TotalFrames) * TotalVolume);
}
if (CurrentFrames <= 0) {
    AudioSourceObject.volume = 0.00f;
    return 0;
}
CurrentFrames--;
This works well as a simple way to create a function like FadeOutBGM(NumberOfFrames); but now I'm looking at how I would create a logarithmic fade-out using this function.
Looking at the equations online, I'm completely confused as to what I should do.
You can do this easily by using Unity's built-in AnimationCurve class. Define it as a field in your class and use the Unity inspector to plot a logarithmic curve.
Then, in the fade-out code, do something like this:
public AnimationCurve FadeCurve;
//...
if (CurrentFrames > 0) {
    float time = CurrentFrames / TotalFrames;
    AudioSourceObject.volume = FadeCurve.Evaluate(time) * TotalVolume;
}
//...
It's very useful for doing all kinds of easing effects; just try playing around with different curves.
You're more than welcome to use the built-ins. Otherwise, if you'd like finer control, you could use an adjusted sigmoid function. Normally, a sigmoid function looks like this: (https://www.wolframalpha.com/input/?i=plot+(1%2F(1%2Be%5E(t)))+from+-5+to+5)
But you want your fade out function to look more like this: (https://www.wolframalpha.com/input/?i=plot+(1%2F(1%2Be%5E(3t-4))+from+-5+to+5)
Where x is your ratio of (CurrentFrames / TotalFrames) * 3, and y is the output volume.
But how did I get from the first to the second graph? Try experimenting with the scale factors (i.e. 3x or 5x) and offsets (i.e. (3x - 4) or (3x - 6)) of the exponential input (e^(play with this)) in Wolfram Alpha to get a feel for how you want the curve to look. You don't necessarily have to line up the intersection of your curve and an axis with a particular number, as you can adjust the input and output of your function in your code.
So, your code would look like:
if (CurrentFrames > 0) {
    // I'd recommend factoring this out into its own function,
    // but I have close to zero C# experience.
    float inputAdjustment = 3.0f;
    float x = (CurrentFrames / TotalFrames) * inputAdjustment;
    // x already carries the *3 scale, so the exponent is just (x - 4).
    float sigmoidFactor = 1.0f / (1.0f + Mathf.Exp(x - 4.0f));
    AudioSourceObject.volume = sigmoidFactor * TotalVolume;
}
if (CurrentFrames <= 0) {
    AudioSourceObject.volume = 0.00f;
    return 0;
}
CurrentFrames--;

Speed up nested for loops and improve performance

I am working on a program that has a pretty long execution time. I'm trying to improve performance where I can; however, my knowledge is limited in this area. Can anyone recommend a way to speed up the method below?
public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    double sum = 0;
    for (int i = 0; i < patchSize; i++)
    {
        for (int j = 0; j < patchSize; j++)
        {
            sum += Math.Sqrt(Math.Pow(p1[i, j] - p2[i, j], 2));
        }
    }
    return sum;
}
The method calculates the distance between two images by summing the distances between corresponding points in the two images.
Think about your algorithm. A per-pixel distance probably isn't the best way to get an accurate image distance.
Replace sqrt(x^2) with abs(x), or even faster:
if(x < 0) x = -x;
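Applied to your method, that first change alone would look like this (same result, just without the Pow/Sqrt calls):
public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    double sum = 0;
    for (int i = 0; i < patchSize; i++)
    {
        for (int j = 0; j < patchSize; j++)
        {
            // |a - b| gives the same value as sqrt((a - b)^2), without the function calls.
            sum += Math.Abs(p1[i, j] - p2[i, j]);
        }
    }
    return sum;
}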
Rename your routine to OverallImageDistance or similar (this will not improve performance) ;)
Use unsafe pointers, and calculate your distance in a single loop using these pointers:
unsafe
{
    double sum = 0.0;
    int numPixels = patchSize * patchSize;
    // Assumes p1 and p2 are exactly patchSize x patchSize, so a linear walk visits the same elements as the nested loops.
    fixed (double* pointer1 = &p1[0, 0])
    {
        fixed (double* pointer2 = &p2[0, 0])
        {
            double* a = pointer1;
            double* b = pointer2;
            while (numPixels-- > 0)
            {
                double dist = *a++ - *b++;
                if (dist < 0) dist = -dist;
                sum += dist;
            }
        }
    }
}
This should be several times faster than your original.
Well, this method is really weird and does not look like a distance between pixels at all. But you would certainly want to use linear algebra instead of straightforward array calculations.
Image recognition, natural language processing and machine learning algorithms all use matrices, because matrix libraries are highly optimized for this kind of situation, where you need batch processing.
There is a plethora of matrix libraries in the wild; look here: Recommendation for C# Matrix Library
EDIT: Ok, thanks for feedback, trying to improve the answer...
You can use Math.Net Numerics open source library (install MathNet.Numerics nuget package) and rewrite your method like this:
using MathNet.Numerics.LinearAlgebra;
public static double DistanceBetween2Points(double[,] p1, double[,] p2, int patchSize)
{
    var A = Matrix<double>.Build.DenseOfArray(p1).SubMatrix(0, patchSize, 0, patchSize);
    var B = Matrix<double>.Build.DenseOfArray(p2).SubMatrix(0, patchSize, 0, patchSize);
    return (A - B).RowAbsoluteSums().Sum();
}
Essentially, loops slow down your code. When doing batch processing, ideally you should avoid explicit loops altogether.

algorithm to get first, second, third neighbors from a coordinate

In a bitmap, for each pixel I'm trying to get its first, second, third... level of neighbors, out to the edge of the bitmap, but my solution is a little slow, so let me know if any of you have a better algorithm or way to do this:
private IEnumerable<Point> getNeightboorsOfLevel(int level, Point startPos, Point[,] bitMap)
{
    var maxX = bitMap.GetLength(0);
    var maxY = bitMap.GetLength(1);
    if (level > Math.Max(maxX, maxY)) yield break;

    int startXpos = startPos.X - level;
    int startYpos = startPos.Y - level;
    int sizeXY = level * 2;
    var plannedTour = new Rectangle(startXpos, startYpos, sizeXY, sizeXY);
    var tourBoundaries = new Rectangle(0, 0, maxX, maxY);

    for (int UpTour = plannedTour.X; UpTour < plannedTour.Width; UpTour++)
        if (tourBoundaries.Contains(UpTour, plannedTour.Y))
            yield return bitMap[UpTour, plannedTour.Y];

    for (int RightTour = plannedTour.Y; RightTour < plannedTour.Height; RightTour++)
        if (tourBoundaries.Contains(plannedTour.Right, RightTour))
            yield return bitMap[plannedTour.Right, RightTour];

    for (int DownTour = plannedTour.X; DownTour < plannedTour.Width; DownTour++)
        if (tourBoundaries.Contains(DownTour, plannedTour.Bottom))
            yield return bitMap[DownTour, plannedTour.Bottom];

    for (int LeftTour = plannedTour.Y; LeftTour < plannedTour.Height; LeftTour++)
        if (tourBoundaries.Contains(plannedTour.X, LeftTour))
            yield return bitMap[plannedTour.X, LeftTour];
}
Well, if this is too slow, you might want to change your approach.
For example, generate a Dictionary<Color, List<Point>> which, for each color in the bitmap, holds a list of the points that are that color. Then, when you are given a point, you get its color and run through the list of points of that color to find the one closest to the given point.
This is a one-time pre-compute on your image, and it changes the complexity to depend on the number of points that share the same color. I'm assuming it is currently slow because you have to look at a lot of points, since it is rare to find a point with the same color.
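A rough sketch of that pre-compute, assuming the source is a System.Drawing.Bitmap (the method name is made up, and GetPixel is slow but acceptable for a one-time pass):
using System.Collections.Generic;
using System.Drawing;

static Dictionary<Color, List<Point>> BuildColorIndex(Bitmap bitmap)
{
    var index = new Dictionary<Color, List<Point>>();
    for (int x = 0; x < bitmap.Width; x++)
    {
        for (int y = 0; y < bitmap.Height; y++)
        {
            Color color = bitmap.GetPixel(x, y);
            List<Point> points;
            if (!index.TryGetValue(color, out points))
                index[color] = points = new List<Point>();
            points.Add(new Point(x, y));
        }
    }
    return index;
}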
One way to speed things up is to make your plannedTour include the boundaries. For example:
var plannedTour = Rectangle.FromLTRB(          // FromLTRB, so the last two values are treated as right/bottom rather than width/height
    Math.Max(0, startPos.X - level),
    Math.Max(0, startPos.Y - level),
    Math.Min(maxX, startPos.X + level),
    Math.Min(maxY, startPos.Y + level));
That pre-computes the boundaries and prevents you from having to check at every loop iteration. It can also save you an entire Left tour, for example.
If you do that, you need an if statement to prevent checking an area that is outside the boundaries. For example:
if (plannedTour.Y >= minY)
{
// do Up tour.
}
if (plannedTour.X <= maxX)
{
// do Right tour
}
if (plannedTour.Y <= maxY)
{
// do Down tour
}
if (plannedTour.X >= minX)
{
// do Left tour
}
A minor optimization would be to eliminate the four extra pixels you're checking. It looks to me as though you're checking each corner twice. You can prevent that by making the Left and Right tours start at plannedTour.Y+1 and end at plannedTour.Bottom-1.
You might be able to save some time, although it will probably be small, by using a strict left-to-right, top-to-bottom check. That is, check the Up row, then on each following row check just the left and right pixels, and finally check the bottom row. Doing this will probably give you better cache coherence, because it's likely that bitMap[x-level, y] and bitMap[x+level, y] will be on the same cache line, whereas it's highly unlikely that bitMap[x-level, y] and bitMap[x-level, y+1] will be. The savings you gain from doing the memory access that way might not be worth the added complexity in the code.
