I'm coding a runtime terrain editor for Unity and got stuck.
At first I just wanted to paint the terrain with a texture. I found this code and it worked fine:
SCRIPT: TerrainPainter
void Paint(Vector3 point)
{
    mapX = (int)(((point.x - terrainPosition.x) / terrainData.size.x) * heightmapWidth);
    mapY = (int)(((point.z - terrainPosition.z) / terrainData.size.z) * heigtmapHeight);
    splatmapData[mapY, mapX, 0] = element[0, 0, 0] = 0;
    splatmapData[mapY, mapX, 1] = element[0, 0, 1] = 1;
    terrain.terrainData.SetAlphamaps(mapX, mapY, element);
}
But now I want to paint with different sizes/thicknesses. I have another script, named Terrainmodifier, which I use to raise and lower the terrain. There I have these lines for raising:
SCRIPT: Terrainmodifier
public void RaiseTerrain(Terrain terrain, Vector3 location, float effectIncrement)
{
    int offset = areaOfEffectSize / 2;
    //--1--
    Vector3 tempCoord = (location - terrain.GetPosition());
    Vector3 coord;
    coord = new Vector3(
        (tempCoord.x / GetTerrainSize().x),
        (tempCoord.y / GetTerrainSize().y),
        (tempCoord.z / GetTerrainSize().z)
    );
    Vector3 locationInTerrain = new Vector3(coord.x * terrainHeightMapWidth, 0, coord.z * terrainHeightMapHeight);
    // End --1--
    // --2--
    int terX = (int)locationInTerrain.x - offset;
    int terZ = (int)locationInTerrain.z - offset;
    // End --2--
    // --3--
    float[,] heights = targetTerrainData.GetHeights(terX, terZ, areaOfEffectSize, areaOfEffectSize);
    for (int xx = 0; xx < areaOfEffectSize; xx++)
    {
        for (int yy = 0; yy < areaOfEffectSize; yy++)
        {
            heights[xx, yy] += (effectIncrement * Time.smoothDeltaTime);
        }
    }
    targetTerrainData.SetHeights(terX, terZ, heights);
}
So I thought I could use this as a guide and transfer it to the painter. I used GetAlphamaps() instead of GetHeights() and added the areaOfEffectSize variable.
SCRIPT: TerrainPainter
void Paint(Vector3 point)
{
    // --1--
    mapX = (int)(((point.x - terrainPosition.x) / terrainData.size.x) * heightmapWidth);
    mapY = (int)(((point.z - terrainPosition.z) / terrainData.size.z) * heigtmapHeight);
    // End --1--
    // --2--
    int terX = (int)mapX - (areaOfEffectSize / 2);
    int terY = (int)mapY - (areaOfEffectSize / 2);
    // End --2--
    // --3--
    splatmapData = terrainData.GetAlphamaps(terX, terY, areaOfEffectSize, areaOfEffectSize);
    for (int xx = 0; xx < areaOfEffectSize; xx++)
    {
        for (int yy = 0; yy < areaOfEffectSize; yy++)
        {
            splatmapData[yy, xx, 1] = element[0, 0, 1] = 1;
        }
    }
    terrain.terrainData.SetAlphamaps(terX, terY, element);
}
I hope somebody can help me find my mistake. How can I change the size of my "brush"?
EDIT: I wrote comments into the code to mark the transferred/related lines.
Oh guys, I made a stupid mistake. I solved the problem by passing splatmapData into SetAlphamaps -.-
So the solution is:
[..]
for (int xx = 0; xx < areaOfEffectSize; xx++)
{
    for (int yy = 0; yy < areaOfEffectSize; yy++)
    {
        splatmapData[yy, xx, 0] = 0;
        splatmapData[yy, xx, 1] = 1;
    }
}
terrain.terrainData.SetAlphamaps(terX, terY, splatmapData);
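For completeness, here is a minimal sketch (untested) of what the whole Paint method can look like with an adjustable brush size. paintLayer is a hypothetical field naming the texture layer to paint with; the sketch also uses terrainData.alphamapWidth/alphamapHeight (the splatmap resolution, which can differ from the heightmap resolution) and clamps the brush so it stays inside the map:
void Paint(Vector3 point)
{
    // Convert the world-space hit point into alphamap coordinates.
    int mapX = (int)(((point.x - terrainPosition.x) / terrainData.size.x) * terrainData.alphamapWidth);
    int mapY = (int)(((point.z - terrainPosition.z) / terrainData.size.z) * terrainData.alphamapHeight);

    // Centre the brush on the hit point and keep it inside the alphamap.
    int terX = Mathf.Clamp(mapX - areaOfEffectSize / 2, 0, terrainData.alphamapWidth - areaOfEffectSize);
    int terY = Mathf.Clamp(mapY - areaOfEffectSize / 2, 0, terrainData.alphamapHeight - areaOfEffectSize);

    float[,,] splatmapData = terrainData.GetAlphamaps(terX, terY, areaOfEffectSize, areaOfEffectSize);
    for (int yy = 0; yy < areaOfEffectSize; yy++)
    {
        for (int xx = 0; xx < areaOfEffectSize; xx++)
        {
            // Give full weight to the chosen layer; the weights per texel
            // should sum to 1 across all layers. "paintLayer" is hypothetical.
            for (int layer = 0; layer < terrainData.alphamapLayers; layer++)
                splatmapData[yy, xx, layer] = (layer == paintLayer) ? 1f : 0f;
        }
    }
    terrainData.SetAlphamaps(terX, terY, splatmapData);
}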
Related
I'm working on writing a Mandelbrot renderer in C# to practice multithreading, but I'm having an issue where my calculation code maxes out at 2 iterations. I don't really understand why, since the online references I've been looking at seem to calculate it the same way as I do. No matter what pixel coordinates I provide, it always returns 2 (aside from 0,0 and 0,1).
using System;
using System.Numerics;
using SkiaSharp;
namespace Mandelbrot
{
internal class Program
{
const int MaxIterations = 100;
static void Main(string[] args)
{
int size = 250;
int[,] grid = Run(size, 0, 0, 1);
// Console.WriteLine("Please specify a square size for the image.");
// try
// {
// Console.Write("Length: ");
// height = Int32.Parse(Console.ReadLine());
// }
// catch (FormatException e)
// {
// Console.WriteLine(e);
// throw;
// }
using (var surface = SKSurface.Create(width: size, height: size, SKColorType.Rgba8888, SKAlphaType.Premul))
{
SKCanvas canvas = surface.Canvas;
canvas.DrawColor(SKColors.Coral);
for (int i = 0; i < grid.GetLength(0); i++)
{
for (int j = 0; j < grid.GetLength(1); j++)
{
if (grid[i, j] >= MaxIterations)
canvas.DrawPoint(new SKPoint(i, j), SKColors.Black);
}
}
canvas.DrawPoint(new SKPoint(250, 250), SKColors.Chartreuse);
OutputImage(surface);
}
Console.WriteLine("Program successfully completed.");
}
public static int[,] Run(int size, int fromX, int fromY, int h)
{
int[] oulltput = new int[size * size];
int[,] output = new int[size, size];
for (int i = 0; i < output.GetLength(0); i += 1)
{
for (int j = 0; j < output.GetLength(1); j += 1)
{
float x = fromX + i * h;
float y = fromY + j * h;
output[i, j] = IterCount(x, y, MaxIterations);
}
}
return output;
}
public static int IterCount(float constX, float constY, int maxIterations)
{
const float maxMagnitude = 2f;
const float maxMagnitudeSquared = maxMagnitude * maxMagnitude;
int i = 0;
float x = 0.0f, y = 0.0f;
float xSquared = 0.0f, ySquared = 0.0f;
while (xSquared + ySquared <= maxMagnitudeSquared && i < maxIterations)
{
xSquared = x * x;
ySquared = y * y;
float xtmp = xSquared - ySquared + constX;
y = 2.0f * x * y + constY;
x = xtmp;
i++;
}
return i;
}
private static void OutputImage(SKSurface surface)
{
Console.WriteLine("Attempting to write .png to disk...");
using (var image = surface.Snapshot())
using (var data = image.Encode(SKEncodedImageFormat.Png, 80))
using (var stream = File.OpenWrite("out.png"))
{
// save the data to a stream
data.SaveTo(stream);
Console.WriteLine("Success!");
}
}
}
}
I tried to use breakpoints and WriteLine statements to debug, but I can't figure out where my math is going wrong. I keep getting 2 for my iteration count.
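For comparison, a typical Mandelbrot renderer first maps each pixel to a point c in the complex plane (roughly the square from -2 to 2 on both axes) and iterates on that. With Run(size, 0, 0, 1), the values handed to IterCount are the raw pixel indices, which lie far outside the escape radius of 2 and so bail out after about two iterations. A sketch of such a mapping, reusing IterCount from the code above (my illustration, not the original code):
// Illustrative only: map pixel (i, j) to a point c in the complex plane
// covering [-2, 2] x [-2, 2], then count iterations of z = z^2 + c.
public static int[,] RunScaled(int size, int maxIterations)
{
    int[,] output = new int[size, size];
    float step = 4.0f / size;           // width of one pixel in the complex plane
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            float x = -2.0f + i * step; // real part of c
            float y = -2.0f + j * step; // imaginary part of c
            output[i, j] = IterCount(x, y, maxIterations);
        }
    }
    return output;
}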
I am implementing Yolov4-Tiny (onnx model found here) in Unity with the Windows ML APIs. I can load the model and begin a session with no issue. I am using a VideoFrame (sized to 416x416) as the input and can access the two output Tensors. The problems arise when I begin to parse the output Tensors. With a confidence threshold of .5, I get between 700 and 1000 detections each frame, way more than I expect. Also, the bboxes appear to be very small. The NMS and IOU functions below are nearly verbatim from here, so I am not using anchors for the bounding boxes. I believe my issue is in the NMS and IOU functions, but I cannot locate the problem. My gut tells me I am manipulating the output Tensors incorrectly. Any ideas?
private List<DetectionResult> ParseResult(float[] boxes, float[] classes)
{
int c_values = 80;
int c_boxes = boxes.Length / 4;
int c_classNames = classes.Length / c_values;
float confidence_threshold = 0.5f;
List<DetectionResult> detections = new List<DetectionResult>();
for (int i_box = 0; i_box < c_classNames; i_box++)
{
float max_prob = 0.0f;
int label_index = -1;
for (int j_confidence = 0; j_confidence < c_values; j_confidence++)
{
int index = i_box * c_values + j_confidence;
if (Sigmoid(classes[index]) > max_prob)
{
max_prob = Sigmoid(classes[index]) ;
label_index = j_confidence;
}
}
if (max_prob > confidence_threshold)
{
//Debug.Log(_labels[label_index]);
List<float> bbox = new List<float>();
bbox.Add(boxes[(i_box * 4) + 0] * 416);
bbox.Add(boxes[(i_box * 4) + 1] * 416);
bbox.Add(boxes[(i_box * 4) + 2] * 416);
bbox.Add(boxes[(i_box * 4) + 3] * 416);
detections.Add(new DetectionResult()
{
label = _labels[label_index],
bbox = bbox,
prob = max_prob
});
}
}
return detections;
}
private List<DetectionResult> NMS(IReadOnlyList<DetectionResult> detections,
float IOU_threshold = 0.45f,
float score_threshold = 0.3f)
{
List<DetectionResult> final_detections = new List<DetectionResult>();
for (int i = 0; i < detections.Count; i++)
{
int j = 0;
for (j = 0; j < final_detections.Count; j++)
{
if (ComputeIOU(final_detections[j], detections[i]) > IOU_threshold)
{
break;
}
}
if (j == final_detections.Count)
{
final_detections.Add(detections[i]);
}
}
return final_detections;
}
private float ComputeIOU(DetectionResult DRa, DetectionResult DRb)
{
float ay1 = DRa.bbox[0];
float ax1 = DRa.bbox[1];
float ay2 = DRa.bbox[2];
float ax2 = DRa.bbox[3];
float by1 = DRb.bbox[0];
float bx1 = DRb.bbox[1];
float by2 = DRb.bbox[2];
float bx2 = DRb.bbox[3];
// determine the coordinates of the intersection rectangle
float x_left = Math.Max(ax1, bx1);
float y_top = Math.Max(ay1, by1);
float x_right = Math.Min(ax2, bx2);
float y_bottom = Math.Min(ay2, by2);
if (x_right < x_left || y_bottom < y_top)
return 0;
float intersection_area = (x_right - x_left) * (y_bottom - y_top);
float bb1_area = (ax2 - ax1) * (ay2 - ay1);
float bb2_area = (bx2 - bx1) * (by2 - by1);
float iou = intersection_area / (bb1_area + bb2_area - intersection_area);
return iou;
}
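One thing worth double-checking (I can't verify it against that particular ONNX export): greedy NMS as written assumes the detections arrive sorted by confidence, highest first, so that the best box in each overlapping cluster is the one kept; and ComputeIOU reads each bbox as corner coordinates, whereas many YOLO heads emit center-x, center-y, width, height. A minimal sketch of the sorting step, assuming the DetectionResult fields used above (requires using System.Linq):
// Sketch: sort by confidence before the greedy suppression pass so the
// highest-scoring box is kept and weaker overlapping boxes are dropped.
private List<DetectionResult> NMSSorted(IReadOnlyList<DetectionResult> detections,
    float IOU_threshold = 0.45f)
{
    var sorted = detections.OrderByDescending(d => d.prob).ToList();
    var final_detections = new List<DetectionResult>();
    foreach (var candidate in sorted)
    {
        // Keep the candidate only if it does not overlap any kept box too much.
        bool suppressed = final_detections.Any(kept => ComputeIOU(kept, candidate) > IOU_threshold);
        if (!suppressed)
            final_detections.Add(candidate);
    }
    return final_detections;
}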
This program is coded in C#. It is supposed to display a 3D graph of the function Log(x,y). I don't know why, but every time I run it I get a System.OverflowException when it begins to draw the graph, and the program stops.
How can I prevent it from happening, and why does it happen?
private void Draw_Function_Click(object sender, EventArgs e)
{
int size = 100;
double accuracy = 0.09;
int zoom = 1;
ver = new double[size, size];
xtag = new double[size, size];
ytag = new double[size, size];
function = Insert_Function.Text;
for (int i = 0; i < 100; i++)
{
for (int p = 0; p < 100; p++)
{
ver[i, p] = Math.Log(i,p);
xtag[i, p] = p * accuracy - i * accuracy * Math.Cos(Math.PI / 5);
ytag[i, p] = ver[i, p] - i * accuracy * Math.Sin(Math.PI / 5);
}
}
Graphics g = panel1.CreateGraphics();
for (int i = 0; i < ver.GetLength(0) - 1; i++)
for (int p = 1; p < ver.GetLength(1) - 1; p++)
{
int y0 = (panel1.Height / 2) - (int)(ytag[i, p] * zoom);
int x0 = (int)(zoom * xtag[i, p]) + panel1.Width / 2;
int y1 = (panel1.Height / 2) - (int)(ytag[i + 1, p + 1] * zoom);
int x1 = (int)(zoom * xtag[i + 1, p + 1]) + panel1.Width / 2;
g.DrawLine(Pens.Black,
(float)x0,
(float)y0,
(float)x1,
(float)y1);
}
}
I found the following answer: https://social.msdn.microsoft.com/Forums/windows/en-US/9f2b5bba-f725-45c1-9ada-383151267c13/overflow-exception-on-drawline-method-?forum=winforms
Quote: "Hi, the minimum value allowed for screen coordinate is x = -1073741376 and y = -1073740288, this is the boundary where the desktop surface lies, if you go further, you enter the void, therfore cause an overflow..."
I ran your calculation and, besides the fact that it produces NaN and -Infinity values, it also produced very large negative values like -2147483503. This is the reason why you get the overflow exception.
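One way to avoid the exception (a sketch, not tested against the original project) is to skip any segment whose projected values are NaN, infinite, or absurdly large before converting them to pixel coordinates and calling DrawLine:
// Sketch: guard against NaN/Infinity and huge values produced by
// Math.Log(i, p) for i = 0 or p <= 1 before drawing the segment.
static bool IsDrawable(double v)
{
    return !double.IsNaN(v) && !double.IsInfinity(v) && Math.Abs(v) < 1000000;
}

// Inside the drawing loop, replacing the unconditional DrawLine call:
if (IsDrawable(xtag[i, p]) && IsDrawable(ytag[i, p]) &&
    IsDrawable(xtag[i + 1, p + 1]) && IsDrawable(ytag[i + 1, p + 1]))
{
    int y0 = (panel1.Height / 2) - (int)(ytag[i, p] * zoom);
    int x0 = (int)(zoom * xtag[i, p]) + panel1.Width / 2;
    int y1 = (panel1.Height / 2) - (int)(ytag[i + 1, p + 1] * zoom);
    int x1 = (int)(zoom * xtag[i + 1, p + 1]) + panel1.Width / 2;
    g.DrawLine(Pens.Black, x0, y0, x1, y1);
}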
I am trying to implement a fluid surface simulation using the paper Fast Hydraulic Erosion Simulation and Visualization on GPU as a template to simulate water. However, I get artifacts like this:
The code for my update function is below. I have followed another post on this subject, but the change did not help.
private void updateHeight (float dt, float dx){
float dhL, dhR, dhF, dhB, DV;
float totalFlux;
float updateConstant;
float fluxConstant = dt*GRAVITY/(2*VISCOUSCOEFFICEINT*dx);
//Update Flux
for (int y=1; y<=N-1; y++){
for(int x=1;x<=N-1;x++){
dhL = this.height[x][y] - this.height[x-1][y];
dhR = this.height[x][y] - this.height[x+1][y];
dhF = this.height[x][y] - this.height[x][y+1];
dhB = this.height[x][y] - this.height[x][y-1];
if (Mathf.Abs(dhL) < 0.0001f){
dhL=0.0f;
}
if (Mathf.Abs(dhR) < 0.0001f){
dhR=0;
}
if (Mathf.Abs(dhF) < 0.0001f){
dhF=0;
}
if (Mathf.Abs(dhB) < 0.0001f){
dhB=0;
}
this.tempFluxArray[x][y].fluxL = Mathf.Max(0.0f, this.fluxArray[x][y].fluxL + fluxConstant*dhL);
this.tempFluxArray[x][y].fluxR = Mathf.Max(0.0f, this.fluxArray[x][y].fluxR + fluxConstant*dhR);
this.tempFluxArray[x][y].fluxF = Mathf.Max(0.0f, this.fluxArray[x][y].fluxF + fluxConstant*dhF);
this.tempFluxArray[x][y].fluxB = Mathf.Max(0.0f, this.fluxArray[x][y].fluxB + fluxConstant*dhB);
totalFlux = this.tempFluxArray[x][y].fluxL + this.tempFluxArray[x][y].fluxR + this.tempFluxArray[x][y].fluxF + this.tempFluxArray[x][y].fluxB;
if(totalFlux > 0){
updateConstant = Mathf.Min(1.0f, this.height[x][y]* dx*dx/(totalFlux * dt));
this.tempFluxArray[x][y].fluxL = updateConstant * this.tempFluxArray[x][y].fluxL;
this.tempFluxArray[x][y].fluxR = updateConstant * this.tempFluxArray[x][y].fluxR;
this.tempFluxArray[x][y].fluxF = updateConstant * this.tempFluxArray[x][y].fluxF;
this.tempFluxArray[x][y].fluxB = updateConstant * this.tempFluxArray[x][y].fluxB;
}
}
}
swap();
//Height Calculation
for (int y=1; y<=N-1; y++){
for(int x=1;x<=N-1;x++){
DV = dt*(this.fluxArray[x-1][y].fluxR + this.fluxArray[x][y-1].fluxF + this.fluxArray[x+1][y].fluxL + this.fluxArray[x][y+1].fluxB - this.fluxArray[x][y].fluxL - this.fluxArray[x][y].fluxR - this.fluxArray[x][y].fluxF - this.fluxArray[x][y].fluxB);
this.height[x][y] = this.height[x][y] + DV/(dx*dx);
if(this.height[x][y] < 1){
// Debug.Log(x);
// Debug.Log(y);
}
}
}
}
Could it be due to using this rather than ref? The way I interact with the water surface is written below.
private void waterdrop(int x, int y){
float sqrD = 0.8f*0.8f;
for (int j = 1; j < N; j++) {
for (int i = 1; i < N; i++) {
float sqrDToVert = (float)0.2f*(i - x)*(i-x) + 0.2f*(j - y)*(j-y);
if (sqrDToVert <= sqrD){
float distanceCompensator = 1 - (sqrDToVert/sqrD);
this.fluxArray[i][j].fluxL = this.fluxArray[i][j].fluxL + (0.02f * distanceCompensator);
this.fluxArray[i][j].fluxR = this.fluxArray[i][j].fluxR + (0.02f * distanceCompensator);
this.fluxArray[i][j].fluxF = this.fluxArray[i][j].fluxF + (0.02f * distanceCompensator);
this.fluxArray[i][j].fluxB = this.fluxArray[i][j].fluxB + (0.02f * distanceCompensator);
//this.height[i][j] = this.height[i][j] - 1.0f * distanceCompensator;
Debug.Log(this.height[i][j]);
}
//Debug.Log("x = "+i+"\n y = "+j+" height is "+this.height[i][j]);
}
}
}
Namely, I am just changing the flux at some point in all directions.
Some solutions I have tried were changing the height instead, which didn't work, and clamping only small changes, but the artifact just built up. The boundary conditions are zero flux on all borders, and the height at the boundary is the same as at the closest interior point.
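On the this versus ref question: the flux and height arrays are reference types, so mutating their elements inside a method changes the shared data either way; passing them by ref would not change the behaviour. The swap() call is assumed below to be a plain double-buffer exchange (my assumption, not the original code; Flux is a hypothetical name for the element type holding fluxL/fluxR/fluxF/fluxB). Note that the border cells lie outside the 1..N-1 update loops, so after a swap they keep whatever stale values the other buffer held unless they are cleared explicitly.
// Assumed double-buffer swap (not from the original code): after the flux
// update has written into tempFluxArray, exchange the two references so the
// newly computed fluxes become the current ones.
private void swap()
{
    Flux[][] tmp = this.fluxArray;   // "Flux" is a placeholder element type
    this.fluxArray = this.tempFluxArray;
    this.tempFluxArray = tmp;
}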
I am creating a game after working through an XNA 4.0 book. It will be 3D, but I am already stuck on creating the terrain...
UPDATE: Everything starting from here is an update...
Terrain Update:
public void Update(Matrix view, Matrix projection)
{
View = view;
Projection = projection;
World = Matrix.CreateTranslation(-Width / 2f, 0, Height / 2f);
}
Terrain Draw:
public void Draw(GraphicsDevice g)
{
effect.CurrentTechnique = effect.Techniques["ColoredNoShading"];
effect.Parameters["xView"].SetValue(View);
effect.Parameters["xProjection"].SetValue(Projection);
effect.Parameters["xWorld"].SetValue(World);
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
//g.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3, VertexPositionColorNormal.VertexDeclaration);
g.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, indices.Length / 3);
}
}
The commented line is working; in both cases I am able to see the terrain...
The following code initializes the vertex and index buffers:
private void SetUpVertices(GraphicsDevice g)
{
float currentH;
int currentI;
vertices = new VertexPositionColorNormal[Width * Height];
for (int x = 0; x < Width; x++)
{
for (int y = 0; y < Height; y++)
{
currentH = heightData[x,y];
currentI = x + y * Width;
vertices[currentI].Position = new Vector3(x, currentH , -y);
if (currentH < minH + (maxH - minH) / 3)
vertices[currentI].Color = Color.ForestGreen;
else if (currentH < maxH - (maxH - minH) / 3)
vertices[currentI].Color = Color.LawnGreen;
else
vertices[currentI].Color = Color.White;
}
}
SetUpIndices(g);
}
private void SetUpIndices(GraphicsDevice g)
{
indices = new int[(Width - 1) * (Height - 1) * 6];
int counter = 0;
for (int y = 0; y < Height - 1; y++)
{
for (int x = 0; x < Width - 1; x++)
{
int lowerLeft = x + y * Width;
int lowerRight = (x + 1) + y * Width;
int topLeft = x + (y + 1) * Width;
int topRight = (x + 1) + (y + 1) * Width;
indices[counter++] = topLeft;
indices[counter++] = lowerRight;
indices[counter++] = lowerLeft;
indices[counter++] = topLeft;
indices[counter++] = topRight;
indices[counter++] = lowerRight;
}
}
SetUpNormals(g);
}
private void SetUpNormals(GraphicsDevice g)
{
for (int i = 0; i < vertices.Length; i++)
{
vertices[i].Normal = Vector3.Zero;
}
int[] index = new int[3];
Vector3 s1, s2, n;
for (int i = 0; i < vertices.Length / 3; i++)
{
for (int y = 0; y < 3; y++)
index[y] = indices[i * 3 + y];
s1 = vertices[index[0]].Position - vertices[index[2]].Position;
s2 = vertices[index[0]].Position - vertices[index[1]].Position;
n = Vector3.Cross(s1, s2);
for (int y = 0; y < 3; y++)
{
vertices[index[y]].Normal += n;
vertices[index[y]].Normal.Normalize();
}
}
FillBuffers(g);
}
private void FillBuffers(GraphicsDevice g)
{
VertexBuffer = new VertexBuffer(g, VertexPositionColorNormal.VertexDeclaration, vertices.Length, BufferUsage.WriteOnly);
VertexBuffer.SetData(vertices);
IndexBuffer = new IndexBuffer(g, typeof(int), indices.Length, BufferUsage.WriteOnly);
IndexBuffer.SetData(indices);
g.Indices = IndexBuffer;
g.SetVertexBuffer(VertexBuffer);
}
I don't think there is a mistake, because it works with the other line. Might there be an error in the .fx file I am using? If you think so, I am going to switch to BasicEffect...
(You might notice that the code is from http://www.riemers.net/eng/Tutorials/XNA/Csharp/series1.php )
Thanks for your help...
Yours,
Florian
(Answer to original revision of the question.)
You're not setting your vertex buffer and index buffer onto the graphics device. These two lines of code (untested) should do what you need:
g.GraphicsDevice.Indices = indexBuffer;
g.GraphicsDevice.SetVertexBuffer(vertexBuffer);
Place them just after you set the parameters on your effect (ef), before the loop.
The vertex buffer provides the vertex declaration that the exception message is asking for.
Edit after question update: In your new version you're setting the vertex and index buffers - but it's in the wrong place. You need to set them onto the graphics device each frame. Your code would only work if nothing changes them after you set them in FillBuffers. But I'm guessing that stuff is being drawn outside your class's Draw method?
If that something else is a SpriteBatch, note that it too works with vertex buffers and index buffers internally, so it will reset your settings. (It's worth adding that it also sets render states, in which case you might need to see this article.)
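In code terms, that means re-binding the buffers inside Draw every frame, before issuing the draw call. An untested sketch based on the Draw method above, using the same fields (effect, View, Projection, World, VertexBuffer, IndexBuffer, vertices, indices):
public void Draw(GraphicsDevice g)
{
    effect.CurrentTechnique = effect.Techniques["ColoredNoShading"];
    effect.Parameters["xView"].SetValue(View);
    effect.Parameters["xProjection"].SetValue(Projection);
    effect.Parameters["xWorld"].SetValue(World);

    // Re-bind every frame so a SpriteBatch (or anything else) that drew
    // earlier cannot leave stale buffers or vertex declarations behind.
    g.Indices = IndexBuffer;
    g.SetVertexBuffer(VertexBuffer);

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        g.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, indices.Length / 3);
    }
}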