In Unity C#, I'm doing a procedural mesh extrusion based on a flat polygon's 2D vector points. This works great with the code below, except for one detail: seemingly every second triangle connecting the front to the back side of the extruded mesh is flipped, as the image shows (using a double-sided shader I can see that all the triangles do exist, though). How would I fix the flipped normals on these bridging triangles? Thanks!
// Extrusion functionality via
// https://forum.unity.com/threads/trying-extrude-a-2d-polygon-to-create-a-mesh.102629/
// with Triangulator based on
// http://wiki.unity3d.com/index.php?title=Triangulator
public static Mesh GetExtrudedMeshFromPoints(Vector2[] points, float depth)
{
    const float frontVertex = 0f;
    Triangulator triangulator = new Triangulator(points);
    int[] tris = triangulator.Triangulate();
    Mesh m = new Mesh();

    Vector3[] vertices = new Vector3[points.Length * 2];
    for (int i = 0; i < points.Length; i++)
    {
        vertices[i].x = points[i].x;
        vertices[i].y = points[i].y;
        vertices[i].z = frontVertex;
        vertices[i + points.Length].x = points[i].x;
        vertices[i + points.Length].y = points[i].y;
        vertices[i + points.Length].z = depth;
    }

    int[] triangles = new int[tris.Length * 2 + points.Length * 6];
    int count_tris = 0;

    // Front vertices
    for (int i = 0; i < tris.Length; i += 3)
    {
        triangles[i] = tris[i];
        triangles[i + 1] = tris[i + 1];
        triangles[i + 2] = tris[i + 2];
    }
    count_tris += tris.Length;

    // Back vertices
    for (int i = 0; i < tris.Length; i += 3)
    {
        triangles[count_tris + i] = tris[i + 2] + points.Length;
        triangles[count_tris + i + 1] = tris[i + 1] + points.Length;
        triangles[count_tris + i + 2] = tris[i] + points.Length;
    }
    count_tris += tris.Length;

    // Triangles around the perimeter of the object
    for (int i = 0; i < points.Length; i++)
    {
        int n = (i + 1) % points.Length;
        triangles[count_tris] = i;
        triangles[count_tris + 1] = i + points.Length;
        triangles[count_tris + 2] = n;
        triangles[count_tris + 3] = n;
        triangles[count_tris + 4] = n + points.Length;
        triangles[count_tris + 5] = i + points.Length;
        count_tris += 6;
    }

    m.vertices = vertices;
    m.triangles = triangles;
    m.RecalculateNormals();
    m.RecalculateBounds();
    m.Optimize();
    return m;
}
If a face isn't shown it is, as you already noticed, flipped. This is due to the winding order, which in Unity is clockwise. The page you linked for your Triangulator already states under Troubleshooting:
"If you can't see a polygon created with this utility, remember to check if the polygon is facing the opposite direction. If it is, you can change that by constructing your mesh with the vertex indices in reverse order."
EDIT:
For further clarification: in your code this would mean you have to change
// Triangles around the perimeter of the object
for (int i = 0; i < points.Length; i++)
{
    int n = (i + 1) % points.Length;
    triangles[count_tris] = i;
    triangles[count_tris + 1] = i + points.Length;
    triangles[count_tris + 2] = n;
    triangles[count_tris + 3] = n;
    triangles[count_tris + 4] = n + points.Length;
    triangles[count_tris + 5] = i + points.Length;
    count_tris += 6;
}
to
// Triangles around the perimeter of the object
for (int i = 0; i < points.Length; i++)
{
    int n = (i + 1) % points.Length;
    triangles[count_tris] = n;
    triangles[count_tris + 1] = i + points.Length;
    triangles[count_tris + 2] = i;
    triangles[count_tris + 3] = n;
    triangles[count_tris + 4] = n + points.Length;
    triangles[count_tris + 5] = i + points.Length;
    count_tris += 6;
}
But be careful, because the correct order depends on your depth (whether it is higher or lower than your frontVertex).
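For illustration, here is a minimal sketch of that perimeter loop with the winding chosen from the sign of the extrusion. The backIsBehindFront flag and the branch layout are my own illustration, not part of the original code; if your faces still come out inverted, flip the comparison.

bool backIsBehindFront = depth > frontVertex;
for (int i = 0; i < points.Length; i++)
{
    int n = (i + 1) % points.Length;
    if (backIsBehindFront)
    {
        // Winding from the fix above
        triangles[count_tris] = n;
        triangles[count_tris + 1] = i + points.Length;
        triangles[count_tris + 2] = i;
        triangles[count_tris + 3] = n;
        triangles[count_tris + 4] = n + points.Length;
        triangles[count_tris + 5] = i + points.Length;
    }
    else
    {
        // Exact mirror of the branch above, for an extrusion in the other direction
        triangles[count_tris] = i;
        triangles[count_tris + 1] = i + points.Length;
        triangles[count_tris + 2] = n;
        triangles[count_tris + 3] = i + points.Length;
        triangles[count_tris + 4] = n + points.Length;
        triangles[count_tris + 5] = n;
    }
    count_tris += 6;
}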
2nd EDIT:
The normal of a triangle depends on the winding order, which means the ordering makes a difference.
An example:
1: Vector2(1f, 1f);
2: Vector2(1f, 0f);
3: Vector2(0f, 0f);
the triangles
Triangle 1,2,3
and
Triangle 1,3,2
have different normals.
You have to make sure that the winding order is consistent for every triangle you draw.
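To make that concrete, here is a small standalone sketch (not from the original post) that computes a triangle's face normal with a cross product; swapping two of the corners reverses it:

using UnityEngine;

public static class WindingDemo
{
    // Face normal of a triangle given its corners in winding order.
    public static Vector3 FaceNormal(Vector3 a, Vector3 b, Vector3 c)
    {
        return Vector3.Cross(b - a, c - a).normalized;
    }

    public static void Demo()
    {
        Vector3 p1 = new Vector3(1f, 1f, 0f);
        Vector3 p2 = new Vector3(1f, 0f, 0f);
        Vector3 p3 = new Vector3(0f, 0f, 0f);

        Debug.Log(FaceNormal(p1, p2, p3)); // (0, 0, -1)
        Debug.Log(FaceNormal(p1, p3, p2)); // (0, 0, 1), the opposite direction
    }
}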
Alternatively, you could tell your shader to disable culling (with Cull Off), as you already said above. But this comes at a cost in computation time, which in most cases shouldn't matter much; it always depends on your purpose. Besides, most of the time you don't want the side effects of disabling culling.
Related
I've been making procedural terrain height maps with the diamond-square algorithm, and building the mesh with the triangulation method below:
public Map GenerateMap()
{
    Mesh mapMesh = new();
    vertices = new Vector3[(Resolution + 1) * (Resolution + 1)];
    Vector2[] uv1 = new Vector2[vertices.Length];
    Vector2[] uv2 = new Vector2[vertices.Length];
    Vector2[] uv3 = new Vector2[vertices.Length];
    DiamondSquare diamondSquare = new(Resolution, Roughness, Seed, HeightLevels);
    float[,] heightFloatMap = diamondSquare.DoDiamondSquare();
    tex = new Texture2D(Resolution, Resolution);
    for (int y = 0, i = 0; y <= Resolution; y++)
    {
        for (int x = 0; x <= Resolution; x++, i++)
        {
            //float height = heightMap.GetPixel(x,y).r;
            float height = heightFloatMap[x, y];
            vertices[i] = new Vector3(x * CellSize.x, height * CellSize.y, y * CellSize.z);
            tex.SetPixel(x, y, new Color(height, height, height, 1));
            if (height == 0)
                uv1[i] = new Vector2(vertices[i].x, vertices[i].z);
            else if (height < 0.4)
                uv2[i] = new Vector2(vertices[i].x, vertices[i].z);
            else if (height < 0.4)
                uv3[i] = new Vector2(vertices[i].x, vertices[i].z);
        }
    }
    mapMesh.vertices = vertices;
    mapMesh.uv = uv1;
    mapMesh.uv2 = uv2;
    int[] triangles = new int[Resolution * Resolution * 6];
    Cell[,] cellMap = new Cell[Resolution / 4, Resolution / 4];
    for (int ti = 0, vi = 0, y = 0; y < Resolution; y++, vi++)
    {
        for (int x = 0; x < Resolution; x++, ti += 6, vi++)
        {
            triangles[ti] = vi;
            triangles[ti + 3] = triangles[ti + 2] = vi + 1;
            triangles[ti + 4] = triangles[ti + 1] = vi + Resolution + 1;
            triangles[ti + 5] = vi + Resolution + 2;
            Vector3[] cellVerts = new Vector3[]
            {
                vertices[vi], vertices[vi + 1], vertices[vi + Resolution + 1], vertices[vi + Resolution + 2]
            };
            Cell cell = new(new Vector2Int(x, y), cellVerts, CalculateCellGeometry(cellVerts));
            cellMap[x / 4, y / 4] = cell;
        }
    }
    mapMesh.triangles = triangles;
    mapMesh.RecalculateNormals();
    mapMesh.RecalculateTangents();
    mapMesh.RecalculateBounds();
    Map map = new(mapMesh, cellMap, heightFloatMap, vertices);
    return map;
}
This works fine with grid sizes 16x16, 32x32 ... 256x256, but breaks when I try it at 512x512 or above.
256x256: the mesh is perfect.
512x512: it successfully triangulates up until the rows starting at y=128, and on the underside of the terrain there are these bars.
I've mapped out the vertices generated at 512x512 and above and they are all good, so I'm 99% sure it's down to the triangulation.
I'm new to procedural meshes and am stumped by this issue; any help would be greatly appreciated.
Turns out it wasn't the triangulation: the vertex limit was being reached, as my mesh was set to use a 16-bit index buffer (at 512x512 the grid has 513 x 513 = 263169 vertices, far more than the 65535 a 16-bit index can address).
I added this line
mapMesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
and the issue is fixed. An annoying oversight on my part, but that's part of the learning process!
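For reference, here is a minimal, self-contained sketch of a grid builder with the fix in place. The LargeMeshExample name and flat-grid layout are purely illustrative; the triangle pattern matches the code above, and the important part is that indexFormat is set before the triangle indices are assigned.

using UnityEngine;
using UnityEngine.Rendering;

public static class LargeMeshExample
{
    // Hypothetical helper, not from the original post.
    public static Mesh BuildFlatGrid(int resolution)
    {
        Mesh mapMesh = new Mesh();
        // (resolution + 1)^2 vertices; at 512 that is 263169, beyond the 16-bit limit,
        // so switch to 32-bit indices before assigning the triangles.
        mapMesh.indexFormat = IndexFormat.UInt32;

        Vector3[] vertices = new Vector3[(resolution + 1) * (resolution + 1)];
        for (int y = 0, i = 0; y <= resolution; y++)
            for (int x = 0; x <= resolution; x++, i++)
                vertices[i] = new Vector3(x, 0f, y);

        int[] triangles = new int[resolution * resolution * 6];
        for (int ti = 0, vi = 0, y = 0; y < resolution; y++, vi++)
        {
            for (int x = 0; x < resolution; x++, ti += 6, vi++)
            {
                triangles[ti] = vi;
                triangles[ti + 3] = triangles[ti + 2] = vi + 1;
                triangles[ti + 4] = triangles[ti + 1] = vi + resolution + 1;
                triangles[ti + 5] = vi + resolution + 2;
            }
        }

        mapMesh.vertices = vertices;
        mapMesh.triangles = triangles;
        mapMesh.RecalculateNormals();
        mapMesh.RecalculateBounds();
        return mapMesh;
    }
}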
So I am trying to create my own version of Sebastian Lague's marching cubes coding adventure (https://www.youtube.com/watch?v=M3iI2l0ltbE). I have a script that generates Perlin noise into terrain on the Y axis, and I'm trying to make it create Perlin noise for the x, y and z axes and compile vertices from that noise into a mesh. Right now I'm in the process of adding the code that decides which points should be put into the mesh. The problem is that I have to change the variable y in lines 49 - 56 to the variable ypn (declared on line 45) while leaving the other y variables in that area alone. If I change one of those y variables (I don't know which one), it makes the mesh flat, and that's not what I want. Then it gets worse: if I revert the changed variable back to y, nothing changes; the mesh stays flat unless I make a new script. My code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MapGen : MonoBehaviour
{
    Mesh mesh;
    Material material;
    Vector3[] vertices;
    int[] triangles;
    Vector2[] uvs;
    Vector3[] interest;
    public int xSize = 20;
    public int zSize = 20;
    public int ySize = 20;
    [Range(-1, 1)]
    public float Surface = 0;
    public float Weight = 1;
    float minHeight = 0;
    float maxHeight = 0;

    // Start is called before the first frame update
    void Start()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = mesh;
        CreateShape();
        UpdateMesh();
    }

    void CreateShape()
    {
        vertices = new Vector3[(xSize + 1) * (zSize + 1)];
        Texture2D texture = new Texture2D(xSize, zSize, TextureFormat.ARGB32, false);
        for (int y = 0; y <= ySize; y++)
        {
            for (int i = 0, z = 0; z <= zSize; z++)
            {
                for (int x = 0; x <= xSize; x++)
                {
                    float ypn = (Mathf.PerlinNoise(x * .3f, z * .3f) * 2f) * Weight;
                    vertices[i] = new Vector3(x, ypn, z);
                    if (y <= minHeight)
                        minHeight = y;
                    else
                    {
                        if (y >= maxHeight)
                            maxHeight = y;
                    }
                    var result = Mathf.Lerp(minHeight, maxHeight, Mathf.InverseLerp(0, 1, y));
                    texture.SetPixel(x, z, new Color(y, x, 1));
                    i++;
                }
            }
        }
        texture.Apply();
        GetComponent<Renderer>().material.mainTexture = texture;

        triangles = new int[xSize * zSize * 6];
        int vert = 0;
        int tris = 0;
        for (int z = 0; z < zSize; z++)
        {
            for (int x = 0; x < xSize; x++)
            {
                triangles[tris + 0] = vert + 0;
                triangles[tris + 1] = vert + xSize + 1;
                triangles[tris + 2] = vert + 1;
                triangles[tris + 3] = vert + 1;
                triangles[tris + 4] = vert + xSize + 1;
                triangles[tris + 5] = vert + xSize + 2;
                vert++;
                tris += 6;
            }
            vert++;
        }

        uvs = new Vector2[vertices.Length];
        for (int i = 0, z = 0; z <= zSize; z++)
        {
            for (int x = 0; x <= xSize; x++)
            {
                uvs[i] = new Vector2((float)x / xSize, (float)z / zSize);
                i++;
            }
        }
    }

    void UpdateMesh()
    {
        mesh.Clear();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.uv = uvs;
        mesh.RecalculateNormals();
        MeshCollider meshc = gameObject.AddComponent(typeof(MeshCollider)) as MeshCollider;
        meshc.sharedMesh = mesh;
    }
}
Note: I'm using Unity 2019.2.15.
The script is attached to an empty GameObject with a Mesh Renderer and a Mesh Filter at default Unity values.
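As a side note on the three-axis noise part of the question: Unity has no built-in 3D Perlin noise, so a common workaround is to average 2D PerlinNoise samples taken over the axis planes and feed the result into a density grid like the one marching cubes consumes. A rough sketch, with names that are purely illustrative and not from the script above:

using UnityEngine;

public static class Noise3DSketch
{
    // Hypothetical helper: pseudo-3D Perlin noise built from Unity's 2D PerlinNoise
    // by averaging samples taken on the xy, yz and xz planes (and their mirrors).
    public static float Perlin3D(float x, float y, float z)
    {
        float xy = Mathf.PerlinNoise(x, y);
        float yz = Mathf.PerlinNoise(y, z);
        float xz = Mathf.PerlinNoise(x, z);
        float yx = Mathf.PerlinNoise(y, x);
        float zy = Mathf.PerlinNoise(z, y);
        float zx = Mathf.PerlinNoise(z, x);
        return (xy + yz + xz + yx + zy + zx) / 6f;
    }

    // Fill a density grid; points whose density exceeds a surface threshold would be
    // the ones a marching-cubes pass keeps.
    public static float[,,] FillDensity(int xSize, int ySize, int zSize, float scale)
    {
        float[,,] density = new float[xSize + 1, ySize + 1, zSize + 1];
        for (int y = 0; y <= ySize; y++)
            for (int z = 0; z <= zSize; z++)
                for (int x = 0; x <= xSize; x++)
                    density[x, y, z] = Perlin3D(x * scale, y * scale, z * scale);
        return density;
    }
}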
I have a problem with normalized values when creating a procedural grid in Unity. I have been following the great tutorial from catlikecoding and I ran into weird behaviour when I tried to use normalized values for my vertices. With some xSize and ySize grid combinations everything works, but with other combinations the mesh gets deformed. Let me give you a couple of examples:
xSize = 35; ySize = 25; // OK
xSize = 350; ySize = 250; // NOT OK
xSize = 150; ySize = 250; // OK
xSize = 350; ySize = 200; // NOT OK
xSize = 1000; ySize = 750; // NOT OK
I illustrated the first two cases with a sphere representing every 10th vertex.
35x25 case
350x250 case
I am using Unity3d 2018.3
private void Generate()
{
    GetComponent<MeshFilter>().mesh = mesh = new Mesh();
    mesh.name = "Procedural Grid";
    vertices = new Vector3[(xSize + 1) * (ySize + 1)];
    Vector2[] uv = new Vector2[vertices.Length];
    float multX = 1 / (float)xSize;
    float multY = 1 / (float)ySize;
    for (int i = 0, y = 0; y <= ySize; y++)
    {
        for (int x = 0; x <= xSize; x++, i++)
        {
            //vertices[i] = new Vector3(x, y);
            var xNormalized = x * multX;
            var yNormalized = y * multY;
            vertices[i] = new Vector3(xNormalized, yNormalized);
            uv[i] = new Vector2(xNormalized, yNormalized);
        }
    }
    mesh.vertices = vertices;
    mesh.uv = uv;

    var triangles = new int[xSize * ySize * 6];
    for (int ti = 0, vi = 0, y = 0; y < ySize; y++, vi++)
    {
        for (int x = 0; x < xSize; x++, ti += 6, vi++)
        {
            triangles[ti] = vi;
            triangles[ti + 3] = triangles[ti + 2] = vi + 1;
            triangles[ti + 4] = triangles[ti + 1] = vi + xSize + 1;
            triangles[ti + 5] = vi + xSize + 2;
        }
    }
    mesh.triangles = triangles;
    mesh.RecalculateNormals();
}
I expect the mesh to be 1x1 in every case, no matter which xSize or ySize of the grid I use. Can anybody advise how to achieve that?
So my friend explained to me that, by default, meshes in Unity have a 65535 vertex limit, and you have to ask nicely if you want more (at 350x250 the grid already has 351 x 251 = 88101 vertices, well over that limit).
I had to add
mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
after
mesh.name = "Procedural Grid";
here is more..
Suddenly everything works as expected. Thank you all for the support.
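For reference, a minimal sketch of how that check can be made conditional, so small grids keep the cheaper 16-bit indices. The ProceduralGridSketch name and the Awake placement are illustrative only; the grid itself would be built exactly as in Generate() above.

using UnityEngine;
using UnityEngine.Rendering;

public class ProceduralGridSketch : MonoBehaviour
{
    public int xSize = 350;
    public int ySize = 250;
    private Mesh mesh;

    private void Awake()
    {
        GetComponent<MeshFilter>().mesh = mesh = new Mesh();
        mesh.name = "Procedural Grid";

        int vertexCount = (xSize + 1) * (ySize + 1);
        // The default 16-bit index buffer can only address 65535 vertices;
        // switch to 32-bit indices for larger grids before assigning triangles.
        if (vertexCount > 65535)
            mesh.indexFormat = IndexFormat.UInt32;

        // ... build vertices, uv and triangles exactly as in Generate() above ...
    }
}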
I have arrays of coordinates x and y
x = new int[18];
y = new int[15];
x[0] = -404;
y[0] = -226;
for (int i = 1; i < 18; i++)
    x[i] = x[i - 1] + 30;
for (int i = 1; i < 15; i++)
    y[i] = y[i - 1] + 30;
I pick random coordinates from the arrays, but they're incorrect when I run the program; mostly the numbers fall outside the values in the arrays. I can't understand why. Maybe I'm setting the position incorrectly?
int xCor = x[(int)Random.Range(0, x.Length - 1)];
int yCor = y[(int)Random.Range(0, y.Length - 1)];
transform.position = new Vector2(xCor, yCor);
I need to set up new coordinates, e.g. x = 24, y = 50.
The apple is outside the green area:
Use the RectTransform.anchoredPosition property instead of transform.position, like this:
RectTransform rectTransform = GetComponent<RectTransform>();
rectTransform.anchoredPosition = new Vector2(xCor, yCor);
You are using a GameObject under a Canvas, which has a RectTransform rather than a normal Transform component, so its position should be set through the RectTransform.
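For completeness, a minimal sketch of how the random pick and the anchored position fit together. The class name is mine, not from the post; also note that the int overload of Random.Range excludes its upper bound, so passing Length already covers the whole array.

using UnityEngine;

public class AppleSpawnSketch : MonoBehaviour
{
    private int[] x;
    private int[] y;

    private void Start()
    {
        x = new int[18];
        y = new int[15];
        x[0] = -404;
        y[0] = -226;
        for (int i = 1; i < x.Length; i++) x[i] = x[i - 1] + 30;
        for (int i = 1; i < y.Length; i++) y[i] = y[i - 1] + 30;

        // Random.Range with int arguments excludes the upper bound,
        // so Length (not Length - 1) includes the last element.
        int xCor = x[Random.Range(0, x.Length)];
        int yCor = y[Random.Range(0, y.Length)];

        // Position the UI element relative to its anchors instead of in world space.
        RectTransform rectTransform = GetComponent<RectTransform>();
        rectTransform.anchoredPosition = new Vector2(xCor, yCor);
    }
}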
I have a problem with frame rate drops while trying to make real-time 3D terrain changes. I use C#, XNA 4.0 and VS 2010. This is an old school project of mine and the time has come to finish it.
I already did terrain generation from an image file, with all the effects and such, and it runs smoothly no matter what resolution the image file is. The problem is with my terrain editor: I want to be able to alter the terrain manually. I did that part too, but it only works if the terrain size is 128x128 pixels or less. Above that I start to get frame rate drops at around 150x150 pixels, and it becomes completely unmanageable if the terrain size is greater than 512x512 pixels.
I already tried several approaches:
I tried to use threads, but then I get a weird error saying something like "the Draw method can only be called from one thread at a time", and I can't resolve that.
Next I tried to use DynamicVertexBuffer and DynamicIndexBuffer. That helped a lot, and now my code runs at an acceptable frame rate for terrain sizes of up to 256x256 pixels.
Have a look at my code:
public void ChangeTerrain(float[,] heightData)
{
    int x, y;
    int v = 1;
    if (currentMouseState.LeftButton == ButtonState.Pressed && currentMouseState.X < 512)
    {
        x = (int)currentMouseState.X / 2;
        y = (int)currentMouseState.Y / 2;
        if (x < 5)
            x = 5;
        if (x >= 251)
            x = 251;
        if (y < 5)
            y = 5;
        if (y >= 251)
            y = 251;
        for (int i = x - 4; i < x + 4; i++)
        {
            for (int j = y - 4; j < y + 4; j++)
            {
                if (i == x - 4 || i == x + 3 || j == y - 4 || j == y + 3)
                    v = 3;
                else
                    v = 5;
                if (heightData[i, j] < 210)
                {
                    heightData[i, j] += v;
                }
            }
        }
    }
    if (currentMouseState.RightButton == ButtonState.Pressed && currentMouseState.X < 512)
    {
        x = (int)currentMouseState.X / 2;
        y = (int)currentMouseState.Y / 2;
        if (x < 5)
            x = 5;
        if (x >= 251)
            x = 251;
        if (y < 5)
            y = 5;
        if (y >= 251)
            y = 251;
        for (int i = x - 4; i < x + 4; i++)
        {
            for (int j = y - 4; j < y + 4; j++)
            {
                if (heightData[i, j] > 0)
                {
                    heightData[i, j] -= 1;
                }
            }
        }
    }
    if (keyState.IsKeyDown(Keys.R))
    {
        for (int i = 0; i < 256; i++)
            for (int j = 0; j < 256; j++)
                heightData[i, j] = 0f;
    }
    SetUpTerrainVertices();
    CalculateNormals();
    terrainVertexBuffer.SetData(vertices, 0, vertices.Length);
}
I work with a resolution of 1024x512 pixels, so I scale the mouse position by 1/2 to get the terrain position. I use the left and right mouse buttons to alter the terrain, i.e. to alter heightData, from which the 3D terrain is generated.
The last three lines create the vertices from the new heightData, calculate the normals so shading can be applied, and then push the vertex data into the vertex buffer.
Prior to that, I set up the dynamic vertex and index buffers in the LoadContent method and do the initial vertex and index setup. This method (ChangeTerrain) is called from the Update method.
I did some debugging and found out that the maximum number of vertices in the most extreme case would be around 260000, plus or minus a few thousand. Is it possible that .SetData is so time consuming that it causes the frame rate drops? Or is it something else? How can I fix this and make my editor work normally for any terrain size?
Also, I read that I need to use this code with DynamicVertexBuffer, but I can't make it work in XNA 4.0.
terrainVertexBuffer.ContentLost += new EventHandler(TerrainVertexBufferContentLost);
public void TerrainVertexBufferContentLost()
{
    terrainVertexBuffer.SetData(vertices, 0, vertices.Length, SetDataOptions.NoOverwrite);
}
Thanks for your help!
EDIT:
This is my SetUpTerrainVertices code:
private void SetUpTerrainVertices()
{
    vertices = new VertexPositionNormalColored[terrainWidth * terrainLength];
    for (int x = 0; x < terrainWidth; x++)
    {
        for (int y = 0; y < terrainLength; y++)
        {
            vertices[x + y * terrainWidth].Position = new Vector3(x, heightData[x, y], -y);
            vertices[x + y * terrainWidth].Color = Color.Gray;
        }
    }
}
And my CalculateNormals
private void CalculateNormals()
{
    for (int i = 0; i < vertices.Length; i++)
        vertices[i].Normal = new Vector3(0, 0, 0);

    for (int i = 0; i < indices.Length / 3; i++)
    {
        int index1 = indices[i * 3];
        int index2 = indices[i * 3 + 1];
        int index3 = indices[i * 3 + 2];

        Vector3 side1 = vertices[index1].Position - vertices[index3].Position;
        Vector3 side2 = vertices[index1].Position - vertices[index2].Position;
        Vector3 normal = Vector3.Cross(side1, side2);

        vertices[index1].Normal += normal;
        vertices[index2].Normal += normal;
        vertices[index3].Normal += normal;
    }

    for (int i = 0; i < vertices.Length; i++)
        vertices[i].Normal.Normalize();
}
I set up the vertex and index buffers in the XNA LoadContent method using these lines:
terrainVertexBuffer = new DynamicVertexBuffer(device, VertexPositionNormalColored.VertexDeclaration, vertices.Length,
BufferUsage.None);
terrainIndexBuffer = new DynamicIndexBuffer(device, typeof(int), indices.Length, BufferUsage.None);
I call the ChangeTerrain method from Update, and this is how I draw:
private void DrawTerrain(Matrix currentViewMatrix)
{
    device.DepthStencilState = DepthStencilState.Default;
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
    effect.CurrentTechnique = effect.Techniques["Colored"];

    Matrix worldMatrix = Matrix.Identity;
    effect.Parameters["xWorld"].SetValue(worldMatrix);
    effect.Parameters["xView"].SetValue(currentViewMatrix);
    effect.Parameters["xProjection"].SetValue(projectionMatrix);
    effect.Parameters["xEnableLighting"].SetValue(true);

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        device.Indices = terrainIndexBuffer;
        device.SetVertexBuffer(terrainVertexBuffer);
        device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, indices.Length / 3);
    }
}
EDIT2:
Ok, I decided to go for your second suggestion and ran into some problems. I modified my methods like this:
public void ChangeTerrain(Texture2D heightmap)
{
    Color[] mapColors = new Color[256 * 256];
    Color[] originalColors = new Color[256 * 256];
    for (int i = 0; i < 256 * 256; i++)
        originalColors[i] = new Color(0, 0, 0);
    heightMap2.GetData(mapColors);
    device.Textures[0] = null;
    device.Textures[1] = null;

    int x, y;
    int v = 1;
    if (currentMouseState.LeftButton == ButtonState.Pressed && currentMouseState.X < 512)
    {
        x = (int)currentMouseState.X / 2;
        y = (int)currentMouseState.Y / 2;
        if (x < 4)
            x = 4;
        if (x >= 251)
            x = 251;
        if (y < 4)
            y = 4;
        if (y >= 251)
            y = 251;
        for (int i = x - 4; i < x + 4; i++)
        {
            for (int j = y - 4; j < y + 4; j++)
            {
                if (i == x - 4 || i == x + 3 || j == y - 4 || j == y + 3)
                    v = 3;
                else
                    v = 5;
                if (mapColors[i + j * 256].R < 210)
                {
                    mapColors[i + j * 256].R += (byte)(v);
                    mapColors[i + j * 256].G += (byte)(v);
                    mapColors[i + j * 256].B += (byte)(v);
                }
                heightMap2.SetData(mapColors);
            }
        }
    }
    if (currentMouseState.RightButton == ButtonState.Pressed && currentMouseState.X < 512)
    {
        x = (int)currentMouseState.X / 2;
        y = (int)currentMouseState.Y / 2;
        if (x < 4)
            x = 4;
        if (x >= 251)
            x = 251;
        if (y < 4)
            y = 4;
        if (y >= 251)
            y = 251;
        for (int i = x - 4; i < x + 4; i++)
        {
            for (int j = y - 4; j < y + 4; j++)
            {
                if (mapColors[i + j * 256].R > 0)
                {
                    mapColors[i + j * 256].R -= 1;
                    mapColors[i + j * 256].G -= 1;
                    mapColors[i + j * 256].B -= 1;
                }
                heightMap2.SetData(mapColors);
            }
        }
    }
    if (keyState.IsKeyDown(Keys.R))
        heightMap2.SetData(originalColors);
}
Generating the flat surface happens only once, in the LoadContent() method; the vertices are assigned only once:
private void SetUpTerrainVertices()
{
    for (int x = 0; x < terrainWidth; x++)
    {
        for (int y = 0; y < terrainLength; y++)
        {
            vertices[x + y * terrainWidth].Position = new Vector3(x, 0, -y);
            vertices[x + y * terrainLength].Color = Color.Gray;
        }
    }
}
The Draw method is the same as before, but with one extra line:
effect.Parameters["xTexture0"].SetValue(heightMap2);
Also, I made a new technique called Editor, and it looks like this:
//------- Technique: Editor --------
struct EditorVertexToPixel
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
    float LightingFactor : TEXCOORD0;
    float2 TextureColor : TEXCOORD1;
};

struct EditorPixelToFrame
{
    float4 Color : COLOR0;
};

EditorVertexToPixel EditorVS(float4 inPos : POSITION, float4 inColor : COLOR, float3 inNormal : NORMAL, float2 inTextureColor : TEXCOORD1)
{
    EditorVertexToPixel Output = (EditorVertexToPixel)0;
    float4x4 preViewProjection = mul(xView, xProjection);
    float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

    float4 Height;
    float4 position2 = inPos;
    position2.y += Height;

    Output.Color = inColor;
    Output.Position = mul(position2, preWorldViewProjection);
    Output.TextureColor = inTextureColor;

    float3 Normal = normalize(mul(normalize(inNormal), xWorld));
    Output.LightingFactor = 1;
    if (xEnableLighting)
        Output.LightingFactor = saturate(dot(Normal, -xLightDirection));

    return Output;
}

EditorPixelToFrame EditorPS(EditorVertexToPixel PSIn)
{
    EditorPixelToFrame Output = (EditorPixelToFrame)0;

    //float4 height2 = tex2D(HeightSAmpler, PSIn.TextureColor);
    float4 colorNEW = float4(0.1f, 0.1f, 0.6f, 1);
    Output.Color = PSIn.Color * colorNEW;
    Output.Color.rgb *= saturate(PSIn.LightingFactor) + xAmbient;

    return Output;
}

technique Editor
{
    pass Pass0
    {
        VertexShader = compile vs_3_0 EditorVS();
        PixelShader = compile ps_3_0 EditorPS();
    }
}
This code doesn't work because float4 Height is never set. What I wanted to do is sample the texture color into float4 Height (using Sample), but I cannot use a sampler in the vertex shader; I get the error message "X4532 cannot map expression to vertex shader instruction set".
Then I read that you can use SampleLevel in the vertex shader to sample color data and thought I had found the solution, but I get a strange error that is only documented on one Russian blog, and I can't read Russian. The error is: "X4814 unexpected Alias on texture declaration".
Is there a way to sample colors in the pixel shader and then pass them to the vertex shader?
This could work, because I managed to set float4 Height to various values and it altered the vertex heights. The problem is, I don't know how to read a texture color in the vertex shader, or how to pass texture color data read in the pixel shader back to the vertex shader.
EDIT3:
I think I found a solution. I was searching the net and found out about the tex2Dlod function to use as a vertex shader texture sampler, but different syntaxes are shown in different places and I can't make any of them work.
Can anyone point me to good HLSL literature to learn a bit about HLSL coding? This task seems pretty easy, but somehow I can't make it work.
Ok, so I can't offer "real" performance advice - because I haven't measured your code. And measuring is probably the most important part of performance optimisation - you need to be able to answer the questions: "am I slower than my performance target?" and "why am I slower than my target?"
That being said - here are the things that stand out to me as a seasoned developer:
This method (ChangeTerrain) is called from Update method
You should probably consider splitting that method up so that, rather than recreating your data every frame, it only does work when the terrain is actually changed.
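For example (sketch only; the terrainDirty field is my own and not in your code), you could guard the rebuild with a flag that is only set when heightData actually changes:

// Sketch: skip the expensive rebuild on frames where nothing was edited.
private bool terrainDirty;

public void ChangeTerrain(float[,] heightData)
{
    terrainDirty = false;

    // ... existing mouse/keyboard handling; set terrainDirty = true
    //     wherever heightData is actually modified ...

    if (!terrainDirty)
        return; // nothing changed this frame, so no rebuild and no SetData

    SetUpTerrainVertices();
    CalculateNormals();
    terrainVertexBuffer.SetData(vertices, 0, vertices.Length);
}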
vertices = new VertexPositionNormalColored[terrainWidth * terrainLength];
Allocating a new vertices buffer each frame is a huge memory allocation (6MB at 512x512). This is going to put a big strain on the garbage collector - and I suspect this is the primary cause of your performance issues.
Given that you're about to set all the data in that array anyway, simply delete that line and the old data in the array will be overwritten.
Better yet, you could leave the data that doesn't change as-is and only modify the vertices that actually changed, in much the same way as you are already doing for heightData.
As part of this, it would be a very good idea to modify CalculateNormals so that, rather than having to rely on the index buffer and going through every triangle, it could calculate the indices of surrounding vertices (that form triangles) for any specific vertex - something you can do because vertices is ordered. Again, kind of like what you're doing for heightData.
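A rough sketch of what that region-only update could look like, reusing the vertices, heightData and terrainWidth fields from your code (the method name and the min/max parameters are my own):

// Sketch only (not the original code): refresh just the edited brush region
// instead of rebuilding the whole vertex array.
private void UpdateVerticesInRegion(int minX, int minY, int maxX, int maxY)
{
    for (int x = minX; x <= maxX; x++)
    {
        for (int y = minY; y <= maxY; y++)
        {
            // Same layout as SetUpTerrainVertices, but only for the touched cells.
            vertices[x + y * terrainWidth].Position = new Vector3(x, heightData[x, y], -y);
        }
    }
    // Normals could be re-accumulated the same way, over the region plus a one-vertex
    // border, since each vertex only depends on the triangles of its neighbouring cells.
}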
terrainVertexBuffer.SetData(vertices, 0, vertices.Length);
This is sending the full 6MB buffer to the GPU. There are versions of SetData that only send a subset of the full buffer to the GPU. You should probably try and use these.
Just remember that each SetData call comes with some overhead, so don't get too granular. It's probably best to have just one call per vertex buffer, even if that means some unmodified parts of the buffer must be sent.
This is probably the only place where "chunking" your terrain would have a significant impact, as it would allow you to specify a tighter region for each SetData call - allowing you to send less unmodified data. (I'll leave figuring out why this is the case as an exercise.)
(You're already using DynamicVertexBuffer, which is good, because this means the GPU will automatically handle the pipeline issues of having its buffer changed on-the-fly.)
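For illustration, a sketch of a sub-range upload. Because the vertex array is laid out row by row (index = x + y * terrainWidth), the edited brush region lives in one contiguous run of rows; 'y' below is the brush centre used in ChangeTerrain above, and the rest of the names are illustrative.

// Sketch only: upload just the rows of the vertex array touched by the brush.
int firstRow = System.Math.Max(y - 4, 0);
int lastRow = System.Math.Min(y + 3, terrainLength - 1);
int startIndex = firstRow * terrainWidth;
int elementCount = (lastRow - firstRow + 1) * terrainWidth;
int vertexStride = VertexPositionNormalColored.VertexDeclaration.VertexStride;

// DynamicVertexBuffer overload: (offsetInBytes, data, startIndex, elementCount, vertexStride, options).
// SetDataOptions.None is the functionally safe choice for a partial update.
terrainVertexBuffer.SetData(startIndex * vertexStride, vertices, startIndex, elementCount, vertexStride, SetDataOptions.None);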
Finally, if performance is still an issue, you could consider a different approach entirely.
One example might be to offload the calculation of the geometry to the GPU. You'd convert your heightData to a texture, and use a vertex shader (with a flat grid of vertices as input) to sample that texture and output the appropriate positions and normals.
One big advantage of this approach is that heightData can be a lot smaller (0.25MB at 512x512) than your vertex buffer - that's much less data that the CPU needs to process and send to the GPU.
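As a sketch of the CPU side of that idea (the xTexture0 parameter name comes from your EDIT2 code; everything else here is illustrative, and vertex texture fetch requires the HiDef profile):

// Sketch only: send heightData to the GPU as a single-channel float texture so a
// vertex shader can displace a flat grid.
Texture2D heightTexture = new Texture2D(device, terrainWidth, terrainLength, false, SurfaceFormat.Single);

float[] heightPixels = new float[terrainWidth * terrainLength];
for (int y = 0; y < terrainLength; y++)
    for (int x = 0; x < terrainWidth; x++)
        heightPixels[x + y * terrainWidth] = heightData[x, y];

heightTexture.SetData(heightPixels);
effect.Parameters["xTexture0"].SetValue(heightTexture);
// In the shader, tex2Dlod(HeightSampler, float4(uv, 0, 0)).r then gives the height
// for each grid vertex.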