Apply texture to a quad mesh from a texture atlas - C#

I'm trying to dynamically apply a texture from a texture atlas to a quad mesh in Unity3D.
When I do the same on a cube mesh, the front face works fine but the other faces get distorted. So I had the idea to use a simple quad instead, and now I'm facing this scenario:
The image should be displayed like this:
I'm placing the texture with the code below. The math is working fine:
public int offsetX = 0;
public int offsetY = 0;
private const float offset = 0.0625f; // the size of one atlas tile in UV space (1/16)

void Start()
{
    Mesh mesh = GetComponent<MeshFilter>().mesh;
    Vector2[] UVs = new Vector2[mesh.vertices.Length];
    UVs[0] = new Vector2(offsetX * offset, offsetY * offset);                       // bottom left of the tile
    UVs[1] = new Vector2((offsetX * offset) + offset, offsetY * offset);            // bottom right
    UVs[2] = new Vector2(offsetX * offset, (offsetY * offset) + offset);            // top left
    UVs[3] = new Vector2((offsetX * offset) + offset, (offsetY * offset) + offset); // top right
    mesh.uv = UVs;
}
What should I do to place the texture on the quad mesh as in the reference image?

For those looking for an answer:
I've fixed it by changing the tiling (offset) and scale of the material instead. Example:
using UnityEngine;

public class Cube : MonoBehaviour
{
    public int offsetX = 0;
    public int offsetY = 0;

    private Renderer _rend;
    private Material _material;
    private const float Offset = 0.0625f; // one atlas tile = 1/16 of the texture

    // Use this for initialization
    private void Start()
    {
        _rend = GetComponent<Renderer>();
        _material = _rend.material;
        _material.mainTextureScale = new Vector2(Offset, Offset);
    }

    private void Update()
    {
        _material.mainTextureOffset = new Vector2(offsetX * Offset, offsetY * Offset);
    }
}
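Note that accessing renderer.material, as in Start() above, instantiates a per-renderer copy of the material. If many quads share one atlas material, a MaterialPropertyBlock avoids those copies; a minimal sketch, assuming a standard shader where the _MainTex_ST vector holds tiling in xy and offset in zw:

// Sketch: select an atlas tile per renderer without duplicating the material.
var block = new MaterialPropertyBlock();
_rend.GetPropertyBlock(block);
block.SetVector("_MainTex_ST", new Vector4(Offset, Offset, offsetX * Offset, offsetY * Offset));
_rend.SetPropertyBlock(block);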


Why isn't my perspective transform working?

I am building a test 3D renderer in WinForms using the objects in System.Numerics such as Vector3 and Matrix4x4.
The object drawn is a point cloud, centered around (0,0,0) and rotated about the origin. Each node renders as a dot on the screen. Here is what the 3D shape should look like:
(image: "Fake Perspective" — the expected shape)
More specifically, when viewed from the front the perspective should be obvious: the blue dots that are further from the eye should appear at a smaller distance from the center.
(image: "Fake Perspective" — the expected front view)
The pipeline is roughly as follows:

1. Rotation transformation
   Matrix4x4 RY = Matrix4x4.CreateRotationY(ry);
2. Perspective transformation (fov=90, aspect=1.0f, near=1f, far=100f)
   Matrix4x4 P = Matrix4x4.CreatePerspectiveFieldOfView(fov.Radians(), 1.0f, 1f, 100f);
3. Camera transformation
   Matrix4x4 C = RY * P;
   var node = Vector3.Transform(face.Nodes[i], C);
4. Project to 2D
   Vector2 point = new Vector2(node.X, node.Y);
5. View transformation
   Matrix3x2 S = Matrix3x2.CreateScale(height / scale, -height / scale);
   Matrix3x2 T = Matrix3x2.CreateTranslation(width / 2f, height / 2f);
   Matrix3x2 V = S * T;
   point = Vector2.Transform(point, V);
6. Pixel coordinates & render
   PointF pixel = new PointF(point.X, point.Y);
   e.Graphics.FillEllipse(brush, pixel.X - 2, pixel.Y - 2, 4, 4);
What I am seeing is an orthographic projection.
(image: "Program Output")
The blue nodes further away are not smaller as expected. Somehow the perspective transformation is being ignored.
So my question is: is my usage of Matrix4x4.CreatePerspectiveFieldOfView() correct in step #2? And is the projection from 3D to 2D in step #4 correct?
Steps #1, #5 and #6 seem to be working exactly as intended; my issue is somewhere in steps #2-#4.
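One thing worth checking: System.Numerics does not apply the perspective divide for you. Vector3.Transform silently drops the W component that CreatePerspectiveFieldOfView produces, and Vector4.Transform returns W without dividing by it, so X and Y come out unscaled by depth, which looks orthographic. A minimal sketch of steps #3-#4 with an explicit divide (assuming the nodes transform to a Vector4):

// Sketch: keep the homogeneous W from the projection and divide by it
// before dropping to 2D — this is what produces the foreshortening.
Vector4 node = Vector4.Transform(face.Nodes[i], C);
Vector2 point = new Vector2(node.X / node.W, node.Y / node.W); // perspective divide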
Example code to reproduce the issue
Form1.cs
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    public Shape Object { get; set; }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        this.Object = Shape.DemoShape1();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        float width = ClientSize.Width, height = ClientSize.Height;
        float scale = 40f, fov = 90f;
        Matrix4x4 RY = Matrix4x4.CreateRotationY(ry);
        Matrix4x4 RX = Matrix4x4.CreateRotationX(rx);
        Matrix4x4 P = Matrix4x4.CreatePerspectiveFieldOfView(fov.Radians(), 1.0f, 1f, 100f);
        Matrix4x4 C = RY * RX * P;
        Matrix3x2 S = Matrix3x2.CreateScale(
            height / scale, -height / scale);
        Matrix3x2 T = Matrix3x2.CreateTranslation(
            width / 2f, height / 2f);
        Matrix3x2 V = S * T;
        using (var pen = new Pen(Color.Black, 0))
        {
            var arrow = new AdjustableArrowCap(4f, 9.0f);
            pen.CustomEndCap = arrow;
            using (var brush = new SolidBrush(Color.Black))
            {
                // Draw coordinate triad (omitted)
                // Each face has multiple nodes with the same color
                foreach (var face in Object.Faces)
                {
                    brush.Color = face.Color;
                    PointF[] points = new PointF[face.Nodes.Count];
                    for (int i = 0; i < points.Length; i++)
                    {
                        // transform nodes into draw points
                        var item = Vector4.Transform(face.Nodes[i], C);
                        var point = Vector2.Transform(item.Project(), V);
                        points[i] = point.ToPoint();
                    }
                    // Draw points as dots
                    e.Graphics.SmoothingMode = SmoothingMode.HighQuality;
                    for (int i = 0; i < points.Length; i++)
                    {
                        e.Graphics.FillEllipse(brush,
                            points[i].X - 2, points[i].Y - 2,
                            4, 4);
                    }
                }
            }
        }
    }
}
GraphicsExtensions.cs
public static class GraphicsExtensions
{
    public static PointF ToPoint(this Vector2 vector)
        => new PointF(vector.X, vector.Y);

    public static Vector2 Project(this Vector3 vector)
        => new Vector2(vector.X, vector.Y);

    public static Vector2 Project(this Vector4 vector)
        => new Vector2(vector.X, vector.Y);

    public static float Radians(this float degrees) => (float)(Math.PI / 180) * degrees;
    public static float Degrees(this float radians) => (float)(180 / Math.PI) * radians;
}
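Both Project overloads above simply drop components; in particular the Vector4 overload discards W without dividing by it, which is where the perspective scale lives. A hedged variant that applies the divide (a sketch — ProjectPerspective is not part of the original code):

// Sketch: apply the perspective divide before dropping to 2D.
public static Vector2 ProjectPerspective(this Vector4 vector)
    => new Vector2(vector.X / vector.W, vector.Y / vector.W);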

Why does the array I pass to my multithreading job struct act as a reference type?

I'm working on a Unity project involving deformable terrain based on marching cubes. It works by generating a density map over the 3-dimensional coordinates of a terrain chunk and using that data to create a mesh representing the surface of the terrain. It has been working; however, the process is very slow. I'm attempting to introduce multithreading to improve performance, but I've run into a problem that's left me scratching my head.
When I run CreateMeshData() and try to pass my density map terrainMap into the MarchCubeJob struct, it is treated as a reference type, not a value type. I seem to have whittled the errors down to this one, but I've tried to introduce the data in every way I know how and I'm stumped. I thought passing it like this was supposed to create a copy of the data disconnected from the original, but my understanding must be flawed. My goal is to pass each marching-cubes cube into a job and have the jobs run concurrently.
I'm brand new to multithreading, so I've probably made some newbie mistakes here, and I'd appreciate it if someone would help me out with a second look. Cheers!
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Unity.Jobs;
using Unity.Collections;
using Unity.Burst;

public class Chunk
{
    List<Vector3> vertices = new List<Vector3>();
    List<int> triangles = new List<int>();

    public GameObject chunkObject;
    MeshFilter meshFilter;
    MeshCollider meshCollider;
    MeshRenderer meshRenderer;
    Vector3Int chunkPosition;

    public float[,,] terrainMap;

    // Job system
    NativeList<Vector3> marchVerts;
    NativeList<int> marchTris;
    MarchCubeJob instanceMarchCube;
    JobHandle instanceJobHandle;

    int width { get { return Terrain_Data.chunkWidth; } }
    int height { get { return Terrain_Data.chunkHeight; } }
    static float terrainSurface { get { return Terrain_Data.terrainSurface; } }

    public Chunk(Vector3Int _position) // Constructor
    {
        chunkObject = new GameObject();
        chunkObject.name = string.Format("Chunk x{0}, y{1}, z{2}", _position.x, _position.y, _position.z);
        chunkPosition = _position;
        chunkObject.transform.position = chunkPosition;
        meshRenderer = chunkObject.AddComponent<MeshRenderer>();
        meshFilter = chunkObject.AddComponent<MeshFilter>();
        meshCollider = chunkObject.AddComponent<MeshCollider>();
        chunkObject.transform.tag = "Terrain";
        terrainMap = new float[width + 1, height + 1, width + 1]; // Weight of each point
        meshRenderer.material = Resources.Load<Material>("Materials/Terrain");

        // Generate chunk
        PopulateTerrainMap();
        CreateMeshData();
    }

    void PopulateTerrainMap()
    {
        ...
    }

    void CreateMeshData()
    {
        ClearMeshData();
        vertices = new List<Vector3>();
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                for (int z = 0; z < width; z++) {
                    Debug.Log(x + ", " + y + ", " + z + ", begin");
                    Vector3Int position = new Vector3Int(x, y, z);

                    // Set up native containers
                    NativeList<Vector3> marchVerts = new NativeList<Vector3>(Allocator.TempJob);
                    NativeList<int> marchTris = new NativeList<int>(Allocator.TempJob);
                    NativeList<float> mapSample = new NativeList<float>(Allocator.TempJob);

                    // Split marching cubes into jobs, one per cube
                    instanceMarchCube = new MarchCubeJob() {
                        position = position,
                        marchVerts = marchVerts,
                        marchTris = marchTris,
                        mapSample = terrainMap
                    };

                    // Run job for each cube in a chunk
                    instanceJobHandle = instanceMarchCube.Schedule();
                    instanceJobHandle.Complete();

                    // Copy data from job to mesh data
                    //instanceMarchCube.marchVerts.CopyTo(vertices);
                    vertices.AddRange(marchVerts);
                    triangles.AddRange(marchTris);

                    // Dispose of native containers
                    marchVerts.Dispose();
                    marchTris.Dispose();
                    mapSample.Dispose();
                    Debug.Log(x + ", " + y + ", " + z + ", end");
                }
            }
        }
        BuildMesh();
    }

    public void PlaceTerrain(Vector3 pos, int radius, float speed)
    {
        ...
        CreateMeshData();
    }

    public void RemoveTerrain(Vector3 pos, int radius, float speed)
    {
        ...
        CreateMeshData();
    }

    void ClearMeshData()
    {
        vertices.Clear();
        triangles.Clear();
    }

    void BuildMesh()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = vertices.ToArray();
        mesh.triangles = triangles.ToArray();
        mesh.RecalculateNormals();
        meshFilter.mesh = mesh;
        meshCollider.sharedMesh = mesh;
    }

    private void OnDestroy()
    {
        marchVerts.Dispose();
        marchTris.Dispose();
    }
}
// Build a cube as a job
[BurstCompile]
public struct MarchCubeJob : IJob
{
    static float terrainSurface { get { return Terrain_Data.terrainSurface; } }

    public Vector3Int position;
    public NativeList<Vector3> marchVerts;
    public NativeList<int> marchTris;
    public float[,,] mapSample;

    public void Execute()
    {
        // Sample terrain values at each corner of the cube
        float[] cube = new float[8];
        for (int i = 0; i < 8; i++) {
            cube[i] = SampleTerrain(position + Terrain_Data.CornerTable[i]);
        }

        int configIndex = GetCubeConfiguration(cube);

        // Configurations 0 and 255 are entirely outside/inside the surface: nothing to build
        if (configIndex == 0 || configIndex == 255) {
            return;
        }

        int edgeIndex = 0;
        for (int i = 0; i < 5; i++) { // Triangles
            for (int p = 0; p < 3; p++) { // Tri vertices
                int indice = Terrain_Data.TriangleTable[configIndex, edgeIndex];
                if (indice == -1) { // -1 means there are no more vertices
                    return;
                }

                // Get the 2 points of the edge
                Vector3 vert1 = position + Terrain_Data.CornerTable[Terrain_Data.EdgeIndexes[indice, 0]];
                Vector3 vert2 = position + Terrain_Data.CornerTable[Terrain_Data.EdgeIndexes[indice, 1]];
                Vector3 vertPosition;

                // Smooth terrain:
                // Sample terrain values at either end of the current edge
                float vert1Sample = cube[Terrain_Data.EdgeIndexes[indice, 0]];
                float vert2Sample = cube[Terrain_Data.EdgeIndexes[indice, 1]];

                // Calculate the difference between the terrain values
                float difference = vert2Sample - vert1Sample;
                if (difference == 0) {
                    difference = terrainSurface;
                } else {
                    difference = (terrainSurface - vert1Sample) / difference;
                }

                // Interpolate along the edge
                vertPosition = vert1 + ((vert2 - vert1) * difference);
                marchVerts.Add(vertPosition);
                marchTris.Add(marchVerts.Length - 1);
                edgeIndex++;
            }
        }
    }

    static int GetCubeConfiguration(float[] cube)
    {
        int configurationIndex = 0;
        for (int i = 0; i < 8; i++) {
            if (cube[i] > terrainSurface) {
                configurationIndex |= 1 << i;
            }
        }
        return configurationIndex;
    }

    public float SampleTerrain(Vector3Int point)
    {
        return mapSample[point.x, point.y, point.z];
    }
}
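For anyone with the same confusion: C# arrays, including float[,,], are reference types, so copying the job struct copies only the reference, and managed arrays aren't allowed inside Burst-compiled jobs in any case. A minimal sketch of flattening the map into a NativeArray<float> instead (dimensions taken from the chunk code above; the flat indexing scheme is an assumption for illustration):

// Sketch: copy the managed 3D array into an unmanaged, flat NativeArray
// so the job receives value-type data it is allowed to hold.
int sx = width + 1, sy = height + 1, sz = width + 1;
NativeArray<float> mapSample = new NativeArray<float>(sx * sy * sz, Allocator.TempJob);
for (int x = 0; x < sx; x++)
    for (int y = 0; y < sy; y++)
        for (int z = 0; z < sz; z++)
            mapSample[(x * sy + y) * sz + z] = terrainMap[x, y, z];

// Inside the job, SampleTerrain would then index the flat array the same way:
// return mapSample[(point.x * sy + point.y) * sz + point.z];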

Using Perlin Noise across multiple Unity Terrain objects

I have a class project in which we are supposed to use Unity's Terrain objects to create a 3x3 grid of smoothly generated terrain. For this we have been told to create a central Terrain that has adjacent terrains in the 8 surrounding directions. I have gotten the Perlin noise to work with this method:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TerrainNoiseGeneration : MonoBehaviour
{
    private TerrainData myTerrainData;
    public Vector3 worldSize;
    public int resolution = 129;
    private float userInput = (float)4.2;
    public float offsetX;
    public float offsetZ;

    // Start is called before the first frame update
    void Start()
    {
        myTerrainData = gameObject.GetComponent<TerrainCollider>().terrainData;
        worldSize = new Vector3(100, 50, 100);
        myTerrainData.size = worldSize;
        myTerrainData.heightmapResolution = resolution;
        float[,] heightArray = new float[resolution, resolution];
        heightArray = PerlinNoise(userInput, offsetX, offsetZ);
        myTerrainData.SetHeights(0, 0, heightArray);
    }

    // Update is called once per frame
    void Update()
    {
        float[,] heightArray = new float[resolution, resolution];
        heightArray = PerlinNoise(userInput, offsetX, offsetZ);
        myTerrainData.SetHeights(0, 0, heightArray);
    }

    float[,] PerlinNoise(float userInput, float offsetX, float offsetZ)
    {
        float[,] heights = new float[resolution, resolution];
        for (int z = 0; z < resolution; z++)
        {
            for (int x = 0; x < resolution; x++)
            {
                float nx = (x + offsetX) / resolution * userInput;
                float ny = (z + offsetZ) / resolution * userInput;
                heights[z, x] = Mathf.PerlinNoise(nx, ny);
            }
        }
        return heights;
    }
}
This code lets me generate a smooth terrain on the first Terrain object, but when I enter offset values so that the edges line up, the edge heights do not match.
I would appreciate any assistance with this issue, as I have tried a lot of different solutions, none of which worked.
Update: I was able to solve the problem with a rather simple fix: I needed to use my resolution as the offset, not the distance between the terrains.
I needed to set offsetX and offsetZ to their respective positions in heightmap-resolution space instead of their Unity world positions.
For example, my terrains are 100x100, so I was setting the offset to 100 or -100 depending on the terrain's location, but instead I needed to use 128 or -128 to keep it in line with the resolution.
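In other words, each neighbor steps the sample window by resolution - 1 samples (128 for a 129-sample heightmap), since adjacent terrains share their edge row of samples. A minimal sketch, where gridX and gridZ are hypothetical grid coordinates of the terrain in the 3x3 layout:

// Sketch: offsets stepped in heightmap samples, not world units,
// so neighboring terrains sample one continuous noise field.
int step = resolution - 1; // 128 for resolution = 129
offsetX = gridX * step;
offsetZ = gridZ * step;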

How to access raw data from RenderTexture in Unity

Short Version of Problem
I am trying to access the contents of a RenderTexture in Unity which I have been drawing to with my own Material using Graphics.Blit.
Graphics.Blit (null, renderTexture, material);
My material successfully converts a YUV image to RGB, which I have tested by assigning it to the texture of a UI element. The result is the correct RGB image visible on the screen.
However, I also need the raw data for a QR code scanner. I am doing this like I would access it from a camera, as explained here. In a comment there, it was mentioned that the extraction is also possible from a RenderTexture that was filled with Graphics.Blit. But when I try that, my texture only contains the value 205 everywhere. This is the code I am using in the Update function, directly after the Graphics.Blit call:
RenderTexture.active = renderTexture;
texture.ReadPixels (new Rect (0, 0, width, height), 0, 0);
texture.Apply ();
RenderTexture.active = null;
When assigning this texture to the same UI element, it is gray and slightly transparent. When inspecting the image values, they are all 205.
Why is that? Might there be a problem with the formats of the RenderTexture and the Texture2D I am trying to fill?
Complete Code
In the following is the whole code I am using. The variable names differ slightly from the ones I used above, but they do essentially the same thing:
/**
 * This class continuously converts the y and uv textures in
 * YUV color space to an RGB texture, which can be used somewhere else
 */
public class YUV2RGBConverter : MonoBehaviour
{
    public Material yuv2rgbMat;

    // Input textures, set these when they are available
    [HideInInspector]
    public Texture2D yTex;
    [HideInInspector]
    public Texture2D uvTex;

    // Output, the converted textures
    [HideInInspector]
    public RenderTexture rgbRenderTex;
    [HideInInspector]
    public Texture2D rgbTex;
    [HideInInspector]
    public Color32[] rawRgbData;

    /// Describes how often per second the image should be transferred to the CPU
    public float GPUTransferRate = 1.0f;
    private float timeSinceLastGPUTransfer = 0.0f;

    private int width;
    private int height;

    /**
     * Initializes the used textures
     */
    void Start()
    {
        updateSize(width, height);
    }

    /**
     * Creates the textures needed by this class, depending on the given sizes
     */
    public void updateSize(int width, int height)
    {
        // Generate the input textures
        yTex = new Texture2D(width / 4, height, TextureFormat.RGBA32, false);
        uvTex = new Texture2D((width / 2) * 2 / 4, height / 2, TextureFormat.RGBA32, false);

        // Generate the output texture
        rgbRenderTex = new RenderTexture(width, height, 0);
        rgbRenderTex.antiAliasing = 0;
        rgbTex = new Texture2D(width, height, TextureFormat.RGBA32, false);

        // Set to shader
        yuv2rgbMat.SetFloat("_TexWidth", width);
        yuv2rgbMat.SetFloat("_TexHeight", height);
    }

    /**
     * Sets the y and uv textures to some dummy data
     */
    public void fillYUWithDummyData()
    {
        // Set the y tex everywhere to the fractional part of the current time
        float colorValue = (float)Time.time - (float)((int)Time.time);
        for (int y = 0; y < yTex.height; y++) {
            for (int x = 0; x < yTex.width; x++) {
                Color yColor = new Color(colorValue, colorValue, colorValue, colorValue);
                yTex.SetPixel(x, y, yColor);
            }
        }
        yTex.Apply();

        // Set the uv tex colors
        for (int y = 0; y < uvTex.height; y++) {
            for (int x = 0; x < uvTex.width; x++) {
                int firstXCoord = 2 * x;
                int secondXCoord = 2 * x + 1;
                int yCoord = y;
                float firstXRatio = (float)firstXCoord / (2.0f * (float)uvTex.width);
                float secondXRatio = (float)secondXCoord / (2.0f * (float)uvTex.width);
                float yRatio = (float)y / (float)uvTex.height;
                Color uvColor = new Color(firstXRatio, yRatio, secondXRatio, yRatio);
                uvTex.SetPixel(x, y, uvColor);
            }
        }
        uvTex.Apply();
    }

    /**
     * Continuously converts the y and uv textures to the rgb texture with the custom yuv2rgb shader
     */
    void Update()
    {
        // Draw to it with the yuv2rgb shader
        yuv2rgbMat.SetTexture("_YTex", yTex);
        yuv2rgbMat.SetTexture("_UTex", uvTex);
        Graphics.Blit(null, rgbRenderTex, yuv2rgbMat);

        // Only scan once per second
        if (timeSinceLastGPUTransfer > 1 / GPUTransferRate) {
            timeSinceLastGPUTransfer = 0;

            // Fetch its pixels and set them to the rgb texture
            RenderTexture.active = rgbRenderTex;
            rgbTex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
            rgbTex.Apply();
            RenderTexture.active = null;
            rawRgbData = rgbTex.GetPixels32();
        } else {
            timeSinceLastGPUTransfer += Time.deltaTime;
        }
    }
}
OK, sorry that I have to answer my own question. The solution is very easy:
The width and height fields that I was using in this line:
rgbTex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
were never initialized, so they were 0.
I just had to add these lines to the updateSize function:
this.width = width;
this.height = height;
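As an alternative sketch (not from the original answer), the region size could also be read from the RenderTexture itself, which removes the need for the separate fields:

// Read the region size directly from the render texture being copied.
rgbTex.ReadPixels(new Rect(0, 0, rgbRenderTex.width, rgbRenderTex.height), 0, 0);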

Tiled Map Editor and draw order in Unity 3D

I have a similar problem to this stackoverflow question, but in my case I'm using Unity3D + Tiled Map Editor + Tiled2Unity.
I'm loading my map into Unity3D with the Tiled2Unity program, and for the player I can easily change the Order in Layer parameter with:
Renderer renderer = GetComponent<Renderer>();
renderer.sortingOrder = -(int)(transform.position.y * 100);
Object "map" can only change the parameter Order In Layer for the individual layers.
For example: floor = 0, wall = 1, collision = 2. I have no idea how to get to a single "tile" the map and change its Order In Layer because of where it is located. To map was drawn from top to bottom (The lower the Order in Layer increased).
The script hooked the object "map":
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using UnityEngine;

namespace Tiled2Unity
{
    public class TiledMap : MonoBehaviour
    {
        public int NumTilesWide = 0;
        public int NumTilesHigh = 0;
        public int TileWidth = 0;
        public int TileHeight = 0;
        public float ExportScale = 1.0f;

        // Note: Because maps can be isometric and staggered we simply can't multiply tile width (or height) by number of tiles wide (or high) to get width (or height)
        // We rely on the exporter to calculate the width and height of the map
        public int MapWidthInPixels = 0;
        public int MapHeightInPixels = 0;

        public float GetMapWidthInPixelsScaled()
        {
            return this.MapWidthInPixels * this.transform.lossyScale.x * this.ExportScale;
        }

        public float GetMapHeightInPixelsScaled()
        {
            return this.MapHeightInPixels * this.transform.lossyScale.y * this.ExportScale;
        }

        private void OnDrawGizmosSelected()
        {
            Vector2 pos_w = this.gameObject.transform.position;
            Vector2 topLeft = Vector2.zero + pos_w;
            Vector2 topRight = new Vector2(GetMapWidthInPixelsScaled(), 0) + pos_w;
            Vector2 bottomRight = new Vector2(GetMapWidthInPixelsScaled(), -GetMapHeightInPixelsScaled()) + pos_w;
            Vector2 bottomLeft = new Vector2(0, -GetMapHeightInPixelsScaled()) + pos_w;

            Gizmos.color = Color.red;
            Gizmos.DrawLine(topLeft, topRight);
            Gizmos.DrawLine(topRight, bottomRight);
            Gizmos.DrawLine(bottomRight, bottomLeft);
            Gizmos.DrawLine(bottomLeft, topLeft);
        }
    }
}
To better understand (because my level of English is poor):
(images: mesh.png, map.png)
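For what it's worth, if the exported map ends up with one Renderer per tile (or per small mesh piece), the same Y-based trick used for the player could be applied to each of them; a hypothetical sketch, not specific to Tiled2Unity's actual export structure:

// Sketch: sort every child renderer of the map by its Y position,
// mirroring the sortingOrder formula used for the player above.
foreach (Renderer r in GetComponentsInChildren<Renderer>())
{
    r.sortingOrder = -(int)(r.transform.position.y * 100);
}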
