Why would frames drop considerably the longer a Unity Game runs? - c#

I am using Unity 2020.3.4f1 to create a 2D game for mobile.
I created a map builder for the game.
When I hit Play, it generates a bunch of 'maps' that are saved as JSON files. The longer the game runs, the more the frame rate degrades (roughly 200 fps down to 2 fps), based on the Stats overlay while the game is running.
The strange thing is that if I go to the "Scene" tab and left-click on a sprite, the fps instantly jumps back up again.
Screenshots
The problem seems related to taking screenshots within Unity.
The big bulge appears and only resets when I un-pause the game.
Questions
Why would the frame rates drop considerably over time the longer the game is running?
Why would the frame rate jump back up after selecting a sprite in the "Scene" tab?
What happens in Unity when selecting a sprite in the "Scene" tab? Is there a garbage collection method?
Script:
private void Awake()
{
    myCamera = gameObject.GetComponent<Camera>();
    instance = this;
    width = 500;
    height = 500;
}

private void OnPostRender()
{
    if (takeScreenShotOnNextFrame)
    {
        takeScreenShotOnNextFrame = false;
        RenderTexture renderTexture = myCamera.targetTexture;
        Texture2D renderResult = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
        Rect rect = new Rect(0, 0, renderTexture.width, renderTexture.height);
        renderResult.ReadPixels(rect, 0, 0);

        float myValue = 0;
        float totalPixels = renderResult.width * renderResult.height;
        for (int i = 0; i < renderResult.width; i++)
        {
            for (int j = 0; j < renderResult.height; j++)
            {
                Color myColor = renderResult.GetPixel(i, j);
                myValue += myColor.r;
                //Debug.Log("Pixel (" + i + "," + j + "): " + myColor.r);
            }
        }
        occlusion = ((myValue / totalPixels) * 100);

        byte[] byteArray = renderResult.EncodeToPNG();
        System.IO.File.WriteAllBytes(Application.dataPath + "/Resources/ScreenShots/CameraScreenshot.png", byteArray);

        // Cleanup
        RenderTexture.ReleaseTemporary(renderTexture);
        myCamera.targetTexture = null;
        renderResult = null;
    }
}

private void TakeScreenshot(int screenWidth, int screenHeight)
{
    width = screenWidth;
    height = screenHeight;
    if (myCamera.targetTexture != null)
    {
        RenderTexture.ReleaseTemporary(myCamera.targetTexture);
        //Debug.Log("Camera target texture null: " + myCamera.targetTexture == null);
    }
    myCamera.targetTexture = RenderTexture.GetTemporary(width, height, 16);
    takeScreenShotOnNextFrame = true;
}

public static void TakeScreenshot_Static(int screenWidth, int screenHeight)
{
    instance.TakeScreenshot(screenWidth, screenHeight);
}
}
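Not part of the original post, but a hedged sketch of the kind of cleanup that is likely missing above: the Texture2D allocated on every capture is only dereferenced (renderResult = null), never destroyed, so textures can pile up over time. Assuming that is the cause, the end of the capture branch could look roughly like this (same fields as the script above):

// Sketch only: read all pixels in one call, then actually free the temporary texture.
Color[] pixels = renderResult.GetPixels();
float myValue = 0f;
for (int i = 0; i < pixels.Length; i++)
    myValue += pixels[i].r;
occlusion = (myValue / pixels.Length) * 100f;

System.IO.File.WriteAllBytes(
    Application.dataPath + "/Resources/ScreenShots/CameraScreenshot.png",
    renderResult.EncodeToPNG());

RenderTexture.ReleaseTemporary(renderTexture);
myCamera.targetTexture = null;
Destroy(renderResult); // frees the texture memory instead of leaving it to a stray reference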

Related

Why does the array I pass to my multithreading job struct act as a reference type?

I'm working on a Unity project involving deformable terrain based on marching cubes. It works by generating a map of density values over the 3-dimensional coordinates of a terrain chunk and using that data to create a mesh representing the surface of the terrain. It has been working; however, the process is very slow. I'm attempting to introduce multithreading to improve performance, but I've run into a problem that's left me scratching my head.
When I run CreateMeshData() and try to pass my density map terrainMap into the MarchCubeJob struct, it is treated as a reference type, not a value type. I seem to have whittled the errors down to this one, but I've tried to introduce the data in every way I know how and I'm stumped. I thought passing it like this was supposed to create a copy of the data disconnected from the reference, but my understanding must be flawed. My goal is to pass each marching-cubes cube into a job and have them run concurrently.
I'm brand new to multithreading, so I've probably made some newbie mistakes here, and I'd appreciate it if someone would help me out with a second look. Cheers!
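Burst-compiled jobs can only contain blittable fields and native containers, so a managed float[,,] cannot be copied into the job struct the way a plain float would be. A hedged sketch of one common workaround is to flatten the density map into a NativeArray<float> before scheduling; the helper below and its names are illustrative assumptions, not part of the project:

using Unity.Collections;

// Illustrative helper: copy a managed float[,,] density map into a flat
// NativeArray<float> so a job can take it as a value-style container.
public static class DensityMapUtil
{
    public static NativeArray<float> Flatten(float[,,] map, Allocator allocator)
    {
        int sizeX = map.GetLength(0);
        int sizeY = map.GetLength(1);
        int sizeZ = map.GetLength(2);
        var flat = new NativeArray<float>(sizeX * sizeY * sizeZ, allocator);
        for (int x = 0; x < sizeX; x++)
            for (int y = 0; y < sizeY; y++)
                for (int z = 0; z < sizeZ; z++)
                    flat[(x * sizeY + y) * sizeZ + z] = map[x, y, z];
        return flat;
    }

    // Index helper matching the layout above, for use inside the job.
    public static int Index(int x, int y, int z, int sizeY, int sizeZ)
    {
        return (x * sizeY + y) * sizeZ + z;
    }
}

Inside the job, the field would then be a [ReadOnly] NativeArray<float> sampled with the same index math, and disposed once the handle has completed.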
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Unity.Jobs;
using Unity.Collections;
using Unity.Burst;
public class Chunk
{
List<Vector3> vertices = new List<Vector3>();
List<int> triangles = new List<int>();
public GameObject chunkObject;
MeshFilter meshFilter;
MeshCollider meshCollider;
MeshRenderer meshRenderer;
Vector3Int chunkPosition;
public float[,,] terrainMap;
// Job system
NativeList<Vector3> marchVerts;
NativeList<Vector3> marchTris;
MarchCubeJob instanceMarchCube;
JobHandle instanceJobHandle;
int width { get { return Terrain_Data.chunkWidth;}}
int height { get { return Terrain_Data.chunkHeight;}}
static float terrainSurface { get { return Terrain_Data.terrainSurface;}}
public Chunk (Vector3Int _position){ // Constructor
chunkObject = new GameObject();
chunkObject.name = string.Format("Chunk x{0}, y{1}, z{2}", _position.x, _position.y, _position.z);
chunkPosition = _position;
chunkObject.transform.position = chunkPosition;
meshRenderer = chunkObject.AddComponent<MeshRenderer>();
meshFilter = chunkObject.AddComponent<MeshFilter>();
meshCollider = chunkObject.AddComponent<MeshCollider>();
chunkObject.transform.tag = "Terrain";
terrainMap = new float[width + 1, height + 1, width + 1]; // Weight of each point
meshRenderer.material = Resources.Load<Material>("Materials/Terrain");
// Generate chunk
PopulateTerrainMap();
CreateMeshData();
}
void PopulateTerrainMap(){
...
}
void CreateMeshData(){
ClearMeshData();
vertices = new List<Vector3>();
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
for (int z = 0; z < width; z++) {
Debug.Log(x + ", " + y + ", " + z + ", begin");
Vector3Int position = new Vector3Int(x, y, z);
// Set up memory pointers
NativeList<Vector3> marchVerts = new NativeList<Vector3>(Allocator.TempJob);
NativeList<int> marchTris = new NativeList<int>(Allocator.TempJob);
NativeList<float> mapSample = new NativeList<float>(Allocator.TempJob);
// Split marchcube into jobs by cube
instanceMarchCube = new MarchCubeJob(){
position = position,
marchVerts = marchVerts,
marchTris = marchTris,
mapSample = terrainMap
};
// Run job for each cube in a chunk
instanceJobHandle = instanceMarchCube.Schedule();
instanceJobHandle.Complete();
// Copy data from job to mesh data
//instanceMarchCube.marchVerts.CopyTo(vertices);
vertices.AddRange(marchVerts);
triangles.AddRange(marchTris);
// Dispose of memory pointers
marchVerts.Dispose();
marchTris.Dispose();
mapSample.Dispose();
Debug.Log(x + ", " + y + ", " + z + ", end");
}
}
}
BuildMesh();
}
public void PlaceTerrain (Vector3 pos, int radius, float speed){
...
CreateMeshData();
}
public void RemoveTerrain (Vector3 pos, int radius, float speed){
...
CreateMeshData();
}
void ClearMeshData(){
vertices.Clear();
triangles.Clear();
}
void BuildMesh(){
Mesh mesh = new Mesh();
mesh.vertices = vertices.ToArray();
mesh.triangles = triangles.ToArray();
mesh.RecalculateNormals();
meshFilter.mesh = mesh;
meshCollider.sharedMesh = mesh;
}
private void OnDestroy(){
marchVerts.Dispose();
marchTris.Dispose();
}
}
// Build a cube as a job
[BurstCompile]
public struct MarchCubeJob: IJob{
static float terrainSurface { get { return Terrain_Data.terrainSurface;}}
public Vector3Int position;
public NativeList<Vector3> marchVerts;
public NativeList<int> marchTris;
public float[,,] mapSample;
public void Execute(){
//Sample terrain values at each corner of cube
float[] cube = new float[8];
for (int i = 0; i < 8; i++){
cube[i] = SampleTerrain(position + Terrain_Data.CornerTable[i]);
}
int configIndex = GetCubeConfiguration(cube);
// If done (-1 means there are no more vertices)
if (configIndex == 0 || configIndex == 255){
return;
}
int edgeIndex = 0;
for (int i = 0; i < 5; i++){ // Triangles
for (int p = 0; p < 3; p++){ // Tri Vertices
int indice = Terrain_Data.TriangleTable[configIndex, edgeIndex];
if (indice == -1){
return;
}
// Get 2 points of edge
Vector3 vert1 = position + Terrain_Data.CornerTable[Terrain_Data.EdgeIndexes[indice, 0]];
Vector3 vert2 = position + Terrain_Data.CornerTable[Terrain_Data.EdgeIndexes[indice, 1]];
Vector3 vertPosition;
// Smooth terrain
// Sample terrain values at either end of current edge
float vert1Sample = cube[Terrain_Data.EdgeIndexes[indice, 0]];
float vert2Sample = cube[Terrain_Data.EdgeIndexes[indice, 1]];
// Calculate difference between terrain values
float difference = vert2Sample - vert1Sample;
if (difference == 0){
difference = terrainSurface;
}
else{
difference = (terrainSurface - vert1Sample) / difference;
}
vertPosition = vert1 + ((vert2 - vert1) * difference);
marchVerts.Add(vertPosition);
marchTris.Add(marchVerts.Length - 1);
edgeIndex++;
}
}
}
static int GetCubeConfiguration(float[] cube){
int configurationIndex = 0;
for (int i = 0; i < 8; i++){
if (cube[i] > terrainSurface){
configurationIndex |= 1 << i;
}
}
return configurationIndex;
}
public float SampleTerrain(Vector3Int point){
return mapSample[point.x, point.y, point.z];
}
}

Blend 2 Textures Unity C#

How can I blend two textures into a new one?
I have a texture from the Android gallery and a logo PNG texture. I need to add this logo onto the texture from the gallery and store the result in a variable so it can be saved back to the gallery as a new image.
These shaders blend between two textures based on a 0-1 value that you control. The first version is extra-fast because it does not use lighting, and the second uses the same basic ambient + diffuse calculation that I used in my Simply Lit shader.
http://wiki.unity3d.com/index.php/Blend_2_Textures
Drag a different texture onto each of the material's variable slots, and use the Blend control to mix them to taste.
Take note that the lit version requires two passes on the GPU used in the oldest iOS devices.
ShaderLab - Blend 2 Textures.shader
Shader "Blend 2 Textures" {
Properties {
_Blend ("Blend", Range (0, 1) ) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
}
}
ShaderLab - Blend 2 Textures, Simply Lit.shader
Shader "Blend 2 Textures, Simply Lit" {
Properties {
_Color ("Color", Color) = (1,1,1)
_Blend ("Blend", Range (0,1)) = 0.5
_MainTex ("Texture 1", 2D) = ""
_Texture2 ("Texture 2", 2D) = ""
}
Category {
Material {
Ambient[_Color]
Diffuse[_Color]
}
// iPhone 3GS and later
SubShader {Pass {
Lighting On
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
SetTexture[_] {Combine previous * primary Double}
}}
// pre-3GS devices, including the September 2009 8GB iPod touch
SubShader {
Pass {
SetTexture[_MainTex]
SetTexture[_Texture2] {
ConstantColor (0,0,0, [_Blend])
Combine texture Lerp(constant) previous
}
}
Pass {
Lighting On
Blend DstColor SrcColor
}
}
}
}
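Not from the original answer, but a hedged usage sketch for the shader approach above, assuming a material created from the "Blend 2 Textures" shader; the _Blend property can also be driven from script:

using UnityEngine;

// Illustrative only: drive the blend factor of a "Blend 2 Textures" material from code.
public class BlendDriver : MonoBehaviour
{
    public Material blendMaterial;            // material using the shader above, assigned in the Inspector
    [Range(0f, 1f)] public float blend = 0.5f;

    void Update()
    {
        blendMaterial.SetFloat("_Blend", blend);
    }
}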
I had a similar task with a paint tool I was making. So here's my approach:
First, import or instantiate the logo and picture textures as Texture2D in order to use the Texture2D.GetPixel() and Texture2D.SetPixel() methods.
Assuming that the logo is smaller than the picture itself, store the logo pixels in a Color[] array:
Color[] logoPixels = logo.GetPixels();
We need to apply the logo on top of the picture, taking the alpha level of the logo image into account:
//The GetPixels method stores pixel colors in a 1D array
int i = 0; //logo pixel index
for (int y = 0; y < logo.height; y++) {
    for (int x = 0; x < logo.width; x++) {
        //Get the color of the picture pixel under the logo
        Color c = picture.GetPixel (logoPositionX + x, logoPositionY + y);
        //Lerp the pixel color by the logo's alpha value
        picture.SetPixel (logoPositionX + x, logoPositionY + y, Color.Lerp (c, logoPixels[i], logoPixels[i].a));
        i++;
    }
}
//Apply changes
picture.Apply();
So, if a pixel's alpha is 0, we leave it unchanged.
Get the bytes of the resulting image with picture.EncodeToPNG() and save them as a PNG in the regular way. And to use the GetPixel()/SetPixel() methods, make sure that both the logo and the picture it is applied to have Read/Write Enabled checked in their texture import settings!
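A short, hedged example of that last step (the output path is just an illustration):

// Encode the modified picture to PNG bytes and write them out.
byte[] png = picture.EncodeToPNG();
System.IO.File.WriteAllBytes(Application.persistentDataPath + "/blended.png", png);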
It's an old question but I have another solution:
public static Texture2D merge(params Texture2D[] textures) {
    if (textures == null || textures.Length == 0)
        return null;

    int oldQuality = QualitySettings.GetQualityLevel();
    QualitySettings.SetQualityLevel(5);

    RenderTexture renderTex = RenderTexture.GetTemporary(
        textures[0].width,
        textures[0].height,
        0,
        RenderTextureFormat.Default,
        RenderTextureReadWrite.Linear);

    Graphics.Blit(textures[0], renderTex);

    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = renderTex;

    GL.PushMatrix();
    GL.LoadPixelMatrix(0, textures[0].width, textures[0].height, 0);
    for (int i = 1; i < textures.Length; i++)
        Graphics.DrawTexture(new Rect(0, 0, textures[0].width, textures[0].height), textures[i]);
    GL.PopMatrix();

    Texture2D readableText = new Texture2D(textures[0].width, textures[0].height);
    readableText.ReadPixels(new Rect(0, 0, renderTex.width, renderTex.height), 0, 0);
    readableText.Apply();

    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(renderTex);
    QualitySettings.SetQualityLevel(oldQuality);
    return readableText;
}
And here is the use:
Texture2D coloredTex = ImageUtils.merge(tex,
sprites[0].texture,
sprites[1].texture,
sprites[2].texture,
sprites[3].texture);
Hope it helps
I made this solution; it works with two Texture2D objects in Unity.
public Texture2D ImageBlend(Texture2D Bottom, Texture2D Top)
{
    var bData = Bottom.GetPixels();
    var tData = Top.GetPixels();
    int count = bData.Length;
    var final = new Color[count];
    int i = 0;
    int iT = 0;
    int startPos = (Bottom.width / 2) - (Top.width / 2) - 1;
    int endPos = Bottom.width - startPos - 1;
    for (int y = 0; y < Bottom.height; y++)
    {
        for (int x = 0; x < Bottom.width; x++)
        {
            if (y > startPos && y < endPos && x > startPos && x < endPos)
            {
                Color B = bData[i];
                Color T = tData[iT];
                Color R = new Color((T.a * T.r) + ((1 - T.a) * B.r),
                                    (T.a * T.g) + ((1 - T.a) * B.g),
                                    (T.a * T.b) + ((1 - T.a) * B.b), 1.0f);
                final[i] = R;
                i++;
                iT++;
            }
            else
            {
                final[i] = bData[i];
                i++;
            }
        }
    }
    var res = new Texture2D(Bottom.width, Bottom.height);
    res.SetPixels(final);
    res.Apply();
    return res;
}

Unity Text Ui looks small on android device

I'm new to Unity and I've been developing a simple 2D game.
In the scoreboard scene I've managed to save scores and display them in a ScrollView. When I run it in Unity it works fine, but when I build and run it on my Android phone the ScrollView looks a bit bigger and the Text UI elements (added by script) look very small.
Here is the code that displays the scores in the Content game object of the ScrollView:
void Start () {
    if (PlayerPrefs.HasKey (0 + "HScore")) {
        float y = -30;
        for (int i = 0; i < 10; i++) {
            if (PlayerPrefs.GetInt (i + "HScore") == 0) {
                break;
            }
            GameObject textobj = new GameObject (i + "HScoreName", typeof(RectTransform));
            GameObject textobj2 = new GameObject (i + "HScore", typeof(RectTransform));
            Text name = textobj.AddComponent<Text> ();
            Text score = textobj2.AddComponent<Text> ();
            GameObject lineObj = new GameObject ("Line", typeof(RectTransform));
            Image l = lineObj.AddComponent<Image> ();
            l.color = Color.white;
            lineObj.transform.localScale = new Vector3 (500, 0.01f, 1);
            name.text = "#" + (i + 1) + "- " + PlayerPrefs.GetString (i + "HScoreName");
            score.text = PlayerPrefs.GetInt (i + "HScore").ToString ();
            name.color = Color.white;
            score.color = Color.white;
            name.alignment = TextAnchor.MiddleLeft;
            score.alignment = TextAnchor.MiddleLeft;
            name.horizontalOverflow = HorizontalWrapMode.Overflow;
            name.font = Resources.GetBuiltinResource<Font> ("Arial.ttf");
            score.font = Resources.GetBuiltinResource<Font> ("Arial.ttf");
            name.fontSize = 15;
            score.fontSize = 15;
            score.fontStyle = FontStyle.Bold;
            textobj.transform.position = content.transform.position + new Vector3 (70, y, 0);
            textobj.transform.SetParent (content.transform);
            textobj2.transform.position = content.transform.position + new Vector3 (180, y, 0);
            textobj2.transform.SetParent (content.transform);
            lineObj.transform.position = content.transform.position + new Vector3 (60, y - 25, 0);
            lineObj.transform.SetParent (content.transform);
            y = y - 50;
        }
    }
}
Is there anything missing in this script to keep the text fitting the screen?
You are using flat numbers; what you need is a percentage of Screen.width and Screen.height.
For example, if you are running on a phone with a resolution of 150x150 and you want the element 5% in from the edge, you write something like this:
transform.position = new Vector3((5f / 100f) * 150f, 0, 0); - that sets your object at 5 PERCENT from the bottom-left edge (note the float literals: 5 / 100 in integer math would be 0).
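A hedged sketch of that idea applied to the script above (assuming a screen-space overlay Canvas, where UI positions are measured in pixels; the percentages are example values):

// Position the rows as fractions of the screen instead of fixed pixel offsets.
float xName  = 0.10f * Screen.width;   // 10% of the screen width from the left
float xScore = 0.60f * Screen.width;   // 60% of the screen width from the left
textobj.transform.position  = content.transform.position + new Vector3 (xName,  y, 0);
textobj2.transform.position = content.transform.position + new Vector3 (xScore, y, 0);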
There is a component, Canvas Scaler, on the Canvas. Try changing UI Scale Mode to Scale With Screen Size - it must be on the Canvas. You can also play with the Match value.
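For reference, a hedged sketch of that Canvas Scaler setup done from script (the reference resolution is just an example value):

using UnityEngine;
using UnityEngine.UI;

// Illustrative only: make the Canvas Scaler scale the UI with the device resolution.
public class CanvasScaleSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1080, 1920); // example design resolution
        scaler.matchWidthOrHeight = 0.5f;                     // the "Match" slider
    }
}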

Eye detection using OpenCVSharp in Unity (fps issues)

I'm currently working on a project involving integrating OpenCVSharp into Unity, to allow eye tracking within a game environment. I've managed to get OpenCVSharp integrated into the Unity editor and currently have eye detection (not tracking) working within a game. It can find your eyes within a webcam image, then display where it has currently detected them on a texture, which I display within the scene.
However, it's causing a HUGE fps drop, mainly because every frame it converts the webcam texture into an IplImage so that OpenCV can handle it. It then has to convert it back to a Texture2D to be displayed within the scene, after it has done all the eye detection. So understandably it's too much for the CPU to handle. (As far as I can tell it's only using one core on my CPU.)
Is there a way to do all the eye detection without converting the texture to an IplImage? Or any other way to fix the fps drop? Some things that I've tried include:
Limiting the frames on which it updates. However, this just causes it to run smoothly, then stutter horribly on the frame where it has to update.
Looking at threading, but as far as I'm aware Unity doesn't allow it.
As far as I can tell it's only using one core on my CPU, which seems a bit silly. If there were a way to change this, it could fix the issue.
Trying different resolutions on the camera; however, the resolution at which the game can actually run smoothly is too small for the eyes to actually be detected, let alone tracked.
I've included the code below, or if you would prefer to look at it in a code editor, here is a link to the C# file. Any suggestions or help would be greatly appreciated!
For reference, I used code from here (eye detection using OpenCvSharp).
using UnityEngine;
using System.Collections;
using System;
using System.IO;
using OpenCvSharp;
//using System.Xml;
//using OpenCvSharp.Extensions;
//using System.Windows.Media;
//using System.Windows.Media.Imaging;
public class CaptureScript : MonoBehaviour
{
public GameObject planeObj;
public WebCamTexture webcamTexture; //Texture retrieved from the webcam
public Texture2D texImage; //Texture to apply to plane
public string deviceName;
private int devId = 1;
private int imWidth = 640; //camera width
private int imHeight = 360; //camera height
private string errorMsg = "No errors found!";
static IplImage matrix; //Ipl image of the converted webcam texture
CvColor[] colors = new CvColor[]
{
new CvColor(0,0,255),
new CvColor(0,128,255),
new CvColor(0,255,255),
new CvColor(0,255,0),
new CvColor(255,128,0),
new CvColor(255,255,0),
new CvColor(255,0,0),
new CvColor(255,0,255),
};
const double Scale = 1.25;
const double ScaleFactor = 2.5;
const int MinNeighbors = 2;
// Use this for initialization
void Start ()
{
//Webcam initialisation
WebCamDevice[] devices = WebCamTexture.devices;
Debug.Log ("num:" + devices.Length);
for (int i=0; i<devices.Length; i++) {
print (devices [i].name);
if (devices [i].name.CompareTo (deviceName) == 1) {
devId = i;
}
}
if (devId >= 0) {
planeObj = GameObject.Find ("Plane");
texImage = new Texture2D (imWidth, imHeight, TextureFormat.RGB24, false);
webcamTexture = new WebCamTexture (devices [devId].name, imWidth, imHeight, 30);
webcamTexture.Play ();
matrix = new IplImage (imWidth, imHeight, BitDepth.U8, 3);
}
}
void Update ()
{
if (devId >= 0)
{
//Convert webcam texture to iplimage
Texture2DtoIplImage();
/*DO IMAGE MANIPULATION HERE*/
//do eye detection on iplimage
EyeDetection();
/*END IMAGE MANIPULATION*/
if (webcamTexture.didUpdateThisFrame)
{
//convert iplimage to texture
IplImageToTexture2D();
}
}
else
{
Debug.Log ("Can't find camera!");
}
}
void EyeDetection()
{
using(IplImage smallImg = new IplImage(new CvSize(Cv.Round (imWidth/Scale), Cv.Round(imHeight/Scale)),BitDepth.U8, 1))
{
using(IplImage gray = new IplImage(matrix.Size, BitDepth.U8, 1))
{
Cv.CvtColor (matrix, gray, ColorConversion.BgrToGray);
Cv.Resize(gray, smallImg, Interpolation.Linear);
Cv.EqualizeHist(smallImg, smallImg);
}
using(CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (#"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using(CvMemStorage storage = new CvMemStorage())
{
storage.Clear ();
CvSeq<CvAvgComp> eyes = Cv.HaarDetectObjects(smallImg, cascade, storage, ScaleFactor, MinNeighbors, 0, new CvSize(30, 30));
for(int i = 0; i < eyes.Total; i++)
{
CvRect r = eyes[i].Value.Rect;
CvPoint center = new CvPoint{ X = Cv.Round ((r.X + r.Width * 0.5) * Scale), Y = Cv.Round((r.Y + r.Height * 0.5) * Scale) };
int radius = Cv.Round((r.Width + r.Height) * 0.25 * Scale);
matrix.Circle (center, radius, colors[i % 8], 3, LineType.AntiAlias, 0);
}
}
}
}
void OnGUI ()
{
GUI.Label (new Rect (200, 200, 100, 90), errorMsg);
}
void IplImageToTexture2D ()
{
int jBackwards = imHeight;
for (int i = 0; i < imHeight; i++) {
for (int j = 0; j < imWidth; j++) {
float b = (float)matrix [i, j].Val0;
float g = (float)matrix [i, j].Val1;
float r = (float)matrix [i, j].Val2;
Color color = new Color (r / 255.0f, g / 255.0f, b / 255.0f);
jBackwards = imHeight - i - 1; // notice it is jBackward and i
texImage.SetPixel (j, jBackwards, color);
}
}
texImage.Apply ();
planeObj.renderer.material.mainTexture = texImage;
}
void Texture2DtoIplImage ()
{
int jBackwards = imHeight;
for (int v=0; v<imHeight; ++v) {
for (int u=0; u<imWidth; ++u) {
CvScalar col = new CvScalar ();
col.Val0 = (double)webcamTexture.GetPixel (u, v).b * 255;
col.Val1 = (double)webcamTexture.GetPixel (u, v).g * 255;
col.Val2 = (double)webcamTexture.GetPixel (u, v).r * 255;
jBackwards = imHeight - v - 1;
matrix.Set2D (jBackwards, u, col);
//matrix [jBackwards, u] = col;
}
}
}
}
You can move these out of the per-frame update loop:
using(CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile (#"C:\Users\User\Documents\opencv\sources\data\haarcascades\haarcascade_eye.xml"))
using(CvMemStorage storage = new CvMemStorage())
No reason to be building the recognizer graph each frame.
Threading is the logical way to go moving forward if you want real speed-ups; Unity itself is not threaded, but you can fold in other threads if you're careful.
Do the texture -> IplImage conversion on the main thread, then trigger an event to fire off your thread.
The thread can do all the CV work, probably construct the Texture2D, and then push back to the main thread to render.
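Not part of the original answer: a hedged sketch of that hand-off pattern (DetectEyes is a stand-in for the OpenCVSharp work; everything that touches the Unity API stays on the main thread):

using System.Threading;
using UnityEngine;

// Illustrative sketch of a main-thread / worker-thread hand-off.
// The worker never touches Unity objects; it only sees the copied pixel buffer.
public class ThreadedDetection : MonoBehaviour
{
    public WebCamTexture webcamTexture;

    Color32[] pixelBuffer;
    volatile bool frameReady;    // main -> worker: new pixels are available
    volatile bool resultReady;   // worker -> main: detection finished
    bool running = true;
    Thread worker;

    void Start()
    {
        pixelBuffer = new Color32[webcamTexture.width * webcamTexture.height];
        worker = new Thread(WorkerLoop) { IsBackground = true };
        worker.Start();
    }

    void Update()
    {
        if (webcamTexture.didUpdateThisFrame && !frameReady)
        {
            webcamTexture.GetPixels32(pixelBuffer);   // Unity API call, main thread only
            frameReady = true;
        }
        if (resultReady)
        {
            resultReady = false;
            // Apply the detection results to textures / game objects here (main thread).
        }
    }

    void WorkerLoop()
    {
        while (running)
        {
            if (frameReady)
            {
                DetectEyes(pixelBuffer);               // placeholder for the OpenCV work
                frameReady = false;
                resultReady = true;
            }
            Thread.Sleep(1);
        }
    }

    void DetectEyes(Color32[] pixels)
    {
        // Hypothetical: convert the buffer to an IplImage/Mat and run the cascade here.
    }

    void OnDestroy()
    {
        running = false;
    }
}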
You should also be able to gain some performance improvements if you use:
Color32[] pixels;
pixels = new Color32[webcamTexture.width * webcamTexture.height];
webcamTexture.GetPixels32(pixels);
The Unity doco suggests that this can be quite a bit faster than calling "GetPixels" (and certainly faster than calling GetPixel for each pixel), and then you don't need to scale each RGB channel against 255 manually.
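A hedged sketch of how the question's Texture2DtoIplImage() might use that buffer instead of three GetPixel calls per pixel (field names follow the script above; the CvScalar channel order mirrors the original code):

void Texture2DtoIplImage ()
{
    // Grab the whole frame in one call.
    Color32[] pixels = webcamTexture.GetPixels32 ();
    for (int v = 0; v < imHeight; v++) {
        int jBackwards = imHeight - v - 1;
        for (int u = 0; u < imWidth; u++) {
            Color32 c = pixels [v * imWidth + u];
            matrix.Set2D (jBackwards, u, new CvScalar (c.b, c.g, c.r));
        }
    }
}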

XNA Isometric tile collision

I'm a C#/XNA student and I've recently been working on an isometric tile engine, which so far works fairly well. But I'm having problems trying to figure out how to do collision. This is what my tile engine does at the moment:
It draws the world from an image and places a tile depending on what color is in my image. For instance, the color red would draw a grass tile. (Tiles are 64x32.)
The camera follows the player, and my draw loop only draws what the camera sees.
This is how my game looks, if that is of any help:
I don't know what sort of collision would work best. Should I do collision points, or intersects, or any other sort of collision? I've read somewhere that you could do WorldToScreen/ScreenToWorld, but I'm far too inexperienced and don't know how that works nor what the code would look like.
Here is my code for drawing the tiles, etc.:
class MapRow
{
public List<MapCell> Columns = new List<MapCell>();
}
class TileMap
{
public List<MapRow> Rows = new List<MapRow>();
public static Texture2D image;
Texture2D tileset;
TileInfo[,] tileMap;
Color[] pixelColor;
public TileMap(string TextureImage, string Tileset)
{
tileset = Game1.Instance.Content.Load<Texture2D>(Tileset);
image = Game1.Instance.Content.Load<Texture2D>(TextureImage);
pixelColor = new Color[image.Width * image.Height]; // pixelColor array that is holding all pixel in the image
image.GetData<Color>(pixelColor); // Save all the pixels in image to the array pixelColor
tileMap = new TileInfo[image.Height, image.Width];
int counter = 0;
for (int y = 0; y < image.Height; y++)
{
MapRow thisRow = new MapRow();
for (int x = 0; x < image.Width; x++)
{
tileMap[y, x] = new TileInfo();
if (pixelColor[counter] == new Color(0, 166, 81))
{
tileMap[y, x].cellValue = 1;//grass
}
if (pixelColor[counter] == new Color(0, 74, 128))
{
tileMap[y, x].cellValue = 2;//water
}
if (pixelColor[counter] == new Color(255, 255, 0))
{
tileMap[y, x].cellValue = 3;//Sand
}
tileMap[y, x].LoadInfoFromCellValue();//determine what tile it should draw depending on cellvalue
thisRow.Columns.Add(new MapCell(tileMap[y, x]));
counter++;
}
Rows.Add(thisRow);
}
}
public static int printx;
public static int printy;
public static int squaresAcross = Settings.screen.X / Tile.TileWidth;
public static int squaresDown = Settings.screen.Y / Tile.TileHeight;
int baseOffsetX = -32;
int baseOffsetY = -64;
public void draw(SpriteBatch spriteBatch)
{
printx = (int)Camera.Location.X / Tile.TileWidth;
printy = (int)Camera.Location.Y / Tile.TileHeight;
squaresAcross = (int)Camera.Location.X / Tile.TileWidth + Settings.screen.X / Tile.TileWidth;
squaresDown = 2*(int)Camera.Location.Y / Tile.TileHeight + Settings.screen.Y / Tile.TileHeight + 7;
for (printy = (int)Camera.Location.Y / Tile.TileHeight; printy < squaresDown; printy++)
{
int rowOffset = 0;
if ((printy) % 2 == 1)
rowOffset = Tile.OddRowXOffset;
for (printx = (int)Camera.Location.X / Tile.TileWidth; printx < squaresAcross; printx++)
{
if (tileMap[printy, printx].Collides(MouseCursor.mousePosition))
Console.WriteLine(tileMap[printy, printx].tileRect);
foreach (TileInfo tileID in Rows[printy].Columns[printx].BaseTiles)
{
spriteBatch.Draw(
tileset,
tileMap[printy, printx].tileRect = new Rectangle(
(printx * Tile.TileStepX) + rowOffset + baseOffsetX,
(printy * Tile.TileStepY) + baseOffsetY,
Tile.TileWidth, Tile.TileHeight),
Tile.GetSourceRectangle(tileID.cellValue),
Color.White,
0.0f,
Vector2.Zero,
SpriteEffects.None,
tileID.drawDepth);
}
}
}
}
}
Why don't you just draw things as in normal tile-based games, and then rotate the camera by 45 degrees? Of course you'd then need to make your graphics a bit odd, but it would be easier to handle the tiles.
But if you prefer your way, then I'd suggest using simple math to calculate the "tile to the right", "tile to the left", "tile above" and "tile below" - that is, the tiles around the player (or around another tile). You can simply work with your lists, and basic math, like getting the next tile, is quite simple.
Edit:
You could get the tile value at the player's next position with code something like this:
tileMap[(int)Math.Floor((player.Y + playerVelocity.Y) / tileHeight),
        (int)Math.Floor((player.X + playerVelocity.X) / tileWidth)]
In this code, I assume that the first tile is at 0,0 and you're drawing to the right and down. (If not, then just change Math.Floor to Math.Ceiling.)
THIS link could help you get the idea, however it's in AS3.0, only the syntax is different.
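Not part of the original answer: a hedged sketch of turning that lookup into a walkability check before moving the player. It assumes the question's TileInfo[y, x] array and its cellValue convention (2 is water); everything else is illustrative.

// Sketch only: block movement into water tiles by checking the destination tile.
public bool CanMoveTo(Vector2 nextPosition)
{
    int tileX = (int)Math.Floor(nextPosition.X / Tile.TileWidth);
    int tileY = (int)Math.Floor(nextPosition.Y / Tile.TileHeight);

    // Outside the map counts as blocked.
    if (tileX < 0 || tileY < 0 ||
        tileY >= tileMap.GetLength(0) || tileX >= tileMap.GetLength(1))
        return false;

    return tileMap[tileY, tileX].cellValue != 2;   // 2 == water in the question's map
}

// Usage: only apply the velocity when the destination tile is walkable.
// if (CanMoveTo(player.Position + playerVelocity)) player.Position += playerVelocity;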
