Apply Textures to Only Certain Faces of a Cube in Unity - c#

I am trying to make realistic-looking procedurally generated buildings. So far I have made a texture generation program for them. I would like to apply that texture to a simple cube in Unity, but I don't want to apply it to all of the faces. Currently, when I apply the texture to the cube's material, it applies the same texture to every face, and on some faces the texture is upside down. Do you recommend that I make plane objects and apply the textures to each of those (forming a cube that way)? I know this would work, but is it efficient at a large scale? Or is there a way to apply different textures to individual faces of a cube in C#?

Considering that you are trying to create buildings, you should procedurally generate your own 'cube'/building mesh data.
For example:
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Vector3[] newVertices;
    public Vector2[] newUV;
    public int[] newTriangles;

    void Start() {
        Mesh mesh = new Mesh();
        mesh.vertices = newVertices;
        mesh.uv = newUV;
        mesh.triangles = newTriangles;
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
Then you populate the vertices and tris with data.
For example, you could create a small cube centered on the origin (2 x 2 x 2 units with size = 1) with:
int size = 1;
newVertices = new Vector3[]{
    new Vector3(-size, -size, -size),
    new Vector3(-size,  size, -size),
    new Vector3( size,  size, -size),
    new Vector3( size, -size, -size),
    new Vector3( size, -size,  size),
    new Vector3( size,  size,  size),
    new Vector3(-size,  size,  size),
    new Vector3(-size, -size,  size)
};
Then, because you only want to render the texture on one of the mesh's faces:
newUV = new Vector2[]{
    new Vector2(0, 0),
    new Vector2(0, 1),
    new Vector2(1, 1),
    new Vector2(1, 0),
    new Vector2(0, 0),
    new Vector2(0, 0),
    new Vector2(0, 0),
    new Vector2(0, 0)
};
Note that Mesh.uv needs exactly one entry per vertex, and only the first four vertices (the ones forming the front face) are given meaningful UV coordinates here. This effectively places your desired texture on one of the faces; depending on how you allocate the tris, the remaining UVs can be altered as you see fit so the other faces appear untextured.
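For example, a minimal sketch of the triangle indices for just that front face (the four vertices that were given real UV coordinates above), assuming Unity's clockwise winding order for outward-facing triangles:
newTriangles = new int[]{
    0, 1, 2,   // bottom-left, top-left, top-right
    0, 2, 3    // bottom-left, top-right, bottom-right
};
The other five faces would be appended to this array in the same way. If you later want a genuinely different texture or UV layout per face, the usual approach is to duplicate vertices so that each face owns its own four (24 vertices for a cube), because a shared vertex can only carry a single UV coordinate.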
Please note that I wrote this without an IDE, so there may be some syntax errors. Also, this isn't the most straightforward process, but I promise it far exceeds using a series of quads as a building, because with this you can make a whole range of shapes 'easily'.
Resources:
http://docs.unity3d.com/ScriptReference/Mesh.html
https://msdn.microsoft.com/en-us/library/windows/desktop/bb205592%28v=vs.85%29.aspx
http://docs.unity3d.com/Manual/GeneratingMeshGeometryProcedurally.html

AFAIK Unity does not natively support this.
There are, however, some easy workarounds. Your example of using a plane is a good thought, but using a quad would make more sense. Or, even easier, create a multi-faced cube as a model and export it as, for example, an .fbx.


Sprite.OverridePhysicsShape( ? );

Here is the documentation link:
https://docs.unity3d.com/ScriptReference/Sprite.OverridePhysicsShape.html
The IList part I understand, but I do not understand how I would populate the Vector2[] with data and use it here.
I haven't actually used this and I'm not in a position to knock up a quick test at the moment, but this might help. You should be able to define the physics boundaries for your sprite as a collection of shapes.
Imagine you have a single sprite with two separate components and a gap in the middle. Rather than defining a single physics shape around the entire thing, you might choose to define the physics shape in two parts so that it better conforms to the two parts of the image.
Therefore, the IList is your collection of shapes. This could quite likely just contain a single shape. The Vector2 array defines the points (at least three points in sprite space) for that individual shape. All of the Vector2 arrays combined form the overall physics shape.
EDIT: How to set the values? Something like this perhaps?
void Start ()
{
    SpriteRenderer spriteRenderer = GetComponent<SpriteRenderer>();
    Sprite sprite = spriteRenderer.sprite;
    sprite.OverridePhysicsShape(new List<Vector2[]> {
        new Vector2[] { new Vector2(0, 0), new Vector2(1, 0), new Vector2(1, 1), new Vector2(0, 1) },
        new Vector2[] { new Vector2(2, 2), new Vector2(3, 2), new Vector2(3, 3), new Vector2(2, 3) },
    });
}
DISCLAIMER: Not sure if that actually works.
I'm sorry, but the official Unity answer is "Don't bother." Calls to Sprite.OverridePhysicsShape() only work when importing (or reimporting) images and cannot work at runtime (or even in the editor). Please see this link about it:
https://issuetracker.unity3d.com/issues/sprite-dot-overridephysicsshape-raises-an-error-when-using-it-in-editor-mode
The previous answer is correct that this is a list of Vector2[] arrays, but the coordinates of the Vector2 points must all be between [0,0] and [1,1] because the coordinates are parametric to the bounds of the Sprite.
Here is some code that should work. It doesn't throw an error now (in v2020.3.1), but it also doesn't seem to do anything.
using System.Collections.Generic;
using UnityEngine;

public class SpritePhysicsShapeTest : MonoBehaviour {
    public Sprite sprite;

    // Start is called before the first frame update
    void Start() {
        List<Vector2[]> pts = new List<Vector2[]>();
        pts.Add( new Vector2[] {
            new Vector2( 0, 0 ),
            new Vector2( 0.5f, 0 ),
            new Vector2( 0.5f, 0.5f ),
            new Vector2( 0, 0.5f )
        } );
        sprite.OverridePhysicsShape(pts);
    }
}
If you're trying to programmatically affect collision on a Unity Tilemap, my current solution is to have one Tilemap for visuals (with 256 possible Sprites) and a second invisible Tilemap that handles collisions (with only 16 possible Sprites). This seems to work well, though I did have to manually draw those 16 Sprites' collisions in the Sprite Editor.

How can I stop my cameras outputting to the same render texture in Unity?

I have two renderer objects (A and B) in my scene connected to two different cameras (green square and red square):
I am using the following script on both render objects to create a render texture on the corresponding camera and then draw it as a texture on the object each frame:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class CameraRenderer : MonoBehaviour
{
    public Camera Camera;
    public Renderer Renderer;

    void Start()
    {
        RenderTexture renderTexture = new RenderTexture (256, 256, 16, RenderTextureFormat.ARGB32);
        renderTexture.Create ();
        Camera.targetTexture = renderTexture;
    }

    void Update ()
    {
        Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
    }

    Texture2D GetCameraTexture()
    {
        RenderTexture currentRenderTexture = RenderTexture.active;
        RenderTexture.active = Camera.targetTexture;
        Camera.Render();
        Texture2D texture = new Texture2D(Camera.targetTexture.width, Camera.targetTexture.height);
        texture.ReadPixels(new Rect(0, 0, Camera.targetTexture.width, Camera.targetTexture.height), 0, 0);
        texture.Apply();
        RenderTexture.active = currentRenderTexture;
        return texture;
    }
}
I am expecting to see two different images on A and B from the different cameras, but I am seeing the same image. I originally used a render texture that I created in the editor and attached to the camera, but thought that might be what was causing them to render the same thing, so I tried creating a new texture on each object. Sadly this still resulted in the same outcome.
I'm pretty new to Unity so I've run out of ideas pretty fast - any suggestions would be great!
I wouldn't advise naming your fields after their class names. Anyway, I think the renderers are sharing the same material, so they both end up showing whichever camera's texture was assigned last.
Either use Renderer.material to automatically create a new instance of the material, or manually assign different materials to the two renderers.
Try,
Renderer.material.mainTexture = GetCameraTexture ();
Instead of,
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
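If you want to keep using sharedMaterial, a minimal sketch of the other option (creating a separate material instance per renderer yourself; this assumes doing it in the same Start method shown above) would be:
void Start()
{
    // Give this renderer its own copy of the material so the two objects
    // stop sharing one material (and therefore one texture).
    Renderer.sharedMaterial = new Material(Renderer.sharedMaterial);

    RenderTexture renderTexture = new RenderTexture(256, 256, 16, RenderTextureFormat.ARGB32);
    renderTexture.Create();
    Camera.targetTexture = renderTexture;
}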

How do I resize sprites in a C# XNA game based on window size?

I'm making a game in C# and XNA 4.0. It uses multiple objects (such as a player character, enemies, platforms, etc.), each with their own texture and hitbox. The objects are created and drawn using code similar to the following:
class Object
{
    Texture2D m_texture;
    Rectangle m_hitbox;

    public Object(Texture2D texture, Vector2 position)
    {
        m_texture = texture;
        m_hitbox = new Rectangle((int)position.X, (int)position.Y, texture.Width, texture.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(m_texture, m_hitbox, Color.White);
    }
}
Everything works properly, but I also want to allow the player to resize the game window. The main game class uses the following code to do so:
private void Update(GameTime gameTime)
{
    if (playerChangedWindowSize == true)
    {
        graphics.PreferredBackBufferHeight = newHeight;
        graphics.PreferredBackBufferWidth = newWidth;
        graphics.ApplyChanges();
    }
}
This will inevitably cause the positions and hitboxes of the objects to become inaccurate whenever the window size is changed. Is there an easy way for me to change the positions and hitboxes based on a new window size? If the new window width was twice as big as it was before I could probably just double the width of every object's hitbox, but I'm sure that's a terrible way of doing it.
Consider normalizing your coordinate system to view space {0...1} and only applying the window-dimension scalar at the point of rendering.
View Space to Screen Space Conversion
Pseudo code for co-ordinates:
x' = x * screenResX
y' = y * screenResY
Similarly for dimensions. Let's say you have a 32x32 sprite originally designed for a 1920x1080 screen and you want it to keep the same logical size on screen (so it doesn't appear unnaturally small or large):
r = screenResX / 1920
width' = width * r
height' = height * r
Then it won't matter what resolution the user has set.
If you are concerned over performance this may impose, then you can perform the above at screen resolution change time for a one-off computation. However you should still always keep the original viewspace {0...1}.
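To make that concrete, here is a minimal C#/XNA sketch (the field names and the 1920x1080 design resolution are assumptions, not from the original code) that stores positions and sizes in normalized view space and scales them only when drawing:
// Stored in normalized view space, i.e. 0..1 of the screen.
Vector2 m_viewPosition;   // e.g. (0.5f, 0.5f) is the centre of the screen
Vector2 m_viewSize;       // e.g. (32f / 1920f, 32f / 1080f) for a 32x32 sprite designed for 1920x1080

public void Draw(SpriteBatch spriteBatch, int screenWidth, int screenHeight)
{
    // Convert to pixels only at the point of rendering.
    Rectangle destination = new Rectangle(
        (int)(m_viewPosition.X * screenWidth),
        (int)(m_viewPosition.Y * screenHeight),
        (int)(m_viewSize.X * screenWidth),
        (int)(m_viewSize.Y * screenHeight));
    spriteBatch.Draw(m_texture, destination, Color.White);
}
Because the stored values never change, resizing the window only changes the screenWidth/screenHeight you pass in (or compute once when the resolution changes).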
Collision Detection
It's arguably more efficient to perform collision detection (CD) on screen-space coordinates.
Hope this helps

3D graphics in WPF C# on Windows

I am new to 3D graphics and also to WPF and need to combine the two in my current project. I add points and normals to a MeshGeometry3D and add the MeshGeometry3D to a GeometryModel3D. Then I add the GeometryModel3D to a ModelVisual3D and finally add the ModelVisual3D to a Viewport3D. Now, if I need to rotate, I perform the required Transform on either the GeometryModel3D or the ModelVisual3D and add it again to the Viewport3D. I'm running into a problem:
objViewPort3D.Children.Remove(objModelVisual3D);
objGeometryModel3D.Transform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), angle += 15));
objModelVisual3D.Content = objGeometryModel3D;
objViewPort3D.Children.Add(objModelVisual3D);
To rotate it by 15 degrees every time, why must I do angle += 15 and not just 15? It seems that the stored model is not changed by the Transform operation; the transformation is only applied when the Viewport3D displays it. I want the transformation to actually change the coordinates in the stored MeshGeometry3D object, so that the next transform acts on the previously transformed model and not on the original model. How do I obtain this behaviour?
I think you can use Animation
some pseudo-code:
angle = 0
function onClick:
new_angle = angle + 30
Animate(angle, new_angle)
angle = new_angle
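A concrete WPF version of that pseudo-code might look like the sketch below; it assumes you keep a reference (here called axisRotation) to the AxisAngleRotation3D inside the model's RotateTransform3D:
// Animate the rotation angle to its new value instead of setting it directly.
double newAngle = axisRotation.Angle + 30;
DoubleAnimation anim = new DoubleAnimation(newAngle, TimeSpan.FromMilliseconds(300));
axisRotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, anim);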
You have to do angle += 15 because you're applying a new RotateTransform3D each time.
This might help:
public AxisAngleRotation3D MyAxisRotation { get; set; }
public RotateTransform3D MyRotationTransform { get; set; }
...
//constructor
public MyClass()
{
    MyAxisRotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
    MyRotationTransform = new RotateTransform3D(MyAxisRotation);
}
//in your method
MyAxisRotation.Angle += 15;
objGeometryModel3D.Transform = MyRotationTransform;
Correct, the position of the mesh is not transformed by the "Transform" operation. Instead the Transform property defines the world transform of the mesh during rendering.
In 3d graphics the world transform transforms the points of the mesh from object space to world space during the render of the object.
(Image from World, View and Projection Matrix Unveiled)
It's much faster to set the world transform and let the renderer draw the mesh with a single transform than to transform each vertex of the mesh yourself, as you are proposing.
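That said, if you really do want to bake the rotation into the stored geometry, a minimal sketch (assuming objGeometryModel3D.Geometry is the MeshGeometry3D you built) is to transform every stored point yourself:
var mesh = (MeshGeometry3D)objGeometryModel3D.Geometry;
var rotation = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 15));

// Replace each stored position with its rotated counterpart so the next
// rotation starts from the already-rotated mesh.
var newPositions = new Point3DCollection(mesh.Positions.Count);
foreach (Point3D p in mesh.Positions)
    newPositions.Add(rotation.Transform(p));
mesh.Positions = newPositions;
You would want to rotate the Normals the same way (Transform also accepts a Vector3D), and be aware that this per-vertex work is exactly what makes it slower than letting the renderer apply a world transform.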

Render to a texture2d XNA

I need to render a sprite into a Texture2D so that this texture can later be rendered on the screen. At the same time, I need access to the pixels of this modified texture: if I add, say, a sprite to the texture and then call a get-pixel function at a coordinate covered by the sprite, it should give me the new pixel values (the sprite blended with the Texture2D).
I am using XNA 4.0, not 3.5 or lower.
Thanks.
Basically, the equivalent of Graphics.FromImage(img).DrawImage(...) in GDI.
I tried this and it failed:
public static Texture2D DrawSomething(Texture2D old, int X, int Y, int radius) {
    var pp = Res.game.GraphicsDevice.PresentationParameters;
    var r = new RenderTarget2D(Res.game.GraphicsDevice, old.Width, old.Height, false, pp.BackBufferFormat, pp.DepthStencilFormat,
        pp.MultiSampleCount, RenderTargetUsage.DiscardContents);

    Res.game.GraphicsDevice.SetRenderTarget(r);
    var s = new SpriteBatch(r.GraphicsDevice);
    s.Begin();
    s.Draw(old, new Vector2(0, 0), Color.White);
    s.Draw(Res.picture, new Rectangle(X - radius / 2, Y - radius / 2, radius, radius), Color.White);
    s.End();
    Res.game.GraphicsDevice.SetRenderTarget(null);

    return r;
}
Res.game is basically a reference to the main game class and Res.picture is an arbitrary Texture2D.
Use a RenderTarget2D: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget2d.aspx
If possible, avoid creating a new render target every time. Create it outside of the method and reuse it for best performance.
Here is a sketch:
public Texture2D DrawOnTop(GraphicsDevice device, SpriteBatch spriteBatch,
                           RenderTarget2D target, Texture2D oldTexture, Texture2D picture)
{
    device.SetRenderTarget(target);
    spriteBatch.Begin();
    spriteBatch.Draw(oldTexture, Vector2.Zero, Color.White);
    spriteBatch.Draw(picture, Vector2.Zero, Color.White);
    spriteBatch.End();
    device.SetRenderTarget(null);
    return target;
}
If the size changes frequently and you cannot reuse the target, at least dispose the previous one, as anonymously suggested in the comments. Each new target will consume memory unless you release the resource in time. But dispose it only after you have used it in a shader or done whatever else you wanted with it; once disposed it is gone.
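Regarding reading the pixels back afterwards (which the question asks about but the answers above do not cover): a RenderTarget2D is itself a Texture2D, so a minimal sketch in XNA 4.0 would be:
// Copy the rendered pixels into a CPU-side array.
Color[] pixels = new Color[target.Width * target.Height];
target.GetData(pixels);

// Pixel at (x, y):
Color c = pixels[y * target.Width + x];
Note that with RenderTargetUsage.DiscardContents the target's contents can be lost the next time it is bound to the device, so read the data back while it is still valid.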
