I am working on a 3D game that has lots of game objects in a scene, so I'm working on reducing draw calls. I've used mesh combining on my static game objects, but my player isn't static, so I can't use mesh combining on it. My player is just a combination of cubes that use the Standard shader, with different colour materials on different parts. So I'm guessing I can use texture atlasing on my player to reduce draw calls, but I don't know how to do it.
Is my reasoning right? If so, please help me with atlasing; if not, please point out my mistake.
Thanks in advance.
Put all the required images into the same texture. Create a material from that texture. Apply that same material to all of the cubes making up your character. UV map each cube to the relevant part of the texture (use the UV offset in the Unity editor if the UV blocks are quite simple; otherwise you'll need to move the UV elements in your 3D modelling program).
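As a rough illustration of the UV-mapping step, a script like the following can squeeze a cube's UVs into one cell of the atlas at runtime. This is a minimal sketch, assuming a 2×2 atlas; the `AtlasCell` name and the cell fields are made up for the example:

```csharp
using UnityEngine;

// Remaps this cube's UVs into one cell of a 2x2 texture atlas,
// so every cube can share a single atlas material (and batch together).
public class AtlasCell : MonoBehaviour
{
    // Which cell of the 2x2 atlas this cube should use (0 or 1 on each axis).
    public int cellX = 0;
    public int cellY = 0;

    void Start()
    {
        // .mesh gives this object its own mesh instance, not the shared asset.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
        {
            // Scale the 0..1 UVs down to a quarter and shift into the chosen cell.
            uvs[i] = new Vector2((uvs[i].x + cellX) * 0.5f,
                                 (uvs[i].y + cellY) * 0.5f);
        }
        mesh.uv = uvs;
    }
}
```

Because every cube then references the same material, Unity's dynamic batching can combine them into fewer draw calls, which is the point of the atlas.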
I'm creating a simple game and I need to create an extending cylinder. I know how to do it normally: by changing the object's pivot point and scaling it up. But now I have a texture on it and I don't want it to stretch.
I thought about adding small segments to the end of the cylinder when it has to grow, but this won't be smooth and might affect performance. Do you know any solution to this?
This is the rope texture that I'm using right now (but it might change in the future):
Before scaling
After scaling
I don't want it to stretch
This is not easy to accomplish but it is possible.
You have two ways to do this.
1. One is to procedurally generate the rope mesh and assign the material in real time. This is complicated for beginners; I can't help with this one.
2. The other solution doesn't require procedural mesh generation: change the texture tiling while the object is changing size.
For this to work, your texture must be tileable. You can't just use any random texture online. Also, you need a normal map to actually make it look like a rope. Here is a tutorial for making a rope texture with a normal map in Maya. There are other parts of the video you have to watch too.
Select the texture, change Texture Type to Texture, change Wrap Mode to Repeat, then click Apply.
Get the MeshRenderer of the mesh, then get the Material of the 3D object from the MeshRenderer. Use ropeMat.SetTextureScale to change the tiling of the texture. For example, when you change the xTile and yTile values in the code below, the texture of the mesh will be tiled.
public float xTile, yTile;
public GameObject rope;
Material ropeMat;

void Start()
{
    ropeMat = rope.GetComponent<MeshRenderer>().material;
}

void Update()
{
    ropeMat.SetTextureScale("_MainTex", new Vector2(xTile, yTile));
}
Now, you have to find a way to map the xTile and yTile values to the size of the mesh. It's not simple. Here is a complete method to calculate what value xTile and yTile should be when re-sizing the mesh/rope.
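For a simple case, that mapping can be sketched as below. This assumes the rope only stretches along its local Y axis and that you want a constant number of texture repeats per world unit; both assumptions and the `repeatsPerUnit` parameter are mine, not from the original answer:

```csharp
using UnityEngine;

// Keeps the rope texture density constant while the rope is scaled on Y,
// so the texture repeats instead of stretching.
public class RopeTiling : MonoBehaviour
{
    // How many texture repeats per world unit of rope length (tweak to taste).
    public float repeatsPerUnit = 1f;

    Material ropeMat;

    void Start()
    {
        ropeMat = GetComponent<MeshRenderer>().material;
    }

    void Update()
    {
        // The tile count grows with the rope's length, so each repeat of the
        // texture always covers the same amount of world space.
        float length = transform.localScale.y;
        ropeMat.SetTextureScale("_MainTex",
            new Vector2(1f, length * repeatsPerUnit));
    }
}
```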
I'm new to Unity/C# and building a simple game in order to learn it; essentially it involves flying over a terrain and shooting at things.
The question is: how can I create a non-flat, fixed-size (i.e. not endless) 2D terrain that I can randomise/generate once at the start of a scene?
Here's what I've thought/tried so far:
I started off by creating a massive, flat expanse using a game object with a Sprite Renderer and a Box Collider 2D. I'm using particles for rocket trails and explosions and set them to collide with a plane which worked well. This works, but the terrain is just flat and boring.
Next, I drew some rudimentary mountains in photoshop, added them as sprites with polygon collider 2D components and dragged them onto the scene (with an idea to randomise their locations later). This is quick and works well but doesn't play nice with the particles because of the 2D/3D mix, and I imagine would lead to all levels looking the same unless I can draw a LOT of different terrain sprites. Is this a normal approach? Is it possible to add more planes to the objects to allow particles to collide?
Another thought was - could I create a polygon collider 2D / or edge collider 2D and basically fill the area with a texture? No luck on that one, it seems like it could work but I couldn't get a texture to appear and fit the shape. A shame, especially as I could then generate or randomise the position of the points later.
Finally, I wondered whether I should just use 3D shapes. I can texture them and it works great with the particles, but doesn't interact right now with all my other 2D assets, I'd have to add 3D colliders to them and that doesn't seem right, since Unity specifically provides a 2D toolset and I am trying to make a 2D game.
All of my solutions have pros and cons - maybe there's something I've missed?
You can create meshes procedurally in Unity from scripts. They can be either 2D or 3D depending on how you set the vertices. Colliders are independent of the rendered mesh. I don't have an in-depth tutorial for this, but I think this video will point you in the right direction and give you keywords to google or to read about in the Unity documentation:
https://www.youtube.com/watch?v=nPX8dw283X4
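As a minimal sketch of that idea (my own example, not from the video): generate a random height line once at scene start and turn it into a flat strip mesh, with an EdgeCollider2D along the surface so 2D physics and particle collisions still work:

```csharp
using UnityEngine;

// Builds a simple random 2D terrain strip once at startup:
// a mesh for rendering and an EdgeCollider2D along the surface.
[RequireComponent(typeof(MeshFilter), typeof(EdgeCollider2D))]
public class Terrain2D : MonoBehaviour
{
    public int columns = 50;       // number of terrain segments
    public float width = 100f;     // total terrain width
    public float maxHeight = 5f;   // maximum hill height

    void Start()
    {
        int points = columns + 1;
        var top = new Vector2[points];          // surface line for the collider
        var verts = new Vector3[points * 2];    // surface + base vertices
        var tris = new int[columns * 6];

        for (int i = 0; i < points; i++)
        {
            float x = i * width / columns;
            float y = Random.Range(1f, maxHeight);
            top[i] = new Vector2(x, y);
            verts[i * 2] = new Vector3(x, y, 0f);      // surface vertex
            verts[i * 2 + 1] = new Vector3(x, 0f, 0f); // base vertex
        }

        // Two triangles per column, wound to face the default 2D camera.
        for (int i = 0; i < columns; i++)
        {
            int v = i * 2, t = i * 6;
            tris[t] = v;     tris[t + 1] = v + 2; tris[t + 2] = v + 1;
            tris[t + 3] = v + 1; tris[t + 4] = v + 2; tris[t + 5] = v + 3;
        }

        var mesh = new Mesh { vertices = verts, triangles = tris };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
        GetComponent<EdgeCollider2D>().points = top;
    }
}
```

Raw Random.Range heights give jagged terrain; smoothing the height line (e.g. with Mathf.PerlinNoise) would give rolling hills, but the mesh-building pattern is the same.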
I searched for tutorials to learn HLSL, and every tutorial I found talked about vertex shaders and pixel shaders (the tutorials used a 3D game as an example).
So I don't know if a vertex shader is useful for 2D games.
As far as I know, a 2D game doesn't have any vertices, right?
I want to use shaders for post-processing.
My question is: do I have to learn vertex shaders for my 2D game?
2D graphics in XNA (and Direct3D, OpenGL, etc) are achieved by rendering what is essentially a 3D scene with an orthographic projection that basically disregards the depth of the vertices that it projects (things don't get smaller as they get further away). You could create your own orthographic projection matrix with Matrix.CreateOrthographic.
While you could render arbitrary 3D geometry this way, sprite rendering - like SpriteBatch - simply draws camera-facing "quads" (a square or rectangle made up of 2 triangles formed by 4 vertices). These quads have 3D coordinates - the orthographic projection simply ignores the Z axis.
A vertex shader basically runs on the GPU and manipulates the vertices you pass in, before the triangles that they form are rasterised. You certainly can use a vertex shader on these vertices that are used for 2D rendering.
For all modern hardware, vertex transformations - including the above-mentioned orthographic projection transformation - are usually done in the vertex shader. This is basically boilerplate, and it's unusual to use complicated vertex shaders in a 2D game. But it's not unheard of - for example my own game Dark uses a vertex shader to do 2D shadow projection.
But, if you are using SpriteBatch, and all you want is a pixel shader, then you can just omit the VertexShader from your .fx file - your pixel shader will still work. (To use it, load it as an Effect and pass it to this overload of SpriteBatch.Begin).
If you don't replace it, SpriteBatch will use its default vertex shader. You can view the source for it here. It does a single matrix transformation (the passed matrix being an orthographic projection matrix, multiplied by an optional matrix passed to Begin).
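For example, applying a pixel-shader-only Effect to sprites looks roughly like this in XNA 4.0 (the asset name is hypothetical; the Begin overload is the one mentioned above):

```csharp
// Load an .fx file that contains only a pixel shader.
Effect tintEffect = Content.Load<Effect>("MyPixelShader"); // hypothetical asset name

// Pass the effect to SpriteBatch.Begin; SpriteBatch supplies its default
// vertex shader (the orthographic transform) automatically.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                  null, null, null, tintEffect);
spriteBatch.Draw(texture, position, Color.White);
spriteBatch.End();
```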
I don't think so. If you're focusing your attention on 2D games, you can avoid learning vertex shaders and still complete your projects.
Another option is to render in 2D a scene that is actually generated in 3D; in other words, you render a projection of your 3D scene onto some surface. In that case you may well need shaders, especially if you want to do some cool transformations without the CPU (often they are the only option).
Hope this helps.
I need to write text on top of a 3D model in XNA, or on the model itself.
I've made models for players and I need to show the name of each player either on it or above it!
Thanks in advance!
Well, you could either use billboarding, which draws the texture (or text) so that it always points at the camera, or you can make a new plane in your 3D modelling program and position it where you want the text to appear on the player. Then texture it separately from the rest of the player and change the texture with BasicEffect.Texture.
msdn: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.basiceffect_members.aspx
I would definitely recommend billboarding over changing the texture, as it makes it a whole lot less complicated.
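One lightweight way to get a camera-facing label is a screen-space overlay rather than a textured quad; this is a sketch of that variant of the billboarding idea, and the font asset and variable names are my own:

```csharp
// Project the point above the player's head into screen space,
// then draw the name with SpriteBatch so it always faces the camera.
Vector3 labelWorldPos = playerPosition + Vector3.Up * 2f; // 2 units above the model
Vector3 screenPos = GraphicsDevice.Viewport.Project(
    labelWorldPos, projection, view, Matrix.Identity);

if (screenPos.Z < 1f) // only draw when the point is in front of the camera
{
    spriteBatch.Begin();
    spriteBatch.DrawString(font, playerName,
        new Vector2(screenPos.X, screenPos.Y), Color.White);
    spriteBatch.End();
}
```

If you need the label to exist in the world (occluded by geometry, scaled with distance), a true billboarded quad via Matrix.CreateBillboard is the way to go instead.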
I need to make a DirectX 3D mesh at run time using Managed DirectX from C#. I have not been able to locate any information on how to do this.
No, I can't use a 3D modeler program to make my objects. They must be precisely sized and shaped, and I don't have ANY of the size or shape information until runtime.
No, I can't build up the model from existing DirectX mesh capabilities. (A simple example: DirectX would let you easily model a pencil by using a cone mesh and a cylinder mesh. Of course, you have to carry around two meshes for your pencil, not just one, and properly position and orient each. But you can not even make a model of a pencil split in half lengthwise as there is no half cylinder nor half cone mesh provided.)
At runtime, I have all of the vertices already computed and know which ones to connect to make the necessary triangles.
All I need is a solid color. I don't need to texture map.
One can get a mesh of a sphere using this DirectX call:
Mesh sphere = Mesh.Sphere(device, sphereRadius, sphereSlices, sphereStacks);
This mesh is built at runtime.
What I need to know is how to make a similar function:
Mesh shape = MakeCustomMesh(device, vertexlist, trianglelist);
where the two lists could be any suitable container/format.
If anyone can point me to managed DirectX (C#) sample code, even if it just builds a mesh from 3 hardcoded triangles, that would be a great benefit.
There's some sample code showing how to do this on MDXInfo. This creates a mesh with multiple subsets - if you don't need that, it's even easier.
Basically, you just create the mesh:
Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed, CustomVertex.PositionColored.Format /* pick the format that matches your vertex type */, device);
Then, you can grab the mesh's vertex buffer and index buffer and overwrite them, using:
IndexBuffer indices = mesh.IndexBuffer;
VertexBuffer vertices = mesh.VertexBuffer;
Then, fill in the indices and vertices appropriately.
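A rough sketch of that last step for a single hardcoded triangle, using Managed DirectX 1.1 and a solid colour (no texture, matching the question); `device` is assumed to be an initialised Device:

```csharp
// Three vertices of one triangle: position plus diffuse colour.
var verts = new CustomVertex.PositionColored[]
{
    new CustomVertex.PositionColored( 0f,  1f, 0f, Color.Red.ToArgb()),
    new CustomVertex.PositionColored( 1f, -1f, 0f, Color.Red.ToArgb()),
    new CustomVertex.PositionColored(-1f, -1f, 0f, Color.Red.ToArgb()),
};
short[] indices = { 0, 1, 2 };

// 1 face, 3 vertices, position+colour vertex format.
Mesh mesh = new Mesh(1, 3, MeshFlags.Managed,
                     CustomVertex.PositionColored.Format, device);

// Copy the data into the mesh's vertex and index buffers.
mesh.SetVertexBufferData(verts, LockFlags.None);
mesh.SetIndexBufferData(indices, LockFlags.None);

// Later, in the render loop:
// mesh.DrawSubset(0);
```

The same pattern scales to your computed vertex and triangle lists: size the Mesh from their lengths, then hand the arrays to SetVertexBufferData and SetIndexBufferData.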