vertexShader in a 2D game? - c#

I searched for tutorials to learn HLSL, and every tutorial I found talked about vertex shaders and pixel shaders (the tutorials used a 3D game as the example).
So I don't know whether a vertex shader is useful for 2D games.
As far as I know, a 2D game doesn't have any vertices, right?
I want to use shaders for post-processing.
My question is: do I have to learn about vertex shaders for my 2D game?

2D graphics in XNA (and Direct3D, OpenGL, etc) are achieved by rendering what is essentially a 3D scene with an orthographic projection that basically disregards the depth of the vertices that it projects (things don't get smaller as they get further away). You could create your own orthographic projection matrix with Matrix.CreateOrthographic.
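For illustration, here is roughly what such a projection looks like in code - a minimal sketch, not the exact matrix SpriteBatch builds internally, and it assumes you have access to the GraphicsDevice:

// Map screen-space pixel coordinates straight onto the viewport. Depth still
// exists (the 0..1 range below), but nothing in the 2D rendering relies on it.
Viewport vp = GraphicsDevice.Viewport;
Matrix projection = Matrix.CreateOrthographicOffCenter(
    0, vp.Width,      // left, right
    vp.Height, 0,     // bottom, top (flipped so Y grows downward, as SpriteBatch expects)
    0, 1);            // near, far - the whole 2D scene lives in this thin slab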
While you could render arbitrary 3D geometry this way, sprite rendering - like SpriteBatch - simply draws camera-facing "quads" (a square or rectangle made up of 2 triangles formed by 4 vertices). These quads have 3D coordinates - the orthographic projection simply ignores the Z axis.
A vertex shader basically runs on the GPU and manipulates the vertices you pass in, before the triangles that they form are rasterised. You certainly can use a vertex shader on these vertices that are used for 2D rendering.
For all modern hardware, vertex transformations - including the above-mentioned orthographic projection transformation - are usually done in the vertex shader. This is basically boilerplate, and it's unusual to use complicated vertex shaders in a 2D game. But it's not unheard of - for example my own game Dark uses a vertex shader to do 2D shadow projection.
But, if you are using SpriteBatch, and all you want is a pixel shader, then you can just omit the VertexShader from your .fx file - your pixel shader will still work. (To use it, load it as an Effect and pass it to this overload of SpriteBatch.Begin).
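In XNA 4.0 that looks roughly like this - a sketch, with a made-up asset name:

// Load an .fx file that contains only a pixel shader.
Effect postProcess = Content.Load<Effect>("MyPixelShader");   // hypothetical asset name

// Pass it to SpriteBatch; SpriteBatch still supplies its own vertex shader.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    null, null, null, postProcess);
spriteBatch.Draw(texture, Vector2.Zero, Color.White);
spriteBatch.End();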
If you don't replace it, SpriteBatch will use its default vertex shader. You can view the source for it here. It does a single matrix transformation (the passed matrix being an orthographic projection matrix, multiplied by an optional matrix passed to Begin).

I don't think so. If you're focusing on a 2D game, you can probably avoid learning shaders and still complete your projects.
Another option is to generate the scene in 3D but render it in 2D - in other words, you render a projection of your 3D scene onto a surface. In that case you may well need shaders, especially if you want to do some cool transformations without involving the CPU (often that's the only option).
Hope this helps.

Related

Texture rendering issue when scaling with a matrix

I am working on a game (in XNA/MonoGame) in which I have a large world consisting of many individual tiles with different textures. I store my world as a 2D array, and to render it I simply loop through every tile and draw it at the correct position.
Recently I implemented a zoom feature in my camera. My game's camera is made in the 'usual' XNA way, where you create a matrix based on position and scale and pass it into your SpriteBatch.Begin() calls. My transform matrix is calculated like so:
Transform = Matrix.Identity *
    Matrix.CreateTranslation(-(int)Position.X, -(int)Position.Y, 0) * // move the world opposite the camera
    Matrix.CreateTranslation(Origin.X, Origin.Y, 0) *                 // re-centre so scaling happens around the camera origin
    Matrix.CreateScale(Scale);                                        // zoom
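For reference, the matrix then goes into the Begin call roughly like this (the sort mode and states below are placeholders, not necessarily what my game uses):

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    SamplerState.PointClamp, null, null, null,   // placeholder states
    Transform);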
The problem I am now facing is when I zoom in (by changing the camera's scale variable) some tile textures look odd at some zoom levels. Here are some pictures showing what I mean:
Here is a map at a perfectly fine zoom level:
Here is a zoomed-out (and cropped) view of the same map - notice how the sand texture is weirdly "upscaled":
I do not have much experience with graphics programming and have no idea what is causing this, but it makes the map look very janky.

How to create a texture atlas of a 3D object in Unity3D

I am working on a 3D game and it has lots of game objects in a scene, so I'm working on reducing draw calls. I've used mesh combining on my static game objects, but my player isn't static and I can't use mesh combining on it. My player is nothing but a combination of cubes that use the Standard shader with differently colored materials on different parts. So I'm guessing I can use texture atlasing on my player to reduce draw calls, but I don't know how to do it.
Is my theory right? If I'm right, please help me with atlasing, and if I'm wrong, please point out my mistake.
Thanks in advance.
Put all the required images into the same texture. Create a material from that texture. Apply the same material to all of the cubes making up your character. UV map all cubes to the relevant part of the texture (use the UV offset in the Unity editor if the UV blocks are quite simple; otherwise you'll need to move the UV elements in your 3D modelling program).
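If you'd rather build the atlas at runtime, a minimal sketch could use Unity's Texture2D.PackTextures. This assumes the source textures are imported as readable, and the class and field names here are made up for the example:

using UnityEngine;

public class AtlasBuilder : MonoBehaviour      // hypothetical helper component
{
    public Texture2D[] sourceTextures;         // one texture per cube part
    public Renderer[] cubeRenderers;           // the cube renderers, in the same order

    void Start()
    {
        // Pack everything into one atlas; uvRects[i] says where texture i landed.
        var atlas = new Texture2D(1024, 1024);
        Rect[] uvRects = atlas.PackTextures(sourceTextures, 2, 1024);

        // One shared material so the cubes can batch together.
        var atlasMaterial = new Material(Shader.Find("Standard")) { mainTexture = atlas };

        for (int i = 0; i < cubeRenderers.Length; i++)
        {
            cubeRenderers[i].sharedMaterial = atlasMaterial;

            // Remap this cube's UVs into its rectangle of the atlas.
            Mesh mesh = cubeRenderers[i].GetComponent<MeshFilter>().mesh;
            Vector2[] uv = mesh.uv;
            Rect r = uvRects[i];
            for (int v = 0; v < uv.Length; v++)
                uv[v] = new Vector2(r.x + uv[v].x * r.width, r.y + uv[v].y * r.height);
            mesh.uv = uv;
        }
    }
}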

How to color a mesh with values at the vertices in WPF 3D?

We've got a sphere which we want to display in 3D and color according to a function that depends on spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to generate the texture and set the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect with per-vertex coloring?
I've converted the algorithm to use a linear texture which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
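A rough sketch of that idea in WPF - the colormap ramp and the variable names here are invented for illustration; the essential parts are the 256 x 1 BitmapSource used as a brush and the per-vertex TextureCoordinates that index into it:

using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

// Build a 256 x 1 colormap texture (a simple blue-to-red ramp as a stand-in).
byte[] pixels = new byte[256 * 4];
for (int i = 0; i < 256; i++)
{
    pixels[i * 4 + 0] = (byte)(255 - i);   // B
    pixels[i * 4 + 1] = 0;                 // G
    pixels[i * 4 + 2] = (byte)i;           // R
    pixels[i * 4 + 3] = 255;               // A
}
BitmapSource colormap = BitmapSource.Create(
    256, 1, 96, 96, PixelFormats.Bgra32, null, pixels, 256 * 4);

var material = new DiffuseMaterial(new ImageBrush(colormap));

// Map each vertex's scalar value (normalized to 0..1) to a U coordinate;
// V can be anything inside the one-pixel-high texture, e.g. 0.5.
var mesh = new MeshGeometry3D();
// ... fill mesh.Positions and mesh.TriangleIndices as before ...
foreach (double value in perVertexValues)      // hypothetical normalized values, one per vertex
    mesh.TextureCoordinates.Add(new Point(value, 0.5));

var model = new GeometryModel3D(mesh, material);

Because it is the texture coordinate (rather than a fixed color) that gets interpolated across each triangle, the result still shades smoothly between vertices.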

Making a custom mesh in C# managed DirectX

I need to make a DirectX 3D mesh at run time using Managed DirectX from C#. I have not been able to locate any information on how to do this.
No, I can't use a 3D modeler program to make my objects. They must be precisely sized and shaped, and I don't have ANY of the size or shape information until runtime.
No, I can't build up the model from existing DirectX mesh capabilities. (A simple example: DirectX would let you easily model a pencil by using a cone mesh and a cylinder mesh. Of course, you have to carry around two meshes for your pencil, not just one, and properly position and orient each. But you can not even make a model of a pencil split in half lengthwise as there is no half cylinder nor half cone mesh provided.)
At runtime, I have all of the vertices already computed and know which ones to connect to make the necessary triangles.
All I need is a solid color. I don't need to texture map.
One can get a mesh of a sphere using this DirectX call:
Mesh sphere = Mesh.Sphere(device, sphereRadius, sphereSlices, sphereStacks);
This mesh is built at runtime.
What I need to know is how to make a similar function:
Mesh shape = MakeCustomMesh(device, vertexlist, trianglelist);
where the two lists could be any suitable container/format.
If anyone can point me to managed DirectX (C#) sample code, even if it just builds a mesh from 3 hardcoded triangles, that would be a great benefit.
There's some sample code showing how to do this on MDXInfo. This creates a mesh with multiple subsets - if you don't need that, it's even easier.
Basically, you just create the mesh:
Mesh mesh = new Mesh(numFaces, numVertices, MeshFlags.Managed, CustomVertex.PositionColored.Format /* set this to match your vertex type */, device);
Then you can grab the mesh's vertex buffer and index buffer and overwrite them, using:
IndexBuffer indices = mesh.IndexBuffer;
VertexBuffer vertices = mesh.VertexBuffer;
Then, fill in the indices and vertices appropriately.
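A minimal sketch of that last step, assuming the PositionColored format chosen above (the single hard-coded triangle is just example data):

// Example data: one red triangle.
CustomVertex.PositionColored[] vertexData =
{
    new CustomVertex.PositionColored(0, 0, 0, Color.Red.ToArgb()),
    new CustomVertex.PositionColored(1, 0, 0, Color.Red.ToArgb()),
    new CustomVertex.PositionColored(0, 1, 0, Color.Red.ToArgb()),
};
short[] indexData = { 0, 1, 2 };   // indices into vertexData, three per triangle

// Copy the data into the mesh's buffers.
mesh.VertexBuffer.SetData(vertexData, 0, LockFlags.None);
mesh.IndexBuffer.SetData(indexData, 0, LockFlags.None);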

How To Produce A 2D Plane Cut from a 3D Image

I would like to write a C# program that generates a 2D image from a rendered 3D object (or objects) by "slicing" the 3D object with a cut plane. The desired output should be 2D data that can be displayed in a CAD program. For example:
A 3D image is defined by its vertices, and these vertices are contained in a Point3DList(). A method is then called taking Point3DList as its parameter, e.g. Cut2D(Point3DList). The method then generates the 2D vertices, saves them inside a Point2DList(), and these vertices can be read by a CAD program which displays them in 2D form.
My question therefore is whether there is an existing implementation of this in C# (.NET compatible), or whether anyone can suggest third-party components or algorithms to solve this problem.
Thanks in advance.
You pose an interesting question, in part, by not including a full definition of a 3D shape. You need to specify either the vertices and edges, or an algorithm to obtain the edges from the vertex list. Since an algorithm to obtain the edges from the vertex list devolves into specifying the vertices and edges, I will only cover that case here. My description also works best when the vertices and edges are transformed into a list of flat polygons. To break a vertex list down into polygons, you have to find cycles in the undirected graph that is created by the vertices and edges. For a triangular polygon with vertices A, B, and C you will end up with edges AB, BC, and AC.
The easiest algorithm that I can think of is:
1. Transform all points so that your desired cut plane becomes the plane Z = 0 (rotate, twist, and move as required to line the cut plane up with the XY plane).
2. For each flat polygon:
a. For each edge, check whether the two vertices have opposite signs on the Z axis (or whether one is 0). If Z0 * Z1 <= 0, this is the case.
b. Use the definition of a line and solve for the point where Z = 0. This gives you the X,Y coordinates of the intersection (a small sketch of this step follows the list).
c. You now have a dot, line, or polygon that represents the intersection of the original flat polygon with the cut plane.
d. Fill in the polygon formed by these shapes (if desired). If your 2D rendering package will not create a polygon from a list of vertices, you will need to render the pixels yourself using scanlines.
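A small sketch of steps (a) and (b) in C#, assuming the points have already been transformed so that the cut plane is Z = 0 (Point3D and Point2D here are simple stand-in types, not from a particular library):

// Minimal stand-in types for the sketch.
record Point3D(double X, double Y, double Z);
record Point2D(double X, double Y);

// Returns where edge (a, b) crosses the plane Z = 0, or null if it doesn't cross.
static Point2D? IntersectEdgeWithCutPlane(Point3D a, Point3D b)
{
    // Opposite signs (or a vertex exactly on the plane) means the edge crosses Z = 0.
    if (a.Z * b.Z > 0)
        return null;

    if (a.Z == b.Z)                       // both are 0: the edge lies in the plane
        return new Point2D(a.X, a.Y);

    // Parametric line P = a + t * (b - a); solve a.Z + t * (b.Z - a.Z) = 0 for t.
    double t = a.Z / (a.Z - b.Z);
    return new Point2D(a.X + t * (b.X - a.X),
                       a.Y + t * (b.Y - a.Y));
}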
Each of the individual algorithms should be in "Algorithms in C" or similar.
Graphics programs can be quite rewarding when they start to work.
Have Fun,
Jacob
This is more OpenGL-specific than C#-specific, but here's what I'd do:
Rotate and transform by a 3D matrix so that the 'slice' you want is 1 metre in 'front' of the camera.
Then set the near and far clipping planes to 1 m and 1.001 m, respectively.
Update: are you even using OpenGL? If not, you could perform the matrix arithmetic yourself.
It sounds like you want to get the 2D representation of the points of intersection of a plane with a three-dimensional surface or object. While I don't know an algorithm to produce such a thing offhand (I have done very little with 3D modeling applications), I think that is what you are asking about.
I encountered such an algorithm a number of years ago in either a Graphics Gems or GPU Gems or similar book. I could not find anything through a few Bing searches, but hopefully this will give you some ideas.
If it's a 3D texture, can't you just specify 3D texture coordinates (into the texture) for each vertex of a quad? Wouldn't that auto-interpolate the texels?
If you are looking for a third-party implementation, maybe you should explore Coin3d. It is capable of the kind of thing you require, though I am not sure of its exact database format or input requirements. I find your description lacking in that you do not specify the direction from which you want to project the 3D image onto a 2D plane.
