We all know there is no native functionality within WPF 4.5 to draw pure lines in 3D space on a Viewport3D.
And I'm aware of the fact that there are a few 3D toolkits for WPF. But for my master's thesis I built my own 3D engine. It's almost complete, but I would like to be able to draw the wireframes of my objects (for example, to show the tessellation of a sphere).
My engine can render almost every basic geometric shape (cube, sphere, cone, cylinder, pyramid, ...).
Do you have any idea how to draw lines? (My only idea is to use a very thin cylinder or cube, but I don't think that is very efficient, because I would have to render at least 8 vertices (12 triangles) for a single line.)
So you just pass the necessary data to the pixel shader of your engine, along with color, multisampling, width or whatever information it might need, and draw the lines with the pixel shader.
If you are fine with unit-width lines and you don't mind rendering in wireframe, then you can go that way.
A more complete alternative is drawing quadrangles composed of two adjacent triangles. You can use lines as primitives and take advantage of the geometry shader to generate the quad.
Create a geometry shader that receives the two points forming each line and outputs a triangle list; you just need to append four points to the output stream. Use the computations shown in this paper from NVIDIA to calculate each of the four corners of the quad.
While the input of the geometry shader is lines, the output will be two triangles properly set up to form your line. This technique also offers quite some flexibility, since the quads are not constrained to being rectangles (i.e. each end can have a different width).
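As a rough illustration of that quad computation only (a CPU-side C# sketch using WPF's Point/Vector types; a real geometry shader would run the same math per line primitive, and the half-width parameters here are illustrative):

```csharp
using System.Windows;

// Illustrative CPU-side version of the quad computation the geometry shader would
// perform per line: extrude both endpoints along the screen-space normal.
static class LineQuad
{
    // Returns four corners in triangle-strip order, i.e. two triangles forming the thick line.
    public static Point[] Expand(Point p0, Point p1, double halfWidth0, double halfWidth1)
    {
        Vector dir = p1 - p0;
        dir.Normalize();
        Vector normal = new Vector(-dir.Y, dir.X);   // perpendicular to the line direction

        return new[]
        {
            p0 + normal * halfWidth0,
            p0 - normal * halfWidth0,
            p1 + normal * halfWidth1,
            p1 - normal * halfWidth1,
        };
    }
}
```

Because the two half-widths are independent, the quad can taper from one end to the other, which is where the extra flexibility comes from.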
I am working with C#, MonoGame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cubes with a different shader than the filling. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within a cube object. Here is a small painting to make clear what I want to do (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with this approach.
I tried it with post-effects like the edge detection of comic shaders, but with this approach I am not able to render only specific edges. Besides that, if two cubes are next to each other the shader does not recognize the edges.
I am not searching for a ready-to-use solution from you but I would be glad to get some tips/approaches/tutorials/similar projects/etc on how to achieve my goal. Are there some shader experts out there? I am at my wit's end.
(If, however, you would like to post a ready-to-use solution, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges) and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using the depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
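As a hedged sketch of the filter logic only (written CPU-side in C# for readability; in practice this runs as a pixel shader sampling the normal/material render target, and the arrays and threshold here are illustrative stand-ins):

```csharp
using Microsoft.Xna.Framework;

// CPU-side illustration of the 3x3 outline test; in practice this is a pixel shader
// sampling the (normal, material ID) render target. Border handling is omitted, so
// x and y are assumed to be at least one pixel away from the image edge.
static class OutlineFilter
{
    public static bool IsOutlinePixel(Vector3[,] normals, int[,] materialIds,
                                      int x, int y, float normalThreshold = 0.6f)
    {
        Vector3 centerNormal = normals[x, y];
        int centerMaterial = materialIds[x, y];

        for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            if (dx == 0 && dy == 0)
                continue;

            // Large normal discontinuity -> silhouette edge.
            if (Vector3.Dot(centerNormal, normals[x + dx, y + dy]) < normalThreshold)
                return true;

            // Material ID transition -> edge between differently colored cubes.
            if (materialIds[x + dx, y + dy] != centerMaterial)
                return true;
        }
        return false;
    }
}
```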
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures, and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer number of vertices is simply too much. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've made to have cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists, List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and call RenderShape on the ones that aren't empty (like Air). The shape class that I have is pasted below. The commented code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is copied into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs, called TechCraft:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is to hide faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices.
 _ _       _ _
|_|_|  ==  |_ _|
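To make the face hiding concrete, here is a minimal sketch of the idea, assuming a cubic chunk stored as a bool array; the CubeFace enum and the emitFace callback are illustrative stand-ins for however your engine appends face vertices and indices to the region's buffers:

```csharp
using System;

// Hypothetical chunk-baking sketch for one region of cubes.
enum CubeFace { Left, Right, Bottom, Top, Back, Front }

static class ChunkBaker
{
    // solid[x, y, z] == true means that cell contains a block.
    public static void Bake(bool[,,] solid, Action<int, int, int, CubeFace> emitFace)
    {
        int size = solid.GetLength(0);
        for (int x = 0; x < size; x++)
        for (int y = 0; y < size; y++)
        for (int z = 0; z < size; z++)
        {
            if (!solid[x, y, z])
                continue;

            // Emit a face only when the neighbouring cell is empty or outside the chunk;
            // a face shared by two solid cubes can never be seen, so it is skipped.
            if (x == 0        || !solid[x - 1, y, z]) emitFace(x, y, z, CubeFace.Left);
            if (x == size - 1 || !solid[x + 1, y, z]) emitFace(x, y, z, CubeFace.Right);
            if (y == 0        || !solid[x, y - 1, z]) emitFace(x, y, z, CubeFace.Bottom);
            if (y == size - 1 || !solid[x, y + 1, z]) emitFace(x, y, z, CubeFace.Top);
            if (z == 0        || !solid[x, y, z - 1]) emitFace(x, y, z, CubeFace.Back);
            if (z == size - 1 || !solid[x, y, z + 1]) emitFace(x, y, z, CubeFace.Front);
        }
    }
}
```

You would run this once per region and only re-run it for a region when one of its cubes changes, which is the "bake into a vertex buffer" idea above.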
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all - your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader, and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes - anything behind them is just not visible. A natural approach for this would be to use an octree data structure, which divides 3D space into voxels (cubes). Using an octree you could quickly determine which cubes are visible, and just draw those cubes - so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes will not change much, so you may be able to use this "temporal coherence" to update your data structures to minimise the work that needs to be done to decide which cubes are visible.
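As a rough sketch of the hierarchical-culling skeleton only (the node layout is hypothetical, XNA's BoundingBox/BoundingFrustum are used for the visibility test, and the occlusion and temporal-coherence parts would sit on top of this):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Hypothetical octree node: either eight children or a leaf holding cube positions.
class OctreeNode
{
    public BoundingBox Bounds;
    public OctreeNode[] Children;                       // null for a leaf
    public List<Vector3> Cubes = new List<Vector3>();   // filled only in leaves

    // Collect the cubes whose region intersects the view frustum
    // (built from the camera's view * projection matrix).
    public void CollectVisible(BoundingFrustum frustum, List<Vector3> visible)
    {
        if (frustum.Contains(Bounds) == ContainmentType.Disjoint)
            return;                                     // whole subtree is off screen

        if (Children == null)
        {
            visible.AddRange(Cubes);                    // leaf: these cubes get drawn
            return;
        }

        foreach (OctreeNode child in Children)
            child.CollectVisible(frustum, visible);
    }
}
```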
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing; it allows you to pass one model into the shader along with a stream of world positions to "instance" that model hundreds (even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a re-usable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc).
Once you have a base model (it could be a single face of the block), its mesh will have an associated texture that you can replace with whatever you want, which lets you dynamically change the block texturing for each side and for differing types of blocks.
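As a minimal sketch of the XNA 4.0 instancing calls (HiDef profile; the instance vertex layout follows what the linked sample's shader expects as far as I recall, names like instanceTransforms are illustrative, and the effect setup from the sample is omitted):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Minimal sketch of XNA 4.0 hardware instancing. One world matrix per instance is
// streamed through a second vertex buffer; the instancing shader reads it as four
// BLENDWEIGHT vector4s and its pass must be applied before drawing.
static class InstancedDrawing
{
    static readonly VertexDeclaration InstanceDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 1),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 2),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 3));

    // instanceTransforms is illustrative: one world matrix per cube/tree/bird instance.
    public static void Draw(GraphicsDevice device, ModelMeshPart part, Matrix[] instanceTransforms)
    {
        // For real use, create this buffer once and refill it; it is recreated here for brevity.
        var instanceBuffer = new DynamicVertexBuffer(device, InstanceDeclaration,
                                                     instanceTransforms.Length, BufferUsage.WriteOnly);
        instanceBuffer.SetData(instanceTransforms);

        // Stream 0: the model geometry. Stream 1: per-instance data, advanced once per instance.
        device.SetVertexBuffers(
            new VertexBufferBinding(part.VertexBuffer, part.VertexOffset, 0),
            new VertexBufferBinding(instanceBuffer, 0, 1));
        device.Indices = part.IndexBuffer;

        // Assumes the instancing effect's pass has already been applied.
        device.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                       part.NumVertices, part.StartIndex,
                                       part.PrimitiveCount, instanceTransforms.Length);
    }
}
```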
We've got a sphere which we want to display in 3D and color according to a function that depends on the spherical coordinates.
The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface.
The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D.
With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid any more.
Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
I've converted the algorithm to use a linear texture which is really just a lookup into the colormap. This seems to work great and is a much better solution than the previous one. Instead of creating a texture of size ThetaSamples * PhiSamples, I'm now only creating a fixed texture of 256 x 1.
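A rough WPF sketch of that idea, assuming the scalar value per vertex is already known; the 256-entry colormap, the helper names and the min/max normalization are illustrative:

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

// Rough sketch: a 256 x 1 colormap bitmap used as the mesh texture. Each vertex gets
// a texture coordinate whose u component looks up its scalar value in the colormap,
// so colors interpolate smoothly across the triangles.
static class SphereColoring
{
    public static ImageBrush BuildColormapBrush(Color[] colormap)
    {
        var bitmap = new WriteableBitmap(colormap.Length, 1, 96, 96, PixelFormats.Bgra32, null);
        var pixels = new byte[colormap.Length * 4];
        for (int i = 0; i < colormap.Length; i++)
        {
            pixels[i * 4 + 0] = colormap[i].B;
            pixels[i * 4 + 1] = colormap[i].G;
            pixels[i * 4 + 2] = colormap[i].R;
            pixels[i * 4 + 3] = colormap[i].A;
        }
        bitmap.WritePixels(new Int32Rect(0, 0, colormap.Length, 1), pixels, colormap.Length * 4, 0);
        return new ImageBrush(bitmap);
    }

    public static void ApplyScalarColoring(MeshGeometry3D mesh, double[] vertexValues,
                                           double min, double max)
    {
        // One texture coordinate per vertex: u in [0,1] selects the color, v is unused.
        mesh.TextureCoordinates.Clear();
        foreach (double value in vertexValues)
        {
            double u = (value - min) / (max - min);
            mesh.TextureCoordinates.Add(new Point(u, 0.5));
        }
    }
}
// Usage (illustrative): geometryModel.Material = new DiffuseMaterial(SphereColoring.BuildColormapBrush(colors));
```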
Given a coordinate, how can I color a single pixel in XNA? i.e.
Coordinate(10,11).Color = Color.Red
If you're planning on doing a lot of pixels, for something like a particle system, it would be better to use a shader. You'll probably run into performance issues eventually just using a SpriteBatch.
There are two ways, depending on which coordinates you mean:
For screen coordinates, the easiest way is to have a Texture2D that holds nothing but a single white pixel, and then draw it with SpriteBatch, passing whatever color you want to the Draw method.
For 3D space coordinates you want to use a PointList.
There are a couple of more complicated things you could do as well: use Texture2D.SetData to make your own single white pixel texture at run time, or use a PointList and project to screen space.
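Here is a minimal sketch of that single-white-pixel approach for screen coordinates (the class and method names are illustrative):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch of the single-white-pixel approach for screen coordinates.
class PixelDrawer
{
    Texture2D pixel;

    public void Load(GraphicsDevice device)
    {
        pixel = new Texture2D(device, 1, 1);
        pixel.SetData(new[] { Color.White });   // one white texel, tinted at draw time
    }

    // Call between spriteBatch.Begin() and spriteBatch.End(),
    // e.g. Draw(spriteBatch, 10, 11, Color.Red).
    public void Draw(SpriteBatch spriteBatch, int x, int y, Color color)
    {
        spriteBatch.Draw(pixel, new Vector2(x, y), color);
    }
}
```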