Drawing 3D Lines with WPF - C#

We all know there is no native functionality in WPF 4.5 to draw pure lines in 3D space on a Viewport3D. I'm also aware of the fact that there are a few 3D toolkits for WPF, but for my master's thesis I built my own 3D engine. It's almost complete, but I would like to be able to draw the wireframes of my objects (for example, to show the tessellation of a sphere).
My engine can render almost every basic geometric form (cube, sphere, cone, cylinder, pyramid, ...).
Do you have any idea how to draw lines? (My only idea is to use a very thin cylinder or cube, but I don't think that is very efficient, because I would have to render at least 8 vertices (12 triangles) for every single line.)

So you just pass the necessary data to the pixel shader of your engine, along with color, multisampling, width, or whatever other information it might need, and draw those lines with the pixel shader.

If you are fine with unit-width lines and you don't mind rendering in wireframe, then you can go that way.
A more complete alternative is drawing quads composed of two adjacent triangles. You can use lines as primitives and take advantage of the geometry shader to generate the quads.
Create a geometry shader receiving the two points forming each line and outputting a triangle list. You just need to append four points to the output stream. Use the computations shown in this paper from NVIDIA to calculate the four corner coordinates of the quad.
While the input of the geometry shader was lines, the output will be two triangles properly set up to form your line. This technique actually offers quite some flexibility, since the quads are not constrained to being rectangles (i.e. each side can have a different width).
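To illustrate a simplified version of the computation the geometry shader would perform, here is a minimal CPU-side C# sketch that expands a screen-space segment into the four corners of a quad (the ExpandToQuad name and the use of System.Numerics are assumptions for the example; in a real engine this math would run in the geometry shader itself):

using System.Numerics;

static class LineQuadExpansion
{
    // Expand a screen-space segment (a -> b) into the four corners of a
    // quad of the given width, ordered as a triangle strip: a-, a+, b-, b+.
    public static Vector2[] ExpandToQuad(Vector2 a, Vector2 b, float width)
    {
        Vector2 dir = Vector2.Normalize(b - a);
        // Perpendicular to the segment, scaled to half the line width.
        Vector2 offset = new Vector2(-dir.Y, dir.X) * (width * 0.5f);
        return new[] { a - offset, a + offset, b - offset, b + offset };
    }
}

Emitting these four points as a triangle strip yields the two triangles mentioned above; passing a different width at each endpoint gives the non-rectangular quads the answer describes.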

Related

C# MonoGame Performance when Drawing Thousands of SpriteBatch.DrawString()

I'm currently creating a large map that consists of a lot of rectangles (33,844), each with a unique name (label) that I'm drawing on top of it using a SpriteFont.
Drawing all of the rectangles takes no performance hit at all. But, as soon as I try to write all of their labels with DrawString(), my performance goes into the dumps.
In my head, I would like to draw all my rectangles and text to one texture all at once, and only have to keep redrawing that one finished texture. My issue is that this is an enormous map, and some of the coordinates for the rectangles are very high (for example, one slot's x is 14869 and y is 23622); the whole thing is far bigger than a Texture2D allows.
Since this is a map, I really only need to draw the entire thing once, and then allow the user to scroll/move around it. There's no need for me to continually redraw all of the individual rectangles and their labels.
Does anyone have experience with this type of situation?
Try to render only the labels that you can see on the screen, and if the user can zoom out far enough, just don't render them at all.
Text rendering is expensive, since it basically creates a rectangle to draw on for every character in the string and then applies the same RGBA font texture to each one. So the number of rectangles grows with the number of characters you write, which means four new vertices per character.
Depending on what you write, you could simply create a texture with the text already on it and render that, but it won't be very dynamic.
EDIT: I need to clarify something.
There's no need for me to continually redraw all of the individual rectangles and their labels.
This is wrong. You have to draw the whole thing every frame. Sure, it doesn't grow memory-wise, but it is still a lot to render, and you will need to render it every frame.
But as I said: try to render only the labels and rectangles that intersect the screen boundaries, and you should be fine.
There are two ways to solve your problem.
You can either render your map to a RenderTarget(2D/3D), or you can cull the rectangles/text that are off-screen. However, I am not 100% sure that render targets can be as large as you would need, but you could always segment your map into multiple smaller render targets.
For more information on render targets, you might want to check out RB Whitaker's article on them: http://rbwhitaker.wikidot.com/render-to-texture
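As a rough sketch of the segmented render-target approach (BakeMapTile and DrawTileContents are hypothetical names, and the square tile size is an assumption), you could bake each region of the map once and afterwards only draw the cached textures:

// Hypothetical helper: bakes one tile of the map into a cached texture.
RenderTarget2D BakeMapTile(GraphicsDevice device, SpriteBatch batch,
                           int tileX, int tileY, int tileSize)
{
    var target = new RenderTarget2D(device, tileSize, tileSize);
    device.SetRenderTarget(target);
    device.Clear(Color.Transparent);

    batch.Begin();
    // Draw all rectangles and labels that fall inside this tile,
    // offset so the tile's top-left corner lands at (0, 0).
    DrawTileContents(batch, tileX, tileY, tileSize);
    batch.End();

    device.SetRenderTarget(null); // back to the back buffer
    return target;
}

Once baked, drawing the map is one batch.Draw call per visible tile, and a tile only needs re-baking when its contents change.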
Culling, in case you are not familiar with the term as used in this context, means rendering only what is visible to the end user. There are various ways that culling can be implemented. It does, however, require you to have already implemented a camera (or some type of view region): you perform a basic axis-aligned bounding box collision test (AABB collision, which MonoGame's Rectangle supports out of the box) of each rectangle against the camera's viewport and only render it if there is a collision.
A basic implementation of culling would look something like this:
// The rectangle this map item occupies, in world coordinates.
Rectangle myRect = new Rectangle(100, 100, 48, 32);

public void DrawMapItem(SpriteBatch batch, Rectangle viewRegion)
{
    // Intersects also keeps items that are only partially visible;
    // Contains would cull anything crossing the edge of the view.
    if (viewRegion.Intersects(myRect))
    {
        // Render your object here with the SpriteBatch.
    }
}
Where 'viewRegion' is the area of your world that the camera/end-user can actually see.
You can also combine the two methods, and render the map to multiple render targets, and then cull the render targets.
Hope this helps!

3D z-buffer in C# with GDI+

I'm writing a very simple 3D engine in C# and GDI+, just to render some models (I think DirectX or OpenGL would be like using a shovel to eat soup). So far I have successfully implemented drawing the wireframe of my model, but the next step is, of course, faces. And there is my problem: for now I just project my 3D points to 2D points and then, for each face, draw it with a simple
g.DrawPolygon(Pens.Red, projected_points);
and for the wireframe that's OK.
Is it possible to calculate the overlapping parts of the polygons and then draw them with FillPolygon? Or is it a better idea to draw pixel by pixel, writing a pixel only if it is nearer than the depth already stored in my z-buffer?
If the first option is possible, which one is faster (to implement and to compute)?
Is it possible to calculate the overlapping parts of the polygons and then draw them with FillPolygon? Or is it a better idea to draw pixel by pixel, writing a pixel only if it is nearer than the depth already stored in my z-buffer? If the first option is possible, which one is faster (to implement and to compute)?
Yes, it is possible. You can test every polygon against every other polygon in your list. The complexity depends on the type of the polygons (it's easiest with triangles, of course), but performance may drop drastically with a high polygon count. And even if you find the overlapping areas, you will need to interpolate colors or texture coordinates (if you plan to use them). Also, I'm not sure about the API you use for drawing, but GDI doesn't support filling polygons with interpolated colors.
I have heard that this was the approach used in 3D graphics before the Z-buffer was invented. :)
I once tried a similar project and used a Z-buffer plus my own routine to fill triangles with interpolated colors (which uses the Z-buffer). I drew directly into a GDI bitmap's pixel data buffer, and after all polygons had been rendered, I BitBlt'ed the result to the screen.
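For illustration, here is a minimal sketch of that per-pixel depth test against a GDI+ bitmap locked with LockBits (the float[] z-buffer and the PlotPixel helper are assumptions; a real renderer would call this from a triangle-fill routine):

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Assumes the bitmap was locked as 32bpp ARGB:
//   BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
//       ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
// and that zBuffer was reset to float.MaxValue at the start of each frame.
static void PlotPixel(BitmapData data, float[] zBuffer, int x, int y, float z, int argb)
{
    int index = y * data.Width + x;
    if (z < zBuffer[index])   // depth test: smaller z means closer to the viewer
    {
        zBuffer[index] = z;   // record the new closest depth
        Marshal.WriteInt32(data.Scan0 + y * data.Stride + x * 4, argb);
    }
}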

OpenTK Primitive type switching in immediate mode

I can't post images because I don't have enough reputation, so only links are provided.
I'm using OpenTK to render a 3D model (importing it works fine). The model was made in SketchUp, and the edge lines were exported along with it. When I try to render it, the lines are drawn over all of the polygons, even where they should be behind them and therefore invisible. I have deduced that this is probably because I call GL.End(), then switch to line mode and (probably) render over the existing image.
Is there a way to have both lines and triangles rendered simultaneously (with correct depth and overlap)? If not, how should I draw the lines? Can I draw them as triangles?
In my renderer: http://i.stack.imgur.com/oC9Ux.png
In SketchUp (how I'm trying to make it look): http://i.stack.imgur.com/Jmu1S.png
My rendering code (world is the imported COLLADA data, geometryLines are the lines being rendered, and the triangles are the triangulated faces):
// State changes such as GL.Enable and GL.Hint are not allowed between
// GL.Begin and GL.End, so set them up before starting a primitive batch.
GL.Enable(EnableCap.LineSmooth);
GL.Hint(HintTarget.LineSmoothHint, HintMode.Nicest);

GL.Begin(PrimitiveType.Triangles);
GL.Color3(Color.Red);
foreach (Triangle tri in world.triangles)
{
    GL.Vertex3(tri.vertices[0]);
    GL.Vertex3(tri.vertices[1]);
    GL.Vertex3(tri.vertices[2]);
}
GL.End();

GL.Begin(PrimitiveType.Lines);
GL.Color3(Color.Blue);
foreach (GeometryLine line in world.geometryLines)
{
    GL.Vertex3(line.vertices[0]);
    GL.Vertex3(line.vertices[1]);
}
GL.End();

SwapBuffers();
(responding to your update)
The issue you're seeing with lines not fully appearing over the geometry you've rendered is a typical case of Z-fighting. The depth buffer records the depth of the closest geometry that was rendered at each pixel. When new geometry (your lines) is rendered above existing geometry (your triangles), the new pixels are only kept if their depth is less than what was recorded in the depth buffer. However, when you render two objects at more or less exactly the same depth (such as your lines and the edges of your triangles), which geometry is closer or farther at a given pixel becomes somewhat arbitrary due to floating-point errors.
The usual way to work around this issue is to use a hack called depth biasing. You start by drawing your triangle-based geometry, then you draw your lines while instructing OpenGL to offset their depth a tiny bit towards the viewer. This is achieved using the glPolygonOffset function. The result is that the lines pass the depth test and are drawn above the triangles.
This is the technique used for decals such as bullet holes on walls. Note that you may want to disable depth writing when drawing decals since you usually consider that they are at the same depth as the wall itself, and want to avoid Z-fighting between multiple decals.
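A minimal OpenTK sketch of that workaround follows. One caveat (my addition, not from the answer above): glPolygonOffset only affects polygon rasterization, not GL_LINES, so this variant pushes the filled triangles slightly away from the viewer instead and draws the lines at their true depth, which has the same visual effect. DrawTriangles and DrawLines stand in for the two passes in the question's code:

GL.Enable(EnableCap.DepthTest);

// Push the filled triangles slightly back in depth so the lines,
// drawn at their true depth, win the depth test along shared edges.
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(1.0f, 1.0f);
DrawTriangles();   // the PrimitiveType.Triangles pass
GL.Disable(EnableCap.PolygonOffsetFill);

DrawLines();       // the PrimitiveType.Lines pass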

How to render specific edges of a cube differently from the filling in XNA? (MonoGame)

I am working with C#, Monogame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cube with another shader than the filling. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within the cube-object. Here is a small painting to make clear what I want to do (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader, and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with that approach.
I tried post-effects like the edge detection used in comic/toon shaders, but with that approach I am not able to render only specific edges. Besides that, if two cubes are next to each other, the shader does not recognize the edges between them.
I am not searching for a ready-to-use solution from you, but I would be glad to get some tips/approaches/tutorials/similar projects/etc. on how to achieve my goal. Are there some shader experts out there? I am at my wit's end.
(If you however would like to post a ready-to-use solution, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges), and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x, y, z, material).
The basic theory is described in this paper (p. 13). In this case, instead of using depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
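As a CPU-side reference for what such a post-process filter computes (the real version would live in a pixel shader sampling the MRT texture; the buffer layout, the threshold value, and the method name are assumptions):

// normals: one normalized Vector3 per pixel, row-major; materials: one ID per pixel.
// A pixel is an outline pixel if any 3x3 neighbor differs too much in normal
// direction (silhouette edge) or has a different material ID (material edge).
static bool IsOutlinePixel(Vector3[] normals, int[] materials,
                           int width, int height, int x, int y)
{
    const float NormalThreshold = 0.8f; // cosine of the allowed normal deviation

    Vector3 n = normals[y * width + x];
    int mat = materials[y * width + x];

    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                continue;

            int i = ny * width + nx;
            if (materials[i] != mat)
                return true; // material transition
            if (Vector3.Dot(n, normals[i]) < NormalThreshold)
                return true; // normal discontinuity
        }
    return false;
}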

Drawing massive amounts of textured cubes in XNA 4.0

I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split the texture map I had into separate textures and applied them before drawing each side of every cube. I was told that this is a very expensive approach and that I should keep the textures in one file and simply use UV coordinates to map the subtextures onto the cube. This didn't do much for performance though, since the sheer number of vertices is simply too high. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and it occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Every attempt I've made to have cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists, List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and call RenderShape on the ones that aren't empty (like Air). The shape class that I have is linked below. The commented-out code in the AddVertices method is the attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is copied into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of the code I wrote with a few other devs, called TechCraft:
http://techcraft.codeplex.com
It's free and open source, and it should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is hide the faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices.
_ _ _ _
|_|_| == |_ _|
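A minimal sketch of that shared-face test (IsSolid and AddFaceVertices are hypothetical helpers standing in for your own world storage and mesh building; Face is an assumed enum):

enum Face { Left, Right, Bottom, Top, Back, Front }

// Emit only the faces that border empty space; faces shared by two solid
// cubes are skipped. Run this when (re)baking a region's vertex buffer.
void AddVisibleFaces(int x, int y, int z)
{
    if (!IsSolid(x, y, z)) return;

    if (!IsSolid(x - 1, y, z)) AddFaceVertices(x, y, z, Face.Left);
    if (!IsSolid(x + 1, y, z)) AddFaceVertices(x, y, z, Face.Right);
    if (!IsSolid(x, y - 1, z)) AddFaceVertices(x, y, z, Face.Bottom);
    if (!IsSolid(x, y + 1, z)) AddFaceVertices(x, y, z, Face.Top);
    if (!IsSolid(x, y, z - 1)) AddFaceVertices(x, y, z, Face.Back);
    if (!IsSolid(x, y, z + 1)) AddFaceVertices(x, y, z, Face.Front);
}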
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer, the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all; your shader may be able to calculate the vertex positions as it runs. If they are different sizes or not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader and it works out where to render the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes; anything behind them is simply not visible. A natural approach for this would be an octree data structure, which divides 3D space into voxels (cubes). Using an octree you could quickly determine which cubes are visible and draw just those, so rather than drawing 64x64x64 cubes, you may find you only have to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes does not change much, so you may be able to use this "temporal coherence" to update your data structures and minimise the work needed to decide which cubes are visible.
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing: it allows you to pass one model to the shader along with a stream of world positions, to "instance" that model hundreds (or even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a reusable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc.).
Once you have a base model (it could be one face of the block), its mesh will have an associated texture that you can then replace with whatever you want, which lets you dynamically change the texturing for each side and for differing types of blocks.
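For reference, the core of that sample's draw path looks roughly like this (a sketch under assumptions: cubeVertexBuffer, cubeIndexBuffer, instancingEffect, the vertex/triangle counts, and the Matrix[] instanceTransforms are your own data; hardware instancing also requires the HiDef profile in XNA 4.0):

// Per-instance data: one world matrix per cube, streamed from a second
// vertex buffer (this mirrors the layout used by the MSDN sample above).
var instanceDeclaration = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 0),
    new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 1),
    new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 2),
    new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.BlendWeight, 3));

var instanceBuffer = new DynamicVertexBuffer(
    GraphicsDevice, instanceDeclaration, instanceTransforms.Length, BufferUsage.WriteOnly);
instanceBuffer.SetData(instanceTransforms); // Matrix[] of per-cube world transforms

// Bind the cube mesh in stream 0 and the transforms in stream 1;
// the frequency of 1 advances stream 1 once per instance.
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(cubeVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceBuffer, 0, 1));
GraphicsDevice.Indices = cubeIndexBuffer;

foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawInstancedPrimitives(
        PrimitiveType.TriangleList, 0, 0, cubeVertexCount,
        0, cubeTriangleCount, instanceTransforms.Length);
}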
