Polygon Depth Sorting in C#

I'm writing a game in C#, using OpenGL immediate-mode rendering. Often, transparent polygons do not appear correctly because they are incorrectly sorted. I've been searching a lot but cannot find a tutorial on how to do depth sorting quickly. My attempt was to calculate the depth of each transparent triangle from the camera and sort with List.Sort, but that was incredibly slow (seconds per frame, not frames per second).
Is there a standard way to do depth sorting?
Are there any good tutorials for C# on how to do it?
Is there a fast way to do it?

Order-independent rendering of translucent polygons may be one of the most painful effects to get right in a generic manner. That's why people use various tricks with different tradeoffs between speed and quality. The simplest approach is to render your geometry in two passes:
Render all opaque geometry.
Disable depth writes with GL.DepthMask(false) and render your translucent geometry.
This way, your translucent polygons will be depth-tested against opaque polygons, but will not modify the depth buffer (i.e. they won't be depth-tested against other translucent polygons.)
This is simple, fast and avoids the necessity of sorting polygons. The downside is that it only works for translucent effects that use additive or multiplicative blending (the so-called "commutative" blend modes). For other blending effects, you will either have to sort your translucent polygons, or use a technique such as depth peeling.
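For illustration, here is a minimal OpenTK-flavored sketch of those two passes (DrawOpaque and DrawTranslucent are hypothetical stand-ins for your own geometry submission; enum names assume OpenTK 3-style bindings):

using OpenTK.Graphics.OpenGL;

public void RenderFrame()
{
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    // Pass 1: opaque geometry, with depth testing and depth writes on.
    GL.Enable(EnableCap.DepthTest);
    GL.DepthMask(true);
    DrawOpaque(); // hypothetical helper

    // Pass 2: translucent geometry. Depth testing stays on, so fragments
    // behind opaque geometry are discarded, but depth writes are off, so
    // translucent polygons do not occlude one another.
    GL.Enable(EnableCap.Blend);
    GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.One); // additive, a "commutative" mode
    GL.DepthMask(false);
    DrawTranslucent(); // hypothetical helper

    // Restore state for the next frame.
    GL.DepthMask(true);
    GL.Disable(EnableCap.Blend);
}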
References:
http://www.openglsuperbible.com/2013/08/20/is-order-independent-transparency-really-necessary/
https://gamedev.stackexchange.com/questions/43635/what-is-the-order-less-rendering-technique-that-allows-partial-transparency
https://developer.nvidia.com/content/interactive-order-independent-transparency
http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf

Related

OpenGL Transparent Cube missing Edge

In OpenGL I have drawn a transparent cube in an orthogonal projection; the environment has a front light.
The result is shown in the figure.
What I don't understand is: why is there a missing edge?
gl.Enable(OpenGL.GL_BLEND);
gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA);
I use a DrawElements() function that uses indices.
Are there any suggestions?
This is caused by depth testing. OpenGL renders one triangle (or square) at a time. When depth testing is turned on, it skips rendering a pixel if it's already rendered something in front of that pixel. This is good for solid objects, but it doesn't work for transparent ones, because the back parts have to be rendered first or else they don't get rendered at all.
There are many ways to do transparency, but none of them are particularly nice. Unfortunately, whichever way you slice it, transparent objects are just not as easy to render as opaque ones.
So here are some ways to render transparent things:
Sort the faces and render back-to-front (see the sorting sketch after this list).
Sort the faces and render front-to-back, so the back is invisible.
Use face culling and render twice: cull front faces and render, then cull back faces and render. This gives the same effect as back-to-front by getting OpenGL to do it for you. Only works for convex objects (cubes are convex) and if you have more than one transparent object you still have to sort the objects.
Use face culling to cull back faces. This gives the same effect as the second option (the back faces are simply never drawn), by getting OpenGL to do it for you. Same caveats as the previous one.
Use a different blending mode where the rendering order doesn't matter, such as multiplicative or additive, and turn depth testing off. Multiplicative blending removes light without adding it - it looks like cellophane instead of stained glass, so you'd need a white background. Additive blending looks like one of those sci-fi spaceship control screens - it makes its own light, and you can also see through it.
Depth peeling and linked list buffers are two techniques which do separate sorting for each pixel, but they require more intense processing and very complicated shaders.
Raytracing (enough said)
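For the first two options, here is a minimal sketch of the sorting step (a hypothetical Triangle type with a precomputed centroid; computing each key once and co-sorting two arrays avoids most of the overhead that makes a naive comparison-delegate List.Sort feel slow):

using System;
using OpenTK; // Vector3

// Hypothetical triangle record; Centroid is computed once when the mesh is built.
class Triangle
{
    public Vector3 Centroid;
    // ... vertex positions, colors, etc. ...
}

static void SortBackToFront(Triangle[] tris, Vector3 cameraPos)
{
    // Negated squared distance as the key: ascending Array.Sort then
    // yields farthest-first order, and no square root is needed.
    var keys = new float[tris.Length];
    for (int i = 0; i < tris.Length; i++)
        keys[i] = -(tris[i].Centroid - cameraPos).LengthSquared;
    Array.Sort(keys, tris); // co-sorts tris by keys, O(n log n)
}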
If all the faces are the same colour, you can just turn depth testing off and it will look okay. Since the faces are all the same you can't tell they are rendered in the wrong order. This works for your red cube but it won't work if you add different colours or a texture.
To properly render a cube with a translucent interior, not just a translucent surface, you need a volumetric translucency effect which is also more complicated and out of scope here. You would render the back, the front, and apply a different amount of translucency depending on the distance between them.
I resolved it, thanks to user253751, by adding these two instructions:
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_ALWAYS);

WinForms Draw parts of image at a rotated rectangle frame

I'm working on image transitions for my digital photo frame and am trying to achieve this transition:
It's more of a radar-style transition, with the wiping effect sweeping from one side to the other through a 180-degree angle. It doesn't actually appear that "blocky"; I just spaced out the rectangles for illustration purposes. The entire thing should be a smooth transition without any FPS stutter.
My idea is to draw the specific part of the image at a rotation angle (theta), like in my drawing above - but that would end up drawing hundreds of rectangles sweeping across the screen.
Is there a more efficient way to do this? If not, could I have a few code tips to point me in the right direction?
It is practically impossible to avoid FPS stutter entirely, especially on bigger screens, because WinForms uses CPU-only rendering. You will have to embed an OpenTK control (if you want to use OpenGL) or a Direct3D surface inside your form, or maybe use WPF, and do the animation there.
If you use any of these (for example OpenGL), you load the image as a texture, and the animation is then done at the triangle level (by dragging the corners only), not on the image itself.
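As a sketch of what "dragging the corners" means (hypothetical immediate-mode OpenTK code; a simple left-to-right wipe rather than the radar sweep, which would rotate the moving edge around a pivot instead):

using OpenTK.Graphics.OpenGL;

// The photo is one textured quad; the transition is animated by moving the
// quad's corners, not by copying image pixels. 't' runs 0..1 over the wipe.
static void DrawWipeQuad(int textureId, float t, float width, float height)
{
    float x = width * t; // current position of the sweeping edge

    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, textureId);
    GL.Begin(PrimitiveType.Quads);
    GL.TexCoord2(0f, 0f); GL.Vertex2(0f, 0f);     // top-left, fixed
    GL.TexCoord2(t, 0f);  GL.Vertex2(x, 0f);      // top-right, animated
    GL.TexCoord2(t, 1f);  GL.Vertex2(x, height);  // bottom-right, animated
    GL.TexCoord2(0f, 1f); GL.Vertex2(0f, height); // bottom-left, fixed
    GL.End();
}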
If you want a curved surface, like a real page transition, I recommend using a Bezier patch, as found here: http://nehe.gamedev.net/tutorial/bezier_patches__fullscreen_fix/18003/
This takes a lot of coding time, and is well beyond the scope of Stack Overflow (setting up a full OpenGL/DirectX control, plus a Bezier patch if you want one).
If you don't want to embed anything, you may look at this transformations tutorial using WPF, but I'm not 100% sure it is what you need:
http://www.codeproject.com/Articles/14895/WPF-Tutorial-Part-Transformations

3D Z-buffer in C# with GDI+

I'm writing a very simple 3D engine in C# and GDI+, just to render some models (I think DirectX or OpenGL would be like using a shovel to eat soup). So far I have successfully implemented drawing the wireframe of my model, but the next step is of course faces. And there is my problem: for now I just project my 3D points to 2D points and then draw each face with a simple
g.DrawPolygon(Pens.Red, projected_points);
which is fine for the wireframe.
Is it possible to calculate the overlapping parts of the polygons and then draw them with FillPolygon?
Or is the better idea to draw pixel by pixel, and overwrite a pixel only when the value already in the Z-buffer is further away?
If the first option is possible, which one is faster (to implement and to compute)?
Yes, it is possible. You can test every polygon against every other one in your list. The complexity depends on the type of polygon (it's easiest with triangles, of course), but performance may drop drastically with a high polygon count. And even if you find the overlapping areas, you will need to interpolate colors or texture coordinates (if you plan to use them). Also, GDI+ doesn't support filling a polygon with interpolated colors.
I have heard that this was the approach used in 3D graphics before the Z-buffer was invented. :)
I once attempted a similar project and used a Z-buffer plus my own routine to fill triangles with interpolated colors (consulting the Z-buffer per pixel). I drew directly into a GDI+ bitmap's pixel data buffer. Then, after all polygons had been rendered, I blitted the result to the screen.
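A minimal sketch of that kind of Z-buffered plotting into a GDI+ bitmap (names and structure are illustrative, not the original routine; the triangle rasterizer that produces the x, y, z values is omitted):

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Plot one pixel through the Z-buffer. 'pixels' is a width*height ARGB array,
// 'zbuf' is the same size and reset to float.PositiveInfinity each frame.
static void PutPixel(int[] pixels, float[] zbuf, int width,
                     int x, int y, float z, int argb)
{
    int i = y * width + x;
    if (z < zbuf[i]) // nearer than what is already stored?
    {
        zbuf[i] = z;
        pixels[i] = argb;
    }
}

// After all triangles are rasterized, copy the pixel array into the bitmap
// in one go; the bitmap is then drawn (blitted) to the screen.
static void Present(Bitmap bmp, int[] pixels)
{
    var data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                            ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
    bmp.UnlockBits(data);
}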

How to render specific edges of a cube differently from the filling in XNA? (MonoGame)

I am working with C#, Monogame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cube with another shader than the filling. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within the cube-object. Here is a small painting to make clear what I want to do (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader, and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with this approach.
I tried post-effects like the edge detection of comic shaders, but with that approach I am not able to render only specific edges. Besides that, if two cubes are next to each other, the shader does not recognize the edges between them.
I am not searching for a ready-to-use solution from you but I would be glad to get some tips/approaches/tutorials/similar projects/etc on how to achieve my goal. Are there some shader experts out there? I am at my wit's end.
(If you however would like to post a ready-to-use solution, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges) and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
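A sketch of that filter logic, written here as plain C# over CPU-side arrays just to show the two tests; in practice it would run in a pixel shader over the MRT render targets (all names are illustrative):

using Microsoft.Xna.Framework; // Vector3

// normals and materialIds are the per-pixel buffers from the MRT pass.
// Returns true if pixel (x, y) belongs on an outline. The caller is
// assumed to skip the one-pixel border so the 3x3 window stays in bounds.
static bool IsOutline(Vector3[,] normals, int[,] materialIds, int x, int y)
{
    const float normalThreshold = 0.8f; // tune: lower catches more silhouette edges

    Vector3 n = normals[x, y];
    int id = materialIds[x, y];

    for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++)
        {
            // Large normal discontinuity -> silhouette edge.
            if (Vector3.Dot(n, normals[x + dx, y + dy]) < normalThreshold)
                return true;
            // Material/object ID transition -> edge between two cubes.
            if (materialIds[x + dx, y + dy] != id)
                return true;
        }
    return false;
}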

OpenGL - "fullscreen" texture rendering performance issue

I am writing a 2D game in OpenGL and I ran into some performance problems while rendering a couple of textures covering the whole window.
What I actually do is create a texture the size of the screen, render my scene onto that texture using an FBO, and then render the texture a couple of times with different offsets to get a kind of "shadow" going. But when I do that I get a massive performance drop while using my integrated video card.
So all in all I render 7 quads covering the whole screen (a background image, 5 "shadow" images with a black "tint", and the same texture in its true colors). I am using RGBA textures which are 1024x1024 in size and fit them into a 900x700 window. I get 200 FPS when I am not rendering the textures and 34 FPS when I do (in both scenarios I still create the texture and render the scene onto it). I find this quite odd because I am essentially only rendering 7 quads. A strange thing is also that when I run a CPU profiler it doesn't suggest that this is the bottleneck (I know that OpenGL uses a pipelined architecture and this can happen, but most of the time it doesn't).
When I use my external video card I get a consistent 200 FPS in the tests above. But when I disable both the scene rendering onto the texture and the texture rendering onto the screen I get ~1000 FPS. This happens only with my external video card - when I disable the FBO using the integrated one I get the same 200 FPS. This really confuses me.
Can anyone explain what's going on and if the above numbers sound right?
Integrated video card - Intel HD Graphics 4000
External video card - NVIDIA GeForce GTX 660M
P.S. I am writing my game in C# - so I use OpenTK if that is of any help.
Edit:
First of all, thanks for all the responses - they were all helpful in a way, but unfortunately I think there is a bit more to it than just "simplify/optimize your code". Let me share some of my rendering code:
//fields defined when the program is initialized
Rectangle viewport;
//Texture with the size of the viewport
Texture fboTexture;
FBO fbo;

//called every frame
public void Render()
{
    //Attach the texture to the fbo
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo.handle);
    GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
        FramebufferAttachment.ColorAttachment0,
        TextureTarget.Texture2D, fboTexture.TextureID, level: 0);

    //Begin rendering in Ortho 2D space
    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();
    GL.LoadIdentity();
    GL.Ortho(viewport.Left, viewport.Right, viewport.Top, viewport.Bottom, -1.0, 1.0);

    GL.MatrixMode(MatrixMode.Modelview);
    GL.PushMatrix();
    GL.LoadIdentity();

    GL.PushAttrib(AttribMask.ViewportBit);
    GL.Viewport(viewport);

    //Render the scene - this is really simple, I render some quads using shaders
    RenderScene();

    //Back to perspective
    GL.PopAttrib(); // pop viewport
    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Modelview);
    GL.PopMatrix();

    //Detach the texture
    GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
        FramebufferAttachment.ColorAttachment0,
        TextureTarget.Texture2D, 0, level: 0);
    //Unbind the fbo
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);

    GL.PushMatrix();
    GL.Color4(Color.Black.WithAlpha(128)); //Sets the color to (0,0,0,128) in RGBA
    for (int i = 0; i < 5; i++)
    {
        GL.Translate(-1, -1, 0);
        //Simple Draw method which binds the texture and draws a quad
        //at (0;0) with its size
        fboTexture.Draw();
    }
    GL.PopMatrix();

    GL.Color4(Color.White);
    fboTexture.Draw();
}
So I don't think there is actually anything wrong with the FBO and rendering onto the texture, because it is not what slows the program down on either of my cards. Previously I was initializing the FBO every frame, and that might have been the reason my NVIDIA card slowed down, but now that I pre-initialize everything I get the same FPS both with and without the FBO.
I don't think the problem is with the textures in general either, because if I disable texturing and just render the untextured quads I get the same result. And I still think my integrated card should run faster than 40 FPS when rendering only 7 quads on the screen, even if they cover the whole of it.
Can you give me some tips on how I can actually profile this and post back the results? That would be really useful.
Edit 2:
OK, I experimented a bit and managed to get much better performance. First I tried rendering the final quads with a shader - as I expected, this had no impact on performance.
Then I tried to run a profiler. But as far as I know SlimTune is a CPU-only profiler, and it didn't give me the results I wanted. Then I tried gDEBugger. It has Visual Studio integration, which I later found out does not support .NET projects. I tried the standalone version, but it didn't seem to work (though maybe I just haven't played with it enough).
The thing that really did the trick: rather than rendering the 7 quads directly to the screen, I first render them onto a texture, again using an FBO, and then render that final texture once onto the screen. This took my FPS from 40 to 120. Again, this seems curious to say the least. Why is rendering to a texture so much faster than rendering directly to the screen? Nevertheless, thanks for the help everyone - it seems I have fixed my problem. I would really appreciate it if someone could come up with a reasonable explanation.
Obviously this is a guess, since I haven't seen or profiled your code, but I would guess that the integrated card is simply struggling with your post-processing (drawing the texture several times to achieve your "shadow" effect).
I don't know your level of familiarity with these concepts, so sorry if I'm a bit verbose here.
About Post-Processing
Post-processing is the process of taking your completed scene, rendered to a texture, and applying effects to the image before displaying it on the screen. Typical uses of post-processing include:
Bloom - Simulate brightness more naturally by "bleeding" bright pixels into neighboring darker ones.
High Dynamic Range rendering - Bloom's big brother. The scene is rendered to a floating-point texture, allowing greater color ranges (as opposed to the usual 0 for black and 1 for full brightness). The final colors displayed on the screen are calculated using the average luminance of all the pixels on the screen. The effect of all of this is that the camera acts somewhat like the human eye - in a dark room, a bright light (say, through a window) looks extremely bright, but once you get outside, the camera adjusts and light only looks that bright if you stare directly at the sun.
Cel-shading - Colors are modified to give a cartoon-like appearance.
Motion blur
Depth of field - The in-game camera approximates a real one (or your eyes), where only objects at a certain distance are in-focus and the rest are blurry.
Deferred shading - A fairly advanced application of post-processing where lighting is calculated after the scene has been rendered. This costs a lot of video RAM (it usually uses several fullscreen textures) but allows a large number of lights to be added to the scene quickly.
In short, you can use post-processing for a lot of neat tricks. Unfortunately...
Post Processing Has a Cost
The cool thing about post-processing is that its cost is independent of the scene's geometric complexity - it takes the same amount of time whether you drew a million triangles or a dozen. That's also its drawback, however. Even though you're only rendering a quad over and over to do post-processing, there is a cost for rendering each pixel it covers; seven passes over a 900x700 window, for instance, come to roughly 4.4 million textured, blended pixels per frame. If you were to use a larger texture, the cost would be larger.
A dedicated graphics card obviously has far more computing resources to apply to post-processing, whereas an integrated card usually has far fewer. It is for this reason that "low" graphics settings in video games often disable many post-processing effects. This wouldn't show up as a bottleneck in a CPU profiler because the delay happens on the graphics card. The CPU is waiting for the graphics card to finish before continuing your program (or, more accurately, the CPU is running another program while it waits for the graphics card to finish).
How Can You Speed Things Up?
Use fewer passes. If you halve the passes, you halve the time it takes to do post-processing. To that end,
Use shaders. Since I didn't see you mention them anywhere, I'm not sure if you're using shaders for your post-processing. Shaders essentially allow you to write a function in a C-like language (since you're in OpenGL, you can use either GLSL or Cg) which is run on every rendered pixel of an object. They can take any parameters you like, and are extremely useful for post-processing. You set the quad to be drawn using your shader, and then you can insert whatever algorithm you'd like to be run on every pixel of your scene.
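As a sketch of what that buys you here: the five offset "shadow" draws from the question could collapse into one fullscreen pass with a fragment shader along these lines (GLSL held in a C# string for GL.ShaderSource; uniform names and the compositing math are illustrative assumptions, not drop-in code):

// Collapse the five "shadow" quads into a single fullscreen pass.
const string ShadowCompositeFrag = @"
#version 120
uniform sampler2D scene;      // the FBO texture the scene was rendered into
uniform vec2 texelOffset;     // one shadow step, in texture coordinates

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    vec4 img = texture2D(scene, uv);

    // Build a shadow mask from five offset taps of the scene's alpha,
    // replacing the five separate black-tinted quad draws.
    float shadow = 0.0;
    for (int i = 1; i <= 5; i++)
        shadow = max(shadow, texture2D(scene, uv - float(i) * texelOffset).a);

    // Composite the image over a half-opaque black drop shadow.
    vec3 rgb = mix(vec3(0.0), img.rgb, img.a);
    gl_FragColor = vec4(rgb, max(img.a, 0.5 * shadow));
}";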
Seeing some code would be nice. If the only difference between the two runs is using the external GPU or not, the difference could be in memory management (i.e. how and when you're creating the FBO, etc.), since streaming data to the GPU can be slow. Try moving anything that creates an OpenGL buffer, or sends data to one, into initialization. I can't really give more detailed advice without seeing exactly what you're doing.
It isn't just about the number of quads you render; I believe in your case it has more to do with the amount of triangle filling your video card has to do.
As was mentioned, the common way to do fullscreen post-processing is with shaders. If you want better performance on your integrated card and can't use shaders, then you should simplify your rendering routine.
Make sure you really need alpha blending. On some cards/drivers, rendering textures with an alpha channel can significantly reduce performance.
A somewhat low-quality way to reduce the amount of fullscreen filling would be to first perform all of your shadow draws onto another, smaller texture (say, 256x256 instead of 1024x1024), then draw one quad with that composite shadow texture onto your buffer. This way, instead of seven 1024x1024 quads you would only need six 256x256 ones and one 1024x1024. But you lose resolution.
Another technique, and I'm not sure it can be applied in your case, is to pre-render your complex background so that you have less drawing to do in your rendering loop.
