I'm currently creating a large map that consists of a lot of rectangles (33,844), each with a unique name (label) that I draw on top of it using a SpriteFont.
Drawing all of the rectangles causes no performance hit at all, but as soon as I try to draw all of their labels with DrawString(), my performance goes into the dumps.
In my head, I would like to draw all my rectangles and text to one texture all at once, and then only have to keep redrawing that one finished texture. My issue is that this is an enormous map, and some of the coordinates for the rectangles are very high (for example, one slot's x is 14869 and y is 23622), far bigger than a Texture2D allows.
Since this is a map, I really only need to draw the entire thing once, and then allow the user to scroll/move around it. There's no need for me to continually redraw all of the individual rectangles and their labels.
Does anyone have experience with this type of situation?
Try to render only the labels that are visible on screen, and if the user can zoom out far enough, just don't render them at all.
Text rendering is expensive, since it basically creates a rectangle (a quad) for every character you draw and then maps the matching region of the font's RGBA texture onto it. The number of quads grows with the number of characters you write, which means four new vertices per character.
Depending on what you write, you could simply create a texture with the text already on it and render that, but it won't be very dynamic.
EDIT: I need to clarify something.
There's no need for me to continually redraw all of the individual rectangles and their labels.
This is wrong. You have to draw the whole thing every frame. Sure, it doesn't grow memory-wise, but it is still a lot to render, and you will need to render it every frame.
But as I said: try to render only the labels and rectangles that collide with the screen boundaries, then you should be fine.
There are two ways to solve your problem.
You can either render your map to a RenderTarget(2D/3D), or you can cull the rectangles/text that are off-screen. I am not 100% sure that RenderTargets can go as large as you would need, but you could always segment your map into multiple smaller RenderTargets.
For more information on RenderTargets, you might want to check out RB Whitaker's article on them: http://rbwhitaker.wikidot.com/render-to-texture
Culling, in case you are not familiar with the term as used in this context, means rendering only what is visible to the end user. There are various ways culling can be implemented. It does, however, require you to have already implemented a camera (or some kind of view region): you perform a basic axis-aligned bounding box test (AABB collision, which MonoGame's Rectangle supports out of the box) of each rectangle against the camera's viewport, and only render the rectangle if there is a collision.
A basic implementation of culling would look something like this:
Rectangle myRect = new Rectangle(100, 100, 48, 32);

public void DrawMapItem(SpriteBatch batch, Rectangle viewRegion)
{
    // Intersects (rather than Contains) also keeps objects that are only
    // partially inside the view region.
    if (viewRegion.Intersects(myRect))
    {
        // Render your object here with the SpriteBatch
    }
}
Where 'viewRegion' is the area of your world that the camera/end-user can actually see.
You can also combine the two methods, and render the map to multiple render targets, and then cull the render targets.
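As a concrete illustration of the segmenting idea, here is a minimal sketch of baking one segment into a RenderTarget2D in XNA/MonoGame. segmentSize is a placeholder; keep it within your profile's texture limit (2048 for Reach, 4096 for HiDef).

RenderTarget2D segment = new RenderTarget2D(GraphicsDevice, segmentSize, segmentSize);

// Bake once (or whenever the segment's contents change).
GraphicsDevice.SetRenderTarget(segment);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
// ... draw the rectangles and DrawString labels that belong to this segment,
//     offset so the segment's top-left corner lands at (0, 0) ...
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);

// Every frame afterwards: draw 'segment' as a single texture,
// and cull segments that fall outside the view region.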
Hope this helps!
I can't post images because I don't have enough reputation, so only links are provided.
I'm using OpenTK to render a 3D model (importing it works fine). The model was made in SketchUp, and the edge lines are also exported. When I try to render it, the lines are drawn over all of the polygons, even where they should be behind them and therefore invisible. I have deduced that this is probably because I call GL.End(), then switch to line mode and (probably) render over the existing image.
Is there a way to have both lines and triangles rendered simultaneously (with correct depth and overlap)? If not, how should I draw the lines? Can I draw them as triangles?
In my renderer: http://i.stack.imgur.com/oC9Ux.png
In SketchUp (how I'm trying to make it look): http://i.stack.imgur.com/Jmu1S.png
My rendering code (world is the imported COLLADA data, geometryLines are the edge lines being rendered, and world.triangles are the triangulated faces):
// Smoothing state must be set outside GL.Begin/GL.End; calling GL.Enable or
// GL.Hint between Begin and End is invalid in OpenGL.
GL.Enable(EnableCap.LineSmooth);
GL.Hint(HintTarget.LineSmoothHint, HintMode.Nicest);

// Draw the triangulated faces.
GL.Begin(PrimitiveType.Triangles);
GL.Color3(Color.Red);
foreach (Triangle tri in world.triangles)
{
    GL.Vertex3(tri.vertices[0]);
    GL.Vertex3(tri.vertices[1]);
    GL.Vertex3(tri.vertices[2]);
}
GL.End();

// Draw the edge lines.
GL.Begin(PrimitiveType.Lines);
GL.Color3(Color.Blue);
foreach (GeometryLine line in world.geometryLines)
{
    GL.Vertex3(line.vertices[0]);
    GL.Vertex3(line.vertices[1]);
}
GL.End();

SwapBuffers();
SwapBuffers();
(responding to your update)
The issue you're seeing with lines not fully appearing over the geometry you've rendered is a typical case of Z-fighting. The depth buffer records the depth of the closest geometry that was rendered at each pixel. When new geometry (your lines) is rendered above existing geometry (your triangles), the new pixels are only kept if their depth is less than what was recorded in the depth buffer. However, when you render two objects at more or less exactly the same depth (such as your lines and the edges of your triangles), which geometry is closer or farther at a given pixel becomes somewhat arbitrary due to floating-point errors.
The usual way to work around this issue is to use a hack called depth biasing. You start by drawing your triangle-based geometry, then you draw your lines while instructing OpenGL to offset their depth a tiny bit towards the viewer. This is achieved using the glPolygonOffset function. The result is that the lines pass the depth test and are drawn above the triangles.
This is the technique used for decals such as bullet holes on walls. Note that you may want to disable depth writing when drawing decals since you usually consider that they are at the same depth as the wall itself, and want to avoid Z-fighting between multiple decals.
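One caveat: glPolygonOffset only affects polygon primitives, not GL_LINES, so in practice the bias is usually applied the other way around: push the filled triangles slightly away from the viewer and draw the lines unmodified. A minimal OpenTK sketch of that variant:

// Push the filled triangles back a tiny bit in depth...
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(1.0f, 1.0f);   // (factor, units): a small bias away from the eye
// ... render the triangles here ...
GL.Disable(EnableCap.PolygonOffsetFill);
// ... then render the lines with no offset; they now win the depth test
//     along the shared edges.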
I am working with C#, MonoGame and XNA 4.0. In my scene I have a lot of cubes. Some are connected, some are not. I would like to render the edges of the cubes with a different shader than the filling. Besides that, I would like to render the outer edges of connected cubes in another color (or thicker) than the edges within a cube object. Here is a small painting to make clear what I want (sorry for my bad painting skills, but I think you will get it).
I know how to render a cube with a specific shader, and I am also able to render the wireframe, but I was not able to combine both methods. Besides that, the outer lines cannot be rendered differently with this approach.
I tried post-effects like the edge detection of comic (toon) shaders, but with this approach I am not able to render only specific edges. Besides that, if two cubes are next to each other, the shader does not recognize the edges between them.
I am not searching for a ready-to-use solution from you, but I would be glad to get some tips/approaches/tutorials/similar projects/etc. on how to achieve my goal. Are there any shader experts out there? I am at my wit's end.
(If, however, you would like to post a ready-to-use solution, I would not be miffed :D)
It is a shame you're not using deferred shading; this would be pretty straightforward to implement if you were.
If you can access the normal and material for each pixel on screen through a texture lookup, you can easily post-process this. You could use a 3x3 filter kernel and search for sufficiently large normal discontinuities (this would catch silhouette edges), and also search for pixels that lie on the transition between material IDs (this would catch the edges between blue and orange cubes). If your filter neighborhood satisfies either of these two conditions, draw a black pixel to form the outline.
You should be able to do this if you use MRT rendering when you draw your cubes, and encode the normal + material ID into an RGBA texture (x,y,z,material).
The basic theory is described in this paper (p. 13). In this case, instead of using depth as the secondary characteristic for outlining, you would use the material (or object ID, if you want EVERY cube to have an outline).
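For reference, a rough sketch of what the MRT setup could look like in XNA 4.0 (HiDef profile required; the names and the shader that fills the second target are placeholders, not code from the paper):

// Color in target 0; normal.xyz + material ID packed into target 1.
RenderTarget2D colorTarget = new RenderTarget2D(GraphicsDevice, width, height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);
RenderTarget2D normalMaterialTarget = new RenderTarget2D(GraphicsDevice, width, height,
    false, SurfaceFormat.Color, DepthFormat.Depth24);

GraphicsDevice.SetRenderTargets(colorTarget, normalMaterialTarget);
// ... draw the cubes with a shader that outputs (normal.xyz, materialId) ...
GraphicsDevice.SetRenderTarget(null);

// ... then draw a full-screen quad that samples normalMaterialTarget with a
//     3x3 kernel and outputs black wherever it finds a discontinuity ...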
I have a map containing many objects in an area sized 5000*5000.
My screen size is 800*600.
How can I scroll my map? I don't want to move all my objects left and right; I want the "camera" to move. Unfortunately, I haven't found any way to do that.
Thanks
I think you are looking for the transformMatrix parameter to SpriteBatch.Begin (this overload).
You say you don't want the objects to move, but you want the camera to move. But, at the lowest level, in both 2D and 3D rendering, there is no concept of a "camera". Rendering always happens in the same region - and you must use transformations to place your vertices/sprites into that region.
If you want the effect of a camera, you have to implement it by moving the entire world in the opposite direction.
Of course, you don't actually store the moved data. You just apply an offset when you render the data. Emartel's answer has you do that for each sprite. However, using a matrix is cleaner, because you don't have to duplicate the code for every single Draw - you just let the GPU do it.
To finish with an example: Say you want your camera placed at (100, 200). To achieve this, pass Matrix.CreateTranslation(-100, -200, 0) to SpriteBatch.Begin.
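A minimal sketch, using the overload linked above:

// Camera at (100, 200): shift the whole world the opposite way.
Matrix view = Matrix.CreateTranslation(-100, -200, 0);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    null, null, null, null, view);
// ... draw every sprite at its world position; the matrix moves it on screen ...
spriteBatch.End();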
(Performing a frustum cull yourself, as per emartel's answer, is probably a waste of time, unless your world is really huge. See this answer for an explanation of the performance considerations.)
Viewport
You start by creating your camera viewport. In the case of a 2D game, it can be as simple as defining the top-left position where you want to start rendering and expanding it by your screen resolution, in your case 800x600.
Rectangle viewportRect = new Rectangle(viewportX, viewportY, screenWidth, screenHeight);
Here's an example of what your camera would look like if it were offset by (300, 700) (the drawing is very approximate; it's just to give you a better idea).
Visibility Check
Now, you want to find every sprite that intersects the red square, which can be understood as your viewport. This could be done with something similar to the following (untested code, just a sample of what it could look like):
List<GameObject> objectsToBeRendered = new List<GameObject>();

foreach (GameObject obj in allGameObjects)
{
    Rectangle objectBounds = new Rectangle(obj.X, obj.Y, obj.Width, obj.Height);
    // XNA's Rectangle exposes Intersects, not IntersectsWith
    if (viewportRect.Intersects(objectBounds))
    {
        objectsToBeRendered.Add(obj);
    }
}
Here's what it would look like graphically; the green sprites are the ones added to objectsToBeRendered. Adding the objects to a separate list makes it easy to sort them from back to front before rendering them!
Rendering
Now that we have found which objects intersect the viewport, we need to figure out where on the screen they will end up.
spriteBatch.Begin();
foreach (GameObject obj in objectsToBeRendered)
{
    Vector2 pos = new Vector2(obj.X - viewportX, obj.Y - viewportY);
    spriteBatch.Draw(obj.GetTexture(), pos, Color.White);
}
spriteBatch.End();
As you can see, we subtract the viewport's X and Y position from the object's world position to bring it into screen coordinates within the viewport. This means that a small square at (400, 800) in world coordinates would be rendered at (100, 100) on the screen, given the viewport we have here.
Edit:
While I agree with the change of "correct answer", keep in mind that what I posted here is still very useful when deciding which animations to process, which AIs to update, etc. Letting the camera and the GPU do the work alone prevents you from knowing which objects were actually on screen!
Currently, as a trail effect in my game, every 5 frames a translucent texture copy of a sprite is added to a List<> of trails.
The alpha values of these trails are decremented every frame, and a draw function iterates through the list and draws each texture. Once they hit 0 alpha, they are removed from the List<>.
The result is a nice little trail effect behind moving entities. The problem is that with 100+ entities, the frame rate begins to drop drastically.
All trail textures come from the same sprite sheet, so I don't think it's a batching issue. I profiled the code, and the CPU load is lower during the FPS drop spikes than it is at normal FPS, so I assume that means it's a GPU limitation?
Is there any way to achieve this effect more efficiently?
Here's the general code I'm using:
// fade alpha
m_alpha -= (int)(gameTime.ElapsedGameTime.TotalMilliseconds / 10.0f);

// draw
if (m_alpha > 0)
{
    // p is used to alter the RGB of the trail's color (m_tint) depending on the alpha value
    float p = (float)m_alpha / 255.0f;
    Color blend = new Color((int)(m_tint.R * p), (int)(m_tint.G * p), (int)(m_tint.B * p), m_alpha);

    // draw texture to sprite batch
    Globals.spriteBatch.Draw(m_texture, getOrigin(), m_rectangle, blend, getAngle(),
        new Vector2(m_rectangle.Width / 2, m_rectangle.Height / 2), m_scale, SpriteEffects.None, 0.0f);
}
else
{
    // flag for removal from the List<>
    m_isDone = true;
}
I should note that the m_texture given to the trail class is a reference to a global texture shared by all trails; I'm not creating a hard copy for each trail.
EDIT: If I simply comment out the SpriteBatch.Draw call, there is no drop in frames, even when I'm allocating a new trail every single frame for hundreds of objects... there has got to be a better way to do this.
Usually for trails, instead of clearing the screen on every frame, you simply draw a semi-transparent screen-sized rectangle before drawing the current frame. Thus the previous frame is "dimmed" or "color blurred", while the newer frame is fully "clear" and "bright". As this is repeated, a trail is generated from all the previous frames, which are never cleared but rather "dimmed".
This technique is VERY efficient and it is used in the famous Flurry screensaver (www.youtube.com/watch?v=pKPEivA8x4g).
In order to make the trails longer, you simply increase the transparency of the rectangle that you use to clear the screen; conversely, you make it more opaque to make the trails shorter. Note, however, that if you make the trails too long by making the rectangle too transparent, you risk leaving faint traces of the trail that, due to alpha blending, might never completely fade even after a long time. The Flurry screensaver suffers from this kind of artifact, but there are ways to compensate for it.
Depending on your situation, you might have to adapt the technique. For instance, you might want to have several drawing layers that allow certain objects to leave a trail while others don't generate trails.
This technique is much more efficient for long trails than redrawing a sprite thousands of times, as in your current approach.
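In XNA specifically, the default backbuffer is discarded between frames, so one way to realize this (a sketch under that assumption; whitePixel is a hypothetical 1x1 white Texture2D) is to accumulate into a render target created with RenderTargetUsage.PreserveContents:

// Created once, e.g. in LoadContent:
// accumTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
//     SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.PreserveContents);

GraphicsDevice.SetRenderTarget(accumTarget);
spriteBatch.Begin();
// Dim everything drawn so far instead of clearing; lower alpha = longer trails.
spriteBatch.Draw(whitePixel, new Rectangle(0, 0, accumTarget.Width, accumTarget.Height),
    new Color(0, 0, 0, 32));
// ... draw this frame's entities here ...
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);

// Present the accumulated image.
spriteBatch.Begin();
spriteBatch.Draw(accumTarget, Vector2.Zero, Color.White);
spriteBatch.End();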
On the other hand, I think the bottleneck in your code is the following line:
Globals.spriteBatch.Draw(m_texture, getOrigin(), m_rectangle, blend, getAngle(), new Vector2(m_rectangle.Width/2, m_rectangle.Height/2), m_scale, SpriteEffects.None, 0.0f);
It is inefficient to issue thousands of Draw() calls. It would be more efficient to keep a list of polygons in a buffer, where each polygon is located at the correct position and has its transparency information stored with it. Then, with a SINGLE draw call, you can render all of the polygons with the correct texture and transparency. Sorry I cannot provide you with code for this, but if you want to continue with your approach, this might be the direction you are headed. In short, your GPU can certainly draw millions of polygons at a time, but it can't handle that many individual draw calls...
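A rough sketch of that buffered direction (my addition, not the answerer's code; trails, sharedTrailTexture, and the fill step are placeholders):

// Accumulate all trail quads into one array and issue a single draw call.
VertexPositionColorTexture[] verts = new VertexPositionColorTexture[trails.Count * 4];
short[] indices = new short[trails.Count * 6];
// ... fill verts/indices: 4 corners and 2 triangles per trail, with the faded
//     alpha stored in each vertex's Color ...

// effect is a BasicEffect with TextureEnabled = true and VertexColorEnabled = true.
effect.Texture = sharedTrailTexture;
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList, verts, 0, verts.Length,
        indices, 0, indices.Length / 3);
}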
I am trying to write a custom Minecraft Classic multiplayer client in XNA 4.0, but I am completely stumped when it comes to actually drawing the world in the game. Each block is a cube in 3D space, and it is possible for it to have different textures on each side. I have been reading around the Internet, and found out that for a cube to have a different texture on each side, each face needs its own set of vertices. That makes a total of 24 vertices for each cube, and if you have a world that consists of 64*64*64 cubes (or possibly even more!), that makes a lot of vertices.
In my original code, I split up the texture map I had into separate textures and applied these before drawing each side of every cube. I was told that this is a very expensive approach, and that I should keep the textures in the same file and simply use the UV coordinates to map certain subtextures onto the cube. This didn't do much for performance though, since the sheer number of vertices is simply too high. I was also told to collect the vertices in a VertexBuffer and draw them all at once, but this didn't help much either, and it occasionally causes an exception when the number of vertices exceeds the maximum size of the buffer. Any attempt I've made to have cubes share vertices has also failed, resulting in massive slowdown and glitchy cubes.
I have no idea what to do with this. I am pretty good at programming in general, but any kind of 3D programming or game development completely escapes me.
Here is the method I use to draw the cubes. I have two global lists, List<VertexPositionTexture> and List<int>, one for vertices and one for indices. When drawing, I iterate through all of the cubes in the world and call RenderShape on the ones that aren't empty (like Air). The shape class I have is pasted below. The commented-out code in the AddVertices method is my attempt to make cubes share vertices. When all of the cubes' vertices have been added to the list, the data is copied into a VertexBuffer and IndexBuffer, and DrawIndexedPrimitives is called.
To be honest, I am probably doing it completely wrong, but I really have no idea how to do it, and there are no tutorials that actually describe how to draw lots of objects, only extremely simple ones. I had to figure out how to redo the BasicShape to have several textures myself.
The shape:
http://pastebin.com/zNUFPygP
You can get a copy of TechCraft, a project I wrote with a few other devs:
http://techcraft.codeplex.com
It's free and open source. It should show you how to create an engine similar to Minecraft's.
There are a lot of things you can do to speed this up:
What you want to do is bake a region of cubes into a vertex buffer. What I mean by this is to take all of the cubes in a small area, and put them all into one vertex buffer. Only update this buffer when a cube changes.
In a world like Minecraft's, LOTS of faces occlude each other. The biggest thing you can do is to hide faces that are shared between two cubes. Imagine two cubes sitting right next to each other: you don't really need to draw the face in between, since it can never be seen anyway. In our engine, this resulted in 20 times fewer vertices (see the sketch after the diagram below).
_ _ _ _
|_|_| == |_ _|
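A minimal sketch of that face test during the bake step (IsSolid, AddFaceVertices, and Size are hypothetical helpers; out-of-range coordinates should report "not solid" so boundary faces are still emitted):

// Emit a face only when the neighboring cell is empty.
private static readonly int[,] Neighbors =
{
    { 1, 0, 0 }, { -1, 0, 0 },
    { 0, 1, 0 }, { 0, -1, 0 },
    { 0, 0, 1 }, { 0, 0, -1 },
};

void BakeChunk(List<VertexPositionTexture> vertices)
{
    for (int x = 0; x < Size; x++)
    for (int y = 0; y < Size; y++)
    for (int z = 0; z < Size; z++)
    {
        if (!IsSolid(x, y, z)) continue;
        for (int face = 0; face < 6; face++)
        {
            int nx = x + Neighbors[face, 0];
            int ny = y + Neighbors[face, 1];
            int nz = z + Neighbors[face, 2];
            if (IsSolid(nx, ny, nz)) continue;        // hidden shared face: skip
            AddFaceVertices(vertices, x, y, z, face); // 4 vertices per visible face
        }
    }
}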
As for your textures, it is a good idea, like you said, to use a texture atlas. This greatly reduces your draw calls.
Good luck! And if you feel like cheating, look at Infiniminer. Infiniminer is the game Minecraft was based on. It's written in XNA and is open source!
You need to think about reducing the size of the problem. How can you produce the same image by doing less work?
If your cubes are spaced at regular intervals and are all the same size, you may not need to store the vertices at all; your shader may be able to calculate the vertex positions as it runs. If they are different sizes and not spaced at regular intervals, then you may still be able to use some form of instancing (where you supply the position and size of a cube to a shader, and it works out where to place the vertices to make a cube appear at that location).
If your cubes obscure anything behind them, then you only need to draw the front-most cubes; anything behind them is simply not visible. A natural approach for this would be an octree data structure, which divides 3D space into voxels (cubes). Using an octree you could quickly determine which cubes are visible and draw just those, so rather than drawing 64x64x64 cubes, you may find you only need to draw a few hundred per frame. You will also find that as the camera moves, the set of visible cubes changes very little, so you may be able to use this "temporal coherence" to update your data structures and minimise the work needed to decide which cubes are visible.
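As an illustration of the visibility test itself (not the octree bookkeeping), XNA's BoundingFrustum can reject whole groups of cubes at once; Chunk and chunks are hypothetical names here:

// Test each chunk's bounding box against the view frustum; draw only the hits.
BoundingFrustum frustum = new BoundingFrustum(view * projection);
foreach (Chunk chunk in chunks)
{
    if (frustum.Intersects(chunk.Bounds)) // chunk.Bounds is a BoundingBox
        chunk.Draw();
}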
I don't think Minecraft draws all the cubes, all the time. Most of them are interior, and you need to draw only those on the surface. So basically, you need an efficient voxel renderer.
I recently wrote an engine to do this in XNA. The technique you want to look into is called hardware instancing; it allows you to pass one model to the shader along with a stream of world positions, to "instance" that model hundreds (even thousands) of times all over your game world.
I built my engine on top of this example, replacing the instanced model with my own.
http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing
Once you make it into a reusable class, it and its accompanying shaders become very useful for rendering thousands of pretty much anything you want (bushes, trees, cubes, swarms of birds, etc.).
Once you have a base model (it could be one face of the block), its mesh will have an associated texture that you can replace with whatever you want, allowing you to dynamically change the texturing of each side and of differing block types.
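The core of the sample linked above boils down to something like the following sketch (buffer setup and the instancing shader are omitted; the model* and instance* names are placeholders):

// Bind the model geometry as stream 0 and the per-instance transforms as
// stream 1; the final argument of 1 advances stream 1 once per instance.
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(modelVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceVertexBuffer, 0, 1));
GraphicsDevice.Indices = modelIndexBuffer;

// One call draws every instance.
GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList,
    0, 0, modelVertexCount, 0, modelTriangleCount, instanceCount);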