Does drawing outside of screen bounds affect performance? - C#

In my 2D game I have a large map, and I scroll around it (like in Age of Empires).
In my Draw method I draw all the elements (which are textures/images). Some of them are on the screen, but most are not (you only see about 10% of the map at a time).
Does XNA know not to draw them by checking that the destination rectangle won't fall on the screen?
Or should I manually check it and avoid drawing them at all?

A sometimes overlooked aspect of CPU performance when using SpriteBatch is how you go about batching your sprites, and using a drawable game component per sprite doesn't lend itself to batching them efficiently.
Basically, the GPU drawing the sprites is not slow, and the CPU organizing all the sprites into a batch is not slow. The slow part is when the CPU has to communicate to the GPU what it needs to draw (sending it the batch info). This CPU-to-GPU communication does not happen when you call spriteBatch.Draw(); it happens when you call spriteBatch.End(). So the low-hanging fruit for efficiency is calling spriteBatch.End() less often (which of course means calling Begin() less often too). Also, use SpriteSortMode.Immediate very sparingly, because it causes the CPU to send each sprite's info to the GPU immediately (slow).
So if you call Begin() & End() in each game component class, and have many components, you are costing yourself a lot of time unnecessarily and you will probably save more time coming up with a better batching scheme than worrying about offscreen sprites.
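For illustration, here is a minimal sketch of what a single-batch Draw pass might look like (the sprites list and its fields are placeholders for however you store your drawables):

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // SpriteSortMode.Deferred queues sprite data on the CPU and hands it to
        // the GPU once, when End() is called.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);

        foreach (var sprite in sprites)   // hypothetical list of sprite objects
            spriteBatch.Draw(sprite.Texture, sprite.Destination, Color.White);

        spriteBatch.End();                // the single CPU-to-GPU hand-off

        base.Draw(gameTime);
    }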
Aside: The GPU automatically ignores offscreen sprites from its pixel shader anyway. So culling offscreen sprites on the CPU won't save GPU time.
Reference here.

You must manually account for this, and it will greatly affect the performance of the game as it grows; this is known as culling. Culling isn't done just because drawing things off screen reduces performance; it's done because calling Draw that many extra times is itself slow. Anything outside the viewport that you don't need to update should be excluded too. You can see more about how to do this, and how SpriteBatch handles it, here.
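As a rough sketch of what that check can look like (cameraPosition, sprites, and WorldRectangle are placeholders for your own scrolling offset and sprite storage), something along these lines keeps off-screen sprites out of the batch entirely:

    Rectangle visibleArea = new Rectangle(
        (int)cameraPosition.X, (int)cameraPosition.Y,
        GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);

    foreach (var sprite in sprites)
    {
        // Only issue Draw calls for sprites whose world-space rectangle
        // overlaps the visible area.
        if (visibleArea.Intersects(sprite.WorldRectangle))
            spriteBatch.Draw(sprite.Texture, sprite.WorldRectangle, Color.White);
    }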

Related

Spatial Understanding limited by a small area

I am working with the HoloLens in Unity and trying to map a large area (15x15x25 meters). I am able to map the whole area using the SpatialMapping prefab, but I want to do some spatial processing on that mesh to smooth out the floors and walls. I have been trying to use SpatialUnderstanding for this, but there seems to be a hard limit on how big an area you can scan with it, as detailed in this HoloLens forums thread.
Currently, I don't understand how the pipeline of data works from SpatialMapping to SpatialUnderstanding. Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding? Is there some better method of creating smooth surfaces?
This solution works best for pre-generated rooms. In other words, a general solution that end users could be expected to use is not possible, given the current limitations.
I will start with the last question: "Is there some better method of creating smooth surfaces?"
Yes: use a tripod on wheels to generate the initial scan. Given the limited resolution of the accelerometers and compasses in the hardware, reducing the variance in one linear axis (height) and one rotational axis (roll, which should not vary at all during a scan) will result in a much more accurate scan.
The other method to create smooth surfaces is to export the mesh to a 3D editing program and manually flatten the surfaces, then reimport the mesh into Unity3D.
"Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding?"
SpatialUnderstanding further divides the generated mesh into (8 cm, 8 cm, 8 cm) voxels and then calculates surfels for each voxel. To keep performance and memory utilization in check, there is a hard limit of approximately (10 m, 10 m, 10 m), implemented as (128, 128, 128) voxels.
Any attempt to use SpatialUnderstanding beyond its defined limits will produce spurious results due to overflow of the underlying data structures.
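A back-of-the-envelope check based on those numbers (Unity C#; the method name is just for illustration) shows why a 15 x 15 x 25 m space cannot work:

    const float VoxelSize = 0.08f;   // 8 cm voxels
    const int VoxelsPerAxis = 128;

    static bool FitsInPlayspace(UnityEngine.Vector3 roomDimensions)
    {
        float maxExtent = VoxelsPerAxis * VoxelSize;   // about 10.24 m per axis
        return roomDimensions.x <= maxExtent
            && roomDimensions.y <= maxExtent
            && roomDimensions.z <= maxExtent;
    }

    // FitsInPlayspace(new UnityEngine.Vector3(15f, 15f, 25f)) returns false.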

Simple 2D Unity Game has periodical jerkiness

I've made a simple 2D game in Unity using the application Tiled for the tilemap, but when I run it, every now and then the screen will jump like it has a quick fps drop.
I'm not sure what would be hurting the performance of the game, because it is not performing any demanding tasks. My only guess is that it could be the large tilemap I'm using, but I feel this is not the problem because the game was still jerky when the map was not large. Furthermore, I tried scaling the map to an even larger size but it didn't make the performance any worse than it already was.
Does anyone know what could be causing the performance issue here? Thanks.
Periodic frame-rate drops could be caused by a number of things - fortunately Unity gives you a very good tool to track this down in the form of the Profiler (Window > Profiler). It's recommended that you build the game (check Development Build and Autoconnect Profiler) rather than testing in the editor, as often the editor creates a lot of overhead that shows up in the Profiler and may lead you astray.
Play the game and see if there are any spikes when you notice the fps drop. I would not be surprised if you have Garbage Collection (GC) causing your periodic fps drops. GC is triggered periodically in the background to clean up memory that was allocated but is no longer needed. You can get a good idea of how much memory is being allocated by selecting CPU Usage and looking at the GC Alloc column. This can add up quickly! Optimising a game to avoid GC spikes is a whole topic unto itself.
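As a hypothetical example of the kind of code that lights up the GC Alloc column (the field names here are made up), building a string every frame is a classic culprit:

    int lastDisplayedScore = -1;

    void Update()
    {
        // "Score: " + score allocates a new string each time it runs; doing it
        // only when the value changes keeps per-frame allocations near zero.
        if (score != lastDisplayedScore)
        {
            lastDisplayedScore = score;
            scoreText.text = "Score: " + score;
        }
    }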
But, of course, your problem might have nothing to do with GC spikes. Hopefully the Profiler will tell you which stones you should be turning over.

Render an object multiple times a frame using different textures. What's more performant? Multiple Materials or calling SetTexture multiple times?

I need to render an object multiple times a frame using different textures. I was wondering about the most performant way to do so. My first approach was to have one Material and in OnRenderImage() call SetTexture() on it for the given number of textures I have. Now I'm wondering if it would be a noticeable improvement if I set up one Material per Texture in Start() and change between Materials in OnRenderImage(). Then I wouldn't need the SetTexture() call. But I can't find what SetTexture() actually does. Does it just set a flag or does it copy or upload the texture somewhere?
From extensive work with low-end devices, performance comes from batching. It's hard to pinpoint what would improve performance in your context without a clear understanding of the scope:
How many objects
How many images
Target platform
Are the images packaged or external at runtime
...
But as a general rule you want to use the fewest materials and individual textures possible. If pre-processing is an option, I would recommend creating spritesheets with as many images as possible packed into a single texture. Then, using UV offsetting, you can show multiple images on multiple objects in one draw call.
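As a rough Unity C# sketch of the idea (property names assume the standard "_MainTex" convention; tools like TexturePacker effectively bake the same offsets into the sprite UVs for you):

    using UnityEngine;

    public class SheetCell : MonoBehaviour
    {
        public int column;   // 0..3 within a 4x4 spritesheet
        public int row;      // 0..3

        void Start()
        {
            // .material creates a per-object copy, which is easy to reason about
            // but costs you the shared-material batching described above;
            // baking the UVs into the sprite/mesh keeps everything in one batch.
            var mat = GetComponent<Renderer>().material;
            mat.mainTextureScale  = new Vector2(0.25f, 0.25f);
            mat.mainTextureOffset = new Vector2(column * 0.25f, row * 0.25f);
        }
    }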
I have extensively used a solution called TexturePacker, which supports Unity. It's not cheap (there's an app to buy plus a plugin for Unity), but it saves time, and draw calls, in the end.
Think packing hundreds of images into a few 4K textures, going from hundreds of draw calls down to 3 or 4.
That might not be a solution in your case, but the concept is still valid.
Also, Unity prefabs will not save draw calls, but they do reduce memory usage.
hth.
J.

Optimal way to set pixel data?

I'm working on a "falling sand" style of game.
I've tried many ways of drawing the sand to the screen, however, each way seems to produce some problem in one form or another.
List of things I've worked through:
Drawing each pixel individually, one at a time, from a pixel-sized texture. Problem: it slowed down after about 100,000 pixels were changing per update.
Drawing each pixel to one big Texture2D, drawing the Texture2D, then clearing the data. Problems: using texture.SetPixel() is very slow, and even when disposing the old texture it would cause a small memory leak (about 30 KB per second, which added up quickly). I simply could not figure out how to stop it. Overall, however, this has been the best method (so far). If there is a way to stop that leak, I'd like to hear it.
Using LockBits on a Bitmap. This worked wonderfully from the bitmap's perspective, but unfortunately I still had to convert the bitmap back to a Texture2D, which would cause the frame rate to drop to less than one. So this has the potential to work very well, if I can find a way to draw the bitmap in XNA without converting it (or something).
Setting each pixel into a Texture2D one at a time, replacing the 'old' pixel positions with transparent pixels and then setting the new positions with the proper color. This doubled the number of pixel sets necessary to finish the job, and was much, much slower than number 2.
So, my question is, any better ideas? Or ideas on how to fix styles 2 or 3?
My immediate thought is that you are stalling the GPU pipeline. The GPU can have a pipeline that lags several frames behind the commands that you are issuing.
So if you issue a command to set data on a texture, and the GPU is currently using that texture to render an old frame, it must finish all of its rendering before it can accept the new texture data. So it waits, killing your performance.
The workaround for this might be to use several textures in a double- (or even triple- or quad-) buffer arrangement. Don't attempt to write to a texture that you have just used for rendering.
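A rough XNA sketch of the double-buffer idea (field names are illustrative; the two textures and the pixel array are assumed to be created once in LoadContent):

    Texture2D[] buffers;      // two textures created once in LoadContent
    Color[] pixels;           // one Color per pixel, updated by the simulation
    int current;

    void UploadAndDraw(SpriteBatch spriteBatch)
    {
        current = 1 - current;               // flip to the texture not just rendered
        buffers[current].SetData(pixels);    // one large SetData, not per-pixel calls

        spriteBatch.Begin();
        spriteBatch.Draw(buffers[current], Vector2.Zero, Color.White);
        spriteBatch.End();
    }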
Also - you can write to textures from a thread other than your rendering thread. This might come in handy, particularly for clearing textures.
As you seem to have discovered, it's actually quicker to SetData in large chunks rather than issue many small SetData calls. The ideal size for a "chunk" differs between GPUs, but it is a fair bit bigger than a single pixel.
Also, creating a texture is much slower than reusing one, in raw performance terms (if you ignore the pipeline effect I just described); so reuse that texture.
It's worth mentioning that a "pixel sprite" requires sending maybe 30 times as much data per pixel to the GPU as a texture does.
See also this answer, which has a few more details and some in-depth links if you want to go deeper.

XNA SpriteSortMode Speed and Shaders (4.0)

Evening/afternoon;
I'm getting to grips with writing a little custom 2D XNA engine for my own personal use. When deciding how to draw the 2D sprites, I'm stuck in a quandary:
Firstly, I'd like, at some point, to implement some custom shader effects. Every tutorial I read on the internet said that I therefore am forced to use SpriteSortMode.Immediate, except one, which said that in XNA 4.0 that is no longer necessary.
Furthermore, I am unsure about which SpriteSortMode is fastest for my approach, regardless of shading effects. Ordering/layering of different sprites is definitely a necessity (i.e. to have a HUD in front of the game sprites, and the game sprites in front of a backdrop etc.); but would it be faster to implement a custom sorted list and just call the Draw()s in order, or use the BackToFront / FrontToBack options?
Thank you in advance.
Starting with XNA 4.0 you can use custom shader effects with any sprite sort mode. The Immediate sprite sort mode in XNA 3.1 was pretty broken. (see http://blogs.msdn.com/b/shawnhar/archive/2010/04/05/spritesortmode-immediate-in-xna-game-studio-4-0.aspx)
Concerning sorting, I would sort back to front for transparent sprites and front to back for opaque ones. See: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.spritesortmode.aspx
There is one extra interesting mode there (for a 2D engine at least): the Texture sort mode, which sorts the draw calls by which texture is needed and thus reduces state changes. That could be a big performance win for the main game sprites.
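For reference, in XNA 4.0 the custom effect is simply passed to Begin(), and you can combine it with whatever sort mode you like (myEffect here is assumed to be a SpriteBatch-compatible Effect you've already loaded):

    spriteBatch.Begin(
        SpriteSortMode.Texture,              // sort draws by texture to cut state changes
        BlendState.AlphaBlend,
        SamplerState.LinearClamp,
        DepthStencilState.None,
        RasterizerState.CullCounterClockwise,
        myEffect);                           // custom shader applied to the whole batch

    // ... spriteBatch.Draw(...) calls ...

    spriteBatch.End();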
And I wouldn't worry too much about performance until your profiler says otherwise. SpriteBatch does quite a lot of batching (yes, really) and that will be the biggest performance improvement because it minimizes the number of state changes.
The only other way I can think of to improve performance is to use instancing, but I think that with XNA it might be a bad idea (ideally you'll want SM3.0 hardware and a custom vertex shader at least; I'm not sure how that plays with the SpriteBatch class).
