Game engine sprite sheet performance issues - DrawImage srcRectangle performance issues - C#

I'm building a game engine with a sprite sheet loading system. It was originally based on individual images, but I ran into huge memory issues, which ultimately led me to a sprite sheet system.
Before the change the engine was rendering at 300+ frames per second, with a render time of around 2 ms. Now the render time is 16-20 ms and the FPS is about 40. I've tried optimizing my code by reducing the number of calculations being run, but it has barely improved things at all.
What could I do to optimize this? The source of my problem seems to be using a srcRectangle to select the area of the sprite sheet to display with the DrawImage method.
Draw method:
public virtual void Draw(Graphics g)
{
    g.DrawImage(this.SpriteController.SpriteSheet, this.GetBoundingBox(), this.SpriteController.GetSpriteRectangle(), GraphicsUnit.Pixel);
}
GetBoundingBox method:
public Rectangle GetBoundingBox()
{
    return new Rectangle((int)this.X, (int)this.Y, this.SpriteController.Width, this.SpriteController.Height);
}
GetSpriteRectangle method:
public RectangleF GetSpriteRectangle()
{
    return new RectangleF(this.SpriteX, this.SpriteY, this.Width, this.Height);
}
That's all that runs on render.
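(Not from the original post, but a common first fix for exactly this symptom.) GDI+ falls off its fast path when the source bitmap's pixel format does not match the render surface, converting pixels on every DrawImage call. Converting the sprite sheet once at load time to premultiplied 32-bit ARGB often recovers most of the lost frame rate. A minimal sketch:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

public static Bitmap ToPremultiplied(Image source)
{
    // Format32bppPArgb is the format GDI+ blits natively, so DrawImage
    // can copy pixels instead of converting them on every call.
    var converted = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(converted))
    {
        g.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height));
    }
    return converted;
}
```

If the sheet is loaded with something like new Bitmap(path), run it through this helper once, dispose the original, and draw only from the converted copy.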

Related

C# How to Improve Efficiency in Direct2D Drawing

Good morning,
I have been teaching myself a bit of Direct2D programming in C#, using the native wrappers that are available (currently d2dSharp, but I have also tried SharpDX). I'm running into problems with efficiency, though: the basic Direct2D drawing methods take approximately 250 ms to draw 45,000 basic polygons. The performance I am seeing is on par with, or even slower than, Windows GDI+. I'm hoping someone can take a look at what I've done and propose ways I can dramatically improve the time it takes to draw.
The background to this is that I have a personal project in which I am developing a basic but functional CAD interface capable of performing a variety of tasks, including 2D finite element analysis. In order to make it at all useful, the interface needs to be able to display tens-of-thousands of primitive elements (polygons, circles, rectangles, points, arcs, etc.).
I initially wrote the drawing methods using Windows GDI+ (System.Drawing), and performance is pretty good until I reach about 3,000 elements on screen at any given time. The screen must be updated any time the user pans, zooms, draws new elements, deletes elements, moves, rotates, etc. Now, in order to improve efficiency, I utilize a quad tree data structure to store my elements, and I only draw elements that actually fall within the bounds of the control window. This helped significantly when zoomed in, but obviously, when fully zoomed out and displaying all elements, it makes no difference. I also use a timer and tick events to update the screen at the refresh rate (60 Hz), so I'm not trying to update thousands of times per second or on every mouse event.
This is my first time programming with DirectX and Direct2D, so I'm definitely learning here. That being said, I've spent days reviewing tutorials, examples, and forums, and could not find much that helped. I've tried a dozen different methods of drawing, pre-processing, multi-threading, etc. My code is below.
Code to Loop Through and Draw Elements
List<IDrawingElement> elementsInBounds = GetElementsInDraftingWindow();

_d2dContainer.Target.BeginDraw();
_d2dContainer.Target.Clear(ColorD2D.FromKnown(Colors.White, 1));

if (elementsInBounds.Count > 0)
{
    Stopwatch watch = new Stopwatch();
    watch.Start();

    #region Using Drawing Element DrawDX Method
    foreach (IDrawingElement elem in elementsInBounds)
    {
        elem.DrawDX(ref _d2dContainer.Target, ref _d2dContainer.Factory, ZeroPoint, DrawingScale, _selectedElementBrush, _selectedElementPointBrush);
    }
    #endregion

    watch.Stop();
    double drawingTime = watch.ElapsedMilliseconds;
    Console.WriteLine("DirectX drawing time = " + drawingTime);

    watch.Reset();
    watch.Start();
    Matrix3x2 scale = Matrix3x2.Scale(new SizeFD2D((float)DrawingScale, (float)DrawingScale), new PointFD2D(0, 0));
    Matrix3x2 translate = Matrix3x2.Translation((float)ZeroPoint.X, (float)ZeroPoint.Y);
    _d2dContainer.Target.Transform = scale * translate;
    watch.Stop();
    double transformTime = watch.ElapsedMilliseconds;
    Console.WriteLine("DirectX transform time = " + transformTime);
}
DrawDX Function for Polygon
public override void DrawDX(ref WindowRenderTarget rt, ref Direct2DFactory fac, Point zeroPoint, double drawingScale, SolidColorBrush selectedLineBrush, SolidColorBrush selectedPointBrush)
{
    if (_pathGeometry == null)
    {
        CreatePathGeometry(ref fac);
    }

    float brushWidth = (float)(Layer.Width / drawingScale);
    brushWidth = (float)(brushWidth * 2);

    if (Selected == false)
    {
        // Note that _pathGeometry is a PathGeometry
        rt.DrawGeometry(Layer.Direct2DBrush, brushWidth, _pathGeometry);
    }
    else
    {
        rt.DrawGeometry(selectedLineBrush, brushWidth, _pathGeometry);
    }
}
Code to Create Direct2D Factory & Render Target
private void CreateD2DResources(float dpiX, float dpiY)
{
    Factory = Direct2DFactory.CreateFactory(FactoryType.SingleThreaded, DebugLevel.None, FactoryVersion.Auto);

    RenderTargetProperties props = new RenderTargetProperties(
        RenderTargetType.Default,
        new PixelFormat(DxgiFormat.B8G8R8A8_UNORM, AlphaMode.Premultiplied),
        dpiX, dpiY, RenderTargetUsage.None, FeatureLevel.Default);

    Target = Factory.CreateWindowRenderTarget(_targetPanel, PresentOptions.None, props);
    Target.AntialiasMode = AntialiasMode.Aliased;

    if (_selectionBoxLeftStrokeStyle != null)
    {
        _selectionBoxLeftStrokeStyle.Dispose();
    }
    _selectionBoxLeftStrokeStyle = Factory.CreateStrokeStyle(new StrokeStyleProperties1(LineCapStyle.Flat,
        LineCapStyle.Flat, LineCapStyle.Flat, LineJoin.Bevel, 10, DashStyle.Dash, 0, StrokeTransformType.Normal), null);
}
I create a Direct2D factory and render target once and keep references to them at all times (that way I'm not recreating each time). I also create all of the brushes when the drawing layer (which describes color, width, etc.) is created. As such, I am not creating a new brush every time I draw, simply referencing a brush that already exists. Same with the geometry, as can be seen in the second code-snippet. I create the geometry once, and only update the geometry if the element itself is moved or rotated. Otherwise, I simply apply a transform to the render target after drawing.
Based on my stopwatches, looping through and calling the elem.DrawDX methods takes about 225-250 ms (for 45,000 polygons). Applying the transform takes 0-1 ms, so the bottleneck appears to be the RenderTarget.DrawGeometry() function.
I've run the same tests with RenderTarget.DrawEllipse() and RenderTarget.DrawRectangle(), since I've read that DrawGeometry is slower than DrawRectangle or DrawEllipse because the rectangle/ellipse geometry is known beforehand. However, in all of my tests it hasn't mattered which draw function I use; the time for the same number of elements is always about equal.
I've tried building a multi-threaded Direct2D factory and running the draw functions through tasks, but that is about two times slower. The Direct2D methods do appear to be using my graphics card (hardware acceleration is enabled): when I monitor GPU usage, it spikes while the screen is updating (my laptop has an NVIDIA Quadro mobile graphics card).
Apologies for the long-winded post. I hope this was enough background and description of things I've tried. Thanks in advance for any help!
Edit #1
So I changed the code from iterating over a list with foreach to iterating over an array with for, and that cut the drawing time in half! I hadn't realized how much slower lists were than arrays (I knew there was some performance advantage, but didn't realize it was this much!). It still takes 125 ms to draw, though. This is much better, but still not smooth. Any other suggestions?
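Another direction sometimes suggested for largely static scenes like this one is batching: merge the per-element path geometries into a single GeometryGroup built once (or whenever the element set changes), so each frame issues one DrawGeometry call instead of 45,000. A hedged sketch using SharpDX-style names (d2dSharp's equivalents will differ, and ElementGeometry is a hypothetical accessor for each element's cached geometry):

```csharp
// Built once, not per frame. FillMode.Winding suits typical CAD outlines.
Geometry[] parts = elementsInBounds
    .Select(e => e.ElementGeometry)   // hypothetical: the element's cached PathGeometry
    .ToArray();
var batchedGeometry = new GeometryGroup(factory, FillMode.Winding, parts);

// Per frame: one call draws the whole batch with a shared brush and width.
renderTarget.DrawGeometry(batchedGeometry, layerBrush, brushWidth);
```

This only works for elements that share a brush and stroke width, so in practice you would build one group per drawing layer, with selected elements kept in their own group.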
Direct2D can be used with P/Invoke. See the sample "VB Direct2D Pixel Perfect Collision"
from https://social.msdn.microsoft.com/Forums/en-US/cea42526-4b82-454d-9d79-2e1d94083552/collisions?forum=vbgeneral
The animation is perfect, even though it is done in VB.

Camera.RenderWithShader function doesn't work properly

I am currently working on a project in Unity 5. I am trying to apply a shader to one of my cameras using Camera.RenderWithShader, and after that read and save the image. Here is the code:
Texture2D screenshot = new Texture2D(this.screenWidth, this.screenHeight, TextureFormat.RGB24, false);
this.mainCamera.RenderWithShader(this.myShader,"RenderType");
screenshot.ReadPixels(new Rect(0, 0, this.cameraWidth, this.cameraHeight), 0, 0);
The problem is that, after I save the screenshot texture as a Bitmap, the shader is not applied on the entire image.
But if I use Camera.Render() and apply the shader using OnRenderImage(RenderTexture,RenderTexture), it works.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, this.disparityMaterial);
}
So, my question is: What is the difference between these two approaches and how can I make the Camera.RenderWithShader function work properly?
RenderWithShader and OnRenderImage are two completely different things and have nothing to do with each other. Read the linked manual pages for details and a better understanding, but long story short:

The former applies a replacement shader to all (game)objects the camera can see, without any image filters. It is about rendering the same objects/prefabs/materials with a different shader to alter what the viewer sees; in your case, the objects' shaders must also declare the "RenderType" tag, otherwise the replacement shader will not be applied to them. A good use of it is, e.g., toggling a night-vision mode on and off.

The latter is a post-processing feature: it applies filters to images that have already been rendered, i.e. an image-effect feature. Think of a secret agent whose intel photos get more and more blurry and broken up as his camera equipment takes damage, if that makes sense.
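For the screenshot half of the question, one pattern that may help (a sketch, not verified against the original project): ReadPixels reads from whatever RenderTexture is currently active, so rendering the replacement-shader pass into an explicit target makes the result deterministic:

```csharp
// Render the replacement-shader pass into an explicit RenderTexture,
// then read it back. this.myShader and the sizes come from the original post.
RenderTexture rt = new RenderTexture(this.screenWidth, this.screenHeight, 24);
this.mainCamera.targetTexture = rt;
this.mainCamera.RenderWithShader(this.myShader, "RenderType");

RenderTexture.active = rt;
Texture2D screenshot = new Texture2D(this.screenWidth, this.screenHeight, TextureFormat.RGB24, false);
screenshot.ReadPixels(new Rect(0, 0, this.screenWidth, this.screenHeight), 0, 0);
screenshot.Apply();

// Restore state so the camera renders to the screen again.
this.mainCamera.targetTexture = null;
RenderTexture.active = null;
```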

Unexplainable performance issues with BitmapSource in WPF

I have in my application a 3D world and data for this 3D world. The UI around the application is done with WPF and so far it seems to be working ok. But now I am implementing the following functionality: If you click on the terrain in the 3D view it will show the textures used in this chunk of terrain in a WPF control. The image data of the textures is compressed (S3TC) and I handle creation of BGRA8 data in a separate thread. Once its ready I'm using the main windows dispatcher to do the WPF related tasks. Now to show you this in code:
foreach (var pair in loadTasks)
{
    var img = pair.Item2;
    var loadInfo = TextureLoader.LoadToArgbImage(pair.Item1);
    if (loadInfo == null)
        continue;

    // img and loadInfo are captured by the closure
    EditorWindowController.Instance.WindowDispatcher.BeginInvoke(new Action(() =>
    {
        var watch = Stopwatch.StartNew();
        var source = BitmapSource.Create(loadInfo.Width, loadInfo.Height, 96, 96, PixelFormats.Bgra32,
            null,
            loadInfo.Layers[0], loadInfo.Width * 4);
        watch.Stop();
        img.Source = source;
        Log.Debug(watch.ElapsedMilliseconds);
    }));
}
While I can't argue with the visual output, there is a weird performance issue. As you can see, I added a stopwatch to check where the time is consumed, and I found the culprit: BitmapSource.Create.
Typically I have 5-6 elements in loadTasks and the images are 256x256 pixels. Interestingly, the first invocation shows 280-285 ms for BitmapSource.Create; the next 4-5 are all below 1 ms. This happens consistently every time I click the terrain and the loop is started. The only way to avoid the penalty on the first element is to click the terrain constantly, but as soon as I don't click it (and therefore don't invoke the code above) for 1-2 seconds, the next call to BitmapSource.Create gets the 280 ms penalty again.
Since anything above 5 ms is far beyond any reasonable or acceptable time to create a 256x256 bitmap (my S3TC decompression does all 10(!) mip layers in less than 2 ms), I guess there has to be something else going on here?
FYI: All properties of loadInfo are static properties and do not perform any calculations you can't see in the code.
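One way to take the cost off the UI thread entirely (a sketch, not a confirmed diagnosis of the 280 ms spike): BitmapSource.Create can run on the worker thread, and Freeze() makes the result immutable and safe to hand across threads, so the dispatcher only performs the cheap property assignment:

```csharp
// On the background thread, right after decompression:
var source = BitmapSource.Create(
    loadInfo.Width, loadInfo.Height, 96, 96,
    PixelFormats.Bgra32, null,
    loadInfo.Layers[0], loadInfo.Width * 4);
source.Freeze();  // immutable: now legal to use from the UI thread

// Only the assignment crosses to the UI thread:
EditorWindowController.Instance.WindowDispatcher.BeginInvoke(
    new Action(() => img.Source = source));
```

Even if the first-call penalty persists, it then stalls the worker thread instead of the dispatcher, which usually keeps the UI responsive.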

XNA clears textures mid-render?

We're prerendering large sets of textures to a RenderTarget2D, and this is the issue we're having:
It seems that randomly during the render of a chunk, the textures for each cell (the top and sides) will corrupt and disappear. The weird thing is that they come back when the next chunk is rendered, so it seems to be something occurring on a per-frame basis.
Does anyone know why this occurs (and seemingly at random; note the white rectangle is where a side texture corrupts, and you can see that from there on the texture contains just transparency)?
EDIT: The sides of the cubes are being saved to a Texture2D, but they are still disappearing in the middle of a chunk render and then coming back on the next one. So I don't understand why graphics that are in a Texture2D are disappearing and coming back without reinitialization (and that's the weird part).
A RenderTarget2D is only a temporary memory construct and gets flushed quite quickly and regularly, because it is reused in an effort to save memory and, to a lesser extent, to speed things up. As such you should only treat it as a very temporary place to store your texture. You will want to shift it to a proper Texture2D, which will be stored for longer. Just doing a simple:
Texture2D YourPic = (Texture2D)SomeRenderedPic;
will not do it. This just passes the pointer to the memory space of the rendered image; when the graphics card discards it, it will still just vanish. What you want to do is something more like:
Color[] MyColorArray = new Color[SomeRenderedPic.Width * SomeRenderedPic.Height];
SomeRenderedPic.GetData<Color>(MyColorArray);

Texture2D YourPic = new Texture2D(
    GraphicsDevice,
    SomeRenderedPic.Width,
    SomeRenderedPic.Height);
YourPic.SetData<Color>(MyColorArray);
If I have whipped up that code right, it should store the data, not the pointer, into the new texture. This makes the new texture its own unique memory space that won't get flushed the way a render target would.
There is a downside to this method: it cannot be done at the full refresh rate of XNA (around 60 frames per second), so it may not be fast enough if you need very constant refreshing. However, if you are creating a static texture that doesn't really change much, if ever, then this may do the trick for you.
Hopefully this made sense, as I am writing it on the fly and late at night. If it doesn't work, I apologize. Feel free to write me at jareth_gk#hotmail.com if need be; if I am able to answer your questions, I will be happy to.
Otherwise good luck, and be inventive. I am sure there is a solution.
Jeremy M.
I can't say that we ever solved this issue for sure, but it appears to have been something caused by either threading or splitting the task across multiple cycles. It wasn't an issue with the RenderTarget2D since we were already doing that at the time.
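For completeness: XNA 4 can also be asked for a render target that keeps its contents when unbound, which avoids the GetData/SetData round trip entirely. A sketch (the parameters other than the usage flag are illustrative):

```csharp
// RenderTargetUsage.PreserveContents tells XNA not to discard the target's
// contents when it is unset from the device, at some performance cost.
var target = new RenderTarget2D(
    GraphicsDevice,
    width, height,
    false,                       // no mipmaps
    SurfaceFormat.Color,
    DepthFormat.None,
    0,                           // no multisampling
    RenderTargetUsage.PreserveContents);
```

The default is DiscardContents, which matches the "flushed quite quickly and regularly" behavior described above.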

Speeding Up Image Handling

This is a follow on to my question How To Handle Image as Background to CAD Application
I applied the resizing/resampling code, but it is not making any difference. I am sure I do not know enough about GDI+ etc., so please excuse me if I seem muddled.
I am using a third-party graphics library (Piccolo). I do not know enough to be sure what it is doing under the hood, other than it eventually wraps GDI+.
My test is to rotate the display at different zoom levels; this is the process that causes the worst performance hit. I know I am rotating the camera view. At zoom levels up to 1.0 there is no performance degradation and rotation is smooth using the mouse wheel. The image has to be scaled to the CAD units of 1 m per pixel at a zoom level of 1.0, and I have resized/resampled the image to match that. I have tried different ways to speed this up based on the code given to me in the last question:
public static Bitmap ResampleImage(Image img, Size size) {
    using (logger.VerboseCall()) {
        var bmp = new Bitmap(size.Width, size.Height, PixelFormat.Format32bppPArgb);
        using (var gr = Graphics.FromImage(bmp)) {
            gr.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.Low;
            gr.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighSpeed;
            gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighSpeed;
            gr.DrawImage(img, new Rectangle(Point.Empty, size));
        }
        return bmp;
    }
}
I guess this speeds up the resample, but as far as I can tell it has no effect on performance when rotating the display at high zoom levels. Using a performance profiler (ANTS) I was able to find the code causing the performance hit:
protected override void Paint(PPaintContext paintContext) {
    using (PUtil.logger.DebugCall()) {
        try {
            if (Image != null) {
                RectangleF b = Bounds;
                Graphics g = paintContext.Graphics;
                g.DrawImage(image, b);
            }
        }
        catch (Exception ex) {
            // Catch GDI+ OOM exceptions
            PUtil.logger.Error(string.Format("{0}\r\n{1}", ex.Message, ex.StackTrace));
        }
    }
}
The performance hit is entirely in g.DrawImage(image, b);
Bounds is the bounds of the image, of course. The catch block is there to catch GDI+ OOM exceptions, which also seem worse at high zoom levels.
The number of times this is called seems to increase as the zoom level increases.
There is another hit in the code painting the camera view, but I do not have enough information to explain that yet, except that it seems to paint all the layers attached to the camera (and, I assume, all the objects on them) when the camera's view matrix and clip are applied to the paintContext (whatever that means).
So is there some other call to g.DrawImage(image, b); that I could use? Or am I at the mercy of the graphics engine? Unfortunately, it is so embedded that it would be very hard for me to change.
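One workaround worth sketching, assuming the node can see the current zoom level and memory allows one cached copy per image: resample once whenever the zoom changes, then let Paint blit the cached bitmap unscaled, which is the cheap path through GDI+. The helper below reuses ResampleImage from above; the cache fields are hypothetical, and whether the unscaled path survives Piccolo's camera transform is something to verify in the profiler:

```csharp
private Bitmap _cached;          // hypothetical cache field
private float _cachedZoom = -1f; // zoom level the cache was built for

protected void DrawCached(Graphics g, Image original, RectangleF b, float zoom)
{
    if (_cached == null || zoom != _cachedZoom)
    {
        if (_cached != null)
            _cached.Dispose();
        // Resample once per zoom change instead of on every repaint.
        _cached = ResampleImage(original, new Size((int)b.Width, (int)b.Height));
        _cachedZoom = zoom;
    }
    // Unscaled blit: GDI+ skips the expensive interpolation path.
    g.DrawImageUnscaled(_cached, (int)b.X, (int)b.Y);
}
```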
Thanks again
I think you are using, if I'm not mistaken, a PImageNode object from Piccolo. The number of calls to that method can increase because the Piccolo engine computes the "real" drawing area on the user's screen based on the zoom level (a kind of culling) and draws only the nodes that are visible. If you have a lot of PImageNode objects in your scene and zoom out, more PImageNode objects need to be drawn, and hence more calls to that method.
As for performance:
1) Try SetStyle(ControlStyles.DoubleBuffer, true); on the PCanvas (if it is not already set).
2) Look here: CodeProject
Regards.
