I am writing a particle engine and have noticed it is massively slower than it should be. (I've written highly un-optimized 3D C++ particle engines that can render 50k particles at 60 fps; this one drops to 32 fps at around 1.2k.) I profiled the code assuming the rendering of the particles or the rotations were the most CPU-intensive operations, but discovered that two little properties of the Graphics object are actually eating up over 70% of my performance:
public void RotateParticle(Graphics g, RectangleF r,
                           RectangleF rShadow, float angle,
                           Pen particleColor, Pen particleShadow)
{
    //Create a matrix
    Matrix m = new Matrix();
    PointF shadowPoint = new PointF(rShadow.Left + rShadow.Width,
                                    rShadow.Top + rShadow.Height);
    PointF particlePoint = new PointF(r.Left + r.Width,
                                      r.Top + (r.Height / 2));
    //Angle of the shadow gets set to the angle of the particle,
    //that way we can rotate them at the same rate
    float shadowAngle = angle;
    m.RotateAt(shadowAngle, shadowPoint);
    g.Transform = m;
    //rotate and draw the shadow of the Particle
    g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
    //Reset the matrix for the next draw and dispose of the first matrix
    //NOTE: Using one matrix for both the shadow and the particle causes one
    //to rotate at half the speed of the other.
    g.ResetTransform();
    m.Dispose();
    //Same stuff as before but for the actual particle
    Matrix m2 = new Matrix();
    m2.RotateAt(angle, particlePoint);
    //Set the current draw location to the rotated matrix point
    //and draw the Particle
    g.Transform = m2;
    g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
    m2.Dispose();
}
What is killing my performance is specifically these lines:
g.Transform = m;
g.Transform = m2;
A little background: the Graphics object is grabbed from PaintEventArgs. A render-particles method then renders the particles to the screen, calling this method to do any rotations. Multi-threading isn't a solution, because the Graphics object cannot be shared between multiple threads. Here is a link to the code analysis I ran so you can see what is happening as well:
https://gyazo.com/229cfad93b5b0e95891eccfbfd056020
I'm inclined to think this can't really be helped, because it looks like the property itself is destroying the performance rather than anything I've actually done (though I'm sure there's room for improvement), especially since the DLL the class calls into is using the most CPU time. Any help optimizing this would be greatly appreciated... maybe I'll just add an option to enable/disable rotation, we'll see...
Well, you should scratch your head a while over the profile results you see. Something else is going on when you assign the Transform property, something you can reason out by noting that ResetTransform() costs nothing. That doesn't make sense at first glance, of course, since that method also changes the Transform property.
And do note that DrawRectangle() should be the expensive method, since it is the one that actually puts the pedal to the metal and generates real drawing commands. We can't see what it costs from your screenshot; it can't be more than 30%. That is not nearly enough.
I think what you are seeing is an obscure feature of GDI+: it batches drawing commands. In other words, it internally builds a list of drawing commands and does not pass them to the video driver until it has to. The native winapi has a function that explicitly forces that list to be flushed, GdiFlush(). That is not exposed by the .NET Graphics class, however; it is done automagically.
So a pretty attractive theory is that GDI+ internally calls GdiFlush() when you assign the Transform property, and that the cost you are seeing is actually the cost of a previous DrawRectangle() call.
You can get ahead by giving it more opportunity to batch. Strongly favor the Graphics methods that let you draw a large number of items in one call: don't draw each individual particle, draw many at once. You'll like DrawRectangles(), DrawLines(), and DrawPath(). Unfortunately there is no DrawPolygons(), the one you'd really like; technically you could pinvoke PolyPolygon(), but that's hard to get going.
If my theory is incorrect, then note that you don't need Graphics.Transform at all. You can also use Matrix.TransformPoints() and Graphics.DrawPolygon(). Whether you can truly get ahead is a bit doubtful; the Graphics class doesn't use GPU acceleration directly, so it never competes that well with DirectX.
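For illustration, a sketch of that second idea against the code in the question - rotating the four corners of `r` on the CPU so Graphics.Transform is never assigned (untested):

```csharp
// Corner points of the particle rectangle.
PointF[] corners =
{
    new PointF(r.Left, r.Top),
    new PointF(r.Right, r.Top),
    new PointF(r.Right, r.Bottom),
    new PointF(r.Left, r.Bottom)
};

using (Matrix m = new Matrix())
{
    // Rotate the points themselves instead of the Graphics object.
    m.RotateAt(angle, new PointF(r.Left + r.Width / 2, r.Top + r.Height / 2));
    m.TransformPoints(corners);
}

g.DrawPolygon(particleColor, corners);
```

If the flush-on-assign theory holds, this keeps the batch intact because no device state changes between draw calls.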
I'm not sure if the following will help, but it's worth trying. Instead of allocating, assigning, and disposing a new Matrix each time, drive the Graphics object's own transform through its methods - RotateTransform, ScaleTransform, TranslateTransform (and make sure to always call ResetTransform when done).
The Graphics class does not have a direct equivalent of the Matrix.RotateAt method, but it's not hard to make one:
public static class GraphicsExtensions
{
    public static void RotateTransformAt(this Graphics g, float angle, PointF point)
    {
        g.TranslateTransform(point.X, point.Y);
        g.RotateTransform(angle);
        g.TranslateTransform(-point.X, -point.Y);
    }
}
Then you can update your code like this and see if it helps:
public void RotateParticle(Graphics g, RectangleF r,
                           RectangleF rShadow, float angle,
                           Pen particleColor, Pen particleShadow)
{
    PointF shadowPoint = new PointF(rShadow.Left + rShadow.Width,
                                    rShadow.Top + rShadow.Height);
    PointF particlePoint = new PointF(r.Left + r.Width,
                                      r.Top + (r.Height / 2));
    //Angle of the shadow gets set to the angle of the particle,
    //that way we can rotate them at the same rate
    float shadowAngle = angle;
    //rotate and draw the shadow of the Particle
    g.RotateTransformAt(shadowAngle, shadowPoint);
    g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
    g.ResetTransform();
    //Same stuff as before but for the actual particle
    g.RotateTransformAt(angle, particlePoint);
    g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
    g.ResetTransform();
}
Can you create an off-screen buffer to draw your particles, and have OnPaint simply render that off-screen buffer? If you need to update the screen periodically, you can invalidate your on-screen control/canvas, say using a Timer:
Bitmap bmp;
Graphics gOff;

void Initialize() {
    bmp = new Bitmap(width, height);
    gOff = Graphics.FromImage(bmp);
}

private void OnPaint(object sender, System.Windows.Forms.PaintEventArgs e) {
    e.Graphics.DrawImage(bmp, 0, 0);
}

void RenderParticles() {
    foreach (var particle in Particles)
        RotateParticle(gOff, ...);
}
On another note, is there any reason to create a Matrix object every time you call RotateParticle? I haven't tried it, but the MSDN docs seem to suggest that both the getter and the setter of Graphics.Transform always create a copy. So you could keep a Matrix object at, say, class level and reuse it for the transform; just make sure to call Matrix.Reset() before using it. This might get you some performance improvement.
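A sketch of that idea (the field and method names are made up, and I haven't profiled it):

```csharp
// Created once at class level and reused for every particle.
private readonly Matrix reusableMatrix = new Matrix();

public void DrawRotated(Graphics g, RectangleF r, float angle, Pen pen)
{
    reusableMatrix.Reset();   // wipe the previous particle's rotation
    reusableMatrix.RotateAt(angle,
        new PointF(r.Left + r.Width / 2, r.Top + r.Height / 2));
    g.Transform = reusableMatrix;   // the setter copies the matrix anyway
    g.DrawRectangle(pen, r.X, r.Y, r.Width, r.Height);
    g.ResetTransform();
}
```

This saves only the per-call allocation and Dispose, so any gain will be modest, but it costs little to try.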
I'm using XNA/MonoGame to draw some 2D polygons for me. I'd like a Texture I have to repeat on multiple polygons, based on their X and Y coordinates.
here's an example of what I mean:
I had thought that doing something like this would work (assuming a 256x256 pixel texture)
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is draw with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long-horizontal lines that look like the texture has been extremely stretched.
(to check if the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList of 1 - this had the same result on the texture, and the expected result of drawing only one half of my blocks.)
what's the correct way to achieve this effect?
My math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
Maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ You need that line. Importantly, though, your SamplerState and other settings get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! Ex:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
If you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly after (depending on what settings you use for 2D rendering). All of that comes with a little overhead, and makes your code more fragile (you're more likely to screw up :P).
One way to avoid this is to draw all your 3D stuff and all your 2D stuff separately (all of one, then all of the other).
(In my case, I haven't got my code completely separated out this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - Draw Rectangle in XNA using SpriteBatch - and 3D methods only for less-regular and/or textured shapes.)
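To make the state juggling concrete, here is a sketch of what typically needs restoring before the 3D draw calls inside something like MyFilledPolygonDrawer (the exact set depends on what your 2D rendering changes; these are the XNA 4.0 defaults plus the wrap sampler from above):

```csharp
// SpriteBatch.Begin() quietly changes these device states, so reset
// them before drawing 3D geometry mid-frame.
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; // tile the texture
```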
I'm developing a WP7 game where the player draws lines and a ball bounces off of them. I'm using XNA and Farseer Physics. What is the best method for a user to draw a line, and for the program to then turn it into a physics object, or at least a list of Vector2s? I've tried creating a list of TouchLocations, but it ends up spotty unless the user draws very slowly, like in the picture I've attached. Any suggestions?
Thanks
http://img829.imageshack.us/img829/3985/capturehbn.png
Here's some code:
I'm using the gamestatemanagement sample, and this is in the HandleInput method
foreach (TouchLocation t in input.TouchState) {
    pathManager.Update(gameTime, t.Position);
}
The pathManager class manages a collection of path classes, which are drawable physics objects. Here is pathManager.Update
public void Update(GameTime gameTime, Vector2 touchPosition) {
    paths.Add(new Path(world, texture, new Vector2(5,5), 0.1f));
    paths[paths.Count-1].Position = touchPosition;
}
This is just what I'm doing now, and I'm willing to throw it out for anything. You'd think that having a 5x5 rectangle for each touch location would kill performance, but using Farseer I didn't see any drops, even with a mostly full screen. However, this system doesn't create a smooth line at all if the line is drawn quickly.
I doubt this helps any, but here is the Path constructor.
public Path(World world, Texture2D texture, Vector2 size, float mass) {
    this.Size = size;
    this.texture = texture;
    body = BodyFactory.CreateRectangle(world, size.X * pixelToUnit, size.Y * pixelToUnit, 1);
    body.BodyType = BodyType.Static;
    body.Restitution = 1f;
    body.Friction = 10;
}
How do I draw lines using XNA?
The best way to draw primitives is to use the BasicEffect shader; this will be hardware-accelerated. You can also add a texture to it if you'd like.
I'm not sure if it's the same on WP7, but this works on Windows 7 at least.
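A minimal sketch of that approach in XNA 4.0 (the effect and vertex names here are illustrative; create the effect once in LoadContent rather than per frame):

```csharp
// One-time setup: a BasicEffect that works in screen pixels.
BasicEffect lineEffect = new BasicEffect(GraphicsDevice)
{
    VertexColorEnabled = true,
    Projection = Matrix.CreateOrthographicOffCenter(
        0, GraphicsDevice.Viewport.Width,
        GraphicsDevice.Viewport.Height, 0,   // y grows downward, like sprites
        0, 1)
};

// Per frame: two vertices per line, drawn as a LineList.
VertexPositionColor[] verts =
{
    new VertexPositionColor(new Vector3(10, 10, 0), Color.Red),
    new VertexPositionColor(new Vector3(200, 120, 0), Color.Red)
};

foreach (EffectPass pass in lineEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.LineList, verts, 0, 1);
}
```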
I'm trying to create a depth of field post process, but have no idea where to start (except render depth map, which I'm currently at). All the tutorials for it are either for XNA3.1, don't actually give you an explanation, or part of a book.
So, can you go through a detailed, step-by-step process on how DOF is rendered?
Here's a description of how to achieve a basic approximation of it using the "out of the box" features provided by XNA within the Reach profile.
Once you're across how to do it in C# using the inbuilt stuff, achieving it in HLSL will hopefully be a little more obvious.
Also, should you ever wish to produce a game for Windows Phone 7, you'll know where to start (as Windows Phone 7 doesn't support custom shaders at this point in time).
First we'll define some instance-level variables to hold the bits and pieces we need to produce the look:
BasicEffect effect;
List<Matrix> projections;
List<RenderTarget2D> renderTargets;
SpriteBatch spriteBatch;
Next, in the LoadContent() method, we'll start loading them up, starting with the SpriteBatch that we'll use to render the final scene:
spriteBatch = new SpriteBatch(GraphicsDevice);
Followed by an instance of BasicEffect:
effect = new BasicEffect(GraphicsDevice);
effect.EnableDefaultLighting();
effect.DiffuseColor = Color.White.ToVector3();
effect.View = Matrix.CreateLookAt(
Vector3.Backward * 9 + Vector3.Up * 9,
Vector3.Zero,
Vector3.Up);
effect.World = Matrix.Identity;
effect.Texture = Content.Load<Texture2D>("block");
effect.TextureEnabled = true;
The specifics of how the BasicEffect is configured aren't important here, merely that we have an effect to render with.
Next up, we're going to need a few projection matrices:
projections = new List<Matrix>() {
    Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(60f),
        GraphicsDevice.Viewport.AspectRatio,
        9f,
        200f),
    Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(60f),
        GraphicsDevice.Viewport.AspectRatio,
        7f,
        10f),
    Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(60f),
        GraphicsDevice.Viewport.AspectRatio,
        0.2f,
        8f)};
If you examine the last two parameters of each projection, you'll notice what we're effectively doing here is splitting the world up into "chunks" with each chunk covering a different range of distances from the camera.
e.g. everything from 9 units outward, anything between 7 and 10 units from the camera, and finally anything closer than 8 units.
(You'll need to tweak these distances depending on your scene. Please note the small amount of overlap)
Next we'll create some render targets:
var pp = GraphicsDevice.PresentationParameters;
renderTargets = new List<RenderTarget2D>()
{
    new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.Viewport.Width / 8,
        GraphicsDevice.Viewport.Height / 8,
        false, pp.BackBufferFormat, pp.DepthStencilFormat),
    new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.Viewport.Width / 4,
        GraphicsDevice.Viewport.Height / 4,
        false, pp.BackBufferFormat, pp.DepthStencilFormat),
    new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.Viewport.Width,
        GraphicsDevice.Viewport.Height,
        false, pp.BackBufferFormat, pp.DepthStencilFormat),
};
Each render target corresponds to one of the aforementioned "chunks". To achieve a really simplistic blur effect, each render target is given a different resolution, with the "furthest" chunk at a low resolution and the closest chunk at full resolution.
Jumping across to the Draw() method, we can render our scene chunks:
(Being sure not to render the background in each chunk)
effect.Projection = projections[0];
GraphicsDevice.SetRenderTarget(renderTargets[0]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
effect.Projection = projections[1];
GraphicsDevice.SetRenderTarget(renderTargets[1]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
effect.Projection = projections[2];
GraphicsDevice.SetRenderTarget(renderTargets[2]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
GraphicsDevice.SetRenderTarget(null);
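Since each chunk follows the same set-projection, set-target, clear, render pattern, the three blocks above can also be collapsed into a loop over the two lists:

```csharp
// Render each depth "chunk" to its matching target.
for (int i = 0; i < renderTargets.Count; i++)
{
    effect.Projection = projections[i];
    GraphicsDevice.SetRenderTarget(renderTargets[i]);
    GraphicsDevice.Clear(Color.Transparent);
    // render scene here
}
GraphicsDevice.SetRenderTarget(null); // back to the back buffer
```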
So now we've got our scene, broken up and blurred by distance, all that's left is to recombine it back together for our final image.
First step, render the (awesome) background:
GraphicsDevice.Clear(Color.CornflowerBlue);
Next, render each chunk, from furthest to closest:
spriteBatch.Begin(
SpriteSortMode.Deferred,
BlendState.AlphaBlend,
SamplerState.AnisotropicClamp,
null, null);
spriteBatch.Draw(renderTargets[0], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.Draw(renderTargets[1], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.Draw(renderTargets[2], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();
And voilà! We have an approximation of Depth Of Field, albeit a little rough around the proverbial edges.
Now if you're planning to stay within the confines of the Reach profile, you can improve the blur effect by rendering each chunk at multiple resolutions and combining the resulting images together using something like the Additive BlendState.
If, on the other hand, you're planning to branch out into writing custom shaders in the HiDef profile, the concepts are roughly the same, just the method of execution changes.
For example, swapping the low-resolution rendering for a more authentic Gaussian-style blur... or ditching the coarse-grained idea of chunks and moving to the relatively fine-grained method of blurring based off a depth map.
This is a follow on to my question How To Handle Image as Background to CAD Application
I applied the resizing/resampling code, but it is not making any difference. I am sure I do not know enough about GDI+ etc., so please excuse me if I seem muddled.
I am using a third-party graphics library (Piccolo). I do not know enough to be sure what it is doing under the hood, other than that it eventually wraps GDI+.
My test is to rotate the display at different zoom levels - this is the process that causes the worst performance hit. I know I am rotating the camera view. At zoom levels up to 1.0 there is no performance degradation, and rotation is smooth using the mouse wheel. The image has to be scaled to the CAD unit of 1 m per pixel at a zoom level of 1.0, and I have resized/resampled the image to match. I have tried different ways to speed this up based on the code given to me in the last question:
public static Bitmap ResampleImage(Image img, Size size) {
    using (logger.VerboseCall()) {
        var bmp = new Bitmap(size.Width, size.Height, PixelFormat.Format32bppPArgb);
        using (var gr = Graphics.FromImage(bmp)) {
            gr.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.Low;
            gr.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighSpeed;
            gr.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighSpeed;
            gr.DrawImage(img, new Rectangle(Point.Empty, size));
        }
        return bmp;
    }
}
I guess this speeds up the resample, but as far as I can tell it has no effect on the performance when rotating the display at high zoom levels. Using a performance profiler (ANTS), I was able to find the code causing the performance hit:
protected override void Paint(PPaintContext paintContext) {
    using (PUtil.logger.DebugCall()) {
        try {
            if (Image != null) {
                RectangleF b = Bounds;
                Graphics g = paintContext.Graphics;
                g.DrawImage(image, b);
            }
        }
        catch (Exception ex) {
            PUtil.logger.Error(string.Format("{0}\r\n{1}", ex.Message, ex.StackTrace));
            //----catch GDI+ OOM exceptions
        }
    }
}
The performance hit is entirely in g.DrawImage(image, b);
Bounds is the bounds of the image, of course. The catch block is there to catch GDI+ OOM exceptions, which also seem worse at high zoom levels.
The number of times this is called seems to increase as the zoom level increases...
There is another hit in the code painting the camera view, but I don't have enough information to explain that yet, except that it seems to paint all the layers attached to the camera - and all the objects on them, I assume - when the camera's view matrix and clip are applied to the paintContext (whatever that means).
So is there some other call than g.DrawImage(image, b); that I could use? Or am I at the mercy of the graphics engine? Unfortunately it is so embedded that it would be very hard for me to change.
Thanks again
I think you are using, if I'm not mistaken, the PImageNode object from Piccolo. The number of calls to that method can increase because the Piccolo engine tracks the "real" drawing area on the user's screen based on the zoom level (a kind of culling) and draws only the nodes that are visible. If you have a lot of PImageNode objects in your scene and zoom out, the number of PImageNode objects that need to be drawn increases, and with it the number of calls to that method.
As for performance:
1) Try SetStyle(ControlStyles.DoubleBuffer, true); on the PCanvas (if it isn't already set).
2) Look here: CodeProject
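For point 1, a sketch of where that call could live (assuming PCanvas ultimately derives from Control; the subclass name is made up):

```csharp
public class BufferedCanvas : PCanvas
{
    public BufferedCanvas()
    {
        // Reduce flicker and repaint cost by drawing to an off-screen
        // buffer first. These are standard WinForms Control styles.
        SetStyle(ControlStyles.DoubleBuffer, true);
        SetStyle(ControlStyles.UserPaint, true);
        SetStyle(ControlStyles.AllPaintingInWmPaint, true);
    }
}
```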
Regards.
Background: I'm using the SlimDX C# wrapper for DirectX, and am drawing many 2D sprites using the Sprite class (traditionally from the Direct3DX extension in the underlying DLLs). I'm drawing multiple hundreds of sprites to the screen at once, and the performance is awesome: on my quad core it uses something like 3-6% of the processor for my entire game, including logic for 10,000+ objects, AI routines on a second thread, etc. So clearly the sprites are being drawn with full hardware acceleration, and everything is as it should be.
Issue: The problem comes when I start introducing calls to the Line class. As soon as I draw 4 lines (for a drag selection box), processor usage skyrockets to 13-19%. This is with only four lines!
Things I have tried:
1. Turning line antialiasing off and on.
2. Turning GLLines off and on.
3. Manually calling line.Begin and line.End around my calls to draw.
4. Omitting all calls to line.Begin and line.End.
5. Ensuring that my calls to line.Draw are not inside a sprite.Begin / sprite.End block.
6. Calling line.Draw inside a sprite.Begin / sprite.End block.
7. Rendering 4 lines, or rendering 300.
8. Turning off all sprite and text rendering, and just leaving the line rendering for 4 lines (to see if this was some sort of mode-changing issue).
9. Most combinations of the above.
In general, none of these had a significant impact on performance. #3 reduced processor usage by maybe 2%, but even then it's still 8% or more higher than it should be. The strangest thing is that #7 had absolutely zero impact: it was just as slow with 4 lines as with 300. The only thing I can figure is that the lines are being rendered in software for some reason, and/or that they cause the graphics card to continually switch back and forth between drawing modes.
Matrix Approach:
If anyone knows of any fix to the above issue, then I'd love to hear it!
But I'm working under the assumption that this might just be an issue inside DirectX in general, so I've been pursuing another route: making my own sprite-based line. Essentially, I've got a 1px white image, and I'm using the diffuse color and transforms to draw the lines. This works, performance-wise: drawing 300 "lines" like this puts me in the 3-6% processor-utilization range I'm looking for on my quad core.
I have two problems with my pixel-stretch line technique, which I'm hoping that someone more knowledgeable about transforms can help me with. Here's my current code for a horizontal line:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 ) / 2.0f;
    Matrix m = Matrix.Transformation2D( new Vector2( X1 + width, Y ), 0f, new Vector2( width, HalfHeight ),
                                        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1 + width, Y, 0 ), Color );
}
This works, insofar as it draws lines of mostly the right size at mostly the right location on the screen. However, things appear shifted to the right, which is strange. I'm not quite sure if my matrix approach is right at all: I just want to scale a 1x1 sprite by some amount of pixels horizontally, and a different amount vertically. Then I need to be able to position them -- by the center point is fine, and I think that's what I'll have to do, but if I could position it by upper-left that would be even better. This seems like a simple problem, but my knowledge of matrices is pretty weak.
This would get purely-horizontal and purely-vertical lines working for me, which is most of the battle. I could live with just that, and use some other sort of graphic in locations which I am currently using angled lines. But it would be really nice if there was a way for me to draw lines that are angled using this stretched-pixel approach. In other words, draw a line from 1,1 to 7,19, for instance. With matrix rotation, etc, it seems like this is feasible, but I don't know where to even begin other than guessing and checking, which would take forever.
Any and all help is much appreciated!
It sounds like a pipeline stall. You're switching some mode between rendering sprites and rendering lines, which forces the graphics card to empty its pipeline before starting on the new primitive.
Before you added the lines, were those sprites all you rendered, or were there other elements on-screen, using other modes already?
Okay, I've managed to get horizontal lines working after much experimentation. This works without the strange offsetting I was seeing before; the same principle will work for vertical lines as well. This has vastly better performance than the line class, so this is what I'll be using for horizontal and vertical lines. Here's the code:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 );
    Matrix m = Matrix.Transformation2D( new Vector2( X1, Y - HalfHeight ), 0f, new Vector2( width, HalfHeight * 2 ),
                                        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y - HalfHeight, 0 ), Color );
}
I'd like to have a version of this that would work for lines at angles, too (as mentioned above). Any suggestions for that?
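In case it helps anyone later: the same stretched-pixel trick ought to extend to angled lines by using the rotation-center and rotation-angle parameters of Matrix.Transformation2D. An untested sketch, following the same conventions as the horizontal version above:

```csharp
public void DrawLineAngled( float X1, float Y1, float X2, float Y2, float HalfHeight, Color Color )
{
    float dx = X2 - X1, dy = Y2 - Y1;
    float length = (float)Math.Sqrt( dx * dx + dy * dy );
    float angle = (float)Math.Atan2( dy, dx );

    // Stretch the 1px sprite to (length x 2*HalfHeight), then rotate the
    // result around the line's start point so it aims at (X2, Y2).
    Matrix m = Matrix.Transformation2D(
        new Vector2( X1, Y1 - HalfHeight ), 0f,   // scaling center
        new Vector2( length, HalfHeight * 2 ),    // scale
        new Vector2( X1, Y1 ), angle,             // rotation center + angle
        Vector2.Zero );                           // no extra translation
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y1 - HalfHeight, 0 ), Color );
}
```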