I'm trying to create a depth of field post process, but have no idea where to start (except render depth map, which I'm currently at). All the tutorials for it are either for XNA3.1, don't actually give you an explanation, or part of a book.
So, can you go through a detailed, step-by-step process on how DOF is rendered?
Here's a description on how to achieve a basic approximation of it using the "out of the box" features provided by XNA within the Reach profile.
Once you know how to do it in C# using the built-in features, achieving it in HLSL will hopefully be a little more obvious.
Also, should you ever wish to produce a game for Windows Phone 7, you'll know where to start (as Windows Phone 7 doesn't support custom shaders at this point in time).
First we'll define some instance-level variables to hold the bits and pieces we need to produce the look:
BasicEffect effect;
List<Matrix> projections;
List<RenderTarget2D> renderTargets;
SpriteBatch spriteBatch;
Next, in the LoadContent() method, we'll start loading them up, starting with the SpriteBatch that we'll use to render the final scene:
spriteBatch = new SpriteBatch(GraphicsDevice);
Followed by an instance of BasicEffect:
effect = new BasicEffect(GraphicsDevice);
effect.EnableDefaultLighting();
effect.DiffuseColor = Color.White.ToVector3();
effect.View = Matrix.CreateLookAt(
Vector3.Backward * 9 + Vector3.Up * 9,
Vector3.Zero,
Vector3.Up);
effect.World = Matrix.Identity;
effect.Texture = Content.Load<Texture2D>("block");
effect.TextureEnabled = true;
The specifics of how the BasicEffect is configured aren't important here; what matters is that we have an effect to render with.
Next up we're going to need a few projection matrices:
projections = new List<Matrix>() {
Matrix.CreatePerspectiveFieldOfView(
MathHelper.ToRadians(60f),
GraphicsDevice.Viewport.AspectRatio,
9f,
200f),
Matrix.CreatePerspectiveFieldOfView(
MathHelper.ToRadians(60f),
GraphicsDevice.Viewport.AspectRatio,
7f,
10f),
Matrix.CreatePerspectiveFieldOfView(
MathHelper.ToRadians(60f),
GraphicsDevice.Viewport.AspectRatio,
0.2f,
8f)};
If you examine the last two parameters of each projection, you'll notice what we're effectively doing here is splitting the world up into "chunks" with each chunk covering a different range of distances from the camera.
e.g. everything from 9 units outward, anything between 7 and 10 units from the camera, and finally anything closer than 8 units.
(You'll need to tweak these distances depending on your scene. Note the small amount of overlap between chunks.)
Next we'll create some render targets:
var pp = GraphicsDevice.PresentationParameters;
renderTargets = new List<RenderTarget2D>()
{
new RenderTarget2D(GraphicsDevice,
GraphicsDevice.Viewport.Width / 8,
GraphicsDevice.Viewport.Height / 8,
false, pp.BackBufferFormat, pp.DepthStencilFormat),
new RenderTarget2D(GraphicsDevice,
GraphicsDevice.Viewport.Width / 4,
GraphicsDevice.Viewport.Height / 4,
false, pp.BackBufferFormat, pp.DepthStencilFormat),
new RenderTarget2D(GraphicsDevice,
GraphicsDevice.Viewport.Width,
GraphicsDevice.Viewport.Height,
false, pp.BackBufferFormat, pp.DepthStencilFormat),
};
Each render target corresponds to one of the aforementioned "chunks". To achieve a really simplistic blur effect, each render target is set to a different resolution, with the "furthest" chunk rendered at a low resolution and the closest chunk at a high resolution.
Jumping across to the Draw() method, we can render our scene chunks:
(Being sure not to render the background in each chunk)
effect.Projection = projections[0];
GraphicsDevice.SetRenderTarget(renderTargets[0]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
effect.Projection = projections[1];
GraphicsDevice.SetRenderTarget(renderTargets[1]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
effect.Projection = projections[2];
GraphicsDevice.SetRenderTarget(renderTargets[2]);
GraphicsDevice.Clear(Color.Transparent);
// render scene here
GraphicsDevice.SetRenderTarget(null);
So now we've got our scene, broken up and blurred by distance, all that's left is to recombine it back together for our final image.
First step, render the (awesome) background:
GraphicsDevice.Clear(Color.CornflowerBlue);
Next render each chunk, from furthest to closest:
spriteBatch.Begin(
SpriteSortMode.Deferred,
BlendState.AlphaBlend,
SamplerState.AnisotropicClamp,
null, null);
spriteBatch.Draw(renderTargets[0], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.Draw(renderTargets[1], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.Draw(renderTargets[2], GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();
And voilà! We have an approximation of Depth Of Field, albeit one that's a little rough around the proverbial edges.
Now if you're planning to stay within the confines of the Reach profile, you can improve the blur effect by rendering each chunk at multiple resolutions and combining the resulting images together using something like the Additive BlendState.
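A rough sketch of that multi-resolution combine might look like the following. The render target names here are hypothetical extras holding the same chunk rendered at quarter and half resolution; tinting each pass to 50% keeps the additive sum near full brightness:

```csharp
// Hypothetical: the same "far" chunk rendered twice, at 1/4 and 1/2 resolution,
// recombined additively to soften the blocky look of a single low-res pass.
spriteBatch.Begin(
    SpriteSortMode.Deferred,
    BlendState.Additive,
    SamplerState.AnisotropicClamp,
    null, null);
spriteBatch.Draw(farChunkQuarterRes, GraphicsDevice.Viewport.Bounds, Color.White * 0.5f);
spriteBatch.Draw(farChunkHalfRes, GraphicsDevice.Viewport.Bounds, Color.White * 0.5f);
spriteBatch.End();
```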
If, on the other hand, you're planning to branch out into writing custom shaders in the HiDef profile, the concepts are roughly the same, just the method of execution changes.
For example, swapping the low-resolution rendering for a more authentic Gaussian-style blur... or... ditching the coarse-grained idea of chunks and moving to the relatively fine-grained method of blurring based off a depth map.
I am writing a particle engine and have noticed it is massively slower than it should be. (I've written highly un-optimized 3D C++ particle engines that can render 50k particles at 60 fps; this one drops to 32 fps at around 1.2k.) I did some analysis on the code, assuming the rendering of the particles or the rotations was the most CPU-intensive operation, but I discovered that two little properties of the graphics object are in fact eating up over 70% of my performance...
public void RotateParticle(Graphics g, RectangleF r,
RectangleF rShadow, float angle,
Pen particleColor, Pen particleShadow)
{
//Create a matrix
Matrix m = new Matrix();
PointF shadowPoint = new PointF(rShadow.Left + (rShadow.Width / 1),
rShadow.Top + (rShadow.Height / 1));
PointF particlePoint = new PointF(r.Left + (r.Width / 1),
r.Top + (r.Height / 2));
//Angle of the shadow gets set to the angle of the particle,
//that way we can rotate them at the same rate
float shadowAngle = angle;
m.RotateAt(shadowAngle, shadowPoint);
g.Transform = m;
//rotate and draw the shadow of the Particle
g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
//Reset the matrix for the next draw and dispose of the first matrix
//NOTE: Using one matrix for both the shadow and the partice causes one
//to rotate at half the speed of the other.
g.ResetTransform();
m.Dispose();
//Same stuff as before but for the actual particle
Matrix m2 = new Matrix();
m2.RotateAt(angle, particlePoint);
//Set the current draw location to the rotated matrix point
//and draw the Particle
g.Transform = m2;
g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
m2.Dispose();
}
What is killing my performance is specifically these lines:
g.Transform = m;
g.Transform = m2;
A little background: the Graphics object is grabbed from PaintEventArgs. It renders particles to the screen in a render-particles method, which calls this method to do any rotations. Multi-threading isn't a solution, as the Graphics object cannot be shared between multiple threads. Here is a link to the code analysis I ran, so you can see what is happening as well:
https://gyazo.com/229cfad93b5b0e95891eccfbfd056020
I am kinda thinking this is something that can't really be helped because it looks like the property itself is destroying the performance and not anything I've actually done (though I'm sure there's room for improvement), especially since the dll the class calls into is using the most cpu power. Anyways, any help would be greatly appreciated in trying to optimize this...maybe I'll just enable/disable rotation to increase performance, we'll see...
Well, you should scratch your head a while over the profile results you see. There is something else going on when you assign the Transform property, something you can reason out by noting that ResetTransform() does not cost anything. That doesn't make sense, of course; that method also changes the Transform property.
And do note that it should be DrawRectangle() that is the expensive method, since that is the one that actually puts the pedal to the metal and generates real drawing commands. We can't see what it costs from your screenshot; it can't be more than 30%. That is not nearly enough.
I think what you see here is an obscure feature of GDI+: it batches drawing commands. In other words, internally it generates a list of drawing commands and does not pass them to the video driver until it has to. The native winapi has a function that explicitly forces that list to be flushed, GdiFlush(). That is, however, not exposed by the .NET Graphics class; it is done automagically.
So a pretty attractive theory is that GDI+ internally calls GdiFlush() when you assign the Transform property, and the cost you are seeing is actually the cost of a previous DrawRectangle() call.
You need to get ahead by giving it more opportunity to batch. Strongly favor the Graphics class methods that let you draw a large number of items in one call. In other words, don't draw each individual particle; draw many at once. You'll like DrawRectangles(), DrawLines(), DrawPath(). Unfortunately there is no DrawPolygons(), the one you'd really like; technically you could pinvoke PolyPolygon(), but that's hard to get going.
If my theory is incorrect, then do note that you don't need Graphics.Transform at all. You can also use Matrix.TransformPoints() and Graphics.DrawPolygon(). Whether you can truly get ahead is a bit doubtful; the Graphics class doesn't use GPU acceleration directly, so it never competes that well with DirectX.
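To make the batching suggestion concrete, here is an untested sketch; the particles collection and its Bounds property are hypothetical stand-ins for whatever your engine actually stores:

```csharp
// One DrawRectangles() call instead of one DrawRectangle() per particle,
// giving GDI+ a much bigger batch to work with before it has to flush.
RectangleF[] rects = new RectangleF[particles.Count];
for (int i = 0; i < particles.Count; i++)
    rects[i] = particles[i].Bounds;
g.DrawRectangles(particleColor, rects);
```

Per-particle rotation doesn't fit a single batch directly; one option is to pre-rotate each particle's corner points with Matrix.TransformPoints() and batch the results through DrawLines() or DrawPath().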
I'm not sure if the following would help, but it's worth trying. Instead of allocating/assigning/disposing a new Matrix, use the preallocated Graphics.Transform via the Graphics methods RotateTransform, ScaleTransform, and TranslateTransform (and make sure to always call ResetTransform when done).
The Graphics class does not contain a direct equivalent of the Matrix.RotateAt method, but it's not hard to make one:
public static class GraphicsExtensions
{
public static void RotateTransformAt(this Graphics g, float angle, PointF point)
{
g.TranslateTransform(point.X, point.Y);
g.RotateTransform(angle);
g.TranslateTransform(-point.X, -point.Y);
}
}
Then you can update your code like this and see if that helps
public void RotateParticle(Graphics g, RectangleF r,
RectangleF rShadow, float angle,
Pen particleColor, Pen particleShadow)
{
PointF shadowPoint = new PointF(rShadow.Left + (rShadow.Width / 1),
rShadow.Top + (rShadow.Height / 1));
PointF particlePoint = new PointF(r.Left + (r.Width / 1),
r.Top + (r.Height / 2));
//Angle of the shadow gets set to the angle of the particle,
//that way we can rotate them at the same rate
float shadowAngle = angle;
//rotate and draw the shadow of the Particle
g.RotateTransformAt(shadowAngle, shadowPoint);
g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
g.ResetTransform();
//Same stuff as before but for the actual particle
g.RotateTransformAt(angle, particlePoint);
g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
g.ResetTransform();
}
Can you create an off-screen buffer to draw your particles, and have OnPaint simply render your off-screen buffer? If you need to periodically update your screen, you can invalidate your on-screen control/canvas, say using a Timer:
Bitmap bmp;
Graphics gOff;
void Initialize() {
bmp = new Bitmap(width, height);
gOff = Graphics.FromImage(bmp);
}
private void OnPaint(object sender, System.Windows.Forms.PaintEventArgs e) {
e.Graphics.DrawImage(bmp, 0, 0);
}
void RenderParticles() {
foreach (var particle in Particles)
RotateParticle(gOff, ...);
}
On another note, is there any reason to create a Matrix object every time you call RotateParticle? I haven't tried it, but the MSDN docs seem to suggest that both the getter and setter of Graphics.Transform create a copy, so you can keep a Matrix object at, say, class level and reuse it for the transform. Just make sure to call Matrix.Reset() before using it. This might get you some performance improvement.
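A minimal sketch of that class-level-Matrix idea, assuming a simplified RotateParticle signature:

```csharp
// Reuse one Matrix for every particle instead of new/Dispose per call.
private readonly Matrix reusableMatrix = new Matrix();

public void RotateParticle(Graphics g, RectangleF r, float angle, Pen pen)
{
    reusableMatrix.Reset();  // clear the previous particle's rotation
    reusableMatrix.RotateAt(angle,
        new PointF(r.Left + r.Width / 2f, r.Top + r.Height / 2f));
    g.Transform = reusableMatrix;  // the setter copies the matrix, per MSDN
    g.DrawRectangle(pen, r.X, r.Y, r.Width, r.Height);
    g.ResetTransform();
}
```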
I am creating a 2D platformer type game in XNA.
I currently have a camera object, with a position/rotation/zoomlevel that I use to generate a transformation matrix to pass to SpriteBatch.Begin(). This allows me to draw at in game coordinates instead of screen coordinates.
The relevant bit of the Camera code:
public Matrix GetViewMatrix() {
cameraMatrix = Matrix.CreateScale(new Vector3(1f, -1f, 1f))
* Matrix.CreateTranslation(position.X, position.Y, 0f)
* Matrix.CreateScale(new Vector3(zoom,zoom,1f))
* Matrix.CreateRotationZ(rotation)
* Matrix.CreateTranslation(new Vector3(screenWidth*0.5f,screenHeight*0.5f,0));
return cameraMatrix;
}
Which is used like so:
spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null,
null, null, camera.GetViewMatrix());
//Draw Stuff
spriteBatch.End();
The problem is, that in order to get anything to actually draw, I have to scale by (1,-1) when I call spriteBatch.Draw(), otherwise I believe the textures get depth culled.
spriteBatch.Draw(content.Load<Texture2D>("whiteSquare"), Vector2.Zero, null,
Color.White, 0f, Vector2.Zero,
new Vector2(1f, -1f),
SpriteEffects.None,0f);
Notice the Vector scaling argument in the 3rd line of the last sample. My question is twofold:
1. How do I avoid having to pass this scaling argument/calling the longest form of spriteBatch.Draw()? (Kind of a violation of DRY, though I could wrap it, I suppose.)
2. Am I doing something wrong? Not "it doesn't work" wrong, but "that's the wrong way to approach that problem" wrong. I have seen mentions of viewport.Update() functions, Matrix.CreateOrthographic, etc., but I'm not quite sure if using them is simpler/better than a simple custom camera sort of deal.
Thank you very much.
Why are you using Matrix.CreateScale(new Vector3(1f, -1f, 1f))? If you're creating a 2D platformer, the correct way to create the camera transform is:
Matrix.CreateTranslation(new Vector3(-Position.X, -Position.Y, 0))*
Matrix.CreateRotationZ(Rotation)*
Matrix.CreateScale(new Vector3(Zoom, Zoom, 1))*
Matrix.CreateTranslation(
new Vector3(
GraphicsDevice.Viewport.Width*0.5f,
GraphicsDevice.Viewport.Height*0.5f, 0));
When using this camera transform you can use the default (1,1) scale on sprites.
Answering your questions:
In the end you will most likely need to call the longest form of spriteBatch.Draw() anyway, because it gives you the most options for manipulating sprites, which you will most likely need later. So that is not a problem in itself; however, a negative scale does not seem like the correct way of drawing sprites.
You are using the right way to draw sprites with a camera transform in 2D; however, as I mentioned before, the transform itself seems incorrect.
P.S. Don't load content in the draw call ( content.Load<Texture2D>("whiteSquare") ). Although it's cached and will not actually load each time, you still should never do that.
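To illustrate that last point, a sketch of the usual pattern: load the texture once in LoadContent() and reuse the field in Draw(). The field name here is just an example:

```csharp
Texture2D whiteSquare;  // class-level field

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    whiteSquare = Content.Load<Texture2D>("whiteSquare");  // loaded exactly once
}

// later, inside Draw(); with the corrected camera transform
// the negative scale (and thus the long overload) is no longer needed:
spriteBatch.Draw(whiteSquare, Vector2.Zero, Color.White);
```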
I'm using XNA/MonoGame to draw some 2D polygons for me. I'd like a Texture I have to repeat on multiple polygons, based on their X and Y coordinates.
here's an example of what I mean:
I had thought that doing something like this would work (assuming a 256x256 pixel texture)
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is draw with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long-horizontal lines that look like the texture has been extremely stretched.
(to check if the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList of 1 - this had the same result on the texture, and the expected result of drawing only one half of my blocks.)
what's the correct way to achieve this effect?
My math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
Some maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ you need that code. But importantly, your SamplerState and other settings will get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! e.g.:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
if you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly after (depending on what settings you use for 2D rendering), all of which comes with a little overhead (and makes your code more fragile; you're more likely to screw up :P)
one way to avoid this is to draw all your 3D stuff and 2D stuff separately (all one, then all the other).
(in my case, I haven't got my code completely separated out in this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - Draw Rectangle in XNA using SpriteBatch - and 3D stuff only for less-regular and/or textured shapes.)
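As a sketch of the state juggling described above: SpriteBatch.Begin() in XNA 4.0 sets its own render states, so before any 3D draws that follow sprite drawing you'd restore something like the following (the exact choices depend on your scene):

```csharp
// States SpriteBatch sets behind your back; reset them before drawing 3D geometry.
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; // the wrap state from above
```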
I'm looking over this tutorial to mix different textures based on the types of pixels I want to pass:
http://www.crappycoding.com/tag/xna/page/2/
and so far I think I understand the whole concept, except for a couple of lines in creating the AlphaTestEffect object, as there is very little explanation given and I have no clue what they are there for and why they're set up like that.
Matrix projection = Matrix.CreateOrthographicOffCenter(0, PlanetDataSize, PlanetDataSize, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
alphaTestEffect.Projection = halfPixelOffset * projection;
Could somebody please explain what these lines are for and what they do? I hope it won't take too much time, and my question is not a silly one.
cheers
Lucas
Because he is using a custom effect instead of the default SpriteBatch one, he has to make sure the projection works the same way as the default (or rather, he's making it the same to make everything play nice together).
http://blogs.msdn.com/b/shawnhar/archive/2010/04/05/spritebatch-and-custom-shaders-in-xna-game-studio-4-0.aspx
It's explained there if you scroll down a bit:
" This code configures BasicEffect to replicate the default SpriteBatch coordinate system:"
The default SpriteBatch camera is a simple orthographic projection with a half pixel offset to display 2D textures better. That can be explained here:
http://drilian.com/2008/11/25/understanding-half-pixel-and-half-texel-offsets/
Background: I'm using the SlimDX C# wrapper for DirectX, and am drawing many 2D sprites using the Sprite class (traditionally from the Direct3DX extension in the underlying dlls). I'm drawing multiple hundreds of sprites to the screen at once, and the performance is awesome -- on my quad core, it's using something like 3-6% of the processor for my entire game, including logic for 10,000+ objects, ai routines on a second thread, etc, etc. So clearly the sprites are being drawing using full hardware acceleration, and everything is as it should be.
Issue: The problem comes when I start introducing calls to the Line class. As soon as I draw 4 lines (for a drag selection box), processor usage skyrockets to 13-19%. This is with only four lines!
Things I have tried:
Turning line antialiasing off and on.
Turning GLLines off and on.
Manually calling the line.begin and line.end around my calls to draw.
Omitting all calls to line.begin and line.end.
Ensuring that my calls to line.draw are not inside a sprite.begin / sprite.end block.
Calling line.draw inside a sprite.begin / sprite.end block.
Rendering 4 lines, or rendering 300.
Turning off all sprite and text rendering, and just leaving the line rendering for 4 lines (to see if this was some sort of mode-changing issue).
Most combinations of the above.
In general, none of these had a significant impact on performance. #3 reduced processor usage by maybe 2%, but even then it's still 8% or more higher than it should be. The strangest thing is that #7 from above had absolutely zero impact on performance -- it was just as slow with 4 lines as it was with 300. The only thing that I can figure is that this is being run in software for some reason, and/or that it is causing the graphics card to continually switch back and forth between some sort of drawing modes.
Matrix Approach:
If anyone knows of any fix to the above issue, then I'd love to hear it!
But I'm under the assumption that this might just be an issue inside of directx in general, so I've been pursuing another route -- making my own sprite-based line. Essentially, I've got a 1px white image, and I'm using the diffuse color and transforms to draw the lines. This works, performance-wise -- drawing 300 of the "lines" like this puts me in the 3-6% processor utilization performance range that I'm looking for on my quad core.
I have two problems with my pixel-stretch line technique, which I'm hoping that someone more knowledgeable about transforms can help me with. Here's my current code for a horizontal line:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
float width = ( X2 - X1 ) / 2.0f;
Matrix m = Matrix.Transformation2D( new Vector2( X1 + width, Y ), 0f, new Vector2( width, HalfHeight ),
Vector2.Zero, 0, Vector2.Zero );
sprite.Transform = m;
sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1 + width, Y, 0 ), Color );
}
This works, insofar as it draws lines of mostly the right size at mostly the right location on the screen. However, things appear shifted to the right, which is strange. I'm not quite sure if my matrix approach is right at all: I just want to scale a 1x1 sprite by some amount of pixels horizontally, and a different amount vertically. Then I need to be able to position them -- by the center point is fine, and I think that's what I'll have to do, but if I could position it by upper-left that would be even better. This seems like a simple problem, but my knowledge of matrices is pretty weak.
This would get purely-horizontal and purely-vertical lines working for me, which is most of the battle. I could live with just that, and use some other sort of graphic in locations which I am currently using angled lines. But it would be really nice if there was a way for me to draw lines that are angled using this stretched-pixel approach. In other words, draw a line from 1,1 to 7,19, for instance. With matrix rotation, etc, it seems like this is feasible, but I don't know where to even begin other than guessing and checking, which would take forever.
Any and all help is much appreciated!
It sounds like a pipeline stall. You're switching some mode between rendering sprites and rendering lines, which forces the graphics card to empty its pipeline before starting on the new primitive.
Before you added the lines, were those sprites all you rendered, or were there other elements on-screen, using other modes already?
Okay, I've managed to get horizontal lines working after much experimentation. This works without the strange offsetting I was seeing before; the same principle will work for vertical lines as well. This has vastly better performance than the line class, so this is what I'll be using for horizontal and vertical lines. Here's the code:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
float width = ( X2 - X1 );
Matrix m = Matrix.Transformation2D( new Vector2( X1, Y - HalfHeight ), 0f, new Vector2( width, HalfHeight * 2 ),
Vector2.Zero, 0, Vector2.Zero );
sprite.Transform = m;
sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y - HalfHeight, 0 ), Color );
}
I'd like to have a version of this that would work for lines at angles, too (as mentioned above). Any suggestions for that?
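Building on the horizontal version above, here is an untested sketch of an angled line using the same stretched-pixel idea: scale the 1px texture to the line's length, then rotate it about the start point by the angle from Math.Atan2. I haven't verified this against SlimDX's parameter order for Matrix.Transformation2D, so treat it as a starting point rather than a finished implementation:

```csharp
public void DrawLineAngled( float X1, float Y1, float X2, float Y2, float HalfHeight, Color Color )
{
    float dx = X2 - X1;
    float dy = Y2 - Y1;
    // length of the line, and its angle from the positive X axis
    float length = (float)Math.Sqrt( dx * dx + dy * dy );
    float angle = (float)Math.Atan2( dy, dx );
    Vector2 topLeft = new Vector2( X1, Y1 - HalfHeight );
    // scale the 1px texture about its draw position (as in the horizontal version),
    // then rotate the stretched quad about the line's start point (X1, Y1)
    Matrix m = Matrix.Transformation2D( topLeft, 0f, new Vector2( length, HalfHeight * 2 ),
        new Vector2( X1, Y1 ), angle, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y1 - HalfHeight, 0 ), Color );
}
```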