Efficient Line Drawing In Direct3D (Transformation2D?) - c#

Background: I'm using the SlimDX C# wrapper for DirectX, and am drawing many 2D sprites using the Sprite class (traditionally from the D3DX extension in the underlying DLLs). I'm drawing multiple hundreds of sprites to the screen at once, and the performance is awesome -- on my quad core, it's using something like 3-6% of the processor for my entire game, including logic for 10,000+ objects, AI routines on a second thread, and so on. So clearly the sprites are being drawn with full hardware acceleration, and everything is as it should be.
Issue: The problem comes when I start introducing calls to the Line class. As soon as I draw 4 lines (for a drag selection box), processor usage skyrockets to 13-19%. This is with only four lines!
Things I have tried:
1. Turning line antialiasing off and on.
2. Turning GLLines off and on.
3. Manually calling line.begin and line.end around my calls to draw.
4. Omitting all calls to line.begin and line.end.
5. Ensuring that my calls to line.draw are not inside a sprite.begin / sprite.end block.
6. Calling line.draw inside a sprite.begin / sprite.end block.
7. Rendering 4 lines, or rendering 300.
8. Turning off all sprite and text rendering, and leaving just the line rendering for 4 lines (to see if this was some sort of mode-changing issue).
9. Most combinations of the above.
In general, none of these had a significant impact on performance. #3 reduced processor usage by maybe 2%, but even then it's still 8% or more higher than it should be. The strangest thing is that #7 from above had absolutely zero impact on performance -- it was just as slow with 4 lines as it was with 300. The only thing that I can figure is that this is being run in software for some reason, and/or that it is causing the graphics card to continually switch back and forth between some sort of drawing modes.
If anyone knows of a fix to the above issue, I'd love to hear it!
Matrix Approach:
But I'm operating under the assumption that this might just be an issue inside DirectX in general, so I've been pursuing another route -- making my own sprite-based line. Essentially, I've got a 1px white image, and I'm using the diffuse color and transforms to draw the lines. This works, performance-wise: drawing 300 "lines" this way puts me back in the 3-6% processor utilization range that I'm looking for on my quad core.
I have two problems with my pixel-stretch line technique, which I'm hoping that someone more knowledgeable about transforms can help me with. Here's my current code for a horizontal line:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 ) / 2.0f;
    Matrix m = Matrix.Transformation2D( new Vector2( X1 + width, Y ), 0f, new Vector2( width, HalfHeight ),
        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1 + width, Y, 0 ), Color );
}
This works, insofar as it draws lines of mostly the right size at mostly the right location on the screen. However, everything appears shifted to the right, which is strange. I'm not quite sure my matrix approach is right at all: I just want to scale a 1x1 sprite by some number of pixels horizontally and a different number vertically. Then I need to be able to position it -- positioning by the center point is fine, and I think that's what I'll have to do, but positioning by the upper-left corner would be even better. This seems like a simple problem, but my knowledge of matrices is pretty weak.
This would get purely horizontal and purely vertical lines working for me, which is most of the battle. I could live with just that, and use some other sort of graphic in the places where I currently use angled lines. But it would be really nice if there were a way to draw angled lines with this stretched-pixel approach -- in other words, to draw a line from 1,1 to 7,19, for instance. With matrix rotation and so on, it seems like this should be feasible, but I don't know where to begin other than guessing and checking, which would take forever.
Any and all help is much appreciated!

It sounds like a pipeline stall. You're switching some mode between rendering sprites and rendering lines that forces the graphics card to empty its pipeline before starting on the new primitive.
Before you added the lines, were those sprites all you rendered, or were there other elements on-screen, using other modes already?

Okay, I've managed to get horizontal lines working after much experimentation. This works without the strange offsetting I was seeing before; the same principle will work for vertical lines as well. This has vastly better performance than the line class, so this is what I'll be using for horizontal and vertical lines. Here's the code:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 );
    Matrix m = Matrix.Transformation2D( new Vector2( X1, Y - HalfHeight ), 0f, new Vector2( width, HalfHeight * 2 ),
        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y - HalfHeight, 0 ), Color );
}
I'd like to have a version of this that would work for lines at angles, too (as mentioned above). Any suggestions for that?
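For reference, here is an untested sketch of how the angled version might look, still using SlimDX's Matrix.Transformation2D. The choice of rotation center (rotating the stretched bar around its first endpoint) is my own assumption, not verified code from this thread:

public void DrawLineAngled( int X1, int Y1, int X2, int Y2, float HalfHeight, Color Color )
{
    float dx = X2 - X1;
    float dy = Y2 - Y1;
    float length = (float)Math.Sqrt( dx * dx + dy * dy );
    float angle = (float)Math.Atan2( dy, dx ); // radians, measured from the +X axis
    // First scale the 1x1 sprite into a length-by-thickness bar whose
    // upper-left corner is at (X1, Y1 - HalfHeight), exactly as in the
    // horizontal version; then rotate that bar about the first endpoint.
    Matrix m = Matrix.Transformation2D( new Vector2( X1, Y1 - HalfHeight ), 0f,
        new Vector2( length, HalfHeight * 2 ),
        new Vector2( X1, Y1 ), angle, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y1 - HalfHeight, 0 ), Color );
}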

Related

Efficient way to draw circles

I'm drawing circles in the following way:
for each pixel px
{
    if (isInside(px)) px.color = white
    else px.color = black
}

bool isInside(pixel p)
{
    for each circle cir
    {
        if (PixelInsideCircle(p, cir)) return true
    }
    return false
}

bool PixelInsideCircle(pixel p, circle cir)
{
    float x = p.pos.x - cir.pos.x, y = p.pos.y - cir.pos.y;
    return x*x + y*y <= cir.radius*cir.radius;
}
Here's the result (screenshot omitted):
There are around 50 circles. Is there any way to optimize this? I'm using Unity3D: I fill a RenderTexture using a compute shader and draw it directly to the camera with Graphics.Blit. I'm drawing only circles, and I want to increase the count from 50 to 1000. I've tried an AABB and a k-d tree, but I couldn't figure out how to implement them correctly; using the tree only worsened performance. I also thought about running an intersection test for every column, but I'm not sure if that's a good idea. I'm making this for Android and iOS. Any help?
I do not code in/with Unity/C#/DirectX, but if you insist on filling by pixels, see
Is there a more efficient way of texturing a circle?
for some ideas on easing the math...
I would not use compute shaders, but would instead render quads (AABBs) for each circle using vertex + fragment shaders.
As a next step I would try a geometry shader that emits a triangle fan around each circle (so the ratio between filled and empty space is better); this also requires just a center and radius instead of an AABB, so you can use POINTS instead of QUADS. See:
rendering cubics in GLSL
It does similar things (but in GLSL). Also, I noticed you have:
return (p.pos.x - cir.pos.x)^2 + (p.pos.y - cir.pos.y)^2 - (cir.radius)^2 <= 0
try to change it to:
return (p.pos.x - cir.pos.x)^2 + (p.pos.y - cir.pos.y)^2 <= (cir.radius)^2
It's one operation less per pixel. Also, (cir.radius)^2 should be passed to the fragment shader from the vertex (or geometry) shader so it does not need to be computed on a per-pixel basis.
Using compute shaders and checking distances is probably the fastest way.
In the worst case, with 1000 circles, PixelInsideCircle executes 1000 times per pixel; in the best case, just once. When a pixel is found inside a circle, the loop exits early and returns white.
This is faster than any hybrid CPU (quadtree) + GPU (compute shader) solution. Let your GPU run everything in a single loop per pixel.
Only the pixel count (width * height) times the circle count affects performance. You could render to a smaller texture (50-99% of full size) and upscale in the Blit; that's even better for mobile, since the screens are smaller.
Other solutions using meshes or circle textures would also be bad, as mobile GPUs are memory-bandwidth bound: passing more commands and data around is worse than computing on the GPU itself.
You can try replacing your PixelInsideCircle with HLSL's distance() or length() (they probably involve an internal sqrt), but since they are intrinsic functions, they may well be faster. Just test it.
Do you run this once, like a map generator, or does it run every frame?
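For clarity, here is the early-exit test described above written out as plain C#. This is a CPU-side illustration only; in the actual implementation the same loop would live in the compute shader (HLSL), and the Circle struct and field names here are hypothetical:

struct Circle
{
    public float X, Y, RadiusSquared; // square the radius once, up front
}

static bool IsInside(float px, float py, Circle[] circles)
{
    foreach (var c in circles)
    {
        float dx = px - c.X, dy = py - c.Y;
        if (dx * dx + dy * dy <= c.RadiusSquared)
            return true; // early exit at the first containing circle
    }
    return false; // pixel stays black
}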

Graphics.Transform is massively inefficient, what can I do about this?

I am writing a particle engine and have noticed it is massively slower than it should be (I've written highly unoptimized 3D C++ particle engines that can render 50k particles at 60 fps; this one drops to 32 fps at around 1.2k). I did some analysis on the code, assuming that rendering the particles or the rotations were the most CPU-intensive operations. However, I discovered that two little properties of the Graphics object are actually eating up over 70% of my performance...
public void RotateParticle(Graphics g, RectangleF r,
                           RectangleF rShadow, float angle,
                           Pen particleColor, Pen particleShadow)
{
    //Create a matrix
    Matrix m = new Matrix();
    PointF shadowPoint = new PointF(rShadow.Left + (rShadow.Width / 1),
                                    rShadow.Top + (rShadow.Height / 1));
    PointF particlePoint = new PointF(r.Left + (r.Width / 1),
                                      r.Top + (r.Height / 2));
    //Angle of the shadow gets set to the angle of the particle,
    //that way we can rotate them at the same rate
    float shadowAngle = angle;
    m.RotateAt(shadowAngle, shadowPoint);
    g.Transform = m;
    //rotate and draw the shadow of the Particle
    g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
    //Reset the matrix for the next draw and dispose of the first matrix
    //NOTE: Using one matrix for both the shadow and the particle causes one
    //to rotate at half the speed of the other.
    g.ResetTransform();
    m.Dispose();
    //Same stuff as before but for the actual particle
    Matrix m2 = new Matrix();
    m2.RotateAt(angle, particlePoint);
    //Set the current draw location to the rotated matrix point
    //and draw the Particle
    g.Transform = m2;
    g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
    m2.Dispose();
}
What is killing my performance is specifically these lines:
g.Transform = m;
g.Transform = m2;
A little background: the Graphics object is obtained from PaintEventArgs. Particles are rendered to the screen in a render-particles method, which calls this method to do any rotations. Multi-threading isn't a solution, as the Graphics object cannot be shared between threads. Here is a link to the code analysis I ran, so you can see what is happening as well:
https://gyazo.com/229cfad93b5b0e95891eccfbfd056020
I'm inclined to think this is something that can't really be helped, because it looks like the property itself is destroying the performance rather than anything I've actually done (though I'm sure there's room for improvement), especially since the DLL the class calls into is using the most CPU. Anyway, any help with optimizing this would be greatly appreciated... maybe I'll just add an option to enable/disable rotation for performance; we'll see...
Well, you should scratch your head a while over the profile results you see. There is something else going on when you assign the Transform property -- something you can reason out by noting that ResetTransform() does not cost anything. That doesn't make sense at first, of course, since that method also changes the Transform property.
And do note that it should be DrawRectangle() that is the expensive method, since that is the one that actually puts the pedal to the metal and generates real drawing commands. We can't see what it costs from your screenshot; it can't be more than 30%. That is not nearly enough.
I think what you are seeing here is an obscure feature of GDI/GDI+: it batches drawing commands. In other words, internally it builds a list of drawing commands and does not pass them to the video driver until it has to. The native winapi has a function that explicitly forces that list to be flushed: GdiFlush(). It is, however, not exposed by the .NET Graphics class; it is done automagically.
So a pretty attractive theory is that GDI+ internally calls GdiFlush() when you assign the Transform property. The cost you are seeing is then actually the cost of a previous DrawRectangle() call.
You get ahead by giving it more opportunity to batch. Strongly favor the Graphics methods that let you draw a large number of items in one call. In other words, don't draw each individual particle; draw many at once. You'll like DrawRectangles(), DrawLines(), DrawPath(). Unfortunately there is no DrawPolygons(), the one you'd really like; technically you could pinvoke PolyPolygon(), but that's hard to get going.
If my theory is incorrect, then do note that you don't need Graphics.Transform: you can also use Matrix.TransformPoints() and Graphics.DrawPolygon(). Whether you can truly get ahead that way is a bit doubtful; the Graphics class doesn't use GPU acceleration directly, so it never competes that well with DirectX.
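To illustrate the batching idea (a sketch under the theory above, not verified code; g, pen, and the particles collection with its Rect/Angle/Center fields are hypothetical names): rotate each particle's corners on the CPU with Matrix.TransformPoints, accumulate everything into one GraphicsPath, and issue a single draw call.

using System.Drawing;
using System.Drawing.Drawing2D;
// ...
using (var path = new GraphicsPath())
using (var m = new Matrix())
{
    foreach (var p in particles)
    {
        PointF[] corners =
        {
            new PointF(p.Rect.Left,  p.Rect.Top),
            new PointF(p.Rect.Right, p.Rect.Top),
            new PointF(p.Rect.Right, p.Rect.Bottom),
            new PointF(p.Rect.Left,  p.Rect.Bottom)
        };
        m.Reset();
        m.RotateAt(p.Angle, p.Center);
        m.TransformPoints(corners); // rotate on the CPU, no Graphics.Transform
        path.AddPolygon(corners);
    }
    g.DrawPath(pen, path); // one call flushes everything at once
}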
I'm not sure if the following will help, but it's worth trying. Instead of allocating, assigning, and disposing a new Matrix each time, use the preallocated Graphics transform via the Graphics methods RotateTransform, ScaleTransform, and TranslateTransform (and make sure to always call ResetTransform when done).
Graphics does not contain a direct equivalent of the Matrix.RotateAt method, but it's not hard to make one:
public static class GraphicsExtensions
{
    public static void RotateTransformAt(this Graphics g, float angle, PointF point)
    {
        g.TranslateTransform(point.X, point.Y);
        g.RotateTransform(angle);
        g.TranslateTransform(-point.X, -point.Y);
    }
}
Then you can update your code like this and see if it helps:
public void RotateParticle(Graphics g, RectangleF r,
                           RectangleF rShadow, float angle,
                           Pen particleColor, Pen particleShadow)
{
    PointF shadowPoint = new PointF(rShadow.Left + (rShadow.Width / 1),
                                    rShadow.Top + (rShadow.Height / 1));
    PointF particlePoint = new PointF(r.Left + (r.Width / 1),
                                      r.Top + (r.Height / 2));
    //Angle of the shadow gets set to the angle of the particle,
    //that way we can rotate them at the same rate
    float shadowAngle = angle;
    //rotate and draw the shadow of the Particle
    g.RotateTransformAt(shadowAngle, shadowPoint);
    g.DrawRectangle(particleShadow, rShadow.X, rShadow.Y, rShadow.Width, rShadow.Height);
    g.ResetTransform();
    //Same stuff as before but for the actual particle
    g.RotateTransformAt(angle, particlePoint);
    g.DrawRectangle(particleColor, r.X, r.Y, r.Width, r.Height);
    g.ResetTransform();
}
Can you create an off-screen buffer to draw your particles, and have OnPaint simply render the off-screen buffer? If you need to update the screen periodically, you can invalidate your on-screen control/canvas, say using a Timer:
Bitmap bmp;
Graphics gOff;

void Initialize() {
    bmp = new Bitmap(width, height);
    gOff = Graphics.FromImage(bmp); // FromImage is a static method of Graphics, not Bitmap
}

private void OnPaint(object sender, System.Windows.Forms.PaintEventArgs e) {
    e.Graphics.DrawImage(bmp, 0, 0);
}
void RenderParticles() {
    foreach (var particle in Particles)
        RotateParticle(gOff, ...);
}
On another note, is there any reason to create a Matrix object every time you call RotateParticle? I haven't tried it, but the MSDN docs seem to suggest that get and set on Graphics.Transform always create a copy. So you can keep a Matrix object at, say, class level and use it for the transform. Just make sure to call Matrix.Reset() before using it. This might get you some performance improvement.
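A minimal sketch of that class-level reuse (assuming the copy semantics described above hold):

private readonly Matrix _m = new Matrix(); // one Matrix for the renderer's lifetime

void ApplyRotation(Graphics g, float angle, PointF center)
{
    _m.Reset();                 // cheap compared to new Matrix() + Dispose()
    _m.RotateAt(angle, center);
    g.Transform = _m;           // Graphics copies the matrix on assignment
}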

Make a sphere with equidistant vertices

I'm trying to make a spherical burst of rays for the purpose of checking collisions, but with specific interactions happening based upon what or where each ray hit. Hence why I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking at how to make a sphere is that I can use the same math for my rays, by having them go to where the sphere's vertices would be. But every method I can find for making a sphere has the lines get closer together near the poles, which makes sense, as it's the easy way to do it. But as you can imagine, that's not very useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant, that's fine; it just needs to be pretty close. In that case, it would be great if you could say how large the deviation is, and where it occurs, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. As the vertices are distributed on equilateral triangles, they are guaranteed to be very nearly equidistant.
To construct the icosphere, first you make an icosahedron and then split the faces recursively into smaller triangles, as explained in this article.
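A minimal C# sketch of that subdivision step (illustrative only; assumes UnityEngine's Vector3 and triangles stored as vertex triples):

// One subdivision pass: replace each triangle with four smaller ones,
// pushing the new edge midpoints out onto the unit sphere.
static Vector3 Midpoint(Vector3 a, Vector3 b)
{
    return ((a + b) * 0.5f).normalized;
}

static List<Vector3[]> Subdivide(List<Vector3[]> triangles)
{
    var result = new List<Vector3[]>();
    foreach (var t in triangles)
    {
        Vector3 ab = Midpoint(t[0], t[1]);
        Vector3 bc = Midpoint(t[1], t[2]);
        Vector3 ca = Midpoint(t[2], t[0]);
        result.Add(new[] { t[0], ab, ca });
        result.Add(new[] { t[1], bc, ab });
        result.Add(new[] { t[2], ca, bc });
        result.Add(new[] { ab, bc, ca });
    }
    return result;
}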
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind?
That is, the entire raison d'être of the sphere built into Unity is that the points are fairly smoothly spaced... roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just create one of the built-in spheres from the GameObject menu (screenshot omitted).
You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" a vertex is on by (for example) checking how far it is from your "cities" (or whatever), or just checking (for example) the z value to see which hemisphere it is in, et cetera.
Furthermore...
Please note. Regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX. (The completely built-in game physics in Unity.) Indeed, I have never, ever, looked at a collision without doing something "specific" depending on "where it hit!"
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
It's worth noting it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
My approach:
1. slice the sphere into N circles
2. compute the perimeter of each circle
3. divide it by the same angle that created the slices -- this gives you the number of vertices, and also the angle step inside each circle
4. cast the rays
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1 center=(0,0,0)) ... your rays directions
int ia, na, ib, nb;
double x, y, z, r;
double a, b, da, db;
na = 16;                                                  // number of slices
da = M_PI / double(na - 1);                               // latitude angle step
for (a = -0.5 * M_PI, ia = 0; ia < na; ia++, a += da)     // slice sphere to circles in xy planes
{
    r = cos(a);                                           // radius of actual circle in xy plane
    z = sin(a);                                           // height of actual circle in xy plane
    nb = ceil(2.0 * M_PI * r / da);
    db = 2.0 * M_PI / double(nb);                         // longitude angle step
    if ((ia == 0) || (ia == na - 1)) { nb = 1; db = 0.0; } // handle edge cases (poles)
    for (b = 0.0, ib = 0; ib < nb; ib++, b += db)         // cut circle to vertexes
    {
        x = r * cos(b);                                   // compute x,y of vertex
        y = r * sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL
        // so you can ignore it
        // instead add the ray cast of yours
        double w = 1.2;
        glBegin(GL_LINES);
        glColor3f(1.0, 1.0, 1.0); glVertex3d(x, y, z);
        glColor3f(0.0, 0.0, 0.0); glVertex3d(w * x, w * y, w * z);
        glEnd();
    }
}
This is what it looks like (screenshot omitted):
R, G, B lines are the sphere coordinate system axes X, Y, Z.
White-ish lines are your vertices (white) + directions (gray).
[Notes]
Do not forget to include math.h,
and replace the OpenGL stuff with your own.
If you want 4, 6, 8, 12, or 20 vertices then you can have exactly equidistant vertices, as the Platonic solids all fit inside a sphere. The actual coordinates of these should be easy to get. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points then a geodesic dome might be a good base; the C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
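For example, the twelve icosahedron vertices are just the cyclic permutations of (0, ±1, ±φ), where φ is the golden ratio; normalized, they land on the unit sphere (a Unity-flavored sketch):

float phi = (1f + Mathf.Sqrt(5f)) / 2f; // golden ratio, ~1.618
Vector3[] verts =
{
    new Vector3(0,  1,  phi), new Vector3(0,  1, -phi),
    new Vector3(0, -1,  phi), new Vector3(0, -1, -phi),
    new Vector3( 1,  phi, 0), new Vector3( 1, -phi, 0),
    new Vector3(-1,  phi, 0), new Vector3(-1, -phi, 0),
    new Vector3( phi, 0,  1), new Vector3(-phi, 0,  1),
    new Vector3( phi, 0, -1), new Vector3(-phi, 0, -1)
};
for (int i = 0; i < verts.Length; i++)
    verts[i] = verts[i].normalized; // project onto the unit sphere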
I think the easiest way to control points on a sphere is by using spherical coordinates: you control the position of each point with two angles (rho and phi) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1;    // increment this variable every frame to see the rotation
var count = 1000;
var radius = 1f; // sphere radius (declared here for completeness; not in the original snippet)
for (int i = 0; i < count; i++)
{
    var rho = time + i;                // azimuthal angle
    var phi = 2 * Math.PI * i / count; // polar angle
    var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
    var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
    var y = (float)(radius * Math.Cos(phi));
    Draw(x, y, z); // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron-based solution. Source code for this is quite easy to come by (I have written my own several times), but I find the excellent Primitives Pro plugin extremely handy in many other circumstances, and always use its sphere instead of the built-in Unity one.
Link to Primitives Pro component
(screenshot omitted: Primitives Pro options)

a texture that repeats across the world, based on X, Y coordinates

I'm using XNA/MonoGame to draw some 2D polygons. I'd like a texture I have to repeat across multiple polygons, based on their X and Y coordinates.
here's an example of what I mean (image omitted):
I had thought that doing something like this would work (assuming a 256x256-pixel texture):
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is draw with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long horizontal streaks that look like the texture has been extremely stretched.
(to check if the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList of 1 - this had the same result on the texture, and the expected result of drawing only one half of my blocks.)
what's the correct way to achieve this effect?
my math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ you need that code. But importantly, your SamplerState and other settings get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! Ex:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
if you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly after (depending on what settings you use for 2D rendering), all of which comes with a little overhead (and makes your code more fragile - you're more likely to screw up :P)
one way to avoid this is to draw all your 3D stuff and 2D stuff separately (all one, then all the other).
(in my case, I haven't got my code completely separated out in this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - Draw Rectangle in XNA using SpriteBatch - and 3D stuff only for less-regular and/or textured shapes.)
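For example, the kind of state juggling being described might look like this (a sketch; DrawMyPolygons is a placeholder for your own 3D drawing code):

spriteBatch.End(); // finish the 2D batch first
// SpriteBatch.Begin() set LinearClamp, AlphaBlend, and DepthStencilState.None,
// so restore what the 3D pass needs:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
DrawMyPolygons(); // textured polygons, with wrapping coordinates
spriteBatch.Begin(); // back to 2D; the states above get reset again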

Is the behaviour of DrawLine predictable with out of range parameters?

Can DrawLine handle coordinates outside the defined area?
For example myGraphics.DrawLine(MyPen, -20, -80, 20, 90);
I would expect this to produce a line correctly as though it had used an infinite canvas but plotting only the section within my graphic.
My code is as follows. I am plotting movement from coordinates recorded in a database. Occasionally the subject moves further than expected, beyond the edges of my bitmap. I do not check for this occurrence as I was relying on DrawLine to handle it.
Bitmap Border = new Bitmap(5000, 5000);
Border.SetResolution(254, 254);
Graphics MyGraphics = Graphics.FromImage(Border);
Pen MyPen = new Pen(Color.Black, 1);
for (Int32 Point = 1; Point <= Points; Point++)
{
    XCoord2 = XCoord1;
    YCoord2 = YCoord1;
    XCoord1 = *READ FROM DATABASE*
    YCoord1 = *READ FROM DATABASE*
    if (Point > 1)
    {
        MyGraphics.DrawLine(MyPen, XCoord1, YCoord1, XCoord2, YCoord2);
    }
}
In reality, my plots work most of the time. However, I do get the occasional graphic with missing lines, or with a stray line originating from a strange coordinate.
In summary, is the behaviour of DrawLine predictable with unusual parameters? Should I introduce some trigonometry to force the plots to always fall within my grid?
The actual limits are a billion, positive or negative; see this past question (which used .NET):
What are the hard bounds for drawing coordinates in GDI+?
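So a coordinate pair like (-20, -80) is well within range. A quick standalone check of the clipping behaviour, for what it's worth (a sketch):

using (var bmp = new Bitmap(100, 100))
using (var g = Graphics.FromImage(bmp))
using (var pen = new Pen(Color.Black, 1))
{
    g.Clear(Color.White);
    g.DrawLine(pen, -20, -80, 20, 90); // endpoints off-canvas: only the
                                       // segment inside 100x100 is drawn
    bmp.Save("clipped.png");
}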
My guess is that your database pulls are wrong; this can happen if you are storing values as strings and forcing them to be parsed.
Add a Thread.Sleep() and have it Debug.WriteLine the new pulls (or just breakpoint things); most likely a value is getting in there that is either odd or getting parsed oddly.
After more experimentation, I finally cured my problem with...
SolidBrush WhiteBrush = new SolidBrush(Color.White);
myGraphics.FillRectangle(WhiteBrush,0,0,5000,5000);
i.e. I gave my graphics a solid white background before I drew any lines. Before, I was drawing black lines on a null background. I have no idea why this would affect anything, but it did!
