I'm drawing circles in the following way:
for each pixel px
{
    if (isInside(px)) px.color = white
    else px.color = black
}

bool isInside(pixel p)
{
    for each circle cir
    {
        if (PixelInsideCircle(p, cir)) return true
    }
    return false
}

bool PixelInsideCircle(pixel p, circle cir)
{
    float x = p.pos.x - cir.pos.x, y = p.pos.y - cir.pos.y;
    return x*x + y*y <= cir.radius*cir.radius;
}
Here's the result:
There are around 50 circles. Is there any way to optimize this? I'm using Unity3D: I fill a RenderTexture with a compute shader and draw it directly to the camera with Graphics.Blit. I'm drawing only circles, and I want to increase the count from 50 to 1000. I've tried AABBs and a k-d tree, but could not figure out how to implement them correctly; using the tree only worsened the performance. I also thought about doing an intersection test for every column, but I'm not sure if that's a good idea. I'm targeting Android and iOS. Any help?
I do not code in Unity/C#/DirectX, but if you insist on filling by pixels, see
Is there a more efficient way of texturing a circle?
for some ideas on easing the math ...
I would not use compute shaders; instead I would render QUADs (AABBs) for each circle using Vertex + Fragment shaders.
As a next step I would try a Geometry shader that emits a triangle fan around each circle (so the ratio between filled and empty space is better). This also requires just a center and radius instead of an AABB, so you can use POINTS instead of QUADs; see:
rendering cubics in GLSL
It's doing similar things (but it's in GLSL). Also, I noticed you have:
return (p.pos.x - cir.pos.x)^2 + (p.pos.y - cir.pos.y)^2 - (cir.radius)^2 <= 0
try to change it to:
return (p.pos.x - cir.pos.x)^2 + (p.pos.y - cir.pos.y)^2 <= (cir.radius)^2
It's one less operation. Also, (cir.radius)^2 should be passed to the Fragment shader from the Vertex (or Geometry) shader so it does not need to be computed on a per-pixel basis.
Using compute shaders and checking distances is probably the fastest way.
In the worst case, for 1000 circles, it would execute PixelInsideCircle 1000 times per pixel, and in the best case just once. When a pixel is found inside a circle it leaves the loop and returns white.
This is faster than any hybrid CPU (quadtree) + GPU (compute shader) solution. Let your GPU run everything in a single loop per pixel.
Only (width * height) * number of circles affects the performance. You could go for a smaller texture (50~99% of the screen size) and upscale on the Blit; that's even better for mobile, since screens are small.
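For example, here is a minimal sketch of rendering into a half-resolution RenderTexture and letting Blit upscale it (the shader field, the Result texture name and the CSMain kernel with [numthreads(8,8,1)] are placeholders, not taken from the question):

using UnityEngine;

public class CircleBlit : MonoBehaviour   // attach to the camera
{
    public ComputeShader circlesCS;       // placeholder compute shader asset
    RenderTexture rt;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (rt == null)
        {
            rt = new RenderTexture(Screen.width / 2, Screen.height / 2, 0);
            rt.enableRandomWrite = true;  // required so the compute shader can write to it
            rt.Create();
        }
        int kernel = circlesCS.FindKernel("CSMain");
        circlesCS.SetTexture(kernel, "Result", rt);
        circlesCS.Dispatch(kernel, rt.width / 8, rt.height / 8, 1);  // assumes [numthreads(8,8,1)]
        Graphics.Blit(rt, dest);          // upscale the half-resolution result to the screen
    }
}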
Other solutions using meshes or circle textures would also be bad: mobile GPUs are memory-bandwidth bound, and passing more commands and data around is worse than calculating on the GPU itself.
You can try replacing your PixelInsideCircle with HLSL's distance() or length() (they probably have an internal sqrt, but since they are intrinsic functions they may still be faster). Just test it.
Do you run this once, like a map generator, or is it run every frame?
I am working on writing an application that contains line plots of large datasets.
My current strategy is to load up my data for each channel into 1D vertex buffers.
I then use a vertex shader when drawing to assemble my buffers into vertices (so I can reuse one of my buffers for multiple sets of data).
This is working pretty well, and I can draw a few hundred million data points without slowing down too much.
To stretch things a bit further, I would like to reduce the number of points that actually get drawn through simple decimation (i.e. draw every Nth point), as there is not much point plotting 1000 points that are all represented by a single pixel.
One way I can think of doing this is to use a geometry shader and only emit every Nth point, but I am not sure if this is the best plan of attack.
Would this be the recommended way of doing this?
You can do this much more simply by adjusting the stride of all vertex attributes to N times the normal one, for example:
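As a sketch, assuming desktop OpenGL through OpenTK (the question doesn't say which API or binding is in use) and a tightly packed one-float-per-sample buffer, the decimation is just a larger stride plus a smaller draw count; vbo and sampleCount are placeholders here:

// using OpenTK.Graphics.OpenGL4;
int n = 8;                                // decimation factor: fetch every Nth sample
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.VertexAttribPointer(0, 1, VertexAttribPointerType.Float, false, n * sizeof(float), 0);
GL.EnableVertexAttribArray(0);
GL.DrawArrays(PrimitiveType.LineStrip, 0, sampleCount / n);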
I'm trying to make a spherical burst of rays for the purpose of checking collisions, but with specific interactions happening based upon what or where each ray hit. Hence why I'm using rays rather than something simpler such as OverlapSphere.
The reason I'm looking into how to make a sphere is that I can use the same math for my rays, by having them go to the vertices of where the sphere would be. But every way I can find of making a sphere has the points get closer together near the poles, which makes sense, as it's the easy way to do it. But as you can imagine, that's not very useful for my current project.
TL;DR:
How do I make a sphere with equidistant vertices? If it's not perfectly equidistant that's fine; it just needs to be pretty close. If it can't be exact, it would be great if you could say how large the difference is, and where, if applicable.
Extra notes:
I've looked at this and this, but the math is way over my head, so what I've been looking for might've just been staring me in the face this whole time.
You could use an icosphere. As the vertices are distributed on nearly equilateral triangles, they are very close to equidistant.
To construct the icosphere, first you make an icosahedron and then split the faces recursively into smaller triangles, as explained in this article.
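As a rough sketch (in Unity-flavoured C#, assuming you already have the icosahedron's vertex list and triangle index list from the article), one subdivision pass could look like this; each pass splits every triangle into four and pushes the new midpoints back onto the unit sphere, so two or three passes usually give plenty of ray directions:

using System.Collections.Generic;
using UnityEngine;

public static class IcosphereSketch
{
    public static void Subdivide(List<Vector3> verts, ref List<int> tris)
    {
        var midCache = new Dictionary<long, int>();   // edge -> midpoint index, avoids duplicate vertices
        var newTris = new List<int>();

        int Midpoint(int a, int b)
        {
            long key = a < b ? ((long)a << 32) | (uint)b : ((long)b << 32) | (uint)a;
            if (midCache.TryGetValue(key, out int idx)) return idx;
            verts.Add(((verts[a] + verts[b]) * 0.5f).normalized);  // project the midpoint onto the sphere
            midCache[key] = verts.Count - 1;
            return verts.Count - 1;
        }

        for (int i = 0; i < tris.Count; i += 3)
        {
            int v0 = tris[i], v1 = tris[i + 1], v2 = tris[i + 2];
            int m01 = Midpoint(v0, v1), m12 = Midpoint(v1, v2), m20 = Midpoint(v2, v0);
            newTris.AddRange(new[] { v0, m01, m20,  v1, m12, m01,  v2, m20, m12,  m01, m12, m20 });
        }
        tris = newTris;
    }
}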
Are you aware that the sphere given to you by Unity is in fact designed with this exact goal in mind? That is, the entire raison d'être of the built-in Unity sphere is that the points are fairly smoothly spaced, roughly equidistant, as you phrase it.
To bring up such a sphere in Unity, just create a built-in Sphere from the GameObject menu (or via GameObject.CreatePrimitive(PrimitiveType.Sphere)). You can then instantly get access to the verts, as you know:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vv = mesh.vertices;
int kVerts = vv.Length;
for (int i = 0; i < kVerts; ++i)
    Debug.Log(vv[i]);
Note you can easily check "which part of the sphere" they are on by, for example, checking how far they are from your "cities" (or whatever), or just checking the z values to see which hemisphere they are in, et cetera.
Furthermore...
Please note. Regarding your overall reason for wanting to do this:
but having specific interactions happen based upon what or where each ray hit
Note that it could not be easier to do this using PhysX. (The completely built-in game physics in Unity.) Indeed, I have never, ever, looked at a collision without doing something "specific" depending on "where it hit!"
You can for example get the point where the contact was with http://docs.unity3d.com/ScriptReference/RaycastHit-point.html
It's worth noting it is absolutely inconceivable one could write something approaching the performance of PhysX in casual programming.
I hope this makes things easier!
1. Slice the sphere into N circles.
2. Compute the perimeter of each circle.
3. Divide it by the same angle that created the slices; this gives you the number of vertices, and also the angle step inside the circle.
4. Cast the rays.
This is how I coded it in C++ + OpenGL:
// draw unit sphere points (r=1, center=(0,0,0)) ... your ray directions
int ia,na,ib,nb;
double x,y,z,r;
double a,b,da,db;
na=16;                                   // number of slices
da=M_PI/double(na-1);                    // latitude angle step
for (a=-0.5*M_PI,ia=0;ia<na;ia++,a+=da)  // slice sphere to circles in xy planes
    {
    r=cos(a);                            // radius of actual circle in xy plane
    z=sin(a);                            // height of actual circle in xy plane
    nb=ceil(2.0*M_PI*r/da);              // number of vertices so spacing matches the latitude step
    db=2.0*M_PI/double(nb);              // longitude angle step
    if ((ia==0)||(ia==na-1)) { nb=1; db=0.0; }   // handle poles (single vertex)
    for (b=0.0,ib=0;ib<nb;ib++,b+=db)    // cut circle to vertexes
        {
        x=r*cos(b);                      // compute x,y of vertex
        y=r*sin(b);
        // this just draws the ray direction (x,y,z) as a line in OpenGL,
        // so you can ignore it and add your ray cast instead
        double w=1.2;
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
        }
    }
This is how it looks:
The R,G,B lines are the sphere coordinate system axes X,Y,Z.
The white-ish lines are your vertices (white) + direction (gray).
[Notes]
do not forget to include math.h
and replace the OpenGL stuff with yours
If you want 4, 6, 8, 12 or 20 vertices then you can have exactly equidistant vertices from the Platonic solids, which all fit inside a sphere. The actual coordinates of these are easy to find. For other numbers of vertices you can use other polyhedra and scale the vertices so they lie on a sphere. If you need lots of points then a geodesic dome might be a good base. The C60 bucky-ball could be a good base with 60 points. For most of these you should be able to find 3D models from which you can extract coordinates.
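For example, the regular icosahedron's 12 vertices are short enough to write down directly; a small sketch (using Unity's Vector3, but any vector type works), pushing them onto the unit sphere so you can scale by whatever radius you need:

using UnityEngine;

public static class PlatonicVertices
{
    public static Vector3[] Icosahedron()
    {
        float t = (1f + Mathf.Sqrt(5f)) / 2f;   // golden ratio
        Vector3[] v =
        {
            new Vector3(-1,  t,  0), new Vector3( 1,  t,  0), new Vector3(-1, -t,  0), new Vector3( 1, -t,  0),
            new Vector3( 0, -1,  t), new Vector3( 0,  1,  t), new Vector3( 0, -1, -t), new Vector3( 0,  1, -t),
            new Vector3( t,  0, -1), new Vector3( t,  0,  1), new Vector3(-t,  0, -1), new Vector3(-t,  0,  1),
        };
        for (int i = 0; i < v.Length; i++) v[i] = v[i].normalized;   // project onto the unit sphere
        return v;
    }
}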
I think the easiest way to control points on a sphere is by using spherical coordinates. Then you can control the position of points around the sphere by using two angles (rho and phi) and the radius.
Example code for filling points uniformly around a rotating sphere (for fun):
var time = 1;        // increment this variable every frame to see the rotation
var count = 1000;
var radius = 1.0;    // sphere radius (not declared in the original snippet)
for (int i = 0; i < count; i++)
{
    var rho = time + i;
    var phi = 2 * Math.PI * i / count;
    var x = (float)(radius * Math.Sin(phi) * Math.Cos(rho));
    var z = (float)(radius * Math.Sin(phi) * Math.Sin(rho));
    var y = (float)(radius * Math.Cos(phi));
    Draw(x, y, z);   // your drawing code for rendering the point
}
As some answers have already suggested, use an icosahedron based solution. The source for this is quite easy to come by (and I have written my own several times) but I find the excellent Primitives Pro plugin extremely handy under many other circumstances, and always use their sphere instead of the built-in Unity one.
Link to Primitives Pro component
Primitives Pro options
I'm using XNA/MonoGame to draw some 2D polygons for me. I'd like a Texture I have to repeat on multiple polygons, based on their X and Y coordinates.
here's an example of what I mean:
I had thought that doing something like this would work (assuming a 256x256 pixel texture)
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is draw with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long-horizontal lines that look like the texture has been extremely stretched.
(To check whether the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList with a single triangle - this had the same result on the texture, and the expected result of drawing only one half of each block.)
what's the correct way to achieve this effect?
my math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ you need that code. but importantly, your SamplerState and other settings will get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! ex:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
If you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly restore them afterwards (depending on what settings you use for 2D rendering), all of which comes with a little overhead and makes your code more fragile - you're more likely to screw up :P. For example:
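Here is only a sketch of what the top of such a helper might look like (MyFilledPolygonDrawer and its internals are just the hypothetical helper from the snippet above):

void MyFilledPolygonDrawer(/* args */)
{
    // Reset the states SpriteBatch may have changed before drawing the textured polygons.
    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;     // repeat the texture
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.RasterizerState = RasterizerState.CullNone;
    // ... set up the effect / vertex data and draw the triangle strip here
}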
one way to avoid this is to draw all your 3D stuff and 2D stuff separately (all one, then all the other).
(in my case, I haven't got my code completely separated out in this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - Draw Rectangle in XNA using SpriteBatch - and 3D stuff only for less-regular and/or textured shapes.)
I need to create an equilateral triangular grid that fits a given geometry.
I have an image containing the geometry; it might include holes or thin paths. I need to create a grid similar to this image:
The circles are variable in diameter and need to cover the entire geometry. The points do not have to be on the geometry.
You can think of the triangular grid as being an oblique rectangular grid.
This enables you to store the state of each circle in a 2-dimensional matrix, for instance, and to use simple nested loops for processing. Of course, you will then have to translate these logical coordinates to the geometry plane coordinates for drawing.
const double Sin30 = 0.5;
static readonly double Cos30 = Math.Cos(30*Math.PI/180);

for (int xLogical = 0; xLogical < NX; xLogical++) {
    for (int yLogical = 0; yLogical < NY; yLogical++) {
        double xGeo = GridDistance * xLogical * Cos30;
        double yGeo = GridDistance * (yLogical + xLogical * Sin30);
        ...
    }
}
I am assuming this is to create a 2D meshing tool. If it is, and it is homework, I suggest doing it yourself, as you will get a lot out of it. If it is not a meshing problem, what I have to say should help you regardless...
To do this, use the grid node centres to generate your equilaterals. If you don't have the centre points to start with, you will need to look at first selecting an orientation for your object and then creating these (rectangular-based) grid nodes; you will have to work out a way of testing whether these points actually lie inside your object boundaries (see the sketch below). You can then construct your equilateral triangles using these points. Note: you will again have to deal with edge detection to get half-decent accuracy.
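For the inside test, if you have the boundary as a polygon vertex list, the standard even-odd (ray-casting) test is enough; this is only a sketch, and if you only have the raster image you can simply sample the pixel under each node instead:

using System.Drawing;

static bool PointInPolygon(PointF p, PointF[] poly)
{
    bool inside = false;
    for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
    {
        // Toggle 'inside' every time a horizontal ray from p crosses an edge.
        if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
            p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) / (poly[j].Y - poly[i].Y) + poly[i].X)
            inside = !inside;
    }
    return inside;
}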
To go a bit further than just equilaterals, and get a more accurate mesh, you will have to look into anisotropic mesh adaptation (AMA) using triangulation. This will be a lot harder than the basic approach outlined above - but fun!
Check out this link to a 2D tet-mesh generator using AMA. The paper this code is based on is:
V. Dolejsi: Anisotropic mesh adaptation for finite volume and finite element methods on triangular meshes
Computing and Visualisation in Science, 1:165-178, 1998.
Background: I'm using the SlimDX C# wrapper for DirectX, and am drawing many 2D sprites using the Sprite class (traditionally from the Direct3DX extension in the underlying dlls). I'm drawing multiple hundreds of sprites to the screen at once, and the performance is awesome -- on my quad core, it's using something like 3-6% of the processor for my entire game, including logic for 10,000+ objects, ai routines on a second thread, etc, etc. So clearly the sprites are being drawing using full hardware acceleration, and everything is as it should be.
Issue: The problem comes when I start introducing calls to the Line class. As soon as I draw 4 lines (for a drag selection box), processor usage skyrockets to 13-19%. This is with only four lines!
Things I have tried:
1. Turning line antialiasing off and on.
2. Turning GLLines off and on.
3. Manually calling line.begin and line.end around my calls to draw.
4. Omitting all calls to line.begin and line.end.
5. Ensuring that my calls to line.draw are not inside a sprite.begin / sprite.end block.
6. Calling line.draw inside a sprite.begin / sprite.end block.
7. Rendering 4 lines, or rendering 300.
8. Turning off all sprite and text rendering, and just leaving the line rendering for 4 lines (to see if this was some sort of mode-changing issue).
9. Most combinations of the above.
In general, none of these had a significant impact on performance. #3 reduced processor usage by maybe 2%, but even then it's still 8% or more higher than it should be. The strangest thing is that #7 from above had absolutely zero impact on performance -- it was just as slow with 4 lines as it was with 300. The only thing that I can figure is that this is being run in software for some reason, and/or that it is causing the graphics card to continually switch back and forth between some sort of drawing modes.
Matrix Approach:
If anyone knows of any fix to the above issue, then I'd love to hear it!
But I'm under the assumption that this might just be an issue inside of directx in general, so I've been pursuing another route -- making my own sprite-based line. Essentially, I've got a 1px white image, and I'm using the diffuse color and transforms to draw the lines. This works, performance-wise -- drawing 300 of the "lines" like this puts me in the 3-6% processor utilization performance range that I'm looking for on my quad core.
I have two problems with my pixel-stretch line technique, which I'm hoping that someone more knowledgeable about transforms can help me with. Here's my current code for a horizontal line:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 ) / 2.0f;
    Matrix m = Matrix.Transformation2D( new Vector2( X1 + width, Y ), 0f, new Vector2( width, HalfHeight ),
        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1 + width, Y, 0 ), Color );
}
This works, insofar as it draws lines of mostly the right size at mostly the right location on the screen. However, things appear shifted to the right, which is strange. I'm not sure my matrix approach is right at all: I just want to scale a 1x1 sprite by some number of pixels horizontally and a different number vertically, and then be able to position it -- by the center point is fine, and I think that's what I'll have to do, but if I could position it by the upper-left corner that would be even better. This seems like a simple problem, but my knowledge of matrices is pretty weak.
This would get purely-horizontal and purely-vertical lines working for me, which is most of the battle. I could live with just that, and use some other sort of graphic in locations which I am currently using angled lines. But it would be really nice if there was a way for me to draw lines that are angled using this stretched-pixel approach. In other words, draw a line from 1,1 to 7,19, for instance. With matrix rotation, etc, it seems like this is feasible, but I don't know where to even begin other than guessing and checking, which would take forever.
Any and all help is much appreciated!
It sounds like a pipeline stall. You're switching some mode between rendering sprites and rendering lines, which forces the graphics card to empty its pipeline before starting on the new primitive.
Before you added the lines, were those sprites all you rendered, or were there other elements on-screen, using other modes already?
Okay, I've managed to get horizontal lines working after much experimentation. This works without the strange offsetting I was seeing before; the same principle will work for vertical lines as well. This has vastly better performance than the line class, so this is what I'll be using for horizontal and vertical lines. Here's the code:
public void DrawLineHorizontal( int X1, int X2, int Y, float HalfHeight, Color Color )
{
    float width = ( X2 - X1 );
    Matrix m = Matrix.Transformation2D( new Vector2( X1, Y - HalfHeight ), 0f, new Vector2( width, HalfHeight * 2 ),
        Vector2.Zero, 0, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y - HalfHeight, 0 ), Color );
}
I'd like to have a version of this that would work for lines at angles, too (as mentioned above). Any suggestions for that?
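For lines at angles, I haven't tested this, but one sketch that follows the same pattern is to keep the horizontal stretch and add a rotation about the line's start point (using the rotationCenter / rotation parameters of Matrix.Transformation2D that the code above leaves at zero):

public void DrawLineAngled( int X1, int Y1, int X2, int Y2, float HalfHeight, Color Color )
{
    float dx = X2 - X1, dy = Y2 - Y1;
    float length = (float)Math.Sqrt( dx * dx + dy * dy );
    float angle = (float)Math.Atan2( dy, dx );   // rotation of the bar, in radians

    // Scale the 1x1 sprite into a horizontal bar starting at (X1, Y1),
    // then rotate that bar around its start point by 'angle'.
    Matrix m = Matrix.Transformation2D(
        new Vector2( X1, Y1 - HalfHeight ), 0f, new Vector2( length, HalfHeight * 2 ),
        new Vector2( X1, Y1 ), angle, Vector2.Zero );
    sprite.Transform = m;
    sprite.Draw( this.tx, Vector3.Zero, new Vector3( X1, Y1 - HalfHeight, 0 ), Color );
}

The idea is the same as the horizontal version: stretch the pixel into a bar of the right length and thickness, then rotate the whole bar around (X1, Y1) by the line's angle.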