OpenGL: fastest way of rotating single objects - C#

I have an unknown number of polygons, all facing the camera (-z), each at a different position. I want to rotate each polygon around its center by a different angle. Would it be faster to use glRotate and glTranslate, to calculate the rotation myself, or to do something else?
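For reference, the "calculate it myself" option boils down to a translate-rotate-translate of each vertex on the CPU. A minimal sketch (the method name and signature are illustrative, not from the question):

```csharp
using System;

// Rotate a 2D point around a polygon's center on the CPU - the
// "calculate the rotation myself" alternative to glRotate/glTranslate.
static (float x, float y) RotateAroundCenter(
    float x, float y, float cx, float cy, float angleRadians)
{
    float cos = (float)Math.Cos(angleRadians);
    float sin = (float)Math.Sin(angleRadians);
    float dx = x - cx, dy = y - cy;          // translate center to origin
    return (cx + dx * cos - dy * sin,        // rotate, then translate back
            cy + dx * sin + dy * cos);
}
```

Whether this beats glRotate/glTranslate depends on the driver and how many polygons there are; with many objects, batching transformed vertices into one draw call usually wins over one matrix change per polygon.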


How to convert UV position to local position and vice-versa?

I'm working on a project that has a layer system represented by several planes placed in front of each other. These planes receive different textures, which are projected onto a render texture with an orthographic camera to generate composite textures.
This project is being built on top of another system (a game), so I have some restrictions and requirements to make my project fit properly into this game. One of the requirements concerns the decals, which have their position and scale represented by a single Vector4. I believe this Vector4 represents 4 vertex positions on the X and Y axes (2 for X and 2 for Y). For a better understanding, see the image below:
These Vector4 coordinates seem to be related to the UV of the texture they belong to, because they only have positive values between 0 and 1. I'm having a hard time fitting this coordinate system into my project, because Unity's position system uses a traditional Cartesian plane with positive and negative values rather than normalized UV coordinates. So if I use the game's original Vector4 coordinates, the decals get wrongly positioned, and vice versa (I'm using the original coordinates as a base, but my system is meant to generate content to be used within the game, so the decals' coordinates must match the game's standards).
Considering all this, how could I convert the local/global position used by Unity to the UV position used by the game?
Anyway, I tried my best to explain my doubt; I'm not sure whether it has an easy solution. I figured out this Vector4 behavior only by observation, so feel free to suggest other ideas if you think I'm wrong about it.
EDIT #1 - To make my intentions clearer: the whole point is to find a way to position the decals using the Vector4 coordinates so that they end up in the right places. The layer system contains bigger planes, which have the full size of the texture, plus smaller ones representing the decals, varying in size. I believe the easiest solution would be to use one of these bigger planes as the "UV area", which would have the normalized positions mentioned. But I don't know how I would do that...
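If the big plane really is the "UV area", the conversion is just a linear remap between the plane's local rectangle and [0, 1]. A minimal sketch, assuming the plane is centered at its local origin and has a known width and height (the method names and parameters are mine, not from the game):

```csharp
using System;

// Map a local position on a plane centered at its own origin,
// spanning [-w/2, w/2] x [-h/2, h/2], onto normalized UV in [0, 1].
static (float u, float v) LocalToUV(float x, float y, float planeWidth, float planeHeight)
{
    return (x / planeWidth + 0.5f, y / planeHeight + 0.5f);
}

// Inverse mapping: normalized UV back to the plane's local coordinates.
static (float x, float y) UVToLocal(float u, float v, float planeWidth, float planeHeight)
{
    return ((u - 0.5f) * planeWidth, (v - 0.5f) * planeHeight);
}
```

With the Vector4 interpreted as (uMin, vMin, uMax, vMax), you would run each pair through `UVToLocal` to place a decal, and through `LocalToUV` to export one back to the game's format.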

Detecting positions of two moving fingers

I'm trying to detect moving objects with my webcam. I want to detect the positions of my two moving fingers so I can scale an image according to their movement, as if it were a touch screen, but using the camera instead: if I move my two fingers toward each other, the image gets smaller, and if I move them away from each other, the image gets bigger.
Here is my code:
// AForge.NET motion detection: count moving blobs in the difference
// between two consecutive frames
MotionDetector detector;
BlobCountingObjectsProcessing motionProcessing;
motionProcessing = new BlobCountingObjectsProcessing();
detector = new MotionDetector(new TwoFramesDifferenceDetector(), motionProcessing);
What I get is many rectangles around each finger. How can I recognize each finger separately?
Thanks a lot.
Use RANSAC to fit two lines through the centroids of the rectangles, one for each finger. The difference in slopes between the two lines will give you an indication of how far apart they are. So the gradient of slope differences will tell you how to scale the image and by how much.
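Once you have one centroid per finger, a simpler alternative to comparing slopes is to track the distance between the two centroids frame to frame and use its ratio as the zoom factor. A sketch of that idea (the class and its smoothing-free update rule are my own, not part of AForge.NET):

```csharp
using System;

// Turn the distance between two tracked finger centroids into an
// image scale factor (pinch-to-zoom).
class PinchScale
{
    float previousDistance = -1f;

    // Returns the scale to apply this frame: >1 when the fingers move
    // apart, <1 when they move together, 1 on the first call.
    public float Update(float x1, float y1, float x2, float y2)
    {
        float distance = (float)Math.Sqrt(
            (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
        if (previousDistance <= 0f)
        {
            previousDistance = distance;
            return 1f;
        }
        float scale = distance / previousDistance;
        previousDistance = distance;
        return scale;
    }
}
```

In practice you would feed it the two largest blob centroids from `BlobCountingObjectsProcessing` each frame and multiply your image's scale by the returned factor.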

XNA Collision Detection with BoundingBoxes

We're building an FPS in XNA, and we're in the process of implementing collision detection. Our walls are rectangular, so the BoundingBox class seemed like a good option, but it's axis aligned. We'd really like to have non-axis aligned walls. Most of the discussion we've seen says to use BoundingSpheres, but this doesn't seem like the best option because the walls are really just rotated rectangles.
We've managed to model the character's movement as rays, and we know that we should be able to translate these rays from world space to the axis aligned box space using a rotation matrix. Unfortunately we think that something is amiss with this transformation because we seem to be colliding with invisible (or somehow larger) walls, and our ray generation works for floor intersections (which don't rotate [yet anyway]). The only reason we're using rays is because they're easy to generate, should be easy to transform into the box's space, and we can use XNA classes like the Axis Aligned BoundingBoxes for the actual collision detection.
We've implemented it like this:
internal float? collide(Ray ray)
{
    // Bring the world-space ray into the box's local (axis-aligned) space.
    Matrix transform = Matrix.Invert(Transform);
    // Directions must ignore translation: use TransformNormal, not Transform
    // (Transform treats the vector as a point and adds the translation,
    // which shifts the ray and produces phantom collisions).
    ray.Direction = Vector3.TransformNormal(ray.Direction, transform);
    ray.Position = Vector3.Transform(ray.Position, transform);
    // Note: the inverse scale is baked into the direction, so the returned
    // distance is in the box's local units, not world units.
    return new BoundingBox(new Vector3(-.5f, -.5f, -.5f),
                           new Vector3(.5f, .5f, .5f)).Intersects(ray);
}

public Matrix Transform
{
    get { return Matrix.CreateScale(size) * rotate * Matrix.CreateTranslation(pos); }
}
`rotate` is passed in the constructor and is formed with a call to Matrix.CreateRotationY(theta). The ray passed into collide is in world space. Transform is also the matrix applied to our model for rendering. This model is a cube which goes from (-.5, -.5, -.5) to (.5, .5, .5).
In any event, our question boils down a few things: Is there a better way to do collision detection for non-axis aligned boxes against rays? Is there a better way to do it than using rays? Are we just idiots with something obviously wrong with our code? If possible, we'd like to rely as much as possible on XNA's classes (or at least code that's already been written), and not have to write much more than a wrapper class.
Well, if you're trying to rotate rectangles (2D) and keep collision detection, then I have an answer. When I learned XNA, I used rectangles: you can cover your texture's rectangle with lots of little rectangles and run the collision test against each of them.
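The 2D version of the same inverse-transform trick from the question works for a single rotated rectangle too: rotate the point into the rectangle's local space, then do a plain axis-aligned check. A sketch (names are illustrative):

```csharp
using System;

// Test a point against a rectangle rotated by angleRadians around its
// center (cx, cy) by inverse-rotating the point into the rectangle's
// local space, where the test is axis-aligned.
static bool PointInRotatedRect(
    float px, float py, float cx, float cy,
    float halfWidth, float halfHeight, float angleRadians)
{
    float cos = (float)Math.Cos(-angleRadians);
    float sin = (float)Math.Sin(-angleRadians);
    float dx = px - cx, dy = py - cy;
    float localX = dx * cos - dy * sin;   // point in the rect's local frame
    float localY = dx * sin + dy * cos;
    return Math.Abs(localX) <= halfWidth && Math.Abs(localY) <= halfHeight;
}
```
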

Draw 2D Curve in XNA

Is there any way to generate a Curve class and then draw that curve in 2D on the screen in XNA?
I basically want to randomly generate some terrain using the Curve and then draw it, hoping that I can then use that curve to detect collision with the ground.
It sounds like what you want is the 2D equivalent of a height-map. I'd avoid making a true "curve" and simply approximate one with line segments.
So basically you'll have an array or list of numbers that represent the height of your terrain at a series of evenly spaced (horizontal) points. When you need a height between two points, you simply linearly interpolate between the two.
To generate it - you could set a few points randomly, and then do some form of smooth interpolation to set the rest. (It really depends on what kind of curve you want.)
To render it you could then just use a triangle strip. Each point in your height-map will have two vertices associated with it - one at the bottom of the screen, the other at the height of that point in the height-map.
To do collision detection - the easiest way is to treat your objects as single points (it sounds like you're making an artillery game like Scorched Earth): take the X position of your object, get the Y position of your terrain at that X position, and if the object is below the terrain, set it so that it sits on the terrain's surface.
That's the rough guide, anyway :)

Subdividing 3D mesh into arbitrarily sized pieces

I have a mesh defined by 4 points in 3D space. I need an algorithm which will subdivide that mesh into subdivisions of an arbitrary horizontal and vertical size. If the subdivision size isn't an exact divisor of the mesh size, the edge pieces will be smaller.
All of the subdivision algorithms I've found only subdivide meshes into exact powers of 2. Does anyone know of one that can do what I want?
Failing that, my thought about a possible implementation is to rotate the mesh so that it lies flat on the Z axis, subdivide in 2D, and then transform back into 3D. That's because my mind finds 3D hard ;) Any better suggestions?
Using C# if that makes any difference.
If you only have to work with a rectangle in 3D, then you simply need to obtain the two edge vectors and then you can generate all the interior points of the subdivided rectangle. For example, say your quad is defined by (x0,y0),...,(x3,y3), in order going around the quad. The edge vectors relative to point (x0,y0) are u = (x1-x0,y1-y0) and v = (x3-x0,y3-y0).
Now, you can generate all the interior points. Suppose you want M points along the first edge and N along the second; then the interior points are just
(x0,y0) + i/(M-1) * u + j/(N-1) * v
where i and j go from 0 .. M-1 and 0 .. N-1, respectively. You can figure out which vertices need to be connected together by just working it out on paper.
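The formula above translates directly into a double loop. A sketch in C# for a quad in 3D, using plain float arrays to stay library-neutral (the function name and layout are mine):

```csharp
using System;

// Generate the m x n grid of points p0 + s*u + t*v over a quad, where
// u and v are the edge vectors relative to the first corner p0 and
// m, n >= 2 count the points along each edge.
static float[][] SubdivideQuad(float[] p0, float[] u, float[] v, int m, int n)
{
    var points = new float[m * n][];
    for (int j = 0; j < n; j++)
    {
        for (int i = 0; i < m; i++)
        {
            float s = i / (float)(m - 1);   // fraction along u
            float t = j / (float)(n - 1);   // fraction along v
            points[j * m + i] = new[]
            {
                p0[0] + s * u[0] + t * v[0],
                p0[1] + s * u[1] + t * v[1],
                p0[2] + s * u[2] + t * v[2],
            };
        }
    }
    return points;
}
```

For the asker's arbitrary tile size, you would instead step s and t by tileSize/edgeLength and clamp the last step to 1, which naturally makes the edge pieces smaller.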
This kind of uniform subdivision works fine for triangular meshes as well, but each edge must have the same number of subdivided edges.
If you want to subdivide a general mesh, you can just do this to each individual triangle/quad. This kind of uniform subdivision results in poor-quality meshes, since all the original flat facets remain flat. If you want something more sophisticated, you can look at Loop subdivision, Catmull-Clark, etc. Those are typically constrained to power-of-two levels, but if you research the original formulations, I think you can derive subdivision stencils for non-power-of-two divisions. The theory behind that is a bit more involved than I can reasonably describe here.
Now that you've explained things a bit more clearly, I don't see your problem: you have a rectangle and you want to divide it up into rectangular tiles. So the mesh points you want are regularly spaced in both orthogonal directions. In 2D this is trivial, surely? In 3D it's also trivial, though the maths is a little trickier.
Off the top of my head I would guess that transforming from 3D to 2D (and aligning the rectangle with the coordinate axes at the same time) then calculating the mesh points, then transforming back to 3D is probably about as simple (and CPU-time consuming) as working it all out in 3D in the first place.
Unfortunately, since you're using C#, I'm not able to propose code to help you.
Comment on or edit your question if I've missed the point.
