How to make a perfect cube - C#

In Unity, I create a cube with scale (1,1,1) at position (0,1,0).
I then place it above a plane with scale (15,1,5000) at position (0,0,0).
I check whether the cube's Y position has dropped below 1; to me this means the cube has fallen off the plane. I can move the cube left or right. If I go left, there's no issue. If I go right, my Y position becomes 0.9999998~. This makes my falling check become true even though the cube is still on the plane. Somehow the cube seems not to be a perfect cube. I hope someone can enlighten me on why this is happening. Thanks!

This may not be the answer you want, but, in simple terms, computer arithmetic is finite (search for "floating point arithmetic"). So the "perfect cube" you're looking for cannot exist in the finite representation a machine can work with.
Moreover, Unity has its own physics engine which, like all physics engines, approximates real-world calculations during each operation (translation, rotation, scaling).
The only way to overcome the problem is to do comparisons not against exact values (0, 1) but against ranges.
To maintain "order" in your scene's coordinate system you could also, at fixed intervals, "adjust" your values, for example by manually snapping a coordinate to 1 whenever it is between 0.95 and 1.05 (adjust the numbers to your world's coordinate system, of course).
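A minimal Unity-style sketch of both ideas, the ranged comparison and the occasional snap; the 0.01 tolerance, the resting height of 1 and the 0.05 snap range are assumptions you would tune to your own world:

using UnityEngine;

public class GroundCheck : MonoBehaviour
{
    const float RestingY = 1f;      // assumed resting height of the cube's centre
    const float Tolerance = 0.01f;  // assumed; tune to your world scale

    // Compare against a range instead of the exact value 1.
    bool HasFallen()
    {
        return transform.position.y < RestingY - Tolerance;
    }

    void LateUpdate()
    {
        // Optional "adjustment": snap Y back to the resting height whenever it
        // has only drifted slightly, so the error never accumulates.
        Vector3 p = transform.position;
        if (Mathf.Abs(p.y - RestingY) < 0.05f)
        {
            p.y = RestingY;
            transform.position = p;
        }
    }
}

Unity's Mathf.Approximately is another option when you only need equality within float epsilon, though an explicit tolerance is usually easier to reason about for gameplay checks.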
Related note: in your comment you say "But my point is that why it seems like the cube is not perfect 1x1x1. Somehow it's like 1x1x0.9999998". The fact is that a 3D engine like Unity does not store the objects' sizes in memory, but their vertices' coordinates. It feels like the object's dimensions have changed because of the translation, but strictly speaking that is not true: it's just the finite approximation of the vertices' X, Y, Z values.

Related

Finding average rotation from an array of different points

My name is Stanley and I need help :).
I am attempting to find the average rotation from an array of positions. (Stay with me here.) If you are confused about what I mean, just like I am myself, here is an example. I am making a boating game, and raycast hits give me four points, one at each corner of the boat. To make the floating look realistic, I set the boat's Y position to the average Y of all four points. But the average rotation I cannot seem to figure out.
I have done some tests: if there are points at (0,0,0), (1,1,0) and (0,1,1), the average rotation in XYZ Euler angles is about (-25,-25,50), and I can't seem to figure out the math behind it. (I eyeballed the final rotation in Unity and it looks pretty spot on; that is how I got those numbers.) If anyone has seen anything about this online, like an equation or a way of solving it, it would be a huge help.
Thanks Everyone
Stan.
I don't know whether I understood you correctly, but how exactly do you get an average rotation of (-25, -25, 50)? What I would do (if the number of points is always three) is create a plane, calculate the normal of that plane, and try to figure out what combination of rotation matrices leads to the corresponding components.
If your three points are (0,0,0), (1,1,0), (0,1,1), the corresponding plane's normal is (-1, 1, -1), and from that you could deduce what the rotations must be in order for a reference vector (let's say (1,0,0)) to satisfy R_X(a) * R_Y(b) * R_Z(c) * (1,0,0) = (-1, 1, -1).
But I guess that's not what you want, is it?
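For the boat use case specifically, a common approach (not necessarily what was eyeballed above, so treat this as a hedged sketch) is to build a surface normal from the sampled points with a cross product and then tilt the boat so its up vector matches that normal; with four corner points you could average the normals of the two triangles they form. Assuming Unity and world-space sample points:

using UnityEngine;

public class BuoyancyOrientation : MonoBehaviour
{
    // Orient the boat so its 'up' matches the water surface described by
    // three sampled points (e.g. raycast hits at the hull corners).
    public void AlignTo(Vector3 p0, Vector3 p1, Vector3 p2)
    {
        // Normal of the plane through the three points.
        Vector3 normal = Vector3.Cross(p1 - p0, p2 - p0).normalized;
        if (normal.y < 0f) normal = -normal;   // keep it pointing upward

        // Rotation that tilts world-up onto the surface normal, combined
        // with the boat's current yaw.
        Quaternion tilt = Quaternion.FromToRotation(Vector3.up, normal);
        transform.rotation = tilt * Quaternion.Euler(0f, transform.eulerAngles.y, 0f);
    }
}

With the example points (0,0,0), (1,1,0), (0,1,1), the cross product gives the same (-1, 1, -1) direction mentioned above (flipped to point upward).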

Unity Coordinate Limitations and its impact

Some objects which I have placed at position (-19026.65, 58.29961, 1157), far from the origin (0,0,0), are rendering with issues; the problem is usually referred to as spatial jitter. The objects render with black spots/lines, or maybe it is mesh flickering. (Actually, I can't describe the problem well; maybe the picture helps you understand it.)
I have also tried changing the camera's near and far clipping planes, but it was useless. Why am I getting this? Maybe my objects and camera are too far away from the origin.
Remember:
I have a large environment, and some of my game objects (where the problem occurs) are at position (-19026.65, 58.29961, 1157). I guess the problem is that the objects and the camera are very far from the origin (0,0,0). I found numerous discussions, listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
I couldn't find what the minimum or maximum limit is for placing an object in Unity so that it still works correctly.
Since the world origin is a Vector3 (0,0,0), the maximum coordinate at which you can place an object is 3.402823 × 10^38, since positions are stored as floats. However, as you are finding, this does not mean that placing something there will ensure it works properly. Your limit will be bound by the other performance factors in your game. If you need items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to keep working at different distances from the origin.
Unity's own guidance is not to go further than about 100,000 units away from the center; the editor will warn you. If you look at today's games, many of them move the world around the player rather than the player around the world.
To quote Dave Newson's site:
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.
With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has a floating point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.
The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.
This may not seem like a huge problem, but this issue of losing floating point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
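One common mitigation for the precision loss described above, and for the "move the world around the player" idea, is a floating origin: when the camera or player strays too far from (0,0,0), shift every root object back by that offset so coordinates stay small. A minimal sketch, assuming the 5000-unit threshold and the simple root-object scan are acceptable for your scene (physics state, particles and cached positions may need extra handling):

using UnityEngine;

// Attach to the player or camera. When it drifts too far from the origin,
// the whole world is shifted back so local coordinates stay precise.
public class FloatingOrigin : MonoBehaviour
{
    public float threshold = 5000f;   // assumed re-centre distance

    void LateUpdate()
    {
        Vector3 offset = transform.position;
        if (offset.magnitude < threshold)
            return;

        // Shift every root object (this one included) back towards (0,0,0).
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;
    }
}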

How to draw a line in 3D space across a grid with a start point and direction vector

I'm working on a first-person 3D game. The levels are entirely cube based; walls/floors/etc. are all just tiled cubes (1x1x1).
I'm currently creating a ray using the camera position and the camera's rotation to get the direction. I then want to ray cast out to the first cube that is not empty (or until the ray falls off the grid). Quite often, these are direction vectors such as 0,0,1 or 1,0,0.
I'm not having much luck finding a Bresenham line drawing algorithm that works with a direction vector rather than a start/end point, especially considering the direction vector is not going to contain only integers.
So, as a specific question: can anyone explain whether I'm even coming close to going about this the right way, and could someone go into detail about how it should be done regardless?
Bresenham won't help you here, I'm afraid...what you need are Ray/Line-Plane intersection algorithms:
Line-Plane intersection on Wikipedia
Line-Plane intersection on Wolfram
Ray-Plane intersection on SigGraph
In very rough mathy-pseudocode:
(Caveat: it's been a long time since I've done 3D graphics.)
// Ray == origin point plus distance t along a direction
myRay = Porg + t * Dir;
// Plane == all points X in the cube face's plane, where Ncubeface is the face's
// outward normal and d = dot(Ncubeface, Pcube) for some point Pcube on the face
myPlane: dot(Ncubeface, X) - d == 0;
// Straight shot: from the ray origin toward the point on the cube face
straightTo = (Pcube - Porg);
With these definitions, you can infer some things:
If the dot product of 'straightTo' and the plane normal (call this "angA") is zero, your ray origin lies in the plane of the cube face.
If the dot product of the ray direction and the plane normal (call this "angB") is close to 0, the ray is running parallel to the face of the cube, that is, not intersecting it (unless the origin lies in the plane of the face, as above).
If (angA / angB) < 0, the intersection lies behind the ray origin, i.e. your ray is pointing away from the cube face; otherwise angA / angB is the distance t along the ray at which it crosses the face's plane.
There's other stuff, too, but I'm already pressing the limits of my meager memory. :)
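A small C# version of that test, keeping the angA/angB names from above; Unity's Vector3 is assumed here, but any vector type with a dot product works:

using UnityEngine;

public static class RayPlane
{
    // Returns true and the hit point if the ray (origin + t*dir, t >= 0)
    // crosses the plane through 'pointOnFace' with outward normal 'faceNormal'.
    public static bool Intersect(Vector3 origin, Vector3 dir,
                                 Vector3 pointOnFace, Vector3 faceNormal,
                                 out Vector3 hit)
    {
        hit = Vector3.zero;

        float angB = Vector3.Dot(dir, faceNormal);
        if (Mathf.Abs(angB) < 1e-6f)           // ray parallel to the face
            return false;

        float angA = Vector3.Dot(pointOnFace - origin, faceNormal);
        float t = angA / angB;
        if (t < 0f)                            // face is behind the ray origin
            return false;

        hit = origin + t * dir;
        return true;
    }
}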
EDIT:
There might be a "shortcut", now that I think it through a bit... this is all assuming you're using a 3D array-like structure for your "map".
OK, so bear with me here, thinking and typing on my phone: what if you used the standard old Bresenham delta-error algorithm, but "fixed" it into 2D?
So let's say:
We are at position (5, 5, 5) in a (10x10x10) "box"
We are pointing (1 0 0) (i.e., +X axis)
A ray cast from the upper-left corner of our view frustum is still just a line; only the definitions of "x" and "y" change.
"X" in this case would be (mental visualization)... say, along the axis parallel to the eye line, but level with the cast line; that is, if we were looking at a 2D image that was 640x480, "center" was (0,0) and the upper-left corner was (-320,-240), this "X axis line" would be a line cast through the point (-320,0) into infinity.
"Y" would likewise be a projection of the normal "Y" axis, so... pretty much the same, unless we're tilting our heads.
Now, the math would get hairy as hell when trying to figure out what the next deltaX value would be, but once you'd figured out the formula, it'd be basically constant-time calculations (and now that I think about it, the "X axis" is just going to be the vector <1 0 0> projected through whatever your camera projection is, right?).
Sorry for the rambling, I'm on the train home. ;)
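If the underlying goal is just "which non-empty grid cell does this ray hit first", a voxel traversal in the style of Amanatides and Woo sidesteps Bresenham entirely and copes with arbitrary float direction vectors. A rough sketch, where the bool[,,] occupancy map and the bounds handling are assumptions about how the level is stored:

using UnityEngine;

public static class GridRaycast
{
    // Walks 1x1x1 cells from 'origin' along 'dir' and returns the first solid
    // cell, or null when the ray leaves the grid. 'solid' is an assumed
    // occupancy map indexed as solid[x, y, z]; the ray is assumed to start
    // inside the grid.
    public static Vector3Int? FirstHit(Vector3 origin, Vector3 dir, bool[,,] solid)
    {
        if (dir == Vector3.zero)
            return null;

        Vector3Int cell = Vector3Int.FloorToInt(origin);
        var step = new Vector3Int(System.Math.Sign(dir.x), System.Math.Sign(dir.y), System.Math.Sign(dir.z));

        // t at which the ray crosses the next cell boundary on each axis, and
        // how much t grows between successive boundaries on that axis.
        Vector3 tMax = new Vector3(
            BoundaryT(origin.x, dir.x), BoundaryT(origin.y, dir.y), BoundaryT(origin.z, dir.z));
        Vector3 tDelta = new Vector3(
            dir.x != 0f ? Mathf.Abs(1f / dir.x) : float.PositiveInfinity,
            dir.y != 0f ? Mathf.Abs(1f / dir.y) : float.PositiveInfinity,
            dir.z != 0f ? Mathf.Abs(1f / dir.z) : float.PositiveInfinity);

        while (InBounds(cell, solid))
        {
            if (solid[cell.x, cell.y, cell.z])
                return cell;

            // Advance along whichever axis reaches its next boundary first.
            if (tMax.x < tMax.y && tMax.x < tMax.z) { cell.x += step.x; tMax.x += tDelta.x; }
            else if (tMax.y < tMax.z)               { cell.y += step.y; tMax.y += tDelta.y; }
            else                                    { cell.z += step.z; tMax.z += tDelta.z; }
        }
        return null;   // fell off the grid without hitting anything
    }

    static float BoundaryT(float pos, float d)
    {
        if (d == 0f) return float.PositiveInfinity;
        float next = d > 0f ? Mathf.Floor(pos) + 1f : Mathf.Floor(pos);
        return (next - pos) / d;
    }

    static bool InBounds(Vector3Int c, bool[,,] g) =>
        c.x >= 0 && c.y >= 0 && c.z >= 0 &&
        c.x < g.GetLength(0) && c.y < g.GetLength(1) && c.z < g.GetLength(2);
}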

Simple way to calculate point of intersection between two polygons in C#

I've got two polygons, each defined as a list of vectors. I've managed to write routines to transform and intersect these two polygons (see Frame 1 below). Using line intersection I can figure out whether they collide, and I have written a working Collide() function.
This is to be used in a variable-step timed game, so (as shown below) in Frame 1 the right polygon is not colliding, and it's perfectly normal for the polygons to be right inside each other by Frame 2, with the right polygon having moved to the left.
My question is: what is the best way to figure out the moment of intersection? In the example, let's assume in Frame 1 the right polygon is at X = 300 and in Frame 2 it has moved -100 and is now at 200; that's all I know by the time Frame 2 comes about: it was at 300, now it's at 200. What I want to know is when it actually collided, at what X value; here it was probably at about 250.
I'm preferably looking for a C# source code solution to this problem.
Maybe there's a better way of approaching this for games?
I would use the separating axis theorem, as outlined here:
Metanet tutorial
Wikipedia
Then I would sweep test or use multisampling if needed.
GMan here on StackOverflow wrote a sample implementation over at gpwiki.org.
This may all be overkill for your use-case, but it handles polygons of any order. Of course, for simple bounding boxes it can be done much more efficiently through other means.
I'm no mathematician either, but one possible though crude solution would be to run a mini simulation.
Let us call the moving polygon M and the stationary polygon S (though there is no requirement for S to actually be stationary, the approach should work just the same regardless). Let us also call the two frames you have F1 for the earlier and F2 for the later, as per your diagram.
If you were to translate polygon M back towards its position in F1 in very small increments until such time that they are no longer intersecting, then you would have a location for M at which it 'just' intersects, i.e. the previous location before they stop intersecting in this simulation. The intersection in this 'just' intersecting location should be very small — small enough that you could treat it as a point. Let us call this polygon of intersection I.
To treat I as a point you could choose the vertex of it that is nearest the centre point of M in F1: that vertex has the best chance of being outside of S at time of collision. (There are lots of other possibilities for interpreting I as a point that you could experiment with too that may have better results.)
Obviously this approach has some drawbacks:
The simulation will be slower for greater speeds of M, as the distance between its locations in F1 and F2 will be greater and more simulation steps will need to be run. (You could address this by having a fixed number of simulation cycles irrespective of the speed of M, but that would mean the accuracy of the result would differ for faster and slower moving bodies.)
The 'step' size in the simulation will have to be sufficiently small to get the accuracy you require but smaller step sizes will obviously have a larger calculation cost.
Personally, without the necessary mathematical intuition, I would go with this simple approach first and try to find a mathematical solution as an optimization later.
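A compact sketch of that back-stepping simulation, assuming you already have a polygon-overlap test (the Collide() routine from the question, passed in here as 'overlaps') and 2D vectors; Unity's Vector2 is used only for brevity, and the 100-step count is an arbitrary assumption:

using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;   // only for Vector2; substitute your own vector struct

public static class CollisionBackstep
{
    // Translates polygon M back along its per-frame displacement in small
    // increments until it no longer overlaps S, and returns the offset at
    // which it 'just' intersected.
    public static Vector2 FindContactOffset(
        List<Vector2> polyM, List<Vector2> polyS,
        Vector2 frameDisplacement,                       // position(F2) - position(F1)
        Func<List<Vector2>, List<Vector2>, bool> overlaps,
        int steps = 100)
    {
        Vector2 stepBack = -frameDisplacement / steps;
        Vector2 offset = Vector2.zero;

        for (int i = 0; i < steps; i++)
        {
            Vector2 trial = offset + stepBack;
            List<Vector2> moved = polyM.Select(p => p + trial).ToList();
            if (!overlaps(moved, polyS))
                return offset;   // the previous offset was the last intersecting one
            offset = trial;
        }
        return offset;           // still overlapping after all steps (very deep hit)
    }
}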
If you have the ability to determine whether the two polygons overlap, one idea would be to use a modified binary search to detect where the two hit. Start by subdividing the time interval in half and seeing whether the two polygons intersect at the midpoint. If so, recursively search the first half of the range; if not, search the second half. If you specify some tolerance level at which you no longer care about small distances (for example, at the level of a pixel), then the runtime of this approach is O(log(D / K)), where D is the distance between the polygons and K is the cutoff threshold. If you know which point is going to ultimately enter the second polygon, you should be able to detect the collision very quickly this way.
Hope this helps!
For a rather generic solution, and assuming ...
no polygons are intersecting at time = 0
at least one polygon is intersecting another polygon at time = t
and you're happy to use a C# clipping library (e.g. Clipper)
then use a binary approach to deriving the time of intersection by...
double tInterval = t;
double tCurrent = 0;
int direction = +1;

while (tInterval > MinInterval)
{
    tInterval = tInterval / 2;
    tCurrent += tInterval * direction;
    MovePolygons(tCurrent);

    // If the polygons overlap at tCurrent, the first contact happened
    // earlier, so step back; otherwise it happens later, so step forward.
    direction = PolygonsIntersect ? -1 : +1;
}
Well, you may notice that it's always a vertex of one of the polygons that hits an edge of the other first (or another vertex, but that's almost the same thing after all). A possible solution would be to calculate the distance of the vertices from the other polygon's edges along the movement direction, but I think this would end up being rather slow.
I guess normally the distances covered between frames are so small that it's not important to know exactly where it hit first; small intersections won't be visible, and after all, things will rebound or explode anyway, won't they? :)

Collision detection using MeshGeometry3D

I am creating a CAD-like program, creating ModelVisual3D objects. How do I do collision detection between my objects (ModelVisual3D) using MeshGeometry3D? Do I have to compare every triangle in the moving object against the still-standing objects?
What will be my best way to do collision detection?
It depends on how precise your collision detection needs to be.
There is no built-in collision detection in WPF's 3D library. If you need high precision, you'll need to compare every triangle.
That being said, you can start with comparing bounding boxes and/or bounding spheres. This is always a good first step, since it can quickly eliminate most cases. If you don't need precise collision detection, this alone may be fine.
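A quick sketch of that first pass in WPF terms; the meshes are assumed to already be expressed in the same (world) coordinate space, otherwise transform their bounds (e.g. with Transform3D.TransformBounds) before comparing:

using System.Windows.Media.Media3D;

public static class CollisionPrepass
{
    // Cheap first pass: axis-aligned bounding-box overlap between two meshes.
    public static bool BoundsOverlap(MeshGeometry3D a, MeshGeometry3D b)
    {
        Rect3D boundsA = a.Bounds;   // Bounds is inherited from Geometry3D
        Rect3D boundsB = b.Bounds;
        return boundsA.IntersectsWith(boundsB);
    }
}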
To add to Reed's answer (based on my answer here):
After you've eliminated most of your objects via the bounding box/sphere to bounding box/sphere test you should test the triangles of your test object(s) against the other object's bounding box/sphere first before checking triangle/triangle collisions. This will eliminate a lot more cases.
To rule out a collision you'll have to check all the triangles in the test object, but to find a case where you'll need to go down to the triangle/triangle case you only need to find the first triangle that interacts with the bounding box/sphere of the other object.
Look at the SAT (Separating Axis Theorem); it's one of the fastest and easiest approaches out there.
The idea is that if you can find an axis along which the two shapes' projections do not overlap (in 2D: a line that separates them), then they're not colliding.
As said above, first do an early AABB check, and only when two objects' boxes overlap test each polygon of object A against each polygon of object B.
Starting in 2D: to test whether two polygons collide, you take their extents (projections) along the candidate axes (in the simplest case X and Y); if those extents overlap on every axis, the polygons are colliding.
On this page you can find a very good explanation on how it works and how to apply it:
http://www.metanetsoftware.com/technique/tutorialA.html
To apply it in 3D, use the face normals of each polygon (and, strictly, the cross products of edge pairs) as the candidate separating axes.
If the projected extents overlap on every one of those axes, the polygons are colliding; if any axis separates them, they are not.
Also, this method can handle moving objects and give you the moment of collision: subtract velocity B from velocity A so the problem reduces to one moving object and one static one, then extend polygon A's extent along each tested axis by that relative velocity; from where the swept extents start to overlap you can recover how far along the motion the contact happens.
Another option would be to use BulletSharp, a C# wrapper of the well-known Bullet Physics Engine. In this case, you would need to write functions to create a (concave) collision shape from a MeshGeometry3D.
In my experience it works pretty well, even though dynamic collision between concave shapes is not supported; you'll need to use convex decomposition as a workaround.
