Some objects that I have placed at position (-19026.65, 58.29961, 1157) from the origin (0,0,0) are rendering with issues; the problem is referred to as spatial jitter (SJ). You can check its details here, or see the image below. The objects render with black spots/lines, or maybe it is mesh flickering. (I can't really describe the problem; hopefully the picture will help you understand it.)
I have also tried changing the camera's near and far clipping planes, but it didn't help. Why am I getting this? Maybe because my objects and the camera are far away from the origin?
Remember:
I have a large environment, and some of my game objects (where the problem occurs) are at position (-19026.65, 58.29961, 1157). My guess is that the problem is that the objects and the camera are very far from the origin (0,0,0). I found numerous discussions, listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
What I didn't find is the minimum or maximum limit for placing an object in Unity so that it still works correctly.
Since the world origin is a Vector3(0,0,0), the maximum coordinate at which you can place an object is 3.402823 × 10^38, since positions are single-precision floating-point values. However, as you are finding, this does not necessarily mean that placing something there will ensure it works properly. Your limitation will be bound by the other performance factors in your game. If you need to have items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to keep working at points far from the origin.
Unity's suggestion: it is not recommended to go any further than 100,000 units away from the center, and the editor will warn you if you do. If you look at today's gaming world, many games move the world around the player rather than the player around the world.
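As a rough illustration of that last idea, here is a minimal floating-origin sketch in Unity C#. The threshold value and the shift-all-roots approach are simplifying assumptions; particles, physics velocities, and networked state would all need extra handling:

```csharp
using UnityEngine;

// Minimal floating-origin sketch: attach to the camera. When the camera
// drifts too far from (0,0,0), shift every root object back so the camera
// ends up near the origin again. The 5000-unit threshold is arbitrary.
public class FloatingOrigin : MonoBehaviour
{
    public float threshold = 5000f;

    void LateUpdate()
    {
        Vector3 offset = transform.position;
        if (offset.magnitude < threshold) return;

        // Move the world (camera included) so the player is back near origin.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;
    }
}
```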
To quote Dave Newson's site (Read Here):
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.
With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has floating-point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.
The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.
This may not seem like a huge problem, but this issue of losing floating-point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
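You can verify this degradation in a couple of lines of C#, using the exact values from the quote:

```csharp
using System;

class FloatPrecisionDemo
{
    static void Main()
    {
        float near = 1.234567f;
        float far  = 765432.1f;

        Console.WriteLine(near + 0.01f); // ~1.244567: the 1 cm step is representable
        Console.WriteLine(far  + 0.01f); // 765432.1: floats this large are spaced
                                         // ~0.0625 apart, so the 1 cm step is lost
    }
}
```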
I have 9 stereo camera rigs that are essentially identical. I am calibrating them all with the same methodology:
1. Capture 25 images of an 8x11 chessboard (the same one for all rigs) in varying positions and orientations
2. Detect the corners in all images using FindChessboardCorners and refine them using CornerSubPix
3. Calibrate each camera's intrinsics individually using CalibrateCamera
4. Calibrate the extrinsics using StereoCalibrate, passing the CameraMatrix and DistortionCoeffs from #3 and using the FixIntrinsics flag
5. Compute the rectification transformations using StereoRectify
Then, with a projector using structured light, I place a sphere of known radius (16 mm; the same one for all rigs) in front of the rigs and measure it as follows:
1. Use image processing to match a large number of features between the two cameras in the distorted images
2. Use UndistortPoints to get their undistorted image locations
3. Use TriangulatePoints to get the points in homogeneous coordinates
4. Use ConvertFromHomogeneous to get the points in world coordinates
On two of the rigs, the sphere measurement comes out highly accurate (RMSE 0.034 mm). However, on the other seven rigs, the measurement comes out with an unacceptable RMSE of 0.15 mm (5× worse). Also, the inaccuracy of each of those measurements is skewed vertically: it's as if the sphere is measured as spherical in the horizontal direction, but slightly skewed vertically, with a peak pointing slightly downward.
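For reference, the RMSE above compares the triangulated points against the known 16 mm radius. The sketch below shows one standard way to do that (an algebraic Kåsa sphere fit plus radial residuals); the helper names and the tiny solver are purely illustrative, not OpenCV API:

```csharp
using System;

static class SphereCheck
{
    // Algebraic (Kåsa) sphere fit: x^2+y^2+z^2 = 2x*cx + 2y*cy + 2z*cz + t,
    // with t = r^2 - |c|^2. Solves the 4x4 normal equations, then returns the
    // RMSE of the radial residuals against the known radius (16 mm here).
    public static double RadialRmse(double[][] pts, double knownRadius)
    {
        var n = new double[4, 5]; // normal equations; augmented column = RHS
        foreach (var p in pts)
        {
            double[] row = { 2 * p[0], 2 * p[1], 2 * p[2], 1.0 };
            double rhs = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
            for (int i = 0; i < 4; i++)
            {
                for (int j = 0; j < 4; j++) n[i, j] += row[i] * row[j];
                n[i, 4] += row[i] * rhs;
            }
        }
        double[] c = SolveGaussJordan(n); // (cx, cy, cz, t); t is unused below

        double sumSq = 0;
        foreach (var p in pts)
        {
            double dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
            double r = Math.Sqrt(dx * dx + dy * dy + dz * dz);
            sumSq += (r - knownRadius) * (r - knownRadius);
        }
        return Math.Sqrt(sumSq / pts.Length);
    }

    // Naive Gauss-Jordan elimination, no pivoting - fine for a well-posed fit.
    static double[] SolveGaussJordan(double[,] m)
    {
        for (int i = 0; i < 4; i++)
        {
            double piv = m[i, i];
            for (int j = i; j < 5; j++) m[i, j] /= piv;
            for (int k = 0; k < 4; k++)
            {
                if (k == i) continue;
                double f = m[k, i];
                for (int j = i; j < 5; j++) m[k, j] -= f * m[i, j];
            }
        }
        return new[] { m[0, 4], m[1, 4], m[2, 4], m[3, 4] };
    }
}
```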
I have picked my methodology apart for a few weeks and tried almost every variation I can think of at each step. However, after recalibrating the devices multiple times and recapturing the sphere measurements multiple times, the same two devices remain spot-on and the other seven keep giving the exact same error. Nothing about the calibration results of the 7 incorrect rigs stands out as erroneous in comparison to the results of the 2 good rigs, other than the sphere measurement. I also cannot find anything about the rigs that is significantly different hardware-wise.
I am pulling my hair out at this point and am turning to this fine community to see if anyone notices anything I'm missing in the calibration procedure described above. The process does seem valid, though, since it works for 2 of the 9 devices.
Thank you!
In Unity, I create a cube with scale (1,1,1) at position (0,1,0).
Then I place it above a plane whose scale is (15,1,5000), at position (0,0,0).
I check whether the cube's Y position is below 1; to me, that means the cube has fallen off the plane. I can control the cube by going left or right. If I go left, there's no issue. If I go right, my Y position becomes ~0.9999998. This makes my falling check come out true even though the cube is still on the plane. Somehow, the cube seems not to be a perfect cube. I hope someone can enlighten me on why this is happening. Thanks!
This may not be the answer you want, but - in simple terms - computer arithmetic is finite (search for "floating-point arithmetic"). So the "perfect cube" that you're looking for does not exist in the finite representation a machine can work with.
Moreover, Unity has its own physics engine that (like all physics engines) approximates real-world calculations during each operation (translation, rotation, scaling).
The only way to overcome the problem is to do comparisons not against exact values (0, 1) but against ranges.
To maintain "order" in the coordinate system of your scene, you could also "adjust" your values at fixed intervals - for example, manually setting a coordinate to 1 if it is between 0.95 and 1.05 (tune the values to your world's coordinate system, of course).
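In Unity/C# terms, both suggestions might look something like this (the 0.05 tolerance is only an example value to tune):

```csharp
using UnityEngine;

public class FallCheck : MonoBehaviour
{
    const float Tolerance = 0.05f; // tune to your world's scale

    void Update()
    {
        float y = transform.position.y;

        // Range comparison instead of the exact "y < 1" test:
        bool hasFallen = y < 1f - Tolerance;

        // Periodic "adjustment": snap back to exactly 1 while on the plane.
        if (!hasFallen && Mathf.Abs(y - 1f) < Tolerance)
            transform.position = new Vector3(
                transform.position.x, 1f, transform.position.z);
    }
}
```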
Related note: in your comment you say, "But my point is that why it seems like the cube is not perfect 1x1x1. Somehow it's like 1x1x0.9999998". The fact is that a 3D system like Unity does not keep the objects' sizes in memory, but rather their vertices' coordinates. It feels like the object's dimensions have changed due to the translation, but strictly speaking this is not true: it's just a finite approximation of the vertices' X, Y, Z values.
I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generating a planet, in terms of performance. So far I've seen the icosphere and the cube-mapped sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids they will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube's) and that there is one triangle for each octant of 3D space (unlike the icosphere and cube).
My rationale for octahedrons (and icospheres) being faster than cubes lies in the fact that each face is already a triangle (whereas the cube has square faces). Adding detail to an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation you create three new vertices whose positions need to be normalized so that the mesh remains properly inscribed in the unit sphere.
(Image: tessellating a cube)
The octahedron and icosahedron can use a lookup table to fetch this normalization factor (as opposed to the cube), because the number is consistent at each iteration.
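Here is a sketch of that shared subdivision step in Unity-style C#; the re-normalization of the three new midpoints is what keeps the mesh inscribed in the unit sphere:

```csharp
using System.Collections.Generic;
using UnityEngine;

static class SphereTessellation
{
    // Splits one triangle (a, b, c) into four, pushing the three new edge
    // midpoints back onto the unit sphere. The same step works for the faces
    // of an octahedron, an icosahedron, or a triangulated cube.
    public static void Subdivide(Vector3 a, Vector3 b, Vector3 c, List<Vector3> verts)
    {
        Vector3 ab = ((a + b) * 0.5f).normalized;
        Vector3 bc = ((b + c) * 0.5f).normalized;
        Vector3 ca = ((c + a) * 0.5f).normalized;

        verts.AddRange(new[] { a, ab, ca });  // corner triangles
        verts.AddRange(new[] { ab, b, bc });
        verts.AddRange(new[] { ca, bc, c });
        verts.AddRange(new[] { ab, bc, ca }); // center triangle
    }
}
```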
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quad-trees, since each triangle is optionally tessellated into four smaller triangles. This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera. This system will likely be the bottleneck, since the meshes have to be recalculated at runtime.
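For illustration, such a per-face quad-tree node could look like this; the distance-based split rule is just a placeholder to tune for your LOD needs:

```csharp
using UnityEngine;

// Illustrative quad-tree node for one face triangle of the base solid.
// Split/merge is driven by camera distance, halving the range per depth.
class TriNode
{
    public Vector3 A, B, C;    // corners on the unit sphere
    public TriNode[] Children; // null = leaf, length 4 when tessellated

    public void Update(Vector3 camPos, float radius, float splitRange,
                       int depth, int maxDepth)
    {
        Vector3 center = ((A + B + C) / 3f).normalized * radius;
        bool split = depth < maxDepth &&
                     Vector3.Distance(center, camPos) < splitRange / (1 << depth);

        if (split && Children == null) Tessellate();
        if (!split && Children != null) Children = null; // reduce detail

        if (Children != null)
            foreach (var child in Children)
                child.Update(camPos, radius, splitRange, depth + 1, maxDepth);
    }

    void Tessellate()
    {
        Vector3 ab = ((A + B) * 0.5f).normalized;
        Vector3 bc = ((B + C) * 0.5f).normalized;
        Vector3 ca = ((C + A) * 0.5f).normalized;
        Children = new[]
        {
            new TriNode { A = A,  B = ab, C = ca },
            new TriNode { A = ab, B = B,  C = bc },
            new TriNode { A = ca, B = bc, C = C  },
            new TriNode { A = ab, B = bc, C = ca },
        };
    }
}
```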
In the FarseerPhysics engine / XNA, what is ConvertUnits.ToDisplayUnits()?
Farseer (or rather Box2D, from which it is derived) is tuned to work best with objects that range from 0.1 to 10 units in weight and from 0.1 to 10 units in size (width or height). If you use objects outside this range, your simulation may not be as stable as it otherwise could be.
Most of the time this works well for "regular"-sized objects you might find in a game (cars, books, etc.), as measured in meters and kilograms. However, this is not mandatory, and you can in fact choose any scale (for example, games involving marbles or aeroplanes might use a scale other than meters/kilograms).
Most games have various spaces: "model" space, "projection" space, "view" space, "world" space, "screen" space, "client" space. Some are measured in pixels, others in plain units. In general, games use matrices to convert vertices from one space to another - most obviously when taking a world measured in units and displaying it on a screen measured in pixels.
XNA's SpriteBatch simplifies this a fair bit, by default, by having the world space be the same as client space. One world unit = one pixel.
Normally you would define your world space to be identical to the space your physics world exists in. But this is a problem when using SpriteBatch's default space, as you could then not have a physics object larger than 10 pixels without going outside the range Farseer is tuned for.
Farseer's[1] solution is to have two different world spaces - the game space and the physics space - and to use the ConvertUnits class everywhere it needs to convert between the two systems.
I personally find this solution pretty damn awful, as it is highly error-prone (you have to get the conversion correct in multiple places spread throughout your code).
For any modestly serious game development effort, I would recommend using a unified world space designed around what Farseer requires, and then either passing a global transform to SpriteBatch.Begin, or using something other than SpriteBatch entirely, to render that world to the screen.
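In XNA, that global transform might look like this; the 64 pixels-per-meter ratio and the variable names (body, boxTexture, boxWidthMeters, origin) are illustrative, not anything Farseer prescribes:

```csharp
// Inside your Draw method: map one physics meter to 64 screen pixels once,
// via SpriteBatch's transform matrix, instead of converting every position.
Matrix worldToScreen = Matrix.CreateScale(64f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, worldToScreen);

// Positions and sizes are now given directly in meters (Farseer's space):
spriteBatch.Draw(boxTexture, body.Position, null, Color.White, body.Rotation,
                 origin, boxWidthMeters / boxTexture.Width,
                 SpriteEffects.None, 0f);

spriteBatch.End();
```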
However, for simple demos, ConvertUnits does the job. And it lets you keep the nice SpriteBatch property that one pixel in an unscaled sprite's texture = one pixel on screen.
[1]: last time I checked, ConvertUnits was part of the Farseer samples, and not part of the physics library itself.
I haven't dealt with that particular chunk of code, but most games that have a virtual space (the game world) will have a function similar to ToDisplayUnits, and its function is to convert the game world's physical units into XNA's display units.
An example would be meters to pixels, or meters to x,y screen coordinates.
Having this is good because it allows you to do all your math in physics units, keeping everything abstract, and then translate things to the game screen separately.
Farseer uses the MKS (metre, kilogram, second) units of measure. It provides methods to convert display units of measure to MKS units and vice versa: ToSimUnits() and ToDisplayUnits().
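Typical usage looks roughly like the following; the ratio value and variable names are examples, and the exact ConvertUnits API may vary between Farseer sample versions:

```csharp
// At startup: choose how many display pixels one simulation meter spans.
ConvertUnits.SetDisplayUnitToSimUnitRatio(64f);

// Pixels -> meters, e.g. when sizing a fixture from a sprite:
float widthMeters  = ConvertUnits.ToSimUnits(spriteTexture.Width);
float heightMeters = ConvertUnits.ToSimUnits(spriteTexture.Height);

// Meters -> pixels when drawing the body:
Vector2 screenPos = ConvertUnits.ToDisplayUnits(body.Position);
spriteBatch.Draw(spriteTexture, screenPos, null, Color.White,
                 body.Rotation, origin, 1f, SpriteEffects.None, 0f);
```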
I am working on a project with a robot that has to find its way to an object it must pick up, while avoiding some obstacles on the way.
The problem is that the robot and the object it needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. The A* pathfinder often chooses to place the route along the edges of the obstacles, sometimes making the robot collide with them, which we do not want.
I have tried adding more non-walkable fields around the obstacles, but it does not always work out well: the robot still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on at all.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L. suggested and added a large cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the two obstacles in the middle should look just like the ones in the corners, but after running our pathfinder it seems the costs get overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Annotated map http://sogaard.us/uploades/2_complete_map.png
The picture above shows what the various things on the map are.
Path found http://sogaard.us/uploades/3_path.png
This is the path that was found; as with our original problem, it is hugging the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture, showing the cost map with the path overlaid.
So what I find strange is why the A* pathfinder overrides these field costs, which are VERY high.
Could it be happening when it evaluates the nodes in the open list against the current field, to see whether the current field's path is shorter than the one already in the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Perhaps a simple hyperbolic decay
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
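Here is a sketch of the linear variant applied to a cost grid (C#, to match your pathfinder; the brute-force nearest-obstacle search is only for clarity):

```csharp
// Adds the linear cost C = -m*x + b (clamped at 0) to every walkable cell,
// where x is the distance to the nearest obstacle. O(cells^2) brute force.
static void AddGradientCost(float[,] cost, bool[,] blocked, float b, float m)
{
    int w = cost.GetLength(0), h = cost.GetLength(1);
    for (int x = 0; x < w; x++)
    for (int y = 0; y < h; y++)
    {
        if (blocked[x, y]) continue;
        double nearest = double.MaxValue;
        for (int ox = 0; ox < w; ox++)
        for (int oy = 0; oy < h; oy++)
            if (blocked[ox, oy])
                nearest = System.Math.Min(nearest,
                    System.Math.Sqrt((x - ox) * (x - ox) + (y - oy) * (y - oy)));
        cost[x, y] += System.Math.Max(0f, b - m * (float)nearest);
    }
}
```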
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your tiles - maybe one tile per ten-by-ten or twenty-by-twenty block of pixels, with the tile's cost being the highest cost of any pixel in the tile.
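Down-sampling could be as simple as taking the maximum cost over each block - a sketch:

```csharp
// Collapse the pixel cost map into coarse tiles; each tile inherits the
// highest cost of any pixel inside it. `block` would be 10 or 20 here.
static float[,] Downsample(float[,] pixelCost, int block)
{
    int w = pixelCost.GetLength(0) / block, h = pixelCost.GetLength(1) / block;
    var tiles = new float[w, h];
    for (int tx = 0; tx < w; tx++)
    for (int ty = 0; ty < h; ty++)
        for (int px = 0; px < block; px++)
        for (int py = 0; py < block; py++)
            tiles[tx, ty] = System.Math.Max(tiles[tx, ty],
                pixelCost[tx * block + px, ty * block + py]);
    return tiles;
}
```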
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
You might try enlarging the obstacles to take the size of the robot into account, and rounding the corners of the enlarged obstacles to address the blocking problem. Then the gaps that get filled in are too small for the robot to squeeze through anyway.
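The enlargement can be done once, up front, by inflating the blocked cells by the robot's radius - a sketch, with the radius in grid cells as an assumed parameter:

```csharp
// Marks every cell within `radiusCells` of an obstacle as blocked, so the
// one-pixel robot can then be planned as a point through the inflated map.
static bool[,] Inflate(bool[,] blocked, int radiusCells)
{
    int w = blocked.GetLength(0), h = blocked.GetLength(1);
    var result = new bool[w, h];
    for (int x = 0; x < w; x++)
    for (int y = 0; y < h; y++)
    {
        if (!blocked[x, y]) continue;
        for (int dx = -radiusCells; dx <= radiusCells; dx++)
        for (int dy = -radiusCells; dy <= radiusCells; dy++)
        {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                dx * dx + dy * dy <= radiusCells * radiusCells)
                result[nx, ny] = true;
        }
    }
    return result;
}
```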
I've built one such physical robot. My solution was to move one step backward whenever there was a left and a right turn to make.
The red line is the problem as I understand it; the black line is what I did to resolve the issue: the robot moves straight backward for a step and then turns right.