Farseer ConvertUnits? - C#

In the FarseerPhysics engine / XNA, what is ConvertUnits.ToDisplayUnits()?

Farseer (or rather Box2D, from which it is derived) is tuned to work best with objects that range from 0.1 to 10 units in weight and from 0.1 to 10 units in size (width or height). If you use objects outside this range, your simulation may not be as stable as it otherwise could be.
Most of the time this works well for "regular" sized objects you might find in a game (cars, books, etc), as measured in meters and kilograms. However this is not mandatory and you can, in fact, choose any scale. (For example: games involving marbles, or aeroplanes, might use a scale other than meters/kilograms).
Most games have various spaces. For example: "Model" space, "Projection" space, "View" space, "World" space, "Screen" space, "Client" space. Some are measured in pixels, others in plain units. And in general games use matrices to convert vertices from one space to another. Most obviously when taking a world measured in units, and displaying it on a screen measured in pixels.
XNA's SpriteBatch simplifies this a fair bit, by default, by having the world space be the same as client space. One world unit = one pixel.
Normally you would define your world space to be identical to the space your physics world exists in. But this would be a problem when using SpriteBatch's default space, as you then couldn't have a physics object larger than 10 pixels without going outside the range that Farseer is tuned for.
Farseer's[1] solution is to have two different world spaces - the game space and the physics space - and to use the ConvertUnits class everywhere it needs to convert between the two systems.
I personally find this solution pretty damn awful, as it is highly error-prone: you have to get the conversion correct in multiple places spread throughout your code.
For any modestly serious game development effort, I would recommend using a unified world space designed around what Farseer requires, and then either passing a global transform to SpriteBatch.Begin, or using something other than SpriteBatch entirely, to render that world to the screen.
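For illustration, here is a minimal sketch of that global-transform approach, assuming XNA 4.0's SpriteBatch.Begin overload that takes a transform matrix (the 64-pixels-per-meter ratio is an arbitrary choice, not something Farseer mandates):

    // Inside your Game's Draw method: scale physics space (meters) up to
    // screen pixels once, instead of converting every position by hand.
    const float PixelsPerMeter = 64f; // arbitrary example ratio
    Matrix worldToScreen = Matrix.CreateScale(PixelsPerMeter, PixelsPerMeter, 1f);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      null, null, null, null, worldToScreen);
    // ... issue all draw calls in physics (meter) coordinates ...
    spriteBatch.End();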
However, for simple demos, ConvertUnits does the job. And it lets you keep the nice SpriteBatch property that one pixel in an unscaled sprite's texture = one pixel on screen.
[1]: last time I checked, ConvertUnits was part of the Farseer samples, and not part of the physics library itself.

I haven't dealt with that particular chunk of code, but most games that have a virtual space (the game world) will have a function similar to ToDisplayUnits, whose job is to convert the game world's physical units to the display units in XNA.
An example would be meters to pixels, or meters to x,y screen coordinates.
Having this is good, because it allows you to do all your math in physics units, keeping everything abstract, and then translate things to the game screen separately.

Farseer uses MKS (metre, kilogram, second) units of measure, and it provides the methods ToSimUnits() and ToDisplayUnits() to convert display units of measure to MKS units and vice versa.
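Roughly how the samples use it (a sketch of the usual shape, not the exact sample source; the 64 pixels-per-meter ratio is an arbitrary example):

    // Set the ratio once at startup, then convert at the sim/display boundary.
    ConvertUnits.SetDisplayUnitToSimUnitRatio(64f); // 64 pixels = 1 meter

    Vector2 spritePos = ConvertUnits.ToDisplayUnits(body.Position);    // meters -> pixels
    float bodyWidth   = ConvertUnits.ToSimUnits((float)texture.Width); // pixels -> meters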

Confusing inaccuracy in Emgu CV stereo calibration

I have 9 stereo camera rigs that are essentially identical. I am calibrating them all with the same methodology:
1. Capture 25 images of an 8x11 chessboard (the same one for all rigs) in varying positions and orientations.
2. Detect the corners in all images using FindChessboardCorners and refine them using CornerSubPix.
3. Calibrate each camera's intrinsics individually using CalibrateCamera.
4. Calibrate the extrinsics using StereoCalibrate, passing in the CameraMatrix and DistortionCoeffs from step 3 and using the FixIntrinsics flag.
5. Compute the rectification transformations using StereoRectify.
Then, with a projector using structured light, I place a sphere of known radius (16 mm; the same one for all rigs) in front of the rigs and measure it as follows (sketched in code below):
1. Use image processing to match a large number of features between the two cameras in the distorted images.
2. Use UndistortPoints to get their undistorted image locations.
3. Use TriangulatePoints to get the points in homogeneous coordinates.
4. Use ConvertFromHomogeneous to get the points in world coordinates.
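In code, steps 2-4 look roughly like this with Emgu CV's CvInvoke wrappers (simplified; variable names are illustrative and exact overloads vary between Emgu versions):

    // leftRaw/rightRaw: matched features in the distorted images (step 1).
    // cam*/dist* come from CalibrateCamera; R*/P* come from StereoRectify.
    static Mat TriangulateMatches(
        VectorOfPointF leftRaw, VectorOfPointF rightRaw,
        Mat cam1, Mat dist1, Mat R1, Mat P1,
        Mat cam2, Mat dist2, Mat R2, Mat P2)
    {
        // Step 2: undistort into the rectified frames that P1/P2 project from.
        var leftU = new VectorOfPointF();
        var rightU = new VectorOfPointF();
        CvInvoke.UndistortPoints(leftRaw, leftU, cam1, dist1, R1, P1);
        CvInvoke.UndistortPoints(rightRaw, rightU, cam2, dist2, R2, P2);

        // Step 3: triangulate; the result is a 4xN matrix of homogeneous points.
        var points4D = new Mat();
        CvInvoke.TriangulatePoints(P1, P2, leftU, rightU, points4D);

        // Step 4: transpose to Nx4, then divide through by w for 3D coordinates.
        var points4DT = new Mat();
        CvInvoke.Transpose(points4D, points4DT);
        var points3D = new Mat();
        CvInvoke.ConvertPointsFromHomogeneous(points4DT, points3D);
        return points3D;
    }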
On two of the rigs, the sphere measurement comes out highly accurate (RMSE 0.034 mm). However, on the other seven rigs, the measurement comes out with an unacceptable RMSE of 0.15 mm (roughly 5x worse). Also, the inaccuracy of each of those measurements seems to be skewed vertically: it's as if the sphere measures "spherical" in the horizontal direction but slightly skewed vertically, with a peak pointing slightly downward.
I have picked my methodology apart for a few weeks and tried almost every variation I can think of. However, after recalibrating the devices multiple times and recapturing sphere measurements multiple times, the same two devices remain spot-on and the other seven keep giving the exact same error. Nothing about the calibration results of the 7 incorrect rigs stands out as "erroneous" in comparison to the results of the 2 good rigs, other than the sphere measurement. Also, I cannot find anything about the rigs that is significantly different hardware-wise.
I am pulling my hair out at this point and am turning to this fine community to see if anyone notices anything I'm missing in the calibration procedure described above. I've tried every variation I can think of at each step, yet the process seems valid, since it works for 2 of the 9 devices.
Thank you!

Is there a built-in way of efficiently finding whether a vector is in a vector array and, if so, at what index it is?

Hello, I am creating a procedurally generated cave script. I have gotten all the Perlin noise garbage out of the way and am now trying to turn the vertices into a mesh. I understand that I need to declare the faces for it and need some form of marching cubes algorithm. For my script to know which direction to render a face in, it needs to be aware of all the vertices around it by searching through the vertices. Is there any way my script can efficiently search through a Vector3 array to find whether a given Vector3 is in it, and if so, at what place in the array?
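One straightforward approach (a sketch in Unity C#; names are illustrative) is to build a Dictionary from vertex to array index once, so each lookup is O(1) instead of a linear scan:

    using System.Collections.Generic;
    using UnityEngine;

    static Dictionary<Vector3, int> BuildIndex(Vector3[] vertices)
    {
        // Dictionary keys use Vector3.Equals/GetHashCode, which compare exact
        // float values - quantize computed vertices (e.g. round to a grid)
        // before keying, or equal-looking vertices may hash differently.
        var indexOf = new Dictionary<Vector3, int>(vertices.Length);
        for (int i = 0; i < vertices.Length; i++)
            indexOf[vertices[i]] = i;   // later duplicates overwrite earlier ones
        return indexOf;
    }

    // Usage: if (indexOf.TryGetValue(v, out int i)) { /* vertices[i] equals v */ }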
If you're using a triangulation lookup table based implementation of marching cubes, you could store a normal vector alongside the face in the same table entry. A video by Sebastian Lague mentions using such a table. I'm not exactly sure where he downloaded it from, but he includes it in his repo which is MIT licensed. Video, Table (EDIT: The order of a triangle's vertices alone may be sufficient to define its direction, and you may not need an explicit normal vector)
Also, heads up: old Perlin noise tends to be visibly grid-aligned. Most times I see it used, it appears to be because a library provided it or because it was mentioned in a tutorial, not because it was actually the best choice for the application. Simplex-type noises generally produce less grid-aligned results, and it's straightforward to import external noise into Unity. It looks like you might need to anyway, if your implementation depends on 3D noise. Here are the noises from my repo that use an open implementation for 3D, a tailored gradient table, and rotated evaluators that are good for terrain. There are a lot of other options out there too, though they may not have these aspects in particular. Note that the range is [-1, 1] rather than [0, 1], so if your current noise is [0, 1] you might need to do noise(...) * 0.5 + 0.5 to correct for that. Choose the 2F version if you have a lot of octaves, or the 2S version if you have one octave or are doing ridged noise.

Unity Coordinate Limitations and Their Impact

Some objects which I have placed at position (-19026.65, 58.29961, 1157), far from the origin (0,0,0), are rendering with issues. The problem is referred to as spatial jitter (SJ); you can see it in the image below. The objects render with black spots/lines, or maybe mesh flickering. (Honestly, I can't quite describe the problem; the picture should help you understand it.)
I have also tried changing the camera's near and far clipping planes, but it was useless. Why am I getting this? Maybe because my objects and camera are far away from the origin?
Remember:
I have a large environment, and some of my game objects (where the problem is) are at position (-19026.65, 58.29961, 1157). I guess the problem is that the objects and camera are very far from the origin (0,0,0). I found numerous discussions, listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
But I couldn't find what the minimum or maximum limits are for placing an object in Unity so that it renders correctly.
Since the world origin is a Vector3 (0,0,0) and coordinates are single-precision floats, the maximum position at which you can place an object is 3.402823 × 10^38. However, as you are finding, this does not necessarily mean that placing something there will ensure it works properly. Your practical limit is bound by the other performance factors in your game. If you need items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to keep working at different distances from the origin.
Unity suggests not going any further than 100,000 units away from the center; the editor will warn you if you do. If you look at today's gaming world, many games move the world around the player rather than the player around the world.
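That world-shifting idea is often called a "floating origin"; a rough sketch in Unity terms (the player and worldRoots fields are illustrative assumptions):

    using UnityEngine;

    public class FloatingOrigin : MonoBehaviour
    {
        public Transform player;
        public Transform[] worldRoots;   // top-level objects that make up the world
        public float threshold = 5000f;  // how far the player may drift from (0,0,0)

        void LateUpdate()
        {
            Vector3 offset = player.position;
            if (offset.magnitude < threshold) return;

            // Shift the whole world back so coordinates near the player stay small.
            foreach (Transform root in worldRoots)
                root.position -= offset;
            player.position = Vector3.zero;
        }
    }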
To quote Dave Newson's site (read here):
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.
With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has a floating point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.
The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.
This may not seem like a huge problem, but this issue of losing floating point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
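A quick way to see the precision loss described in the quote (C#; a float has roughly 7 significant decimal digits):

    float nearOrigin = 1.234567f;
    float farAway    = 765432.1f;

    Console.WriteLine(nearOrigin + 0.01f); // 1.244567 - the centimeter step registers
    Console.WriteLine(farAway + 0.01f);    // 765432.1 - the step is below the float's
                                           // resolution (~0.0625) here and vanishes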

Icosphere versus Cubemapped Sphere

I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering which approach to procedurally generating a planet performs best. So far I've seen the icosphere and the cubemapped sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since these are all Platonic solids they will behave similarly, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that its faces are triangles (unlike the cube's) and that there is one triangle for each octant of 3D space (unlike the icosphere and cube).
My rationale for octahedrons (and icospheres) being faster than cubes is that their faces are already triangles (whereas the cube's faces are squares). Adding detail to an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this generation, you create three new vertices whose positions need to be normalized so that the mesh stays properly inscribed in the unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table to fetch this normalization factor (the cube cannot), because the number is consistent for each iteration.
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quadtrees, since each triangle is optionally tessellated into four more triangles. This is essentially a LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on its distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
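As an illustration of the subdivision step described above, a minimal sketch (C# with Unity-style Vector3; every three entries in the list form one triangle):

    using System.Collections.Generic;
    using UnityEngine;

    static List<Vector3> Subdivide(IReadOnlyList<Vector3> tris)
    {
        var result = new List<Vector3>(tris.Count * 4);
        for (int i = 0; i < tris.Count; i += 3)
        {
            Vector3 a = tris[i], b = tris[i + 1], c = tris[i + 2];
            // Three new midpoint vertices, re-normalized onto the unit sphere.
            Vector3 ab = ((a + b) * 0.5f).normalized;
            Vector3 bc = ((b + c) * 0.5f).normalized;
            Vector3 ca = ((c + a) * 0.5f).normalized;
            // One triangle becomes four, keeping the original winding order.
            result.AddRange(new[] { a, ab, ca,  ab, b, bc,  ca, bc, c,  ab, bc, ca });
        }
        return result;
    }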

Representing a Gameworld that is Irregularly shaped

I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it, and the game world exists only on the inside of the shape. (Once again, think lake.)
How can I efficiently represent this game world? I know that many worlds are basically square and work well in a 2- or 3-dimensional array. I feel like if I use a square array, I am wasting space and increasing the time needed to iterate through it. However, I am not sure how a jagged array would work here either.
Example shape of gameworld
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes it easy to do so.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
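A rough sketch of that representation (hypothetical types, not a library API; the covered square's side length is assumed to be a power of two):

    class QuadNode
    {
        public bool? Uniform;        // true = all game squares, false = all empty,
                                     // null = mixed, so look at the children
        public QuadNode[] Children;  // [x<half & y<half, x>=half & y<half,
                                     //  x<half & y>=half, x>=half & y>=half]

        // x, y are coordinates local to this node's square of side 'size'.
        public bool Contains(int x, int y, int size)
        {
            if (Uniform.HasValue) return Uniform.Value;
            int half = size / 2;
            int index = (x >= half ? 1 : 0) + (y >= half ? 2 : 0);
            return Children[index].Contains(x >= half ? x - half : x,
                                            y >= half ? y - half : y, half);
        }
    }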
Use whatever structure you've come up with - you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's a waste of space or slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
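For example, a sketch of such a wrapper (illustrative names): callers depend on the interface, so the array-backed version can later be swapped for a quadtree or hash-based one without touching the rest of the code.

    using System.Collections.Generic;

    interface IGameWorld
    {
        bool IsValid(int x, int y);                   // inside the lake?
        IEnumerable<(int x, int y)> ValidLocations(); // step through every valid square
    }

    class ArrayGameWorld : IGameWorld
    {
        private readonly bool[,] cells;  // true = playable square

        public ArrayGameWorld(bool[,] cells) { this.cells = cells; }

        public bool IsValid(int x, int y) =>
            x >= 0 && y >= 0 &&
            x < cells.GetLength(0) && y < cells.GetLength(1) && cells[x, y];

        public IEnumerable<(int x, int y)> ValidLocations()
        {
            for (int x = 0; x < cells.GetLength(0); x++)
                for (int y = 0; y < cells.GetLength(1); y++)
                    if (cells[x, y]) yield return (x, y);
        }
    }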
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only things you are storing are valid locations, and you don't waste memory on the non-world locations, since you can deduce those from their absence in the structure.
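A sketch of that approach (C#, with coordinates as value tuples):

    using System.Collections.Generic;

    var world = new HashSet<(int x, int y)>();
    world.Add((3, 0));   // insert each valid lake square during generation

    bool walkable = world.Contains((3, 0));  // true
    bool outside  = world.Contains((0, 0));  // false - never inserted

    foreach (var (x, y) in world) { /* steps through every valid location */ }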
The easiest thing is to just use the array and mark the non-gamespace positions with some special marker value. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (though it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map, but not for one that fits well into a rectangle (or other regular shape) with few deviations.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
Another way would be to store an edge list - a line vector along each straight edge. It's easy to check for inclusion this way, and a quadtree or even a simple location hash on each vertex can speed up lookups. We did this with a height component per edge to model the walls of a baseball stadium, and it worked beautifully.
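The inclusion test is standard ray casting; a sketch (C# with System.Numerics; edges is a hypothetical list of wall segments as (start, end) pairs):

    using System.Collections.Generic;
    using System.Numerics;

    static bool Inside(Vector2 p, IReadOnlyList<(Vector2 A, Vector2 B)> edges)
    {
        bool inside = false;
        foreach (var (a, b) in edges)
        {
            // Does a horizontal ray from p toward +X cross this edge?
            if ((a.Y > p.Y) != (b.Y > p.Y))
            {
                float xCross = a.X + (p.Y - a.Y) / (b.Y - a.Y) * (b.X - a.X);
                if (p.X < xCross) inside = !inside;
            }
        }
        return inside; // an odd number of crossings means p is inside
    }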
There is a big issue that nobody here has addressed: the huge difference between storing the world on disk and storing it in memory.
Assuming you are talking about a game world, as you said, it's going to be very large. You're not going to store the whole thing in memory at once; instead, you will keep the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the other answers provide good suggestions: you can serialize a data structure such as a hash table, or a list of only the filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure, there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around for at least another several years, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential as you stream in more vicinity terrain while the player moves, and you'll probably find it makes a noticeable difference. Don't take my word for it, though; I haven't actually tested this sort of thing, it just makes sense, right?
