Apologies for the lack of example code; I'm currently in the brainstorming phase of this problem and having trouble finding a proper solution.
As stated in my title, I want to find the intersection area of two polygons.
To be more specific, I have two ARPlanes that may overlap each other on the x-z plane while sitting at different y-levels (imagine stairs with an overhang). I can easily get the boundary of each ARPlane. My first idea to simplify the process is to drop the y-component so both boundaries lie in the same plane, turning this into a 2D problem.
From here onward, I'm unsure how to proceed. I could not find any methods that calculate the intersection area of two polygons. I have a few solutions that look promising if I can get the planes aligned neatly (such that the +x direction points from the center of one plane to the other), but I cannot move them in any way, so I would have to modify what the local "forward" of a plane is. Even then, I don't think an ARPlane has a direction vector in the first place, as they are not GameObjects, so I am unsure whether this is a viable path to follow. (See the ARPlane class for quick reference.)
One other way is to rotate the planes so that they align with the world x-axis. This looks more promising than the other methods, but as I said, I cannot rotate the actual ARPlanes; I would have to make copies and rotate those while keeping their relative rotations and positions the same.
So far these are the methods I could come up with but could not develop fully due to Unity's restrictions. My question, then, is whether there is a way around these issues; failing that, whether there is an alternative approach you can recommend.
Below is an example use case of the tool. As can be seen, some stair treads have an overhang that covers a portion of the tread below (second and third figures). Each stair tread will be scanned and then processed to find its usable surface; the area covered by the overhang is not usable. This usable area is defined by the placement of a tread (A) and the next tread directly above it (B), so the usable area will be surface_area_of_A - xz_crossSection_of_AB.
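For what it's worth, here is a minimal sketch of the 2D approach, assuming both projected boundaries are convex (ARPlane boundaries generally are): project each boundary onto the x-z plane, clip one polygon against the other with Sutherland-Hodgman, and measure the result with the shoelace formula. `ClipConvex` and `Area` are hypothetical helper names, not Unity APIs:

```csharp
using System.Collections.Generic;
using UnityEngine;

static class PolygonOverlap
{
    // Sutherland-Hodgman: clip 'subject' against each edge of a convex,
    // counter-clockwise 'clip' polygon.
    public static List<Vector2> ClipConvex(List<Vector2> subject, List<Vector2> clip)
    {
        var output = new List<Vector2>(subject);
        for (int i = 0; i < clip.Count && output.Count > 0; i++)
        {
            Vector2 a = clip[i], b = clip[(i + 1) % clip.Count];
            var input = output;
            output = new List<Vector2>();
            for (int j = 0; j < input.Count; j++)
            {
                Vector2 p = input[j], q = input[(j + 1) % input.Count];
                float sp = Cross(b - a, p - a), sq = Cross(b - a, q - a);
                if (sp >= 0) output.Add(p);                  // p is inside this edge
                if (sp * sq < 0)                             // edge p-q crosses it
                    output.Add(p + (q - p) * (sp / (sp - sq)));
            }
        }
        return output;
    }

    static float Cross(Vector2 u, Vector2 v) => u.x * v.y - u.y * v.x;

    // Shoelace formula; the absolute value makes winding order irrelevant.
    public static float Area(List<Vector2> poly)
    {
        float sum = 0;
        for (int i = 0; i < poly.Count; i++)
        {
            Vector2 p = poly[i], q = poly[(i + 1) % poly.Count];
            sum += p.x * q.y - p.y * q.x;
        }
        return Mathf.Abs(sum) * 0.5f;
    }
}
```

If I recall correctly, ARPlane.boundary gives 2D vertices in plane space, so you would first transform them into world space via the plane's transform and take the (x, z) components. The usable area of tread A would then be `Area(A) - Area(ClipConvex(A, B))`.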
Some objects that I have placed at position (-19026.65, 58.29961, 1157), far from the origin (0,0,0), are rendering with issues; the problem is referred to as spatial jitter. The objects render with black spots/lines, or perhaps mesh flickering (I can't quite describe the problem; maybe the picture below will help you understand it).
I have also tried changing the camera's near and far clipping planes, but it was useless. Why am I getting this? Perhaps my objects and camera are too far from the origin.
Remember:
I have a large environment, and some of my game objects (where the problem is) are at position (-19026.65, 58.29961, 1157). My guess is that the problem is the objects and camera being very far from the origin (0,0,0). I found numerous discussions, listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
I couldn't find what the minimum or maximum limits are for placing an object in Unity so that everything works correctly.
Since the world origin is a Vector3 (0,0,0), the maximum coordinate at which you can place an object is 3.402823 × 10^38, since positions are single-precision floats. However, as you are finding, this does not mean that placing something there will ensure it works properly; your practical limit is bound by the other performance factors in your game. If you need items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to keep working at points far from the origin.
Unity's guidance is not to go further than 100,000 units from the center; the editor will warn you beyond that. If you look at today's games, many move the world around the player rather than the player around the world.
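A common way to implement "move the world around the player" is a floating-origin shift. Here is a minimal sketch; the threshold value and the root-object sweep are assumptions, not a built-in Unity feature:

```csharp
using UnityEngine;

// When the camera strays too far from (0,0,0), shift every root object
// back by the camera's offset so the camera is re-centered at the origin.
public class FloatingOrigin : MonoBehaviour
{
    public float threshold = 5000f;   // arbitrary; tune for your scene

    void LateUpdate()
    {
        Vector3 camPos = Camera.main.transform.position;
        if (camPos.magnitude < threshold) return;

        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= camPos;  // world shifts, camera returns home
    }
}
```

Note that rigidbody velocities, particle systems, and any cached world positions may need to be shifted as well; the sketch only moves transforms.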
To quote Dave Newson's site:
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.
With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has a floating point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.
The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.
This may not seem like a huge problem, but this issue of losing floating point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
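A quick standalone demonstration of that last point in C# (single precision, which is what Unity's Transform uses):

```csharp
using System;

class FloatPrecisionDemo
{
    static void Main()
    {
        float near = 1.234567f;
        float far  = 765432.1f;

        Console.WriteLine(near + 0.01f == near); // False: near the origin, a 1 cm
                                                 // step is representable
        Console.WriteLine(far  + 0.01f == far);  // True: one ULP at this magnitude
                                                 // is ~0.0625, so 0.01 rounds away
    }
}
```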
I want to slice a 3D model with an infinite plane (in WPF). I'm checking whether each edge intersects the plane; if it does, I create a new point at the intersection position. This gives me a set of points on which I want to generate a cap, so that the model is closed after slicing. For example, if this is the cross section, the result would be as follows:
Note: the particular triangulation isn't important; I just need triangles.
I also need to detect the holes, as follows (holes are marked in red):
If it is impossible to do it the way I'm thinking (and it seems to be), then how should I do it? How do developers cap an object after it has been sliced?
There is also a lot of ambiguity. For example, the first picture's points could just as well produce:
What am I missing?
EDIT:
After some research, I realized one thing I was missing:
The input is now robust, and I need the output to be exactly as robust. How do I accomplish that?
In the past, I have done this kind of thing using a BSP tree.
Sorry to be so vague, but it's not a trivial problem!
Basically, you convert your triangle mesh into the BSP representation, add your clipping plane to the BSP, and then convert it back into triangles.
As code11 already said, you have too little data to solve this; the points alone are not enough.
Instead of clipping edges to produce new points, you should clip entire triangles, which gives you new edges. This way, instead of a bunch of points, you'd have a bunch of connected edges.
In your example with holes, this single modification would give you 3 polygons, which is almost what you need; then you only have to compute the correct triangulation.
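To illustrate the triangle-clipping idea, here is a hedged sketch in C# using WPF's Media3D types: it returns the segment where one triangle crosses the plane (vertices lying exactly on the plane are ignored for brevity). Collecting these segments over all triangles gives you connected edge loops rather than loose points:

```csharp
using System.Collections.Generic;
using System.Windows.Media.Media3D;

static class SliceHelper
{
    // Returns true and the intersection segment (s0, s1) if the triangle
    // (a, b, c) crosses the plane through planePoint with normal planeNormal.
    public static bool TrianglePlaneSegment(
        Point3D a, Point3D b, Point3D c,
        Point3D planePoint, Vector3D planeNormal,
        out Point3D s0, out Point3D s1)
    {
        double da = Vector3D.DotProduct(a - planePoint, planeNormal);
        double db = Vector3D.DotProduct(b - planePoint, planeNormal);
        double dc = Vector3D.DotProduct(c - planePoint, planeNormal);

        var hits = new List<Point3D>();
        void Edge(Point3D p, double dp, Point3D q, double dq)
        {
            if (dp * dq < 0)                 // signed distances differ: edge crosses
            {
                double t = dp / (dp - dq);   // interpolation factor along the edge
                hits.Add(p + t * (q - p));
            }
        }
        Edge(a, da, b, db);
        Edge(b, db, c, dc);
        Edge(c, dc, a, da);

        if (hits.Count == 2) { s0 = hits[0]; s1 = hits[1]; return true; }
        s0 = s1 = default;
        return false;
    }
}
```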
Look up the term CSG, or Constructive Solid Geometry.
EDIT:
If generic CSG is too slow for you and you already have the clipped edges, then I'd suggest trying an 'Ear Clipping' algorithm.
Here's some description with support for holes:
https://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
You may also try a 'Sweep Line' approach:
http://sites-final.uclouvain.be/mema/Poly2Tri/
And a similar question on SO, with many ideas:
Polygon Triangulation with Holes
I hope it helps.
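In case it saves someone a click, here is a compact ear-clipping sketch for a simple polygon without holes; holes need the extra "bridge" step from the Geometric Tools paper above. Points are assumed counter-clockwise, and the O(n³) loop is fine for cross-section-sized inputs:

```csharp
using System.Collections.Generic;
using System.Windows;

static class EarClipper
{
    public static List<(int, int, int)> Triangulate(IList<Point> poly)
    {
        var idx = new List<int>();
        for (int i = 0; i < poly.Count; i++) idx.Add(i);
        var tris = new List<(int, int, int)>();

        double Cross(Point o, Point a, Point b) =>
            (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

        bool Inside(Point p, Point a, Point b, Point c) =>
            Cross(a, b, p) >= 0 && Cross(b, c, p) >= 0 && Cross(c, a, p) >= 0;

        while (idx.Count > 3)
        {
            bool clipped = false;
            for (int i = 0; i < idx.Count; i++)
            {
                int i0 = idx[(i + idx.Count - 1) % idx.Count];
                int i1 = idx[i];
                int i2 = idx[(i + 1) % idx.Count];
                Point a = poly[i0], b = poly[i1], c = poly[i2];

                if (Cross(a, b, c) <= 0) continue;   // reflex vertex: not an ear

                bool isEar = true;                   // ear if no other vertex inside
                foreach (int j in idx)
                    if (j != i0 && j != i1 && j != i2 && Inside(poly[j], a, b, c))
                    { isEar = false; break; }

                if (isEar)
                {
                    tris.Add((i0, i1, i2));
                    idx.RemoveAt(i);
                    clipped = true;
                    break;
                }
            }
            if (!clipped) break;                     // guard against degenerate input
        }
        if (idx.Count == 3) tris.Add((idx[0], idx[1], idx[2]));
        return tris;
    }
}
```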
Building off of what zwcloud said, your point representation is ambiguous: you simply don't have enough points to determine where any concavities/notches actually are.
However, if you can solve that by obtaining additional points (midpoints of the segments, I think), you just need to throw the points into a shrinkwrap algorithm. Then at least you will have a cap.
The holes are a bit trickier. Perhaps you can get away with looking at the points excluded from the output of the shrinkwrap calculation and trying to find additional shapes among them, heuristically favoring points located near the centroid of your newly created polygon.
Additional thought: if you can limit yourself to convex polygons with only one similarly convex hole, the problem becomes much easier to solve.
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them, creating the land masses, mountains and water depressions you see below; there are ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the user is pointing at in order to relay this information. I am looking for a way to identify that vertex. The current solution is to generate a cube at each vertex and then use a collider/ray to identify it; the two white cubes in the picture above are for testing.
What I want to know is whether there is a better way to identify the vertex without generating cubes.
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, it's totally commonplace to need the nearest vertex.
Very simply, you just find the nearest one, i.e. loop over them all and keep the closest.
(It's incredibly fast to do this; you have only a tiny number of verts, so the cost won't even be measurable.)
(Consider that, of course, you could break the object into, say, 8 pieces - but that's something you often have to do anyway, for example with a race track, so it can occlude properly.)
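A minimal sketch of the triangleIndex approach (Unity C#; note that triangleIndex is only valid when the hit collider is a MeshCollider):

```csharp
using UnityEngine;

public class VertexPicker : MonoBehaviour
{
    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        var meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null) return;   // triangleIndex needs a MeshCollider

        Mesh mesh = meshCollider.sharedMesh;
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        // Only the hit triangle's three vertices need checking.
        int nearest = -1;
        float best = float.MaxValue;
        for (int i = 0; i < 3; i++)
        {
            int v = tris[hit.triangleIndex * 3 + i];
            Vector3 world = hit.transform.TransformPoint(verts[v]);
            float d = (world - hit.point).sqrMagnitude;
            if (d < best) { best = d; nearest = v; }
        }
        // 'nearest' is now the index of the vertex being pointed at.
    }
}
```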
I have a List of 2D points. What's an efficient way of iterating through the points to determine whether they lie on a straight line or a curve (and to what degree)? I'd like to avoid simply taking slopes between smaller subsets. How would I go about doing this?
Thanks for any help.
Edit: Thanks for the responses. To clarify, I don't need it to be numerically accurate; I'd like to determine whether the user has created a curved shape with their mouse and, if so, how sharp the curve is. The exact values are not important, as long as it's possible to tell the difference between a sharp curve and a slightly softer one.
If you simply want to know whether all your points fit more or less on a curve of degree d, apply Lagrange interpolation to the endpoints and d-1 equally spaced points from inside your array (d+1 samples in total). This gives you a polynomial of degree d.
Once you have your curve, iterate over the array and see how far each point is from the curve. If any point is farther than a threshold, your data doesn't fit a degree-d polynomial.
Edit: I should mention that iterating through values of d is a finite process. Once d reaches one less than the number of points you have, you'll get a perfect fit because of how Lagrange interpolation works.
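A hedged sketch of that fit test in C# (assumes the points are sorted by X with distinct X values, and that there are more than d of them):

```csharp
using System;
using System.Collections.Generic;

static class CurveFit
{
    // Evaluate the Lagrange polynomial through 'samples' at x.
    static double Lagrange(IList<(double X, double Y)> samples, double x)
    {
        double y = 0;
        for (int i = 0; i < samples.Count; i++)
        {
            double term = samples[i].Y;
            for (int j = 0; j < samples.Count; j++)
                if (j != i)
                    term *= (x - samples[j].X) / (samples[i].X - samples[j].X);
            y += term;
        }
        return y;
    }

    // True if every point lies within 'threshold' of the degree-d interpolant
    // built from the endpoints plus d-1 equally spaced interior points.
    public static bool FitsDegree(IList<(double X, double Y)> points,
                                  int d, double threshold)
    {
        var samples = new List<(double X, double Y)>();
        for (int k = 0; k <= d; k++)                     // d+1 samples
            samples.Add(points[k * (points.Count - 1) / d]);

        foreach (var p in points)
            if (Math.Abs(Lagrange(samples, p.X) - p.Y) > threshold)
                return false;
        return true;
    }
}
```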
To test whether it's a straight line, compute the correlation coefficient. I'm sure that's covered on Wikipedia.
Testing whether it's curved is more involved: you need to know what kinds of curves you expect and fit against those.
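For reference, a minimal Pearson correlation sketch; |r| near 1 means the points hug a straight line (a perfectly vertical line makes the denominator zero and needs a separate check):

```csharp
using System;
using System.Collections.Generic;

static class Straightness
{
    public static double Pearson(IList<(double X, double Y)> pts)
    {
        double n = pts.Count, sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        foreach (var (x, y) in pts)
        {
            sx += x; sy += y;
            sxx += x * x; syy += y * y; sxy += x * y;
        }
        double cov = sxy - sx * sy / n;    // scaled covariance
        double vx = sxx - sx * sx / n;     // scaled variance of x
        double vy = syy - sy * sy / n;     // scaled variance of y
        return cov / Math.Sqrt(vx * vy);
    }
}
```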
Here is a method to calculate the angle: Calculate Angle between 2 points using C#
Simply calculate the angle between each consecutive pair of points in your list and build a list of those angles, then compare the values. If they are all the same, it's a straight line; otherwise it's a curve.
If it's a straight line, the angle between all consecutive points has to be the same.
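A small sketch of this heading test (the tolerance parameter is an assumption; raw mouse input is noisy, so exact equality won't work):

```csharp
using System;
using System.Collections.Generic;

static class HeadingTest
{
    // True if every consecutive segment points the same way, within tolRadians.
    public static bool IsStraight(IList<(double X, double Y)> pts, double tolRadians)
    {
        double first = Math.Atan2(pts[1].Y - pts[0].Y, pts[1].X - pts[0].X);
        for (int i = 1; i < pts.Count - 1; i++)
        {
            double a = Math.Atan2(pts[i + 1].Y - pts[i].Y,
                                  pts[i + 1].X - pts[i].X);
            double diff = Math.Abs(a - first);
            if (diff > Math.PI) diff = 2 * Math.PI - diff;   // handle wrap-around
            if (diff > tolRadians) return false;
        }
        return true;
    }
}
```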
The question is really hazy here: "I'd like to avoid simply getting slopes between smaller subsets."
You probably want interpolation à la B-splines. They use two points and two extra control points, if memory serves. Implementations have been ubiquitous since way back (at least the 1980s). This should get you underway.
Remember that you'll probably need to add control points to make the curve meet the endpoints. One trick to make sure the endpoints are reached is to simply duplicate them as extra control points.
Cheers
Update: added a link to CodeProject. It would appear that what I remember from back in the '80s could have been Bézier curves - a predecessor of sorts.
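To make the B-spline suggestion concrete, here is a sketch that evaluates one segment of a uniform cubic B-spline from four control points; sliding a window of four points along your list traces the whole curve, and duplicating an endpoint makes the curve reach it, as noted above:

```csharp
static class BSpline
{
    // Uniform cubic B-spline basis, one segment, t in [0, 1].
    public static (double X, double Y) Segment(
        (double X, double Y) p0, (double X, double Y) p1,
        (double X, double Y) p2, (double X, double Y) p3, double t)
    {
        double t2 = t * t, t3 = t2 * t;
        double b0 = (1 - 3 * t + 3 * t2 - t3) / 6.0;   // (1-t)^3 / 6
        double b1 = (4 - 6 * t2 + 3 * t3) / 6.0;
        double b2 = (1 + 3 * t + 3 * t2 - 3 * t3) / 6.0;
        double b3 = t3 / 6.0;
        return (b0 * p0.X + b1 * p1.X + b2 * p2.X + b3 * p3.X,
                b0 * p0.Y + b1 * p1.Y + b2 * p2.Y + b3 * p3.Y);
    }
}
```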
I am working on a project with a robot that has to find its way to an object it must pick up, avoiding obstacles on the way.
The problem is that the robot and the object it needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. The A* pathfinder often places the route along the edges of the obstacles, sometimes making the robot collide with them, which we want to avoid.
I have tried adding more non-walkable fields around the obstacles, but it does not always work out very well: the robot still collides with the obstacles, and adding too many non-walkable points can leave no path at all.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L suggested and added a lot of cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the middle two obstacles should look just like the ones in the corners, but after running our pathfinder it seems the costs are overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Picture that shows things found on the map http://sogaard.us/uploades/2_complete_map.png
The picture above labels what is found on the map.
Path found http://sogaard.us/uploades/3_path.png
This is the path that was found, which, as was our problem before, is hugging the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture of the cost map with the path overlaid.
So what I find strange is that the A* pathfinder seems to override these field costs, which are VERY high.
Could it happen when the pathfinder evaluates the nodes in the open list against the current field, checking whether the current field's path is shorter than the one already in the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Perhaps a simple hyperbolic decay
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
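All three decay options side by side, as a hedged sketch (the constants are placeholders to tune):

```csharp
using System;

static class ObstacleCost
{
    // Linear gradient: C = -mx + b, clamped to 0.
    public static double Linear(double x, double b, double m)
        => Math.Max(0, b - m * x);

    // Hyperbolic decay: C = b / x, cut off once it gets small.
    public static double Hyperbolic(double x, double b, double cutoff)
    {
        if (x <= 0) return b;           // on/inside the boundary
        double c = b / x;
        return c < cutoff ? 0 : c;
    }

    // Exponential decay: C = k * e^(-h x), with the same cut-off idea.
    public static double Exponential(double x, double k, double h, double cutoff)
    {
        double c = k * Math.Exp(-h * x);
        return c < cutoff ? 0 : c;
    }
}
```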
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your tiles: maybe one tile per ten-by-ten or twenty-by-twenty block of pixels, with the tile's cost being the highest cost of any pixel in it.
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
You might try enlarging the obstacles to take the size of the robot into account, rounding the corners of the enlarged obstacles to address the blocking problem. Then any gaps that get filled in are too small for the robot to squeeze through anyway.
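A minimal sketch of that enlargement on a boolean grid: dilate each blocked cell by the robot's radius (in cells), using a circular neighborhood so the corners come out rounded:

```csharp
static class GridDilation
{
    public static bool[,] Dilate(bool[,] blocked, int radius)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        var result = new bool[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                if (!blocked[x, y]) continue;
                for (int dx = -radius; dx <= radius; dx++)
                    for (int dy = -radius; dy <= radius; dy++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                            dx * dx + dy * dy <= radius * radius)  // circular: rounds corners
                            result[nx, ny] = true;
                    }
            }
        return result;
    }
}
```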
I've built one such physical robot. My solution was to move one step backward whenever there was a left or right turn to make.
The red line shows the problem as I understand it; the black line is what I did to resolve the issue. The robot moves straight backward for a step, then turns right.