I have a list of line segments in no particular order.
I want to find all enclosed spaces (polygons) formed by the segments. Is there an efficient algorithm or method that I could use to do this?
The following image illustrates the problem. How can I detect the green polygons, given the black line segments?
One way would be to build a graph as follows:
the nodes are the intersection points of the segments
there's an edge between nodes i and j iff points i and j lie on the same segment
Once you've built the graph:
Run the Connected Components Algorithm on it, and check for connected components of size > 2
Run a convex hull algorithm on the intersection points within such a component
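Here is a minimal, unoptimized sketch of that pipeline in Python (brute-force pairwise intersections; networkx and scipy are one convenient choice, and the toy segment list and coordinate rounding are illustrative assumptions):

import itertools
from collections import defaultdict
import networkx as nx
from scipy.spatial import ConvexHull

def intersect(p1, p2, p3, p4):
    # intersection point of segments p1-p2 and p3-p4, or None
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel or collinear: ignored in this sketch
    bx, by = p3[0] - p1[0], p3[1] - p1[1]
    t = (bx * d2[1] - by * d2[0]) / denom
    u = (bx * d1[1] - by * d1[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        # round so the same point hashes identically as a graph node
        return (round(p1[0] + t * d1[0], 9), round(p1[1] + t * d1[1], 9))
    return None

segments = [((0, 0), (4, 0)), ((4, 0), (2, 3)), ((2, 3), (0, 0))]  # toy input

points_on = defaultdict(list)  # segment index -> intersection points on it
for (i, s), (j, r) in itertools.combinations(enumerate(segments), 2):
    p = intersect(*s, *r)
    if p is not None:
        points_on[i].append(p)
        points_on[j].append(p)

G = nx.Graph()
for pts in points_on.values():
    G.add_edges_from(itertools.combinations(pts, 2))  # same segment -> edge

for comp in nx.connected_components(G):
    if len(comp) > 2:
        nodes = list(comp)
        hull = ConvexHull(nodes)
        print("polygon:", [nodes[i] for i in hull.vertices])

Note that the convex hull step only recovers convex faces; for the general case, the arrangement-based answer below is the more principled route.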
Edit: modified from the original due to an excellent point by FooBar.
This is indeed a classical problem in computational geometry.
You are looking for the faces of an arrangement of your segments.
For a discussion of this topic with (too many) details, and source code, there is this wonderful library: CGAL.
Note that you'll have to decide what you do for e.g. a polygon inside another, many polygons intertwined, etc.
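If CGAL is heavier machinery than you need, the same "faces of an arrangement" idea is available in Python through shapely: polygonize recovers the bounded faces once the linework has been noded at its intersections. A small sketch (the input segments are invented for illustration):

from shapely.geometry import LineString
from shapely.ops import unary_union, polygonize

segments = [
    LineString([(0, 0), (4, 0)]),
    LineString([(4, 0), (4, 4)]),
    LineString([(4, 4), (0, 4)]),
    LineString([(0, 4), (0, 0)]),
    LineString([(0, 2), (4, 2)]),  # a segment cutting the square in two
]

noded = unary_union(segments)    # splits the segments at every intersection
faces = list(polygonize(noded))  # the bounded faces of the arrangement

for f in faces:
    print(f.area, list(f.exterior.coords))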
Ami's answer is a good pointer in the correct direction, but here are the more detailed steps you might need to know about:
Take the list of line segments and build a collection of vertices. Since checking each individual segment for intersections with each other segment is basically an O(n²) operation when done via brute force, you might want to build a quad-tree and use that to reduce the number of checks you are performing. If n is small or you have a lot of CPU time to burn, just brute-force it; otherwise you need to be clever about your collision detection. Here is a library that implements quad-tree collision detection: https://github.com/hroger1030/SpatialTrees
With your list of nodes and edges, you have the pieces to build a graph. You can call your lines the nodes and the intersections the edges, or vice versa; it doesn't really matter. The important thing is that you can now just go find all the cycles in the graph with more than 2 nodes.
It just so happens that I wrote an implementation of Tarjan cycle detection in C#: Tarjan cycle detection help C#. This might be an alternative to the suggested Connected Components Algorithm.
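One caveat: Tarjan's algorithm targets directed graphs; for an undirected intersection graph, a cycle basis may be the closer fit. A quick sketch with networkx (the edge list is made up):

import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])  # one triangle plus a tail

# each basis cycle is a list of nodes; every cycle here has at least 3 nodes
for cycle in nx.cycle_basis(G):
    print(cycle)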
I want to slice a 3D model with an infinite plane (in WPF). I'm checking whether edges intersect the infinite plane; if so, I create a new point at the intersection position, so I get a set of points that I want to generate a cap over, so that the model is closed after slicing. For example, if this is the cross section, the result would be as follows:
Note: the triangulation itself isn't important; I just need triangles.
I also need to detect the holes, as follows (holes are marked in red):
If it is impossible to do it the way I'm thinking (it seems to be so), then how should I do it? How do developers cap an object after it has been sliced?
There is also a lot of ambiguity. For example, the first picture's result might instead be:
What am I missing?
EDIT:
After some research, I realized one thing I was missing:
The input is now robust, and I need the exact same output. How do I accomplish that?
In the past, I have done this kind of thing using a BSP.
Sorry to be so vague, but it's not a trivial problem!
Basically you convert your triangle mesh into the BSP representation, add your clipping plane to the BSP, and then convert it back into triangles.
As code11 already said, you have too little data to solve this; the points alone are not enough.
Instead of clipping edges to produce new points, you should clip entire triangles, which would give you new edges. This way, instead of a bunch of points, you'd have a bunch of connected edges.
In your example with holes, this single modification would give you 3 polygons, which is almost what you need. Then you only need to compute the correct triangulation.
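To make the triangle-clipping step concrete, here is a rough numpy sketch (the plane is given by a point and a normal; the function name and toy triangle are my own): each triangle that straddles the plane contributes one edge to the cross section.

import numpy as np

def section_edge(tri, plane_point, plane_normal):
    # returns the edge (two points) where the plane cuts the triangle, or []
    d = [np.dot(v - plane_point, plane_normal) for v in tri]  # signed distances
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = d[i], d[(i + 1) % 3]
        if da * db < 0:  # this triangle edge straddles the plane
            t = da / (da - db)
            pts.append(a + t * (b - a))
    # vertices lying exactly on the plane need extra care in real code
    return pts

tri = [np.array(p, float) for p in [(0, 0, 0), (2, 0, 0), (0, 2, 2)]]
print(section_edge(tri, np.array([0, 0, 1.0]), np.array([0, 0, 1.0])))

Chaining these edges end-to-end then yields the closed loops (the outer boundary and the holes) that you triangulate for the cap.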
Look up the term CSG, or Constructive Solid Geometry.
EDIT:
If generic CSG is too slow for you and you already have clipped edges, then I'd suggest trying an 'Ear Clipping' algorithm.
Here's some description with support for holes:
https://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf
You may also try a 'Sweep Line' approach:
http://sites-final.uclouvain.be/mema/Poly2Tri/
And a similar question on SO, with many ideas:
Polygon Triangulation with Holes
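For reference, a minimal ear-clipping sketch in Python for a simple polygon without holes (handling holes needs the bridge-edge construction described in the PDF above; counter-clockwise vertex order is assumed):

def ear_clip(poly):
    # triangulate a simple CCW polygon given as a list of (x, y) tuples
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def inside(p, a, b, c):  # p strictly inside CCW triangle abc
        return cross(a, b, p) > 0 and cross(b, c, p) > 0 and cross(c, a, p) > 0

    verts = list(poly)
    triangles = []
    while len(verts) > 3:
        for i in range(len(verts)):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % len(verts)]
            if cross(a, b, c) <= 0:
                continue  # reflex corner, cannot be an ear
            if any(inside(p, a, b, c) for p in verts if p not in (a, b, c)):
                continue  # another vertex pokes into this ear
            triangles.append((a, b, c))
            del verts[i]  # clip the ear and start over
            break
        else:
            break  # should not happen for valid simple polygons
    triangles.append(tuple(verts))
    return triangles

# a square with a notch in the top edge, counter-clockwise
print(ear_clip([(0, 0), (4, 0), (4, 4), (2, 3), (0, 4)]))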
I hope it helps.
Building off of what zwcloud said, your point representation is ambiguous. You simply don't have enough points to determine where any concavities/notches actually are.
However, if you can solve that by obtaining additional points (you need midpoints of segments I think), you just need to throw the points into a shrinkwrap algorithm. Then at least you will have a cap.
The holes are a bit more tricky. Perhaps you can get away with just looking at the excluded points from the output of the shrinkwrap calculation and trying to find additional shapes in that, heuristically favoring points located near the centroid of your newly created polygon.
Additional thought: If you can limit yourself to convex polygons with only one similarly convex hole, the problem will be much easier to solve.
I am trying to implement a pathfinding algorithm, but I think I'm running into terminology issues, in that I'm not quite sure how to explain what I need the algorithm to do.
I have a regular grid of nodes, and I am trying to find all nodes within a certain "Manhattan Distance".
Finding the nodes within, say, 5, is simple enough.
But I am interested in a "Weighted Manhattan Distance", where certain squares "cost" twice as much (or more) to enter. For instance, if orange squares cost 2 to enter, and purple squares cost 10, the graph I'm interested in looks like this:
Firstly, is there a term for this? It's hard to look up info on things when you're not entirely sure what they're called in the first place.
Secondly, how can I calculate which nodes fall within my parameters? I'm not looking for a full solution, necessarily, just some hints to get started; when I realized my implementation would require three Dictionaries, I began to think there might be an easier way of handling things.
For terminology, you're basically asking for all points within a certain distance on an arbitrary (positive) weighted graph. The use of differing weights means it no longer corresponds to a specific metric such as Manhattan distance.
As for algorithms, Dijkstra's algorithm is probably what you want. The basic idea is to maintain the minimum cost to each square that you've found so far, and a priority queue of the best squares to explore next.
Unlike traditional Dijkstra's, where you keep going until you've found the minimal path to every square, you'll want to stop adding nodes to the queue once the distance to them is too long. When you're done, you'll have a list of all squares whose shortest path from the starting square is at most x, which sounds like what you want.
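A rough sketch of that bounded Dijkstra in Python (the grid encoding, the cost-to-enter convention, and 4-way movement are assumptions for illustration):

import heapq

def squares_within(costs, start, budget):
    # all cells whose cheapest path from start (sum of entry costs) is <= budget
    rows, cols = len(costs), len(costs[0])
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if d > best[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr][nc]  # cost to *enter* the neighbour
                if nd <= budget and nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return best

# 1 = normal, 2 = orange, 10 = purple (values assumed for illustration)
grid = [[1, 1, 2],
        [1, 10, 2],
        [1, 1, 1]]
print(squares_within(grid, (0, 0), 5))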
Eric Lippert provides an excellent blog series on writing an A* pathfinding algorithm in C# here:
Part 1: http://blogs.msdn.com/b/ericlippert/archive/2007/10/02/path-finding-using-a-in-c-3-0.aspx
Part 2: http://blogs.msdn.com/b/ericlippert/archive/2007/10/04/path-finding-using-a-in-c-3-0-part-two.aspx
Part 3: http://blogs.msdn.com/b/ericlippert/archive/2007/10/08/path-finding-using-a-in-c-3-0-part-three.aspx
Part 4: http://blogs.msdn.com/b/ericlippert/archive/2007/10/10/path-finding-using-a-in-c-3-0-part-four.aspx
You're probably best off going with Dijkstra's algorithm on a weighted graph, as described here:
http://www.csl.mtu.edu/cs2321/www/newLectures/29_Weighted_Graphs_and_Dijkstra's_Algorithm.html
(There is algorithm description near the middle of the page.)
Manhattan distance in your case probably just means you don't want the diagonal paths in the graph.
I was wondering if anyone has knowledge of implementing pathfinding using scent: the 'enemy' moves toward whichever of the surrounding nodes has the strongest scent.
Thanks
Yes, I did my university final project on the subject.
One of the applications of this idea is for finding the shortest path.
The idea is that the 'scent', as you put it, will decay over time. But the shortest path between two points will have the strongest scent.
Have a look at this paper.
What did you want to know, exactly?
Not quite clear what the question is in particular, but this just seems like another way of describing ant colony optimization:
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
Well, think about it for a minute.
My idea would be to divide the game field into sections of 32x32 (or whatever size your character is). Then run some checks every x seconds (so if they stay still, the tiles around them will accumulate more 'scent') to figure out how strong the scent is on any given tile. Some examples might be: 1) if the player crosses over the tile, add 3; 2) if the player crossed over an adjacent tile, add 1.
Then add things like degradation over time: reduce every tile by 1 every x seconds until it hits zero.
The next thing you will need to worry about is using AI to track this path. I would recommend just putting the AI somewhere and telling it to find a node with a scent, then go to an adjacent node with a scent value that is higher than or equal to the current one. Also worry about crossing off paths already taken: if the player goes up a path and then back down it in another direction, make sure the AI doesn't just follow the looped-back trail.
The last thing to look at with the AI would be to add a bit of error: make the AI take the wrong path every once in a while, or lose the trail a little more easily.
Those are the key points, I'm sure you can come up with some more, with some more brainstorming.
Every game update (or some other, less frequent time frame), increase the scent value of nodes near to where the target objects (red blobs) are.
Decrease all node scent values by some fall-off amount, down to a minimum of zero.
In the yellow blob's think/move function get available nodes to move to. Move towards the node with the highest scent value.
Depending on the number of nodes, the 'decrease all node scent values' step could do with optimisation, e.g. by maintaining a list of non-zero nodes to be decreased.
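A tiny sketch of those steps (the grid size, deposit and decay amounts are arbitrary):

def update_scent(scent, targets, deposit=3, decay=1):
    # one tick: decay every node toward zero, then re-mark the targets
    for row in scent:
        for c in range(len(row)):
            row[c] = max(0, row[c] - decay)
    for r, c in targets:
        scent[r][c] += deposit

def chase_step(scent, pos):
    # move to the neighbouring node with the highest scent value
    r, c = pos
    options = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= r + dr < len(scent) and 0 <= c + dc < len(scent[0])]
    return max(options, key=lambda p: scent[p[0]][p[1]])

scent = [[0] * 5 for _ in range(5)]
update_scent(scent, targets=[(2, 3)])
print(chase_step(scent, (2, 2)))  # -> (2, 3), the smelliest neighbour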
I see a big contradiction between the scent model and pathfinding. For a hunter in nature, following a scent means finding exactly the path taken by the subject being followed, whereas in games pathfinding means finding the fastest path between two points. They are not the same thing.
1. When modelling scent, you compute the concentration at a point as the SUM of the surrounding concentrations, multiplied by various factors. When searching for the fastest path from a point, you take the MINIMUM of the times computed for the surrounding points, multiplied by various parameters.
2. Modelling scent requires a recursive model: scent spreads in all directions, including backward. In pathfinding, once you have found the shortest paths for the points surrounding the target, they won't change.
3. The level of scent can rise and fall. In pathfinding, while searching for a minimum, the result can never rise.
So the scent model is really much more complicated than what your goal requires. Of course, what I have said is true only for the standard situation; you may have something very special...
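To make the SUM-versus-MINIMUM contrast concrete, here is a toy sketch (the three-node graph and the factor k are invented for illustration):

INF = float("inf")

def diffuse(scent, neighbours, k=0.25):
    # scent model: each node becomes a weighted SUM of its neighbours;
    # values spread in every direction and can rise and fall over time
    return {p: sum(k * scent[q] for q in neighbours[p]) for p in scent}

def relax(dist, neighbours, cost=1):
    # pathfinding: each node takes the MINIMUM over its neighbours;
    # once a shortest distance is found, it never increases
    return {p: min([dist[p]] + [dist[q] + cost for q in neighbours[p]])
            for p in dist}

neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scent = {"a": 8.0, "b": 0.0, "c": 0.0}
dist = {"a": 0, "b": INF, "c": INF}
for _ in range(3):
    scent = diffuse(scent, neighbours)
    dist = relax(dist, neighbours)
print(scent, dist)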
I have a List of 2D points. What's an efficient way of iterating through the points in order to determine whether the list of points are in a straight line, or curved (and to what degree). I'd like to avoid simply getting slopes between smaller subsets. How would I go about doing this?
Thanks for any help
Edit: Thanks for the response. To clarify, I don't need it to be numerically accurate, but I'd like to determine if the user has created a curved shape with their mouse and, if so, how sharp the curve is. The values are not too important, as long as it's possible to determine the difference between a sharp curve and a slightly softer one.
If you simply want to know whether all your points fit more or less on a curve of degree d, apply Lagrange interpolation to the two endpoints and d-1 equally spaced points from inside your array (a degree-d polynomial is determined by d+1 points). This will give you a polynomial of degree d.
Once you have your curve, simply iterate over the array and see how far away from the curve each point is. If they're farther than a threshold, your data doesn't fit your degree d polynomial.
Edit: I should mention that iterating through values of d is a finite process. Once d reaches the number of points you have, you'll get a perfect fit because of how Lagrange interpolation works.
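A rough sketch of that procedure with numpy and scipy (the tolerance and the choice of sample indices are arbitrary):

import numpy as np
from scipy.interpolate import lagrange

def fits_degree(points, d, tol=1e-3):
    # True if every point lies within tol of the degree-d Lagrange interpolant
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # the two endpoints plus d-1 interior samples -> d+1 interpolation nodes
    idx = np.unique(np.linspace(0, len(pts) - 1, d + 1).round().astype(int))
    poly = lagrange(x[idx], y[idx])
    return float(np.abs(poly(x) - y).max()) <= tol

pts = [(t, t * t) for t in range(6)]             # points on a parabola
print(fits_degree(pts, 1), fits_degree(pts, 2))  # False, True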
To test if it's a straight line, compute the correlation coefficient. I'm sure that's covered on wikipedia.
To test if it's curved is more involved. You need to know what kind of curves you expect, and fit against those.
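For the straight-line test, that's a couple of lines with numpy (caveat: the coefficient is undefined for perfectly vertical strokes, where x has zero variance):

import numpy as np

pts = np.array([(0, 1), (1, 3.1), (2, 4.9), (3, 7)])  # roughly y = 2x + 1
x, y = pts[:, 0], pts[:, 1]
r = np.corrcoef(x, y)[0, 1]
print("straight?", abs(r) > 0.999)  # |r| near 1 -> nearly collinear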
Here is a method to calculate angle: Calculate Angle between 2 points using C#
Simply calculate the angle between each consecutive pair of points in your list and build a list of angles, then check whether the values in that list differ. If they don't differ, it's a straight line; otherwise it's a curve, and the spread of the angles tells you how sharp it is.
For a straight line, the angle between all consecutive points has to be the same.
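A minimal version of that check (the threshold is arbitrary; note that headings wrap around at ±π, so strokes heading left need extra care):

import math

def headings(points):
    # direction of travel between each consecutive pair of points
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

pts = [(0, 0), (1, 1), (2, 2), (3, 3)]
angles = headings(pts)
print("straight?", max(angles) - min(angles) < 1e-9)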
The question is really hazy here: "I'd like to avoid simply getting slopes between smaller subsets."
You probably want interpolation à la B-splines. They use two points and two extra control points, if memory serves. Implementations have been ubiquitous since way back (at least the 1980s). This should get you underway.
Remember that you'll probably need to add control points to make the curve meet the endpoints. One trick to make sure those are reached is simply to duplicate the endpoints as extra control points.
Cheers
Update: Added link to CodeProject. It would appear that what I remember from back in the '80s could have been Bézier curves, a predecessor of sorts.
Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bce part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps.
1. Build a list of triangles that are above the elevation threshold.
2. Take the union of these triangles to make a polygonal area.
3. Determine the boundary of the polygon.
4. If necessary, smooth the polygon boundary to make your layer look OK when displayed.
If you're trying to generate good-looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about the triangle. If some vertices are above and others below, split the triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation), and include the one or two resulting triangles that lie above the line in your highland list.
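Here is a sketch of step 1 in Python (vertices carry (x, y, elevation); the function name and toy triangle are my own):

def clip_above(tri, z):
    # split one triangle at elevation z; return the part(s) at or above z
    above = [v for v in tri if v[2] >= z]
    below = [v for v in tri if v[2] < z]

    def cut(a, b):  # point on edge a-b exactly at elevation z
        t = (z - a[2]) / (b[2] - a[2])
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

    if len(above) == 3:
        return [tuple(tri)]           # wholly above: keep as is
    if not above:
        return []                     # wholly below: discard
    if len(above) == 1:               # a lone peak: one clipped triangle
        a, (b, c) = above[0], below
        return [(a, cut(a, b), cut(a, c))]
    (a, b), c = above, below[0]       # a quad remains: split into two triangles
    m1, m2 = cut(b, c), cut(a, c)
    return [(a, b, m1), (a, m1, m2)]

tri = [(0, 0, 100), (10, 0, 300), (0, 10, 300)]
print(clip_above(tri, 200))  # trims off the low corner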
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay algorithm (which you can look up) to organize your points into triangles, then follow the same algorithm I described above. Warning: this is going to look kind of sketchy if you don't have many points.
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays), you should be able to use array-querying techniques to select all of the points where the elevation is above a certain threshold. For example, in Python with numpy you can do:
import numpy as np
indices = np.where(array > value)
And the indices variable will contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example IDL has the WHERE() command, and similar things can be done in Matlab).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/lon/elevation array and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares where the max height is not above the search height, and drilling further into those squares where parts of the ground were above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter.)
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform lat/lon divisions (bearing in mind that this does not give uniform distance divisions). If that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then, when the aircraft needs data about points at or above an altitude for a specific piece of terrain, the code only needs to read out a small part of this array, which is indexed so that contiguous "voxels" are contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the longitude indices would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
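For illustration, a numpy sketch of that layout (the shape and indices are invented); the query for "everything at or above band k over one tile" is a single cheap slice:

import numpy as np

# axis 0 = altitude band, axis 1 = latitude index, axis 2 = longitude index
cube = np.zeros((50, 360, 720), dtype=np.float32)

k = 10                  # lowest altitude band of interest
lat0, lat1 = 100, 140   # terrain tile bounds (made-up indices)
lon0, lon1 = 300, 360

tile = cube[k:, lat0:lat1, lon0:lon1]  # a view, no copying
print(tile.shape)                      # (40, 40, 60)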
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
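That sorted-array approach is only a few lines with the standard library (the (lat, lon, elev) tuple layout is assumed):

import bisect

points = [(40.1, -105.2, 1655.0), (40.2, -105.3, 2200.0), (40.3, -105.1, 1400.0)]
by_elev = sorted(points, key=lambda p: p[2])       # sort once: O(n log n)
elevations = [p[2] for p in by_elev]

def points_at_or_above(threshold):
    i = bisect.bisect_left(elevations, threshold)  # binary search: O(log n)
    return by_elev[i:]

print(points_at_or_above(1500.0))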
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.