Algorithm to generate equally distributed points in a polygon - C#

I am looking for an algorithm to generate equally distributed points inside a polygon.
Here is the scenario:
I have a polygon specified by the coordinates of the points at the corners (x, y) for each point. And I have the number of points to generate inside the polygon.
For example, let's say I have a polygon with 5 corners: (1, 1); (1, 2); (2, 3); (3, 2); and (3, 1).
And I need to generate 20 equally spaced points inside that polygon.
Note: Some polygons may not support perfectly equal spacing, but I'm looking to distribute the points so that they cover the whole region of the polygon as consistently as possible (what I mean is, I don't want one part with a lot more points than another).
Is there an algorithm to do so? Or maybe a library?
I am working on a C# application, but any language is ok, since I only need the algorithm and I can translate it.
Thanks a lot for any help

The simple approach I use is:
Triangulate the polygon. Ear clipping is entirely adequate, as all you need is a dissection of the polygon into a set of non-overlapping triangles.
Compute the area of each triangle. Sample from each triangle proportionally to the area of that triangle relative to the whole. This costs only a single uniform random number per sample.
Once a point is determined to have come from a given triangle, sample uniformly over the triangle. This is itself easier than you might think.
So really it all comes down to how do you sample within a triangle. This is easily enough done. A triangle is defined by 3 vertices. I'll call them P1, P2, P3.
Pick ANY edge of the triangle. Generate a point (P4) that lies uniformly along that edge. Thus if P1 and P2 are the coordinates of the corresponding end points, then P4 will be a uniformly sampled point along that edge, if r has uniform distribution on the interval [0,1].
P4 = (1-r)*P1 + r*P2
Next, sample along the line segment between P3 and P4, but do so non-uniformly. If s is a uniform random number on the interval [0,1], then
P5 = (1-sqrt(s))*P3 + sqrt(s)*P4
r and s are independent pseudo-random numbers of course. Then P5 will be randomly sampled, uniform over the triangle.
The nice thing is that it needs no rejection scheme, so long, thin polygons are not a problem. For each sample the cost is only the three random numbers needed per point (one to pick the triangle, plus r and s). Since ear clipping is simple and efficient, the sampling will be efficient too, even for nasty-looking or non-convex polygons.
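If it helps, here is a minimal C# sketch of the sampling part (everything after the triangulation), assuming the polygon has already been ear-clipped into a list of triangles by whatever routine you prefer; the Pt/Tri types and all names are just illustrative:

using System;
using System.Collections.Generic;

struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }
struct Tri { public Pt A, B, C; public Tri(Pt a, Pt b, Pt c) { A = a; B = b; C = c; } }

static class TriangleSampler
{
    static double Area(Tri t) =>
        Math.Abs((t.B.X - t.A.X) * (t.C.Y - t.A.Y) - (t.C.X - t.A.X) * (t.B.Y - t.A.Y)) / 2.0;

    // Uniform point in one triangle: P4 = (1-r)*P1 + r*P2, then P5 = (1-sqrt(s))*P3 + sqrt(s)*P4.
    static Pt SampleTriangle(Tri t, Random rng)
    {
        double r = rng.NextDouble();
        double q = Math.Sqrt(rng.NextDouble());          // q = sqrt(s)
        double p4x = (1 - r) * t.A.X + r * t.B.X;
        double p4y = (1 - r) * t.A.Y + r * t.B.Y;
        return new Pt((1 - q) * t.C.X + q * p4x, (1 - q) * t.C.Y + q * p4y);
    }

    // Pick a triangle proportionally to its area (one uniform number), then sample inside it.
    public static List<Pt> Sample(List<Tri> tris, int n, Random rng)
    {
        var cumulative = new double[tris.Count];
        double total = 0;
        for (int i = 0; i < tris.Count; i++) { total += Area(tris[i]); cumulative[i] = total; }

        var result = new List<Pt>(n);
        for (int k = 0; k < n; k++)
        {
            double u = rng.NextDouble() * total;
            int idx = Array.BinarySearch(cumulative, u);
            if (idx < 0) idx = ~idx;                     // BinarySearch returns the complement when not found
            result.Add(SampleTriangle(tris[idx], rng));
        }
        return result;
    }
}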

An easy way to do this is this:
Calculate the bounding box
Generate points in that box
Discard all points not in the polygon of interest
This approach wastes a certain fraction of the generated points. For a triangle, it is never more than 50%. For arbitrary polygons it can be arbitrarily high, so you need to see whether it works for you.
For arbitrary polys you can decompose the polygon into triangles first which allows you to get to a guaranteed upper bound of wasted points: 50%.
For equally distanced points, generate points from a space-filling curve (and discard all points that are not in the polygon).
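For reference, a minimal C# sketch of the bounding-box / rejection approach (the point-in-polygon test is the standard even-odd ray cast; all names are illustrative):

using System;
using System.Collections.Generic;

static class RejectionSampler
{
    public static bool PointInPolygon(double x, double y, IList<(double X, double Y)> poly)
    {
        bool inside = false;
        for (int i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
        {
            if ((poly[i].Y > y) != (poly[j].Y > y) &&
                x < (poly[j].X - poly[i].X) * (y - poly[i].Y) / (poly[j].Y - poly[i].Y) + poly[i].X)
                inside = !inside;
        }
        return inside;
    }

    public static List<(double X, double Y)> Sample(IList<(double X, double Y)> poly, int n, Random rng)
    {
        // Bounding box of the polygon.
        double minX = double.MaxValue, minY = double.MaxValue;
        double maxX = double.MinValue, maxY = double.MinValue;
        foreach (var p in poly)
        {
            minX = Math.Min(minX, p.X); minY = Math.Min(minY, p.Y);
            maxX = Math.Max(maxX, p.X); maxY = Math.Max(maxY, p.Y);
        }

        var result = new List<(double X, double Y)>(n);
        while (result.Count < n)
        {
            double x = minX + rng.NextDouble() * (maxX - minX);
            double y = minY + rng.NextDouble() * (maxY - minY);
            if (PointInPolygon(x, y, poly)) result.Add((x, y));   // discard points outside the polygon
        }
        return result;
    }
}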

You can use Lloyd’s algorithm:
https://en.m.wikipedia.org/wiki/Lloyd%27s_algorithm
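Lloyd's algorithm proper needs a Voronoi diagram clipped to the polygon, which is a fair amount of machinery. As a rough, hedged sketch (my own approximation, not from the linked page), you can discretize it in C#: assign a dense set of sample points inside the polygon to their nearest seed and move each seed to the centroid of its cluster, repeating a few times. All names here are illustrative:

using System;
using System.Collections.Generic;

static class LloydApprox
{
    public static List<(double X, double Y)> Relax(
        List<(double X, double Y)> seeds,     // e.g. random points inside the polygon
        List<(double X, double Y)> dense,     // a dense set of samples covering the polygon
        int iterations)
    {
        for (int it = 0; it < iterations; it++)
        {
            var sumX = new double[seeds.Count];
            var sumY = new double[seeds.Count];
            var count = new int[seeds.Count];

            foreach (var p in dense)          // assign each dense sample to its nearest seed
            {
                int best = 0; double bestD = double.MaxValue;
                for (int s = 0; s < seeds.Count; s++)
                {
                    double dx = p.X - seeds[s].X, dy = p.Y - seeds[s].Y;
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = s; }
                }
                sumX[best] += p.X; sumY[best] += p.Y; count[best]++;
            }

            for (int s = 0; s < seeds.Count; s++)   // move each seed to its cluster centroid
                if (count[s] > 0) seeds[s] = (sumX[s] / count[s], sumY[s] / count[s]);
        }
        return seeds;
    }
}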

You can try the {spatialEco} package (https://cran.r-project.org/web/packages/spatialEco/index.html)
and apply the function sample.poly (https://www.rdocumentation.org/packages/spatialEco/versions/1.3-2/topics/sample.poly)
You can try this code:
library(rgeos)
library(spatialEco)
mypoly = readWKT("POLYGON((1 1,5 1,5 5,1 5,1 1))")
plot(mypoly)
points = sample.poly(mypoly, n = 20, type = "regular")
# points2 = sample.poly(mypoly, n = 20, type = "stratified")
# "stratified" is another type which may answer your problem
plot(points, col = "red", add = TRUE)

The easy answer comes from an easier question: How to generate a given number of randomly distributed points from the uniform distribution that will all fit inside a given polygon?
The easy answer is this: find the bounding box of your polygon (let's say it's [a,b] x [c,d]), then keep generating pairs of real numbers, one from U(a,b), the other from U(c,d), until you have n coordinate pairs that fall inside your polygon. This is simple to program but, if your polygon is very jagged, or thin and skewed, very wasteful and slow.
For a better answer, find the smallest rotated rectangular bounding box, and do the above in transformed coordinates.

Genetic algorithms can do it rather quickly.
Refer to "Genetic Algorithms for Graph Layouts with Geometric Constraints".
You can also use a force-directed graph layout for that...
Look at http://en.wikipedia.org/wiki/Force-based_algorithms_(graph_drawing)
It can definitely throw you a bone.
I have never tried it myself,
but I remember there is a possibility to fix some vertices of the graph in place.
Your algorithm will eventually look like:
Create a graph G = closed path of the vertices in V
Fix those vertices in place
Add N vertices to the graph and fully connect them with edges of equal tension value 1.0
Run_force_graph(G)
Scale the graph to the bounding box of the polygon
Though it won't be exact, because some non-convex shapes may produce weird results (take a star).
LASTLY: I didn't read it, but it seems relevant by its title and abstract:
take a look at "Consistent Graph Layout for Weighted Graphs".
Hope this helps...

A better answer comes from a better question. Suppose you want to put a set of n watchtowers to cover a polygon. You could see this as an optimization problem: find the 2n coordinates of the n points that will minimize a cost function (or maximize a value function) that fits your goal. One possible cost function could calculate, for each point, the distance to its closest neighbor or the boundary of the polygon, whichever is less, and calculate the variance of this sequence as a measure of "non-uniformity". You could use a random set of n points, obtained as above, as your initial solution.
I've seen such a "watchtower problem" in some book. Algorithms, calculus, or optimization.
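As a rough illustration (my own sketch, not from the answer), the suggested cost function might look like this in C#, using the variance of each point's distance to its nearest neighbour or to the polygon boundary, whichever is smaller:

using System;
using System.Collections.Generic;
using System.Linq;

static class UniformityCost
{
    static double DistToSegment((double X, double Y) p, (double X, double Y) a, (double X, double Y) b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double len2 = dx * dx + dy * dy;
        double t = len2 == 0 ? 0 : Math.Max(0, Math.Min(1, ((p.X - a.X) * dx + (p.Y - a.Y) * dy) / len2));
        double cx = a.X + t * dx, cy = a.Y + t * dy;
        return Math.Sqrt((p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy));
    }

    public static double Cost(IList<(double X, double Y)> pts, IList<(double X, double Y)> poly)
    {
        var d = new List<double>();
        for (int i = 0; i < pts.Count; i++)
        {
            double nearest = double.MaxValue;
            for (int j = 0; j < pts.Count; j++)          // distance to the closest other point
                if (j != i)
                {
                    double dx = pts[i].X - pts[j].X, dy = pts[i].Y - pts[j].Y;
                    nearest = Math.Min(nearest, Math.Sqrt(dx * dx + dy * dy));
                }
            for (int k = 0; k < poly.Count; k++)         // distance to the polygon boundary
                nearest = Math.Min(nearest, DistToSegment(pts[i], poly[k], poly[(k + 1) % poly.Count]));
            d.Add(nearest);
        }
        double mean = d.Average();
        return d.Sum(v => (v - mean) * (v - mean)) / d.Count;   // variance = "non-uniformity"
    }
}
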
#Youssef: sorry about the delay; a friend came, and a network hiccuped.
#others: have some patience, don't be so trigger-happy.

Related

How do I find the control points for a Bezier curve?

I need to implement connections in the form of curved lines in C# (Unity). I would like to get the result as similar as possible to the implementation in Miro.com (see screenshot).
After attaching the curve, I calculate the path of the cubic Bezier curve. For this first segment, the anchor points and offsets from the objects it connects are used. There are no problems at this stage.
Problem: When dividing the curve into segments by clicking and dragging one of the blue points of the segment (see screenshot), it is split in two in the middle. At the junction of the two new curves, a new interactive (movable) point is formed, for which the tangent and the coordinates of the control points are unknown. I need to find the position of these control points every time the position of the interactive points changes (white points in the picture below). Moreover, when dividing, the curve should not drastically change its position, should not form loops, may have control-point vectors of different lengths (I'm not sure here), and should behave as sensibly as possible (like on the board in Miro).
By control points I mean 2 invisible guide points for the Bezier segment.
In black I painted the known control points, and in red those that I need to find. (Pn - interactive points, Cn - control points)
The algorithms I have tried to find them give incorrect distances and directions of control points.
The following algorithms were tested:
Interpolation from Tacent - the curve jumps when splitting; the direction and offset of the control points are inappropriate;
Chaikin's algorithm - curve jumps during separation, creates loops;
"Custom" interpolation based on guesses (takes into account the distance to the center of the segment between the start and end points of the segment, as well as the direction between the start and end points) - has all the same problems, but looks slightly better than those above.
I suspect the solution is to chordally interpolate the points using a Catmull-Rom spline and translate the result to points for a Bezier curve. However, there are still problems with implementation.
The curves from 3DMax also look very similar. In their documentation, I found only a mention of the parametric curve.
Methods that I did not use (or did not work):
Catmull-Rom interpolation;
B-spline interpolation;
Hermitian interpolation;
De Casteljau's algorithm (although it seems not for this)
I would be immensely grateful for any help, but I ask for as much detail as possible.
Find helpful sources to understand bezier curves here and here.
To do what you want, I would give the Catmull-Rom approach a try; I believe it is much simpler than Bezier's, and it is the one used in the itween asset, which is free and gives you plenty of functionality already implemented.
If you want to stick to the bezier curves and finding the control points, I will tell you what I would do to find them.
For the case of 2 control point bezier curve:
P = (1-t)P1 + tP2
To get to know the control points P1(x1,y1) and P2(x2,y2), you need to apply the equation at a known point of your curve. Take into account that the 2D equation is vectorial, so each point provides two equations, one for x and one for y, and you have four unknowns, x and y for each control point.
So for the first node of the curve (t=0), you would have:
Px = (1-0)P1x + 0*P2x
Py = (1-0)P1y + 0*P2y
For the last point (t=1)
Px = (1-1)P1x + 1*P2x
Py = (1-1)P1y + 1*P2y
With these 4 equations I would try to obtain the control points P1 and P2. You can do it with t=0 and t=1, which are the points of the curve you are supposed to know and the ones that simplify the math because of their t values, but you should be able to use any values as long as you know the coordinates of the curve point for that t.
If the curve is a 3 control point bezier, you would need 6 equations for the 3 control points and so on.
I think that the best approach is to compose the curve from quadratic curve segments and calculate the control points for each chunk, but I am not sure about this.
Once the maths are understood and the control points obtained, I would try to implement that in code.
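As a hedged alternative to solving equations: De Casteljau subdivision (mentioned in the question) splits a cubic Bezier at a parameter t and directly yields the control points of both halves, which is exactly what is needed when a segment is split by dragging its middle point. A minimal Unity C# sketch, with illustrative names:

using UnityEngine;

static class BezierSplit
{
    // Splits the cubic Bezier (p0, c0, c1, p1) at t and returns the control points of both halves.
    public static (Vector2[] left, Vector2[] right) Split(
        Vector2 p0, Vector2 c0, Vector2 c1, Vector2 p1, float t = 0.5f)
    {
        Vector2 a = Vector2.Lerp(p0, c0, t);
        Vector2 b = Vector2.Lerp(c0, c1, t);
        Vector2 c = Vector2.Lerp(c1, p1, t);
        Vector2 d = Vector2.Lerp(a, b, t);
        Vector2 e = Vector2.Lerp(b, c, t);
        Vector2 f = Vector2.Lerp(d, e, t);   // the new on-curve (interactive) point

        return (new[] { p0, a, d, f }, new[] { f, e, c, p1 });
    }
}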

Creating a curve between two points each with normalized vectors

So I need to write a method that creates a curve between two points, with each point having a normalized vector pointing in an arbitrary direction. I have been trying to devise such a method but haven't been able to wrap my head around the math.
Here, since a picture is worth a thousand words this is what I need:
In the picture, the vectors are perpendicular to the red lines. I believe the vectors need to be weighted the same with a weight equivalent to the distance between the points. It needs to be so that when two points are on top of each other pointing in opposite directions it still all looks like one smooth curve (top curve in the picture). Also, I need to integrate the curves to find their lengths. I don't know why I haven't been able to think of how to calculate all of this but I haven't.
Also, I'm using C#, but the language doesn't really matter.
Cubic Bezier will indeed achieve the requested effect. You need four control points per curve segment. Two define the endpoints and two others the directions of the tangents at the endpoints. There are two degrees of freedom left, telling how far the control points can be along the tangents.
The arc length cannot be computed analytically and you will need numerical methods. This other question gives you useful information.
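A minimal C# sketch under those assumptions (endpoints with unit tangent directions, handle length taken as a fraction of the endpoint distance to fill the remaining degrees of freedom, arc length estimated numerically); all names are illustrative and the sign convention for the second tangent is an assumption:

using System;
using System.Numerics;

static class TangentCurve
{
    static Vector2 Evaluate(Vector2 p0, Vector2 c0, Vector2 c1, Vector2 p1, float t)
    {
        float u = 1 - t;
        return u * u * u * p0 + 3 * u * u * t * c0 + 3 * u * t * t * c1 + t * t * t * p1;
    }

    public static (Vector2 c0, Vector2 c1) ControlPoints(
        Vector2 p0, Vector2 t0, Vector2 p1, Vector2 t1, float weight = 0.5f)
    {
        float d = Vector2.Distance(p0, p1) * weight;   // handle length proportional to endpoint distance
        return (p0 + t0 * d, p1 - t1 * d);             // assumes t1 points "out of" p1, so step back along it
    }

    public static float ArcLength(Vector2 p0, Vector2 c0, Vector2 c1, Vector2 p1, int steps = 64)
    {
        float length = 0;
        Vector2 prev = p0;
        for (int i = 1; i <= steps; i++)
        {
            Vector2 cur = Evaluate(p0, c0, c1, p1, (float)i / steps);
            length += Vector2.Distance(prev, cur);     // sum of chord lengths approximates the arc length
            prev = cur;
        }
        return length;
    }
}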

Converting a polyline to a polygonal shape with width

While I was trying to parse and convert a Gerber RS274X file to a GDSII file, I encountered a certain problem.
If you stroke a solid circle along a certain path (a polyline), what you get is a solid shape, which can subsequently be converted to a closed polygon. My question is: is there a library or reliable algorithm to automate this process, where the input would be a string of points defining the polyline and the output would be the resulting polygon?
Below is an image I uploaded to explain my problem.
The shape you seek can be calculated by placing a desired number of evenly spaced points in a circle around each input point, and then finding the convex hull for each pair of circles on a line segment.
The union of these polygons will make up the polygon you want.
There are a number of algorithms that can find the convex hull for a set of points, and also libraries which provide implementations.
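A minimal C# sketch of the per-segment step (illustrative names, my own sketch): place k points on a circle of the stroke radius around each endpoint and take the convex hull of the combined set with a monotone-chain hull; unioning the per-segment hulls still needs a polygon clipping library:

using System;
using System.Collections.Generic;
using System.Linq;

static class StrokeSegment
{
    static IEnumerable<(double X, double Y)> Circle((double X, double Y) c, double r, int k)
    {
        for (int i = 0; i < k; i++)
        {
            double a = 2 * Math.PI * i / k;
            yield return (c.X + r * Math.Cos(a), c.Y + r * Math.Sin(a));
        }
    }

    static double Cross((double X, double Y) o, (double X, double Y) a, (double X, double Y) b) =>
        (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    // Andrew's monotone chain convex hull, counter-clockwise output.
    static List<(double X, double Y)> ConvexHull(List<(double X, double Y)> pts)
    {
        pts = pts.OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
        var hull = new List<(double X, double Y)>();
        for (int pass = 0; pass < 2; pass++)
        {
            int start = hull.Count;
            var sweep = pass == 0 ? pts : Enumerable.Reverse(pts).ToList();
            foreach (var p in sweep)
            {
                while (hull.Count >= start + 2 && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0)
                    hull.RemoveAt(hull.Count - 1);
                hull.Add(p);
            }
            hull.RemoveAt(hull.Count - 1);   // last point of each pass repeats the other pass's first
        }
        return hull;
    }

    // The stroked line segment from a to b with the given radius ("capsule" shape).
    public static List<(double X, double Y)> Capsule(
        (double X, double Y) a, (double X, double Y) b, double radius, int k = 32)
    {
        var pts = Circle(a, radius, k).Concat(Circle(b, radius, k)).ToList();
        return ConvexHull(pts);
    }
}
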
The algorithm you are talking about is called the "Minkowski sum" (in your case, of a polyline and a polygon approximating a circle). In your case the second summand (the circle) is convex, which means the Minkowski sum can be computed rather efficiently using a so-called polygon convolution.
You did not specify the language you use. In C++ the Minkowski sum is available as part of Boost.Polygon or as part of CGAL.
To use them you will probably need to convert your polyline into a (degenerate) polygon by traversing it twice: forward, then backward.
The union of convex hulls proposed by #melak47 will produce a correct result, but much less efficiently.

Quickly find and render terrain above a given elevation

Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bef part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps.
Build a list of triangles that are above the elevation threshold.
Take the union of these triangles to make a polygonal area.
Determine the boundary of the polygon.
If necessary, smooth the polygon boundary to make your layer look ok when displayed.
If you're trying to generate good looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about the triangle. If some vertices are above and others below, split your triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation). Include the one or two of those new triangles in your highland list.
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay algorithm (which you can look up) to organize your points into triangles. Then follow the same algorithm I mentioned above. Warning: this is going to look kind of sketchy if you don't have many points.
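As a rough C# sketch of step 1 (illustrative types, not a complete implementation): classify each triangle's vertices against the threshold and, where the triangle straddles the elevation line, split it by interpolating new vertices exactly on that line, keeping only the part above:

using System;
using System.Collections.Generic;

struct Vert { public double X, Y, Elev; public Vert(double x, double y, double e) { X = x; Y = y; Elev = e; } }

static class HighlandTriangles
{
    // Point on edge (a, b) where the elevation equals the threshold (linear interpolation).
    static Vert OnThreshold(Vert a, Vert b, double threshold)
    {
        double t = (threshold - a.Elev) / (b.Elev - a.Elev);
        return new Vert(a.X + t * (b.X - a.X), a.Y + t * (b.Y - a.Y), threshold);
    }

    // Returns zero, one or two triangles covering the part of (a, b, c) at or above the threshold.
    public static List<Vert[]> AbovePart(Vert a, Vert b, Vert c, double threshold)
    {
        var above = new List<Vert>();
        var below = new List<Vert>();
        foreach (var v in new[] { a, b, c }) (v.Elev >= threshold ? above : below).Add(v);

        var result = new List<Vert[]>();
        if (above.Count == 3)                               // the whole triangle is high
        {
            result.Add(new[] { a, b, c });
        }
        else if (above.Count == 1)                          // one corner pokes above the threshold
        {
            var p = OnThreshold(above[0], below[0], threshold);
            var q = OnThreshold(above[0], below[1], threshold);
            result.Add(new[] { above[0], p, q });
        }
        else if (above.Count == 2)                          // a quad is above; split it into two triangles
        {
            var p = OnThreshold(above[0], below[0], threshold);
            var q = OnThreshold(above[1], below[0], threshold);
            result.Add(new[] { above[0], above[1], p });
            result.Add(new[] { above[1], q, p });
        }
        return result;                                      // empty if all three vertices are below
    }
}
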
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays) you should be able to use array querying techniques to select all of the points where the elevation is above a certain threshold. For example, in python with numpy you can do:
indices = where(array > value)
And the indices variable will contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example IDL has the WHERE() command, and similar things can be done in Matlab).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/long/elevation data and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares where the max height is not above the search height, and drilling further into those squares where parts of the ground were above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter.)
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform lat/lon divisions (bearing in mind that that does not give uniform distance divisions). But if that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then when the aircraft needs data about points at or above an altitude, for a specific piece of terrain, the code only needs to read out a small part of the data in this array, which is indexed to make contiguous "voxels" contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the indexes for longitude would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.
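A minimal C# sketch of the sort-once / binary-search idea (types and names are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

struct Sample { public double Lat, Lon, Elev; }

static class ElevationQuery
{
    // Sort once by elevation; reuse the sorted array for every query.
    public static Sample[] BuildIndex(IEnumerable<Sample> samples)
    {
        var arr = samples.ToArray();
        Array.Sort(arr, (a, b) => a.Elev.CompareTo(b.Elev));
        return arr;
    }

    // All samples with elevation >= threshold: binary search for the cut, then take the tail.
    public static ArraySegment<Sample> AtOrAbove(Sample[] sorted, double threshold)
    {
        int lo = 0, hi = sorted.Length;          // find the first index whose elevation >= threshold
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (sorted[mid].Elev < threshold) lo = mid + 1; else hi = mid;
        }
        return new ArraySegment<Sample>(sorted, lo, sorted.Length - lo);
    }
}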

How can I calculate individual point masses?

I am working on a C# 2d soft body physics engine and I need to assign masses to an object's vertices given: a list of vertices (x,y positions), the total mass for the object, and the center of mass.
The center of mass is given as:
R = (1/M) * sum_j( mj * rj )
where,
R = center of mass
M = total mass
mj = mass of vertex j
rj = position of vertex j
I need an algorithm that can approximate each mj given R, M, and rj.
edit: I just want to clarify that I am aware that there are an infinite set of solutions. I am looking for a quick algorithm that finds a set of mj's (such that they are each sufficiently close to mj = M/[number of vertices] and where "sufficiently" is defined as some small floating point threshold).
Also, each object will consist of about 5 to 35 points.
You can compute the CM of a uniformly dense polygon as follows: number the N vertices from 0..N-1, and treat them cyclicly, so that vertex N wraps to vertex 0:
total_area = sum[i=0..N-1]( X(p[i],p[i+1])/2 )
CM = sum[i=0..N-1]( (p[i]+p[i+1])*X(p[i],p[i+1])/6 ) / total_area
where X(p,q)= p.x*q.y - q.x*p.y [basically, a 2D cross product]
If the polygon is convex, the CM will be inside the polygon, so you can reasonably start out by slicing up the area in triangles like a pie, with the CM at the hub. You should be able to weight each vertex of a triangle with a third of its mass, without changing the CM -- however, this would still leave a third of the total mass at the CM of the entire polygon. Nonetheless, scaling the mass transfer by 3/2 should let you split the mass of each triangle between the two "external" vertices. As a result,
area[i] = X( (p[i]-CM), (p[i+1]-CM) ) / 2
(this is the area of the triangle between the CM and vertices i and i+1)
mass[i] = (total_mass/total_area) * (area[i-1] + area[i])/2
Note that this kind of mass transfer is profoundly "unphysical" -- if nothing else, if treated literally, it would screw up the moment of inertia something fierce. However, if you need to distribute the mass among the vertices (like for some kind of cheesy explosion), and you don't want to disrupt the CM in doing so, this should do the trick.
Finally, a couple of warnings:
if you don't use the actual CM for this, it won't work right
it is hazardous to use this on concave objects; you risk ending up with negative masses
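A minimal C# sketch of the area/CM/mass formulas above, assuming the vertices are given in order around the polygon (names are illustrative):

using System;

static class VertexMasses
{
    static double X((double X, double Y) p, (double X, double Y) q) => p.X * q.Y - q.X * p.Y;   // 2D cross product

    public static double[] Compute((double X, double Y)[] p, double totalMass)
    {
        int n = p.Length;

        // total_area = sum( X(p[i],p[i+1])/2 ); CM = sum( (p[i]+p[i+1]) * X(p[i],p[i+1])/6 ) / total_area
        double totalArea = 0, cmX = 0, cmY = 0;
        for (int i = 0; i < n; i++)
        {
            var a = p[i]; var b = p[(i + 1) % n];
            double cross = X(a, b);
            totalArea += cross / 2;
            cmX += (a.X + b.X) * cross / 6;
            cmY += (a.Y + b.Y) * cross / 6;
        }
        cmX /= totalArea; cmY /= totalArea;

        // area[i] = X(p[i]-CM, p[i+1]-CM)/2 ; mass[i] = (M/total_area) * (area[i-1] + area[i]) / 2
        var area = new double[n];
        for (int i = 0; i < n; i++)
            area[i] = X((p[i].X - cmX, p[i].Y - cmY), (p[(i + 1) % n].X - cmX, p[(i + 1) % n].Y - cmY)) / 2;

        var mass = new double[n];
        for (int i = 0; i < n; i++)
            mass[i] = (totalMass / totalArea) * (area[(i - 1 + n) % n] + area[i]) / 2;

        return mass;   // sums to totalMass, and keeps the CM where it was
    }
}
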
The center of mass R will constantly be changing as the vertices move. So, if you have 10 vertices, store the values from 10 consecutive "frames" - this will give you 10 equations for your 10 unknowns (assuming that the masses don't change over time).
Count the degrees of freedom: for points in D dimensional space you have D+1 equations[+] and n unknowns for n separate particles. If n>D+1 you are sunk (unless you have more information than you have told us about: symmetry constraints, higher order moments, etc...).
edit: My earlier version assumed you had the m_is and were looking for the r_is. It is slightly better when you have the r_is and want the m_is.
[+] The one you list above (which is actually D separate equations) and M = \sum m_j
Arriu said:
Oh sorry I misunderstood your question. I thought you were asking if I was modeling objects such as a torus, doughnut, or ring (objects with cutouts...). I am modeling bodies with just outer shells (like balloons or bubbles). I don't require anything more complex than that.
Now we are getting somewhere. You do know something more.
You can approximate the surface area of the object by breaking it into triangles between adjacent points. This total area gives you mean mass density. Now find the DoF deficit, and assign that many r_is (drawn at random, I guess) an initial mass based on the mean density and 1/3 of the area of each triangle it is a party to. Then solve the remaining system analytically. If the problem is ill-conditioned you can either draw a new set of assigned points, or attempt a random walk on the masses that you have already guessed at.
I would flip the problem around. That is, given a density and the position of the object (which is of course still the center of mass of the object, plus three vectors corresponding to the orientation of the object, see Euler's angles), associate a volume with each vertex element (which would change with resolution and could be fractional for positions at the edge of the object) and multiply the density (d_j) by the associated volume (v_j): m_j = v_j * d_j. This approach should naturally reproduce the center of mass of the object.
Perhaps I didn't understand your problem, but consider that this would ultimately yield the correct mass (Mass = sum(m_j) = sum(v_j * d_j)), and at worst this approach should give you a verification of your result.
