Algorithm to create area filled points on Map [duplicate] - c#

This question already has answers here:
Polygon enclosing a set of points
(4 answers)
Closed 7 years ago.
I have been working on a large map project in .NET and ran into a problem: I have many points given as coordinates (latitude, longitude), and I need to draw a polygon that encloses them. My question is how to choose the outermost points so I can draw the polygon through them. I attached an image to clarify my problem -> http://postimg.org/image/fwsf0v285/

It sounds like you want a polygon that surrounds the data points. There are two algorithms available.
First is the "Convex Hull". This is a classic algorithm of computational geometry and will produce the polygon you have drawn. Imagine your data points are nails in a board: it produces a polygon resembling an elastic band that has been stretched around the outer data points. Convex hulls are fast (or should be).
Convex hulls really are common; I know I've coded them up to work on the Earth's surface (i.e. to take into account the Earth's curvature). Here's a simpler Euclidean example (from Stack Overflow) that should get you started:
Translating concave hull algorithm to c#
The alternative is an "Alpha Shape", sometimes known as a "Concave Hull". Basically it allows for concave (inward) curves in the sides of the polygon that are larger than a specified dimension ('alpha'). For example, a 'C' of data points should produce a 'C' shape, whilst a convex hull would give you something more like an 'O', as it will cut off the concave hollow.
Alpha shapes are much more involved, as they require the calculation of a Delaunay triangulation.
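For completeness, here is a minimal Euclidean sketch of Andrew's monotone chain convex hull in C#. It treats latitude/longitude as planar coordinates, which is only reasonable over small areas (for larger extents you would want the sphere-aware variant mentioned above); the Point type is a stand-in for whatever coordinate type the project already uses.

using System.Collections.Generic;
using System.Linq;

struct Point
{
    public double X, Y; // e.g. longitude, latitude
    public Point(double x, double y) { X = x; Y = y; }
}

static class ConvexHull
{
    // Cross product of (b - a) and (c - a); > 0 means a counter-clockwise turn.
    static double Cross(Point a, Point b, Point c) =>
        (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);

    // Andrew's monotone chain: O(n log n), returns hull vertices counter-clockwise.
    public static List<Point> Compute(IEnumerable<Point> points)
    {
        var pts = points.OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
        if (pts.Count <= 2) return pts;

        var lower = new List<Point>();
        foreach (var p in pts)
        {
            while (lower.Count >= 2 && Cross(lower[lower.Count - 2], lower[lower.Count - 1], p) <= 0)
                lower.RemoveAt(lower.Count - 1);
            lower.Add(p);
        }

        var upper = new List<Point>();
        for (int i = pts.Count - 1; i >= 0; i--)
        {
            while (upper.Count >= 2 && Cross(upper[upper.Count - 2], upper[upper.Count - 1], pts[i]) <= 0)
                upper.RemoveAt(upper.Count - 1);
            upper.Add(pts[i]);
        }

        // Each half repeats the other's starting point; drop the duplicates.
        lower.RemoveAt(lower.Count - 1);
        upper.RemoveAt(upper.Count - 1);
        lower.AddRange(upper);
        return lower;
    }
}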

Related

C# winforms - How can I get all the points on the Bezier curve or the equation drawn with DrawBeziers? [duplicate]

This question already has answers here:
How can I tell if a point is on a quadratic Bézier curve within a given tolerance?
(3 answers)
Closed 7 months ago.
I'm writing a computer graphics program. I need to draw a Bézier curve and then determine whether a point is on that curve. I've defined 4 points (2 endpoints, 2 control points) and plotted the curve using DrawBeziers. So how can I determine if a point is on the drawn curve?
Should I get all the points on the Bézier curve and check whether my point is among them?
Should I get the equation of the curve and check whether the point satisfies it?
Should I use the DrawBeziers method to draw it?
How can I implement this feature?
I'm looking forward to everyone's answers; it would be better if they could be explained in detail. Thanks in advance.
I would assume that the goal is to check whether the user clicked on the curve or not. The mathematical curve is infinitely thin, so what you are really after is the distance to the curve; the linked duplicate provides the math. Once you have the distance, just check whether it is lower than half the line thickness.
A potentially more accurate way would be to draw the curve, and any other graphics, to an internal bitmap, using a unique color for each object and without any anti-aliasing. That gives you an easy way to look up the clicked object, while supporting overlapping shapes in a natural way. Also consider using a wider line thickness when drawing, to make objects easier to select. But it will be somewhat costly to redraw the internal bitmap. A sketch of this picking-bitmap idea follows.
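A minimal GDI+ sketch of the picking bitmap, assuming each object has a small positive integer id; objectId, the bitmap size, and the click coordinates are all illustrative:

using System.Drawing;
using System.Drawing.Drawing2D;

static class PickingBitmap
{
    // Render one object into an off-screen bitmap in a unique, exact colour.
    public static Bitmap Build(int width, int height, int objectId,
                               PointF p0, PointF p1, PointF p2, PointF p3, float penWidth)
    {
        var pick = new Bitmap(width, height);
        using (var g = Graphics.FromImage(pick))
        {
            g.SmoothingMode = SmoothingMode.None; // no anti-aliasing: colours stay exact
            g.Clear(Color.Black);                 // 0 = background
            // Encode the id in the RGB channels; force alpha to 255 so the pen is opaque.
            var idColor = Color.FromArgb(unchecked((int)0xFF000000) | objectId);
            using (var pen = new Pen(idColor, penWidth)) // a wider pen makes hits easier
                g.DrawBezier(pen, p0, p1, p2, p3);
        }
        return pick;
    }

    // On click: the colour under the cursor identifies the object (0 = none).
    public static int HitTest(Bitmap pick, int mouseX, int mouseY) =>
        pick.GetPixel(mouseX, mouseY).ToArgb() & 0x00FFFFFF;
}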
Another alternative would be to convert your spline into a polyline and check the distance to each line segment. This is probably the least accurate option, but it should be very simple to implement if you already have types for line segments.
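A sketch of that polyline approach: sample the cubic Bézier at a fixed number of parameter steps, then take the minimum distance from the test point to the resulting segments. The names and the hitTolerance parameter (typically half the pen width) are illustrative.

using System;
using System.Drawing;

static class BezierHitTest
{
    // Evaluate a cubic Bézier at parameter t in [0, 1].
    static PointF Eval(PointF p0, PointF p1, PointF p2, PointF p3, float t)
    {
        float u = 1 - t;
        return new PointF(
            u*u*u*p0.X + 3*u*u*t*p1.X + 3*u*t*t*p2.X + t*t*t*p3.X,
            u*u*u*p0.Y + 3*u*u*t*p1.Y + 3*u*t*t*p2.Y + t*t*t*p3.Y);
    }

    // Distance from point p to the segment ab.
    static float DistToSegment(PointF p, PointF a, PointF b)
    {
        float dx = b.X - a.X, dy = b.Y - a.Y;
        float len2 = dx*dx + dy*dy;
        float t = len2 == 0 ? 0 : ((p.X - a.X)*dx + (p.Y - a.Y)*dy) / len2;
        t = Math.Max(0, Math.Min(1, t));
        float ex = a.X + t*dx - p.X, ey = a.Y + t*dy - p.Y;
        return (float)Math.Sqrt(ex*ex + ey*ey);
    }

    // True if p lies within hitTolerance of the flattened curve.
    public static bool IsOnCurve(PointF p, PointF p0, PointF p1, PointF p2, PointF p3,
                                 float hitTolerance, int steps = 64)
    {
        var prev = p0;
        for (int i = 1; i <= steps; i++)
        {
            var cur = Eval(p0, p1, p2, p3, (float)i / steps);
            if (DistToSegment(p, prev, cur) <= hitTolerance) return true;
            prev = cur;
        }
        return false;
    }
}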

Algorithm to determine back sides of a polygon [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
In a 2D scene, given a polygon and a source point, how can I determine the back sides of the polygon, or the sides facing away from the source point?
Edit: In the example picture, the circle represents the sight (or light) source point. The polygon can have any number of sides. I'm looking for example code to identify the sides of the polygon opposite the source point.
Example picture
Update: I ran across this page that describes what I'd like to do under the "Finding the boundary points" section, but it still doesn't provide example code. Dynamic 2D Soft Shadows
This question is very confusing. In your example you have a concave polygon, but you link to a page about solving the problem only for convex polygons. You say that you don't know what algorithm to use, but the page you linked to gives the algorithm:
For every edge:
Find normal for edge
Classify edge as front facing or back facing
Determine if either edge points are boundary points or not.
You say that the step you're stuck on is "classify edges as front facing or back facing", but once you know the normal and the observer point, you know whether the edge is front-facing or back facing! As the page you linked to says:
a dot product is performed with this vector and the vector to the light. If this is greater than zero, the edge is front facing.
That is: if the normal is pointing towards the observer then it is facing towards the observer; that's how we define "facing towards".
This question is confusing and would benefit greatly from you actually writing some code, and then showing us what code you wrote. Obviously you're stuck somewhere, but it is very difficult for us to say where you are stuck or how to unstick you.
My advice is that you start with what you know, which is the algorithm:
For every edge:
Find normal for edge
Classify edge as front facing or back facing
Determine if either edge points are boundary points or not.
Now, translate that word-for-word into C#:
foreach (Edge edge in myPolygon.Edges())
{
    var normal = GetNormalOfEdge(edge);
    var classification = Classify(normal, observer);
    var eitherBoundary = IsBoundary(edge.Start) || IsBoundary(edge.End);
}
Now start filling out the details: what are the types of those locals? What are the type signatures of those helper methods? Which helper methods can you implement? Which ones are you stuck on? And so on.
Remember, if you have a concept, make a type to represent it. Don't have a type to represent classifications? Invent one. Now you've got a helpful tool that will enable you to solve harder problems. If you have an operation, make a method to represent it, and again, now you've got a tool that you can build on. Work slowly and methodically, building up a library of helpful types and methods. Test them independently so you know they are reliable.
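To make the shape of those helpers concrete, here is one possible sketch. Every type and name in it is hypothetical, it assumes the polygon is wound counter-clockwise (so the rotated edge direction points outward), and note that classifying an edge needs a point on the edge as well as its normal, so this version passes the whole edge:

struct Vector2
{
    public double X, Y;
    public Vector2(double x, double y) { X = x; Y = y; }
    public static Vector2 operator -(Vector2 a, Vector2 b) => new Vector2(a.X - b.X, a.Y - b.Y);
    public double Dot(Vector2 o) => X * o.X + Y * o.Y;
}

struct Edge
{
    public Vector2 Start, End;
    public Edge(Vector2 start, Vector2 end) { Start = start; End = end; }
}

enum Classification { FrontFacing, BackFacing }

static class BackFaceHelpers
{
    // Outward normal, assuming the polygon's vertices are wound counter-clockwise.
    public static Vector2 GetNormalOfEdge(Edge e)
    {
        var d = e.End - e.Start;
        return new Vector2(d.Y, -d.X); // edge direction rotated 90 degrees clockwise
    }

    // Front-facing if the outward normal points towards the observer:
    // dot(normal, vector to observer) > 0, exactly as the linked page says.
    public static Classification Classify(Edge e, Vector2 observer)
    {
        var toObserver = observer - e.Start;
        return GetNormalOfEdge(e).Dot(toObserver) > 0
            ? Classification.FrontFacing
            : Classification.BackFacing;
    }
}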

Mapping a 3D point to 2D context [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I have already read some articles and questions on Stack Overflow, but I didn't find what I wanted. I may not have looked carefully, so point me to the correct articles/questions if you know any.
Anyway, what I want to do is clear. I know the camera's position (x',y',z') and I have the camera's rotation matrix (Matrix3). I also have the camera's aspect ratio (W/H) and output size (W,H). My point is at (x,y,z) and I want code (or an algorithm, so I can write the code) to calculate its position on the screen (the screen's size is the same as the camera output's size) as (x'',y'').
Do you know any useful article? It is important that the article or algorithm supports the camera's rotation matrix.
Thank you all.
Well, you need to specify the projection type (orthogonal, perspective, ...) first.
1. Transform any point (x,y,z) to camera space
Subtract the camera position, then apply the inverse of the camera direction (coordinate system) matrix. The Z axis of the camera is usually the viewing direction. If you use a 4x4 homogeneous matrix then the subtraction is already part of it, so do not do it twice!
2. Apply the projection
An orthogonal projection is just a scale matrix. Perspective projections are more complex, so google for them. This is where the aspect ratio is applied, and also the FOV (field of view) angles.
3. Clip to screen and Z-buffer space
Now you have x,y,z in projected camera space. To actually obtain screen coordinates with perspective you have to divide by the z or w coordinate (which one depends on the math and projection used), so for your 3x3 matrices:
xscr = x / z;
yscr = y / z;
That is why z-near for projections must be > 0 (otherwise it could cause division by zero)!
4. Render or process the pixel (x,y)
For more info see: Mathematically compute a simple graphics pipeline
[Notes]
If you look at OpenGL tutorials/references, or any 3D vector math for rendering, you will find tons of stuff. Google "homogeneous transform matrices" or "homogeneous coordinates".
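Putting those steps together, here is a compact perspective-projection sketch in C#. It assumes the rotation matrix maps camera space to world space and is orthonormal (so its inverse is its transpose); fovYDegrees and all other names are illustrative:

using System;

static class Projector
{
    // rot: 3x3 camera rotation (orthonormal, camera-to-world); (cx,cy,cz): camera position.
    // Returns screen pixel coordinates, or null if the point is behind the camera.
    public static (double X, double Y)? WorldToScreen(
        double[,] rot, double cx, double cy, double cz,
        double px, double py, double pz,
        double fovYDegrees, double screenW, double screenH)
    {
        // 1. Translate into camera-relative coordinates.
        double dx = px - cx, dy = py - cy, dz = pz - cz;

        // 2. Rotate into camera space: the inverse of an orthonormal matrix is its transpose.
        double xc = rot[0,0]*dx + rot[1,0]*dy + rot[2,0]*dz;
        double yc = rot[0,1]*dx + rot[1,1]*dy + rot[2,1]*dz;
        double zc = rot[0,2]*dx + rot[1,2]*dy + rot[2,2]*dz;

        if (zc <= 0) return null; // behind the camera (this is why z-near must be > 0)

        // 3. Perspective divide, scaled by the field of view and aspect ratio.
        double f = 1.0 / Math.Tan(fovYDegrees * Math.PI / 360.0); // cot(fovY / 2)
        double aspect = screenW / screenH;
        double xn = (f / aspect) * xc / zc; // normalized device coords in [-1, 1]
        double yn = f * yc / zc;

        // 4. Map from [-1, 1] to pixel coordinates (y flipped: screen y grows downward).
        return ((xn + 1) * 0.5 * screenW, (1 - yn) * 0.5 * screenH);
    }
}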
I'm not entirely sure what it is that you are trying to achieve, but I think you are attempting to make the surface of one plane (a screen) line up at a relative size to another plane. To calculate this ratio you should look into Gaussian surfaces, and a lot of trig. Hope this helps.
I do not think you have enough information to perform the calculation!
You can think of your camera as a pinhole camera. It consists of an image plane, and a point through which all light striking the image plane comes. Usually, the image plane is rectangular, and the point through which all incoming light comes is on the normal of the image plane starting from the center of the image plane.
With these restrictions you need the following:
- center position of the camera
- two vectors (perpendicular to each other) defining the size and attitude of the image plane
- distance of the point from the camera plane (this could be called the focal distance, even though it strictly speaking is not the same thing)
(There are really several ways to express these quantities.)
If you only have the position and size of the image plane in pixels and the camera rotation, you are missing the scale factor. In the real world this is equivalent to knowing where you hold a camera and where you point it, but not knowing the focal length (zoom setting).
There is a lot of literature available, this one popped up first with a search:
http://www.cse.psu.edu/~rcollins/CSE486/lecture12_6pp.pdf
Maybe that helps you to find the correct search terms.

Recognizing rectangles from varying lines in c# [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Given an image with an undefined number of rectangles that are separated by predefined lines at undefined coordinates (the lines in the image only represent the coordinates where the predefined lines should be).
Every rectangle should become a separate System.Drawing.Bitmap and be put into an array of Bitmaps.
The rectangles will always be rectangle shaped and will all have the same dimensions (so if you can find one proper rectangle, you may assume that the rest of the rectangles are the same).
You may assume that all the lines in all images are a predefined fixed width (e.g. 5 pixels)
The grid will always be parallel & perpendicular to the sides of the image.
All lines will go from top to bottom, or side to side, even if it doesn't look like it in the image.
The number of rectangles is undefined (not always 4x4 as in the images)
These images are meant to find the rectangles, which will then be cut from the original image. But if I can cut these images in the proper rectangles, I should be able to do the same for the original image.
I can imagine that this question is rather hard to understand; I've had a hard time trying to explain it. All questions are more than welcome.
I'm not quite sure what your question really is, so I'm assuming you are searching for an algorithm to detect your rectangles.
From the images it looks like you can separate the border lines of the rectangles from the background texture with some kind of binarization filter.
I would try a Hough transformation on your images to detect the rectangles, and look for similarly sized rectangles in Hough space to narrow down the results. The Hough transform can be easily implemented and is not very complicated, but I guess a bit of googling will get you sample code as well.
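Since the grid here is axis-aligned, the Hough accumulator degenerates into simple row and column vote counts, which keeps a sketch short. This assumes you have already binarized the image into a bool[,] where true marks a line pixel; minFill and the names are illustrative:

using System.Collections.Generic;

static class GridLineDetector
{
    // Degenerate Hough transform for axis-aligned lines: each line pixel votes
    // for its row and its column; rows/columns with enough votes are grid lines.
    public static (List<int> Rows, List<int> Cols) FindLines(bool[,] binary, double minFill = 0.8)
    {
        int h = binary.GetLength(0), w = binary.GetLength(1);
        var rowVotes = new int[h];
        var colVotes = new int[w];

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (binary[y, x]) { rowVotes[y]++; colVotes[x]++; }

        var rows = new List<int>();
        for (int y = 0; y < h; y++)
            if (rowVotes[y] >= minFill * w) rows.Add(y);

        var cols = new List<int>();
        for (int x = 0; x < w; x++)
            if (colVotes[x] >= minFill * h) cols.Add(x);

        // Consecutive indices belong to the same (e.g. 5 px wide) line; collapse each
        // run to its centre, then cut Bitmaps from the regions between the lines.
        return (rows, cols);
    }
}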

Calculate Kinect X,Y,Z position when a zoom lens is applied [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I am working on a college project using two Kinect sensors. We are taking the X and Z coordinates from both Kinects and converting them into "real world" X and Z coordinates with an offset and some basic math. Everything works great without the zoom lens, but when the zoom lens is added the coordinate system gets distorted.
We are using this product: http://www.amazon.com/Zoom-Kinect-Xbox-360/dp/B0050SYS5A
We seem to be going from a 57 degree field of view to a 113 degree field of view when switching to the zoom lens. What would be a good way of trying to calculate this change in the coordinate system? How can we convert these distorted X and Z coordinates to "real world" coordinates?
The sensors are placed next to each other, at a 0 degree angle, looking at the same wall, with some of their view fields overlapping. The overlap gets greater with the zoom lenses.
Thanks for any answers or ideas!
If you can take pictures via the Kinect, you should be able to use a checkerboard pattern and a camera calibration tool (e.g. the GML calibration toolbox) to deduce the camera parameters/distortion of each lens system.
You should then be able to transform the data from each camera and its respective lens system to world coordinates.
If your measurements of the relative orientation and position of the cameras (and your math) are correct, the coordinates should roughly agree.
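As a rough first-order sanity check before doing a full calibration, you can see how much the horizontal field of view alone changes the X estimate at a given depth. The 57 and 113 degree figures are from the question; everything else in this sketch (the names, the simple undistorted pinhole model) is an assumption and ignores real lens distortion:

using System;

static class KinectFov
{
    // Convert a pixel column to a real-world X offset (same units as the depth),
    // using a pinhole model: tan(angle off the optical axis) is linear in the pixel offset.
    public static double PixelToWorldX(int pixelX, int imageWidth, double depthZ, double hFovDegrees)
    {
        double half = hFovDegrees * Math.PI / 360.0;          // half the FOV, in radians
        double ndc = (pixelX + 0.5) / imageWidth * 2.0 - 1.0; // [-1, 1] across the image
        return depthZ * Math.Tan(half) * ndc;
    }
}

// Example: the same pixel and depth give very different X with 57 vs 113 degree optics.
// double xNormal = KinectFov.PixelToWorldX(500, 640, 2.0, 57.0);
// double xZoom   = KinectFov.PixelToWorldX(500, 640, 2.0, 113.0);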
