I have some random points and I need to find the best-fit line through them in 3D space in C#. I currently have a working algorithm for 2D; see the link below:
Algorithm for scatter plot 'best-fit' line
I need to do the same in 3D. Is there an algorithm for this in C#?
This question already has answers here: How can I tell if a point is on a quadratic Bézier curve within a given tolerance?
I'm writing a computer graphics program. I need to draw a Bézier curve and then determine whether a given point is on that curve. I've defined 4 points (2 endpoints, 2 control points) and plotted the curve using DrawBeziers. So how can I determine whether a point is on the drawn curve?
Should I get all the points on the Bézier curve and check whether the point is among them?
Should I get the equation of the curve and check whether the point satisfies it?
Should I use the DrawBeziers method to draw it?
How can I implement this feature?
I'm looking forward to everyone's answers; it would be great if they could be explained in detail. Thanks in advance.
I would assume that the goal is to check whether the user clicked on the curve. The mathematical curve is infinitely thin, so what you are really after is the distance to the curve. The linked question provides the math. Once you have the distance, just check whether it is smaller than the line thickness.
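As a rough illustration, here is a brute-force sketch that samples the cubic curve instead of using the closed-form math from the link; the four points are the ones you pass to DrawBeziers, and the sample count is an arbitrary trade-off between speed and accuracy:

using System;
using System.Drawing;

static class BezierHitTest
{
    // Evaluate the cubic Bézier at parameter t in [0,1] (Bernstein form).
    static PointF Evaluate(PointF p0, PointF p1, PointF p2, PointF p3, float t)
    {
        float u = 1 - t;
        float b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
        return new PointF(
            b0 * p0.X + b1 * p1.X + b2 * p2.X + b3 * p3.X,
            b0 * p0.Y + b1 * p1.Y + b2 * p2.Y + b3 * p3.Y);
    }

    // True if 'hit' lies within half the pen width of the sampled curve.
    public static bool IsOnCurve(PointF p0, PointF p1, PointF p2, PointF p3,
                                 PointF hit, float penWidth, int samples = 200)
    {
        float limit = penWidth / 2f;
        for (int i = 0; i <= samples; i++)
        {
            PointF q = Evaluate(p0, p1, p2, p3, i / (float)samples);
            float dx = q.X - hit.X, dy = q.Y - hit.Y;
            if (dx * dx + dy * dy <= limit * limit)
                return true;
        }
        return false;
    }
}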
A potentially more accurate way would be to draw the curve, and any other graphics, to an internal bitmap, using a unique color for each object and no anti-aliasing. That gives you an easy way to look up the clicked object, while supporting overlapping shapes in a natural way. Also consider using a wider line thickness when drawing, to make it easier to select objects. But it will be somewhat costly to redraw the internal bitmap.
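A minimal sketch of that idea, assuming a single curve; the object id is encoded in the blue channel of the pen color, and the pen width of 6 is an arbitrary widened value to ease selection:

using System.Drawing;
using System.Drawing.Drawing2D;

static int HitTest(Point click, Size canvas, PointF[] curvePoints)
{
    using (var hitMap = new Bitmap(canvas.Width, canvas.Height))
    {
        using (var g = Graphics.FromImage(hitMap))
        {
            g.SmoothingMode = SmoothingMode.None;   // no anti-aliasing: colors stay exact
            g.Clear(Color.Black);                   // id 0 = background
            using (var pen = new Pen(Color.FromArgb(255, 0, 0, 1), 6f)) // id 1, wide pen
                g.DrawBeziers(pen, curvePoints);
        }
        return hitMap.GetPixel(click.X, click.Y).B; // blue channel carries the object id
    }
}

In a real application you would keep the hit-map bitmap around and only redraw it when the scene changes, rather than rebuilding it per click.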
Another alternative would be to convert your spline into a polyline and check the distance to each line segment. This is probably the least accurate option, but it should be very simple to implement if you already have types for line segments.
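For reference, the distance from a point to one segment can be computed with the standard clamped-projection formula (nothing here is library-specific):

using System;
using System.Drawing;

// Distance from point p to the segment a-b.
static float DistanceToSegment(PointF p, PointF a, PointF b)
{
    float dx = b.X - a.X, dy = b.Y - a.Y;
    float lenSq = dx * dx + dy * dy;
    float t = lenSq == 0 ? 0 : ((p.X - a.X) * dx + (p.Y - a.Y) * dy) / lenSq;
    t = Math.Max(0, Math.Min(1, t));                 // clamp the projection onto the segment
    float cx = a.X + t * dx - p.X, cy = a.Y + t * dy - p.Y;
    return (float)Math.Sqrt(cx * cx + cy * cy);
}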
I am new to image processing, so please forgive my ignorance. I am trying to come up with a way to get the coordinates of a sub-image inside its containing larger image. For example, I have a large image of the New York skyline and one of just the Empire State Building. The large picture is always a high-quality image; the small picture is supplied by a user's camera scanning a printed version of the larger image, so the quality, scale, and colors of the smaller image will not perfectly match those of the larger one. What I am looking to get is the X, Y coordinates from the top-left corner of the larger image to the top-left corner of the smaller image, as if the smaller image were a puzzle piece placed in the larger image. It would be much appreciated if someone could point me in the right direction. Thanks
EDIT
Thank you for the feedback. I have come to realize that this might be a very difficult task, so I ended up taking a different approach: I will be embedding recognizable shapes in the aforementioned print media and using OpenCvSharp (a free C# wrapper around OpenCV) to detect them.
To give you just one possible direction: what you might be facing here is a flavor of pattern detection and/or recognition (i.e. machine learning). I suggest looking for ready-made implementations, as this is a complicated task.
The basic idea is that you train an algorithm on the features of the objects of interest, and the algorithm then searches images for anything that matches your pattern.
There are many algorithms out there, each with its own approach. As a starting point, you could look at what a well-known image processing framework has to offer - OpenCV:
http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html
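For what it's worth: if the scale of the user's photo roughly matched the large image, plain template matching would already return the top-left corner you want. Here is a minimal sketch with OpenCvSharp (the wrapper mentioned in the question's edit); the file names are placeholders. Since your scale and colors will differ, feature matching as in the tutorial above is the more robust route:

using OpenCvSharp;

Mat large = Cv2.ImRead("skyline.png", ImreadModes.Grayscale);
Mat small = Cv2.ImRead("empire_state.png", ImreadModes.Grayscale);
Mat result = new Mat();
// Slide the small image over the large one and score each position.
Cv2.MatchTemplate(large, small, result, TemplateMatchModes.CCoeffNormed);
Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);
// maxLoc is the top-left corner of the best match; maxVal is its score in [-1,1].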
EDIT:
Since OpenCV is a C++ project, you will need an OpenCV wrapper for .NET/C#:
http://www.emgu.com/wiki/index.php/Main_Page
This is a very hard and big project to do.
BTW, you can get the color of a pixel with the GetPixel() method.
The following code creates a 200x200 image and gets the color at coordinate (100,100) of that image.
Bitmap bmp = new Bitmap(200, 200);   // create a 200x200 bitmap
Color c = bmp.GetPixel(100, 100);    // read the color at (100,100)
To scan an image efficiently you must use pointers (unsafe code) rather than GetPixel(); otherwise the performance will be far too slow.
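A sketch of the pointer-based approach using LockBits (compile with /unsafe). Note that in a real scan you would lock once and iterate over all pixels, rather than locking per call as this minimal example does:

using System.Drawing;
using System.Drawing.Imaging;

static unsafe Color GetPixelFast(Bitmap bmp, int x, int y)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    try
    {
        byte* row = (byte*)data.Scan0 + y * data.Stride;  // start of row y
        byte* px = row + x * 4;                           // 4 bytes per pixel: B,G,R,A
        return Color.FromArgb(px[3], px[2], px[1], px[0]);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}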
I have been working on a huge map project in .NET, and during that work I encountered a problem. I have plenty of points with coordinates (latitude, longitude), and I need to draw a polygon that "wraps" my points. My question is: how do I choose the outermost points to build the polygon from? I attached an image to clarify my problem -> http://postimg.org/image/fwsf0v285/
It sounds like you want a polygon that surrounds the data points. There are two algorithms available.
First is a "convex hull". This is a classic algorithm of computational geometry and will produce the polygon you have drawn. Imagine your data points are nails in a board: it produces a polygon resembling an elastic band that has been stretched around the outer data points. Convex hulls are fast (or should be).
Convex hulls really are common; I've coded them up to work on the Earth's surface (i.e. taking the Earth's curvature into account). Here's a simpler Euclidean example (from Stack Overflow) that should get you started:
Translating concave hull algorithm to c#
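For a small geographic area you can treat (longitude, latitude) as planar X/Y; over larger areas you would need the spherical variant mentioned above. A self-contained sketch of Andrew's monotone chain construction (one of several standard convex hull algorithms):

using System;
using System.Collections.Generic;
using System.Linq;

// Returns the convex hull in counter-clockwise order (collinear points dropped).
static List<(double X, double Y)> ConvexHull(List<(double X, double Y)> pts)
{
    var p = pts.Distinct().OrderBy(q => q.X).ThenBy(q => q.Y).ToList();
    if (p.Count < 3) return p;

    double Cross((double X, double Y) o, (double X, double Y) a, (double X, double Y) b)
        => (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    var hull = new List<(double X, double Y)>();
    foreach (var pt in p)                          // lower hull
    {
        while (hull.Count >= 2 && Cross(hull[hull.Count - 2], hull[hull.Count - 1], pt) <= 0)
            hull.RemoveAt(hull.Count - 1);
        hull.Add(pt);
    }
    int lower = hull.Count + 1;
    for (int i = p.Count - 2; i >= 0; i--)         // upper hull
    {
        while (hull.Count >= lower && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p[i]) <= 0)
            hull.RemoveAt(hull.Count - 1);
        hull.Add(p[i]);
    }
    hull.RemoveAt(hull.Count - 1);                 // last point repeats the first
    return hull;
}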
The alternative is an "alpha shape", sometimes known as a "concave hull". Basically it allows for concave (inward) curves in the sides of the polygon that are larger than a specified dimension ('alpha'). For example, a 'C' of data points should produce a 'C' shape, whilst a convex hull would give you something more like an 'O', as it will cut off the concave hollow.
Alpha shapes are much more involved, as they require the calculation of a Delaunay triangulation.
I have already read some articles and even questions on Stack Overflow, but I didn't find what I wanted. I may not have looked carefully, so point me to the correct articles/questions if you know any.
Anyway, what I want to do is clear. I know the camera's position (x',y',z') and I have the camera's rotation matrix (Matrix3). I also have the camera's aspect ratio (W/H) and output size (W,H). My point is at (x,y,z) and I want code (or an algorithm, so I can write the code) to calculate its position (x'',y'') on the screen (the screen's size is the same as the camera output's size).
Do you know any useful articles? It is important that the article or algorithm supports the camera's rotation matrix.
Thank you all.
Well, you need to specify the projection type (orthogonal, perspective, ...) first.
transform any point (x,y,z) to camera space
Subtract the camera position, then apply the inverse of the camera direction (coordinate system) matrix. The Z axis of the camera is usually the viewing direction. If you use a 4x4 homogeneous matrix then the subtraction is already part of it, so do not do it twice!
apply projection
An orthogonal projection is just a scale matrix. Perspective projections are more complex, so google for them. This is where the aspect ratio is applied, and also the FOV (field of view) angles.
clip to screen and Z-buffer space
Now you have x,y,z in projected camera space. To actually obtain screen coordinates with perspective, you have to divide by the z or w coordinate (depending on the math and projection used), so for your 3x3 matrices:
xscr=x/z;
yscr=y/z;
That is why z-near for projections must be > 0 (otherwise it could cause a division by zero)!
render or process pixel (x,y)
For more info see: Mathematically compute a simple graphics pipeline
[Notes]
If you look at OpenGL tutorials/references or any 3D vector math for rendering you will find tons of stuff. Google "homogeneous transform matrices" or "homogeneous coordinates".
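Putting those steps together, here is a sketch using System.Numerics. It assumes a camera looking down its local +Z axis and a vertical field of view; adjust the conventions (axis directions, FOV definition) to match your own setup:

using System;
using System.Numerics;

// World-space point -> pixel coordinates for a simple perspective pinhole camera.
// camRot is the camera's rotation matrix (camera -> world); its inverse
// (the transpose, for a pure rotation) takes us from world to camera space.
static (float X, float Y)? Project(Vector3 point, Vector3 camPos, Matrix4x4 camRot,
                                   float verticalFovRad, int width, int height)
{
    Matrix4x4.Invert(camRot, out Matrix4x4 inv);
    Vector3 p = Vector3.Transform(point - camPos, inv);  // world -> camera space

    if (p.Z <= 0) return null;                           // behind the camera (z-near > 0!)

    float f = 1f / (float)Math.Tan(verticalFovRad / 2);  // focal scale from the FOV
    float aspect = width / (float)height;
    float xNdc = (p.X * f / aspect) / p.Z;               // perspective divide -> [-1,1]
    float yNdc = (p.Y * f) / p.Z;

    // NDC -> pixels; screen Y grows downward, hence the flip.
    return ((xNdc + 1) * 0.5f * width, (1 - yNdc) * 0.5f * height);
}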
I'm not entirely sure what it is that you are trying to achieve; however, I'm thinking that you are attempting to make the surface of one plane (a screen) line up at a relative size to another plane. To calculate this ratio you should look into Gaussian surfaces, and a lot of trig. Hope this helps.
I do not think you have enough information to perform the calculation!
You can think of your camera as a pinhole camera. It consists of an image plane, and a point through which all light striking the image plane comes. Usually, the image plane is rectangular, and the point through which all incoming light comes is on the normal of the image plane starting from the center of the image plane.
With these restrictions you need the following:
- center position of the camera
- two vectors (perpendicular to each other) defining the size and attitude of the image plane
- distance of the point from the camera plane (this could be called the focal distance, even though it strictly speaking is not the same thing)
(There are really several ways to express these quantities.)
If you only have the position and pixel size of the image plane and the camera rotation, you are missing the scale factor. In the real world this is equivalent to knowing where you hold a camera and where you point it, but not knowing the focal length (zoom setting).
There is a lot of literature available, this one popped up first with a search:
http://www.cse.psu.edu/~rcollins/CSE486/lecture12_6pp.pdf
Maybe that helps you to find the correct search terms.
I am working on a college project using two Kinect sensors. We are taking the X and Z coordinates from both Kinects and converting them into "real world" X and Z coordinates with an offset and some basic math. Everything works great without the zoom lens, but when the zoom lens is added the coordinate system gets distorted.
We are using this product http://www.amazon.com/Zoom-Kinect-Xbox-360/dp/B0050SYS5A
We seem to be going from a 57-degree field of view to a 113-degree field of view when switching to the zoom lens. What would be a good way to calculate this change in the coordinate system? How can we convert these distorted X and Z coordinates to the "real world" coordinates?
The sensors are placed next to each other, at a 0-degree angle, looking at the same wall, with some of their fields of view overlapping. The overlap gets greater with the zoom lenses.
Thanks for any answers or ideas!
If you can take pictures via the Kinect, you should be able to use a checkerboard pattern and a camera calibration tool (e.g. the GML calibration toolbox) to deduce the camera parameters/distortion of each lens system.
You should then be able to transform the data from each camera and its respective lens system to world coordinates.
If your measurements of the relative orientation and position of the cameras (and your math) are correct, the coordinates should roughly agree.
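For a back-of-the-envelope check (not a substitute for calibration, since the wide-angle adapter adds real distortion), the ideal pinhole model relates a depth pixel's column to real-world X via the horizontal field of view. The 57- and 113-degree values are the ones from the question; the 640-pixel width is a placeholder for your depth-image width:

using System;

static double PixelToWorldX(int px, int imageWidth, double depthZ, double fovDeg)
{
    double half = fovDeg * Math.PI / 360.0;       // half the FOV, in radians
    double ndc = 2.0 * px / imageWidth - 1.0;     // -1 (left edge) .. +1 (right edge)
    return depthZ * Math.Tan(half) * ndc;
}

// Without the lens: PixelToWorldX(px, 640, z, 57);  with the lens: PixelToWorldX(px, 640, z, 113);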