Emgu Camera Calibration - C#

I am using Emgu/OpenCV to find the position of some flat blobs. I can currently find their positions in pixels and would like to convert this to world coordinates (in/mm). I have looked at Emgu's camera calibration example, but I am having trouble actually applying it to get what I want. Using the example, I believe I can get the intrinsic matrix, but I am not really sure what to do with it. My camera is fixed and is looking down at the fixed plane the blobs are on. Any help would be appreciated. Thank you.

It sounds a little like you just want to know the size of flat objects that are located on the same plane as a calibration pattern. If that's the case, this is quite easy when the camera looks at the plane head-on (its optical axis normal to the plane, so there is no perspective distortion). You can then just calculate a pixel-to-millimeter conversion factor from the calibration pattern and you are done.
If that's not the case and you want to fully calibrate your camera, you can either move your camera or, if you only have one image, use a 3D calibration pattern. This avoids having all points coplanar, which leads to a degenerate solution.
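For the head-on case, here is a minimal sketch of that pixel-to-millimeter factor (plain C#; the corner positions and the 25 mm spacing are made-up values, not from the question):

// Hypothetical sketch: derive a pixel-to-millimeter scale from two
// calibration-pattern points whose real-world separation is known.
using System;
using System.Drawing;

class PixelToMm
{
    static void Main()
    {
        // Assumed values: two chessboard corners detected in the image,
        // known to be 25 mm apart on the physical pattern.
        PointF cornerA = new PointF(412.3f, 310.8f);
        PointF cornerB = new PointF(512.9f, 311.1f);
        const double knownDistanceMm = 25.0;

        double pixelDistance = Math.Sqrt(
            Math.Pow(cornerB.X - cornerA.X, 2) +
            Math.Pow(cornerB.Y - cornerA.Y, 2));
        double mmPerPixel = knownDistanceMm / pixelDistance;

        // Any blob position in pixels can now be scaled to millimeters;
        // this holds only while the fixed camera views the plane head-on.
        Console.WriteLine($"Scale: {mmPerPixel:F4} mm/pixel");
    }
}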

Related

Map Projection Points to Real-World Points

I have searched far and wide for an answer to this problem, and I cannot find one so I am asking here.
The Problem:
I have a laser projecting down on a surface from overhead and I want to project some specific size shapes on this surface. In order to do this I need to 'calibrate' the laser to ground it in the real world.
The laser projects in its own coordinate system ranging from -32000 to 32000 in the x and y directions. I have targets set up on my surface in a rough rectangle (see the image below for more details). The targets are set up in terms of millimeters and form their own coordinate system.
I need to be able to take points in millimeters and get them into this range of -32000 to 32000 accurately in an array of scenarios.
Example:
What is the most accurate way of determining the laser space coordinates of the desired point?
Problem 2:
The projection space is not guaranteed to be flat. It could be tilted in any direction. For example, if the bottom (in relation to the example picture) is raised, the real-world coordinates stay the same in 2D, but the measured laser coordinates become more of a trapezoid. See the image below.
If anyone has encountered/solved a similar problem or can even begin to point me in the right direction for a solution, it would be greatly appreciated.
Thank you!
I had the same issue on my post right here: https://stackoverflow.com/a/52480400/9130280
I originally framed my question in terms of pictures because that was easier to explain, but I applied the same solution to positioning a device on a surface. It is close to what you are trying to do.
Basically, you have to use the OpenCvSharp 3 library (from NuGet).
First you have to get a homography matrix. The only coordinates you have to know are the corner points. So you fill two arrays with the corners, one per coordinate system, and then you use:
homographyMatrix = OpenCvSharp.Cv2.FindHomography(originalPointsList, targetPointsList);
And then to get any point in "millimeters" to its equivalent in laser coordinates:
targetPoint = OpenCvSharp.Cv2.PerspectiveTransform(originalPoint, homographyMatrix);
Let me know if you need more details.
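To make that concrete, here is a small self-contained sketch of those two calls (OpenCvSharp; all corner coordinates are placeholders I made up, not values from your setup):

using OpenCvSharp;

class LaserMapping
{
    static void Main()
    {
        // Four target positions measured in millimeters (assumed layout).
        Point2d[] millimeterCorners =
        {
            new Point2d(0, 0),
            new Point2d(500, 0),
            new Point2d(500, 300),
            new Point2d(0, 300),
        };

        // The same four targets as read back in laser units (-32000..32000).
        Point2d[] laserCorners =
        {
            new Point2d(-30000, -30000),
            new Point2d(30000, -29500),
            new Point2d(29000, 30000),
            new Point2d(-29500, 29800),
        };

        Mat homography = Cv2.FindHomography(millimeterCorners, laserCorners);

        // Transform any millimeter point into laser space.
        Point2d[] query = { new Point2d(250, 150) };
        Point2d[] laserPoint = Cv2.PerspectiveTransform(query, homography);

        System.Console.WriteLine($"Laser coords: {laserPoint[0].X:F0}, {laserPoint[0].Y:F0}");
    }
}

Because a homography is a full plane-to-plane perspective mapping, the same four-point calibration also absorbs the trapezoid effect from Problem 2, as long as the tilted surface is still planar.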

XNA 2D Transformation

I currently have the following map (isometric projection), and I can move/zoom/rotate it without problems using matrix transformations (SpriteBatch): picture.
I wanted to know whether it is possible (and if so, how) to get the following result, without resorting to 3D: picture.
All suggestions are welcome. Thank you in advance. :)
It's going to be a huge pain, but I think it is at least possible if you don't change the viewing angle.
Some ideas:
Make each tile its own little image unit.
The farther back a tile is from the camera, the lower its layer priority should be when you draw it, so that it gets blocked by the tiles in front of it. You will also have to figure out an algorithm that correctly sizes the tiles based on their distance. This algorithm will have to be more and more precise the closer you want to bring the tiles, but there should be some mathematical/geometric formula that can do it automatically; see the sketch after this list.
You quite literally CANNOT rotate the camera at all, unless you want to have separate sprites for every single angle for every single tile.
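A rough sketch of the layering and scaling ideas from the list above (XNA; the shrink formula is just one plausible choice, not a known-good one):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class TileDrawer
{
    public static void DrawTiles(SpriteBatch spriteBatch, Texture2D tileTexture,
                                 Vector2[] tilePositions, float[] tileDistances,
                                 float maxDistance)
    {
        // BackToFront draws sprites with larger layerDepth first, so far
        // tiles are blocked by near ones.
        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);

        for (int i = 0; i < tilePositions.Length; i++)
        {
            // Simple perspective-like shrink: farther tiles draw smaller.
            float scale = 1f / (1f + tileDistances[i] * 0.1f);

            // layerDepth must lie in [0,1].
            float layerDepth = tileDistances[i] / maxDistance;

            spriteBatch.Draw(tileTexture, tilePositions[i], null, Color.White,
                             0f, Vector2.Zero, scale, SpriteEffects.None, layerDepth);
        }

        spriteBatch.End();
    }
}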

Resolution-independent grid for animations?

I'm working on a game made in XNA/C# and I want to enable XML-based animations.
The XML will look like this:
<Animation>
<AnimatedObject>
<Filename>Spaceship_Jet_01</Filename>
<Flipped>false</Flipped>
<StartPosition_X>300</StartPosition_X>
<StartPosition_Y>500</StartPosition_Y>
<GOTOPosition_X>650</GOTOPosition_X>
<GOTOPosition_Y>500</GOTOPosition_Y>
<Time>10000</Time>
</AnimatedObject>
</Animation>
This will move an object to the side, like this
http://imm.io/odc7 (sorry the X coordinate is wrong)
I noticed there will be problems when the player's display resolution is different from mine, because I enter pixel-precise information about where the object comes from and where it has to go.
I thought about a grid, so I could tell the program to move the object from, e.g., (30,27) to (22,27). Is this a good solution? The grid has to be independent of the resolution, but the number of tiles has to be constant, and I have to draw the object to the screen. That means I have to find the right pixel position of the tile at (22,27) and then "move" the object to that tile.
Is there a better way to do that? How can I solve this with XNA?
If you use a 2D camera you won't have any problem, because calculating the new view to adapt it to the new resolution is not difficult, and you don't have to change anything in your loading methods or logic.
There are things you can do that I don't like:
Work with positions in the [0..1] range; this is difficult to measure.
Fix the positions with the new resolution factor when you load the XML; this is ugly:
Pos *= NewResolutionSize / DefaultResolutionSize;
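A common way to implement the 2D-camera suggestion is a fixed virtual resolution plus a scale matrix; the sketch below assumes a made-up 1280x720 design resolution:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class VirtualResolutionCamera
{
    // Author every XML animation against this fixed coordinate space.
    const float VirtualWidth = 1280f;
    const float VirtualHeight = 720f;

    public Matrix GetScaleMatrix(GraphicsDevice device)
    {
        // Scale from the design space to whatever back buffer is in use.
        float scaleX = device.Viewport.Width / VirtualWidth;
        float scaleY = device.Viewport.Height / VirtualHeight;
        return Matrix.CreateScale(scaleX, scaleY, 1f);
    }
}

The positions in the XML (e.g. 300, 500) then stay untouched: pass the matrix to spriteBatch.Begin as its transform and every draw call is adapted to the player's actual resolution.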

Rotating a ship 360°

In the game I'm trying to make, I have some ships (not spaceships; actual ships that are in water).
If I just directly rotate them, I get absurd results.
Do I need to make 8 pictures for each ship (considering there are 8 directions)?
Is there any way I can do it by creating just one image, or at least a few, instead of 8?
Essentially, rotation mathematics are an interpretation of the original image.
Sure, it works depending on the complexity of the image and on how many straight lines and perpendicular features it has, but some things just don't work.
If you're doing a top-down 2D game with ships (I'm going to assume sail ships here), then rotating mathematically really just isn't going to look good, as the sails themselves move and angle depending on wind speed/direction and the heading of the ship.
Long story short: mathematical rotation works well for an Asteroids-style triangle ship, but it doesn't work well for proper graphics.
Hope this helps!
If you are talking 2D graphics and are getting "absurd results", I'm assuming you're not taking the origin into account. If you have a Texture2D and give it a rotation value, it will rotate around the default origin, which is (0,0). Try setting the origin in your spriteBatch.Draw call to new Vector2(texture.Width / 2, texture.Height / 2) and see if that is a step in the right direction.
Another approach would be to have a sprite sheet with the 8 drawings that you mention and reference a different source rectangle of the Texture2D.
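A sketch of both suggestions (XNA; the horizontal 8-frame sheet layout is my assumption):

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class ShipDrawing
{
    // Option 1: mathematical rotation around the texture's center.
    public static void DrawRotated(SpriteBatch spriteBatch, Texture2D ship,
                                   Vector2 position, float rotation)
    {
        Vector2 origin = new Vector2(ship.Width / 2f, ship.Height / 2f);
        spriteBatch.Draw(ship, position, null, Color.White,
                         rotation, origin, 1f, SpriteEffects.None, 0f);
    }

    // Option 2: 8 hand-drawn directions laid out left to right in one sheet.
    public static void DrawFromSheet(SpriteBatch spriteBatch, Texture2D sheet,
                                     Vector2 position, float rotation)
    {
        int frameWidth = sheet.Width / 8;

        // Snap the angle to the nearest of the 8 drawn directions.
        int frame = (int)Math.Round(rotation / MathHelper.TwoPi * 8) % 8;
        if (frame < 0) frame += 8;

        Rectangle source = new Rectangle(frame * frameWidth, 0,
                                         frameWidth, sheet.Height);
        spriteBatch.Draw(sheet, position, source, Color.White);
    }
}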

Detect marker in 2D image

I am hoping to obtain some help with 2D object detection. I'll give a brief overview of the context in which this will be implemented.
There will be an image taken of the ceiling. The ceiling will have markers placed on it so the orientation of the camera can be determined. The pictures will always be taken facing straight up. My goal is to detect one of these markers in the image and determine its rotation. So rotation and scaling (to a lesser extent) will be the two primary factors used in the image detection. I will be writing the software in either C# or MATLAB (not quite sure yet).
For example, the marker might be an arrow like this:
An image taken of the ceiling would contain markers. The software needs to detect a single marker and determine that it has been rotated by 170 degrees.
I have no prior experience with image analysis. I know image processing is a fairly broad topic and was hoping to get some advice on which direction I should take and which techniques would be best for my application. Thanks!
I'm not directly in this field, but I would tell you to start by looking into edge detection specifically. If you have a background in math/engineering, the materials are pretty easy to understand:
This seemed to spark some ideas:
http://www.cfar.umd.edu/~fer/cmsc426/lectures/edge1.ppt
I'd recommend MATLAB or if you're intent on using C#, Emgu CV is pretty good.
Hough transforms are a great idea. Once you detect the edges in your image, using, say, a Canny edge detector, you get an edge image (a binary image with only 1 or 0 for values).
Then the Hough straight-line transform (essentially) spins a line about each white pixel in the edge image, using a parametrized form of the line (the angular resolution of the spin is up to you). It counts the number of white (valued at 1) pixels lying along each spun line and stores that count in a big accumulator, indexed by the parameters of the line.
(Hough space plot example: http://upload.wikimedia.org/wikipedia/en/a/af/Hough_space_plot_example.png)
In the example above, the parametric form for a line is:
rho = x*cos(theta) + y*sin(theta)
where rho is the distance of the line from the origin and theta is the angle of its normal.
So, as you can see, if you look at the accumulator bin for a particular orientation, you can find out how many lines are oriented at that angle. Of course, you'll have to do some extra work to figure out which lines belong to the arrow, since you have 5 other lines per arrow, but that shouldn't be too hard.
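Since Emgu CV was recommended above, here is a rough sketch of that Canny-plus-Hough pipeline (the file name and thresholds are placeholders, not tuned values):

using System;
using Emgu.CV;
using Emgu.CV.Structure;

class MarkerOrientation
{
    static void Main()
    {
        Image<Gray, byte> ceiling = new Image<Gray, byte>("ceiling.png");
        Image<Gray, byte> edges = ceiling.Canny(100, 200);

        // Probabilistic Hough transform: rho step 1 px, theta step 1 degree,
        // accumulator threshold 50, minimum length 30 px, maximum gap 5 px.
        LineSegment2D[] lines = CvInvoke.HoughLinesP(edges, 1, Math.PI / 180, 50, 30, 5);

        // Each segment's direction approximates the orientation of one
        // edge of the marker.
        foreach (LineSegment2D line in lines)
        {
            double angle = Math.Atan2(line.P2.Y - line.P1.Y,
                                      line.P2.X - line.P1.X) * 180 / Math.PI;
            Console.WriteLine($"Segment at {angle:F1} degrees");
        }
    }
}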
As always in computer vision, your first problem is image illumination and acquisition. Before going further, establish how your markers will be printed on the ceiling, what their form will be, what light you will be using to see them, and what camera setup you will choose to look at the markers.
Given a good material, a good light and a good camera, you may have no problem at all processing the image. For example, you can print a full arrow in a retro-reflective material, with a longer tail than in your example, and use a colored light with a corresponding filter on the camera. Now all you have in your image is arrows... There are plenty of other ways of acquiring the image that will help you there.
Once you have plain arrows, a simple blob analysis (which consists of computing the statistical moments of objects in the image) will give you a lot of information: each arrow should have almost equal values for the seven Hu moments, which lets you filter objects efficiently, and the orientation computed from the central moments will give you the angle of the arrow. Blob analysis, being purely statistical, is extremely fast.
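A minimal sketch of the moments idea with Emgu CV (the file name is assumed; note that the central-moment angle is only defined modulo 180 degrees, so you still need the arrow tip to resolve the full direction):

using System;
using Emgu.CV;
using Emgu.CV.Structure;

class BlobOrientation
{
    static void Main()
    {
        // Binary mask with the arrow as the white blob (assumed input).
        Image<Gray, byte> binary = new Image<Gray, byte>("arrow_mask.png");
        Moments m = CvInvoke.Moments(binary, true);

        // Standard orientation formula from second-order central moments.
        double theta = 0.5 * Math.Atan2(2 * m.Mu11, m.Mu20 - m.Mu02);
        Console.WriteLine($"Blob orientation: {theta * 180 / Math.PI:F1} degrees");
    }
}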
Several systems have been developed to detect markers and their orientation robustly:
reacTIVision (open source) uses these types of tags to find position and orientation:
ARToolKit (open source) uses a different type of tags to extract all 6 degrees of freedom:
(example marker image: http://www.schanes.net/docs/robot/marker.png)
If your primary goal is not to learn, but to make the application work, I would suggest you use one of these. It is not a trivial task for a beginner to robustly detect the position and orientation of a random marker in an image.
On the other hand, if you are mainly interested in learning, I would also direct you to ARToolKit and its publications (and their references), which explain how to robustly implement marker detection.
You will need to explore edge detection, so look into Hough filters. After that you will need to look into pattern classifiers and feature extraction.
This paper has an algorithm that appears to work without edge detection.
This book excerpt is more oriented toward the kind of symbol detection you intend, once you have done the edge detection.
A rigorous way to determine the orientation of an image acquired under projective geometry (most cameras) is to use vanishing points and vanishing lines. Good news for you: your marker can be used to find this information! More good news: your image can be rectified, so the image columns (the y-axis) will correspond to the up-down direction. You will find more about this in chapter 8 of Hartley and Zisserman's book, Multiple View Geometry in Computer Vision.
Also remember that you will probably need to deal with radial distortion, the distortion caused by the camera lens. The other guys are right about the arrow-detection problem: you have to use edge detection and, after that, a Hough transform or template matching. Refer to Gonzalez and Woods' book Digital Image Processing for details.
