Detect Rotation of a scanned image in C#

We want a C# solution to correct a scanned image that is rotated. Our first thought was to detect the rotation angle and then rotate the image back. But then we figured image warping would be more accurate, since it should make the scanned image match our template; we could then process it, as we know all the coordinates of our template... I searched for a free SDK or a free solution in C#. Help with this would be great, as it is the last task in our work. Really, thanks to all.

We used the PrimeOCR product to do this. It's not free, but we couldn't find a free program that was comparable.

So, the hard part is to detect the angle of the page.
If you have full control over the template, the simplest way to do this is probably to come up with an easily-detectable symbol (e.g. a solid black circle) and stick 3 of them on the template. Then, detect them (just look for big blocks of dark pixels, in the case of a solid black circle).
So, you'll then have 3 sets of coordinates. If you have a top circle, a left circle, and a right circle, with all 3 circles at different distances from one another, detecting which circle is the top circle should be pretty easy.
Then just call a rotation function. This part is easy and has been done before (e.g. http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-rotate ).
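In case that tutorial is unavailable, a minimal System.Drawing sketch of the rotation step could look like this (the helper name is illustrative):

    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Rotate a scanned page about its center by the detected angle.
    static Bitmap RotateImage(Bitmap source, float angleDegrees)
    {
        var result = new Bitmap(source.Width, source.Height);
        result.SetResolution(source.HorizontalResolution, source.VerticalResolution);
        using (Graphics g = Graphics.FromImage(result))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            // Translate so the rotation happens about the image center.
            g.TranslateTransform(source.Width / 2f, source.Height / 2f);
            g.RotateTransform(angleDegrees);
            g.TranslateTransform(-source.Width / 2f, -source.Height / 2f);
            g.DrawImage(source, new Point(0, 0));
        }
        return result;
    }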
Edit:
I suggested a circle because it's easier to find the center, but a rectangle should work, too.
To be more explicit about how to actually locate the rectangles/circles, take the average brightness value of every a × a group of pixels. If that value is below a threshold b (a solid black marker on white paper gives a low average brightness), then that a × a group of pixels is part of a marker. a and b are variables you'll want to come up with yourself.
Use flood fill (or, more precisely, Connected Component Labeling) to group the resulting pixels together. The end result should give you your rectangles.
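As a rough sketch of the block test (GetPixel is slow, and LockBits would be the faster route, but this keeps the idea clear; blockSize and threshold correspond to a and b above):

    using System.Drawing;

    // Flag every a-by-a block whose average brightness is below the
    // threshold b; flagged blocks are then grouped by flood fill /
    // connected component labeling to recover the markers.
    static bool[,] FindDarkBlocks(Bitmap image, int blockSize, float threshold)
    {
        int cols = image.Width / blockSize, rows = image.Height / blockSize;
        var hits = new bool[cols, rows];
        for (int bx = 0; bx < cols; bx++)
        for (int by = 0; by < rows; by++)
        {
            float sum = 0;
            for (int x = 0; x < blockSize; x++)
            for (int y = 0; y < blockSize; y++)
                sum += image.GetPixel(bx * blockSize + x,
                                      by * blockSize + y).GetBrightness();
            hits[bx, by] = sum / (blockSize * blockSize) < threshold;
        }
        return hits;
    }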

Related

Map Projection Points to Real world Points

I have searched far and wide for an answer to this problem, and I cannot find one so I am asking here.
The Problem:
I have a laser projecting down on a surface from overhead and I want to project some specific size shapes on this surface. In order to do this I need to 'calibrate' the laser to ground it in the real world.
The laser projects in its own coordinate system, ranging from -32000 to 32000 in the x and y directions. I have targets set up on my surface in a rough rectangle (see image below for more details). The targets are set up in terms of millimeters and form their own coordinate system.
I need to be able to take points in millimeters and get them into this range of -32000 to 32000 accurately in an array of scenarios.
Example:
What is the most accurate way of determining the laser space coordinates of the desired point?
Problem 2:
The projection space is not guaranteed to be flat. It could be tilted in any direction. For example, if the bottom (in relation to the example picture) is raised, the real-world coordinates stay the same in 2-D, but the measured laser coordinates become more of a trapezoid. See image below.
If anyone has encountered/solved a similar problem or can even begin to point me in the right direction for a solution, it would be greatly appreciated.
Thank you!
I had the same issue on my post right here: https://stackoverflow.com/a/52480400/9130280
I asked my question in terms of pictures because it was easier to explain, but I applied the solution to device positioning on a surface. It is close to what you are trying to do.
Basically, you have to use the OpenCvSharp 3 library (from NuGet).
First you have to get a homography matrix. The only coordinates you have to know are the corner points, so you fill up two arrays with them in each coordinate system and then you use:
homographyMatrix = OpenCvSharp.Cv2.FindHomography(originalPointsList, targetPointsList);
And then to get any point in "millimeters" to its equivalent in laser coordinates:
targetPoint = OpenCvSharp.Cv2.PerspectiveTransform(originalPoint, homographyMatrix);
Let me know if you need more details.
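To make that concrete, here is a minimal sketch assuming the Point2d overloads in OpenCvSharp 3; the corner values are illustrative placeholders for your measured targets:

    using OpenCvSharp;

    // Known corner positions of the targets, in both coordinate systems.
    Point2d[] millimeterCorners =
    {
        new Point2d(0, 0), new Point2d(500, 0),
        new Point2d(500, 300), new Point2d(0, 300)
    };
    Point2d[] laserCorners =
    {
        new Point2d(-32000, -32000), new Point2d(32000, -32000),
        new Point2d(32000, 32000), new Point2d(-32000, 32000)
    };

    // The homography absorbs the perspective distortion of a tilted
    // surface, which also covers "Problem 2" (the trapezoid case).
    using (Mat homography = Cv2.FindHomography(millimeterCorners, laserCorners))
    {
        // Map any millimeter point into laser coordinates.
        Point2d[] laserPoints = Cv2.PerspectiveTransform(
            new[] { new Point2d(250, 150) }, homography);
    }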

Detect mouseover of non-square part of an Image

So I am working on a Risk-type game in XNA/C#. I have a map, similar to this one, and I need to be able to detect mouseovers on each territory (number). If these areas were squares, it would be easy, as they could each be represented by a rectangle. However, they are different-size polygons. Is there a polygon shape that behaves similarly to a square? If there isn't, how would I go about doing this?
I suggest this: attach a color to each number and recreate your picture in those colors, so every shape is drawn in its own particular color. Don't draw it on screen; use it only as a reference map. Then, when the user clicks or moves the mouse over your original map, you simply project the mouse coordinates into the color map, check the color of the pixel lying under the mouse, and because you have each color associated with a territory number...
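A sketch of that lookup in XNA, assuming colorMap is a Texture2D the same size as the drawn map and that the map is drawn unscaled at the origin (all names here are illustrative):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;

    // Once, at load time: pull the reference map's pixels into memory.
    Color[] mapPixels = new Color[colorMap.Width * colorMap.Height];
    colorMap.GetData(mapPixels);

    // Per frame: sample the pixel under the mouse.
    MouseState mouse = Mouse.GetState();
    if (mouse.X >= 0 && mouse.X < colorMap.Width &&
        mouse.Y >= 0 && mouse.Y < colorMap.Height)
    {
        Color picked = mapPixels[mouse.Y * colorMap.Width + mouse.X];
        // Look 'picked' up in your color -> territory dictionary.
    }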
This is not C#-specific (I've never written anything in the language, so I have no idea what APIs there are), but there are 2 algorithms that come to mind for detecting whether a point is inside a polygon (which can be used to detect whether the mouse point is over a polygon/map shape).
One is based on ray casting, where you cast a ray in one direction from the (mouse) point to "infinity" (the edge of the board in this case) and count the number of times it crosses the polygon's edges. If the count is odd, the point is inside the polygon; if it is even, the point is outside.
A wiki link to it: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
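A compact C# version of that even-odd test (this is the classic PNPOLY formulation; PointF is just a convenient float pair):

    using System.Drawing;

    // Cast a horizontal ray from p and count edge crossings: odd = inside.
    static bool ContainsPoint(PointF[] polygon, PointF p)
    {
        bool inside = false;
        for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
        {
            PointF a = polygon[i], b = polygon[j];
            // Edge straddles the ray's Y, and the crossing lies right of p?
            if ((a.Y > p.Y) != (b.Y > p.Y) &&
                p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X)
                inside = !inside;
        }
        return inside;
    }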
The other algorithm that comes to mind works only for triangles, I think, but it can be simpler to implement (taking a quick glance at your shapes, they can easily be broken down into triangles, and some are already triangles). It checks whether the point is on the same (internal) "side" of all the edges of the triangle. To find out which "side" a point is on relative to an edge, you create 2 vectors: the first is the edge itself (made up of its 2 points), and the other runs from the first point of that edge to the input point; then calculate the cross product of those 2 vectors. The result will be negative or positive, which can be used to determine the "direction".
A link to it: http://www.blackpawn.com/texts/pointinpoly/default.html
(On that page is another algorithm that can also work for triangles)
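For completeness, a sketch of that same-side test for one triangle, using the sign of the 2D cross product as the "direction":

    using System.Drawing;

    // Sign of the cross product of (a - o) and (b - o).
    static float Cross(PointF o, PointF a, PointF b) =>
        (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);

    // The point is inside if it is on the same side of all three edges.
    static bool InTriangle(PointF p, PointF a, PointF b, PointF c)
    {
        float d1 = Cross(a, b, p), d2 = Cross(b, c, p), d3 = Cross(c, a, p);
        bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos); // mixed signs would mean outside
    }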
Hit testing on a polygon is not so difficult to do in real time. You could use a KD-Tree for optimisation if the map is huge. Otherwise find a simple Contains method for a polygon and use that. I have one on another computer. Let me know if you'd like it.

How to find rotation angle of image?

I'm feeding in a Bitmap image to my C# program to be able to perform OCR to identify the characters in the image. I can do this fairly well if the image is not rotated. One of the program requirements, however, is that the program automatically determines if the image has been rotated, and that it automatically corrects these rotations.
I've tried implementing a simple method where lines are traced across the image and points which contact a character are recorded, and then performing a simple linear regression on the line points. This works to an extent, although it has not proven very accurate due to curvature of characters, etc.
I was wondering if there was a better method to solve this problem? Many thanks in advance! :)
I use the gmseDeskew algorithm to deskew images in my program. It works very well.
It's an interesting problem, to be sure. I'd look for certain letters whose rotation is easier to tell. For example, a capital A, R, or K should have both of its lower parts on roughly the same horizontal plane. Another option is to take letters that cannot be identified, rotate them in various ways, and re-attempt to identify them. If a letter that could not be identified in the raw scan CAN be identified when you rotate it, that's a pretty big clue. Once you have identified the "correction" rotation that makes a non-recognizable character recognizable, apply the same rotation value to the others.
If it recognizes lines of text, then try blurring the image so that the lines become mostly solid, and find the direction of the lines (either with analysis of the Fourier transform or by ridge detection).
If the text is formatted like a printed document (column(s) and lines of text) then you can take advantage of this.
An approach that I've often seen used for document text is to do projection profiles:
Scan the document at a specific orientation and sum up the number of "black" pixels on each scan line, creating a 1D array of counts (the profile), with each index representing a Y coordinate.
Calculate the variance of the counts (the profile).
Repeat for multiple angles (this can be done in a binary-search fashion to reduce processing).
The angle that results in the greatest variance is the correct angle, because the text lines create large peaks and the absence of text between the lines creates low valleys.
Then after finding this angle you can adjust your image accordingly and do your awesome OCR.
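A sketch of that search, where RotateImage and CountBlackPixelsPerRow are illustrative helpers (rotate the bitmap; count dark pixels per scan line):

    using System;
    using System.Drawing;
    using System.Linq;

    // Sweep candidate angles and keep the one whose row profile has the
    // highest variance (sharpest peaks and valleys).
    static double FindSkewAngle(Bitmap page)
    {
        double bestAngle = 0, bestVariance = double.MinValue;
        for (double angle = -10; angle <= 10; angle += 0.5)
        {
            int[] profile = CountBlackPixelsPerRow(RotateImage(page, (float)angle));
            double mean = profile.Average();
            double variance = profile.Select(c => (c - mean) * (c - mean)).Average();
            if (variance > bestVariance) { bestVariance = variance; bestAngle = angle; }
        }
        return bestAngle; // rotate by -bestAngle to deskew
    }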
It might be easier to find the vertical-ish lines that are adjacent to the text (i.e., the left margin). For each scanline, record the first black pixel. Put all of those in a linear regression, and you should get a near vertical line. Measure its angle from true vertical and you should be able to unrotate the text. You could imagine doing the same thing for the top, bottom, and right sides, too, and taking an average.
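A sketch of that margin fit, with IsDark standing in for whatever threshold test you use; it regresses x on y, so a perfectly vertical margin gives an angle of zero:

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    static double LeftMarginAngleDegrees(Bitmap page)
    {
        var xs = new List<double>();
        var ys = new List<double>();
        // Record the first dark pixel on each scan line.
        for (int y = 0; y < page.Height; y++)
            for (int x = 0; x < page.Width; x++)
                if (IsDark(page.GetPixel(x, y))) { xs.Add(x); ys.Add(y); break; }

        // Least-squares slope of x as a function of y (dx per dy).
        double meanX = xs.Average(), meanY = ys.Average(), num = 0, den = 0;
        for (int i = 0; i < xs.Count; i++)
        {
            num += (ys[i] - meanY) * (xs[i] - meanX);
            den += (ys[i] - meanY) * (ys[i] - meanY);
        }
        return Math.Atan(num / den) * 180.0 / Math.PI; // 0 = truly vertical
    }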
We faced a similar problem before, and we searched for an easy and quick solution; we ended up using a commercial toolkit (LEADTOOLS). You can use it to auto-process the image before OCRing it. You can check this help topic to learn how to use the toolkit to process and scan images.

Remove Kinect depth shadow

I've recently started hacking on my Kinect and I want to remove the depth shadow. The shadow is caused by the IR emitter being positioned slightly to the side of the camera, so any close object will get a big shadow and distant object less or no shadow.
The shadow length is related to the distance between the closest and the farthest spot on each side of the shadow.
My goal is to be able to map the color image correctly onto the depth. This doesn't work without processing the shadow as this picture shows:
Does the depth shadow always come out black?
If so, you could use a simple method like a temporal median to calculate the background of the image (more info here: http://www.roborealm.com/help/Temporal_Median.php) and then, whenever a pixel is black, set it to the background value at that pixel location.
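The fill step itself is then trivial; a sketch, assuming background holds the per-pixel values learned by the temporal median and 0 marks a shadow pixel:

    // Substitute the learned background wherever the reading is missing.
    static void FillShadow(short[] depth, short[] background)
    {
        for (int i = 0; i < depth.Length; i++)
            if (depth[i] == 0)
                depth[i] = background[i];
    }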
I did some preliminary work on this problem a few weeks ago. My code works directly on a WriteableBitmap rather than the depth data, but if you're only doing image processing, it should work. The algorithm isn't perfect and would benefit with some more tweaking. If you update the code at all, let me know; I'd be very interested to see what you're doing!
The source code is posted on my blog:
http://richardpianka.com/2011/02/trackingni-depth-correction/
I don't know how it is with C#, but OpenNI C++ has a function called xnSetViewPoint(). The only problem is that you lose the top 20 or so rows of image data due to the transformation.
The reason is that two different sensors are used, placed close to each other but not at exactly the same position.
The Kinect SDK has a method for this: MapDepthFrameToColorFrame.
Get the [x,y] positions in the depth frame, and use that method to fill in the corresponding positions in the color frame.
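A sketch of that call, assuming the Kinect for Windows SDK v1, where MapDepthFrameToColorFrame lives on KinectSensor (later releases moved this functionality to CoordinateMapper); sensor and depthPixels are assumed to already exist:

    using Microsoft.Kinect;

    // One ColorImagePoint per depth pixel.
    var colorPoints = new ColorImagePoint[depthPixels.Length];
    sensor.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution640x480Fps30, depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, colorPoints);

    // colorPoints[y * 640 + x] now gives the color pixel that corresponds
    // to depth pixel (x, y).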
I'm sorry to say, but that shadow is caused by your body blocking the infrared dots from hitting that spot of the room, so it creates a black spot... There's nothing you can do but change the base background to a color other than black so it won't be a noticeable shadow.
The color camera and the Kinect depth camera don't have the same dimensions, and the infrared dots don't originate from the same camera: they come from an IR projector a few cm to the side (that displacement is what's used to calculate depth).
However, the solution seems easy here: your shadow data is on the left side, so you need to extend the last known color data from before it went black. And to make it fit better, translate the color camera data to the right.
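A sketch of that repair on a row-major buffer, where 0 stands for a shadowed pixel; which direction you carry the last valid value from depends on which side of the object the shadow falls:

    // Walk each row and extend the last valid value into shadow pixels.
    static void ExtendAcrossShadow(short[] values, int width, int height)
    {
        for (int y = 0; y < height; y++)
        {
            short last = 0;
            for (int x = width - 1; x >= 0; x--) // shadow on the left:
            {                                    // carry values right-to-left
                int i = y * width + x;
                if (values[i] != 0) last = values[i];
                else values[i] = last;
            }
        }
    }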

Detect marker in 2D image

I am hoping to obtain some help with 2D object detection. I'll give a brief overview of the context in which this will be implemented.
There will be an image taken of the ceiling. The ceiling will have markers placed on it so the orientation of the camera can be determined. The pictures will always be taken facing straight up. My goal is to detect one of these markers in the image and determine its rotation. So rotation and scaling (to a lesser extent) will be the two primary factors used in the image detection. I will be writing the software in either C# or MATLAB (not quite sure yet).
For example, the marker might be an arrow like this:
An image taken of the ceiling would contain markers. The software needs to detect a single marker and determine that it has been rotated by 170 degrees.
I have no prior experience with image analysis. I know image processing is a fairly broad topic and was hoping to get some advice on which direction I should take and which techniques would be best for my application. Thanks!
I'm not directly in this field, but I would tell you to start by looking specifically into edge detection. If you have a background in math/engineering, the materials are pretty easy to understand:
This seemed to spark some ideas:
http://www.cfar.umd.edu/~fer/cmsc426/lectures/edge1.ppt
I'd recommend MATLAB or if you're intent on using C#, Emgu CV is pretty good.
Hough transforms are a great idea. Once you detect the edges in your image using, say, a Canny edge detector, you get an edge image (a binary image with only 1s and 0s for values).
Then the Hough straight-line transform (essentially) spins a line about each white pixel in the edge image (the angular resolution is up to you), using a parametrized form of the line, counts the number of white (valued at 1) pixels along each spun line, and stores that count in a big accumulator indexed by the line's parameters.
(Image: example of a Hough space plot, http://upload.wikimedia.org/wikipedia/en/a/af/Hough_space_plot_example.png)
In the example above, the parametric form for a line is rho = x*cos(theta) + y*sin(theta), where rho is the distance of the line from the origin and theta is its angle.
So, as you can see, if you look at the bins for a particular orientation you can find out how many lines are oriented at that angle. Of course, you'll have to do some extra work to figure out which lines are which, since you have 5 other lines per arrow, but that shouldn't be too hard.
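A bare-bones sketch of that accumulator; edges is a binary edge image, and the resolution choices are up to you:

    using System;

    // Vote for every (theta, rho) bin each edge pixel could lie on.
    static int[,] HoughAccumulator(bool[,] edges, int thetaSteps)
    {
        int w = edges.GetLength(0), h = edges.GetLength(1);
        int maxRho = (int)Math.Ceiling(Math.Sqrt((double)w * w + h * h));
        var acc = new int[thetaSteps, 2 * maxRho + 1]; // rho can be negative

        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (!edges[x, y]) continue;
            for (int t = 0; t < thetaSteps; t++)
            {
                double theta = Math.PI * t / thetaSteps;
                int rho = (int)Math.Round(x * Math.Cos(theta) + y * Math.Sin(theta));
                acc[t, rho + maxRho]++; // peaks correspond to strong lines
            }
        }
        return acc;
    }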
As always in computer vision, your first problem is image illumination and acquisition. Before going further, establish how your markers will be printed on the ceiling, what their form will be, what light you will be using to see them, and what camera setup you will choose to look at the markers.
Given a good material, a good light, and a good camera, you may have no problem at all processing the image. For example, you can print a full arrow in a retro-reflective material, with a longer tail than in your example, and use a colored light with a corresponding filter on the camera. Now all you have in your image is arrows... There are plenty of other ways of acquiring the image that will help you here.
Once you have plain arrows, a simple blob analysis (which consists of computing the statistical moments of objects in the image) will give you a lot of information: each arrow should have almost equal values for the 7 Hu moments, which allows you to filter objects efficiently, and the orientation computed from the central moments will give you the angle of the arrow. Blob analysis, being purely statistical, is extremely fast.
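As a sketch of the orientation part, a blob's principal axis can be computed straight from its second-order central moments (points being the pixel coordinates of one labeled blob). The result is ambiguous by 180 degrees, which the asymmetric arrow shape lets you resolve:

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    // Orientation of a blob's principal axis, from central moments.
    static double BlobOrientationDegrees(IList<Point> points)
    {
        double mx = points.Average(p => (double)p.X);
        double my = points.Average(p => (double)p.Y);
        double mu20 = 0, mu02 = 0, mu11 = 0;
        foreach (var p in points)
        {
            double dx = p.X - mx, dy = p.Y - my;
            mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
        }
        double angle = 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);
        return angle * 180.0 / Math.PI;
    }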
Several systems have been developed to detect markers and their orientation robustly:
reacTIVision (open source) uses these types of tags to find position and orientation.
ARToolKit (open source) uses a different type of tag to extract all 6 degrees of freedom:
(Image: ARToolKit marker example, http://www.schanes.net/docs/robot/marker.png)
If your primary goal is not to learn, but to make the application work, I would suggest you use one of these. It is not a trivial task for a beginner to robustly detect the position and orientation of a random marker in an image.
On the other hand, if you are mainly interested in learning, I would also direct you to ARToolKit and its publications (and their references), which explain how to robustly implement marker detection.
You will need to explore edge detection, so look into Hough filters. After that you will need to look into pattern classifiers and feature extraction.
This paper has an algorithm that appears to work without edge detection.
This book excerpt is more oriented toward the kind of symbol detection you intend, once you have done the edge detection.
A rigorous way to determine the orientation of an image acquired under projective geometry (most cameras) is to use vanishing points and vanishing lines. Good news for you: your marker can be used to find this information! More good news: your image can be rectified, so the image columns (the y-axis) will correspond to the up-down direction. You will find more about this in chapter 8 of Hartley and Zisserman's book, Multiple View Geometry in Computer Vision.
Also remember that you will probably need to deal with radial distortion, the distortion caused by the camera lens. The other guys are right about the arrow detection problem: you have to use edge detection and, after that, the Hough transform or template matching. Refer to Gonzalez and Woods' book Digital Image Processing for details.
