Hough transform alternative for simple real-time circle detection - C#

I have just recently started working with OpenCV and image processing in general, so please bear with me.
I have the following image to work with:
The gray outline is the result of the tracking algorithm, which I drew in for debugging, so you can ignore that.
I am tracking glowing spheres, so it is easy to turn down the exposure of my camera and then filter out the surrounding noise that remains. So what I have to work with is always a black image with a white circle. Sometimes a little bit of noise makes it through, but generally that's not a problem.
Note that the spheres are mounted on a flat surface, so when held at a specific angle the bottom of the circle might be "cut off", but the Hough transform seems to handle that well enough.
Currently, I use the Hough Transform for getting position and size. However, it jitters a lot around the actual circle, even with very little motion. When in motion, it sometimes loses track entirely and does not detect any circles.
Also, this is in a real-time (30 fps) environment, and I have to run two Hough circle transforms, which takes up 30% CPU load on a Ryzen 7 CPU.
I have tried using binary images (removing the "smooth" outline of the circle) and changing the settings of the Hough transform. With a lower dp value it seems to be less jittery, but the extra processing makes it no longer real-time.
This is basically my code:
ImageProcessing.ColorFilter(hsvFrame, Frame, tempFrame, ColorProfile, brightness);
ImageProcessing.Erode(Frame, 1);
ImageProcessing.SmoothGaussian(Frame, 7);
/* Args: cannyThreshold, accumulatorThreshold, dp, minDist, minRadius, maxRadius */
var circles = ImageProcessing.HoughCircles(Frame, 80, 1, 3, Frame.Width / 2, 3, 80);
if (circles.Length > 0)
...
The ImageProcessing calls are just wrappers around the OpenCV framework (EmguCV).
Is there a better, less jittery, and less performance-hungry way or algorithm to detect these kinds of (as I see it) very simple circles? I did not find an answer on the internet that matches these kinds of circles. Thank you for any help!
Edit:
This is what the image looks like straight from the camera, no processing:

It pains me to see how often people spoil good information by jumping straight to edge detection and/or Hough transforms.
In this particular case, you have a lovely blob, which can be detected in a fraction of a millisecond, and whose centroid will give good accuracy. The radius can be obtained directly from the area (for a disc, A = pi*r^2, so r = sqrt(A/pi)).
You report that the Hough transform becomes jittery when in motion; this can be caused by motion blur or frame interlacing (depending on the camera). The centroid should be more robust to these effects.
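As a minimal sketch of the centroid approach, in Python/NumPy for brevity (in EmguCV the same quantities come from the image moments); the synthetic circle below is only for illustration:

```python
import numpy as np

def blob_center_and_radius(mask):
    """Centroid and equivalent radius of a binary blob.

    mask: 2-D array, nonzero where the blob is.
    Returns ((cx, cy), radius); the radius follows from the area,
    assuming the blob is a disc (A = pi * r^2).
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()          # centroid of the blob pixels
    radius = np.sqrt(xs.size / np.pi)      # area -> equivalent radius
    return (cx, cy), radius

# Example: a synthetic filled circle of radius 10 centred at (30, 40)
yy, xx = np.mgrid[0:80, 0:80]
mask = (xx - 30) ** 2 + (yy - 40) ** 2 <= 10 ** 2
(cx, cy), r = blob_center_and_radius(mask)
```

Because the centroid averages over every blob pixel, a one-pixel change at the boundary moves the result by a tiny fraction of a pixel, which is why it jitters far less than the Hough accumulator peak.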

Related

Confusing inaccuracy in Emgu CV stereo calibration

I have 9 stereo camera rigs that are essentially identical. I am calibrating them all with the same methodology:
Capture 25 images of an 8x11 chessboard (the same one for all rigs) in varying positions and orientations
Detect the corners for all images using FindChessboardCorners and refine them using CornerSubPix
Calibrate each camera intrinsics individually using CalibrateCamera
Calibrate the extrinsics using StereoCalibrate passing the CameraMatrix and DistortionCoeffs from #3 and using the FixIntrinsics flag
Compute the rectification transformations using StereoRectify
Then, with a projector using structured light, I place a sphere (the same one for all rigs) of known radius (16 mm) in front of the rigs and measure the sphere using:
Use image processing to match a large number of features between the two cameras in the distorted images
Use UndistortPoints to get their undistorted image locations
Use TriangulatePoints to get the points in homogeneous coordinates
Use ConvertFromHomogeneous to get the points in world coordinates
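For reference, the last two measurement steps (TriangulatePoints followed by ConvertFromHomogeneous) can be sketched with a plain linear (DLT) triangulation in NumPy; the toy projection matrices below are illustrative, not from a real rig:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2: 3x4 projection matrices of the two cameras.
    x1, x2: matching (u, v) points in undistorted image coordinates.
    Returns the 3-D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # the "convert from homogeneous" step

# Two toy cameras: identity, and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.3, 5.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
X = triangulate(P1, P2, x1, x2)
```

With noiseless points this recovers the world point exactly; on a real rig, a vertically skewed error like yours usually points at the rectification/extrinsics rather than the triangulation itself.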
On two of the rigs, the sphere measurement comes out highly accurate (RMSE 0.034 mm). However, on the other seven rigs, the measurement comes out with an unacceptable RMSE of 0.15 mm (about 5x worse). Also, the inaccuracy of each measurement seems to be skewed vertically: it's as if the sphere measures "spherical" in the horizontal direction but slightly skewed vertically, with a peak pointing slightly downward.
I have picked my methodology apart for a few weeks and tried almost every variation I can think of. However, after recalibrating the devices multiple times and recapturing sphere measurements multiple times, the same two devices remain spot-on and the other seven devices keep giving the exact same error. Nothing about the calibration results of the 7 incorrect rigs stands out as "erroneous" in comparison to the results of the 2 good rigs other than the sphere measurement. Also, I cannot find anything about the rigs that are significantly different hardware-wise.
I am pulling my hair out at this point and am turning to this fine community to see if anyone notices anything I'm missing in the calibration procedure described above. I've tried every variation I can think of in each step of the process. However, the process seems valid, since it works for 2 of the 9 devices.
Thank you!

Unity Coordinate Limitations and its impact

Some objects which I have placed at position (-19026.65, 58.29961, 1157), far from the origin (0,0,0), are rendering with issues. The problem is referred to as spatial jitter (SJ). The objects render with black spots/lines, or maybe it is mesh flickering. (I can't quite describe the problem; maybe the picture will help you understand it.)
I have also tried changing the camera's near and far clipping planes, but it was useless. Why am I getting this? Maybe my object and camera are too far from the origin.
Remember:
I have a large environment, and some of my game objects (where the problem occurs) are at position (-19026.65, 58.29961, 1157). I guess the problem is that the object and camera are very far from the origin (0,0,0). I found numerous discussions, listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
I could not find what the minimum or maximum limits are for placing an object in Unity so that it works correctly.
Since the world origin is a Vector3(0,0,0), the maximum coordinate at which you can place an object is 3.402823 × 10^38, the largest single-precision floating-point value. However, as you are finding, this does not mean that placing something there will ensure it works properly. Your limitation will be bound by the other performance factors in your game. If you need to have items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to work at different distances from the origin.
Unity suggests not going further than 100,000 units away from the center; the editor will warn you. If you look at today's games, many move the world around the player rather than the player around the world.
To quote Dave Newson's site:
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.

With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has a floating point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.

The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.

This may not seem like a huge problem, but this issue of losing floating point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
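The quoted claim about moving 765432.1 by 0.01 is easy to verify numerically; a quick NumPy check, using float32 to match Unity's single-precision coordinates:

```python
import numpy as np

x = np.float32(765432.1)      # position far from the origin
step = np.float32(0.01)       # try to move by one centimetre

# Gap between adjacent representable float32 values at this magnitude:
# 0.0625, i.e. positions here only resolve to 1/16 of a unit
gap = np.spacing(x)

moved = x + step              # the 1 cm move rounds away entirely
```

Because 0.01 is less than half of the 0.0625 spacing, the addition rounds back to the original value, which is exactly the jitter mechanism the quote describes.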

Unity3D - Detect a vertex on planet mesh without cubes

I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them creating the land masses, mountains and water depressions you see below; ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex they are pointing at to relay this information.
I am looking for a way to identify which vertex they are pointing at. The current solution is to generate a cube at each vertex, then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, needing the nearest vertex is totally commonplace.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the closest.
(It's incredibly fast to do this; you only have a tiny number of verts, so there's no way the cost will even be measurable.)
(Consider that, of course, you could break the object into, say, 8 pieces - but that's something you often have to do anyway, for example with a race track, so it can occlude properly.)
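The brute-force scan is sketched below in Python/NumPy for clarity; in Unity you would feed RaycastHit.point and the mesh's vertex array into the same loop (the sample vertices are arbitrary):

```python
import numpy as np

def nearest_vertex(vertices, point):
    """Index of the mesh vertex closest to `point`.

    vertices: (N, 3) array of vertex positions; point: (3,) hit position.
    A brute-force scan is plenty fast for ~2500 vertices.
    """
    d2 = np.sum((vertices - point) ** 2, axis=1)   # squared distances
    return int(np.argmin(d2))

verts = np.array([[0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.6, 0.6, 0.5]])
idx = nearest_vertex(verts, np.array([0.9, 0.1, 0.1]))
```

Using RaycastHit.triangleIndex narrows the search to three candidate vertices, but with so few vertices the full scan is already negligible.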

Icosphere versus Cubemapped Sphere

I'm doing research on generating planets for a game engine I'm planning to code, and I was wondering what would be the best approach to procedurally generate a planet. (In terms of performance.) So far I've seen the Icosphere and Cubemapped Sphere pop up the most, but I was wondering which of the two is faster to generate. My question is particularly aimed at LOD, since I hope to have gameplay similar to No Man's Sky.
Thanks in advance.
I would say an octahedron sphere would be best, but since they are all Platonic solids the results will be similar, so the premature optimization might not be worth it. (Here's a tutorial in Unity.)
The possible advantages of the octahedron are that the faces are triangles (unlike the cube) and there is one triangle for each quadrant in 3d space (unlike the icosphere and cube).
My rationale behind octahedrons (and icospheres) being faster than cubes lies in the fact that each face is already a triangle (whereas the cube has square faces). Adding detail for an octahedron, icosahedron, or cube usually means turning each triangle into four smaller triangles. During this subdivision, you create three new vertices whose positions must be normalized so that the mesh remains properly inscribed in the unit sphere.
Tessellating a Cube
The octahedron and icosahedron can use a lookup table for this normalization factor (unlike the cube), because the number is consistent at each iteration.
Assuming you can write a custom mesh format, you might store the mesh for a given planet as an array (of size 4, 8, or 20) of quadtrees, because each triangle is optionally tessellated into four more triangles. This is essentially an LOD system, but you need to periodically decide whether to tessellate or reduce a portion of the mesh based on distance from the camera. This system will likely be the bottleneck, since meshes have to be recalculated at runtime.
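The subdivision step described above can be sketched as follows (in Python/NumPy rather than a Unity mesh; the octahedron face used as input is illustrative):

```python
import numpy as np

def subdivide(tri):
    """Split one spherical triangle into four, re-projected onto the unit sphere.

    tri: (3, 3) array whose rows are the triangle's corner unit vectors.
    Returns a list of four (3, 3) child triangles.
    """
    a, b, c = tri
    # Midpoints of each edge, pushed back onto the sphere (the
    # normalization factor is the same for every edge at a given level,
    # which is what makes the lookup-table trick work)
    ab = (a + b) / np.linalg.norm(a + b)
    bc = (b + c) / np.linalg.norm(b + c)
    ca = (c + a) / np.linalg.norm(c + a)
    return [np.array(t) for t in
            ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))]

# One face of an octahedron: the +x/+y/+z quadrant
tri = np.eye(3)
children = subdivide(tri)
```

Each child triangle maps naturally to one branch of the per-face quadtree, so refining a quadtree node is exactly one call to this function.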

OpenCV - How to detect and measure an angle between two frames?

I'm trying to understand and use OpenCV. I wanted to know whether it is possible to find and measure the angle between two frames.
To explain: the camera is fixed, and the frames can rotate around the center but will not translate. For now I rotate manually, and I would like to be able to compare frames and return the angle. For instance:
double getRotation(Image img1, Image img2) {
//Compare the frames
//Return the value
}
and then I rotate following that angle.
If you're able to detect static objects in the frames, e.g. the background, then you can find points called good features to track (cvGoodFeaturesToTrack) on the background and track these points using optical flow (cvCalcOpticalFlowPyrLK).
If rotation happens only in the xy plane, you can detect it using cvGetAffineTransform.
Since only rotation is allowed (no translation and no scaling), it is not difficult to determine the angle of rotation from the transformation matrix obtained by cvGetAffineTransform. Its rotational part looks like this (see Wikipedia):

[ cos(theta)  -sin(theta) ]
[ sin(theta)   cos(theta) ]

where theta is the rotation angle.
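Recovering the angle from the matrix entries is then a single atan2; a small NumPy sketch (the 25-degree rotation is an arbitrary example, and the matrix stands in for what the affine estimation would return for a rotation about the origin):

```python
import numpy as np

theta_true = np.deg2rad(25.0)

# A pure-rotation 2x3 affine matrix, as returned by the affine estimation
# for a rotation about the origin (zero translation columns)
M = np.array([[np.cos(theta_true), -np.sin(theta_true), 0.0],
              [np.sin(theta_true),  np.cos(theta_true), 0.0]])

# Recover the angle from the first column of the rotation block
theta = np.arctan2(M[1, 0], M[0, 0])
```

atan2 is preferable to acos of M[0, 0] because it recovers the sign of the angle and stays stable near 0 and 180 degrees.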
Well, this might be very tricky; a simpler solution might be to find the Hough lines in the frames. Of course, you would need to determine which stable lines can be tracked between the two frames; once you have those, you can find the angle between the frames. What Andrey has suggested for finding the angles should be usable as well.
