I'm trying to understand and use OpenCV. I wanted to know if it is possible to find and measure an angle between two frames.
Let me explain: the camera is fixed, and the frames rotate around the center without translating. For now I can rotate them manually, and I would like to be able to compare two frames and return the angle between them. For instance:
double getRotation(Image img1, Image img2) {
    // Compare the frames
    // Return the value
}
and then I rotate by that angle.
If you're able to detect static objects, e.g. the background, in the frames, then you can find points called good features to track (cvGoodFeaturesToTrack) on the background and track those points using optical flow (cvCalcOpticalFlowPyrLK).
If the rotation is only in the 'xy' plane, you can detect it using cvGetAffineTransform.
Since only rotation is allowed (no translation or scaling), it's not difficult to determine the angle of rotation from the transformation matrix obtained by cvGetAffineTransform. The rotation part of that matrix looks like (see Wikipedia):

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

where $\theta$ is the rotation angle.
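Once the affine matrix has been estimated, the angle can be read straight off it. A minimal sketch in C#, assuming the 2x3 matrix entries are already available as plain doubles laid out as {m00, m01, tx; m10, m11, ty}:

// For a pure rotation, m00 = cos(theta) and m10 = sin(theta)
double angleRad = Math.Atan2(m10, m00);
double angleDeg = angleRad * 180.0 / Math.PI;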
Well, this might be very tricky; a simpler solution might be to find the Hough lines of the frames. Of course, you would need to determine which stable lines can be reliably tracked between the two frames; once those are available, you can find the angle between the two frames. What Andrey suggested for finding the angle should be usable as well.
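As a hedged sketch of that idea: once you have one stable line per frame (from a Hough transform or any other detector), the rotation is just the difference of the two line angles. The endpoint variables below are hypothetical System.Drawing.PointF values.

static double AngleOf(PointF a, PointF b) => Math.Atan2(b.Y - a.Y, b.X - a.X);

// angle of the tracked line in each frame, then the rotation between the frames
double rotationRad = AngleOf(frame2LineStart, frame2LineEnd) - AngleOf(frame1LineStart, frame1LineEnd);
double rotationDeg = rotationRad * 180.0 / Math.PI;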
I have just recently started to work with OpenCV and image processing in general, so please bear with me.
I have the following image to work with:
The gray outline is the result of the tracking algorithm, which I drew in for debugging, so you can ignore that.
I am tracking glowing spheres, so it is easy to turn down the exposure of my camera and then filter out the surrounding noise that remains. So what I have to work with is always a black image with a white circle. Sometimes a little bit of noise makes it through, but generally that's not a problem.
Note that the spheres are mounted on a flat surface, so when held at a specific angle the bottom of the circle might be "cut off", but the Hough transform seems to handle that well enough.
Currently, I use the Hough Transform for getting position and size. However, it jitters a lot around the actual circle, even with very little motion. When in motion, it sometimes loses track entirely and does not detect any circles.
Also, this is in a real-time (30 fps) environment, and I have to run two Hough circle transforms, which takes up 30% CPU load on a Ryzen 7 CPU...
I have tried using binary images (removing the "smooth" outline of the circle) and changing the settings of the Hough transform. With a lower dp value it seems to be less jittery, but then it is no longer real-time due to the processing needed.
This is basically my code:
ImageProcessing.ColorFilter(hsvFrame, Frame, tempFrame, ColorProfile, brightness);
ImageProcessing.Erode(Frame, 1);
ImageProcessing.SmoothGaussian(Frame, 7);
/* Args: cannyThreshold, accumulatorThreshold, dp, minDist, minRadius, maxRadius */
var circles = ImageProcessing.HoughCircles(Frame, 80, 1, 3, Frame.Width / 2, 3, 80);
if (circles.Length > 0)
...
The ImageProcessing calls are just wrappers to the OpenCV framework (EmguCV)
Is there a better, less jittery and less performance-hungry way or algorithm to detect these kinds of (as I see it) very simple circles? I did not find an answer on the internet that matches these kinds of circles. Thank you for any help!
Edit:
This is what the image looks like straight from the camera, no processing:
It's disheartening to see how often people spoil good information by jumping straight to edge detection and/or Hough transforms.
In this particular case, you have a lovely blob, which can be detected in a fraction of a millisecond and for which the centroid will yield good accuracy. The radius can be obtained just from the area.
You report that in case of motion the Hough becomes jittery; this can be because of motion blur or frame interleaving (depending on the camera). The centroid should be more robust to these effects.
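A minimal sketch of the centroid/area idea, assuming Emgu CV's CvInvoke.Moments is available and Frame is already the thresholded, mostly-black image from the question (minArea is a made-up noise threshold):

var m = CvInvoke.Moments(Frame, true);            // binaryImage = true: every non-zero pixel counts as 1
if (m.M00 > minArea)                              // M00 is then the blob area in pixels
{
    double cx = m.M10 / m.M00;                    // centroid x
    double cy = m.M01 / m.M00;                    // centroid y
    double radius = Math.Sqrt(m.M00 / Math.PI);   // area of a disc = pi * r^2
    // use (cx, cy, radius) wherever the Hough circle result was used before
}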
I am trying to find a way to link the Kinect joints to the body parts of a character in Unity using Final IK.
Right now, all the joints are linked with the body parts correctly (I can move and the movement is duplicated to the character), but it seems like the scale between the joints (sent by the Kinect) is way smaller than the scale of the body parts. For example, when the Kinect detects my body, the character seems to collapse on itself. When I move, the movement is detected, but it is really small compared to the character's body.
Is there a way to sync those two scales?
Thank you very much!
I finally found a way of doing this and it works quite well. I'll try to keep this as simple as possible.
The main problem I had was getting a scale value from the Kinect joints that would be applicable to the size of my character.
In order to solve this, I added a calibration method that performs the following actions:
Ask the user to hold a T-pose.
Read the length of the user's left arm (from the Kinect).
Read the length of the character's left arm in game.
Compare both lengths and find the relative scale (e.g. if the left arm length returned by the Kinect is 4 and the character's left arm is 2, then the scale is 4/2 = 2).
I had a small issue at first but ended up solving it. The Vector3 values returned by the Kinect in Unity are relative to the Kinect's origin (the camera). Therefore, I had to use a bit of linear algebra: since both Vector3s (left hand and left shoulder) are relative to the Kinect's origin, I had to subtract them in order to find the left arm length.
Thanks to linear algebra, we know that if A + C = B, then B - A = C. In our case, LeftHand - LeftShoulder = LeftArm. By applying this principle to the Vector3s received from the Kinect, we now have the vector that represents the left arm. All we need to do is read its "magnitude" property and we have its length.
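In Unity terms the whole calibration boils down to a few lines. This is only a sketch: GetKinectJointPosition and the two character transforms are hypothetical stand-ins for whatever your own Kinect/Final IK wiring provides.

Vector3 kinectShoulder = GetKinectJointPosition(JointType.ShoulderLeft);   // hypothetical helper
Vector3 kinectHand = GetKinectJointPosition(JointType.HandLeft);           // hypothetical helper
float kinectArmLength = (kinectHand - kinectShoulder).magnitude;           // LeftHand - LeftShoulder = LeftArm

float characterArmLength = Vector3.Distance(characterLeftShoulder.position,
                                            characterLeftHand.position);   // Transforms on the character rig

// Matches the 4 / 2 = 2 example above: how much bigger the Kinect data is than the character.
float kinectToCharacterScale = kinectArmLength / characterArmLength;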
That pretty much sums it up.
I have a camera that needs to orbit locally around an object. This object has an arbitrary rotation, described by a normal vector. Imagine a spherical planet, with a camera looking down at a certain triangle on that planet.
My current implementation is to use the classic vector crossing method to generate a rotation matrix from the triangle's normal, then use that matrix as the basis for the standard orbit camera. This works fine near the equator of the planet, but once it gets near the poles, it starts blowing up, with the camera behaving increasingly erratically the closer that it gets to the very center of the pole.
I've determined that this is due to the first vector cross, as the two vectors are close to one another in that case - I'm not sure what the technical name for the phenomenon is. If the first vector is (0, 1, 0), the craziness happens when the normal is close to (0, 1, 0) or (0, -1, 0).
I've found quite a few descriptions of this problem, but no working solutions. The closest I've come was here: http://xboxforums.create.msdn.com/forums/p/13278/13278.aspx It mentions that to handle the 'singularity', you should use a different vector when it is detected. I can easily determine when the camera is on a planet face that will cause this to happen (as my planet sphere is generated from 6 quadtrees projected to spherical coordinates), but there is a very noticeable snap when I switch to a new vector.
Here's the current code:
Vector3 triNormal;   // the current normal of the target vertex
Vector3 origin = Vector3.Forward;

Matrix orientation = Matrix.Identity;
orientation.Forward = origin;
orientation.Up = triNormal;
orientation.Right = Vector3.Cross(orientation.Up, orientation.Forward);
orientation.Right.Normalize();
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
I've experimented with detecting when triNormal is on one of the pole faces, and setting 'origin' to something else such as Right. The camera then behaves properly once it is on the face, but is immediately snapped to a new rotation as it crosses over. This makes sense, as its reference vector has just changed, but needs to be eliminated for a smooth user experience. I tried figuring out how to offset the camera's yaw for the orbit camera to counteract the new coordinate system, but it doesn't seem to be a constant value, depending on where on the sphere the camera is currently aiming. I'm not sure how I could calculate what the difference is.
Also note that as it's in XNA and C#, I'm using a right-hand coordinate system.
I don't understand why you do this:
orientation.Forward = Vector3.Cross(orientation.Right, orientation.Up);
orientation.Forward.Normalize();
when you've already used previous orientation.Forward to get orientation.Right.
(If you are "crossing" normalized vectors, I don't think you'll need to normalize the results.)
Anyway, if triNormal is the current normal of the target vertex, and your camera is looking down to it, I think you should have:
orientation.Forward = -triNormal
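A minimal sketch of what that could look like in XNA, with one assumption of my own that is not part of this answer: reusing the previous frame's Up vector as the secondary reference, so the basis stays continuous instead of snapping when triNormal passes near a fixed reference axis.

Matrix orientation = Matrix.Identity;
orientation.Forward = Vector3.Normalize(-triNormal);                                    // look straight down at the surface
orientation.Right = Vector3.Normalize(Vector3.Cross(orientation.Forward, previousUp));  // previousUp: last frame's Up (assumption)
orientation.Up = Vector3.Cross(orientation.Right, orientation.Forward);                 // already unit length
previousUp = orientation.Up;                                                            // carry over for the next frame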
I'm a real noob who just started learning 3D programming, and I'm having a really hard time with rotation in 3D space. My problem is that I can't seem to figure out how to rotate an object around its local axes.
I have a basic class for 3D objects and, for starters, I want to implement functions that will rotate the object around a certain axis by x degrees. So far I have the following:
public void RollDeg(float angle)
{
this.rotation = Matrix4.Mult(rotation,
Matrix4.CreateRotationX(MyMath.Conversions.DegToRad(angle)));
}
public void PitchDeg(float angle)
{
this.rotation = Matrix4.Mult(rotation,
Matrix4.CreateRotationY(MyMath.Conversions.DegToRad(angle)));
}
public void YawDeg(float angle)
{
this.rotation = Matrix4.Mult(rotation,
Matrix4.CreateRotationZ(MyMath.Conversions.DegToRad(angle)));
}
'rotation' is a 4x4 matrix which starts as the identity matrix. Each time I want to roll/pitch/yaw the object, I call one of the functions above.
For drawing, I use another function that pushes a matrix onto the ModelView stack, multiplies it by the translation, rotation and scale matrices of the object (in that order) and then draws the vertices. Of course, I finally pop the matrix off the stack.
The problem is that the functions above rotate the object around the GLOBAL axes, not the LOCAL ones, even though, from my understanding, every time you rotate an object its local axes change, and when a new rotation is applied on top of the others, those local axes should be used for the new one.
I read different tutorials about the math behind it and how to rotate objects, but I couldn't find one that could help me.
If anyone has the time, I would really appreciate help understanding HOW to rotate around the local axes and, maybe even more importantly, what I did wrong in my current implementation.
If you want to perform your transformations in this order: translation -> rotation -> scale (which makes perfect sense; it's usually what's wanted), you have to multiply your matrices in the reverse order.
With the convention OpenGL uses (column vectors), matrix multiplication is read from right to left. This is why:
ModelViewTransform = Transform * View * Model // <- you begin with the model, right? So it's this way.
Note that DirectX uses a left-handed coordinate system. It has its shortcomings, but it's more intuitive.
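Applied to the question's code, the only change is which side the new rotation is multiplied on. A hedged sketch: which side ends up meaning "local" depends on your row/column-vector convention, so if one order behaves as a global-axis rotation, try the other.

public void PitchDeg(float angle)
{
    Matrix4 delta = Matrix4.CreateRotationY(MyMath.Conversions.DegToRad(angle));

    // this.rotation = Matrix4.Mult(rotation, delta);   // original order: behaves as a global-axis rotation in the setup above
    this.rotation = Matrix4.Mult(delta, rotation);      // reversed order: the increment is applied about the object's local axes
}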
I am creating a CAD-like program, creating ModelVisual3D objects. How do I do collision detection between my objects (ModelVisual3D) using MeshGeometry3D? Do I have to compare every triangle in the moving object against the still-standing objects?
What will be my best way to do collision detection?
It depends on how precise your collision detection needs to be.
There is no built-in collision detection in WPF's 3D library. If you need high precision, you'll need to compare every triangle.
That being said, you can start by comparing bounding boxes and/or bounding spheres. This is always a good first step, since it can quickly eliminate most cases. If you don't need precise collision detection, this alone may be fine.
To add to Reed's answer (based on my answer here):
After you've eliminated most of your objects via the bounding box/sphere to bounding box/sphere test you should test the triangles of your test object(s) against the other object's bounding box/sphere first before checking triangle/triangle collisions. This will eliminate a lot more cases.
To rule out a collision you'll have to check all the triangles in the test object, but to find a case where you'll need to go down to the triangle/triangle case you only need to find the first triangle that interacts with the bounding box/sphere of the other object.
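A small sketch of that first rejection step in WPF terms; the transforms passed in are whatever you already apply to each ModelVisual3D, and the method name is just illustrative.

bool BoundsMightCollide(MeshGeometry3D a, Transform3D aTransform,
                        MeshGeometry3D b, Transform3D bTransform)
{
    Rect3D boundsA = aTransform.TransformBounds(a.Bounds);   // axis-aligned box in world space
    Rect3D boundsB = bTransform.TransformBounds(b.Bounds);
    return boundsA.IntersectsWith(boundsB);                  // cheap reject before any triangle tests
}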
Look at the Separating Axis Theorem (SAT); it's the fastest and easiest approach out there.
The theory behind it is that if you can draw a line (in 3D, a plane) which separates the two shapes, then they are not colliding.
As was said, first do an AABB test for early rejection, and when two objects' boxes collide, test each polygon of object A against each polygon of object B.
Starting in 2D: to test whether two polygons collide, you project them onto each candidate axis and take their extents along it; only if those extents overlap on every axis are the polygons colliding.
On this page you can find a very good explanation on how it works and how to apply it:
http://www.metanetsoftware.com/technique/tutorialA.html
To apply it in 3D, the candidate separating axes are the face normals of the two polygons plus the cross products of their edge pairs.
If the extents intersect on all of those axes, the polygons are colliding.
Also, this method works for moving objects and gives you the moment of collision: work with the relative velocity (subtract B's velocity from A's, which reduces the problem to one moving object and one static one), add that velocity along the axis you are testing to polygon A's extent, and if the extents now intersect, subtract the original extent of the polygon to get the moment of collision.
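A minimal 2D sketch of the projection test described above, using WPF's Point/Vector types (names are illustrative). Two convex polygons collide only if the intervals overlap on every candidate axis (the perpendiculars of each edge); a single separating axis is enough to rule a collision out.

static void ProjectOntoAxis(Point[] poly, Vector axis, out double min, out double max)
{
    min = max = poly[0].X * axis.X + poly[0].Y * axis.Y;      // dot product of vertex and axis
    for (int i = 1; i < poly.Length; i++)
    {
        double d = poly[i].X * axis.X + poly[i].Y * axis.Y;
        if (d < min) min = d;
        if (d > max) max = d;
    }
}

static bool OverlapsOnAxis(Point[] polyA, Point[] polyB, Vector axis)
{
    ProjectOntoAxis(polyA, axis, out double minA, out double maxA);
    ProjectOntoAxis(polyB, axis, out double minB, out double maxB);
    return maxA >= minB && maxB >= minA;                      // the two extents intersect on this axis
}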
Another option would be to use BulletSharp, a C# wrapper of the well-known Bullet Physics Engine. In this case, you would need to write functions to create a (concave) collision shape from a MeshGeometry3D.
In my experience, it works pretty well, even though dynamic collision between concave shapes is not supported; you'll need to use convex decomposition as a workaround.
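A rough sketch of that conversion, hedged: the exact BulletSharp type names can differ between versions, but they mirror Bullet's own btTriangleMesh / btBvhTriangleMeshShape, and the Vector3 is whichever math type your BulletSharp build uses.

CollisionShape CreateStaticConcaveShape(MeshGeometry3D mesh)
{
    var triangles = new TriangleMesh();
    var idx = mesh.TriangleIndices;
    var pos = mesh.Positions;

    for (int i = 0; i + 2 < idx.Count; i += 3)                 // three indices per triangle
    {
        Point3D p0 = pos[idx[i]], p1 = pos[idx[i + 1]], p2 = pos[idx[i + 2]];
        triangles.AddTriangle(
            new Vector3((float)p0.X, (float)p0.Y, (float)p0.Z),
            new Vector3((float)p1.X, (float)p1.Y, (float)p1.Z),
            new Vector3((float)p2.X, (float)p2.Y, (float)p2.Z));
    }

    // Fine for static geometry; moving concave bodies need convex decomposition, as noted above.
    return new BvhTriangleMeshShape(triangles, true);
}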