Detecting mouse over line in ZedGraph - C#

I am trying to detect when the mouse is over a curve in ZedGraph. I can do it when the mouse is over one of the curve's points, but the problem is when the curve has no points in that region.
Let me show you an example:
The curve is defined by 2 points: [X=0;Y=10] -- [X=1000;Y=10]
If the mouse is at point [X=500;Y=10] it is over the curve, but not over any point, so I cannot detect it.
Is there any event which fires when the mouse is over the line but not necessarily over a point?
Thanks

No, this must be done by manual interpolation. See the answers posted to my similar question, which show an example of a FindNearestCurve function (I have not tested it):
https://stackoverflow.com/a/5885812/445533
(There is FindNearestObject, which works for LineObj, as detailed in those answers.)
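For illustration, here is a minimal sketch of the manual interpolation, assuming the standard ZedGraph API (GraphPane.ReverseTransform, CurveItem.Points); the tolerance value and the MouseMove wiring are up to you:

```csharp
// Hedged sketch: hit-test the mouse against each segment of a curve by
// linear interpolation. Call this from the control's MouseMove handler.
private bool IsMouseOverCurve(GraphPane pane, CurveItem curve,
                              PointF mousePt, double yTolerance)
{
    // Convert the mouse position from screen pixels to user coordinates.
    pane.ReverseTransform(mousePt, out double x, out double y);

    for (int i = 0; i < curve.Points.Count - 1; i++)
    {
        PointPair p1 = curve.Points[i];
        PointPair p2 = curve.Points[i + 1];

        // Skip segments that do not span the mouse's X value
        // (and vertical segments, to avoid dividing by zero).
        if (x < Math.Min(p1.X, p2.X) || x > Math.Max(p1.X, p2.X) || p1.X == p2.X)
            continue;

        // Linearly interpolate the curve's Y value at the mouse X.
        double yOnLine = p1.Y + (x - p1.X) * (p2.Y - p1.Y) / (p2.X - p1.X);
        if (Math.Abs(y - yOnLine) <= yTolerance)
            return true;
    }
    return false;
}
```

For the example above ([X=0;Y=10] -- [X=1000;Y=10], mouse at [X=500;Y=10]), the interpolated Y is exactly 10, so the test succeeds even though no point exists at X=500.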

Related

How to read raw mouse data in the form of either delta or pixel movement?

I'll get this out of the way at the beginning: I know how to read POINTER or CURSOR data. I can get the X,Y coords and, over an interval of some sort, calculate the mouse Δ or movement, but this falls apart when the mouse reaches the edge of the screen, as every method I have seen relies on the derived cursor movement and not the mouse device itself.
I am also aware of the ability to "capture" the mouse (cursor) in a predefined space and or constantly reset the position to keep the mouse from ever actually reaching the edge of the screen, but this will not work for me as I would like the mouse to remain fully usable and not in a trapped state.
Every single result I have found in my searches has yielded code that only cares about the mouse cursor, so please don't carelessly mark this as a duplicate of some other answer that is just doing one of the above.
I find it hard to believe that there isn't a low-level function out there somewhere that gets me raw data.
To accomplish your task, use the Raw Input API and its WM_INPUT message (https://msdn.microsoft.com/en-us/library/windows/desktop/ee418864%28v=vs.85%29.aspx#WM_INPUT). WM_INPUT messages are read directly from the Human Interface Device (HID) stack and reflect high-definition results.
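If it helps, here is a minimal WinForms sketch of that approach: register the mouse as a raw-input device and read the per-message deltas from WM_INPUT. The struct layouts follow the Win32 RAWINPUT definitions; error handling is omitted for brevity.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class RawMouseForm : Form
{
    const int WM_INPUT = 0x00FF;
    const uint RID_INPUT = 0x10000003;
    const uint RIDEV_INPUTSINK = 0x00000100; // deliver input even when unfocused
    const uint RIM_TYPEMOUSE = 0;

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICE { public ushort UsagePage, Usage; public uint Flags; public IntPtr Target; }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTHEADER { public uint Type, Size; public IntPtr Device, WParam; }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWMOUSE
    {
        public ushort Flags;
        public uint Buttons;        // union of button flags/data, packed
        public uint RawButtons;
        public int LastX, LastY;    // the relative device deltas we want
        public uint ExtraInformation;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUT { public RAWINPUTHEADER Header; public RAWMOUSE Mouse; }

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(RAWINPUTDEVICE[] devices, uint count, uint size);

    [DllImport("user32.dll")]
    static extern uint GetRawInputData(IntPtr hRawInput, uint command,
        out RAWINPUT data, ref uint size, uint headerSize);

    public RawMouseForm()
    {
        var rid = new RAWINPUTDEVICE
        {
            UsagePage = 0x01,        // generic desktop controls
            Usage = 0x02,            // mouse
            Flags = RIDEV_INPUTSINK,
            Target = Handle
        };
        RegisterRawInputDevices(new[] { rid }, 1, (uint)Marshal.SizeOf<RAWINPUTDEVICE>());
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_INPUT)
        {
            uint size = (uint)Marshal.SizeOf<RAWINPUT>();
            if (GetRawInputData(m.LParam, RID_INPUT, out RAWINPUT input,
                    ref size, (uint)Marshal.SizeOf<RAWINPUTHEADER>()) != uint.MaxValue
                && input.Header.Type == RIM_TYPEMOUSE)
            {
                // LastX/LastY are deltas from the device itself, independent
                // of the cursor position and of the screen edges.
                Console.WriteLine($"dx={input.Mouse.LastX} dy={input.Mouse.LastY}");
            }
        }
        base.WndProc(ref m);
    }
}
```

The cursor stays fully usable; you are only listening to the device stream in parallel.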

Detect mouseover of non-square part of an Image

So I am working on a Risk-type game in XNA/C#. I have a map, similar to this one, and I need to be able to detect mouseovers on each territory (number). If these areas were squares, it would be easy, as each could be represented by a rectangle. However, they are differently sized polygons. Is there a polygon shape that behaves similarly to a rectangle for hit testing? If there isn't, how would I go about doing this?
I suggest this: attach a color to each number and recreate your picture in those colors, so that every shape is drawn in its own particular color. Don't draw it on screen; use it only as a reference map. When the user clicks or moves the mouse over your original map, you simply project the mouse coordinates onto the color map, check the color of the pixel lying under the mouse, and, because each color is associated with a territory number, you have your answer. A sketch follows below.
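A minimal sketch of that idea, using System.Drawing for brevity (in XNA you would copy the hidden texture into a Color[] once with Texture2D.GetData and index into it instead); referenceMap and territoryByColor are assumed fields you build alongside the map:

```csharp
using System.Collections.Generic;
using System.Drawing;

// Hidden colour-coded copy of the map, never drawn on screen.
private Bitmap referenceMap;
// Keyed on Color.ToArgb() to avoid named-colour equality quirks.
private Dictionary<int, int> territoryByColor;

private int? TerritoryUnderMouse(Point mouse)
{
    if (mouse.X < 0 || mouse.Y < 0 ||
        mouse.X >= referenceMap.Width || mouse.Y >= referenceMap.Height)
        return null;

    // Project the mouse onto the reference map and look up the colour.
    Color c = referenceMap.GetPixel(mouse.X, mouse.Y);
    return territoryByColor.TryGetValue(c.ToArgb(), out int id) ? id : (int?)null;
}
```

Note that anti-aliasing must be off when you paint the reference map, otherwise blended edge pixels won't match any territory's colour.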
This is not C#-specific (I've never written anything in the language, so I have no idea what APIs there are), but there are two algorithms that come to mind for detecting whether a point is inside a polygon (which can be used to detect whether the mouse point is over a polygon/map shape).
One is based on ray casting: you cast a ray in one direction from the (mouse) point to "infinity" (the edge of the board in this case) and count the number of times it crosses the polygon's edges. If the count is odd, the point is inside the polygon; if it is even, the point is outside.
A wiki link to it: http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
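A minimal sketch of the even-odd ray-casting test (the classic point-in-polygon routine; PointF here is System.Drawing's, but XNA's Vector2 works the same way):

```csharp
// Cast a horizontal ray from p and count how many polygon edges it
// crosses; an odd count means the point is inside.
static bool PointInPolygon(PointF p, PointF[] poly)
{
    bool inside = false;
    for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
    {
        // Does edge (j, i) straddle the horizontal line through p,
        // and does the crossing lie to the right of p?
        if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
            p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                  (poly[j].Y - poly[i].Y) + poly[i].X)
            inside = !inside;
    }
    return inside;
}
```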
The other algorithm that comes to mind works only for triangles, I think, but it can be simpler to implement (taking a quick glance at your shapes, they can easily be broken down into triangles, and some already are triangles). It checks whether the point is on the same (internal) "side" of every edge of the triangle. To find out which "side" a point is on relative to an edge, create two vectors: the first is the edge itself (made up of its two points), and the second runs from the first point of that edge to the input point; then calculate the cross product of those two vectors. The result will be negative or positive, which can be used to determine the "direction".
A link to it: http://www.blackpawn.com/texts/pointinpoly/default.html
(On that page is another algorithm that can also work for triangles)
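A minimal sketch of that same-side test, using the same PointF as the previous sketch: the point is inside if the 2D cross products against all three edges share a sign.

```csharp
// 2D cross product of edge (a -> b) with vector (a -> p).
static float Cross(PointF a, PointF b, PointF p) =>
    (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);

static bool PointInTriangle(PointF p, PointF a, PointF b, PointF c)
{
    float d1 = Cross(a, b, p);
    float d2 = Cross(b, c, p);
    float d3 = Cross(c, a, p);
    bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos); // all same sign (or on an edge) => inside
}
```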
Hit testing on a polygon is not so difficult to do in real time. You could use a KD-tree for optimisation if the map is huge; otherwise, find a simple Contains method for a polygon and use that. I have one on another computer. Let me know if you'd like it.

How do I implement a wave gesture in Kinect?

I would like to use a gesture so the Kinect can select the person performing it as the main player. After this, that person can control the PC. Selecting the person and giving them control is done. Now I have to implement a gesture, but I don't know how to start.
Can anyone help me?
I guess this is what you want (if you'd like to recognize gestures yourself):
MS explains how to recognize a wave gesture, with a full code example, here:
http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx
By now there are also some gesture recognizer toolkits available.
See this for example:
http://kinecttoolbox.codeplex.com/
You can also surf http://channel9.msdn.com for similar projects, like this one:
http://channel9.msdn.com/coding4fun/kinect/Gestures-and-Tools-for-Kinect-and-matching-Toolkit-too
Did you get as far as having the skeleton?
The easiest approach is to check how many times the hand's velocity changed direction:
+X --> -X means the hand was moving one way and is now coming back the other; you can do a distance check between these reversal points to determine whether the wave gesture is pronounced enough (this omits very tiny waves/jitter).
Take some reference point for the hand, say the elbow, and store it in a variable, and pick some reference distance for the hand's movement. Whenever the hand moves beyond the reference distance on both sides of that point, count it as a wave, and compare the count with the number of waves your program requires. If they match, select that person for your program. A sketch combining both suggestions follows.
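A minimal sketch of that, assuming the Kinect for Windows SDK skeleton types (Skeleton, JointType); the threshold and the required wave count are illustrative values to tune:

```csharp
using Microsoft.Kinect;

class WaveDetector
{
    const float Threshold = 0.10f; // metres left/right of the elbow
    const int WavesRequired = 3;   // swings needed before we accept the gesture

    int waveCount;
    int lastSide; // -1 = hand left of elbow, +1 = right, 0 = undecided

    // Feed this one skeleton frame at a time; returns true once the
    // person has waved enough.
    public bool Update(Skeleton skeleton)
    {
        float handX = skeleton.Joints[JointType.HandRight].Position.X;
        float elbowX = skeleton.Joints[JointType.ElbowRight].Position.X;
        float offset = handX - elbowX;

        int side = offset > Threshold ? 1 : offset < -Threshold ? -1 : 0;
        if (side != 0 && side != lastSide)
        {
            if (lastSide != 0) waveCount++; // one full side-to-side swing
            lastSide = side;
        }
        return waveCount >= WavesRequired;
    }
}
```

In practice you would also reset waveCount when too much time passes between swings, so slow drifting of the hand doesn't count as waving.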

How to detect direction of HorizontalDrag in WP7 XNA?

As the title says, what is the best way of detecting in which direction the user is dragging horizontally? I'm trying to create a camera class that responds to this gesture, but I am having problems determining which direction they are dragging. Any suggestions are appreciated.
Admittedly I haven't tested this, but the documentation suggests that you should check the value of GestureSample.Delta.X, which should be negative for a left movement, and positive for a rightwards one.
Because the delta is only for that particular gesture sample (not the overall gesture), you may need to accumulate it, and only trigger your drag action if the accumulated value is above some threshold (possibly upon receiving a DragComplete).
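A minimal sketch of that accumulation inside an XNA Game subclass; PanCameraLeft/PanCameraRight are hypothetical placeholders for whatever your camera class exposes, and the threshold is an illustrative value:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input.Touch;

// Fields on your Game (or camera) class.
float dragTotal;
const float Threshold = 30f; // pixels; tune to taste

protected override void Initialize()
{
    TouchPanel.EnabledGestures =
        GestureType.HorizontalDrag | GestureType.DragComplete;
    base.Initialize();
}

protected override void Update(GameTime gameTime)
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample g = TouchPanel.ReadGesture();
        if (g.GestureType == GestureType.HorizontalDrag)
        {
            dragTotal += g.Delta.X; // negative = leftwards, positive = rightwards
        }
        else if (g.GestureType == GestureType.DragComplete)
        {
            if (dragTotal <= -Threshold) PanCameraLeft();      // hypothetical
            else if (dragTotal >= Threshold) PanCameraRight(); // hypothetical
            dragTotal = 0f;
        }
    }
    base.Update(gameTime);
}
```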

Detect marker in 2D image

I am hoping to obtain some help with 2D object detection. I'll give a brief overview of the context in which this will be implemented.
There will be an image taken of the ceiling. The ceiling will have markers placed on it so the orientation of the camera can be determined. The pictures will always be taken facing straight up. My goal is to detect one of these markers in the image and determine its rotation, so rotation and scaling (to a lesser extent) will be the two primary factors used in the detection. I will be writing the software in either C# or MATLAB (not quite sure yet).
For example, the marker might be an arrow. An image taken of the ceiling would contain several markers; the software needs to detect a single marker and determine that it has been rotated by, say, 170 degrees.
I have no prior experience with image analysis. I know image processing is a fairly broad topic and was hoping to get some advice on which direction I should take and which techniques would be best for my application. Thanks!
I'm not directly in this field, but I would tell you to start by looking into edge detection specifically. If you have a background in math/engineering, the materials are pretty easy to understand.
This seemed to spark some ideas:
http://www.cfar.umd.edu/~fer/cmsc426/lectures/edge1.ppt
I'd recommend MATLAB or if you're intent on using C#, Emgu CV is pretty good.
Hough transforms are a great idea. Once you detect the edges in your image using, say, a Canny edge detector, you get an edge image (a binary image with only 1 or 0 for values).
Then the Hough straight-line transform (essentially) spins a line about each white pixel in the edge image (the angular resolution of the line is up to you) using a parametrized form of the line, counts the number of white (valued at 1) pixels along each spun line, and stores this information in a big accumulator indexed by the parameters of the line.
(Hough space plot example: http://upload.wikimedia.org/wikipedia/en/a/af/Hough_space_plot_example.png)
In the example above, the parametric form for a line is rho = x*cos(theta) + y*sin(theta), where rho is the distance from the origin to the line and theta is the angle of its normal.
So, as you can see, if you look at the accumulator bin for a particular orientation, you can find out how many lines are oriented at that angle. Of course, you'll have to do some extra work to figure out which lines are oriented at that angle, since you have 5 other lines per arrow, but that shouldn't be too hard. A sketch follows.
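A minimal sketch of that pipeline, assuming Emgu CV (the C# OpenCV wrapper recommended above; exact API names can differ between versions): Canny edges, a probabilistic Hough transform, then a histogram over the detected line angles:

```csharp
using System;
using Emgu.CV;
using Emgu.CV.Structure;

var gray = new Image<Gray, byte>("ceiling.png"); // hypothetical input file
var edges = new Mat();
CvInvoke.Canny(gray, edges, 50, 150); // thresholds are illustrative

// rho = 1 px, theta = 1 degree; threshold / min length / max gap are illustrative.
LineSegment2D[] lines = CvInvoke.HoughLinesP(edges, 1, Math.PI / 180, 50, 30, 5);

// Bin each segment's angle into a 1-degree histogram over [0, 180).
var histogram = new int[180];
foreach (LineSegment2D line in lines)
{
    double angle = Math.Atan2(line.P2.Y - line.P1.Y,
                              line.P2.X - line.P1.X) * 180.0 / Math.PI;
    histogram[(((int)Math.Round(angle)) % 180 + 180) % 180]++;
}
// The most populated bins give the dominant edge orientations of the marker.
```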
As always in computer vision, your first problem is image illumination and acquisition. Before going further, establish how your markers will be printed on the ceiling, what their form will be, what light you will be using to see them, and what camera setup you will choose to look at the markers.
Given a good material, a good light and a good camera, you may have no problem at all processing the image. For example, you can print a full arrow in a retro-reflective material, with a longer tail than in your example, and use a colored light with a corresponding filter on the camera. Now all you have in your image is arrows... There are plenty of other ways of acquiring the image that will help you here.
Once you have plain arrows, a simple blob analysis (which consists of computing statistical moments of the objects in the image) will give you a lot of information: each arrow should have almost equal values for the seven Hu moments, which allows you to filter objects efficiently, and the orientation computed from the central moments will give you the angle of the arrow. Blob analysis is only statistical, so it is extremely fast. A sketch of the orientation computation follows.
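A minimal sketch of that orientation computation, again assuming Emgu CV (property names may vary slightly by version): binarise the image so the arrow is a single white blob, compute its image moments, and take the major-axis angle from the central moments:

```csharp
using System;
using Emgu.CV;

// 'binary' is assumed to already hold a thresholded image with one arrow blob.
Mat binary = GetThresholdedMarkerImage(); // hypothetical helper
Moments m = CvInvoke.Moments(binary, true); // true = treat as a binary image

// Orientation of the blob's major axis: theta = 0.5 * atan2(2*mu11, mu20 - mu02).
double theta = 0.5 * Math.Atan2(2.0 * m.Mu11, m.Mu20 - m.Mu02);
double degrees = theta * 180.0 / Math.PI;
```

Note that the central moments only give the axis modulo 180 degrees; to tell 170 from 350 you still need the arrow's asymmetry (e.g. compare the centroid with the tip's position).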
Several systems have been developed to detect markers and their orientation robustly:
reacTIVision (open source) uses its fiducial tags to find position and orientation.
ARToolKit (open source) uses a different type of tag to extract all 6 degrees of freedom:
(marker example: http://www.schanes.net/docs/robot/marker.png)
If your primary goal is not to learn, but to make the application work, I would suggest you use one of these. It is not a trivial task for a beginner to robustly detect the position and orientation of a random marker in an image.
On the other hand, if you are mainly interested in learning, I would direct you to ARToolKit and its publications (and their references), which explain how to robustly implement marker detection.
You will need to explore edge detection, so look into Hough filters. After that you will need to look into pattern classifiers and feature extraction.
This paper has an algorithm that appears to work without edge detection.
This book excerpt is more oriented toward the kind of symbol detection you intend, once you have done the edge detection.
A rigorous way to determine the orientation of an image acquired under projective geometry (most cameras) is to use vanishing points and vanishing lines. Good news for you: your marker can be used to find this information! More good news: your image can be rectified, so the image columns (the y-axis) will correspond to the up-down direction. You will find more about this in chapter 8 of Hartley and Zisserman's book, Multiple View Geometry in Computer Vision.
Also remember that you will probably need to deal with radial distortion, the distortion caused by the camera lens. The other guys are right about the arrow detection problem: you have to use edge detection and, after that, a Hough transform or template matching. Refer to Gonzalez and Woods' book Digital Image Processing for details.
