My homework is to write a very simple application (Java or C#, I know both) which can detect the water level of a glass of water/coke in a picture (it has to draw a line there). I don't even know how to start. I have googled all day, but have found no useful results. Are there any good algorithms that can detect the level of the liquid?
The photo is taken from the side, like this:
(it's also good if it detects both lines). So could you help me out with how to start? Should I use edge detection (are there any good basic algorithms?), or some other method?
It would be best if it detected water, coke, and every other liquid, etc.
You are going to have to do some edge detection and then, once you have the edges, try to find the level within the glass. You could use a toolkit like AForge.NET. The code to detect the edges is pretty simple, for example:
// using System.Drawing; using AForge.Imaging.Filters;
Bitmap b = new Bitmap(Image.FromFile(@"C:\Temp\water.jpg"));
// create the edge-detection filter
Edges filter = new Edges();
// apply the filter in place (depending on the source image's pixel format,
// you may need to convert to grayscale first, e.g. with AForge's Grayscale filter)
filter.ApplyInPlace(b);
pictureBox1.Image = b;
Yields an image like this:
Now it should be a little bit easier to find the point of water in the glass. Since all of the background noise has been eliminated, you can focus on determining which edge you should key off of.
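For instance, a crude way to pick that edge (a sketch only; glassLeft, glassRight and rimY are hypothetical bounds you would determine first, e.g. from the vertical edges of the glass) is to count edge pixels per row and take the strongest horizontal run below the rim:

int bestRow = 0, bestCount = 0;
for (int y = rimY + 1; y < b.Height; y++)
{
    int count = 0;
    for (int x = glassLeft; x < glassRight; x++)
        if (b.GetPixel(x, y).R > 200) count++;   // bright pixel = edge pixel
    if (count > bestCount) { bestCount = count; bestRow = y; }
}
// bestRow now holds the strongest horizontal edge -- a water-line candidate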
Check out the Hough transformation here.
It will help you to get the capacity of the glass in question.
Once you know how much water a glass can hold, you can draw two lines onto the image using functions that you write yourself. I would advise one line for the glass size and one line for the water level, superimposed over the image. You can then use these lines and the maximum capacity of the glass to form a correlation between the two and calculate the level of fluid contained within the glass.
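As a rough sketch of that overlay (assuming a non-indexed bitmap; glassTopY, glassBottomY and waterY are hypothetical row coordinates you would have found from the edge image):

using (Graphics g = Graphics.FromImage(b))
{
    g.DrawLine(Pens.Blue, 0, glassTopY, b.Width, glassTopY);   // glass size line
    g.DrawLine(Pens.Red, 0, waterY, b.Width, waterY);          // water level line
}
// linear correlation between pixel rows and fill level
double fillPercent = 100.0 * (glassBottomY - waterY) / (glassBottomY - glassTopY);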
Remember that your professors aren't interested so much in you getting the assignment 100% correct, they are more interested in preparing you to solve problems using your own initiative. Google searching can't always solve your problems.
Related
Part 1: Forgive me if the question itself is unclear. I am learning how to use Unity and script in C#, and I want to know if there's a way to apply a gradient of color (or an image) utilizing the game objects that already exist as the places where the gradient will show up.
Say I have a group of these circles that randomly grow and change size during the game run.
Example image of circles
I am not sure of the correct terminology, but a couple of words come to mind, i.e. shader/mask. My goal is to display the gradient/image only where the game objects exist. So instead of white circles, the circles display parts of one single image/gradient.
Part 2:
To take it a step further, I'd like to know how to have the gradient continuously run through its spectrum so one can see the colors shift across the circles.
Again, I'm still very new to this kind of stuff, but would anyone know what steps I would need to take to get there?
Thanks!
I am developing a card game using WPF, and since I do not have any knowledge about animations I would like to know if someone could help me with how to write an animation to simulate a card (Image) being played onto a table.
At the bottom and top of my game table I have my cards in vertical position.
At the right and left my cards are in horizontal position.
What I really want is to give the impression that a human is selecting and throwing the card.
Since your question is pretty open-ended, I'll give you an open-ended start...
Look into Storyboards, and how to use them to modify the RenderTransform of your Card UserControl.
Your first step should really just be animating your card's position from its initial spot to the center of the table. As an additional hint (which will come in handy after you've learned about Storyboards), your DoubleAnimation.From property does not need to be specified. You just need to specify the DoubleAnimation.To property.
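To make that concrete, here is a minimal sketch (not tested against your project; "card", tableCenterX and tableCenterY are hypothetical names, and it assumes the card's RenderTransform is a TranslateTransform):

// using System; using System.Windows; using System.Windows.Media.Animation;
void PlayCard(UIElement card, double tableCenterX, double tableCenterY)
{
    var storyboard = new Storyboard();

    // From is deliberately omitted: WPF animates from the current value,
    // so the card starts wherever it currently sits in the player's hand
    var slideX = new DoubleAnimation { To = tableCenterX, Duration = TimeSpan.FromMilliseconds(400) };
    var slideY = new DoubleAnimation { To = tableCenterY, Duration = TimeSpan.FromMilliseconds(400) };

    Storyboard.SetTarget(slideX, card);
    Storyboard.SetTargetProperty(slideX, new PropertyPath("(UIElement.RenderTransform).(TranslateTransform.X)"));
    Storyboard.SetTarget(slideY, card);
    Storyboard.SetTargetProperty(slideY, new PropertyPath("(UIElement.RenderTransform).(TranslateTransform.Y)"));

    storyboard.Children.Add(slideX);
    storyboard.Children.Add(slideY);
    storyboard.Begin();
}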
I see questions like this all the time on SO, and it really does give the impression of "I haven't tried, and I have not read anything". You already have your cards on the table (so to speak), and the question is how to make it look like a human did it.
There are a variety of ways, some cheap and simple, some more complex and involved. You won't know the answer until you try.
For example, perhaps you want a card to go from one position to another (optionally flipping). You have varying degrees of difficulty here:
Move the card to the position, as is. Cheap and easy. You could even use the distance between the source and the target to determine the speed, giving some kind of residual momentum (see the sketch after this list).
Cards are at different angles. How do we rotate? XNA makes this pretty simple; have you read up on XNA and general rendering? Or do you want to do this purely using WPF?
Does the move involve showing the card face-up, or not? Will there be an animation involved? Are you happy with just the face changing or do you want to see an actual "flip"? If it's the latter then some kind of a plane in XNA using 3D might be better; at least then you can have two faces with two different textures.
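For the momentum idea in the first option, a tiny sketch (the constants are arbitrary and the names hypothetical):

// duration grows with throw distance, so long throws keep apparent momentum
double Dist(Point a, Point b) =>
    Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
TimeSpan ThrowDuration(Point from, Point to) =>
    TimeSpan.FromMilliseconds(150 + 0.8 * Dist(from, to));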
What I am saying is, and why this is an answer as opposed to a comment, is that you have given no indication of anything that might be considered trying to solve the problem. You seem like you've got halfway there; you've already got cards rendered on the screen. But to ask "Make it look like a human put a card in"...? Well, sorry... it's not that simple. You can make this task as easy or hard as you wish.
I have the following problem: a large rectangle contains smaller non-intersecting rectangles (the black rectangles in the picture below), and I need to find an algorithm to fill the remaining free area with non-intersecting rectangles (the red ones in the picture below). Speed is not an issue for the algorithm. Also, if someone had example source code of the algorithm I would really appreciate that.
Edit: a small clarification. I need to get the coordinates of the red rectangles, not to draw them. I am also working with point data, not images.
http://koti.mbnet.fi/niempi2/Squares.gif
Like most bin-packing problems this one looks like an NP-hard problem to me. With 2 rectangles, there are 8! (= 40320) possible arrangements you need to consider. Three rectangles produce 12! possibilities, a cool 480 million.
You'll need a heuristic to make this computable. Beyond favoring the outer edges of the rectangles closest to the bounding rectangle, I don't see a good one. You'd need tighter requirements on the resulting rectangles you accept; the sheer number of them isn't going to help. Glad this is not my problem :)
Although there are multiple possible solutions, I think you can get to one fairly easily.
I would work in increasing values along one axis. By scanning all rectangles and ordering their edge's appearances along that axis, you could walk through them and create rectangles as you go. Each time you hit a new pair of corners, you can compare with the rectangles you currently have 'open' and determine what to do (close them, start new, divide, etc).
That statement isn't a complete solution, but I think it gets you from a complex solution to a simple one. It also doesn't seem to be NP-complete in terms of performance. You might even be able to get O(n) performance.
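To illustrate the sweep (a sketch only, with hypothetical types; it emits one red rectangle per free vertical gap in each slice between consecutive edge x-coordinates):

// using System; using System.Collections.Generic; using System.Linq;
record Rect(double X1, double Y1, double X2, double Y2);

static List<Rect> FillFreeArea(Rect outer, List<Rect> black)
{
    // event x-coordinates: the outer edges plus every black rectangle edge
    var xs = black.SelectMany(r => new[] { r.X1, r.X2 })
                  .Concat(new[] { outer.X1, outer.X2 })
                  .Where(x => x >= outer.X1 && x <= outer.X2)
                  .Distinct().OrderBy(x => x).ToList();

    var result = new List<Rect>();
    for (int i = 0; i + 1 < xs.Count; i++)
    {
        double x1 = xs[i], x2 = xs[i + 1];
        // black rectangles overlapping this slice, bottom-up; because slice
        // boundaries include every black edge, each blocker spans the slice
        var blockers = black.Where(r => r.X1 < x2 && r.X2 > x1)
                            .OrderBy(r => r.Y1).ToList();
        double y = outer.Y1;
        foreach (var blk in blockers)
        {
            if (blk.Y1 > y) result.Add(new Rect(x1, y, x2, blk.Y1));
            y = Math.Max(y, blk.Y2);
        }
        if (y < outer.Y2) result.Add(new Rect(x1, y, x2, outer.Y2));
    }
    return result;
}

Adjacent slices sharing the same free y-interval could afterwards be merged if you want fewer red rectangles.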
An interesting problem. Let us know how you get on.
Look at the Region class.
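For instance (a sketch; outerRect and blackRects are hypothetical, and GDI+ decides the decomposition for you):

// using System.Drawing; using System.Drawing.Drawing2D;
var free = new Region(outerRect);                 // outerRect: RectangleF
foreach (RectangleF blackRect in blackRects)
    free.Exclude(blackRect);
// GDI+ decomposes the remaining region into axis-aligned rectangles
RectangleF[] redRects = free.GetRegionScans(new Matrix());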
I am hoping to obtain some help with 2D object detection. I'll give a brief overview of the context in which this will be implemented.
There will be an image taken of the ceiling. The ceiling will have markers placed on it so the orientation of the camera can be determined. The pictures will always be taken facing straight up. My goal is to detect one of these markers in the image and determine its rotation. So rotation and scaling (to a lesser extent) will be the two primary factors used in the image detection. I will be writing the software in either C# or MATLAB (not quite sure yet).
For example, the marker might be an arrow like this:
An image taken of the ceiling would contain markers. The software needs to detect a single marker and determine that it has been rotated by 170 degrees.
I have no prior experience with image analysis. I know image processing is a fairly broad topic and was hoping to get some advice on which direction I should take and which techniques would be best for my application. Thanks!
I'm not directly in this field, but I would tell you to start by looking into edge detection specifically. If you have a background in math/engineering, the materials are pretty easy to understand:
This seemed to spark some ideas:
http://www.cfar.umd.edu/~fer/cmsc426/lectures/edge1.ppt
I'd recommend MATLAB or if you're intent on using C#, Emgu CV is pretty good.
Hough transforms are a great idea. Once you detect the edges in your image, using, say, a Canny edge detector, you get an edge image (which is a binary image with only 1 or 0 for values).
Then the Hough straight-line transform (essentially) spins a line about each white pixel in the edge image (the angular resolution is up to you), using a parametrized function for the line. It counts the white (valued at 1) pixels along each spun line and stores the totals in a big accumulator, indexed by the parameters of the line.
http://upload.wikimedia.org/wikipedia/en/a/af/Hough_space_plot_example.png
In the example above, the parametric form for a line is:
rho = x*cos(theta) + y*sin(theta)
where rho is the distance and theta is the angle
So as you can see, if you look at the bin at a particular orientation you can find out how many lines are oriented at that angle. Of course, you'll have to do some extra work to figure out which lines are oriented at that angle, since you have 5 other lines per arrow, but that shouldn't be too hard.
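A bare-bones version of that accumulator might look like this (a sketch, assuming the edge image is a bool[height, width] named edges):

int thetaBins = 180;
int height = edges.GetLength(0), width = edges.GetLength(1);
int maxRho = (int)Math.Ceiling(Math.Sqrt(width * width + height * height));
int[,] accumulator = new int[thetaBins, 2 * maxRho + 1];

for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        if (!edges[y, x]) continue;                  // only edge pixels vote
        for (int t = 0; t < thetaBins; t++)
        {
            double theta = t * Math.PI / thetaBins;
            // rho = x*cos(theta) + y*sin(theta), shifted so the index is non-negative
            int rho = (int)Math.Round(x * Math.Cos(theta) + y * Math.Sin(theta));
            accumulator[t, rho + maxRho]++;
        }
    }
// peaks in the accumulator correspond to dominant lines; a peak's theta index
// gives that line's orientation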
As always in computer vision, your first problem is image illumination and acquisition. Before going further, establish how your markers will be printed on the ceiling, what their form will be, what light you will be using to see them, and what camera setup you will choose to look at the markers.
Given a good material, a good light and a good camera, you may have no problem at all processing the image. For example, you can print a full arrow in a retro-reflective material, with a longer tail than in your example, and use a colored light and a corresponding filter on the camera. Now all you have in your image is arrows... There are plenty of other ways of acquiring the image that will help you there.
Once you have plain arrows, a simple blob analysis (which consists of computing statistical moments of objects in the image) will give you a lot of information: each arrow should have almost equal values for the seven Hu moments, which allows you to filter objects efficiently, and the orientation computed from the central moments will give you the angle of the arrow. Blob analysis, being purely statistical, is extremely fast.
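The orientation-from-central-moments step could be sketched like this (a hypothetical helper; "pixels" holds the points of one blob):

// using System; using System.Collections.Generic; using System.Linq;
static double BlobOrientation(List<(double X, double Y)> pixels)
{
    double mx = pixels.Average(p => p.X);            // blob centroid
    double my = pixels.Average(p => p.Y);
    double mu20 = pixels.Sum(p => (p.X - mx) * (p.X - mx));
    double mu02 = pixels.Sum(p => (p.Y - my) * (p.Y - my));
    double mu11 = pixels.Sum(p => (p.X - mx) * (p.Y - my));
    // standard orientation formula from second-order central moments
    return 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);
}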
Several systems have been developed to detect markers and their orientation robustly:
reacTIVision (open source) uses these types of tags to find position and orientation:
ARToolKit (open source) uses a different type of tags to extract all 6 degrees of freedom:
http://www.schanes.net/docs/robot/marker.png
If your primary goal is not to learn, but to make the application work, I would suggest you use one of these. It is not a trivial task for a beginner to robustly detect the position and orientation of a random marker in an image.
On the other hand, if you are mainly interested in learning, I would also direct you to ARToolKit and its publications (and their references), which explain how to robustly implement marker detection.
You will need to explore edge detection, so look into Hough filters. After that you will need to look into pattern classifiers and feature extraction.
This paper has an algorithm that appears to work without edge detection.
This book excerpt is more oriented toward the kind of symbol detection you intend, once you have done the edge detection.
A rigorous way to determine the orientation of an image acquired under projective geometry (most cameras) is to use vanishing points and vanishing lines. Good news for you: your marker can be used to find this information! More good news: your image can be rectified, so the image columns (the y-axis) will correspond to the up-down direction. You will find more about this stuff in chapter 8 of Hartley and Zisserman's book, Multiple View Geometry in Computer Vision.
Also remember that you will probably need to deal with radial distortion, the distortion caused by the camera lens. The other guys are right about the arrow-detection problem: you have to use edge detection and, after that, the Hough transform or template matching. Refer to Gonzalez and Woods' book Digital Image Processing for details.
I have a problem optimizing the drawing of a Google-like map. It works OK for hundreds of points, but when it comes to larger amounts, like thousands, it gets fuzzy and slow. It also looks weird when zoomed out.
I'd like to know how to optimize drawing algorithm to draw fewer places so it looks like unzooming on Google Maps.
However I also draw links between places, and I can't optimize that.
Please, post anything you can think of, I have to finish this and send it tomorrow.
Here's how it looks:
zoomed in
zoomed out
Here are two ideas:
Every object that we draw on the map gets an extra value in the database, "Zoom Level". When zooming in, extra items are shown based on that value.
A second way to do this is to use grouping. If items start to overlap, show one point with [10 items]. Only show the items beneath it when zooming in.
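A grid-based sketch of that grouping (all names hypothetical; cellSize would shrink as the user zooms in):

// one marker per occupied grid cell, labelled with the number of items in it
var clusters = points
    .GroupBy(p => ((int)(p.X / cellSize), (int)(p.Y / cellSize)))
    .Select(g => new
    {
        X = g.Average(p => p.X),     // cluster centre
        Y = g.Average(p => p.Y),
        Count = g.Count()            // shown as "[10 items]" on the marker
    });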
I think I would be tempted to not draw lines that are shorter than a threshold (and I mean this in terms of the viewport, not absolute distance terms). That means that when the map is zoomed out, you will have less to draw and the map will look less busy and when the map is zoomed in the lines between these nearby points will become visible. Edit: actually, thinking about it some more, I think I would only apply this length restriction when there are a large number of lines on screen — or make the length threshold a function of the number of lines on screen.
I think I would also be tempted to not draw lines that are from points that are off screen (out of the viewport) or, at least, quite a way off screen (a threshold away from the viewport's centre). I would suggest trying this change first.
These changes may seem like they will be hiding information (and they will) but, as it stands, the map is so busy that the information presented is nearly useless anyway.
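Both filters together might look something like this (a sketch; viewport, minScreenLength and margin are hypothetical values in screen coordinates):

// using System; using System.Drawing;
static bool ShouldDrawLink(PointF a, PointF b, RectangleF viewport,
                           float minScreenLength, float margin)
{
    float dx = a.X - b.X, dy = a.Y - b.Y;
    if (Math.Sqrt(dx * dx + dy * dy) < minScreenLength)
        return false;                                // too short to matter when zoomed out
    RectangleF near = RectangleF.Inflate(viewport, margin, margin);
    return near.Contains(a) || near.Contains(b);     // at least one endpoint near the screen
}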
Some hints:
Clip to a region, drawing only the points that fall inside it.
Check open-source GIS projects to see how they optimize drawing.