I'm writing a program that traces the contour of individual frames within an image. The tracing is complete and works very well.
Basically, I start at pixel 0,0 and loop through each row until I find a contour pixel; then, using the Moore neighborhood algorithm, I trace out the block until I reach my initial starting point.
However, as anyone who has looked at a bitmap up close knows, the frames are not perfectly aligned, and it's possible for frame #2 or #3 to have a slightly higher starting Y coordinate. Thus I will need to allow for some tolerance on the Y axis.
In a perfect world, I could sort the frames by Y and then by X in ascending order.
Getting to the point: if I have the following image loaded into a bitmap class, and let's say I already know the top-left X, top-left Y, width, and height of each frame, how could I programmatically sort the frames correctly?
Image: (figure 12, image a)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3629985/figure/F12/
You can conceptually align the nearly-aligned frames like this:
Sort the frame locations by X
Set each frame location within a few X pixels of the previous frame's location to the previous frame's X value.
Do the same for Y.
Then you can order them normally.
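For what it's worth, here is a minimal C# sketch of that idea, assuming a hypothetical FrameInfo class holding the known top-left X, top-left Y, width, and height; the 5-pixel tolerance is just a placeholder to tune.

using System;
using System.Collections.Generic;
using System.Linq;

class FrameInfo
{
    public int X, Y, Width, Height;
}

static class FrameSorter
{
    // Snap nearly-equal coordinates together, then sort top-to-bottom, left-to-right.
    public static List<FrameInfo> Sort(List<FrameInfo> frames, int tolerance = 5)
    {
        Snap(frames, f => f.X, (f, v) => f.X = v, tolerance);
        Snap(frames, f => f.Y, (f, v) => f.Y = v, tolerance);
        return frames.OrderBy(f => f.Y).ThenBy(f => f.X).ToList();
    }

    static void Snap(List<FrameInfo> frames, Func<FrameInfo, int> get,
                     Action<FrameInfo, int> set, int tolerance)
    {
        var sorted = frames.OrderBy(get).ToList();
        for (int i = 1; i < sorted.Count; i++)
        {
            // If this frame is within a few pixels of the previous one,
            // give it the previous frame's coordinate value.
            if (get(sorted[i]) - get(sorted[i - 1]) <= tolerance)
                set(sorted[i], get(sorted[i - 1]));
        }
    }
}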
I have a database that holds eye-tracking data for some videos.
I export that data into an int[,] input matrix and then try to create a heatmap.
What I get so far is something like this:
But this is not actually what I want it to be. I want something like the heatmaps that you see when you google it, e.g.:
Treat each spot as an artificial circle centered at that spot, with, say, a 50-pixel radius. Now go over all pixels of the image and, for each one, count all circles that cover that pixel. This is your score for that pixel. Translate it into a color, e.g. 0: black/transparent, 10: light green, 20: yellow, and so on. After analyzing all pixels you will get a color for each pixel. Write a bitmap and look at it. It should be something close to what you want.
Of course, the circle radius, color mappings, etc. need adjusting to your needs. Also, that's probably not the best/simplest/fastest algorithm.
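A rough sketch of that circle-counting idea, assuming the gaze data has already been turned into a List<Point>; the 50-pixel radius and the score-to-color mapping are placeholders, not anything from the question:

using System;
using System.Collections.Generic;
using System.Drawing;

static class HeatmapBuilder
{
    public static Bitmap Build(int width, int height, List<Point> gazePoints, int radius = 50)
    {
        var scores = new int[width, height];
        int radiusSquared = radius * radius;

        // Count, for every pixel, how many gaze circles cover it.
        foreach (var p in gazePoints)
        {
            for (int y = Math.Max(0, p.Y - radius); y <= Math.Min(height - 1, p.Y + radius); y++)
            {
                for (int x = Math.Max(0, p.X - radius); x <= Math.Min(width - 1, p.X + radius); x++)
                {
                    int dx = x - p.X, dy = y - p.Y;
                    if (dx * dx + dy * dy <= radiusSquared)
                        scores[x, y]++;
                }
            }
        }

        // Translate scores into colors; this mapping is arbitrary and needs tuning.
        var bmp = new Bitmap(width, height);
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                bmp.SetPixel(x, y, ScoreToColor(scores[x, y]));
        return bmp;
    }

    static Color ScoreToColor(int score)
    {
        if (score == 0) return Color.Transparent;
        if (score < 10) return Color.LightGreen;
        if (score < 20) return Color.Yellow;
        return Color.Red;
    }
}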
A different approach would be to store the "heat" in the pixel's grey value.
Just create a second image with the same size as the original one and count up the grey value of a pixel every time it is looked at.
Later you can use that value to calculate a size and color for the circle you want to draw.
You can then lay the heatmap image on top of the original one and you are done (don't forget to set transparency).
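A small sketch of that accumulation step, assuming the gaze samples come in as an IEnumerable<Point>; the +16 increment per hit is an arbitrary choice. The resulting grey values can then be mapped to circle sizes/colors and alpha-blended over the original frame, as described above.

using System;
using System.Collections.Generic;
using System.Drawing;

static class GreyHeat
{
    public static Bitmap Accumulate(int width, int height, IEnumerable<Point> lookedAtPoints)
    {
        var heat = new Bitmap(width, height);
        using (var g = Graphics.FromImage(heat))
            g.Clear(Color.Black);                     // start completely "cold"

        foreach (var p in lookedAtPoints)
        {
            int grey = heat.GetPixel(p.X, p.Y).R;     // current heat of this pixel
            grey = Math.Min(255, grey + 16);          // count up the grey value
            heat.SetPixel(p.X, p.Y, Color.FromArgb(grey, grey, grey));
        }
        return heat;
    }
}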
I started from EMGU Example (Emgu.CV v2.4.10.1939):
http://www.emgu.com/wiki/index.php/CompareImages_-_Difference
Instead of comparing the previous video frame and the next one, I am dealing with a screen capture of the primary screen at time1 and another screen capture at time2. The second screen capture contains only a small difference, concentrated in one part of the captured image: an outline (a closed polygon of n vertices). I applied the ThresholdBinary method and this code:
Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.05, storage);
to get the shape of the difference (which for me is a white polygon).
I then cropped that polygon to avoid processing unnecessary parts of the image.
In the attached images, on the right I tried to depict what I want and don't want, based on processing the left input image.
I want to find the x,y coordinates (in pixels) of all the vertices of the polygon in the cropped image.
When I later redraw the polygon, I would expect it to match the input polygon as closely as possible. I would like a reasonable number of straight lines to be detected, so that the final shape faithfully resembles the input shape on the left.
This is a similar question that didn't have a solution.
Find contour of the set of points in OpenCV
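If it helps, here is a rough sketch of reading the vertex coordinates out of the approximated contour, assuming the EMGU 2.4 Contour<Point> API from the snippet above; the parameterless FindContours() call and the single-contour assumption are simplifications, and the 0.05 * perimeter accuracy is the same value used in the question.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static class PolygonVertices
{
    public static Point[] Get(Image<Gray, byte> croppedDiff)
    {
        using (var storage = new MemStorage())
        {
            // Outer contour of the white polygon in the thresholded difference image.
            Contour<Point> contour = croppedDiff.FindContours();
            if (contour == null)
                return new Point[0];

            // Same polygonal approximation as in the question; each element of the
            // approximated contour is one vertex (x,y) of the polygon.
            Contour<Point> approx = contour.ApproxPoly(contour.Perimeter * 0.05, storage);
            return approx.ToArray();
        }
    }
}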
Suppose I have a bitmap like this from laser scanning, with a red line from the laser on it. What would be the right way to find the center of that line? Either store its coordinates in an array or just draw a thin line over it.
What approach would you suggest? Preferably with an option to smooth out that line.
Thanks
I'd suggest the following:
Convert image to monochrome
Convert image to black-white using "image thresholding"
Split image in small parts
For every part that is not entirely black, calculate the Hough Transform and find an approximating segment
Merge these segments into a chain and then smooth it (using Catmull-Rom splines, for example)
However, this is not the only possible approach; there are many others.
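As a starting point, here is a minimal System.Drawing sketch of steps 1 and 2; the Binarizer class name and the threshold of 128 are just placeholders, and the splitting, Hough, and smoothing steps are left out.

using System.Drawing;

static class Binarizer
{
    // Steps 1-2: grayscale conversion followed by a fixed-threshold binarization.
    public static Bitmap ToBlackAndWhite(Bitmap source, int threshold = 128)
    {
        var result = new Bitmap(source.Width, source.Height);
        for (int y = 0; y < source.Height; y++)
        {
            for (int x = 0; x < source.Width; x++)
            {
                Color c = source.GetPixel(x, y);
                // Luminance-weighted grayscale value.
                int gray = (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
                result.SetPixel(x, y, gray >= threshold ? Color.White : Color.Black);
            }
        }
        return result;
    }
}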
I would approach this with a worm. Have your worm start on one pixel and allow it to move along the line. Every time you detect a change in direction in any dimension you place a point. Then fit a spline through your points. Add the start and end locations as points too.
Important issues:
You need to maintain which pixels have been visited, such that when a worm finishes you can detect if you need to start a new one on what is left.
You need to maintain a velocity vector in your worm and weight possible forward choices based on which will more closely continue the line you're currently on. This is because...
You need to deal with topology changes, where two or more lines intersect at the same point, or where the line splits in two.
For fitting the spline itself, have a look at Numerics on NuGet.
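Here is a heavily simplified, hypothetical sketch of the worm idea: it follows bright pixels from a starting point and records a point at every direction change. It ignores the velocity weighting and topology handling described above, so treat it only as a skeleton.

using System.Collections.Generic;
using System.Drawing;

static class WormTracer
{
    static readonly Point[] Neighbours =
    {
        new Point(1, 0), new Point(1, 1), new Point(0, 1), new Point(-1, 1),
        new Point(-1, 0), new Point(-1, -1), new Point(0, -1), new Point(1, -1)
    };

    // Walks along the line starting at 'start', returning the points where the
    // walking direction changed (candidate control points for a spline fit).
    public static List<Point> Trace(Bitmap bmp, Point start)
    {
        var visited = new HashSet<Point> { start };
        var controlPoints = new List<Point> { start };

        Point current = start;
        Point lastDir = Point.Empty;

        while (true)
        {
            Point? next = null;
            foreach (var d in Neighbours)
            {
                var candidate = new Point(current.X + d.X, current.Y + d.Y);
                if (candidate.X < 0 || candidate.Y < 0 ||
                    candidate.X >= bmp.Width || candidate.Y >= bmp.Height)
                    continue;
                if (visited.Contains(candidate)) continue;
                if (bmp.GetPixel(candidate.X, candidate.Y).GetBrightness() < 0.5f)
                    continue;

                next = candidate;
                // Record a control point whenever the direction changes.
                if (lastDir != Point.Empty && d != lastDir)
                    controlPoints.Add(current);
                lastDir = d;
                break;
            }

            if (next == null) break;          // dead end: this worm stops here
            current = next.Value;
            visited.Add(current);
        }

        controlPoints.Add(current);           // always keep the end point
        return controlPoints;
    }
}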
My suggestion is:
Go row by row, saving the coordinates of the first and last appearance of a red-ish pixel in each row.
Then stretch a line between the two coordinates in each row, or between the middle pixels of each pair of coordinates.
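A minimal sketch of that row scan, assuming a System.Drawing.Bitmap; the red-ish test (strong red channel, weak green/blue) is a naive placeholder you will need to tune. Drawing through consecutive centers, or fitting a spline through them, then gives a smoothed centerline.

using System.Collections.Generic;
using System.Drawing;

static class LaserLineScanner
{
    static bool IsReddish(Color c) => c.R > 150 && c.G < 100 && c.B < 100;

    // Returns, per row, the midpoint between the first and last red-ish pixel.
    public static List<Point> FindCenters(Bitmap bmp)
    {
        var centers = new List<Point>();
        for (int y = 0; y < bmp.Height; y++)
        {
            int first = -1, last = -1;
            for (int x = 0; x < bmp.Width; x++)
            {
                if (!IsReddish(bmp.GetPixel(x, y))) continue;
                if (first < 0) first = x;
                last = x;
            }
            if (first >= 0)
                centers.Add(new Point((first + last) / 2, y));
        }
        return centers;
    }
}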
Using a file, I want to create a map, and I am wondering about the best approach for doing so.
I searched the forum, but I only found map-generation algorithms that create maps randomly.
Let's look at a minimal example.
E.g. I have a file containing
0110
1001
1000
0000
Every 0 shall be water and every 1 shall be earth.
I would handle this by simply having two different bitmaps and loading them at the right coordinates. That'd be simple.
But let's say we have a big 1000*1000 map and there is only enough space for 16*16 tiles per frame. Then I'd get the current position and build the map around it.
Assuming we can only display 3*3 tiles, using the minimal example and being at position (2,2), where x and y are in 1..4, what the user could see at this time would be:
011
100
100
Solution
I thought about using a text file, where a line represents the x-coordinate direction and
a column represents the y-coordinate direction. The whole file is loaded at the beginning of the program. This shouldn't use too much RAM, assuming 1 tile needs 1 byte, which should be enough.
For redrawing the map when the user moves, I'd get the movement direction, slide the current bitmap by the height/width of one tile in the opposite direction, and only look up the bitmaps for the newly exposed spaces. So I only need to look up tile information for m+n-1 tiles (where m is the number of displayed tiles in the y direction and n in the x direction; this is the worst case, when moving diagonally) instead of loading m*n tiles every time the user moves.
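Here is a sketch of that idea, assuming a plain text file in which each line is one row of '0'/'1' characters and using 0-based tile coordinates; it only shows loading the map once and reading out the visible window, not the incremental sliding. The TileMap class name and viewSize parameter are made up for illustration.

using System;
using System.IO;

class TileMap
{
    private readonly byte[,] tiles;   // 1 byte per tile, loaded once at startup
    public int Width  { get; }
    public int Height { get; }

    public TileMap(string path)
    {
        string[] lines = File.ReadAllLines(path);
        Height = lines.Length;
        Width  = lines[0].Length;
        tiles  = new byte[Width, Height];

        for (int y = 0; y < Height; y++)
            for (int x = 0; x < Width; x++)
                tiles[x, y] = (byte)(lines[y][x] - '0');   // '0' = water, '1' = earth
    }

    // Returns the viewSize*viewSize block of tiles centred on (centerX, centerY),
    // clamped to the map borders (assumes the map is at least viewSize tiles large).
    public byte[,] GetVisibleTiles(int centerX, int centerY, int viewSize)
    {
        int half = viewSize / 2;
        int startX = Math.Max(0, Math.Min(centerX - half, Width  - viewSize));
        int startY = Math.Max(0, Math.Min(centerY - half, Height - viewSize));

        var view = new byte[viewSize, viewSize];
        for (int y = 0; y < viewSize; y++)
            for (int x = 0; x < viewSize; x++)
                view[x, y] = tiles[startX + x, startY + y];
        return view;
    }
}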
Example
I created an example to make the explanation above easier to understand.
This is the whole map:
We can only display 3*3 tiles, and the user is at position (2,2), so what we'd actually see is:
Now the user moves towards the bottom-right corner:
and the black-framed section is moved in the opposite direction, so that we get:
Now the blank tiles (black-framed white areas) have to be looked up, and the final result will be:
Question
Is this a good way of building a map? Or are there much faster functions, maybe already implemented in the Microsoft XNA Game Studio package?
I would pre-fetch a range of 1-2 tiles outside the screen view, so that you won't have weird pop-up as the player moves.
But if your game is a top-down tile game, this solution is quite conservative. On most hardware today, you could keep a very big range around the player without problems. Just look at the number of blocks Minecraft can process and display. Since you are reusing the same textures, you just load each asset once and reuse it per tile, which would probably be an object with a very small memory footprint.
Have you tried implementing it yet?
I have an array of Point variables. When drawn using Graphics.DrawLine, they create the expected image. My problem is that 0,0 is actually the center of the image (not the top left of my canvas, as expected). The X and Y coordinates in my Points can contain negative numbers.
When I try to draw this to my Image, of course I get 1/4 of the total image as the remainder is drawn outside the bounds of my canvas. How do I center this drawing correctly onto my canvas?
I know the dimensions of the image I want to draw. I know where 0,0 is (width / 2, height / 2).
I suppose I can translate each and every single Point, but that seems like the hard way to do this.
TranslateTransform() can map coordinates for you if you set up a transformation in your drawing handlers.
Graphics.TranslateTransform # MSDN
Or, map your coordinates by adding half the width and half the height of the desired viewing area to each coordinate.
Also, you may need to scale your coordinates. You may use Graphics.ScaleTransform to do this.
Graphics.ScaleTransform # MSDN
If you don't wish to use this, then you should multiply X coordinates by the factor you wish to stretch the width by, and multiply Y coordinates by the factor you wish to stretch the height by. That is 1 for 100%, 1.2 for 120%, 0.8 for 80%, etc.
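For example, a minimal sketch inside an OnPaint override; the 120% scale factor and the line being drawn are arbitrary values for illustration.

using System.Drawing;
using System.Windows.Forms;

class CenteredCanvas : Form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        Graphics g = e.Graphics;

        // Move the origin from the top-left corner to the centre of the client area.
        g.TranslateTransform(ClientSize.Width / 2f, ClientSize.Height / 2f);

        // Optionally stretch the drawing; 1.2f means 120%.
        g.ScaleTransform(1.2f, 1.2f);

        // Points with negative coordinates now land inside the visible area.
        g.DrawLine(Pens.Black, new Point(-50, -50), new Point(50, 50));
    }
}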
Welcome to the Windows version of the Cartesian plane. Your last statement is correct: you do have to offset each and every point. The only real help you can give yourself is to make the offset logic a separate method to clean up your main drawing code.
When creating the array, add an offset to each x value equal to half of the width and an offset to y equal to half of the height. That way when the points are drawn, they're in the expected position.
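A tiny sketch of that offset step, assuming the point array and the target image size are already known; the class and method names are just for illustration.

using System.Drawing;
using System.Linq;

static class PointOffset
{
    // Shifts every point so that (0,0) lands in the middle of the image.
    public static Point[] Center(Point[] points, int imageWidth, int imageHeight)
    {
        return points
            .Select(p => new Point(p.X + imageWidth / 2, p.Y + imageHeight / 2))
            .ToArray();
    }
}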