I have an array of pixels and I would like to search through it to see if there is a specific template.
But I am unsure how to start. Should I run the template search at every pixel? I can't picture how it works when, say, the first 5 pixels match and the sixth does not: should it move back to the second pixel and start over? I also assume some kind of tolerance must be involved.
You can take a look at the Accord.NET Extensions library, which implements a fast template matching algorithm and includes some samples.
Fast template matching algorithm:
"Gradient Response Maps for Real-Time Detection of Textureless Objects"
The library:
https://github.com/dajuric/accord-net-extensions
source:
https://github.com/dajuric/accord-net-extensions/tree/master/Source/ImageProcessing/FastTemplateMatching
There is an example too.
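If you are also wondering about the brute-force idea from your question: you never need to "move back" - when a candidate position fails, you simply abandon it and restart the comparison at the next position - and the tolerance is just how much per-pixel difference (or how many mismatching pixels) you are willing to accept. Below is a minimal, unoptimized sketch of that, assuming grayscale images stored as row-major byte arrays; all names and thresholds are made up for illustration.

```csharp
using System;
using System.Collections.Generic;

static class NaiveTemplateMatcher
{
    // Returns the top-left positions where 'template' matches 'image' within the given tolerances.
    // image/template are row-major grayscale buffers; this is O(W*H*w*h), so only for small inputs.
    public static (int X, int Y)[] Find(
        byte[] image, int width, int height,
        byte[] template, int tWidth, int tHeight,
        int pixelTolerance = 10,        // max allowed difference per pixel
        double maxMismatchRatio = 0.05) // fraction of template pixels allowed to exceed the tolerance
    {
        var hits = new List<(int, int)>();
        int maxMismatches = (int)(maxMismatchRatio * tWidth * tHeight);

        for (int y = 0; y <= height - tHeight; y++)
        {
            for (int x = 0; x <= width - tWidth; x++)
            {
                int mismatches = 0;
                for (int ty = 0; ty < tHeight && mismatches <= maxMismatches; ty++)
                {
                    for (int tx = 0; tx < tWidth; tx++)
                    {
                        int diff = Math.Abs(image[(y + ty) * width + (x + tx)] - template[ty * tWidth + tx]);
                        if (diff > pixelTolerance && ++mismatches > maxMismatches)
                            break; // give up on this position and move on to the next candidate
                    }
                }
                if (mismatches <= maxMismatches)
                    hits.Add((x, y));
            }
        }
        return hits.ToArray();
    }
}
```

The library above uses a far more efficient algorithm; this sketch is only meant to show why the naive approach works at all.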
I'm using the OxyPlot HeatMapSeries to represent some graphical data.
For a new application I need to represent the data with isosurfaces, something looking like this:
Some ideas around this:
I know the ContourSeries can do the isolines, but I can't find any option that allows me to fill the gaps between the lines. Does this option exist?
I know the HeatMapSeries can be shown under the ContourSeries so I can get a similar result, but it does not fit our needs.
Another option would be to limit the HeatMapSeries colours and eliminate the interpolation. Is this possible?
If anyone has another approach to the solution, I'd be glad to hear it!
Thanks in advance!
I'm evaluating whether OxyPlot will meet my needs and this question interests me... From looking at the ContourSeries source code, it appears to be only for finding and rendering the contour lines, not for filling the area between them. Looking at AreaSeries, I don't think you could just feed it contours, because it expects two sets of points which, when the ends are connected, create a simple closed polygon.

The best guess I have is "rasterizing" your data so that you round each data point to the nearest contour level, then plotting the heatmap of that rasterized data under the contours. The ContourSeries appears to calculate a level step that gives 20 levels across the data by default.
My shortcut for the rasterizing, given a step value, is to divide each data value by the level step you want, truncate with Math.Floor, and multiply back by the step (the sketch below shows this).
Looking at HeatMapSeries, it looks like you could try turning interpolation off, using the HeatMapRenderMethod.Rectangles render method, or supplying a LinearColorAxis with fewer steps and letting the rendering do the rasterization. The palettes available for a LinearColorAxis can be seen in the OxyPalettes source: BlueWhiteRed31, Hot64, Hue64, BlackWhiteRed, BlueWhiteRed, Cool, Gray, Hot, Hue, HueDistinct, Jet, and Rainbow.
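To make the rasterize-then-overlay idea concrete, here is roughly what I have in mind. The property names (Interpolate, RenderMethod, OxyPalettes.Jet) come from reading the source rather than from running it, and LoadData/levelStep are placeholders for your own data and contour step:

```csharp
using System;
using OxyPlot;
using OxyPlot.Axes;
using OxyPlot.Series;

double[,] data = LoadData();   // placeholder: your original data, as fed to a ContourSeries
double levelStep = 5.0;        // placeholder: the distance between contour levels

// Rasterize: snap every value down to its contour level (divide, floor, multiply back).
int rows = data.GetLength(0), cols = data.GetLength(1);
var rasterized = new double[rows, cols];
for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
        rasterized[i, j] = Math.Floor(data[i, j] / levelStep) * levelStep;

var model = new PlotModel { Title = "Rasterized heat map" };

// Few colour steps so the rendering itself looks stepped rather than smoothly interpolated.
model.Axes.Add(new LinearColorAxis { Position = AxisPosition.Right, Palette = OxyPalettes.Jet(8) });

model.Series.Add(new HeatMapSeries
{
    X0 = 0, X1 = cols - 1, Y0 = 0, Y1 = rows - 1,
    Interpolate = false,                           // no smoothing between cells
    RenderMethod = HeatMapRenderMethod.Rectangles, // one rectangle per cell
    Data = rasterized
});
// A ContourSeries with the original data could then be added on top for the isolines.
```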
I'm not currently in a position to run OxyPlot to test things, but I figured I would share what I could glean from the source code and limited documentation.
Is it possible to extract any shape that's in front of an image?
Let's say we have an image with two objects, one in front and the other behind, over a blank or transparent background.
Can we extract the one in front and place it in a new image?
Can this be done by detecting the edge of the frontal shape and then cropping it?
This article does something close to what I'm asking:
Cropping Particular Region In Image Using C#
but I want to do it fully automated.
Any help would be highly appreciated.
Thanks in advance.
I don't think you can do this fully automated; however, there may be some semi-automated ways. At the very least, you need some prior information, such as how far away your object can be placed. Here are some of my suggestions.
First way (assumptions: you have experience implementing academic papers and some prior information about the depth at which your object is placed):
- Download a "scene-depth image database" from the internet
- Get the average value of the database
- Query the K nearest neighbors of the input image according to the GIST of the scene [1]
- Apply SIFT Flow to align the database scenes to the input scene
- Infer the depth
- Remove a certain depth range from the image.
It's possible to infer a rough depth map for an input image. Using this, you infer the depth map of the input image and then remove the depth range that contains your object. You can check the paper [2] for a more detailed explanation.
Example Depth Map from http://www.the.me/wp-content/uploads/2013/09/z-depth_map_expanding_exif_more_powerful_post-processing_2n.jpg
Second way (assumption: human input is allowed at the end of the algorithm):
- Segment the image (you can find a state-of-the-art algorithm with a little searching)
- Select the contour that you want to remove.
Example Segmented Image from http://vision.ece.ucsb.edu/segmentation/edgeflow/images/garden_edge.gif
References:
[1] Aude Oliva, "Gist of the Scene".
[2] Karsch, K.; Liu, C.; Kang, S. B., "Depth Transfer: Depth Extraction from Video Using Non-parametric Sampling", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014.
OpenCV gives you the option to extract the contours of objects. Convert your image to grayscale and give it to OpenCV to detect all the contours in the image; from those, select the contours that fit your requirement.
Since your project is in C#, you can take a look at Emgu CV, which is a cross-platform .NET wrapper for OpenCV. Please refer to the URL below, where you can download the examples for Emgu CV.
http://sourceforge.net/projects/emguexample/?source=recommended
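As a rough illustration of the grayscale-plus-contours approach from the previous answer, here is a sketch in the Emgu CV 3.x style API. The file name and threshold are placeholders, the "largest contour" rule is just one possible selection criterion, and method names can differ slightly between Emgu versions:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Load, convert to grayscale and binarize so the shapes stand out from the blank background.
using (var img = new Image<Bgr, byte>("input.png"))                    // hypothetical input file
using (var gray = img.Convert<Gray, byte>())
using (var binary = gray.ThresholdBinary(new Gray(10), new Gray(255))) // threshold value is a guess
using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);

    // Select the contour you care about - here simply the largest one by area.
    int best = -1;
    double bestArea = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > bestArea) { bestArea = area; best = i; }
    }

    if (best >= 0)
    {
        // Crop the bounding box of the selected contour into a new image.
        Rectangle box = CvInvoke.BoundingRectangle(contours[best]);
        using (var cropped = img.Copy(box))
            cropped.Save("front_object.png");
    }
}
```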
Objective: I want to count the objects in the image below.
What ideas could work here?
I tried FindContour(); it returns the boundary, and I then need to use those contour points. Using matchShape() and Contour.slice() is not helping. Any working example for this case would be very helpful.
Any help will be appreciated.
Basically, perform a normalized cross-correlation and find the relevant peaks. To improve your results, you need to rethink/redo the earlier steps that produced the image you are showing, and consider whether those were actually the best/correct steps.
Here is the normalized cross-correlation result cropped to the original size, and the non-black points where the result is greater than 0.35 (the implementation I used produces values in the range [-1, 1]).
The right image is trivially binarized, and gives 5 components, which is your result.
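For reference, a rough Emgu CV (C#) sketch of the same idea: normalized cross-correlation, a threshold around 0.35, then counting the surviving blobs. The file names and the threshold are placeholders, and the calls are Emgu CV 3.x style, so adapt them to your setup:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

using (var image = new Image<Gray, byte>("scene.png"))       // the preprocessed image
using (var template = new Image<Gray, byte>("template.png")) // one example of the object to count
using (var response = new Mat())
using (var mask = new Mat())
using (var contours = new VectorOfVectorOfPoint())
{
    // Normalized cross-correlation; result values lie in [-1, 1].
    CvInvoke.MatchTemplate(image, template, response, TemplateMatchingType.CcoeffNormed);

    // Keep only strong peaks and convert the float response into an 8-bit mask.
    CvInvoke.Threshold(response, response, 0.35, 255, ThresholdType.Binary);
    response.ConvertTo(mask, DepthType.Cv8U);

    // Each connected blob in the mask corresponds to one detected instance.
    CvInvoke.FindContours(mask, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
    System.Console.WriteLine("Detected instances: " + contours.Size);
}
```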
I'm looking at creating a heatmap of numerical data spread over various locations within a building. I've spent a few hours researching data mapping, etc. and am looking for some advice. I am new to GIS. The majority of options available are mostly tile APIs that use lat/long, and are overkill for my requirements...
Ultimately, I just want to output a background image (a floor plan) with the heatmap overlay demonstrating areas of high intensity. The data is bound to specific locations (example, activity level: 14, location: reception entrance) and so is not randomly distributed over the map. Data has timestamps, and the final objective is to print PNGs of hourly activity for animation.
I feel like I have two options.
I like this tutorial (http://dylanvester.com/post/Creating-Heat-Maps-with-NET-20-%28C-Sharp%29.aspx) as it offers a huge amount of flexibility and the final imagery is very similar to what I would like - it's a great head start. That said, I'd need to assign locations such as "reception entrance" to an x,y co-ordinate, or even a number of x,y co-ordinates. I'd then need to process a matrix prior to every heatmap, taking data from my CSV files and placing activity values in the appropriate co-ordinates.
The other option I think I have is to create a custom shapefile (?) from the floor plan. That is, create a vector graphic with defined regions, where each is attributable to a taggable location. This seems the most flexible option, but I'm really, really struggling to find out how to create shapefiles?
My unfamiliarity with GIS terminology is making searches difficult. The latter seems the most sensible solution (use the shapefile with something like https://gist.github.com/1370472) to change the activity values over time.
Links found:
guthcad.com/cad2shape.htm (but don't have CAD drawing, just raster floorplan)
stackoverflow.com/questions/4014072/arcgis-flex-overlay-floor-plan-png (unhelpful, don't want tiled)
oliverobrien.co.uk/2010/01/simple-choropleth-maps-in-quantum-gis/
gis.stackexchange.com/questions/20901/using-gis-for-interactive-floor-plan (looks great)
To summarise: I'd like to map data bound to locations within a building. There's very good code in a C# tutorial I'd like to use, but linking activity data to co-ordinates is potentially messy (although it could allow transitions of activity between locations to be described, since vectors between co-ordinates could be used...). The other option is to create an image with regions that can be linked to CSV data by something like QGIS. Could people with more experience suggest the best direction, or even alternatives?
Thank you!
I recently did something similar for a heatmap of certain events in the USA.
My input data was simply a CSV file with three columns: Latitude, Longitude, and Number of Events.
After examining available options, I ended up using GHeat.Net. It was quite easy to use and required only a little modification to meet my needs.
The output is a transparent PNG that I then overlaid onto Google Maps.
Although your scale is quite different, I imagine the same solution should work in your case.
UPDATE
If the x,y values are integers in a reasonably small range, and if you have enough samples, you might simply create a (sparse?) array, with each array element's value being the number of samples at that coordinate. Identify the "hottest" array element (the one with the most samples) and equate that to "white" in a heat map, with lesser values corresponding to colder colors (or in other words, normalize all values in the array using the highest value and map the normalized values to a color scale). Map the array to a PNG.
Heat maps like GHeat create a sphere of influence around each data point. Depending on your data, you may not need that.
If your sample rate is not high enough, you could lift the sphere of influence code out of GHeat and apply it to your own array.
The sphere of influence stuff basically adds a value of "1" to the specific coordinate in the data sample, and also adds a smaller value to adjacent pixels in the map in order to provide for smoother-looking maps. I don't know the specific algorithm used in GHeat, but the basic idea is to add to the specific x,y value as well as neighbors using a pattern something like this:
0.25 | 0.5 | 0.25
-----------------
0.5 | 1.0 | 0.5
-----------------
0.25 | 0.5 | 0.25
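To illustrate the array-plus-kernel idea in code: the toy sketch below is not GHeat's actual implementation, just the same approach in miniature. It uses the 3x3 pattern above (a real heat map would use a larger, smoother kernel), assumes the samples are already mapped to pixel coordinates, and assumes System.Drawing is available for writing the PNG.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

static class ToyHeatMap
{
    static readonly double[,] Kernel =
    {
        { 0.25, 0.5, 0.25 },
        { 0.5,  1.0, 0.5  },
        { 0.25, 0.5, 0.25 },
    };

    // samples: (x, y, activity) triples already mapped to pixel coordinates.
    public static void Render((int X, int Y, double Value)[] samples, int width, int height, string path)
    {
        var grid = new double[width, height];

        // Spread each sample over its neighbours using the kernel ("sphere of influence").
        foreach (var s in samples)
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                {
                    int x = s.X + dx, y = s.Y + dy;
                    if (x >= 0 && x < width && y >= 0 && y < height)
                        grid[x, y] += s.Value * Kernel[dx + 1, dy + 1];
                }

        // Normalize by the hottest cell and map to a crude black-to-red-to-white scale.
        double max = 0;
        foreach (var v in grid) max = Math.Max(max, v);

        using (var bmp = new Bitmap(width, height))
        {
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                {
                    double t = max > 0 ? grid[x, y] / max : 0;   // 0 = cold, 1 = hottest
                    int r = (int)(255 * Math.Min(1, 2 * t));
                    int gb = (int)(255 * Math.Max(0, 2 * t - 1));
                    bmp.SetPixel(x, y, Color.FromArgb(r, gb, gb));
                }
            bmp.Save(path, ImageFormat.Png);
        }
    }
}
```

To overlay this on the floor plan you would draw it with per-pixel alpha instead of opaque colours, which is what GHeat's transparent PNG output does.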
I want to use SIFT/SURF for template matching. The image can have 1...n targets.
Using SURF/SIFT, only one target can be extracted. One idea would be to segment the image into many segments and then look for SIFT/SURF matches in each one. It works, but obviously it is not ideal in terms of speed and effort. Is there an alternative approach? Does anyone have source code for scale- and rotation-invariant template matching?
Regards,
If I understand correctly what you are saying (please provide more information), you have N planar image objects. You want to extract SIFT/SURF features from the N images and put all the features in some sort of container (an array, or an acceleration data structure for high-dimensional nearest-neighbor search). When you process a given image, you extract SIFT (or SURF) features and search, for every feature, for its closest feature in the container. You end up with a list of pairs (feature from the current image, feature from the container). Now you have to apply a robust model estimator (RANSAC, for example) to construct the homography. If a good homography can be found (with at least 10-12 inliers), you can be fairly sure your target is there. Obviously, given the array of feature pairs, you subdivide it into groups, where each group corresponds to one of the N planar image objects in your database. (This is not the best way to do it; you should probably associate each feature extracted from the current image with k features from the database and use some form of voting scheme to establish which pairs are valid, but doing so makes things more complicated.)
So, generally speaking, you have to make some decisions (a rough sketch of the matching pipeline follows the list below):
- which features to use (SIFT? SURF? others?)
- which robust model estimator to use (RANSAC? PROSAC? MLESAC?)
- which geometric considerations to use when computing the homography (take advantage of the fact that the homography relates points in two planar objects)
- which multi-dimensional data structure to use to accelerate the search
- how to compute the homography (well, probably there is only one way: normalized DLT)
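For a single planar object, a rough Emgu CV (C#) outline of the detect/match/estimate pipeline described above might look like the sketch below. It substitutes a brute-force matcher with Lowe's ratio test for the acceleration structure, the SURF threshold and inlier count are guesses, and the class/enum names follow the Emgu CV 3.x API, so treat it as a starting point rather than tested code:

```csharp
using System.Collections.Generic;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Util;
using Emgu.CV.XFeatures2D;

static class PlanarObjectDetector
{
    public static bool FindObject(Mat model, Mat scene)
    {
        using (var surf = new SURF(400))                  // Hessian threshold: a guess
        using (var modelKp = new VectorOfKeyPoint())
        using (var sceneKp = new VectorOfKeyPoint())
        using (var modelDesc = new Mat())
        using (var sceneDesc = new Mat())
        using (var matcher = new BFMatcher(DistanceType.L2))
        using (var matches = new VectorOfVectorOfDMatch())
        {
            surf.DetectAndCompute(model, null, modelKp, modelDesc, false);
            surf.DetectAndCompute(scene, null, sceneKp, sceneDesc, false);

            // For each scene feature, find its 2 nearest model features and apply the ratio test.
            matcher.Add(modelDesc);
            matcher.KnnMatch(sceneDesc, matches, 2, null);

            var srcPts = new List<PointF>();  // points on the planar model
            var dstPts = new List<PointF>();  // corresponding points in the scene
            for (int i = 0; i < matches.Size; i++)
            {
                var pair = matches[i].ToArray();
                if (pair.Length == 2 && pair[0].Distance < 0.75 * pair[1].Distance)
                {
                    srcPts.Add(modelKp[pair[0].TrainIdx].Point);
                    dstPts.Add(sceneKp[pair[0].QueryIdx].Point);
                }
            }
            if (srcPts.Count < 10) return false;          // not enough candidate pairs

            // Robustly estimate the homography and count the RANSAC inliers.
            using (var inlierMask = new Mat())
            using (var h = CvInvoke.FindHomography(srcPts.ToArray(), dstPts.ToArray(),
                                                   RobustEstimationAlgorithm.Ransac, 3, inlierMask))
            {
                int inliers = CvInvoke.CountNonZero(inlierMask);
                return !h.IsEmpty && inliers >= 10;       // "at least 10-12 inliers"
            }
        }
    }
}
```

For N objects you would run the matching against all N descriptor sets (or one merged, indexed set) and estimate one homography per group of pairs, as described above.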
If your objects are NOT planar, the problem is more difficult, since the appearance of a 3D rigid object changes as the viewpoint changes. To describe it, you will need K images instead of only one. This is a lot more challenging, because as N and K grow, recognition rates drop. There are probably better ways; I strongly suggest searching Google for the relevant literature.