C# newbie here so please forgive me if my terminology isn't quite correct.
As part of this project, I have a user hold up a piece of paper to a webcam so I can capture, isolate and then eventually display back what they've drawn on it. I've put some restrictions on where the corners of the paper have to be in order for the program to accept it, but there's still the chance that it's distorted by perspective.
Here's an example image that I've captured and isolated the paper out of (image not shown).
What I want to be able to do is distort this image so that the corners of the piece of paper are turned back into an 8.5x11-proportioned rectangle (as if the user had scanned it rather than held it up to the webcam). Rotation and skewing can only get me so far; ideally I would be able to freely transform the image, like in Photoshop. I found an example of four-point image distortion, and I am basically trying to do the opposite. Curious if anyone's had to do this, before I start trying to reverse that example.
This is sometimes called a Quadrilateral warp.
Disclaimer: I work for Atalasoft.
Our DotImage Photo SDK can do this and it's free. Look at QuadrilateralWarpCommand. You need to know the source and destination quadrilateral.
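For anyone who wants to roll their own, the math behind a quadrilateral warp is a perspective (homography) transform: solve for the 3x3 matrix that maps four corner points to four other corner points, then sample the source image through it. Here's a minimal, library-free sketch in Python; the function names are my own, not from any SDK:

```python
# Sketch of the math behind a quadrilateral warp (not any SDK's API):
# a homography H maps each source-quad corner to the matching corner of
# the destination rectangle. Sampling the source through the inverse
# mapping "un-distorts" the photographed paper.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Return a 3x3 H mapping each src (x, y) to the matching dst (x, y)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), likewise for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Apply homography H to a point, dividing out the projective term."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In practice you'd build H in the opposite direction (rectangle corners → photographed quad corners) and, for each destination pixel, sample the source image at `apply_h(H, x, y)` with bilinear interpolation.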
Related
I'm sorry if this question is broad but I have not been able to find any real solutions to the problem I must solve. I need to solve the problem of mapping a user's location to an image that represents a map (like an amusement park map).
One possible solution would be to define GPS coordinates to different parts of the image and then snap the user's location to the closest defined location.
Something else I saw was Geospatial PDF's but I couldn't find much on implementing a way to read Geospatial information from the PDF.
How can I take an image that represents, let's say, a theme park and map a user's location to it?
Short answer:
You can't, by which I mean you can't just take a regular image and snap co-ordinates to its pixels.
Long answer:
You can, but it takes a lot of work and preparation. Here are the basics of what you need to do.
STEP 1 - Georeference the image
To do this you need some GIS software, and an existing map of the area that's registered in the correct co-ordinate space.
If you have the budget, then you should consider using professional software such as Autodesk map 3D or the ESRI suite of tools. If you don't have a budget, you can do this using free tools such as QGIS.
I'll assume QGIS for this description.
Load the existing map that you have for the area (the one that's already referenced) into your GIS package. How and where you get this map is entirely up to you: if you're lucky you might have one someone else did, or the builders of your park might have site plans you can use. Without this source map, however, you can forget any chance of matching the image to it unless you have a list of all the points you want to reference. [SIDE NOTE: It's perfectly feasible to go out with a GPS device and record your points manually, especially if the site you're mapping is not too big and you have full access to it. Since you're only referencing an image of your own, and not building anything, super-precise accuracy is not needed.]
Assuming the use of QGIS, go to the "Raster" menu and select the "Georeferencer" tool.
Once the tool loads, you'll be presented with a child window that allows you to load your "Un-referenced" map image into it. (The load button is marked with a red arrow)
Once you have your raster image loaded, you then need to use the already-referenced map you loaded into QGIS (sorry, no space to document this part; there are a multitude of ways, depending on what data you have) and pick points from it that match the raster in your georeferencer tool.
Make sure the georeferencer tool is in add point mode.
Then click on the image that you loaded into your geo-referencing tool, at the location where you want your first point.
The enter map co-ordinates box will pop open.
If you already know the location of these points (for example because you went out with a GPS, or you have some survey data), then you can simply type them in. If not, click the button marked "From map canvas", and the georeferencer tool will switch to the already-referenced map you have loaded and ask you to click on the same location on the referenced map.
Once you click on the referenced map, QGIS will switch back to the georeferencer tool with the co-ordinates filled in.
At this point, you can click "OK" and the point will be registered on your un-referenced raster image as a referenced point (a small red dot).
Repeat this process, until you have as many locations as you want referenced. You don't have to do everything, but key locations, such as park entrances, corners around the main site outline, centers of prominent buildings and road junctions should be done.
The more points you reference, the more accurate the final referenced raster image will be.
Once you've finished adding the points to your image, you then need to click the yellow cog wheel and fill in the options for the output raster, target SRS, and the other settings that will turn this into a map.
Now, at this stage I've not mentioned a VERY, VERY, VERY important concept: the "SRS", otherwise known as the "Spatial Reference System".
You may have noticed in my screen shots above, when the co-ordinates were entered in the dialog by clicking on the map, that they did not look like the usual latitude/longitude pair that a phone or GPS unit might produce.
That's because ALL of my maps are in an SRS known as "OSGB36" (or EPSG:27700), which is the local spatial reference system for the United Kingdom.
I could, if I'd wanted to, have used the standard GPS system (known as WGS84, or 'EPSG:4326'), but because I'm working only within the UK, doing that would actually have caused errors in my calculations.
If you're working with something as small as an amusement park, then for best results you NEED to find out what your local geographic co-ordinate system is; using standard GPS co-ordinates will introduce too many errors, and might even lead to incorrect location plotting when you finally plot your point on your image.
There's simply far too much info for me to put into an SO post, so I would strongly suggest that you grab a free copy of the EBook I've written on the subject from here:
https://www.syncfusion.com/resources/techportal/details/ebooks/gis
That will fill in a large amount of the background knowledge you need, in order to do this.
Once you've set your settings and added all your reference points, you're then ready to create your referenced raster image by simply clicking on the green triangle.
Once you get to this point and your referenced image is saved, you will have a map/image that is referenced in your local co-ordinates, so that a point given in the same co-ordinate system can be plotted in the correct place on the map.
That however, is only the start of your journey.
STEP 2 - Build a map server
Once you have the image, you then need to host it in something called a WMS server.
Again, describing how to do this from the ground up in an SO post is simply not practical. What you need is something like "GeoServer" (a Java-based, easy-to-use map server system), or something like a bare-bones Linux system with the Apache web server installed and the MapServer CGI binary running under it.
Once you have a map server set up and serving maps using the WMS protocol, you can then move on to the final stage.
STEP 3 - Creating your application to display the map
The final part of the equation is to build an application that displays the map from your WMS server, takes the location of the person or item you want to plot, optionally converts the co-ordinates to the local SRS that matches your image, and then plots the dot over the image in the correct location.
If you're doing this in a web/mobile application, then you'll most likely want to investigate openlayers.js or leaflet.js; if you're doing this in a C# application, then GreatMaps or SharpMap are the toolkits you want to be looking at.
Wrap up
Many folks think that plotting locations onto a map image is a straightforward and simple task; I can tell you now, it's not.
I've built many GIS systems over the years, and even the simplest of them has taken me over 3 months.
Even a simple idea such as the one you're asking about takes tremendous amounts of planning and analysis. There is no quick way of doing this unless you simply want to host a Google Maps image on your web page and match your device co-ordinates up to that.
The second you start to produce custom maps, you automatically set yourself up for a lot of work that's going to take time and patience to get right.
Pixels in images simply don't match up to real-world co-ordinates, and the truth of the matter is simple: there's a reason why mapping and GIS companies charge as much as they do to create systems like this.
References and further reading
http://www.qgistutorials.com/en/docs/georeferencing_basics.html
http://www.digital-geography.com/qgis-tutorial-i-how-to-georeference-a-map/
http://glaikit.org/2011/03/27/image-georeferencing-with-qgis/
http://geoserver.org/
http://mapserver.org/uk/index.html
http://openlayers.org/
All the best with your project, and please know this: you're in for a lot of work, but you're also going to have a lot of fun and learn heaps of new stuff. The world of GIS is by its very nature complicated, but it's also a very fascinating subject, especially when you start drawing your own maps from scratch :-)
Shawty
If your map image represents a not-so-large area, then I would think of this as a rectangle.
It would be just a matter of transforming your Lat/Lng coordinates to (x,y) coordinates inside your image.
Lat2 |----------------------------|
| |
Lat1 |----------------------------|
Long1 Long2
Assign the real world Lat/Long coordinates to each corner of your map:
Bottom Left Corner = Lat1, Long1
Bottom Right Corner = Lat1, Long2
Upper Left Corner = Lat2, Long1
Upper Right Corner = Lat2, Long2
Given the user's longitude and latitude, and knowing the width and height of your image, you can calculate the transformed (x, y) coordinates over the image:
x = (User Longitude - Long1) * Image Width / (Long2 - Long1)
y = (Lat2 - User Latitude) * Image Height / (Lat2 - Lat1)
(The y formula measures from the top edge, because image co-ordinates grow downward.)
You should now be able to put a pin over that (x,y) position.
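The mapping above is just linear interpolation, so it fits in a few lines. Here's a minimal Python sketch (the function name is my own; it assumes a north-up image over an area small enough to ignore map projection effects):

```python
# Sketch of the corner-based linear mapping described above.
# Assumes the image is axis-aligned (north up) and the area is small
# enough that map projection distortion can be ignored.

def latlng_to_pixel(lat, lng, lat1, lat2, lng1, lng2, width, height):
    """Map (lat, lng) to (x, y) pixel coordinates on the image.
    (lat1, lng1) = bottom-left corner, (lat2, lng2) = top-right corner.
    Image y grows downward, so the top edge (lat2) is y = 0."""
    x = (lng - lng1) * width / (lng2 - lng1)
    y = (lat2 - lat) * height / (lat2 - lat1)
    return x, y
```

For example, a user standing exactly in the middle of the mapped area lands at the center pixel of the image.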
Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5. I am using C#, and for the database I am using MySQL. I have a PictureBox on a form and 10-12 buttons (each with some image manipulation function). On clicking one of the buttons, an OpenFileDialog is shown where the user can select the specific file to provide to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program (image not shown).
What I intend to do is that - the program should hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me the approach to do that? Any help or suggestions, please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe that you can use EmguCV (a wrapper to the OpenCV library) to solve the first part of the problem. To do that, you have to consider that the white pixels are the background and that the black pixels are objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by white pixels and the background are the black pixels.
At least for your example case, a very simple algorithm would work.
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it, and separate them. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run to the edge of the image, it's not a hole. If you don't, it might be a hole, or a loop in the trace. Use a threshold for the hole size to determine if it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump to step 1.
When there are no more black pixels in consideration in step 1, you're done.
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things; for instance, color everything "not in consideration" red. It also makes for an interesting visualization if you look at it in real time!
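The numbered steps above boil down to flood-fill connected component labeling. Here's a minimal Python sketch of that core operation on a binary grid (1 = black trace pixel, 0 = white background); the hole-detection step would reuse the same routine with the roles of black and white swapped:

```python
# Minimal flood-fill connected component labeling sketch.
# grid: 2D list where 1 = foreground (trace) and 0 = background.
from collections import deque

def label_components(grid):
    """Return (labels, count): labels is a same-size 2D list where every
    foreground pixel carries its component's number (1, 2, ...)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 1 and labels[sy][sx] == 0:
                count += 1
                # Breadth-first flood fill from the seed pixel.
                q = deque([(sx, sy)])
                labels[sy][sx] = count
                while q:
                    x, y = q.popleft()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            q.append((nx, ny))
    return labels, count
```

Each distinct label then corresponds to one trace, which you can extract into its own image for the hole-labeling pass.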
I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and create an image out of it, basically appending the pixel rows onto the image throughout the video. The video file I plan to supply isn't a regular video; it's actually a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far as the game will let me, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30fps) and compile the rows into an image that will (hopefully) reconstruct the landscape as a single image.
I thought about doing this the quick and dirty way: using an AxWindowsMediaPlayer control, locking the form so that it couldn't be moved or resized, and then using a Graphics object to capture the screen. But that wouldn't be fast enough, and there would be way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I actually just installed the Windows SDK but haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing for me a while back when I tried it). Hopefully someone can give me a pointer in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
You could use a video renderer in renderless mode (e.g. VMR9, EVR), which allows you to process every frame yourself. By using frame-stepping playback you can step one frame at a time and process each frame.
DirectShow.NET helps you use managed code where possible, and I can recommend it. It is, however, only a wrapper around DirectShow, so it might be worthwhile to look at more advanced libraries as well.
A few side notes: wouldn't you experience issues with lighting, which differs from angle to angle? Perhaps it's easier to capture some screenshots and use existing stitching algorithms?
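Once you can get at the decoded frames (via DirectShow or otherwise), the stitching step itself is trivial. A minimal sketch with frames represented as 2D pixel arrays (the function name is my own; frame decoding is out of scope here):

```python
# Sketch of the row-stitching idea: take one horizontal pixel row from
# each decoded frame and stack the rows, top to bottom, into one image.

def stitch_rows(frames, row_index):
    """frames: iterable of 2D pixel arrays (lists of rows), all the same
    size. Returns an image built from frames[i][row_index], in order."""
    return [frame[row_index] for frame in frames]
```

At 30fps, a slow steady pan means each frame contributes one strip of ground, so the output height equals the number of frames captured.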
I've been working on a webapp, and I'm stuck on a problematic issue.
I'll try to explain what I'm trying to do.
Here you see the first big image, which has green shapes in it.
What I want to do is crop those shapes into different PNG files and make their backgrounds transparent, like the example cropped images below the big one.
The first image will be uploaded by the user, and I want to crop it into pieces like those example cropped images. It could be done with the GD library of PHP or by server-side software written in Python or C#, but I don't know what this operation is called, so I don't know what to Google to find information. It has something to do with computer vision: detecting blobs and cropping them into pieces, etc.
Any keywords or links would be helpful.
Thanks for the help.
A really easy way to do this is to use Flood Fill/Connected Component Labeling. Basically, this would just be using a greedy algorithm by grouping any pixels that were the same or similar in color.
This is definitely not the ideal way to detect blobs and is only going to be effective in limited situations. However, it is much easier to understand and code and might be sufficient for your purposes.
OpenCV provides a function named cv::findContours to find connected components in an image. If it's always green vs. white, you want to cv::split the image into channels, use cv::threshold on the blue or the red channel (those will be white in the white regions and near black in the green regions) with THRESH_BINARY_INV (because you want to extract the dark regions), then use cv::findContours to detect the blobs. You can then compute the bounding rectangle with cv::boundingRect, create a new image of that size, and use the contour you got as a mask to fill the new image.
Note: These are links to the C++ documentation, but those functions should be exposed in the python and C# wrappers - see http://www.emgu.com for the latter.
I believe this Wikipedia article covers the problem really well: http://en.wikipedia.org/wiki/Blob_detection
Can't remember any ready-to-use solutions though (-:
It really depends on what kinds of images you will be processing.
As Brian mentioned, you could use Connected Component Labeling, which is usually applied to binary images, where the foreground is denoted by white pixels and the background by black pixels (or the opposite). The problem is then how to transform the original image into a binary one. If all images are like the example you provided, this is straightforward and can be accomplished with thresholding. OpenCV provides useful methods:
Threshold
FindContours for finding contours of connected components
DrawContours for extracting each component individually into a separate image
For more complex images, however, all bets are off.
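The thresholding step mentioned above can be sketched in a few lines, assuming the grayscale image is a 2D array of 0-255 values (this illustrates the idea, it is not OpenCV's implementation):

```python
# Sketch of binary thresholding: turn a grayscale image into a 0/1 mask
# so connected component labeling can treat shapes as foreground.

def threshold(gray, t, invert=False):
    """gray: 2D list of 0-255 values. Pixels brighter than t become 1
    and the rest 0; with invert=True the roles swap (useful when the
    shapes you want are the dark regions)."""
    fg, bg = (0, 1) if invert else (1, 0)
    return [[fg if v > t else bg for v in row] for row in gray]
```

The `invert=True` case corresponds to OpenCV's THRESH_BINARY_INV mode mentioned in the other answer.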
I have a ton of pdfs scans that I have converted to images. Most of these scans contain a lot of whitespace around the edges.
What is the best way to go about finding a bounding box for the actual content and then subsequently removing the whitespace?
I've thought about writing a program that just displays the image, then you drag a box and it saves the image and moves on to the next one. This would be VERY time consuming, but it would get the job done. I'd like to be able to automate this process somehow using C#, either by just cropping the image or perhaps by suggesting a bounding box.
Emgu CV (on SourceForge) is a .NET wrapper around OpenCV, which has numerous image manipulation capabilities, including image filters and a bounding box algorithm that could solve this pretty easily.
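Whichever library you pick, the underlying autocrop idea is simple: find the bounding box of every pixel darker than a near-white threshold, then crop to it. A minimal Python sketch (the function names and the default threshold are my own choices, not any library's API):

```python
# Sketch of whitespace autocrop: locate the bounding box of non-white
# content in a grayscale scan, then crop to it (optionally with a margin).

def content_bbox(gray, white_threshold=245):
    """gray: 2D list of 0-255 values. Returns (left, top, right, bottom)
    of the non-white content, inclusive, or None if the page is blank."""
    xs = [x for row in gray for x, v in enumerate(row) if v < white_threshold]
    ys = [y for y, row in enumerate(gray) if any(v < white_threshold for v in row)]
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)

def crop(gray, bbox, margin=0):
    """Crop the image to the bounding box, expanded by an optional margin."""
    left, top, right, bottom = bbox
    top = max(0, top - margin)
    bottom = min(len(gray) - 1, bottom + margin)
    left = max(0, left - margin)
    right = min(len(gray[0]) - 1, right + margin)
    return [row[left:right + 1] for row in gray[top:bottom + 1]]
```

For noisy scans you'd likely want to ignore isolated specks (e.g. require a minimum run of dark pixels per row or column) before trusting the box.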
http://code.google.com/p/aforge/
AForge is a complete C# library, not a wrapper. That said, OpenCV is a much more professional tool compared to AForge.
Are you talking about scanned documents or scanned photos? What format are your images in? It sounds like you need an AutoCrop function.
Here is a freeware C# component that has an autocrop function. It should work well on B/W documents. You will need to see if it works the way you want if you are using photos.
http://www.hi-components.com/nievolution_features.asp
This component would also allow you to write code to load your images, draw a bounding box, and then save the cropped images as needed.