I have a program for a transportation company where the optimal route is computed with Dijkstra's algorithm, with cities as vertices and routes as edges. To find the weight of an edge, I connected the cities on a map with a straight line, measured it, and accepted that as the weight of the edge. But in real life routes aren't straight, so how can I fix that?
For my project I must solve a logistics problem by creating software. Can anyone give me an idea of how to approach it?
As you already found out, the problem is not as simple as it might seem.
First of all, connecting only major cities is a bad idea, as they're probably not connected directly by highways (unless it's the U.S. or somewhere similar).
That's your current idea:
What I would propose is to take every minor city on every route that makes sense and add it as a vertex to your Dijkstra graph:
Now we come to the point where we can see which roads actually exist in the real world. Just from looking at our graph, we might presume that taking the bottom path would be more efficient. But what if we found out this:
We can now easily conclude that the upper path is actually much better, because you can travel at twice the speed of the bottom one.
Is that a very precise classification? No, it's not.
We might want to think about the traffic on each road and change the weights of the edges dynamically. But that's probably too much for your basic implementation.
What I would do in the end is think about what data I can gather on my own, or with a little help.
So I can definitely:
somehow scrape some data about the actual ways of reaching point B from point A; good references are the Google Maps API and the Bing Maps API;
gather the minor cities along the way while finding the real-world routes from point A to point B;
try to find out what the speed limit is on each road (if there is any database for it).
Actually, you might want to go all-in on either Google Maps or Bing Maps and just let them provide you with the best possible route.
They both have quite up-to-date data for any road you need.
There is no way you can gather as much data as they do.
You'd have everything served to you on a plate, if you feel that's the way you want to go.
If not, I would go the hybrid way: get some vital data from either maps API, use it for my Dijkstra algorithm, and then write a simple algorithm for computing the actual weight of each edge based on possible modifiers (speed limit, traffic if the API provides it, and so on).
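To make that concrete, here's a minimal sketch of what such a weight function could look like (the class, field names and fallback values are purely illustrative, not part of any API):

    // Purely illustrative: weight of an edge derived from real road
    // distance, speed limit and a traffic modifier from a maps API.
    public class RouteEdge
    {
        public double DistanceKm;     // road distance from the maps API
        public double SpeedLimitKmh;  // 0 if unknown
        public double TrafficFactor;  // 1.0 = free flow, 2.0 = twice as slow

        // Weight = estimated travel time in hours, so Dijkstra
        // minimizes time rather than straight-line distance.
        public double Weight()
        {
            double speed = SpeedLimitKmh > 0 ? SpeedLimitKmh : 50.0; // arbitrary fallback
            double factor = TrafficFactor > 0 ? TrafficFactor : 1.0;
            return DistanceKm / speed * factor;
        }
    }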
Related
I have a polygon (a sequence of 2D points) loaded in my C# program (I can parse it from GPX, Google polyline, and GeoJSON). And I have around 1000 'segments' that are GeoJSON LineStrings.
Some of these segments are far off the track, while others may intersect the track perfectly. My question is: what would be the fastest way (ideally using an existing in-memory library) to detect which segments are part of the overall track? (The points obviously don't have to match 100%; they can be a few meters off the track.)
Consider the situation where I have a recorded GPS track (of a car, for example) and I want to check against a library of streets whether that track passed through those streets (ideally in the specified direction, if possible).
So, my main questions:
Is there an out-of-the-box open source library available that has implemented this?
If not, I'm happy to contribute it, but then I'm looking for a good description of such an algorithm.
Further clarification
I have indeed found several options for working out whether a point is inside, or on, a polygon (see here).
But the main challenge is to find out whether someone has done this before in .NET, and to understand whether there are out-of-the-box possibilities for this.
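For illustration, the brute-force check I could write myself might look roughly like this (a sketch assuming the NetTopologySuite library; note that Distance() works in coordinate units, so the coordinates would need projecting to a metric system first, and direction is ignored):

    // Sketch assuming NetTopologySuite. A segment counts as "on the track"
    // if every one of its points lies within the tolerance of the track.
    // Distance() is in coordinate units, so project to a metric CRS first
    // if the tolerance should be in meters. Direction is not checked.
    using System.Collections.Generic;
    using System.Linq;
    using NetTopologySuite.Geometries;

    static class TrackMatcher
    {
        public static List<LineString> MatchingSegments(
            LineString track, IEnumerable<LineString> segments, double tolerance)
        {
            return segments
                .Where(seg => seg.Coordinates.All(
                    c => track.Distance(new Point(c)) <= tolerance))
                .ToList();
        }
    }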
Any help is appreciated.
Despite Googling around a fair amount, the only things that surfaced were about neural networks, using existing APIs to find tags for an image, and webcam tracking.
What I would like to do is create my own data set for some objects (a database containing the images of a product, or a fingerprint of each image, along with manufacturer information about the product), and then use some combination of machine learning and object detection to determine whether a given image contains any product from the data I've collected.
For example, I would like to take a picture of a chair and compare it against my data to find which of the chairs in my database is most likely the one in the picture.
What would be an approach to tackling this problem? I have already considered using OpenCV and feel that it is a starting point, and probably how I'll detect the object, but I haven't found out how to use it to solve my problem.
I think in the end it doesn't matter which tool you use to tackle your problem; you will probably need some kind of machine learning either way. It's hard to say which method would give the best detection, so I'd recommend trying a tool like Weka. It's a collection of machine learning algorithms and lets you easily try out what works best for you.
Before you can start with the machine learning, you will first need to extract some features from your dataset: you can hardly compare the images pixel by pixel, which would mean a huge computational effort and would not necessarily provide the needed results. Try to extract features that make your images unique, like average colour or brightness, and maybe try to extract some shapes or sizes from the image. In the end you will feed your algorithm only the features you extracted from your images, not the images themselves.
Which features are good is hard to define; it depends on your particular case. Generally it helps to have not just one but multiple features covering completely different aspects of the image. To extract the features you could use OpenCV or any other image-processing tool you like. Get the features of all the images in your dataset and get started with the machine learning.
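As a sketch, extracting a handful of such global features might look like this (using OpenCvSharp as a .NET OpenCV wrapper; the particular features chosen here are just examples):

    // Sketch using OpenCvSharp (a .NET wrapper for OpenCV); the chosen
    // features (average colour, brightness, edge density) are examples only.
    using OpenCvSharp;

    static class FeatureExtractor
    {
        public static double[] Extract(string path)
        {
            using var img = Cv2.ImRead(path, ImreadModes.Color);

            Scalar meanColor = Cv2.Mean(img);          // average B, G, R

            using var gray = new Mat();
            Cv2.CvtColor(img, gray, ColorConversionCodes.BGR2GRAY);
            Scalar meanBrightness = Cv2.Mean(gray);    // average brightness

            using var edges = new Mat();
            Cv2.Canny(gray, edges, 100, 200);          // thresholds are arbitrary
            double edgeDensity =
                (double)Cv2.CountNonZero(edges) / (edges.Rows * edges.Cols);

            // Feed vectors like this one to the learner instead of raw pixels.
            return new[] { meanColor.Val0, meanColor.Val1, meanColor.Val2,
                           meanBrightness.Val0, edgeDensity };
        }
    }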
From what I understand, you want to build a Content-Based Image Retrieval (CBIR) system.
There are plenty of methods to do this. What the best method for your problem is depends on:
the type of objects you want to recognize,
the type of images that will be used to search for the objects,
the priorities of your system (efficiency, robustness, etc.).
You gave the example of recognizing chairs. In your system, what would be the determining factor for selecting the most similar chair? The color of the chair? The shape of the chair? These are typical questions you have to answer before choosing a method.
Either way, one of the most widely used methods for solving such problems is the Bag-of-Words model (also referred to as Bag of Features). I wish I could help more, but for that I'd need you to explain better what the final goals of your work / project are.
I was hoping I could get some guidance from the Stack Overflow community regarding a dilemma I have run into with my senior project. First off, I want to state that I am a novice programmer, and I'm sure some of you will quickly tell me this project was way over my head; I've quickly become well aware that this is probably true.
Now that that's out of the way, let me give some definitions:
Project Goal:
The goal of the project, like that of many others sought in various SO questions (many of which have been very helpful to me in the course of this effort), is to detect whether a parking space is full or available, eventually reporting this back to the user (ideally via an iPhone or Droid or other mobile app for ease of use; this aspect was quickly deemed outside the scope of my efforts due to time constraints).
Tools in Use:
I have made heavy use of the AForge.NET library, which has provided me with all of the building blocks for the project: capturing video from an IP camera, applying filters to images, and ultimately performing the detection itself. From this you'll know that I have chosen to program in C#, mainly due to its ease of use for beginners. Other options included MATLAB/C++, C++ with OpenCV, and other alternatives.
The Problem
Here is where I have run into issues. Linked below is an image that has been pre-processed in the AForge Image Processing Lab. The sequence of filters and processes used was: grayscale, histogram equalization, Sobel edge detection, and finally Otsu thresholding (though I'm not convinced the final step is needed).
http://i.stack.imgur.com/u6eqk.jpg
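For reference, that filter sequence expressed in code against AForge.NET looks roughly like this (a sketch; drop the Otsu step if it turns out to be unnecessary):

    // The pre-processing sequence above, as AForge.NET filter calls.
    using System.Drawing;
    using AForge.Imaging.Filters;

    static class PreProcessor
    {
        public static Bitmap Apply(Bitmap frame)
        {
            var sequence = new FiltersSequence(
                Grayscale.CommonAlgorithms.BT709,  // grayscale (expects 24/32bpp input)
                new HistogramEqualization(),       // histogram equalization
                new SobelEdgeDetector(),           // Sobel edge detection
                new OtsuThreshold());              // Otsu thresholding (the optional step)
            return sequence.Apply(frame);
        }
    }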
As you can tell from the image with the naked eye, of course, there are sequences of detected edges that clearly are parked cars in the spaces I am monitoring with the camera. These cars are clearly defined by the pattern of brightened wheels, the sort of "double railroad track" pattern that essentially represents the outer edging of the side windows, and even the outline of the license plate in this instance. Specifically though, in a continuation of the project the camera chosen would be a PTZ to cover as much of the block as possible, and thus I'd like to focus on just the side features of the car (eliminating factors such as the license plate). Features such as a rectangle for a sunroof might also be considered, but obviously that is not a universal feature of cars, whereas the general window outline is.
We can all see that there are differences in these patterns, varying of course with car make and model. But generally this sequence not only retrieves the desired features successfully, it also eliminates the road from view (important, as I intend to use road color as a "first litmus test", if you will, for detecting an empty space: if I detect a gray level consistent with my data for the road, especially when no edges are detected in a region, I feel I can safely assume the space is empty). My question is this, and hopefully it is generic enough to be practically beneficial to others out there on the site:
Focused Question:
Is there a way to take an image segment (via cropping) and then compare the detected edge sequence with future new frames from the camera? More specifically, is there a way to do this while allowing leeway, essentially creating a tolerance threshold for minor differences in edges?
Personal Thoughts/Brainstorming on The Question:
-- I'm sure there's a way to literally compare pixel by pixel: crop to just the rectangle around your edges and then slide your cropped image across the new processed frame, comparing pixel by pixel. But that wouldn't help much unless you had an exact match to your detected edges.
All help is appreciated, and I'm more than happy to clarify as needed as well.
Let me give it a shot.
You have two images; let's call them BeforePic and AfterPic. For each of these two pictures you have an ROI (region of interest), i.e. a cropped segment.
You want to see whether AfterPic.ROI is very different from BeforePic.ROI. By "very different" I mean that the difference is greater than some threshold.
If this is indeed your problem, then it should be split into three parts:
get BeforePic and AfterPic (and the ROI for each).
translate the abstract concept of picture/edge difference into a numerical one.
compare the difference to some threshold.
The first part isn't really a part of your question, so I'll ignore it.
The last part basically comes down to finding the right threshold; again, out of the scope of the question.
The second part is what I think is the heart of the question (I hope I'm not completely off here). For this I would use the Shape Context algorithm (in the PDF, it's best for you to implement it only up to section 3.3, as it becomes more robust than you need from section 3.4 onward).
Shape Context is an image-matching algorithm that uses image edges, with great success rates.
Implementing it was my finals project, and it seems like a perfect match (no pun intended) for you. If your edges are good and your ROI is accurate, it won't fail you.
It may take some time to implement, but if done correctly, this will work perfectly for you.
Bear in mind that a poor implementation might run slowly; I've seen a worst case of 5 seconds per image. A good (yet not perfect) implementation, on the other hand, will take less than 0.1 seconds per image.
Hope this helps, and good luck!
Edit: I found an implementation of ShapeContext in C# on CodeProject, if it's of any interest.
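If ShapeContext turns out to be more than you need, a trivial numeric baseline for parts 2 and 3 is a plain pixel-wise difference over the thresholded ROIs. A rough sketch (GetPixel is slow; LockBits would be the faster route):

    // Not ShapeContext -- just a plain pixel-wise baseline for parts 2 and 3.
    // Assumes both ROIs are equally sized, already-thresholded bitmaps.
    using System.Drawing;

    static class RoiComparer
    {
        public static bool Changed(Bitmap before, Bitmap after, double threshold)
        {
            int differing = 0;
            for (int y = 0; y < before.Height; y++)
                for (int x = 0; x < before.Width; x++)
                    if (before.GetPixel(x, y).GetBrightness() !=
                        after.GetPixel(x, y).GetBrightness())
                        differing++;

            double fraction = (double)differing / (before.Width * before.Height);
            return fraction > threshold; // e.g. 0.05 = 5% of pixels changed
        }
    }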
I take on a fair number of machine vision problems in my work and the most important thing I can tell you is that simpler is better. The more complex the approach, the more likely it is for unanticipated boundary cases to create failures. In industry, we usually address this by simplifying conditions as much as possible, imposing rigid constraints that limit the number of things we need to consider. Granted, a student project is different than an industry project, as you need to demonstrate an understanding of particular techniques, which may well be more important than whether it is a robust solution to the problem you've chosen to take on.
A few things to consider:
Are there pre-defined parking spaces on the street? Do you have the option to manually pre-define the parking regions that will be observed by the camera? This can greatly simplify the problem.
Are you allowed to provide incorrect results when cars are parked illegally (taking up more than one spot, for instance)?
Are you allowed to provide incorrect results when there are unexpected environmental conditions, such as trash, pot holes, pooled water or snow in the space?
Do you need to support all categories of vehicles (cars, flat-bed trucks, vans, delivery trucks, motorcycles, mini electric cars, tripod vehicles, ?)
Are you allowed to take a baseline snapshot of the street with no cars present?
As to comparing two sets of edges, probably the most robust approach is known as geometric model finding (describing the edges of interest mathematically as a series of 'edgels', combining them into chains, and comparing the geometry), but this is overkill for your application. I would look more toward thresholding the count of 'edge pixels' present in a parking region, or differencing from a baseline image (you need to be careful of image shift, however: material expansion from outdoor temperature changes may cause the field of view to change slightly as the camera physically moves).
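For instance, the edge-pixel-count idea might be sketched like this (names and the calibration threshold are illustrative):

    // Illustrative edge-pixel count: the parking region is a manually
    // pre-defined rectangle in the binarized edge image, and the space is
    // called occupied when the white-pixel count exceeds a calibrated threshold.
    using System.Drawing;

    static class OccupancyCheck
    {
        public static bool SpaceOccupied(Bitmap edgeImage, Rectangle space, int threshold)
        {
            int edgePixels = 0;
            for (int y = space.Top; y < space.Bottom; y++)
                for (int x = space.Left; x < space.Right; x++)
                    if (edgeImage.GetPixel(x, y).GetBrightness() > 0.5f)
                        edgePixels++;
            return edgePixels > threshold; // calibrate per space, per camera position
        }
    }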
We need to calculate driving distances for records in a SQL Server database, so I need to find some sort of library or program that will let me do so without connecting to the internet (if it has its own database, great; if not, I know where to get data). I'm not too worried about calculation types right now (we're probably going to go with Dijkstra's), but we just need something offline. Also, I will be dealing with multiple countries, though mostly the USA.
So far I haven't found anything that would work reliably; the closest is MapPoint (per Marc Gravell). So I want to ask: what offline solutions are available to plug into, call from, or work alongside my code (Delphi and .NET) to calculate driving distances? Thanks.
Options:
For a sensible number of locations, you could obtain (purchase, calculate, etc.) a travel matrix between all locations; it gets large as you increase the count, though
If you have the lat/long of each, you can compute the great-circle distance quite easily (see the sketch after this list); but it tends to get messy near lakes, oceans, etc.
You could use an offline product like MapPoint desktop, perhaps by storing a queue of unknown routes and processing those outside the db
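The great-circle sketch mentioned above, using the standard haversine formula:

    // Great-circle distance between two lat/long points (haversine formula).
    using System;

    static class GreatCircle
    {
        public static double DistanceKm(double lat1, double lon1,
                                        double lat2, double lon2)
        {
            const double R = 6371.0; // mean Earth radius, km
            double dLat = (lat2 - lat1) * Math.PI / 180.0;
            double dLon = (lon2 - lon1) * Math.PI / 180.0;
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(lat1 * Math.PI / 180.0) *
                       Math.Cos(lat2 * Math.PI / 180.0) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return R * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
        }
    }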
Please check http://www.routeware.dk for RW Net. It is developed in Delphi and can use TIGER data for offline calculations. Very fast for large-scale matrix calculations.
btw: A better forum for such questions is https://gis.stackexchange.com/
OK, after sleeping on the problem, I found a solution by using Google to search for "vehicle routing software." So far I have found three options that look like they might work, and I will be investigating them: ALK Technologies' PCMiler, Telogis' developer tools, and DNA Evolutions' JOpt.NET. There are still plenty more companies to check out for developer tools under that search phrase. I think my main problem was that I was using "driving distance" and "route distance" as my search terms yesterday.
Edit: for what I'm looking for, Telogis seems to have the most complete function set.
I know my question seems pretty vague, but I can't think of a better way to put it, so I'll start off by explaining what I'm trying to do.
I'm currently working on a project where I've been given a map and I'm coding a 'critter' that should be able to navigate its way around it; the critter has various other functions, but those are not relevant to the current question. The whole program and solution are being written in C#.
I can control the speed of the critter, and retrieve its current location on the map by returning its current X and Y position, I can also set its direction when it collides with the terrain that blocks it.
The only problem I have is that I can't think of a way to navigate the map intelligently; so far I've been basing movement on the direction the critter is facing when it collides with the terrain, and that is in no way a good way of moving around the map!
I'm not a games programmer, and this is for a software assignment, so I have no clue on AI techniques.
Here's a link to an image of what the maps and critters look like:
Map and Critter image
I'm in no way looking for anyone to give me a full solution, just a push in the general direction on map navigation.
If the only knowledge of the environment you have is the position of your critter and its velocity, the best you can do is a wall-following algorithm, I think. If you can detect some of the other things in your environment, you have many more options.
Some of the more popular algorithm types are...
A* Search (The classic)
Visibility Graphs
Voronoi Diagrams (similar to the above)
Potential Fields
Potential Fields is a fancy way of saying that every obstacle or wall has a "repulsive force" while every goal has an "attractive force". The strength of the force is based on the distance from the object and the "severity" of the object (a pit of lava is much more severe to travel through than a bumpy road). After constructing the force fields, the naive algorithm boils down to following the path of least resistance; better versions can detect local minima and maxima and escape those wells. There's a small sketch after the diagram below.
Critter
-----\   /-------\
      \ /         \
       \/          \
Local Minima Trap   \
                     \
                      \
                     Goal
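Here's a toy sketch of a single potential-field step on a grid (the force constants are arbitrary, and no local-minimum escape is attempted):

    // Toy potential-field step on a grid: obstacles repel, the goal
    // attracts, and the critter moves one cell toward lower potential.
    using System;
    using System.Collections.Generic;

    static class PotentialField
    {
        public static (int X, int Y) NextStep(
            (int X, int Y) pos, (int X, int Y) goal,
            List<(int X, int Y)> obstacles)
        {
            double PotentialAt(int x, int y)
            {
                // Attractive term: grows with distance to the goal.
                double p = Math.Sqrt((x - goal.X) * (x - goal.X) +
                                     (y - goal.Y) * (y - goal.Y));
                // Repulsive terms: blow up near obstacles ("severity" = 50).
                foreach (var o in obstacles)
                {
                    double d2 = (x - o.X) * (x - o.X) + (y - o.Y) * (y - o.Y);
                    p += 50.0 / (d2 + 0.1);
                }
                return p;
            }

            var best = pos;
            double bestP = PotentialAt(pos.X, pos.Y);
            foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                double p = PotentialAt(pos.X + dx, pos.Y + dy);
                if (p < bestP) { bestP = p; best = (pos.X + dx, pos.Y + dy); }
            }
            return best; // best == pos means we're stuck in a local minimum
        }
    }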
A* Search
Take a look at the A* pathfinding algorithm. It's essentially the standard approach for stuff like this.
Amit Patel's write-up on pathfinding for games has a pretty good introduction to A*, as well as to popular variants of the algorithm.
You'll find a C# implementation here, and here
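If you'd rather see the idea than a full library, here's a minimal grid-based sketch (4-connected grid, Manhattan heuristic, .NET 6's PriorityQueue; a real implementation would add the variants Amit describes):

    // Minimal grid A*: 4-connected grid, Manhattan heuristic,
    // .NET 6 PriorityQueue. Returns the path, or null if none exists.
    using System;
    using System.Collections.Generic;

    static class Pathfinder
    {
        public static List<(int X, int Y)> AStar(
            bool[,] blocked, (int X, int Y) start, (int X, int Y) goal)
        {
            int w = blocked.GetLength(0), h = blocked.GetLength(1);
            int H((int X, int Y) p) => Math.Abs(p.X - goal.X) + Math.Abs(p.Y - goal.Y);

            var open = new PriorityQueue<(int X, int Y), int>();
            var gScore = new Dictionary<(int, int), int> { [start] = 0 };
            var cameFrom = new Dictionary<(int, int), (int X, int Y)>();
            open.Enqueue(start, H(start));

            while (open.Count > 0)
            {
                var current = open.Dequeue();
                if (current == goal)
                {
                    // Walk the cameFrom links backwards to rebuild the path.
                    var path = new List<(int X, int Y)> { current };
                    while (cameFrom.TryGetValue(current, out var prev))
                    {
                        current = prev;
                        path.Add(current);
                    }
                    path.Reverse();
                    return path;
                }
                foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                {
                    var next = (X: current.X + dx, Y: current.Y + dy);
                    if (next.X < 0 || next.Y < 0 || next.X >= w || next.Y >= h
                        || blocked[next.X, next.Y])
                        continue;
                    int g = gScore[current] + 1;
                    if (!gScore.TryGetValue(next, out var old) || g < old)
                    {
                        gScore[next] = g;
                        cameFrom[next] = current;
                        open.Enqueue(next, g + H(next)); // f = g + h
                    }
                }
            }
            return null; // goal unreachable
        }
    }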
Dynamic A*
Let's say the terrain you'll be searching is not known ahead of time, but rather is discovered as the agent explores its environment. If your agent comes across a previously unknown obstacle, you could just update the agent's map of the terrain and then re-run A* to find a new path to the goal that routes around the obstruction.
While a workable solution, rerunning the planning algorithm from scratch every time you find a new obstacle results in a sizable amount of redundant computation. For example, once you're around the obstacle, it might be that the most efficient route to the goal follows the one you were planning on taking before you discovered the obstacle. By just rerunning A*, you'll need to recompute this section of the previous path.
You can avoid this by using Dynamic A* (D*). Since it keeps track of previously computed paths, when the agent finds a new obstacle, the system only needs to compute new routes in the area around the obstacle. After that, it can just reuse existing paths.
I would use a goal-oriented approach. Your question states that the goal is to explore the map and avoid obstacles, so that's what we make our goal. But how do we explore the whole map? We explore whatever is unexplored.
From the outset you have only one explored area: the square you are on. The rest of the map is marked as unexplored. You choose an unexplored location and make it your goal to explore it. But how do you get there? You create a subgoal of exploring a location next to it. And how do you do that? By exploring the square next to that, and so on, until your original goal is broken down into a sequence of explorations, starting from your current square and navigating to the target square.
As you hit obstacles and discover features of the map, some of the subgoals may need to change. E.g. when you hit a wall, the subgoal to explore that square has to be scrapped, and you create a new plan to find an alternative route. This is known as backtracking. A toy version of the loop is sketched below.
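A toy explore-and-backtrack loop on a grid (a sketch; isWall is assumed to also return true outside the map bounds):

    // Toy explore-and-backtrack loop on a grid (depth-first exploration).
    // Yields every square the critter stands on, including backtracking moves.
    using System;
    using System.Collections.Generic;

    static class Explorer
    {
        public static IEnumerable<(int X, int Y)> Explore(
            (int X, int Y) start, Func<(int X, int Y), bool> isWall)
        {
            var visited = new HashSet<(int, int)> { start };
            var trail = new Stack<(int X, int Y)>();
            var pos = start;

            while (true)
            {
                yield return pos;
                (int X, int Y)? next = null;
                foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                {
                    var cand = (X: pos.X + dx, Y: pos.Y + dy);
                    if (!visited.Contains(cand) && !isWall(cand)) { next = cand; break; }
                }
                if (next.HasValue)          // subgoal: explore a new square
                {
                    trail.Push(pos);
                    pos = next.Value;
                    visited.Add(pos);
                }
                else if (trail.Count > 0)   // dead end: backtrack one step
                {
                    pos = trail.Pop();
                }
                else
                {
                    yield break;            // everything reachable is explored
                }
            }
        }
    }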
That's basically it for the high level description. I hope it helps!
I seem to be late to the party. If your critter has a GPS and the full map at hand, the right thing to do is definitely A*; and if the map is small enough, a simple BFS would do as well if you don't feel like coding up A* (A* has quite a few corner cases you want to handle right).
However, a different question is: what if your critter only knows the direction of the goal and can only observe locally what is around it? What if your critter does not know the full map?
In that case you would want to implement the "bug algorithm" for navigation: http://www.cs.cmu.edu/~./motionplanning/lecture/Chap2-Bug-Alg_howie.pdf
It's a cute piece of algorithm that works on any unknown map; you'd have a blast coding it, I'm sure.