We need to calculate driving distances for records in a SQL Server database, so I need to find some sort of library or program that will let me do so without connecting to the internet (if it has its own database, great; if not, I know where to get data). I'm not too worried about calculation types right now, we're probably going to go with Dijkstra's, but we just need something offline. Also, I will be dealing with multiple countries, though mostly the USA.
So far, I haven't found anything that would work reliably; the closest is MapPoint (per Marc Gravell). So I want to ask what offline solutions are available, either to plug into, call from, or work alongside my code (Delphi and .NET), to calculate driving distances? Thanks.
Options:
For a sensible number of locations, you could obtain (purchase, calculate, etc.) a travel matrix between all locations - it gets large as you increase the count, though
If you have the lat/long for each, you can do great-arc quite easily (see the sketch after this list); but it tends to get messy near lakes, oceans, etc.
You could use an offline tool like MapPoint desktop, perhaps by storing a queue of unknown routes and processing those outside the db
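For the great-arc option above, here is a minimal C# sketch of the haversine great-circle formula; note that it gives straight-line ("as the crow flies") distance, not driving distance, and all names are illustrative:

using System;

static class GreatCircle
{
    const double EarthRadiusKm = 6371.0;

    // Haversine formula: straight-line distance between two lat/long points in
    // kilometres. This is NOT a driving distance, just a lower bound.
    public static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = ToRad(lat2 - lat1);
        double dLon = ToRad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return EarthRadiusKm * 2 * Math.Asin(Math.Sqrt(a));
    }

    static double ToRad(double degrees) => degrees * Math.PI / 180.0;
}

// Example: rough distance from New York to Chicago (roughly 1,150 km by air).
// double km = GreatCircle.DistanceKm(40.71, -74.01, 41.88, -87.63);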
Please check http://www.routeware.dk for RW Net. It is developed in Delphi and can use TIGER data for off-line calculations. Very fast for large-scale matrix calculations.
btw: A better forum for such questions is https://gis.stackexchange.com/
Ok, after sleeping on the problem, I found a solution by using Google to search on "vehicle routing software." So far I have found three options that look like they might work, and I will be investigating them: ALK Technologies' PCMiler, Telogis' developer tools, and DNA Evolutions' JOpt.NET. There are still plenty more companies to check out for developer tools under that search phrase. I think my main problem was that I was using "driving distance" and "route distance" as my search terms yesterday.
Edit: for what I'm looking for, Telogis seems to have the most complete function set.
I have a program for a transportation company where the optimal route is computed with Dijkstra's algorithm: cities are vertices and routes are edges. To find the weight of an edge, I connected the cities on a map with a straight line and measured it, then used that as the edge weight. But in real life routes aren't straight, so how can I fix this?
In my project I have to solve a logistics problem by building software. Can anyone give me an idea of how to approach it?
As you have already found out, the problem is not as simple as it might seem.
First of all, connecting only major cities is a bad idea, as they're probably not connected directly by highways (unless you're in the U.S. or somewhere similar).
That's your current idea:
What I would propose is to try to get every minor city on every route that makes sense and add it as a vertex to your Dijkstra graph:
Now we come to the point where we can see which routes actually exist in the real world. Just from looking at our graph, we might presume that taking the bottom path should be more efficient. But what if we found out this:
We can easily conclude now that the upper path is actually much better, because you can travel at twice the speed of the bottom one.
Is that a very precise classification? No, it's not.
We might want to factor in the traffic on each route and change the edge weights dynamically. But that's probably too much for a basic implementation.
What I would do in the end is think about what data I can gather on my own or with little help.
So I can definitely:
somehow scrape some data about actual ways of reaching point B from point A; good references are either the Google Maps API or the Bing Maps API;
gather the minor cities along the way while finding real-world routes from point A to point B;
try to find out what the speed limit is on each segment (if there is any database for it)
Actually, you might want to go all-in on either Google Maps or Bing Maps and just let them provide you with the best route possible.
They both have quite up-to-date data on any road you need.
There is no way you can gather as much data as they do.
You get everything handed to you on a plate, if you feel that's the way you want to go.
If not, I would go the hybrid way: get the vital data from one of the maps APIs, use it for your Dijkstra algorithm, and then write a simple routine that computes the actual weight of each edge based on possible modifiers (speed limit, traffic if the API provides it, and so on) - a rough sketch of that weighting idea follows.
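To make that last point concrete, here is a minimal C# sketch (not the asker's actual code) where each edge's weight is the estimated travel time, distance divided by speed limit, and Dijkstra then runs on those weights. All names are illustrative, and it assumes .NET 6+ for the built-in PriorityQueue:

using System;
using System.Collections.Generic;

class RoadGraph
{
    // adjacency list: city -> (neighbour, travel time in hours)
    private readonly Dictionary<string, List<(string To, double Hours)>> _edges = new();

    public void AddRoad(string cityA, string cityB, double distanceKm, double speedKph)
    {
        double hours = distanceKm / speedKph;   // edge weight = travel time, not distance
        Add(cityA, cityB, hours);
        Add(cityB, cityA, hours);               // assume two-way roads
    }

    private void Add(string origin, string dest, double hours)
    {
        if (!_edges.TryGetValue(origin, out var list)) _edges[origin] = list = new();
        list.Add((dest, hours));
    }

    // Plain Dijkstra over travel time; returns total hours, or infinity if unreachable.
    public double FastestTime(string start, string goal)
    {
        var best = new Dictionary<string, double> { [start] = 0 };
        var queue = new PriorityQueue<string, double>();
        queue.Enqueue(start, 0);

        while (queue.TryDequeue(out var city, out var time))
        {
            if (city == goal) return time;
            if (time > best.GetValueOrDefault(city, double.PositiveInfinity)) continue;
            if (!_edges.TryGetValue(city, out var neighbours)) continue;

            foreach (var (to, hours) in neighbours)
            {
                double candidate = time + hours;
                if (candidate < best.GetValueOrDefault(to, double.PositiveInfinity))
                {
                    best[to] = candidate;
                    queue.Enqueue(to, candidate);
                }
            }
        }
        return double.PositiveInfinity;
    }
}

// Example: a 120 km highway at 100 km/h beats an 80 km back road at 40 km/h.
// var g = new RoadGraph();
// g.AddRoad("A", "B", 120, 100);   // 1.2 h
// g.AddRoad("A", "B", 80, 40);     // 2.0 h (parallel edge, slower)
// Console.WriteLine(g.FastestTime("A", "B"));   // prints 1.2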
I'm developing a PC app in Visual Studio where I'm showing the status of hundreds of sensors that are connected via WiFi. The thing is that I need to hold on to the sensor data even after I close the app, so I'm considering some form of permanent storage. These are the options I've considered:
1) My Sensor object is relatively compact with only a few properties. I could serialize all the objects before closing the app and load them every time the app starts anew.
2) I could throw all the properties (which are mostly strings and doubles) into a simple text file and create a custom protocol for storage and retrieval.
3) I could integrate a database with my app. Someone told me this is the best way to go about it, but I'm a bit hesitant seeing as I'm not familiar with DBs.
Which method would yield the best results in terms of resource usage and speed? Or is there some other, better way to go about this?
The first thing you need to do is understand your problem. For example, when the program is running, do you need to have everything in memory at the same time, or do you work with your sensors one at a time?
What is a "large amount of data"? For example, to me that will never be less than a million records (or a billion in some cases).
Once you know that, you shouldn't be scared of using something just because you are not familiar with it. Otherwise you are not looking for the best solution to your problem, you are just hacking around it in a way that feels comfortable.
That being said, you have several ways of doing this. As you said, you can serialize the data (using JSON for storage, among other alternatives), but if we are talking about a "large amount of data that we want to persist" I would always call for the use of a database (the name says a lot). If you don't need to have everything in memory at the same time, then I believe that this is your best option.
I personally don't like them (again, personal choice), but one way of not learning (a lot of) SQL while still using your objects is to use an ORM like NHibernate (you will also need to learn how to use it so you don't end up making things slower).
If you need to have everything loaded at the same time (most often that is not the case, so be sure of this), you need to know what you want to keep and serialize it. If you want that data to be readable by another tool or organized in a given way, consider a data format like XML or JSON.
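As a concrete illustration of the serialize-to-JSON option, here is a minimal C# sketch using System.Text.Json: dump the sensor list to a file on shutdown and load it again on startup. The Sensor properties and the file path are made up for illustration:

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public class Sensor
{
    public string Id { get; set; } = "";
    public string Location { get; set; } = "";
    public double LastReading { get; set; }
}

public static class SensorStore
{
    private const string FilePath = "sensors.json";

    // Call on application shutdown to persist the current sensor state.
    public static void Save(List<Sensor> sensors) =>
        File.WriteAllText(FilePath, JsonSerializer.Serialize(sensors));

    // Call on startup; returns an empty list if nothing has been saved yet.
    public static List<Sensor> Load() =>
        File.Exists(FilePath)
            ? JsonSerializer.Deserialize<List<Sensor>>(File.ReadAllText(FilePath)) ?? new List<Sensor>()
            : new List<Sensor>();
}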
Also, you can use a memory-mapped file.
The file is permanent and keeps the data between program runs.
So you just keep your data structs in the mmap-ed area, and nothing more.
MSDN manual here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366556%28v=vs.85%29.aspx
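For reference, here is a minimal sketch of the same idea using .NET's managed wrapper, System.IO.MemoryMappedFiles, rather than the raw Win32 API from the link above. The struct layout and file name are made up for illustration:

using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct SensorRecord
{
    public int Id;
    public double LastReading;
}

static class MappedSensorFile
{
    public static void WriteAndReadBack()
    {
        // Create (or open) a 1 MB backing file that survives program restarts.
        using var mmf = MemoryMappedFile.CreateFromFile(
            "sensors.dat", FileMode.OpenOrCreate, null, 1024 * 1024);
        using var accessor = mmf.CreateViewAccessor();

        var record = new SensorRecord { Id = 7, LastReading = 21.5 };
        accessor.Write(0, ref record);           // persist the struct at offset 0

        accessor.Read(0, out SensorRecord back); // read it back later (or on next run)
    }
}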
Since you need to load all the data once at the start of the program, the database case seems doubtful. A DB is necessary when you need to load a bit of data many times.
So the first two options seem preferable. I would advise hiding the specific solution behind an interface; then you can change it later.
Standard .NET serialization of the sensor array is probably simpler, and it will be easier to extend.
I'm fairly new to coding and I want a project to work on that could help me advance my skills. I'm not sure what language would be best for this sort of undertaking, but I would definitely prefer to use C++ or C#.
For the first part of the program I basically would like to try and take all my Pandora likes and put them on a spreadsheet, with song name in one column and artist in the other. I don't see the formatting being too hard once I actually get the data I need, but I'm not really sure how to communicate with a server at all at this point in time. I'm guessing I probably won't be able to grab a raw list of likes, so I'm thinking my best course of action will be to first expand the likes list all the way, and then read the text on the screen or in the source code.
For the first step, expanding my likes, I found the HTML source code that actually does this:
<div class="show_more tracklike" data-nextLikeStartIndex="0" data-nextThumbStartIndex="5">Show more</div>
Not sure if this is something I can work with, but I was thinking that if I could set data-nextThumbStartIndex="5" to be equal to the number of likes minus 5 (the amount it shows by default), it would be fairly easy to expand the list. If not, I would probably have to click the "show more" link repeatedly until I have all the likes on the page.
For the next step, getting the data I want, I think my best option would be to basically just grab the text that I physically see on the screen and worry about filtering and manipulating the data afterwards. The other option is looking at the source code, where I actually found the pieces of code holding the info I want. If I could retrieve the page's source code, I think it would be relatively easy to pick out the data I actually want from it.
So yeah, that's about it. I know I'm pretty noob at the moment and what I'm saying is probably wrong and/or much more complicated than I think, but I'm a pretty quick learner, and at the very least, if someone could point me in the right direction to communicate with a server, that would be much appreciated.
This question is quite "wide" (and I have absolutely no knowledge of Pandora itself - can't access it from where I live).
In general, there are several different ways to solve this type of problem:
Screen scraping - basically access the website as if you were a web browser, and from the HTML string that comes back, dig out the information you need. The problem here is that the data is not very suitable for machine reading, as it often has no distinct markers for the "reader" to find the relevant information, and it's difficult to separate the data you want from the chaff.
AJAX API - "Asynchronous JavaScript and XML", where the provider of the website has an interface to fetch certain data into the web browser - and of course you can "pretend" to be the web browser and request the same type of information. You are relying on the website having such an interface, but if it exists, the data is generally in a form more suitable for machine reading (typically XML, but not always).
JSON API - "JavaScript Object Notation" is a similar solution to AJAX; like XML, JSON is a human- and machine-readable format.
The latter two are definitely preferable, as the data coming back is meant for machine reading. The drawback is that you need server-side cooperation. The good thing here is that Pandora does have a JSON API. The bad thing is that it seems to be hard to use... Here's one discussion on the subject:
Making JSON calls to Unoffical Pandora API
The main principle here is that you send some stuff to the webserver, and receive a reply with the requested information. Exactly how this is done depends on the language/programming environment. A popular C++ solution is libcurl.
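In C# (one of the languages the asker mentioned), the basic send-and-receive step could look like the following minimal sketch using HttpClient. The URL is a placeholder; the real Pandora endpoints and parameters are covered in the discussion linked above:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Fetcher
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Plain GET: this returns the raw HTML (screen-scraping case) or the
        // raw JSON (API case), which you then parse for the data you want.
        string body = await client.GetStringAsync("https://www.pandora.com/example-page");

        Console.WriteLine(body.Length + " characters received");
    }
}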
There is a Ruby Client here, using the JSON interface
https://github.com/nixme/pandora_client
A C# implementation to interface with Pandora is here:
http://pandoraunleashed.googlecode.com/svn/trunk/PandoraUnleashed/Pandora.cs
Unfortunately, I can't find any direct reference to "listing likes".
I'm writing a desktop UI (.NET WinForms) to assist a photographer in cleaning up his image metadata. There is a list of 66k+ phrases. Can anyone suggest a good open-source/free .NET component I can use that employs some sort of algorithm to identify potential candidates for consolidation? For example, there may be two or more entries which are actually the same word or phrase that differ only by whitespace, punctuation, or even a slight misspelling. The application will ultimately rely on the user to action the consolidation of phrases, but having an effective way to automatically find potential candidates will prove invaluable.
Let me introduce you to the Levenshtein distance formula. It is awesome:
http://en.wikipedia.org/wiki/Levenshtein_distance
In information theory and computer science, the Levenshtein distance is a string metric for measuring the amount of difference between two sequences. The term edit distance is often used to refer specifically to Levenshtein distance.
Personally I used this in a healthcare setting, where Provider names were checked for duplicates. Using the Levenshtein process, we gave them a confidence rating and allowed them to determine if it was a true duplicate or something unique.
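As an illustration, here is a small self-contained C# sketch of the Levenshtein distance (the classic dynamic-programming version, not tied to any particular library). In the phrase-cleanup scenario, you would compute this for candidate pairs and flag those below some threshold for the user to review:

using System;

static class Levenshtein
{
    public static int Distance(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];

        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;   // all deletions
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;   // all insertions

        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;     // substitution cost
                d[i, j] = Math.Min(Math.Min(
                    d[i - 1, j] + 1,         // delete from a
                    d[i, j - 1] + 1),        // insert into a
                    d[i - 1, j - 1] + cost); // substitute
            }
        }
        return d[a.Length, b.Length];
    }
}

// Example: Distance("color photo", "colour photo") == 1, so the two phrases
// are strong candidates for consolidation.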
I know this is an old question, but I feel like this answer can help people who are dealing with the same issue in current time.
Please have a look at https://github.com/JakeBayer/FuzzySharp
It is a C# NuGet package with multiple methods that implement various kinds of fuzzy search. Not sure, but perhaps Fosco's answer is also used in one of them.
Edit:
I just noticed a comment about this package, but I think it deserves a more prominent place in this question.
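As a rough usage sketch (based on the package's README; check the current docs, since the exact API may differ): Fuzz.Ratio returns a 0-100 similarity score, so near-duplicate phrases score high even with whitespace or spelling differences.

using System;
using FuzzySharp;

class FuzzyDemo
{
    static void Main()
    {
        // High score (close to 100) suggests the phrases are likely duplicates.
        int score = Fuzz.Ratio("colour  photo", "color photo");
        Console.WriteLine(score);
    }
}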
In informal conversations with our customer service department, they have expressed dissatisfaction with our web-based CSA (customer service application). In a call center, calls per hour are critical, and lots of time is wasted mousing around, clicking buttons, selecting values in dropdown lists, etc. What the director of customer service has wistfully asked for is a return to the good old days of keyboard-driven applications with very little visual detail, just what's necessary to present data to the CSR and process the call.
I can't help but be reminded of the greenscreen apps we all used to use (and the more seasoned among us used to make). Not only would such an application be more productive, but healthier for the reps to use, as they must be risking injury doing data entry through a web app all day.
I'd like to keep the convenience of browser-based deployment and preserve our existing investment in the Microsoft stack, but how can I deliver this keyboard-driven ultra-simple greenscreen concept to the web?
Good answers will link to libraries, other web applications with a similar style, or best practices for organizing and prioritizing keyboard shortcut data (not how to add them, but how to store and maintain the shortcuts, automatically resolve conflicts, etc.).
EDIT: accepted answers will not be mini-lectures on how to do UI on the web. I do not want any links, buttons or anything to click on whatsoever.
EDIT2: this application has 500 users, spread out in call centers around North America. I cannot retrain them all to use the TAB key
I make web-based CSR apps. What your manager is forgetting is that the application is now MUCH more complex. We are asking more from our reps than we did 15 years ago. We collect more information and record more data than before.
Instead of a "greenscreen" application, you should focus on making the web application behave better. For example, don't have a dropdown for year when it can be an input field. Make sure the tab order is correct and sane; you can even put little numbers next to each field grouping to indicate tab order. Assign different screens/tabs to F keys and denote them on the screen.
You should be able to use your web app without a mouse at all with no loss of productivity if done correctly.
Leverage the use of AJAX so a round trip to the server doesn't change the focus of their cursor.
On a CSR app, you often have several defaults. You should assign each default a button and allow the CSR to push one button to get the default they want. This will reduce the amount of clicking and mousing around.
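As a small illustration of the tab-order and shortcut ideas, here is a rough ASP.NET WebForms code-behind sketch (assuming the existing Microsoft-stack app is WebForms; the control names are made up). TabIndex keeps keyboard navigation sane, and AccessKey gives common actions a keyboard shortcut (Alt+key in most browsers):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical code-behind for one CSR screen.
public partial class CallScreen : Page
{
    // Declared here only to keep the sketch self-contained; in a real WebForms
    // project these come from the .aspx designer file.
    protected TextBox txtAccountNumber, txtCallerName, txtNotes;
    protected Button btnSaveAndNext, btnDefaultResolution;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Sane tab order so reps can work the form top-to-bottom without a mouse.
        txtAccountNumber.TabIndex = 1;
        txtCallerName.TabIndex = 2;
        txtNotes.TabIndex = 3;
        btnSaveAndNext.TabIndex = 4;

        // Access keys give the common actions a keyboard shortcut.
        btnSaveAndNext.AccessKey = "s";        // Alt+S: save and move to the next call
        btnDefaultResolution.AccessKey = "d";  // Alt+D: apply the most common default
    }
}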
Also, very important: you need to sit with the CSRs and watch them for a while to get a feel for how they use the app. If you haven't done this, you are probably overlooking simple changes that would greatly enhance their productivity.
body { background: #000; color: #0F0; }
More seriously, it's entirely possible to bind keyboard shortcuts to actions in a web app.
You might consider teaching your users to just use the Tab key - that's how I fill out most web forms. Tab to a select list and type out the first few letters of the option I'm attempting to select. If the page doesn't do goofy things with structure and tabindexes, I can usually fill out most web forms with just the keyboard.
As I have had to use some of those apps over time, I'll give my feedback as a user, FWIW; maybe it helps you to help your users :-) Sorry it's a bit long, but the topic is rather close to my heart - I had to prototype the "improved" interface for such a system myself (which, according to our calculations, saves a very nontrivial amount of money and avoids user dissatisfaction) and then led the team that implemented it.
There is one common issue that I noticed with quite a few CRMs: there are 20+ fields on the screen, of which one typically uses 4-5 for 90% of operations, but one needs to click through the unnecessary fields anyway.
I might be wrong with this assumption, of course (in my case there was a wide variety of users with different functions using the system). But do try to sit down with the users, see how they are using the application, and see if you can optimize something UI-wise. Or, if it really is a matter of not knowing how to use Tab (and they really do need each and every one of those 20 fields every time), you can coach a few of them, check whether that is sufficient for them, and then roll out the training to the entire organization. Ensure you have intuitive hotkey support, and that if a list contains 2000 items, the users do not have to scroll it manually to find the right one, but can instead select the item by typing the start of its text (Firefox supports this).
You might learn a lot by looking at the usage patterns of the application and then optimizing the UI accordingly. If multiple organizational functions use the system, the "ideal UI" for each of them might be different, so which one to implement (and whether to at all) becomes a business decision.
There are also some other little details that matter to the users - sometimes what you thought would be their main input field in reality is not, and they have an empty textarea eating up half the screen while they have to enter the really important data into a small text field somewhere in the corner. Or, at their screen resolution, they need horizontal scrolling (or any scrolling at all).
Again, sitting down with the users and observing should reveal this.
One more issue: the "too-fast developer hardware" phenomenon. A lot of web developers tend to use large, high-resolution displays showing the output of very powerful PCs. When the result is shown at 1024x768 on a CSR's year-old laptop, the layout looks quite different from what was anticipated, as does the rendering performance. Tune, tune, tune.
And finally, if your organization is geographically dispersed, always test with the equivalent of your longest-latency/smallest-bandwidth link. These issues are not seen when testing locally, but they add a lot of annoyance when using the system over the WAN. In short, use the worst-case scenario when doing any testing/development of your application - then the problems will become annoying to you and you will optimize them away, and the users in a better situation will jump for joy over the app's performance.
If you are set on the "green screen app", then maybe for the power users provide a single long text input field where they can type all the information in a CLI-type fashion and just hit "submit" or the Enter key (though this design decision is not something to be taken lightly, as it is a lot of work). But everyone needs to realize that "green-screen" applications have a rather steep learning curve - this is another factor to consider from the business point of view, along with the attrition rate, etc. Ask the boss how long the typical agent stays in the same job and how productivity would be affected if they needed three months to come up to full speed. :) There's a balance that is not decided by the programmers alone, nor by management alone; it requires a joint effort.
And finally a side note in case you have "power users": you might want to take a look at conkeror as a browser - though fairly slow in itself, it looks quite flexible in what it can offer from the keyboard-only control perspective.
I can't agree more with the others when they say the first priority of the redesign should be going and talking to / observing your users to see where they have problems. I think you would see far more ROI if you find out the most common tasks and the most common errors your users make and streamline those within the bounds of your existing UI. I realize this isn't an easy thing to do, but if you can pull it off you'll have much happier users (since you've solved their workflow issues) and much happier bosses (since you saved the company money by not having to re-train all the users on a completely new UI).
After reading everyone else's answers and comments, I wanted to address a few other things:
EDIT: accepted answers will not be mini-lectures on how to do UI on the web. I do not want any links, buttons or anything to click on whatsoever.
I don't mean to be argumentative, but this sounds like you've already made up your mind without having thought of the implications on the users. I can immediately see a couple pitfalls with this approach:
A greenscreen-esque UI may not be more productive for your users. For example, what's the average age of your users? Most people 25 and younger have had little to no exposure to these types of UIs. Suddenly imposing this sort of interface on them could cause a major backlash from your users. As an example, look at what happened when Facebook decided to change its UI to the "stream" concept - huge outrage from the users!
The web wasn't really designed with this sort of interface in mind. What I mean is that people are not used to having command-line-like interfaces when they visit a website. They expect a visual medium (images, buttons, links, etc.) in addition to text. Changing too drastically from this could confuse your users.
Programming this type of interface will be tough. As in my last point, the web doesn't play well with command-line-like or text-only interfaces. Things like function keys and keyboard shortcuts (like Ctrl- and Alt-) are poorly and inconsistently supported, which means you'll have to come up with your own ways of accessing standard things like help (since F1 will map to the web browser's help, not your app's).
EDIT2: this application has 500 users, spread out in call centers around North America. I cannot retrain them all to use the TAB key
I think this argument is really just a strawman. If you are introducing a wholly new UI, you're going to have to train your users on it. Really, it should be assumed that any change to your UI will require training in one form or another. Something simple like adding tab-navigation to the UI is actually comparatively small in the training department. If you did this it would be very easy to send out a "handy new feature in the UI" email, or even better, have some sort of "tip of the day" (that users can toggle off, of course) which tells them about cool timesaving features like tab navigation.
I can't speak for the other posters here, but I did want to say that I hope you don't think we're being too argumentative here as that's not our (well OK, my) intent. Rather the reaction comes from us hearing the idea for your UI and not being convinced that it is necessarily the best thing for your users. You are fully welcome to say I'm wrong and that this is what your users will benefit most from; but before you do, just remember that at the end of the day it's your users who matter most and if they don't buy in to your new UI, no one will.
It's really more of a keyboard-centric mentality when developing. I use the keyboard for as much as possible and the apps I build tend to show that (so I can quickly go through my use cases).
Something as simple as getting the tab order correct could be all your app needs (I guess I'm not sure if you can set this in ASP.NET...). A lot of controls will auto-complete for the rest.