Search by country and keyword/term? - c#

I have recently started using the Google Places API and am a big noob with it. I have looked around the main docs for how to run a query on the API, but it seems that it does not support what I want, or I'm looking in the wrong place.
I need to search on a specific place for a specific term for example:
Restaurants and USA
Is this possible, and how would I go about producing it using the API?

When you do a Places Search: https://developers.google.com/maps/documentation/places/#PlaceSearches
You can specify a types parameter which limits the types of things you are searching for.
Or you can specify a keyword parameter which selects for a certain term across the whole Place record.
For location, your only option is to select a latitude/longitude pair and specify a radius. This won't work for "USA" as the maximum radius is 50000 meters. You could add that as a keyword however. For locations such as cities, you could geocode first to get the lat/long pair:
https://developers.google.com/maps/documentation/geocoding/
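For illustration, here is a rough C# sketch that chains the two calls: geocode a city name to a lat/long pair, then run a Places Nearby Search with the types parameter (a keyword parameter could be appended the same way). The API key and the city are placeholders, and error handling is omitted; treat it as a starting point, not a finished client.

using System;
using System.Globalization;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class PlacesSearchSketch
{
    const string ApiKey = "YOUR_API_KEY"; // placeholder

    static async Task Main()
    {
        using var http = new HttpClient();

        // 1. Geocode the place name to a latitude/longitude pair.
        var geoUrl = "https://maps.googleapis.com/maps/api/geocode/json" +
                     "?address=" + Uri.EscapeDataString("New York, NY") +
                     "&key=" + ApiKey;
        using var geoDoc = JsonDocument.Parse(await http.GetStringAsync(geoUrl));
        var location = geoDoc.RootElement
            .GetProperty("results")[0]
            .GetProperty("geometry")
            .GetProperty("location");
        double lat = location.GetProperty("lat").GetDouble();
        double lng = location.GetProperty("lng").GetDouble();

        // 2. Nearby Search: restaurants within the 50 km maximum radius.
        var placesUrl = string.Format(CultureInfo.InvariantCulture,
            "https://maps.googleapis.com/maps/api/place/nearbysearch/json" +
            "?location={0},{1}&radius=50000&types=restaurant&key={2}",
            lat, lng, ApiKey);
        using var placesDoc = JsonDocument.Parse(await http.GetStringAsync(placesUrl));
        foreach (var result in placesDoc.RootElement.GetProperty("results").EnumerateArray())
            Console.WriteLine(result.GetProperty("name").GetString());
    }
}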

Related

Algorithm to identify similarity between text messages

I'm looking for an algorithm than can compare two text messages (let's say forum posts) and identify the similarity in percentage.
What would be the most efficient solution for this purpose?
The idea is to use this algorithm to identify users on a forum who have more than one nickname and pretend to be different people.
I'm going to build a program that will read all their posts and compare each post from the first account to posts from the second account to find out whether they are genuinely two different persons or just two registrations of a single user.
The first thing that came to my mind was the Levenshtein distance, but it is more focused on word-level similarity.
You could use tf-idf, but it will probably work better if your corpus contains more than just two documents.
An alternative could be representing the documents (posts) using a vector space model, like:
(w_1, w_2, ..., w_k)
where
k is the total number of terms (words) in your document
w_i is the i-th term
and then computing the Hamming distance, which basically compares two vectors (arrays) and counts the positions where they differ. You can discard stop words first (i.e. words like prepositions, etc.)
Take into account that the user might change some words, use synonyms, etc. There are lots of models for representing documents and computing the similarity between them. Some of them take word dependencies into account, which gives more semantics to the process, and others don't.
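For illustration, here is a minimal C# sketch of that vector-space / Hamming distance idea: each post becomes a binary presence vector over the combined vocabulary, and we count the positions where the two vectors differ. The tokenizer and stop-word list are simplistic placeholders.

using System;
using System.Collections.Generic;
using System.Linq;

class PostSimilarity
{
    static readonly HashSet<string> StopWords =
        new HashSet<string> { "the", "a", "of", "in", "and", "to", "is" };

    static HashSet<string> Terms(string text) =>
        text.ToLowerInvariant()
            .Split(new[] { ' ', '.', ',', '!', '?', ';', ':' },
                   StringSplitOptions.RemoveEmptyEntries)
            .Where(w => !StopWords.Contains(w))
            .ToHashSet();

    static double Similarity(string a, string b)
    {
        var termsA = Terms(a);
        var termsB = Terms(b);
        var vocabulary = termsA.Union(termsB).ToList();   // the k terms

        // Hamming distance: positions where the presence vectors differ.
        int differing = vocabulary.Count(t => termsA.Contains(t) != termsB.Contains(t));
        return vocabulary.Count == 0 ? 1.0 : 1.0 - (double)differing / vocabulary.Count;
    }

    static void Main()
    {
        // Prints 0.5: the two posts share half of their combined vocabulary.
        Console.WriteLine(Similarity(
            "I think the moderators are biased against new users",
            "the moderators seem biased against all new users"));
    }
}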
google-diff-match-patch would also be a good choice for you. You can try the demo for testing.

Geocode lookup in C

I want to do a super fast geocode lookup, returning co-ordinates for an input of town, city or country. My knowledge is basic, but from what I understand, writing it in C is a good start. I was thinking it makes sense to have a tree structure like this:
England
    Kent
        Orpington
        Chatham
        Rochester
        Dover
        Edenbridge
    Wiltshire
        Swindon
        Malmesbury
In my file / database I will have the co-ordinates and the town/city names. If I give my program the name "Kent", I want it to return the co-ordinate associated with "Kent" in the fastest way possible.
Should I store the data in a binary file or a SQL database for performance reasons?
What is the best method of searching this data? Perhaps binary tree searching?
How should the data be stored?
Here's a little advice, but not much more than that:
If you want to find places by name, or name prefix, as you indicate you wish to, then you would be ill-advised to set up a data structure which stores the data in a hierarchy of country, region, town as you suggest you might. If you have an operation that dominates the use of your data structure, you are generally best off picking the data structure to suit that operation.
In this case an alphabetical list of places would be better suited to your queries. To each place not at the topmost level you would want to add some kind of reference to the name of its 'parent'. If you have an alphabetical list of places, you might also want to consider an index, perhaps one which points directly to the first place in the list which starts with each letter of the alphabet.
As you describe your problem it seems to have much more in common with storing words in a dictionary (I mean the sort of thing in which you look up words rather than any particular collection data-type in any specific programming language which goes under the same name) than with most of what goes under the guise of geo-coding.
My guess would be that a gazetteer including the names of all the world's towns, cities, regions and countries (and their coordinates) which have a population over, say, 1000, could be stored in a very simple data structure (basically a list) with an index or two for rapid location of the first A place-name, the first B, and so on. With a little compression you could probably hold this in the memory of most modern desktop PCs.
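To make that concrete, here is a small sketch of the alphabetical-list-plus-index idea (in C# rather than C for brevity; the structure translates directly). The Place record and the sample data are illustrative only.

using System;
using System.Collections.Generic;
using System.Linq;

record Place(string Name, string Parent, double Lat, double Lon);

class Gazetteer
{
    private readonly Place[] _places;                  // sorted by name
    private readonly Dictionary<char, int> _letterIndex = new();

    public Gazetteer(IEnumerable<Place> places)
    {
        _places = places.OrderBy(p => p.Name, StringComparer.OrdinalIgnoreCase).ToArray();
        for (int i = 0; i < _places.Length; i++)
        {
            char first = char.ToUpperInvariant(_places[i].Name[0]);
            if (!_letterIndex.ContainsKey(first))
                _letterIndex[first] = i;               // first entry for each letter
        }
    }

    // Return every place whose name starts with the given prefix.
    public IEnumerable<Place> FindByPrefix(string prefix)
    {
        if (!_letterIndex.TryGetValue(char.ToUpperInvariant(prefix[0]), out int start))
            yield break;
        for (int i = start; i < _places.Length; i++)
        {
            int cmp = string.Compare(_places[i].Name, 0, prefix, 0, prefix.Length,
                                     StringComparison.OrdinalIgnoreCase);
            if (cmp > 0) yield break;                  // sorted, so we're past the range
            if (cmp == 0) yield return _places[i];
        }
    }
}

class Demo
{
    static void Main()
    {
        var g = new Gazetteer(new[]
        {
            new Place("Kent", "England", 51.2, 0.7),
            new Place("Rochester", "Kent", 51.39, 0.50),
            new Place("Swindon", "Wiltshire", 51.56, -1.78),
        });
        foreach (var p in g.FindByPrefix("Ke"))
            Console.WriteLine($"{p.Name} ({p.Parent}): {p.Lat}, {p.Lon}");
    }
}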
I think the best advice I can give is to use whatever language you are familiar with to get the results you want. Worry about performance once your code works. Then you can look at translating very specific pieces of functionality into C or C++ one at a time until you have the results you want.
You should not worry about how the information is stored, except to avoid duplicating data.
You should create one or more indices for the data. The indices are associative array / map data structures that contain a key (the item you want to search on) and a value (such as the record and other information associated with the key). This will give you fast lookups without altering your data for each type of search.
On the other hand, your case is an excellent fit for a database. I suggest you let the database manage your data (it handles efficient lookups, for example). After all, that is what databases live for.
See also: At what point is it worth using a database?

How to configure tokenizers for indexing and searching with Lucene and NHibernate

This is a question for using Lucene via the NHibernate.Search namespace, which works in conjunction with Lucene.
I'm indexing a title in the index:
Title: "Grey's Anatomy"
Using Luke, I can see that the title is getting tokenized into:
Title: anatomy
Title: grey
Now, I get a result if I search for:
"grey" or "grey's"
However, if I search for "greys" then I get nothing.
I would like "greys" to return a result. And I guess this could be an issue with any word with an apostrophe.
So, here are some questions:
Am I right in thinking I could fix this issue either by changing something at index time (so, changing the tokenizer?) or by changing it at query time (the query parser?)
If there is a solution, could someone provide a small code sample?
thanks
If you make a classic term search using Lucene, then "greys" is most likely not going to show up in the results unless you do some clever tokenizing work when saving. So from where I see it, you have 2 choices, or a 3rd being a combination of them:
Use a stemmer for both the indexed data and the query. Stemmers are fast, and you can always find an implementation of Porter's stemmer somewhere via Google. The problem comes when you have to handle different languages.
Use fuzzy queries. With a fuzzy query you can set the edit distance that you want to allow "away" from the word being searched. The thing is that two words being "close" by some edit distance (i.e. Levenshtein) doesn't mean that they're the same, but the problem of "Grey" vs "Grey's" vs "Greys" should be solved by setting an edit distance of 2.
I think you will be able to find a decent implementation of the Porter stemmer; the one right here is nice.
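For the fuzzy-query option (2 above), a minimal sketch using the Lucene.Net 3.x API might look like the following; the index path and the 0.7 similarity threshold are assumptions for illustration.

using System.IO;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;

class FuzzyTitleSearch
{
    static void Main()
    {
        using var directory = FSDirectory.Open(new DirectoryInfo("path/to/index"));
        using var searcher = new IndexSearcher(directory, true);   // read-only

        // FuzzyQuery matches terms close to the given term (Levenshtein-based),
        // so "greys" can also match the indexed token "grey".
        var query = new FuzzyQuery(new Term("Title", "greys"), 0.7f);
        TopDocs hits = searcher.Search(query, 10);

        foreach (var scoreDoc in hits.ScoreDocs)
        {
            var doc = searcher.Doc(scoreDoc.Doc);
            System.Console.WriteLine(doc.Get("Title"));
        }
    }
}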
Hope this helps!

How do I determine if two similar band names represent the same band?

I'm currently working on a project that requires me to match our database of bands and venues with a number of external services.
Basically I'm looking for some direction on the best method for determining if two names are the same. For example:
Our database venue name - "The Pig and Whistle"
service 1 - "Pig and Whistle"
service 2 - "The Pig & Whistle"
etc etc
I think the main differences are going to be things like missing "the" or using "&" instead of "and" but there could also be things like slightly different spelling and words in different orders.
What algorithms/techniques are commonly used in this situation, do I need to filter noise words or do some sort of spell check type match?
Have you seen any examples of something similar in C#?
UPDATE: In case anyone is interested in a C# example, there are heaps you can access by doing a Google Code Search for Levenshtein distance.
The canonical (and probably the easiest) way to do this is to measure the Levenshtein distance between the two strings. If the distance is small relative to the size of the string, it's probably the same string. Note that if you have to compare a lot of very small strings it'll be harder to tell whether they're the same or not. It works better with longer strings.
A smarter approach might be to compare the Levenshtein distance between the two strings but to assign a distance of zero to the more obvious transformations, like "and"/"&", "Snoop Doggy Dogg"/"Snoop", etc.
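A minimal C# sketch of that idea, normalizing the obvious transformations first and then measuring Levenshtein distance; the normalization rules and the threshold are illustrative choices.

using System;

class NameMatcher
{
    static string Normalize(string name)
    {
        name = name.Trim().ToLowerInvariant().Replace("&", "and");
        if (name.StartsWith("the ")) name = name.Substring(4);
        return name;
    }

    static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + cost);
            }
        return d[a.Length, b.Length];
    }

    static bool ProbablySame(string a, string b)
    {
        a = Normalize(a);
        b = Normalize(b);
        // A distance that is small relative to the string length => likely a match.
        return Levenshtein(a, b) <= Math.Max(a.Length, b.Length) / 5;
    }

    static void Main()
    {
        Console.WriteLine(ProbablySame("The Pig and Whistle", "Pig & Whistle")); // True
        Console.WriteLine(ProbablySame("Ryan Adams", "Bryan Adams")); // also True: the false-match problem the next answer mentions
    }
}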
I did something like this a while ago. I used the Discogs database (which is public domain), which also tracks artist aliases.
You can either:
Use an API call (the namevariations field; see the sketch after this list).
Download the monthly data dumps (*_artists.xml.gz) and import them into your database. This contains the same data, but is obviously a lot faster.
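For option 1, a hedged sketch of the API call; the artist id is a made-up placeholder, and note that Discogs requires a User-Agent header on all requests.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class DiscogsAliases
{
    static async Task Main()
    {
        const int artistId = 12345; // placeholder: substitute a real Discogs artist id

        using var http = new HttpClient();
        http.DefaultRequestHeaders.UserAgent.ParseAdd("MyBandMatcher/1.0");

        var json = await http.GetStringAsync($"https://api.discogs.com/artists/{artistId}");
        using var doc = JsonDocument.Parse(json);
        if (doc.RootElement.TryGetProperty("namevariations", out var variations))
            foreach (var name in variations.EnumerateArray())
                Console.WriteLine(name.GetString());
    }
}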
One advantage of this over the Levenshtein distance solution is that you'll get a lot fewer false matches.
For example, Ryan Adams and Bryan Adams have a score of 2, which is quite good (lower means a better match; "Pig and Whistle" and "Pig & Whistle" have a score of 3), yet they're obviously different people.
While you could make a smarter algorithm (which also looks at string length, for example), using the alias DB is a lot simpler and less error-prone; after implementing this, I could completely remove the solution suggested in the other answer and had better matches.
Soundex may also be useful.
In bioinformatics we use this kind of comparison on DNA or protein sequences all the time.
There are plenty of algorithms; you probably want to look at global alignments.
In this respect the Needleman-Wunsch algorithm is probably what you seek.
If you have particularly long recurring strings to compare you might also want to consider heuristic searches like BLAST.
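For reference, here is a compact C# sketch of Needleman-Wunsch global alignment scoring, with simple illustrative scores (+1 match, -1 mismatch, -1 gap); real applications tune these.

using System;

class NeedlemanWunsch
{
    static int Score(string a, string b, int match = 1, int mismatch = -1, int gap = -1)
    {
        var s = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) s[i, 0] = i * gap;   // leading gaps
        for (int j = 0; j <= b.Length; j++) s[0, j] = j * gap;

        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
            {
                int diag = s[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? match : mismatch);
                s[i, j] = Math.Max(diag, Math.Max(s[i - 1, j] + gap, s[i, j - 1] + gap));
            }
        return s[a.Length, b.Length];   // higher = better global alignment
    }

    static void Main()
    {
        Console.WriteLine(Score("the pig and whistle", "pig & whistle"));
    }
}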

Efficient algorithm for finding related submissions

I recently launched my humble side project and would like to add a "related submissions" section when viewing a submission. Exactly like what SO is doing here - see right column, titled "Related"
Considering that each submission has a title and a set of tags, what is the most effective (optimum results) and most efficient (fast, memory-friendly) way to query the database for related submissions?
I can think of one way to do this (which I'll post as an answer) but I'm very interested to see what others have to say. Or perhaps there's already a standard way of achieving this?
Here's my two cents:
To achieve the best output, we need to put “weight” on the query results.
To start with, each submission in the database is assumed to have a weight of zero.
Then, if a submission in the "pool" shares one tag with the current submission, we add +3 to that submission's weight. Hence, if another submission is found that shares two tags with the current submission, we add +6 to its weight.
Next, we split/tokenize the title of the current submission and remove “stop words”.
I’ve seen a list of stop words from google, but for now I’ll define my stop words to be: [“of”, “a”, “the”, “in”]
Example:
Title: "The Best Submission of All Times"
Resulting array: ["The", "Best", "Submission", "of", "All", "Times"]
After removing stop words: ["Best", "Submission", "All", "Times"]
Then we query the database for submissions whose titles contain any of the remaining words, and for each result we add +2 to the weight.
And finally sort the list descending by weight and take the top N results.
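Here's a rough C# sketch of that scheme, interpreting the +2 as applying per matching title word; the Submission type and the in-memory "pool" stand in for the real database query.

using System;
using System.Collections.Generic;
using System.Linq;

record Submission(int Id, string Title, HashSet<string> Tags);

class RelatedFinder
{
    static readonly HashSet<string> StopWords = new() { "of", "a", "the", "in" };

    public static IEnumerable<Submission> FindRelated(Submission current,
                                                      IEnumerable<Submission> pool,
                                                      int topN)
    {
        var titleWords = current.Title
            .Split(' ', StringSplitOptions.RemoveEmptyEntries)
            .Select(w => w.ToLowerInvariant())
            .Where(w => !StopWords.Contains(w))
            .ToHashSet();

        return pool
            .Where(s => s.Id != current.Id)
            .Select(s => new
            {
                Submission = s,
                Weight = 3 * s.Tags.Intersect(current.Tags).Count()       // +3 per shared tag
                       + 2 * s.Title.Split(' ')
                                    .Count(w => titleWords.Contains(w.ToLowerInvariant()))
            })
            .Where(x => x.Weight > 0)
            .OrderByDescending(x => x.Weight)          // sort descending by weight
            .Take(topN)                                // keep the top N results
            .Select(x => x.Submission);
    }
}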
What do you think? (be gentle!)
If I understand well, you need a technique to find whether two posts are "similar" to each other. You may want to use a probabilistic model for that:
http://en.wikipedia.org/wiki/Mutual_information
The idea would be to say that if two posts share a lot of "uncommon" words, they are probably speaking on the same topic. For detecting uncommon words, depending on your application, you may use a general table of frequencies, or maybe better, build it yourself on the universe of the words of your posts (but you will need to have enough of them to have something relevant).
I would not limit myself to the title and tags, but I would give them extra weight in the search.
This kind of idea is very common in spam filtering. Unfortunately I don't have the time to make a full review, but a quick Google search gives:
http://www.aclweb.org/anthology/P/P04/P04-3024.pdf
karlmicha.googlepages.com/acl2004_poster.pdf
