n-grams using regex - c#

I am working on an augmentative and alternative communication (AAC) program. My current goal is to store a history of input/spoken text and search for common phrase fragments or word n-grams. I am currently using an implementation based on the LZW compression algorithm, as discussed at CodeProject - N-gram and Fast Pattern Extraction Algorithm. This approach, although it produces n-grams, does not behave as needed.
Let's say, for example, that I enter "over the mountain and through the woods" several times. My desired output would be the entire phrase "over the mountain and through the woods". Using my current implementation, the phrase is broken into trigrams, and on each repeated entry one word is added. So on the first entry I get "over the mountain"; on the second entry, "over the mountain and"; and so on.
Let's assume we have the following text:
this is a test
this is another test
this is also a test
the test of the emergency broadcasting system interrupted my favorite song
My goal would be that if "this is a test of the emergency broadcasting system" were entered next, I could use that within a regex to return "this is a test" and "test of the emergency broadcasting system". Is this something that is possible through regex, or am I walking the wrong path? I appreciate any help.

I have been unable to find a way to do what I need with regular expressions alone although the technique shown at Matching parts of a string when the string contains part of a regex pattern comes close.
I ended up using a combination of my initial system along with some regex as shown below.
Flow chart: http://www.alsmatters.org/files/phraseextractor.png
This parses the transcript of the first presidential debate (about 16,500 words) in about 30 seconds, which for my purposes is quite fast.

From your use case it appears you do not want fixed-length n-gram matches, but rather the longest matching sequence of n-grams. Just saw your answer to your own post, which confirms this ;)

In python you can use the fuzzywuzzy library to match a set of phrases to a canonical/normalized set of phrases through an associated list of "synonym" phrases or words. The trick is segmenting your phrases appropriately (e.g. when do commas separate phrases and when do they join lists of related words within a phrase?)
Here's the structure of the Python dict in RAM. Your data structure in C# or a database would be similar:
phrase_dict = {
    'alternative phrase': 'canonical phrase',
    'alternative two': 'canonical phrase',
    'less common phrasing': 'different canonical phrase',
}
from fuzzywuzzy.process import extractOne
# match against the dict's keys, then map the best key to its canonical form
phrase_dict[extractOne('unknown phrase', phrase_dict.keys())[0]]
and that returns
'canonical phrase'
FuzzyWuzzy seems to use something like a simplified Levenshtein edit-distance... it's fast but doesn't deal well with capitalization (normalize your case first), word sounds (there are other libraries, like soundex, that can hash phrases by what they sound like), or word meanings (that's what your phrase dictionary is for).

Related

regex that can handle horribly misspelled words

Is there a way to create a regex that will ensure that five out of eight characters are present, in order, within a given span (say, 20 characters)?
I am dealing with horrible OCR/scanning, and I can stand the false positives.
Is there a way to do this?
Update: I want to match, for example, "mshpeln" as a misspelling. I do not want to do OCR; the OCR job has been done, but it has been done poorly (i.e., the original said "misspelling", but the OCR'd copy reads "mshpeln"). I do not know what the text I will have to match against will be (i.e., I do not know that it is "mshpeln"; it could be "mispel" or any number of other combinations).
I am not trying to use this as a spell checker, but merely find the end of a capture group. As an aside, I am currently having trouble getting the all.css file, so commenting is impossible temporarily.
I think you need not a regex, but a database of all valid words and creative use of functions like soundex() and/or levenshtein().
You can do this: create a table of all valid words (a dictionary), populate it with columns like word and snd (computed as soundex(word)), and create indexes on both the word and snd columns.
For example, for the word mispeling you would fill snd with M214. If you use SQLite, it has a soundex() implementation available (enabled when compiled with SQLITE_SOUNDEX).
Now, when you get a new bad word, compute soundex() for it and look it up in your indexed table. For example, for the word mshpeln, soundex('mshpeln') = M214. There you go - this way you can get back the correct word.
But this would not look anything like regex - sorry.
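For illustration, here is a minimal C# sketch of standard American Soundex (the hashing step described above; the dictionary table and indexes are left out for brevity):
// A minimal Soundex sketch in C#; assumes an alphabetic input word.
using System;
using System.Linq;
using System.Text;

static class Phonetic
{
    // digit codes for 'a'..'z'; '0' marks letters that carry no code
    const string Codes = "01230120022455012623010202";

    public static string Soundex(string word)
    {
        var w = word.ToLowerInvariant();
        var sb = new StringBuilder().Append(char.ToUpperInvariant(w[0]));
        char last = Codes[w[0] - 'a'];
        foreach (char c in w.Skip(1).Where(c => c >= 'a' && c <= 'z'))
        {
            if (c == 'h' || c == 'w') continue; // h/w don't separate duplicate codes
            char code = Codes[c - 'a'];
            if (code != '0' && code != last) sb.Append(code);
            last = code;
        }
        return sb.ToString().PadRight(4, '0').Substring(0, 4);
    }
}
// Phonetic.Soundex("mispeling") == Phonetic.Soundex("mshpeln") == "M214"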
To be honest, I think that a project like this would be better done by an actual human, not a computer. If the project is too large for one or two people to do easily, you might want to look into something like Amazon's Mechanical Turk, where you can outsource the work for pennies per solution.
This can't be done with a regex, but it can be done with a custom algorithm.
For example, to find words that are like 'misspelling' in your body of text:
1) Preprocess. Create a Set (in the mathematical sense: a collection of guaranteed-unique elements) with all of the unique letters in "misspelling" - {e, i, g, l, m, n, p, s}
2) Split the body of text into words.
3) For each word, create a Set with all of its unique letters. Then, perform the operation of set intersection on this set and the set of the word you are matching against - this will get you letters that are contained by both sets. If this set has 5 or more characters left in it, you have a possible match here.
If the OCR can add in erroneous spaces, then consider two words at a time instead of single words, and so on, depending on your requirements.
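A short C# sketch of steps 1-3 above, using the question's threshold of 5 shared letters (the sample text is illustrative):
// set-intersection matching: flag words that share at least 5
// unique letters with the target word
using System;
using System.Collections.Generic;

class LetterSetMatch
{
    static void Main()
    {
        var target = new HashSet<char>("misspelling");  // {m,i,s,p,e,l,n,g}
        string body = "the scan produced mshpeln and other noise";
        foreach (string word in body.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var letters = new HashSet<char>(word.ToLowerInvariant());
            letters.IntersectWith(target);              // letters in both sets
            if (letters.Count >= 5)
                Console.WriteLine($"possible match: {word}");
        }
    }
}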
I have no solution for this problem, in fact, here's exactly the opposite.
Correcting OCR errors is not programmatically possible, for two reasons:
You cannot quantify the error made by the OCR algorithm, as it can range anywhere from 0 to 100%.
To apply a correction, you need to know what the maximum error could be in order to set an acceptable level.
Suppose nello world is the OCR's first guess for "hello world" - quite similar. Then, with another font, or text printed in a "painful" yellow, a second guess for the same expression is noiio verio. How should a computer know that this word would have been similar, had it been recognized better?
Otherwise, given a predetermined error, mvp's solution seems to be the best in my opinion.
UPDATE:
After digging a little, I found a reference that may be relevant: String similarity measures

how do I make my program guess for the correct word?

I am interested in doing some AI/algorithmic explorations. I have this idea to make a simple application, kind of like hangman, where I assign a word and leave some letters as clues. But instead of a user guessing the word, I want my application to try to figure it out based on the clues I leave it. Does anyone know where I should start? Thanks.
Create a database of words of the desired language (index wikipedia dumps).
That probably shouldn't exceed 1 million words.
Then you can simply query a database:
for example: fxxulxxs
--> SELECT * FROM T_Words WHERE word LIKE 'f__ul__s'
--> fabulous
If more than one word comes back in the result set, you need to return the one that is statistically the most used.
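The same lookup can be sketched in C# without a database, assuming a hypothetical in-memory word-to-frequency map (underscores mark unknown letters, as in the LIKE pattern):
// in-memory version: '_' marks an unknown letter, and the most
// frequently used matching word wins
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class WordGuesser
{
    public static string Guess(string pattern, Dictionary<string, long> wordFreq)
    {
        var regex = new Regex("^" + pattern.Replace("_", ".") + "$");
        return wordFreq.Where(kv => regex.IsMatch(kv.Key))
                       .OrderByDescending(kv => kv.Value)   // most used first
                       .Select(kv => kv.Key)
                       .FirstOrDefault();
    }
}
// WordGuesser.Guess("f__ul__s", wordFreq) -> "fabulous"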
Another method would be to take a look at nhunspell
If you want to do it more analytically, you need to find a statistical method to correlate stems, endings and beginnings, or basically a measurement for word similarity.
Language research shows that you can easily read words when you only have the start and the ending. If you only have the middle, then it gets difficult.
You might want to check out some form of algorithm for measuring edit distance, such as Damerau-Levenshtein distance (wikipedia). That is typically used to find the one word among several that most closely matches some other given word.
It is used a lot for searching and comparison when processing DNA and Protein sequences, but might be useful in your case too.
The first step is to build a data structure containing all the valid words and which can be queried easily to retrieve all the words matching the current pattern. Then with this list of matching words you can compute the most frequent letter to get the next candidate. Another approach could be to find the letter which will give the smallest next matching words set.
next_guess(pattern, played_chars, dictionary)
    // find all the words matching the pattern and not containing
    // played letters outside the pattern
    words_set = find_words_matching(pattern, played_chars, dictionary)
    // build an array containing, for each letter, its frequency in the word set
    letter_freq = build_frequency_array(words_set)
    // build an array containing, for each letter, the number of words in the
    // set that contain it at least once (I name this the letter's "power")
    letter_power = build_power_array(words_set)
    // find the letter minimizing a function - this is the AI part;
    // the function could take the last two arrays into account
    candidate = minimize(weighted_function, letter_freq, letter_power)
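A rough C# translation of that pseudocode, with the weighting simplified to maximizing the letter's "power" (the helper logic is inlined and illustrative):
// pick the next letter for the word-guesser described above
using System.Collections.Generic;
using System.Linq;

static class HangmanSolver
{
    public static char NextGuess(string pattern, ISet<char> played,
                                 IEnumerable<string> dictionary)
    {
        // words that fit the pattern ('_' = unknown) and contain no
        // played letter in an unknown position
        var words = dictionary.Where(w =>
            w.Length == pattern.Length &&
            Enumerable.Range(0, w.Length).All(i =>
                pattern[i] == '_' ? !played.Contains(w[i])
                                  : pattern[i] == w[i])).ToList();

        // "power" of a letter: how many candidate words contain it
        return words.SelectMany(w => w.Distinct())
                    .Where(c => !played.Contains(c) && pattern.IndexOf(c) < 0)
                    .GroupBy(c => c)
                    .OrderByDescending(g => g.Count())
                    .First().Key;
    }
}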

Processing commands with inaccurate natural language strings

We're designing a system which can accepts commands in this format
command context
The context is defined from a list of about 200 tuples of words such as:
physical therapy
cardiac
physician visit
hospital inpatient
hospital outpatient
etc.
We want the system to be able to correct user errors such as spelling mistakes, but also to understand that "physical therapy" is the same as "physical therapist", and also to accept synonyms.
Finally, if it's not an exact match, it should ask the user to disambiguate between the best matches.
This is how I'm thinking of doing it:
Stem both the context words and incoming queries
Delete/isolate command strings from the query
Check for and correct any anagrams (however: this only covers one category of spelling mistakes)
Look for an exact word match
Look for "close matches"
This doesn't feel like a neat solution, especially steps 3 and 5.
What's a better/easier way to do this? Any libraries to do it in C#, bonus.
Can Lucene do this perhaps? Any guidance appreciated.
Thanks!
It may be too imprecise for your purposes, but Soundex is a common algorithm for telling if two words "sound similar".
I think Lucene would be best applied only at steps 4 and 5, as Lucene currently only supports approximate matching in the "glob" sense (wildcard characters -- "?" for matching a single character and "*" for matching multiple characters).
There is a whole set of literature on approximate matching -- I would start with the agrep work and proceed from there (but in part that is because I'm familiar with agrep).
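As a baseline for steps 4 and 5, before reaching for agrep or Lucene, a plain Levenshtein ranking over the ~200 context tuples is easy to sketch in C# (names are illustrative):
// rank known contexts by edit distance to the user's input; the caller
// can ask the user to disambiguate when the top matches are close
using System;
using System.Linq;

static class ContextMatcher
{
    static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                d[i, j] = Math.Min(
                    Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                    d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return d[a.Length, b.Length];
    }

    public static string[] BestMatches(string input, string[] contexts, int take = 3) =>
        contexts.OrderBy(c => Levenshtein(input.ToLowerInvariant(), c.ToLowerInvariant()))
                .Take(take)
                .ToArray();
}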

Algorithm for sentence analysis and tokenization

I need to analyze a document and compile statistics on how many times each sequence of words is used (so the analysis is not of single words but of batches of recurring words). I read that compression algorithms do something similar to what I want - creating dictionaries of blocks of text along with a piece of information reporting each block's frequency.
It should be something similar to http://www.codeproject.com/KB/recipes/Patterns.aspx
Do you have anything written in C#?
This is very simple to implement.
Use Split (a member function of the string class) to split the string into words (you can use the delimiters from the CodeProject URL).
Then use a for loop to enumerate all the n-grams, with a Dictionary<string, int> to keep the counts.
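A minimal C# version of that recipe (the delimiter set and n are placeholders):
// count every n-gram in the text with Split + Dictionary<string, int>
using System;
using System.Collections.Generic;

static class NGramCounter
{
    public static Dictionary<string, int> Count(string text, int n)
    {
        var counts = new Dictionary<string, int>();
        string[] words = text.Split(new[] { ' ', '\t', '\n', ',', '.', ';' },
                                    StringSplitOptions.RemoveEmptyEntries);
        for (int i = 0; i + n <= words.Length; i++)
        {
            string gram = string.Join(" ", words, i, n);
            counts[gram] = counts.TryGetValue(gram, out int c) ? c + 1 : 1;
        }
        return counts;
    }
}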

How can I correctly prefix a word with "a" and "an"?

I have a .NET application where, given a noun, I want it to correctly prefix that word with "a" or "an". How would I do that?
Before you think the answer is to simply check if the first letter is a vowel, consider phrases like:
an honest mistake
a used car
1) Download Wikipedia.
2) Unzip it and write a quick filter program that spits out only the article text (the download is generally in XML format, along with non-article metadata).
3) Find all instances of a(n)... and build an index on the following word and all of its prefixes (you can use a simple suffix trie for this). This should be case sensitive, and you'll need a maximum word length - 15 letters?
4) (optional) Discard all those prefixes which occur fewer than 5 times or where "a" vs. "an" achieves less than a 2/3 majority (or some other thresholds - tweak here). Preferably keep the empty prefix to avoid corner cases.
5) You can optimize your prefix database by discarding all those prefixes whose parent shares the same "a" or "an" annotation.
6) When determining whether to use "A" or "AN", find the longest matching prefix and follow its lead. If you didn't discard the empty prefix in step 4, then there will always be a matching prefix (namely the empty prefix); otherwise you may need a special case for a completely non-matching string (such input should be very rare).
You probably can't get much better than this - and it'll certainly beat most rule-based systems.
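A sketch of the step 6 lookup in C#, assuming the mined prefixes live in a plain dictionary mapping each case-sensitive prefix (including the empty one) to "a" or "an"; a trie would just make this faster:
// longest-matching-prefix lookup; prefixTable is hypothetical
using System;
using System.Collections.Generic;

static class ArticleChooser
{
    const int MaxPrefix = 15; // the word-length cap from step 3

    public static string AOrAn(string word, IDictionary<string, string> prefixTable)
    {
        for (int len = Math.Min(word.Length, MaxPrefix); len >= 0; len--)
            if (prefixTable.TryGetValue(word.Substring(0, len), out var article))
                return article;
        return "a"; // only reached if the empty prefix was discarded in step 4
    }
}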
Edit: I've implemented this in JS/C#. You can try it in your browser, or download the small, reusable javascript implementation it uses. The .NET implementation is package AvsAn on nuget. The implementations are trivial, so it should be easy to port to any other language if necessary.
Turns out the "rules" are quite a bit more complex than I thought:
it's an unanticipated result but it's a unanimous vote
it's an honest decision but a honeysuckle shrub
Symbols: It's an 0800 number, or an ∞ of oregano.
Acronyms: It's a NASA scientist, but an NSA analyst; a FIAT car but an FAA policy.
...which just goes to underline that a rule based system would be tricky to build!
You need to use a list of exceptions. I don't think all of the exceptions are well defined, because it sometimes depends on the accent of the person saying the word.
One stupid way is to ask Google for the two possibilities (using one of the search APIs) and use the most popular:
http://www.google.co.uk/search?q=%22a+europe%22 - 841,000 hits
http://www.google.co.uk/search?q=%22an+europe%22 - 25,000 hits
Or:
http://www.google.co.uk/search?q=%22a+honest%22 - 797,000 hits
http://www.google.co.uk/search?q=%22an+honest%22 - 8,220,000 hits
Therefore "a europe" and "an honest" are the correct versions.
If you could find a source of word spellings to word pronunciations, like:
"honest":"on-ist"
"horrible":"hawr-uh-buhl, hor-"
You could base your decision on the first character of the spelled pronunciation string.
For performance, perhaps you could use such a lookup to pre-generate exception sets and use those smaller lookup sets during execution instead.
Edited to add:
!!! - I think you could use this to generate your exceptions:
http://www.speech.cs.cmu.edu/cgi-bin/cmudict
Not everything will be in the dictionary, of course - meaning not every possible exception would wind up in your exceptions sets - but in that case, you could just default to an for vowels/ a for consonants or use some other heuristic with better odds.
(Looking through the CMU dictionary, I was pleased to see it includes proper nouns for countries and some other places - so it will handle examples like "a Ukrainian", "a USA Today paper", "a Urals-inspired painting".)
Editing once more to add: The CMU dictionary does not contain common acronyms, and you have to worry about those starting with s,f,l,m,n,u,and x. But there are plenty of acronym lists out there, like in Wikipedia, which you could use to add to the exceptions.
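Assuming the CMU dictionary has been loaded into a word-to-phoneme-string map, the decision reduces to checking whether the first phoneme carries a stress digit (in CMUdict, only vowel phonemes do, e.g. HONEST = "AA1 N AH0 S T"). A C# sketch:
// "a" vs "an" from a CMUdict-style pronunciation
using System;
using System.Collections.Generic;

static class Articles
{
    public static string For(string word, IDictionary<string, string> cmudict)
    {
        if (cmudict.TryGetValue(word.ToUpperInvariant(), out var phones))
        {
            // CMUdict vowel phonemes end with a stress digit (AA1, IH0, ...)
            string first = phones.Split(' ')[0];
            return char.IsDigit(first[first.Length - 1]) ? "an" : "a";
        }
        // fallback heuristic for words not in the dictionary
        return "aeiou".IndexOf(char.ToLowerInvariant(word[0])) >= 0 ? "an" : "a";
    }
}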
You have to implement it manually and add the exceptions you want - for example, when the first letter is "h" followed by an "o", like honest, hour... and also the opposite ones, like europe, university, used...
Since "a" and "an" is determined by phonetic rules and not spelling conventions, I would probably do it like this:
If the first letter of the word is a consonant -> 'a'
If the first letter of the word is a vowel-> 'an'
Keep a list of exceptions (heart, x-ray, house) as rjumnro says.
You need to look at the grammatical rules for indefinite articles (there are only two indefinite articles in English grammar - "a" and "an"). You may not agree these sound correct, but the rules of English grammar are very clear:
"The words a and an are indefinite
articles. We use the indefinite
article an before words that begin
with a vowel sound (a, e, i, o, u) and
the indefinite article a before words
that begin with a consonant sound (all
other letters)."
Note this means a vowel sound, and not a vowel letter. For instance, words beginning with a silent "h", such as "honour" or "heir", are treated as vowels and so are preceded by "an" - for example, "It is an honour to meet you". Words beginning with a consonant sound are prefixed with a - which is why you say "a used car" rather than "an used car": "used" has a "yoose" sound rather than an "uhh" sound.
So, as a programmer, these are the rules to follow. You just need to work out a way of determining what sound a word begins with, rather than what letter. I've seen examples of this, such as this one in PHP by Jaimie Sirovich :
function aOrAn($next_word)
{
    $_an = array('hour', 'honest', 'heir', 'heirloom');
    $_a = array('use', 'useless', 'user');
    $_vowels = array('a', 'e', 'i', 'o', 'u');
    $_endings = array('ly', 'ness', 'less', 'lessly', 'ing', 'ally', 'ially');
    $_endings_regex = implode('|', $_endings);

    // grab the first word (up to a hyphen or space)
    preg_match('#(.*?)(-| |$)#', $next_word, $captures);
    $the_word = trim($captures[1]);

    // endings made optional and the match anchored so bare stems
    // like "hour" are matched too
    $_an_regex = implode('|', $_an);
    if (preg_match("#^($_an_regex)($_endings_regex)?$#i", $the_word)) {
        return 'an';
    }

    $_a_regex = implode('|', $_a);
    if (preg_match("#^($_a_regex)($_endings_regex)?$#i", $the_word)) {
        return 'a';
    }

    if (in_array(strtolower($the_word[0]), $_vowels)) {
        return 'an';
    }
    return 'a';
}
It's probably easiest to create the rule and then create a list of exceptions and use that. I don't imagine there will be that many.
Man, I realize that this is probably a settled argument, but I think it can be settled easier than using ad hoc grammar rules from Wikipedia, which would derive vernacular grammar, at best.
The best solution, it seems, is to have the use of a or an trigger a phoneme-based matching of the following word, with certain phonemes always associated with "an" and the remaining belonging to "a".
Carnegie Mellon University has a great online tool for this kind of check - http://www.speech.cs.cmu.edu/cgi-bin/cmudict - with 125k words and a matching set of 39 phonemes. Plugging a word in provides the entire phoneme sequence, of which only the first is important.
If the word does not appear in the dictionary, such as "NSA" and is all capitalized, then the system can assume the word is an Acronym and use the first letter to determine which indefinite article to use based on the same original rule set.
@Nathan Long:
Downloading Wikipedia is actually not a bad idea. All images, videos, and other media are not needed.
I wrote a (crappy) program in PHP and JavaScript(!) to read the entire Swedish Wikipedia (or at least all articles that could be reached from the article about math, which was the starting point for my spider).
I collected all words and internal links in a database, and also kept track of the frequency of every word. I now use that as a word database for various tasks:
* Finding all words that can be created from a given set of letters (including wildcards)
* Creating a simple syntax file for Swedish (all words not in the database are considered incorrect)
Oh, and downloading the entire wiki took about one week, with my laptop running most of the time on a 10 Mbit connection.
While you're at it, log all occurrences that are inconsistent with the English language and see if some of them are mistakes. Go fix 'em and give something back to the community.
Note that there are differences between American and British dialects, as Grammar Girl pointed out in her episode A Versus An.
One complication is when words are pronounced differently in British and American English. For example, the word for a certain kind of plant is pronounced “erb” in American English and “herb” in British English. In the rare cases where this is a problem, use the form that will be expected in your country or by the majority of your readers.
Take a look at Perl's Lingua::EN::Inflect. See sub _indef_article in the source code.
I've ported a function from Python (originally from CPAN package Lingua-EN-Inflect) that correctly determines vowel sounds in C# and posted it as an answer to the question Programmatically determine whether to describe an object with a or an?. You can see the code snippet here.
Could you get an English dictionary that stores the words written in our regular alphabet, along with the International Phonetic Alphabet?
Then use the phonetics to figure out the beginning sound of the word, and thus whether "a" or "an" is appropriate?
Not sure if that would actually be easier than (or as much fun as) the statistical Wikipedia approach.
I would use a rule-based algorithm to cover as many as I could, then use a list of exceptions. If you wanted to get fancy, you could try to determine some new "rules" from your exception list.
It just looks like a set of heuristics. It needs to be a bit more complicated and answer some things I never got a good answer for - for example, how do you treat abbreviations ("a RPM" or "an RPM"? I always thought the latter makes more sense).
A quick search yielded nothing on linguistic libraries that handle the English singular prefix, but you can probably find something if you dig deep enough. And if not - you can always write your own inflection library and gain world fame :-) .
I don't suppose you can just fill in some boilerplate like "a/an" as a one-step cover-all. Otherwise you will end up with assumption errors, like all words with "h" followed by "o" getting "an" instead of "a" - "home" (an home?). Basically, you will end up encoding the logic of the English language, or occasionally hitting rare cases that make you look foolish.
Check whether a word starts with a vowel sound or a consonant sound. A "u" is a vowel letter but often a consonant sound ("yu"), and hence belongs in the consonant group for your purposes.
The letter "h" stands for a glottal stop (a consonant) in French and in French words used in English. You can make a list of those (in fact, including "honor", "honour", and "hour" might be sufficient) and count them as starting with vowels (since English doesn't recognise a glottal stop).
Also count "eu" as a consonant, etc.
It's not too difficult.
The choice of "an" or "a" depends on the way the word is pronounced. By looking at the word you can't necessarily tell its correct pronunciation, e.g. jargon, abbreviations, etc.
One way is to use a dictionary with support for phonemes, and use the phoneme information associated with the word to determine whether "a" or "an" should be used.
I can't be certain that it has the appropriate information in it to differentiate "a" and "an", but Princeton's WordNet database exists precisely for the purpose of similar sorts of tasks, so I think it's likely that the data is in there. It has some tens of thousands of words and hundreds of thousands of relationships between said words (IIRC; I can't find the current statistics on the site). Give it a look. It's freely downloadable.
How? How about when? Get the noun with article attached. Ask for it in a specific form.
Ask for the noun with the article. Many a MUD codebase stores items as information consisting of:
one or more keywords
a short form
a long form
The keyword form might be "short sword rusty". The short form will be "a sword". The long form will be "a rusty short sword".
Are you writing an "a vs. an" web service? Take a step back and see whether you can attack this leak further upstream. You can build a dam, but unless you stop the flow, it will spill over eventually.
Determine how critical this is, and as others have suggested, go for "quick but crude", or "expensive but sturdy".
The rule is very simple. If the next word starts with a vowel sound then use 'an', if it starts with a consonant then use 'a'. The hard thing is that our school classification of vowels and consonants doesn't work. The 'h' in 'honour' is a vowel, but the 'h' in 'hospital' is a consonant.
Even worse, some words like 'honest' start with a vowel or a consonant depending on who is saying them. Even worse, some words change depending on the words around them for some speakers.
The problem is bounded only by how much time and effort you want to put into it. You can write something using "aeiou" as vowels in a couple of minutes, or you can spend months doing linguistic analysis of your target audience. In between is a huge number of heuristics which will be right for some speakers and wrong for others - but because different speakers make different determinations for the same word, it simply isn't possible to be right all of the time no matter how you do it.
The ideal approach would be to find someplace online that can give you the answers, dynamically query them and cache the answers. You can prime the system with a few hundred words for starters.
(I don't know of such an online source, but I wouldn't be surprised if there is one.)
So, a reasonable solution is possible without downloading all of the internet. Here's what I did:
I remembered that Google published their raw data for Google Books N-Gram frequencies here. So I downloaded the 2-gram files for "a_" and "an". It's about 26 gigs if I recall correctly. From that I produced a list of strings where they were overwhelmingly preceded by the opposite article you'd expect (if we were to expect vowels take an "an"). That final list of words I was able to store in under 7 kilobytes.
Rather than writing code that could be culture-dependent and have numerous exceptions I tend to rework the statement that includes the indefinite article. For example, rather than saying "This customer wants to live in a Single-Family Home.", you could say "This customer wants a housing type of 'Single-Family Home'." That way, the indefinite article is not dependent on the variable - e.g., "This customer wants a housing type of 'Apartment'."
I'd like to synthesize a few of the given answers, and contribute my own solutions as well.
Let's start with some basic heuristics:
Start with the first letter of the word.
If it starts with an "a", "i" or "o", then use "an". As far as I know, those letters always begin with an actual vowel.
If it starts with an "e", then it will be pronounced as a vowel, unless it is followed by a "u" (e.g., euphonium, eugenics, euphoric, euphemism, etc.). This would be the case with "i" as well, in the unlikely cases of "Iuka", "Iuliyanov", and "IUPAC". (https://en.wiktionary.org/w/index.php?title=Category:English_terms_with_IPA_pronunciation&from=iu)
If it starts with a "b", "c", "d", "g", "k", "p", "q", "t", "v", "w", or "z", then it is guaranteed to be a consonant, and pronounced like a consonant.
If it starts with an "f", "l", "m", "n", "r", "s", or "x", it may be pronounced with a vowel, but only if it's in an acronym. Otherwise, it's guaranteed to be pronounced as a consonant.
If it begins with a "u", or with an "h", "j", or "y", then it falls into a corner case.
Determine whether the word is an acronym.
If the word contains more than one consecutive capital letter, or contains periods, assume it is an acronym. This can be detected via a simple regex (e.g. [A-Z][A-Z]+).
If the word is an acronym, first turn it into a more "word-like" form (i.e., not all capitalized, not containing periods) before going to Step 3. If it isn't an acronym, refer back to the information in Step 1.
Use a dictionary!
If the word is in this dictionary and begins with an "a", "e", "i", "o", or "u", then it begins with a vowel sound; otherwise, it's a consonant.
Wiktionary and Wikipedia use the IPA to represent the pronunciations of words. If the pronunciation begins with one of the IPA vowel symbols, then the word begins with a vowel sound.
Hopefully this helps. I suspect it will be less resource-intensive than any single option, given that much of it can be solved by either a simple equality check (e.g. word[0] == 'a') or a regex (e.g. [aioAIO]), plus some basic knowledge of linguistics and the pronunciations of the English letter names. If the word doesn't fall into a simple case, then use one of the more complex solutions the other answerers have provided.
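A hedged C# sketch of steps 1 and 2 only (initialisms pronounced as words, like NASA, and the "u"/"h"/"j"/"y" corner cases still need the dictionary from step 3):
// first-letter heuristic; deliberately incomplete, per the steps above
using System.Text.RegularExpressions;

static class ArticleHeuristic
{
    public static string Guess(string word)
    {
        // step 2: spelled-out acronyms - letters whose names start
        // with a vowel sound (F = "ef", N = "en", X = "ex", ...)
        if (Regex.IsMatch(word, "[A-Z][A-Z]+") || word.Contains("."))
            return "aefhilmnorsx".IndexOf(char.ToLowerInvariant(word[0])) >= 0 ? "an" : "a";

        char c = char.ToLowerInvariant(word[0]);
        if ("aio".IndexOf(c) >= 0) return "an";  // always a vowel sound
        if (c == 'e')                            // vowel sound unless "eu..."
            return word.Length > 1 && char.ToLowerInvariant(word[1]) == 'u' ? "a" : "an";
        return "a"; // consonants; "u", "h", "j", "y" need the dictionary
    }
}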
You use "a" whenever the next word isn't a vowel? And you use "an" whenever there is a vowel?
With that said, couldn't you just do a regular expression like "a\s[a,e,i,o,u].*"? And then replace it with an "an?"
