C# Contains not picking up substring (on foreign country) [duplicate] - c#

I came across the term 'The Turkey Test' while learning about code testing. I don't really know what it means.
What is the Turkey Test? Why is it called that?

The Turkey problem is related to software internationalization, or more simply to software misbehaving in various language cultures.
Different countries have different conventions, for example for writing dates (14.04.2008 in Turkey and 4/14/2008 in the US), numbers (e.g. 123,45 in Poland and 123.45 in the USA) and rules about character uppercasing (like in Turkey with the letters i, I and ı).
As Jeff Moser noted below, one such problem was reported by a Turkish user who found a bug in the ToUpper() function. There are more details in the comments below.
However the problem is not limited to Turkey and to string conversions.
For example, in Poland and many other countries, dates and numbers are also written in a different manner.
Some links from a Google search for the Turkey Test:
Does Your Code Pass The Turkey Test?
by Jeff Moser
What's Wrong With Turkey?
by Jeff Atwood

Here is how the Turkey Test is described:
Forget about Turkey, this won't even pass in the USA. You need a case insensitive compare. So you try:
String.Compare(string,string,bool ignoreCase):
....
Do any of these pass "The Turkey Test?"
Not a chance!
Reason: You've been hit with the "Turkish I" problem.
As discussed by lots and lots of people, the "I" in Turkish behaves differently than in most languages. Per the Unicode standard, our lowercase "i" becomes "İ" (U+0130 "Latin Capital Letter I With Dot Above") when it moves to uppercase. Similarly, our uppercase "I" becomes "ı" (U+0131 "Latin Small Letter Dotless I") when it moves to lowercase.
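A minimal sketch of that behaviour, assuming a .NET runtime where the tr-TR culture data is available (results may vary slightly by globalization backend):

using System;
using System.Globalization;

var tr = CultureInfo.GetCultureInfo("tr-TR");
Console.WriteLine("file".ToUpper(tr));   // "FİLE" - dotted capital İ
Console.WriteLine("FILE".ToLower(tr));   // "fıle" - dotless small ı
// A culture-sensitive, case-insensitive compare can therefore fail where an ordinal one succeeds:
Console.WriteLine(string.Compare("file", "FILE", true, CultureInfo.GetCultureInfo("en-US"))); // 0 (equal)
Console.WriteLine(string.Compare("file", "FILE", true, tr));                                  // not 0
Console.WriteLine(string.Equals("file", "FILE", StringComparison.OrdinalIgnoreCase));         // True everywhere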

We write dates from the smallest unit to the largest, i.e. dd.MM.yyyy: 28.10.2010
We use '.' (dot) as the thousands separator and ',' (comma) as the decimal separator: 4.567,9
We have ö=>Ö, ç=>Ç, ş=>Ş, ğ=>Ğ, ü=>Ü, and most importantly ı=>I and i=>İ; in other words, the lower case of upper I is dotless and the upper case of lower i is dotted.
These rules cause confusing errors that can give people a very stressful time.
If your code properly runs in Turkey, it'll probably work anywhere.
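A small illustrative snippet of those conventions, assuming the tr-TR culture (exact output can vary with the OS culture data):

using System;
using System.Globalization;

var tr = CultureInfo.GetCultureInfo("tr-TR");
Console.WriteLine(new DateTime(2010, 10, 28).ToString("d", tr)); // 28.10.2010
Console.WriteLine(4567.9.ToString("N1", tr));                    // 4.567,9
Console.WriteLine("ısparta".ToUpper(tr));                        // ISPARTA  (ı => I)
Console.WriteLine("istanbul".ToUpper(tr));                       // İSTANBUL (i => İ)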

The so called "Turkey Test" is related to Software internationalization. One problem of globalization/internationalization are that date and time formats in different cultures can differ on many levels (day/month/year order, date separator etc).
Also, Turkey has some special rules for capitalization, which can lead to problems. For example, the Turkish "i" character is a common problem for many programs which capitalize it in a wrong way.

The link provided by @Luixv gives a comprehensive description of the issue.
The summary is that if you're going to test your code on only one non-English locale, test it on Turkish.
This is because Turkish has instances of most of the edge cases you are likely to encounter with localization, including "unusual" format strings and non-standard characters (such as different capitalization rules for i).

Jeff Atwood has a blog article on the same topic, which is the first place I came across it myself.
In summary, attempting to run your application under a Turkish locale is an excellent test
of your i18n.
Here's Jeff's article.

Related

C# string.IndexOf() returns unexpected value

This question applies to C#, .net Compact Framework 2 and Windows CE 5 devices.
I encountered a bug in a .net DLL which was in use on very different CE devices for years, without showing any problems. Suddenly, on a new Windows CE 5.0 device, this bug appeared in the following code:
string s = "Print revenue receipt"; // has only single space chars
int i = s.IndexOf("  "); // two space chars
I expected i to be -1; however, this was only true until today, when IndexOf suddenly returned 5.
Since this behaviour doesn't occur when using
int i = s.IndexOf("  ", StringComparison.Ordinal);
, I'm quite sure that this is a culture-based phenomenon, but I can't pin down what difference this new device makes. It is a mostly identical version of a known device (just a faster CPU and a new board).
Both devices:
run Windows CE 5.0 with identical localization
System.Environment.Version reports '2.0.7045.0'
CultureInfo.CurrentUICulture and CultureInfo.CurrentCulture report 'en-GB' (also tested with 'de-DE')
'all' related registry keys are equal.
The new device had the CF 3.5 preinstalled, whose GAC files I experimentally renamed, with no change in the described behaviour. Since Version 2.0.7045.0 is always reported at runtime, I assume these assemblies have no effect.
Although this is not difficult to fix, I cannot stand it when things seem this magical. Any hints on what I am missing?
Edit: it is getting stranger and stranger, see screenshot:
One more:
I believe you already have the answer: use an ordinal search
int i = s.IndexOf("  ", StringComparison.Ordinal);
You can read a small section in the documentation for the String Class which has this to say on the subject:
String search methods, such as String.StartsWith and String.IndexOf, also can perform culture-sensitive or ordinal string comparisons. The following example illustrates the differences between ordinal and culture-sensitive comparisons using the IndexOf method. A culture-sensitive search in which the current culture is English (United States) considers the substring "oe" to match the ligature "œ". Because a soft hyphen (U+00AD) is a zero-width character, the search treats the soft hyphen as equivalent to Empty and finds a match at the beginning of the string. An ordinal search, on the other hand, does not find a match in either case.
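As a minimal illustration using the question's own string (the culture-sensitive result is the part that varies between devices and framework versions):

using System;

string s = "Print revenue receipt";                            // single spaces only
Console.WriteLine(s.IndexOf("  "));                            // culture-sensitive: -1 on most systems, 5 on the new device
Console.WriteLine(s.IndexOf("  ", StringComparison.Ordinal));  // ordinal: always -1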
Culture handling can really appear to be quite magical on some systems. What I came to do after years of pain is to always set the culture information manually to InvariantCulture where I do not explicitly want different behaviour for different cultures. So my suggestion would be: make that IndexOf check always use the same culture information, like so:
int i = s.IndexOf("  ", StringComparison.InvariantCulture);
The reference at http://msdn.microsoft.com/en-us/library/k8b1470s.aspx states:
"Character sets include ignorable characters, which are characters that are not considered when performing a linguistic or culture-sensitive comparison. In a culture-sensitive search, if value contains an ignorable character, the result is equivalent to searching with that character removed."
This is from the 4.5 reference; the references for previous versions don't contain anything like that.
So let me take a guess: they changed the rules from 4.0 to 4.5, and now the second space of a two-space sequence is considered to be an "ignorable character" - at least if the engine recognizes your string as English text (like your example string s), otherwise not.
And somehow on your new device, a 4.5 dll is used instead of the expected 2.0 dll.
A wild guess, I know :)

Why does "i" get replaced with "ı"

I received a crash report from an application which was trying to read XML from a file it had previously written. After requesting the user send me the file, I compared it with what should have been written and found a really odd problem I haven't come across before.
Some (but not all) of the i characters had been replaced with ı - a dotless i. For example, a node named "title" was fine, but a node named "initialdirectory" had the first i replaced, the second was left alone, i.e. ınitialdirectory.
Until today I wasn't even aware there was such a character, but now I do and I just don't know how it was written like that - the XML was written using an XmlWriter with UTF8 encoding. Just a normal everyday write, nothing complicated.
I normally (well, since getting Resharper and it yells at me for skipping the parameter) use StringComparison.OrdinalIgnoreCase when doing IndexOf etc, but I'm at a loss on how I'm supposed to do this when writing data, unless I'm supposed to start changing thread cultures.
Has anyone experienced a similar issue before, and if so, what's the best way to deal with it?
In Turkish there are two i's: one with a dot, i, and one without a dot, ı. In upper case the first one has a dot, İ, and the second one doesn't, I.
At some point your program is converting InitialDirectory to lower case according to the default locale, which is known to be Turkish in some parts of the world. To fix the problem you can convert cases using a fixed, known locale, such as American English.
Update: Even better, use the ToLowerInvariant() method which converts a string to lower case in the "invariant culture".
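A minimal sketch of the difference, assuming the thread's current culture happens to be Turkish (tr-TR):

using System;
using System.Globalization;

var tr = CultureInfo.GetCultureInfo("tr-TR");
Console.WriteLine("InitialDirectory".ToLower(tr));        // "ınitialdirectory" - the capital I becomes dotless ı
Console.WriteLine("InitialDirectory".ToLowerInvariant()); // "initialdirectory" - invariant culture, no surprises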

What are some good non-English phrases with singular and plural forms that can be used to test an internationalization and localization library?

I've been working on a .NET library to assist with internationalization of an application. It's written in C#, called SmartFormat, and is open-source on GitHub.
It contains Grammatical Number rules for multiple languages that determine the correct singular/plural form based on the template and locale. The rules were obtained from Unicode Language Plural Rules.
When translating a phrase that has words that depend on a quantity (such as nouns, verbs, and adjectives), you specify the multiple forms and the formatter chooses the correct form based on these rules.
An example in English: (notice the syntax is nearly identical to String.Format)
var message = "There {0:is:are} {0} {0:item:items} remaining.";
var output = Smart.Format(culture, message, items.Count);
I would like to write unit tests for many of the supported languages. However, I only speak English and Spanish (which both have the same grammatical-number rules).
What are some good non-English test phrases that can be used for these unit tests?
I am looking for phrases similar to "There {0:is:are} {0} {0:item:items} remaining.". Notice how this example requires a specific verb and noun based on the quantity.
A note about syntax:
This library looks for : delimited words and chooses the correct word based on the rules defined for the locale. For example, in Polish, there are 3 plural forms for the word "File" : 1 "plik", 2-4 "pliki", 5-21 "plików". So, you would specify all 3 forms in the format string: "{0} {0:plik:pliki:plików}".
The words are typically ordered from smallest possible value to largest, such as "{0:zero:one:two:few:many:other}", as defined by the Unicode Language Plural Rules.
Additional information about this code has been discussed here: Localization of singular/plural words - what are the different language rules for grammatical numbers?
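For example, a hypothetical unit test for the Polish rules mentioned above might look like this (assuming Smart.Format is called exactly as in the English example; the expected outputs follow the three forms listed above):

using System;
using System.Globalization;
using SmartFormat;

var pl = CultureInfo.GetCultureInfo("pl-PL");
var template = "{0} {0:plik:pliki:plików}";
Console.WriteLine(Smart.Format(pl, template, 1)); // "1 plik"
Console.WriteLine(Smart.Format(pl, template, 3)); // "3 pliki"
Console.WriteLine(Smart.Format(pl, template, 5)); // "5 plików"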
I'd like to contribute the approximate Turkish translation of your sample phrase, so you can test the case where the number of possible forms equals 1. ;)
{0} nesne kaldı. = ({0} items remaining.)
Although Turkish words are pluralized regularly with suffixes, the plural forms are never used when the number of items is specified. So the above should be rendered the same in each case:
1 nesne kaldı.
2 nesne kaldı.
42 nesne kaldı.
However, when the count is not explicitly specified, it may become grammatically important to indicate if we are talking about one or more items. So the message "Do you want to delete the selected item(s)?" should be rendered like this:
Seçili nesneyi silmek istiyor musunuz? (when item count=1)
Seçili nesneleri silmek istiyor musunuz? (when item count>1)
So I guess the format for this would be:
Seçili {0:nesneyi:nesneleri} silmek istiyor musunuz?

CamelCase conversion to friendly name, i.e. Enum constants; Problems?

In my answer to this question, I mentioned that we used UpperCamelCase parsing to get a description of an enum constant not decorated with a Description attribute, but it was naive, and it didn't work in all cases. I revisited it, and this is what I came up with:
var result = Regex.Replace(camelCasedString,
    @"(?<a>(?<!^)[A-Z][a-z])", @" ${a}");
result = Regex.Replace(result,
    @"(?<a>[a-z])(?<b>[A-Z0-9])", @"${a} ${b}");
The first Replace looks for an uppercase letter, followed by a lowercase letter, EXCEPT where the uppercase letter is the start of the string (to avoid having to go back and trim), and adds a preceding space. It handles your basic UpperCamelCase identifiers, and leading all-upper acronyms like FDICInsured.
The second Replace looks for a lowercase letter followed by an uppercase letter or a number, and inserts a space between the two. This is to handle special but common cases of middle or trailing acronyms, or numbers in an identifier (except leading numbers, which are usually prohibited in C-style languages anyway).
Running some basic unit tests, the combination of these two correctly separated all of the following identifiers: NoDescription, HasLotsOfWords, AAANoDescription, ThisHasTheAcronymABCInTheMiddle, MyTrailingAcronymID, TheNumber3, IDo3Things, IAmAValueWithSingleLetterWords, and Basic (which didn't have any spaces added).
So, I'm posting this first to share it with others who may find it useful, and second to ask two questions:
Anyone see a case that would follow common CamelCase-ish conventions, that WOULDN'T be correctly separated into a friendly string this way? I know it won't separate adjacent acronyms (FDICFCUAInsured), recapitalize "properly" camelCased acronyms like FdicInsured, or capitalize the first letter of a lowerCamelCased identifier (but that one's easy to add - result = Regex.Replace(result, "^[a-z]", m=>m.ToString().ToUpper());). Anything else?
Can anyone see a way to make this one statement, or more elegant? I was looking to combine the Replace calls, but as they do two different things to their matches it can't be done with these two strings. They could be combined into a method chain with a RegexReplace extension method on String, but can anyone think of better?
So while I agree with Hans Passant here, I have to say that I had to try my hand at making it one regex as an armchair regex user.
(?<a>(?<!^)((?:[A-Z][a-z])|(?:(?<!^[A-Z]+)[A-Z0-9]+(?:(?=[A-Z][a-z])|$))|(?:[0-9]+)))
Is what I came up with. It seems to pass all the tests you put forward in the question.
So
var result = Regex.Replace(camelCasedString, @"(?<a>(?<!^)((?:[A-Z][a-z])|(?:(?<!^[A-Z]+)[A-Z0-9]+(?:(?=[A-Z][a-z])|$))|(?:[0-9]+)))", @" ${a}");
Does it in one pass.
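As a quick sanity check, here is a small sketch that runs that single regex against a few of the identifiers from the question (the Friendly helper name is just for illustration):

using System;
using System.Text.RegularExpressions;

const string pattern = @"(?<a>(?<!^)((?:[A-Z][a-z])|(?:(?<!^[A-Z]+)[A-Z0-9]+(?:(?=[A-Z][a-z])|$))|(?:[0-9]+)))";
string Friendly(string s) => Regex.Replace(s, pattern, " ${a}");

Console.WriteLine(Friendly("FDICInsured"));                     // FDIC Insured
Console.WriteLine(Friendly("ThisHasTheAcronymABCInTheMiddle")); // This Has The Acronym ABC In The Middle
Console.WriteLine(Friendly("TheNumber3"));                      // The Number 3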
Not that this directly answers the question, but why not test by taking the standard C# API and converting each class name into a friendly name? It'd take some manual verification, but it'd give you a good list of standard names to test.
Let's say every case you come across works with this (you're asking us for examples that won't and then giving us some, so you don't even have a question left).
This still binds UI to programmatic identifiers in a way that will make both programming and UI changes brittle.
It still assumes your program will only be used in one language. Either your potential market is so small that just indexing an array of names would be scalable enough (e.g. a one-client bespoke or in-house project), or you are assuming you will never be successful enough to need to be available in other languages or other dialects of your first-chosen language.
Does "well, it'll work as long as we're a failure" sound like a passing grade in balancing designs?
Either code it to use resources, or else code it to pass the enum name blindly or use an array of names, as that at least will be modifiable afterwards.

How can I correctly prefix a word with "a" and "an"?

I have a .NET application where, given a noun, I want it to correctly prefix that word with "a" or "an". How would I do that?
Before you think the answer is to simply check if the first letter is a vowel, consider phrases like:
an honest mistake
a used car
1. Download Wikipedia.
2. Unzip it and write a quick filter program that spits out only article text (the download is generally in XML format, along with non-article metadata too).
3. Find all instances of a(n)... and make an index on the following word and all of its prefixes (you can use a simple suffix trie for this). This should be case sensitive, and you'll need a maximum word length - 15 letters?
4. (optional) Discard all those prefixes which occur less than 5 times or where "a" vs. "an" achieves less than a 2/3 majority (or some other thresholds - tweak here). Preferably keep the empty prefix to avoid corner cases.
5. You can optimize your prefix database by discarding all those prefixes whose parent shares the same "a" or "an" annotation.
6. When determining whether to use "A" or "AN", find the longest matching prefix and follow its lead (sketched below). If you didn't discard the empty prefix in step 4, then there will always be a matching prefix (namely the empty prefix); otherwise you may need a special case for a completely non-matching string (such input should be very rare).
You probably can't get much better than this - and it'll certainly beat most rule-based systems.
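A minimal sketch of step 6's lookup, with a tiny hand-written prefix table standing in for the one mined from Wikipedia (the table contents and names here are purely illustrative):

using System;
using System.Collections.Generic;

var prefixTable = new Dictionary<string, string>
{
    [""]    = "a",   // empty prefix kept to avoid corner cases
    ["a"]   = "an", ["e"] = "an", ["i"] = "an", ["o"] = "an", ["u"] = "an",
    ["hon"] = "an",  // honest, honour, ...
    ["use"] = "a",   // used, useless, ...
    ["eu"]  = "a",   // europe, euphemism, ...
};

string ArticleFor(string word)
{
    for (int len = Math.Min(word.Length, 15); len >= 0; len--)
        if (prefixTable.TryGetValue(word.Substring(0, len), out var article))
            return article;
    return "a"; // unreachable while the empty prefix is present
}

Console.WriteLine($"{ArticleFor("honest")} honest mistake"); // an honest mistake
Console.WriteLine($"{ArticleFor("used")} used car");         // a used car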
Edit: I've implemented this in JS/C#. You can try it in your browser, or download the small, reusable javascript implementation it uses. The .NET implementation is package AvsAn on nuget. The implementations are trivial, so it should be easy to port to any other language if necessary.
Turns out the "rules" are quite a bit more complex than I thought:
it's an unanticipated result but it's a unanimous vote
it's an honest decision but a honeysuckle shrub
Symbols: It's an 0800 number, or an ∞ of oregano.
Acronyms: It's a NASA scientist, but an NSA analyst; a FIAT car but an FAA policy.
...which just goes to underline that a rule based system would be tricky to build!
You need to use a list of exceptions. I don't think all of the exceptions are well defined, because it sometimes depends on the accent of the person saying the word.
One stupid way is to ask Google for the two possibilities (using one of the search APIs) and use the most popular:
http://www.google.co.uk/search?q=%22a+europe%22 - 841,000 hits
http://www.google.co.uk/search?q=%22an+europe%22 - 25,000 hits
Or:
http://www.google.co.uk/search?q=%22a+honest%22 - 797,000 hits
http://www.google.co.uk/search?q=%22an+honest%22 - 8,220,000 hits
Therefore "a europe" and "an honest" are the correct versions.
If you could find a source of word spellings to word pronunciations, like:
"honest":"on-ist"
"horrible":"hawr-uh-buhl, hor-"
You could base your decision on the first character of the spelled pronunciation string.
For performance, perhaps you could use such a lookup to pre-generate exception sets and use those smaller lookup sets during execution instead.
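A small sketch of that idea, with a toy pronunciation table standing in for a real source such as the CMU dictionary (the entries, phoneme spellings and names below are illustrative only):

using System;
using System.Collections.Generic;

var pronunciations = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["honest"]   = "AA N AH S T",        // first phoneme is a vowel
    ["horrible"] = "HH AO R AH B AH L",  // first phoneme is a consonant
    ["used"]     = "Y UW Z D",           // "y" glide: consonant
};
var vowelPhonemes = new HashSet<string> { "AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY", "IH", "IY", "OW", "OY", "UH", "UW" };

string ArticleFor(string word)
{
    if (pronunciations.TryGetValue(word, out var phonemes))
        return vowelPhonemes.Contains(phonemes.Split(' ')[0]) ? "an" : "a";
    return "aeiou".IndexOf(char.ToLowerInvariant(word[0])) >= 0 ? "an" : "a"; // fallback heuristic for unknown words
}

Console.WriteLine($"{ArticleFor("honest")} honest mistake"); // an honest mistake
Console.WriteLine($"{ArticleFor("used")} used car");         // a used car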
Edited to add:
!!! - I think you could use this to generate your exceptions:
http://www.speech.cs.cmu.edu/cgi-bin/cmudict
Not everything will be in the dictionary, of course - meaning not every possible exception would wind up in your exception sets - but in that case, you could just default to "an" for vowels / "a" for consonants or use some other heuristic with better odds.
(Looking through the CMU dictionary, I was pleased to see it includes proper nouns for countries and some other places - so it will handle examples like "a Ukrainian", "a USA Today paper", "a Urals-inspired painting".)
Editing once more to add: The CMU dictionary does not contain common acronyms, and you have to worry about those starting with s,f,l,m,n,u,and x. But there are plenty of acronym lists out there, like in Wikipedia, which you could use to add to the exceptions.
You have to implement it manually and add the exceptions you want, for example if the first letter is 'h' followed by an 'o' like honest, hour ... and also the opposite ones like europe, university, used ...
Since "a" and "an" is determined by phonetic rules and not spelling conventions, I would probably do it like this:
If the first letter of the word is a consonant -> 'a'
If the first letter of the word is a vowel-> 'an'
Keep a list of exceptions (heart, x-ray, house) as rjumnro says.
You need to look at the grammatical rules for indefinite articles (there are only two indefinite articles in English grammar - "a" and "an"). You may not agree these sound correct, but the rules of English grammar are very clear:
"The words a and an are indefinite
articles. We use the indefinite
article an before words that begin
with a vowel sound (a, e, i, o, u) and
the indefinite article a before words
that begin with a consonant sound (all
other letters)."
Note this means a vowel sound, and not a vowel letter. For instance, words beginning with a silent "h", such as "honour" or "heir", are treated as vowels and so are preceded by "an" - for example, "It is an honour to meet you". Words beginning with a consonant sound are prefixed with "a" - which is why you say "a used car" rather than "an used car" - because "used" has a "yoose" sound rather than an "uhh" sound.
So, as a programmer, these are the rules to follow. You just need to work out a way of determining what sound a word begins with, rather than what letter. I've seen examples of this, such as this one in PHP by Jaimie Sirovich:
function aOrAn($next_word)
{
    $_an = array('hour', 'honest', 'heir', 'heirloom');
    $_a = array('use', 'useless', 'user');
    $_vowels = array('a','e','i','o','u');
    $_endings = array('ly', 'ness', 'less', 'lessly', 'ing', 'ally', 'ially');
    $_endings_regex = implode('|', $_endings);

    $tmp = preg_match('#(.*?)(-| |$)#', $next_word, $captures);
    $the_word = trim($captures[1]);
    //$the_word = Format::trimString(Utils::pregGet('#(.*?)(-| |$)#', $next_word, 1));

    $_an_regex = implode('|', $_an);
    if (preg_match("#($_an_regex)($_endings_regex)#i", $the_word)) {
        return 'an';
    }

    $_a_regex = implode('|', $_a);
    if (preg_match("#($_a_regex)($_endings_regex)#i", $the_word)) {
        return 'a';
    }

    if (in_array(strtolower($the_word[0]), $_vowels)) {
        return 'an';
    }
    return 'a';
}
It's probably easiest to create the rule and then create a list of exceptions and use that. I don't imagine there will be that many.
Man, I realize that this is probably a settled argument, but I think it can be settled more easily than by using ad hoc grammar rules from Wikipedia, which would derive vernacular grammar at best.
The best solution, it seems, is to have the use of a or an trigger a phoneme-based matching of the following word, with certain phonemes always associated with "an" and the rest belonging to "a".
Carnegie Mellon University has a great online tool for this kind of check - http://www.speech.cs.cmu.edu/cgi-bin/cmudict - with about 125k words mapped onto 39 phonemes. Plugging a word in gives the entire phoneme sequence, of which only the first one matters here.
If the word does not appear in the dictionary, such as "NSA", and is all capitalized, then the system can assume the word is an acronym and use the first letter to determine which indefinite article to use, based on the same original rule set.
@Nathan Long:
Downloading Wikipedia is actually not a bad idea. All images, videos and other media are not needed.
I wrote a (crappy) program in PHP and JavaScript(!) to read the entire Swedish Wikipedia (or at least all articles that could be reached from the article about math, which was the start for my spider).
I collected all words and internal links in a database, and also kept track of the frequency of every word. I now use that as a word database for various tasks:
* Finding all words that can be created from a given set of letters (including wildcard)
* Created a simple syntax file for Swedish (all words not in the database are considered incorrect).
Oh, and downloading the entire wiki took about one week, with my laptop running most of the time on a 10 Mbit connection.
While you're at it, log all occurrences that are inconsistent with the English language and see if some of them are mistakes. Go fix 'em and give something back to the community.
Note that there are differences between American and British dialects, as Grammar Girl pointed out in her episode A Versus An.
One complication is when words are pronounced differently in British and American English. For example, the word for a certain kind of plant is pronounced “erb” in American English and “herb” in British English. In the rare cases where this is a problem, use the form that will be expected in your country or by the majority of your readers.
Take a look at Perl's Lingua::EN::Inflect. See sub _indef_article in the source code.
I've ported a function from Python (originally from CPAN package Lingua-EN-Inflect) that correctly determines vowel sounds in C# and posted it as an answer to the question Programmatically determine whether to describe an object with a or an?. You can see the code snippet here.
Could you get an English dictionary that stores the words written both in our regular alphabet and in the International Phonetic Alphabet?
Then use the phonetics to figure out the beginning sound of the word, and thus whether "a" or "an" is appropriate?
Not sure if that would actually be easier than (or as much fun as) the statistical Wikipedia approach.
I would use a rule-based algorithm to cover as many as I could, then use a list of exceptions. If you wanted to get fancy, you could try to determine some new "rules" from your exception list.
It just looks like a set of heuristics. It needs to be a bit more complicated and answer some things I never got a good answer for, for example how to treat abbreviations ("a RPM" or "an RPM"? I always thought the latter makes more sense).
A quick search yielded nothing on linguistic libraries that talk about how to handle the English singular prefix, but you can probably find something if you dig deep enough. And if not - you can always write your own inflection library and gain world fame :-).
I don't suppose you can just fill in some boilerplate stuff like 'a/an' as a one-step cover-all. Otherwise you will end up with assumption errors, like all words with 'h' followed by 'o' getting 'an' instead of 'a', as in 'home' - (an home?). Basically, you will end up including the logic of the English language, or occasionally find rare cases that will make you look foolish.
Check whether a word starts with a vowel or a consonant sound. A "u" generally sounds like a consonant followed by a vowel ("yu"), hence it belongs in the consonant group for your purposes.
The letter "h" stands for a glottal stop (a consonant) in French and in French words used in English. You can make a list of those (in fact, including "honor", "honour", and "hour" might be sufficient) and count them as starting with vowels (since English doesn't recognise a glottal stop).
Also count "eu" as a consonant etc.
It's not too difficult.
The choice of "an" or "a" depends on the way the word is pronounced. By looking at the word you can't necessarily tell its correct pronunciation, e.g. for jargon or abbreviations.
One way would be to use a dictionary with phoneme support and use the phoneme information associated with the word to determine whether "a" or "an" should be used.
I can't be certain that it has the appropriate information in it to differentiate "a" and "an", but Princeton's WordNet database exists precisely for the purpose of similar sorts of tasks, so I think it's likely that the data is in there. It has some tens of thousands of words and hundreds of thousands of relationships between said words (IIRC; I can't find the current statistics on the site). Give it a look. It's freely downloadable.
How? How about when? Get the noun with article attached. Ask for it in a specific form.
Ask for the noun with the article. Many a MUD codebase stores items as information consisting of:
one or more keywords
a short form
a long form
The keyword form might be "short sword rusty". The short form will be "a sword". The long form will be "a rusty short sword".
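A tiny sketch of that storage scheme (the record shape below is invented purely for illustration):

using System;

var sword = new Item(
    Keywords:  new[] { "short", "sword", "rusty" },
    ShortForm: "a sword",
    LongForm:  "a rusty short sword");

Console.WriteLine($"You pick up {sword.ShortForm}."); // the article is stored with the noun, never computed

record Item(string[] Keywords, string ShortForm, string LongForm);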
Are you writing an "a vs. an" Web service? Take a step back and look at if you can attack this leak further upstream. You can build a dam, but unless you stop it from flowing, it will spill over eventually.
Determine how critical this is, and as others have suggested, go for "quick but crude", or "expensive but sturdy".
The rule is very simple. If the next word starts with a vowel sound then use 'an', if it starts with a consonant then use 'a'. The hard thing is that our school classification of vowels and consonants doesn't work. The 'h' in 'honour' is a vowel, but the 'h' in 'hospital' is a consonant.
Even worse, some words like 'honest' start with a vowel or a consonant depending on who is saying them. Even worse, some words change depending on the words around them for some speakers.
The problem is bounded only by how much time and effort you want to put into it. You can write something using 'aeiou' as vowels in a couple of minutes, or you can spend months doing linguistic analysis of your target audience. Between them is a huge number of heuristics which will be right for some speakers and wrong for others -- but because different speakers have different determinations for the same word, it simply isn't possible to be right all of the time no matter how you do it.
The ideal approach would be to find someplace online that can give you the answers, dynamically query them and cache the answers. You can prime the system with a few hundred words for starters.
(I don't know of such an online source, but I wouldn't be surprised if there is one.)
So, a reasonable solution is possible without downloading all of the internet. Here's what I did:
I remembered that Google published their raw data for Google Books N-Gram frequencies here. So I downloaded the 2-gram files for "a_" and "an". It's about 26 gigs if I recall correctly. From that I produced a list of words that were overwhelmingly preceded by the opposite article from the one you'd expect (if we expect vowels to take "an"). That final list of words I was able to store in under 7 kilobytes.
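A rough sketch of that filtering step, assuming the 2-gram files have already been boiled down to per-word counts of how often each word follows "a" versus "an" (the tuple shape and the sample numbers for "used" are invented; the "honest" and "europe" counts are the hit counts quoted earlier on this page):

using System;
using System.Collections.Generic;
using System.Linq;

IEnumerable<string> Exceptions(IEnumerable<(string Word, long AfterA, long AfterAn)> counts) =>
    from c in counts
    let startsWithVowel = "aeiou".IndexOf(char.ToLowerInvariant(c.Word[0])) >= 0
    let usuallyAn = c.AfterAn > c.AfterA           // "overwhelmingly" would really want a stronger threshold
    where startsWithVowel != usuallyAn             // keep only words that contradict the naive vowel rule
    select c.Word;

var sample = new (string Word, long AfterA, long AfterAn)[]
{
    ("honest", 797_000, 8_220_000),
    ("europe", 841_000, 25_000),
    ("used", 9_000_000, 12_000),
};
Console.WriteLine(string.Join(", ", Exceptions(sample))); // honest, europe, used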
Rather than writing code that could be culture-dependent and have numerous exceptions I tend to rework the statement that includes the indefinite article. For example, rather than saying "This customer wants to live in a Single-Family Home.", you could say "This customer wants a housing type of 'Single-Family Home'." That way, the indefinite article is not dependent on the variable - e.g., "This customer wants a housing type of 'Apartment'."
I'd like to synthesize a few of the given answers, and contribute my own solutions as well.
Let's start with some basic heuristics:
1. Start with the first letter of the word.
   - If it starts with an "a", "i" or "o", then use "an". As far as I know, words starting with those letters always begin with an actual vowel sound.
   - If it starts with an "e", then it will be pronounced as a vowel, unless it is followed by a "u" (e.g., euphonium, eugenics, euphoric, euphemism, etc.). This would be the case with "i" as well, in the unlikely cases of "Iuka", "Iuliyanov", and "IUPAC". (https://en.wiktionary.org/w/index.php?title=Category:English_terms_with_IPA_pronunciation&from=iu)
   - If it starts with a "b", "c", "d", "g", "k", "p", "q", "t", "v", "w", or "z", then it is guaranteed to be a consonant, and pronounced like a consonant.
   - If it starts with an "f", "l", "m", "n", "r", "s", or "x", it may be pronounced with a vowel, but only if it's in an acronym. Otherwise, it's guaranteed to be pronounced as a consonant.
   - If it begins with a "u", or with an "h", "j", or "y", then it falls into a corner case.
2. Determine whether the word is an acronym.
   - Assume the word is an acronym if it contains more than one consecutive capital letter, or contains periods. This can be checked with a simple regex (e.g. [A-Z][A-Z]+).
   - If the word is an acronym, first turn it into a more "word-like" form (i.e., not all capitalized, not containing periods) before going to Step 3. If it isn't an acronym, then refer back to the information in Step 1.
3. Use a dictionary!
   - If the word is in the dictionary and begins with an "a", "e", "i", "o", or "u", then it begins with a vowel. Otherwise, it's a consonant.
   - Wiktionary and Wikipedia use the IPA to represent the pronunciations of words. If the pronunciation begins with an IPA vowel symbol, then the word begins with a vowel sound.
Hopefully this helps. I suspect that it will be less resource intensive than any single option, given that much of it can be solved by either a simple "equals" statement (e.g. word[0] == 'a') or a regex (e.g. [aioAIO]), plus some basic knowledge of linguistics and the pronunciation of English letter names. If the word doesn't fall into a simple case, then use one of the more complex solutions that the other answerers have provided.
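A compact sketch of steps 1 and 2 above (step 3's dictionary lookup is omitted, and the acronym letter-name list and the corner-case fallback are simplifications of my own):

using System;
using System.Text.RegularExpressions;

string ArticleFor(string word)
{
    bool isAcronym = Regex.IsMatch(word, "[A-Z][A-Z]+") || word.Contains(".");
    char first = char.ToLowerInvariant(word[0]);

    if (isAcronym)                                        // step 2: judge by the spoken letter name
        return "aefilmnorsx".IndexOf(first) >= 0 ? "an" : "a";

    if ("aio".IndexOf(first) >= 0) return "an";           // step 1: always a vowel sound (ignoring rare "Iu..." words)
    if (first == 'e')                                     // vowel sound unless followed by "u" (euphonium vs. elephant)
        return word.Length > 1 && char.ToLowerInvariant(word[1]) == 'u' ? "a" : "an";
    if ("bcdgkpqtvwzflmnrsx".IndexOf(first) >= 0) return "a";
    return "a";                                           // u, h, j, y: corner cases that really need step 3
}

Console.WriteLine($"{ArticleFor("NSA")} NSA analyst");      // an NSA analyst
Console.WriteLine($"{ArticleFor("USA")} USA Today paper");  // a USA Today paper
Console.WriteLine($"{ArticleFor("banana")} banana");        // a banana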
You use "a" whenever the next word isn't a vowel? And you use "an" whenever there is a vowel?
With that said, couldn't you just do a regular expression like "a\s[a,e,i,o,u].*"? And then replace it with an "an?"
