How to compare large string integer values - C#

Currently I am working on a program that processes extremely large integer numbers.
To avoid hitting int.MaxValue, I wrote a routine that treats strings as numbers and splits them up into a List<int>, as follows.
Entry 0 is the most significant group currently known:
list entry 0: 123 (one hundred twenty-three million)
list entry 1: 321 (three hundred twenty-one thousand)
list entry 2: 777 (seven hundred seventy-seven)
Now my question is: how would one check whether an incoming string value is subtractable from these values?
The start I have made on subtraction is as follows, but I am getting stuck on the actual subtracting part.
public bool Subtract(string value)
{
    string cleanedNumeric = NumericAndSpaces(value);
    List<string> input = new List<string>(cleanedNumeric.Split(' '));
    // In case 1) the amount is bigger 2) biggest value exceeded by a 10 fold
    // 3) biggest value exceeds the value
    if (input.Count > values.Count ||
        input[input.Count - 1].Length > values[0].ToString().Length ||
        FastParseInt(input[input.Count - 1]) > values[0])
        return false;
    // Flip the array for ease of comparison
    input.Reverse();
    return true;
}
EDIT
The current target for the highest achievable number in this program is a googolplex, and I am limited to .NET 3.5 on Mono.

You should do some testing on this because I haven't run extensive tests, but it has worked on the cases I've put it through. Also, it might be worth ensuring that each character in the string is truly a valid integer, as this procedure would bomb given a non-integer character (a sketch of such a guard follows the code below). Finally, it expects positive numbers for both subtrahend and minuend.
static void Main(string[] args)
{
    // In subtraction, a subtrahend is subtracted from a minuend to find a difference.
    string minuend = "900000";
    string subtrahend = "900001";
    var isSubtractable = IsSubtractable(subtrahend, minuend);
}

public static bool IsSubtractable(string subtrahend, string minuend)
{
    minuend = minuend.Trim();
    subtrahend = subtrahend.Trim();
    // maybe loop through characters and ensure all are valid integers
    // check if the original number is longer - clearly subtractable
    if (minuend.Length > subtrahend.Length) return true;
    // check if original number is shorter - not subtractable
    if (minuend.Length < subtrahend.Length) return false;
    // at this point we know the strings are the same length, so we'll
    // loop through the characters, one by one, from the start, to determine
    // if the minuend has a higher value character in a column of the number.
    int numberIndex = 0;
    while (numberIndex < minuend.Length)
    {
        Int16 minuendCharValue = Convert.ToInt16(minuend[numberIndex]);
        Int16 subtrahendCharValue = Convert.ToInt16(subtrahend[numberIndex]);
        if (minuendCharValue > subtrahendCharValue) return true;
        if (minuendCharValue < subtrahendCharValue) return false;
        numberIndex++;
    }
    // numbers are the same
    return true;
}
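As a minimal sketch of the validation suggested above (IsAllDigits is a hypothetical helper, not part of the answer's code):
// Hypothetical guard: verify every character is an ASCII digit before
// calling IsSubtractable, so a stray character can't blow up the comparison.
public static bool IsAllDigits(string number)
{
    if (string.IsNullOrEmpty(number)) return false;
    foreach (char c in number)
    {
        if (c < '0' || c > '9') return false;
    }
    return true;
}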

[BigInteger](https://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx) is of arbitrary size.
Run this code if you don't believe me:
var foo = new BigInteger(2);
while (true)
{
    foo = foo * foo;
}
Things get crazy. My debugger (VS2013) becomes unable to represent the number before it's done; I ran it for a short time and got a number with 1.2 million digits in base 10 from ToString. It is big enough. There is a 2GB limit on a single object, which can be overridden in .NET 4.5 with the gcAllowVeryLargeObjects setting.
Now what to do if you are using .NET 3.5? You basically need to reimplement BigInteger (obviously only taking what you need; there is a lot in there).
public class MyBigInteger
{
    uint[] _bits; // you need somewhere to store the value to an arbitrary length.
    ....
You also need to perform maths on these arrays. Here is the Equals method from BigInteger:
public bool Equals(BigInteger other)
{
    AssertValid();
    other.AssertValid();
    if (_sign != other._sign)
        return false;
    if (_bits == other._bits)
        // _sign == other._sign && _bits == null && other._bits == null
        return true;
    if (_bits == null || other._bits == null)
        return false;
    int cu = Length(_bits);
    if (cu != Length(other._bits))
        return false;
    int cuDiff = GetDiffLength(_bits, other._bits, cu);
    return cuDiff == 0;
}
It basically does cheap length and sign comparisons on the backing arrays and then, if those don't produce a difference, hands off to GetDiffLength.
internal static int GetDiffLength(uint[] rgu1, uint[] rgu2, int cu)
{
    for (int iv = cu; --iv >= 0; )
    {
        if (rgu1[iv] != rgu2[iv])
            return iv + 1;
    }
    return 0;
}
Which does the expensive check of looping through the arrays looking for a difference.
All your math will have to follow this pattern, and much of it can be lifted from the .NET source code.
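To make that pattern concrete, here is a minimal sketch of my own (not the .NET source) of a magnitude comparison over little-endian uint[] limbs, assuming no leading zero limbs; this is the kind of primitive the OP's subtractability check reduces to:
static int CompareMagnitudes(uint[] left, uint[] right)
{
    // cheap check first: with no leading zero limbs, more limbs means a bigger number
    if (left.Length != right.Length)
        return left.Length > right.Length ? 1 : -1;
    // expensive check: scan from the most significant limb down
    for (int i = left.Length - 1; i >= 0; i--)
    {
        if (left[i] != right[i])
            return left[i] > right[i] ? 1 : -1;
    }
    return 0; // equal magnitudes
}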
Googolplex and 2GB:
Here the 2GB limit becomes a problem, because you would need an object of about 3.867×10^90 gigabytes. This is the point where you give up, or get clever and store numbers as powers at the cost of not being able to represent most of them. *2
If you moderate your expectations, it doesn't actually change the maths of BigInteger to split _bits into multiple jagged arrays *1. You change the cheap checks a bit: rather than checking the size of the array, you check the number of subarrays and then the size of the last one. Then the loop needs to be a bit (but not much) more complex, in that it does elementwise array comparison for each sub-array. There are other changes as well, but it's by no means impossible and gets you out of the 2GB limit (a rough sketch of the adjusted check follows the footnotes below).
*1 Note: use jagged arrays ([][]), not multidimensional arrays ([,]), which are still subject to the same limit.
*2 I.e. give up on precision and store a mantissa and an exponent. If you look at how floating point numbers are implemented, they can't represent all numbers between their max and min (as the number of real numbers in a range is 'bigger' than infinite); they make a complex trade-off between precision and range. If you want to do this, looking at float implementations will be a lot more useful than looking at integer representations like BigInteger.
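As a rough illustration of those adjusted cheap checks (my own sketch, assuming every full chunk has a fixed length, only the last sub-array may be shorter, and there are no leading zero chunks):
static int CompareShape(uint[][] a, uint[][] b)
{
    // more chunks means a bigger number
    if (a.Length != b.Length)
        return a.Length > b.Length ? 1 : -1;
    // same chunk count: a longer final (partial) chunk means a bigger number
    int lastA = a[a.Length - 1].Length;
    int lastB = b[b.Length - 1].Length;
    if (lastA != lastB)
        return lastA > lastB ? 1 : -1;
    return 0; // same shape; fall back to elementwise comparison, chunk by chunk
}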

Related

Checking whether a number contains numbers 1 to n as factors

I am trying to solve Project Euler Challenge 5: what is the smallest positive number that is evenly divisible by all the numbers from 1 to 20?
My problem is that my isMultiple() method doesn't return true when it should.
My Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Challenge_5
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(isMultiple(2520, 10)); // should return True
            Console.WriteLine(smallestMultiple(20)); // should return 232792560
            Console.ReadLine();
        }

        static int factorial(int n)
        {
            int product = 1;
            for (int i = 1; i <= n; i++)
            {
                product *= i;
            }
            return product; // returns the factorial of n, (n * n-1 * n-2 ... * 1)
        }

        static bool isMultiple(int number, int currentFactor)
        {
            bool returnBool = false;
            if (currentFactor == 1)
            {
                returnBool = true; // if all factors below largestFactor can divide into the number, returns true
            }
            else
            {
                if (number % currentFactor == 0)
                {
                    currentFactor--;
                    isMultiple(number, currentFactor);
                }
            }
            return returnBool;
        }

        static int smallestMultiple(int largestFactor)
        {
            for (int i = largestFactor; i < factorial(largestFactor); i += largestFactor) // goes through all values from largestFactor to largestFactor factorial
            {
                if (isMultiple(i, largestFactor))
                {
                    return i; // if the current number can be evenly divided by all factors, it gets returned
                }
            }
            return factorial(largestFactor); // if no numbers get returned, the factorial is the smallest multiple
        }
    }
}
I know there are much easier ways to solve this, but I want the program to be used to check the lowest multiple of the numbers from 1 to any number, not just 20.
Help would be much appreciated.
EDIT
Thanks to the help, I have fixed my code by changing line 42 from
isMultiple(number, currentFactor);
to
returnBool = isMultiple(number, currentFactor);
I also fixed the problem of not getting an accurate return value for smallestMultiple(20)
by changing some of the variables to long instead of int.
Your problem is that you forgot to use the output of isMultiple in your recursive part:
if (number % currentFactor == 0)
{
    currentFactor--;
    returnBool = isMultiple(number, currentFactor); // you need to save the value here
}
Without assigning returnBool there is no way of knowing if the inner isMultiple returned true or not.
Scott's answer is perfectly valid. Forgive me if I'm wrong, but it sounds like you're a student, so in the interest of education, I thought I'd give you some pointers for cleaning up your code.
When writing recursive functions, it's (in my opinion) usually cleaner to return the recursive call directly if possible, as well as returning base cases directly, rather than storing a value that you return at the end. (It's not always possible, in complicated cases where you have to make a recursive call, modify the return value, and then make another recursive call, but these are uncommon.)
This practice:
makes base cases very obvious, increasing readability, and forces you to consider all of your base cases
prevents you from forgetting to assign the return value, as you did in your original code
prevents potential bugs where you might accidentally do some erroneous additional processing that alters the result before you return it
reduces the number of nested if statements, increasing readability by reducing preceding whitespace
makes code flow much more obvious, increasing readability and the ability to debug
usually results in tail recursion, which is the best for performance: nearly as performant as iterative code
caveat: nowadays most compilers' optimizers will rejigger production code to take advantage of tail calls, but getting into the habit of writing tail-recursive code in the first place is good practice, especially as interpreted scripting languages (e.g. JavaScript), where such optimization is less possible by nature, are taking over the world
The other change I'd make is to remove currentFactor--; and move the subtraction into the recursive call itself. It increases readability, reduces the chance of side effects, and, in the case where you don't use tail-recursion, prevents you from altering a value that you later expect to be unaltered. In general, if you can avoid altering values passed into a function (as opposed to a procedure/void), you should.
Also, in this particular case, making this change removes up to 3 assembly instructions and possibly an additional value on the stack depending on how the optimizer handles it. In long running loops with large depths, this can make a difference*.
static bool isMultiple(int number, int currentFactor)
{
    if (currentFactor == 1)
    {
        // if all factors below largestFactor can divide into the number, return true
        return true;
    }
    if (number % currentFactor != 0)
    {
        return false;
    }
    return isMultiple(number, currentFactor - 1);
}
* A personal anecdote regarding deep recursive calls and performance...
A while back, I was writing a program in C++ to enumerate the best moves for all possible Connect-4 games. The maximum depth of the recursive search function was 42, and each depth had up to 7 recursive calls. Initial versions of the code had an estimated running time of 2 million years, and that was using parallelism. Those 3 additional instructions can make a HUGE difference, both for the sheer number of additional instructions and the number of L1 and L2 cache misses.
This algorithm just came to mind, so please correct me if I'm wrong.
Composite numbers are made by multiplying prime numbers (and prime numbers cannot be generated by multiplying any other numbers), so the result must be multiplied by all the primes. Some numbers, like 6, already divide our running product (as it has already been multiplied by 2 and 3), but for those that don't, multiplying by one prime factor (which is obviously less than the number itself, so it is already in our prime list) makes the product divisible by that number too. For example, when we get to 4, multiplying by 2 (a prime less than 4) is enough; 8 and 9 work the same way, ...
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 2; i <= Math.Sqrt(x); i++) // i++, not i += 2: stepping by 2 from 2 would skip odd divisors like 3
        if (x % i == 0) return false;
    return true;
}
int smallest = 1;
List<int> primes = new List<int>();
for (int i = 1; i <= 20; i++)
    if (IsPrime(i))
    {
        smallest *= i;
        primes.Add(i);
    }
    else if (smallest % i != 0)
        for (int j = 0; j < primes.Count; j++)
            if ((primes[j] * smallest) % i == 0)
            {
                smallest *= primes[j];
                break;
            }
Edit:
Since we already have the list of prime numbers, the best way to find out whether a number is prime would be:
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 0; i < primes.Count; i++)
        if (x % primes[i] == 0) return false;
    return true;
}

Unity formatting multiple numbers

So I'm a complete newb to Unity and C#, and I'm trying to make my first mobile incremental game. I know how to format a variable from, e.g., 1000 >>> 1k; however, I have several variables that can go up to a decillion+, so I imagine checking every variable's value separately up to a decillion+ would be quite inefficient. Being a newb, I'm not sure how to go about it; maybe a for loop or something?
EDIT: I'm checking if x is greater than a certain value. For example, if it's greater than 1,000, display 1k. If it's greater than 1,000,000, display 1m... etc.
This is my current code for checking whether x is greater than 1,000; however, I don't think copy-pasting this for other values would be very efficient:
if (totalCash > 1000)
{
    totalCashk = totalCash / 1000;
    totalCashTxt.text = "$" + totalCashk.ToString("F1") + "k";
}
So, I agree that copying code is not efficient. That's why people invented functions!
How about simply wrapping your formatting into a function, e.g. one named prettyCurrency?
So you can simply write:
totalCashTxt.text = prettyCurrency(totalCash);
Also, instead of writing a ton of ifs, you can use a base-10 logarithm to determine the number of digits. Example in pure C# below:
using System;

class Program
{
    // Very simple example; it will throw for numbers of 10^12 and above
    static readonly string[] suffixes = { "", "k", "M", "G" };

    static string prettyCurrency(long cash, string prefix = "$")
    {
        int k;
        if (cash == 0)
            k = 0; // log10 of 0 is not valid
        else
            k = (int)(Math.Log10(cash) / 3); // get the number of digits and divide by 3
        var divisor = Math.Pow(10, k * 3); // the actual scale we print in
        var text = prefix + (cash / divisor).ToString("F1") + suffixes[k];
        return text;
    }

    static void Main()
    {
        Console.WriteLine(prettyCurrency(0));
        Console.WriteLine(prettyCurrency(333));
        Console.WriteLine(prettyCurrency(3145));
        Console.WriteLine(prettyCurrency(314512455));
        Console.WriteLine(prettyCurrency(31451242545));
    }
}
OUTPUT:
$0.0
$333.0
$3.1k
$314.5M
$31.5G
Also, you might think about introducing a new type, which implements this function as its ToString() override (a rough sketch follows the edit below).
EDIT:
I forgot about 0 in the input; now it is fixed. And indeed, as @Draco18s said in his comment, neither int nor long will handle really big numbers, so you can either use an external library like BigInteger or switch to double, which loses precision as numbers become bigger and bigger (e.g. 1000000000000000.0 + 1 might equal 1000000000000000.0). If you choose the latter, you should change my function to handle numbers in the range (0.0, 1.0), for which log10 is negative.
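For the wrapper-type idea mentioned above, a rough sketch might look like this (Currency is a hypothetical name; it clamps at its largest suffix instead of throwing, and uses double with the precision caveat just noted):
using System;

public struct Currency
{
    private readonly double amount; // double trades precision for range, as noted above
    private static readonly string[] suffixes = { "", "k", "M", "G", "T" };

    public Currency(double amount) { this.amount = amount; }

    public override string ToString()
    {
        if (amount < 1000.0) return "$" + amount.ToString("F1");
        int k = (int)(Math.Log10(amount) / 3);
        if (k >= suffixes.Length) k = suffixes.Length - 1; // clamp instead of throwing
        return "$" + (amount / Math.Pow(10, k * 3)).ToString("F1") + suffixes[k];
    }
}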

Fermat primality test

I have tried to write code for the Fermat primality test, but apparently failed.
So, if I understood well: if p is prime, then ((a^p) - a) % p == 0 where p % a != 0.
My code seems to be OK, therefore most likely I misunderstood the basics. What am I missing here?
private bool IsPrime(int candidate)
{
    // checking if candidate = 0 || 1 || 2
    int a = candidate + 1; // candidate can't be a divisor of candidate + 1
    if ((Math.Pow(a, candidate) - a) % candidate == 0) return true;
    return false;
}
Reading the wikipedia article on the Fermat primality test, you must choose an a that is less than the candidate you are testing, not more.
Furthermore, as MattW commented, testing only a single a won't give you a conclusive answer as to whether the candidate is prime. You must test many possible values of a before you can decide that a number is probably prime. And even then, some numbers may appear to be prime but actually be composite.
Your basic algorithm is correct, though you will have to use a larger data type than int if you want to do this for non-trivial numbers.
You should not implement the modular exponentiation in the way that you did, because the intermediate result is huge. Here is the square-and-multiply algorithm for modular exponentiation:
function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x, e := (x*b) % m, e-1
        else
            b, e := (b*b) % m, e//2
    return x
As an example, 437^13 (mod 1741) = 819. If you use the algorithm shown above, no intermediate result will be greater than 1740 * 1740 = 3027600. But if you perform the exponentiation first, the intermediate result of 437^13 is 21196232792890476235164446315006597, which you probably want to avoid.
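Here is a direct C# rendering of that pseudocode, as a sketch; long is sufficient as long as the modulus fits in 32 bits, since no intermediate exceeds m*m:
static long PowerMod(long b, long e, long m)
{
    long x = 1;
    b %= m;
    while (e > 0)
    {
        if (e % 2 == 1)
        {
            x = (x * b) % m; // multiply step
            e--;
        }
        else
        {
            b = (b * b) % m; // square step
            e /= 2;
        }
    }
    return x;
}
// PowerMod(437, 13, 1741) returns 819, matching the example above.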
Even with all of that, the Fermat test is imperfect. There are some composite numbers, the Carmichael numbers, that will always report prime no matter what witness you choose. Look for the Miller-Rabin test if you want something that will work better. I modestly recommend this essay on Programming with Prime Numbers at my blog.
You are dealing with very large numbers and trying to store them in doubles, which are only 64 bits.
The double will do the best it can to hold your number, but you are going to lose some accuracy.
An alternative approach:
Remember that the mod operator can be applied multiple times, and still give the same result.
So, to avoid getting massive numbers you could apply the mod operator during the calculation of your power.
Something like:
private bool IsPrime(int candidate)
{
    // checking if candidate = 0 || 1 || 2
    int a = candidate - 1; // candidate can't be a divisor of candidate - 1
    int result = 1;
    for (int i = 0; i < candidate; i++)
    {
        result = result * a; // note: this product can overflow int for candidates above roughly 46,000
        // Notice that without the following line,
        // this method is essentially the same as your own.
        // All this line does is keep the numbers small and manageable.
        result = result % candidate;
    }
    result -= a;
    return result == 0;
}

Fastest way to search a list of names in C#

I have a list of perhaps 100,000 strings in memory in my application. I need to find the top 20 strings that contain a certain keyword (case insensitive). That's easy to do; I just run the following LINQ query.
from s in stringList
where s.ToLower().Contains(searchWord.ToLower())
select s
However, I have a distinct feeling that I could do this much faster, and I need to find a way to do so, because I look this list up multiple times per second.
Finding substrings (not complete matches) is surprisingly hard. There is nothing built in to help you with this. I suggest you look into suffix tree data structures, which can be used to find substrings efficiently.
You can pull searchWord.ToLower() out to a local variable to save tons of string operations, btw. You can also pre-calculate the lower-case version of stringList. If you can't precompute, at least use s.IndexOf(searchWord, StringComparison.InvariantCultureIgnoreCase) != -1. This saves on expensive ToLower calls.
You can also slap an .AsParallel on the query.
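Putting those two suggestions together, the query might look like this (a sketch; with AsParallel the order of results is not guaranteed unless you also call AsOrdered):
var results = stringList
    .AsParallel()
    .Where(s => s.IndexOf(searchWord, StringComparison.InvariantCultureIgnoreCase) != -1)
    .Take(20)   // the "top 20" the OP asked for
    .ToList();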
Another option, although it would require a fair amount of memory, would be to precompute something like a suffix array (a list of positions within the strings, sorted by the strings to which they point).
http://en.wikipedia.org/wiki/Suffix_array
This would be most feasible if the list of strings you're searching against is relatively static. The entire list of string indexes could be stored in a single array of tuples (indexOfString, positionInString), upon which you would perform a binary search using String.Compare(keyword, 0, target, targetPos, keyword.Length).
So if you had 100,000 strings of average length 20, you would need 100,000 * 20 * 2*sizeof(int) of memory for the structure. You could cut that in half by packing both indexOfString and positionInString into a single 32-bit int, for example with positionInString in the lowest 12 bits and indexOfString in the remaining upper bits. You'd just have to do a little bit-fiddling to get the two values back out. It's important to note that the structure contains no strings or substrings itself; the strings you're searching against exist only once.
This would basically give you a complete index, and allow finding any substring very quickly (binary search over the index the suffix array represents), with a minimum of actual string comparisons.
If memory is dear, a simple optimization of the original brute force algorithm would be to precompute a dictionary of unique chars and assign ordinal numbers to represent each. Then precompute a bit array for each string with the bits set for each unique char contained within the string. Since your strings are relatively short, there should be a fair amount of variability in the resulting BitArrays (it wouldn't work well if your strings were very long). You then simply compute the BitArray of your search keyword, and only search for the keyword in those strings where keywordBits & targetBits == keywordBits. If your strings are preconverted to lower case and are just the English alphabet, the BitArray would likely fit within a single int. So this would require a minimum of additional memory, be simple to implement, and allow you to quickly filter out strings within which you will definitely not find the keyword. This might be a useful optimization, since string searches are fast but you have so many of them to do using the brute force search.
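A sketch of that bit-filter, assuming lower-cased English-only strings so the mask fits in a single int:
static int LetterMask(string s)
{
    int mask = 0;
    foreach (char c in s)
        if (c >= 'a' && c <= 'z')
            mask |= 1 << (c - 'a'); // one bit per distinct letter
    return mask;
}
// Precompute masks for the list once; then for a keyword with mask kw,
// only run the real substring search on strings where (masks[i] & kw) == kw.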
EDIT For those interested, here is a basic implementation of the initial solution I proposed. I ran tests using 100,000 randomly generated strings of lengths described by the OP. Although it took around 30 seconds to construct and sort the index, once made, the speed of searching for keywords 3000 times was 49,805 milliseconds for brute force, and 18 milliseconds using the indexed search, so a couple thousand times faster. If you rarely build the list, then my simple, but relatively slow method of initially building the suffix array should be sufficient. There are smarter ways to build it that are faster, but would require more coding than my basic implementation below.
// little test console app
static void Main(string[] args) {
    var list = new SearchStringList(true);
    list.Add("Now is the time");
    list.Add("for all good men");
    list.Add("Time now for something");
    list.Add("something completely different");
    while (true) {
        string keyword = Console.ReadLine();
        if (keyword.Length == 0) break;
        foreach (var pos in list.FindAll(keyword)) {
            Console.WriteLine(pos.ToString() + " => " + list[pos.ListIndex]);
        }
    }
}
~~~~~~~~~~~~~~~~~~
// file for the class that implements a simple suffix array
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;

namespace ConsoleApplication1 {
    public class SearchStringList {
        private List<string> strings = new List<string>();
        private List<StringPosition> positions = new List<StringPosition>();
        private bool dirty = false;
        private readonly bool ignoreCase = true;

        public SearchStringList(bool ignoreCase) {
            this.ignoreCase = ignoreCase;
        }

        public void Add(string s) {
            if (s.Length > 255) throw new ArgumentOutOfRangeException("string too big.");
            this.strings.Add(s);
            this.dirty = true;
            for (byte i = 0; i < s.Length; i++) this.positions.Add(new StringPosition(strings.Count - 1, i));
        }

        public string this[int index] { get { return this.strings[index]; } }

        public void EnsureSorted() {
            if (dirty) {
                this.positions.Sort(Compare);
                this.dirty = false;
            }
        }

        public IEnumerable<StringPosition> FindAll(string keyword) {
            var idx = IndexOf(keyword);
            while ((idx >= 0) && (idx < this.positions.Count)
                && (Compare(keyword, this.positions[idx]) == 0)) {
                yield return this.positions[idx];
                idx++;
            }
        }

        private int IndexOf(string keyword) {
            EnsureSorted();
            // binary search
            // When the keyword appears multiple times, this should
            // point to the first match in positions. The following
            // positions could be examined for additional matches
            int minP = 0;
            int maxP = this.positions.Count - 1;
            while (maxP > minP) {
                int midP = minP + ((maxP - minP) / 2);
                if (Compare(keyword, this.positions[midP]) > 0) {
                    minP = midP + 1;
                } else {
                    maxP = midP;
                }
            }
            if ((maxP == minP) && (Compare(keyword, this.positions[minP]) == 0)) {
                return minP;
            } else {
                return -1;
            }
        }

        private int Compare(StringPosition pos1, StringPosition pos2) {
            int len = Math.Max(this.strings[pos1.ListIndex].Length - pos1.StringIndex, this.strings[pos2.ListIndex].Length - pos2.StringIndex);
            return String.Compare(strings[pos1.ListIndex], pos1.StringIndex, this.strings[pos2.ListIndex], pos2.StringIndex, len, ignoreCase);
        }

        private int Compare(string keyword, StringPosition pos2) {
            return String.Compare(keyword, 0, this.strings[pos2.ListIndex], pos2.StringIndex, keyword.Length, this.ignoreCase);
        }

        // Packs index of string, and position within string, into a single int. This is
        // set up for strings no greater than 255 bytes. If longer strings are desired,
        // the code for the constructor, and extracting ListIndex and StringIndex, would
        // need to be modified accordingly, taking bits from ListIndex and using them
        // for StringIndex.
        public struct StringPosition {
            public static StringPosition NotFound = new StringPosition(-1, 0);
            private readonly int position;

            public StringPosition(int listIndex, byte stringIndex) {
                this.position = (listIndex < 0) ? -1 : (listIndex << 8) | stringIndex; // removed a stray double assignment here
            }

            public int ListIndex { get { return (this.position >= 0) ? (this.position >> 8) : -1; } }
            public byte StringIndex { get { return (byte)(this.position & 0xFF); } }

            public override string ToString() {
                return ListIndex.ToString() + ":" + StringIndex;
            }
        }
    }
}
There's one approach that would be a lot faster. But it would mean looking for exact word matches, rather than using the Contains functionality.
Basically, if you have the memory for it you could create a Dictionary of words which also reference some sort of ID (or IDs) for the strings in which the word is found.
So the Dictionary might be of type <string, List<int>>. The benefit here of course is that you're consolidating a lot of words into a smaller collection. And, the Dictionary is very fast with lookups since it's built on a hash table.
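A rough sketch of building such an index (exact word matches only; assumes words are separated by single spaces):
var index = new Dictionary<string, List<int>>(StringComparer.OrdinalIgnoreCase);
for (int id = 0; id < stringList.Count; id++)
{
    foreach (var word in stringList[id].Split(' '))
    {
        List<int> ids;
        if (!index.TryGetValue(word, out ids))
            index[word] = ids = new List<int>();
        ids.Add(id); // remember which strings contain this word
    }
}
// Lookup is then a single hash probe:
// List<int> matches;
// index.TryGetValue(searchWord, out matches);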
Now if this isn't what you're looking for you might search for in-memory full-text searching libraries. SQL Server supports full-text searching using indexing to speed up the process beyond traditional wildcard searches. But a pure in-memory solution would surely be faster. This still may not give you the exact functionality of a wildcard search, however.
In that case, what you need is a reverse (inverted) index.
If you are willing to pay, you can use a database-specific full-text search index, tuning the indexing to index on every subset of words.
Alternatively, you can use a very successful open source project that can achieve the same thing.
You need to pre-index the strings using a tokenizer and build the reverse index file. We had a similar use case in Java where we needed very fast autocomplete over a big data set.
You can take a look at Lucene.NET, which is a port of Apache Lucene (in Java).
If you are willing to ditch LINQ, you can use NHibernate Search. (wink).
Another option is to implement the pre-indexing in memory yourself, with preprocessing that lets you bypass unneeded scanning; take a look at the Knuth-Morris-Pratt algorithm (a sketch follows below).
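For reference, a compact sketch of a KMP substring search (build the prefix table once per keyword, then scan each string in linear time):
static int[] BuildPrefixTable(string pattern)
{
    var table = new int[pattern.Length];
    int k = 0;
    for (int i = 1; i < pattern.Length; i++)
    {
        while (k > 0 && pattern[i] != pattern[k]) k = table[k - 1];
        if (pattern[i] == pattern[k]) k++;
        table[i] = k; // length of the longest proper prefix that is also a suffix
    }
    return table;
}

static bool ContainsKmp(string text, string pattern, int[] table)
{
    if (pattern.Length == 0) return true;
    int k = 0;
    for (int i = 0; i < text.Length; i++)
    {
        while (k > 0 && text[i] != pattern[k]) k = table[k - 1];
        if (text[i] == pattern[k]) k++;
        if (k == pattern.Length) return true; // full match ending at position i
    }
    return false;
}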

Looking for a way to optimize this algorithm for parsing a very large string

The following class parses through a very large string (an entire novel of text) and breaks it into consecutive 4-character strings that are stored as Tuples. Each tuple can then be assigned a probability based on a calculation. I am using this as part of a Monte Carlo / genetic algorithm to train the program to recognize a language based on syntax only (just the character transitions).
I am wondering if there is a faster way of doing this. It takes about 400ms to look up the probability of any given 4-character tuple. The relevant method _Probability() is at the end of the class.
This is a computationally intensive problem related to another post of mine: Algorithm for computing the plausibility of a function / Monte Carlo Method
Ultimately I'd like to store these values in a 4D matrix. But given that there are 26 letters in the alphabet, that would be a HUGE task (26x26x26x26). If I take only the first 15,000 characters of the novel then performance improves a ton, but my data isn't as useful.
Here is the method that parses the text 'source':
private List<Tuple<char, char, char, char>> _Parse(string src)
{
    var _map = new List<Tuple<char, char, char, char>>();
    for (int i = 0; i < src.Length - 3; i++)
    {
        int j = i + 1;
        int k = i + 2;
        int l = i + 3;
        _map.Add(new Tuple<char, char, char, char>(src[i], src[j], src[k], src[l]));
    }
    return _map;
}
And here is the _Probability method:
private double _Probability(char x0, char x1, char x2, char x3)
{
    var subset_x0 = map.Where(x => x.Item1 == x0);
    var subset_x0_x1_following = subset_x0.Where(x => x.Item2 == x1);
    var subset_x0_x2_following = subset_x0_x1_following.Where(x => x.Item3 == x2);
    var subset_x0_x3_following = subset_x0_x2_following.Where(x => x.Item4 == x3);

    int count_of_x0 = subset_x0.Count();
    int count_of_x1_following = subset_x0_x1_following.Count();
    int count_of_x2_following = subset_x0_x2_following.Count();
    int count_of_x3_following = subset_x0_x3_following.Count();

    decimal p1;
    decimal p2;
    decimal p3;
    if (count_of_x0 <= 0 || count_of_x1_following <= 0 || count_of_x2_following <= 0 || count_of_x3_following <= 0)
    {
        p1 = e;
        p2 = e;
        p3 = e;
    }
    else
    {
        p1 = (decimal)count_of_x1_following / (decimal)count_of_x0;
        p2 = (decimal)count_of_x2_following / (decimal)count_of_x1_following;
        p3 = (decimal)count_of_x3_following / (decimal)count_of_x2_following;
        p1 = (p1 * 100) + e;
        p2 = (p2 * 100) + e;
        p3 = (p3 * 100) + e;
    }
    // more calculations omitted
    return _final;
}
EDIT - I'm providing more details to clear things up:
1) Strictly speaking I've only worked with English so far, but it's true that different alphabets will have to be considered. Currently I only want the program to recognize English, similar to what's described in this paper: http://www-stat.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf
2) I am calculating the probabilities of n-tuples of characters where n <= 4. For instance if I am calculating the total probability of the string "that", I would break it down into these independent tuples and calculate the probability of each individually first:
[t][h]
[t][h][a]
[t][h][a][t]
[t][h] is given the most weight, then [t][h][a], then [t][h][a][t]. Since I am not just looking at the 4-character tuple as a single unit, I wouldn't be able to just divide the instances of [t][h][a][t] in the text by the total no. of 4-tuples in the text.
The value assigned to each 4-tuple can't overfit to the text, because by chance many real English words may never appear in the text and they shouldn't get disproportionately low scores. Emphasizing first-order character transitions (2-tuples) ameliorates this issue. Moving to the 3-tuple and then the 4-tuple just refines the calculation.
I came up with a Dictionary that simply tallies the count of how often each tuple occurs in the text (similar to what Vilx suggested), rather than repeating identical tuples, which is a waste of memory. That got me from about ~400ms per lookup to about ~40ms each, which is a pretty great improvement. I still have to look into some of the other suggestions, however.
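For what it's worth, a sketch of the tally described above (one count per distinct 4-tuple instead of a list full of duplicates):
var counts = new Dictionary<Tuple<char, char, char, char>, int>();
for (int i = 0; i < src.Length - 3; i++)
{
    var key = new Tuple<char, char, char, char>(src[i], src[i + 1], src[i + 2], src[i + 3]);
    int n;
    counts.TryGetValue(key, out n); // n stays 0 when the key is new
    counts[key] = n + 1;
}
// A probability lookup is then a couple of dictionary probes instead of a full scan.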
In your probability method you are iterating the map eight times. Each of your Wheres iterates the entire list, and so does each Count. Adding a .ToList() at the end would (potentially) speed things up. That said, I think your main problem is that the structure you've chosen to store the data in is not suited to the purpose of the probability method. You could create a one-pass version where the structure you store your data in calculates the tentative distribution on insert. That way, when you're done with the insert (which shouldn't be slowed down too much), you're done; or, as in the code below, you can do a cheap calculation of the probability when you need it.
As an aside, you might want to take punctuation and whitespace into account. The first letter/word of a sentence and the first letter of a word give a clear indication of what language a given text is written in; by taking punctuation characters and whitespace as part of your distribution, you include those characteristics of the sample data. We did that some years back. Doing so, we showed that using just three characters was almost as exact as using more (we tested up to 7); we had no failures with three on our test data, and "almost as exact" is an assumption, given that there must be some weird text where the lack of information would yield an incorrect result. But the speed of three letters made that the best choice.
EDIT
Here's an example of how I think I would do it in C#
class TextParser {
    public Node Parse(string src) { // made public so it matches the usage shown below
        var top = new Node(null);
        for (int i = 0; i < src.Length - 3; i++) {
            var first = src[i];
            var second = src[i + 1];
            var third = src[i + 2];
            var fourth = src[i + 3];
            var firstLevelNode = top.AddChild(first);
            var secondLevelNode = firstLevelNode.AddChild(second);
            var thirdLevelNode = secondLevelNode.AddChild(third);
            thirdLevelNode.AddChild(fourth);
        }
        return top;
    }
}
public class Node {
    private readonly Node _parent;
    private readonly Dictionary<char, Node> _children = new Dictionary<char, Node>();
    private int _count;

    public Node(Node parent) {
        _parent = parent;
    }

    public Node AddChild(char value) {
        if (!_children.ContainsKey(value)) {
            _children.Add(value, new Node(this));
        }
        var levelNode = _children[value];
        levelNode._count++;
        return levelNode;
    }

    public decimal Probability(string substring) {
        var node = this;
        foreach (var c in substring) {
            if (!node.Contains(c))
                return 0m;
            node = node[c];
        }
        // note: dividing by the number of distinct siblings gives a relative
        // score rather than a strict probability; dividing by the parent's
        // total count would give a true conditional probability.
        return ((decimal)node._count) / node._parent._children.Count;
    }

    public Node this[char value] {
        get { return _children[value]; }
    }

    private bool Contains(char c) {
        return _children.ContainsKey(c);
    }
}
the usage would then be:
var top = new TextParser().Parse(src);
top.Probability("test");
I would suggest changing the data structure to make that faster...
I think a Dictionary<char,Dictionary<char,Dictionary<char,Dictionary<char,double>>>> would be much more efficient, since you would be accessing each "level" (Item1...Item4) when calculating, and you could cache the result in the innermost Dictionary so next time you don't have to calculate at all (a sketch follows below).
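A sketch of building that nested structure (GetOrAdd is a small hypothetical helper, and the innermost level stores counts here, from which the probabilities can be derived or cached):
static TValue GetOrAdd<TKey, TValue>(Dictionary<TKey, TValue> d, TKey key) where TValue : new()
{
    TValue v;
    if (!d.TryGetValue(key, out v)) d[key] = v = new TValue();
    return v;
}

// Build pass over the source text:
var counts = new Dictionary<char, Dictionary<char, Dictionary<char, Dictionary<char, int>>>>();
for (int i = 0; i < src.Length - 3; i++)
{
    var l2 = GetOrAdd(counts, src[i]);
    var l3 = GetOrAdd(l2, src[i + 1]);
    var l4 = GetOrAdd(l3, src[i + 2]);
    int n;
    l4.TryGetValue(src[i + 3], out n);
    l4[src[i + 3]] = n + 1; // count of this exact 4-character transition
}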
Ok, I don't have time to work out the details, but this really calls for
neural classifier nets (just take any off the shelf; even the Controllable Regex Mutilator would do the job with way more scalability): heuristics over brute force
you could use tries (Patricia tries, a.k.a. radix trees) to make a space-optimized, sparse version of your data structure (the Dictionary of Dictionaries of Dictionaries of Dictionaries... looks like an approximation of this to me)
There's not much you can do with the parse function as it stands. However, the tuples appear to be four consecutive characters from a large body of text. Why not just replace the tuple with an int and then use the int to index the large body of text when you need the character values? Your tuple-based method is effectively consuming four times the memory the original text would use, and since memory is usually the bottleneck to performance, it's best to use as little as possible (a short sketch follows below).
You then try to find the number of matches in the body of text against a set of characters. I wonder how a straightforward linear search over the original body of text would compare with the LINQ statements you're using? The .Where will be doing memory allocation (which is a slow operation) and the LINQ statement will have parsing overhead (but the compiler might do something clever here). Having a good understanding of the search space will make it easier to find an optimal algorithm.
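A sketch of the int-index idea: position i stands in for the 4-gram src[i..i+3], so no tuples are allocated at all, and counting is a linear scan of the original text.
// True when the 4-gram starting at position i matches the four given characters.
static bool SameGram(string src, int i, char x0, char x1, char x2, char x3)
{
    return src[i] == x0 && src[i + 1] == x1 && src[i + 2] == x2 && src[i + 3] == x3;
}
// Count matches by scanning i from 0 to src.Length - 4 inclusive.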
But then, as has been mentioned in the comments, using a 26^4 matrix would be the most efficient. Parse the input text once and create the matrix as you parse. You'd probably want a set of dictionaries:
SortedDictionary <int,int> count_of_single_letters; // key = single character
SortedDictionary <int,int> count_of_double_letters; // key = char1 + char2 * 32
SortedDictionary <int,int> count_of_triple_letters; // key = char1 + char2 * 32 + char3 * 32 * 32
SortedDictionary <int,int> count_of_quad_letters; // key = char1 + char2 * 32 + char3 * 32 * 32 + char4 * 32 * 32 * 32
Finally, a note on data types. You're using the decimal type. This is not an efficient type, as there is no direct mapping to a CPU-native type and there is overhead in processing the data. Use a double instead; I think the precision will be sufficient. The most precise way would be to store the probability as two integers, the numerator and the denominator, and then do the division as late as possible.
The best approach here is to use sparse storage and pruning, for example after every 10,000 characters. The best storage structure in this case is a prefix tree; it allows fast probability calculation, updating, and sparse storage. You can find more theory in this javadoc: http://alias-i.com/lingpipe/docs/api/com/aliasi/lm/NGramProcessLM.html
