C# Creating your own hash algorithm - 99 documents, 0.0001 collision?

Looking at Wolfram's collision-probability-versus-bits graph: with 99 documents, I'd need a 25.5-bit hash to have a 0.0001 chance of a collision.
I looked at CRC-24 and wondered whether it could be improved to use even fewer characters. I have a big list of characters that can be used for the hash: basically all Unicode characters except for 4 or 5.
Now how do you create your own hash algorithm based on a set of usable characters in C#?
EDIT:
Let me make the issue more precise: I have 99 strings. I want to cut them to a maximum length of 64 characters. This can create duplicates, but they need to stay unique while keeping their meaning. The idea was to create a hash as small as possible and replace the last characters of the truncated string with the hash of the original string. The hash, of course, should have a low probability of collision and be as short as possible. As I understand it, the more symbols the hash can use (a-z0-9, a-zA-Z0-9, or even all Unicode characters), the fewer characters it needs before collisions appear. I looked at SHA-1 (just trimming it) and CRC-32, but they don't use the "full potential" of Unicode characters.
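For illustration, here is a rough sketch of that idea in C#: hash the full string with a standard algorithm, keep about 26 bits of the digest, and encode those bits in whatever alphabet of allowed characters you choose. The 26-bit mask and the helper name are assumptions for the sketch, not a fixed recipe.

using System;
using System.Security.Cryptography;
using System.Text;

static string ShortHash(string input, string alphabet)
{
    // Hash the original (untruncated) string with any standard algorithm.
    using var sha1 = SHA1.Create();
    byte[] digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));

    // Keep 26 bits of the digest - roughly the 25.5 bits the Wolfram graph asks for.
    uint bits = BitConverter.ToUInt32(digest, 0) & ((1u << 26) - 1);

    // Encode those bits in base-(alphabet.Length) using the allowed characters.
    var sb = new StringBuilder();
    do
    {
        sb.Append(alphabet[(int)(bits % (uint)alphabet.Length)]);
        bits /= (uint)alphabet.Length;
    } while (bits > 0);
    return sb.ToString();
}

With a 1,000-symbol alphabet, 26 bits fit in ceil(26 / log2 1000) = 3 characters; with a-zA-Z0-9 (62 symbols) you need 5.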

Related

Run-Length Encoding assumptions

When implementing run-length encoding (RLE), can I assume that run lengths will fit in one byte, i.e. that runs are shorter than 256?
So there will not be a situation where there is a run like this
WWWBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB...
Where there are 256 B's, because you cannot represent that length in one byte, whereas you can represent the W's as 3W.
If not, should the Run be split into two Runs? How should this situation be handled? I couldn't find any information about this case.
Yes, you understand the situation correctly. The word used for counting the repetitions of a character is usually a byte, and the individual characters are usually also encoded as a byte each. If the input contains a repetition of, e.g., 300 b's, the encoding will be as follows:
255 (number of repetitions of the next character)
98 (ASCII value for b)
45 (number of repetitions of the next character)
98 (ASCII value for b)
In short, a run longer than 255 has to be split into two runs. That being said, the actual encoding depends on the specific implementation; it is also possible to use types other than bytes for counting the repetitions of characters.
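A minimal sketch of such a byte-oriented encoder, splitting long runs by capping each count at 255 (names are illustrative):

using System.Collections.Generic;

static byte[] RleEncode(byte[] input)
{
    var output = new List<byte>();
    int i = 0;
    while (i < input.Length)
    {
        byte value = input[i];
        int run = 1;
        // Extend the run, but cap it at 255 so the count fits in one byte.
        while (i + run < input.Length && input[i + run] == value && run < 255)
            run++;
        output.Add((byte)run); // repetition count
        output.Add(value);     // the repeated byte
        i += run;
    }
    return output.ToArray();
}

A run of 300 b's comes out as the two pairs (255, 98) and (45, 98), exactly as in the example above.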

Any 32 or 64 bit hash function?

I want to create a method in C# which will return a string of at most 10-12 characters. I have tried SHA-1 and MD5, but they are 160 and 128 bits respectively and generate 32-character (or longer) strings, which doesn't fulfill my requirements. Security is not the issue; I just need a small string that will remain unique.
You can truncate the string (the hash) to the length you want. You'll only make it weaker: as an extreme example, if you truncate it to one byte, you'll probably have a collision after about 16 elements are hashed, thanks to the birthday problem. Each part of a good hash is as "good" as every other part, so take the first x characters/bytes and live happy. See for example the discussion about this on the Security site, where there is an explanation of how secure a truncated hash will be.
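As a quick sketch, truncating SHA-1 to its first 5 bytes gives a 10-character hex string (Convert.ToHexString requires .NET 5+; on older versions BitConverter.ToString works instead):

using System;
using System.Security.Cryptography;
using System.Text;

static string ShortHash(string input)
{
    using var sha1 = SHA1.Create();
    byte[] full = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
    // Keep only the first 5 bytes (40 bits) and render them as 10 hex characters.
    return Convert.ToHexString(full, 0, 5);
}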

Algorithm to generate non word keys

I am on the mission to generate human friendly and memorable short keys, requirements are:
about 4 to 6 characters long, using the case-insensitive Base36 [0-9A-Z] charset;
the lowest possible percentage of meaningful words coming up, swear words or not;
maximizing the number of usable keys within the 6-character maximum.
The list doesn't have to be totally random; it can be sequential.
Are there any nice functions or algorithms out there to do this? Thank you.
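One hedged sketch of such a generator: encode a sequential counter in a reduced, case-insensitive alphabet with the vowels removed, which keeps keys short while making meaningful (or swear) words far less likely. The alphabet below is an illustrative assumption, not strict Base36:

using System.Text;

const string Alphabet = "0123456789BCDFGHJKLMNPQRSTVWXZ"; // digits + consonants only

static string ToKey(long n, int minLength = 4)
{
    var sb = new StringBuilder();
    do
    {
        sb.Insert(0, Alphabet[(int)(n % Alphabet.Length)]);
        n /= Alphabet.Length;
    } while (n > 0);
    return sb.ToString().PadLeft(minLength, '0'); // pad short keys to the minimum length
}

Dropping the vowels shrinks the key space from 36^6 to 30^6 (still about 729 million keys), a common trade-off for word-free codes.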

Generating a unique 15-digit PIN code from a 10-digit number

I want to create PIN codes and serial numbers for scratch cards. I have already generated unique 10-digit numbers; now I want to turn each 10-digit number into a 16-digit number (with a check digit at the end). The function that does this should be reversible, so that by seeing the 16-digit number I can check whether it is valid or not (if it was not generated by me, it should not be valid).
This is how I have generated the unique 10-digit random codes:
Guid PinGuid;
byte[] Arr;
UInt32 PINnum = 0;
// Draw GUIDs until the first four bytes happen to form a 10-digit number.
while (PINnum.ToString().Length != 10)
{
    PinGuid = Guid.NewGuid();
    Arr = PinGuid.ToByteArray();
    PINnum = BitConverter.ToUInt32(Arr, 0);
}
return PINnum.ToString();
I would be grateful if you could give me a hint on how to do it.
First off, I would avoid GUIDs, since some parts of a GUID are reserved for special purposes. That means those areas of the GUID may not be allocated uniformly on creation, so you may not get exactly the 10 digits of randomness you plan on. Also, since your loop just waits for the GUID-derived number to reach the right size, it could be done more efficiently.
10 digits means 10^10 possibilities, and log2(10) ≈ 3.322, so you need about 33 bits for a 10-digit number. Since you want your number to be exactly 10 digits, you can either pad numbers less than 10^10 with leading zeroes, or generate only numbers between 10^9 and 10^10 - 1.
If you take the latter case, you need 9*10^9 numbers in your space, covering everything from 1 followed by nine zeroes up to 9 followed by nine 9s.
Then you would like to convert this space of numbers into a larger one: expand it by five digits and include one more digit as a check digit.
Pick a check digit function as anything you like. You could simply sum (mod 10) the original 10 digits, or choose something more complicated.
Presumably you do not want people to be able to generate valid instances. So if you are really serious about your security, you should modify any suggestions you get from the net before deploying them.
I would do something along the lines of:
Generate a uniform 10-digit number with no leading zeroes by
randomTenDigits = 10**9 + rand(9*10**9)
Using an encryption scheme (like AES-256, or even RSA or ElGamal, since their slower speed will not matter much given the small input length), encrypt this 10-digit number using a secret key only you and others you trust are aware of. Perhaps you can concatenate the 10-digit number 10 times, then concatenate that result with some other secret that you choose, and finally encrypt this expanded secret of which the 10-digit number is a part.
Take some choice 5 digits (around 17 bits) of the resulting ciphertext, and append these to your 10-digit number.
Generate 1 digit of check digit by whatever method you desire.
As you will note, the real security of this scheme comes not from the check digit but from the secret key you use to authenticate the 16-digit number. The test you will use to authenticate it is: does the given 10-digit number, when concatenated with the other secrets I hold, encrypt under the secret key only I know to the given 5-digit number presented with it?
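A hedged sketch of that scheme in C#; the helper name, the ECB choice (needed so verification can simply re-encrypt and compare), and the digit-extraction details are illustrative assumptions:

using System;
using System.Security.Cryptography;
using System.Text;

static string MakeSecuredNumber(string tenDigits, byte[] secretKey)
{
    // Encrypt the 10-digit number under a key only the issuer knows.
    using var aes = Aes.Create();
    aes.Key = secretKey;           // 16, 24 or 32 bytes
    aes.Mode = CipherMode.ECB;     // deterministic, so verification can recompute
    aes.Padding = PaddingMode.PKCS7;
    using var enc = aes.CreateEncryptor();
    byte[] plain = Encoding.ASCII.GetBytes(tenDigits);
    byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);

    // Take ~17 bits of the ciphertext and map them to 5 decimal digits.
    string auth = (BitConverter.ToUInt32(cipher, 0) % 100000u).ToString("D5");

    // Append a simple mod-10 check digit over the first 15 digits.
    int sum = 0;
    foreach (char c in tenDigits + auth) sum += c - '0';
    return tenDigits + auth + (sum % 10);
}

Verification is just recomputation: take the first 10 digits of a presented number, run them through the same function with the same key, and compare the result to what was presented.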
The difficulty for an attacker of forging one of your numbers depends on the difficulty of:
discovering your secret keys and other info,
discovering which method of encryption you use,
discovering which part of the resulting ciphertext you emit for the 5-digit secret, or
simply brute-forcing the 5 digits to discover the correct pairing.
Since 5 digits is not a big space to search, I would suggest instead generating larger numbers; 10 or 16 digits is not really a huge space to search either. So instead of digits I would use upper and lower case letters plus digits plus space and full stop, giving you 64 letters in your alphabet. With 16 such characters you get around 96 bits of security.
However, if numbers are non-negotiable and the size of 10 digits for your base space is also non-negotiable, doing it this way is probably the most secure. You may be able to set up your system to deter people from brute-forcing it, though you should consider what happens if someone acquires a piece of your hardware through a vendor. I believe it is easier to design security in than to design in a mechanism for detecting people trying to brute-force queries against your system.
However, if serious dough is on the line (like millions), the security you employ should really be first class: equivalent to the kind of security you would employ to protect the PIN of a million-dollar bank account. The more secure you are, the longer you can carry on your business with credibility and trust.
Along these lines, I would suggest increasing the size of your secrets to make it infeasible for someone to simply try all combinations and forge a valid number, and in particular thinking about how to design your system to make it difficult to break for people with lots of skill and motivation (money). You really can't be too careful.
I would keep it simple. Put PINnum.ToString() into a buffer and place a filler digit at every fifth position. The first four fillers could be random garbage and the last could be a check digit, or you could make each filler a check digit for its section. Here is an example.
string buf = PINnum.ToString();
int chkDigit = ComputeCheckDigit(buf);             // whatever check-digit function you choose (hypothetical helper)
Random rnd = new Random();
string fillBuf = rnd.Next(1000, 10000).ToString(); // four random filler digits
// Interleave: two PIN digits, one filler digit, repeating, with the check digit last.
return "" + buf[0] + buf[1] + fillBuf[0] + buf[2] + buf[3] + fillBuf[1]
          + buf[4] + buf[5] + fillBuf[2] + buf[6] + buf[7] + fillBuf[3]
          + buf[8] + buf[9] + chkDigit;
It's a rather simple approach, but if your security needs aren't at the highest level, it might suffice.

Hashtable/Dictionary collisions

Using the standard English letters and underscore only, how many characters can be used at a maximum without causing a potential collision in a hashtable/dictionary?
So strings like:
blur
Blur
b
Blur_The_Shades_Slightly_With_A_Tint_Of_Blue
...
There's no guarantee that you won't get a collision between single letters.
You probably won't, but the algorithm used in string.GetHashCode isn't specified, and could change. (In particular it changed between .NET 1.1 and .NET 2.0, which burned people who assumed it wouldn't change.)
Note that hash code collisions won't stop well-designed hashtables from working - you should still be able to get the right values out, it'll just potentially need to check more than one key using equality if they've got the same hash code.
Any dictionary which relies on hash codes being unique is missing important information about hash codes, IMO :) (Unless it's operating under very specific conditions where it absolutely knows they'll be unique, i.e. it's using a perfect hash function.)
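To see that in action, here is a toy key type whose hash code always collides; a Dictionary still returns the right values because it falls back to Equals within a bucket (the type is purely illustrative):

using System;
using System.Collections.Generic;

class BadKey
{
    public string Value;
    public override int GetHashCode() => 42; // every instance collides
    public override bool Equals(object o) => o is BadKey k && k.Value == Value;
}

var d = new Dictionary<BadKey, int>
{
    [new BadKey { Value = "blur" }] = 1,
    [new BadKey { Value = "Blur" }] = 2,
};
Console.WriteLine(d[new BadKey { Value = "blur" }]); // prints 1 despite the collisions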
Given a perfect hashing function (which you're not typically going to have, as others have mentioned), you can find the maximum possible number of characters that guarantees no two strings will produce a collision, as follows:
No. of unique hash codes available = 2^32 = 4294967296 (assuming a 32-bit integer is used for hash codes)
Size of character set = 2 * 26 + 1 = 53 (26 lower case plus 26 upper case letters in the Latin alphabet, plus underscore)
Then you must consider that a string of length l (or less) has a total of 54^l representations. Note that the base is 54 rather than 53 because the string can terminate after any character, adding an extra possibility per character - not that it greatly affects the result.
Taking the no. of unique hash codes as your maximum number of string representations, you get the following simple equation:
54 ^ l = 2 ^ 32
And solving it:
log2 (54 ^ l) = 32
l * log2 54 = 32
l = 32 / log2 54 ≈ 5.56
(Where log2 is the logarithm function of base 2.)
Since string lengths clearly can't be fractional, you take the integral part to give a maximum length of just 5. Very short indeed, but observe that this restriction would prevent even the remotest chance of a collision given a perfect hash function.
This is largely theoretical, however, as I've mentioned, and I'm not sure how much use it might be in the design consideration of anything. That said, hopefully it helps you understand the matter from a theoretical viewpoint, on top of which you can add the practical considerations (e.g. non-perfect hash functions, non-uniformity of distribution).
Universal Hashing
To calculate the probability of collisions of S strings of length L, with W bits per character, hashed to H bits with an optimal universal hash (1), you can calculate the collision probability based on a hash table of size (number of buckets) N.
First things first, we can assume an ideal hashtable implementation (2) that splits the H bits of the hash perfectly across the available buckets N (3). This means H becomes meaningless except as a limit on N.
W and L are simply the basis for an upper bound on S. For simpler maths, assume that strings of length less than L are padded to L with a special null character. If we are interested in the worst case, the bound is 54^L (26*2 + '_' + null); plainly this is a ludicrous number, and the actual number of entries is more useful than the character set and the length, so we will simply treat S as a variable in its own right.
We are left trying to put S items into N buckets.
This then becomes a very well known problem: the birthday paradox.
Solving this for various probabilities and numbers of buckets is instructive, but assuming we have 1 billion buckets (about 4GB of memory in a 32-bit system), we would need only 37K entries before we hit a 50% chance of there being at least one collision. Given that, trying to avoid any collisions at all in a hashtable becomes plainly absurd.
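That 37K figure follows from the standard birthday approximation p ≈ 1 - e^(-s(s-1)/(2n)); a one-liner makes it easy to check:

using System;

static double CollisionProbability(double items, double buckets) =>
    1.0 - Math.Exp(-items * (items - 1.0) / (2.0 * buckets));

// CollisionProbability(37_000, 1e9) returns ~0.496, the roughly 50% quoted above.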
All this does not mean that we should not care about the behaviour of our hash functions. Clearly these numbers assume ideal implementations; they are an upper bound on how good we can get. A poor hash function can give far worse collisions in some areas, or waste some of the possible 'space' by never or rarely using it, all of which can make hashes less than optimal and even degrade performance to something that looks like a list, but with much worse constant factors.
The .NET framework's implementation of the string's hash function is not great (in that it could be better) but is probably acceptable for the vast majority of users and is reasonably efficient to calculate.
An Alternative Approach: Perfect Hashing
If you wish, you can generate what are known as perfect hashes. This requires full knowledge of the input values in advance, however, so it is not often useful. In a similar vein to the maths above, we can show that even perfect hashing has its limits:
Recall the limit of 54^L strings of length L. However, we only have H bits (we shall assume 32), which is about 4 billion different numbers. So if you can have truly any string, and any number of them, then you have to satisfy:
54 ^ L <= 2 ^ 32
And solving it:
log2 (54 ^ L) <= 32
L * log2 54 <= 32
L <= 32 / log2 54 ≈ 5.56
Since string lengths clearly can't be fractional, you are left with a maximum length of just 5. Very short indeed.
If you know that you will only ever have a set of strings well below 4 billion in size, then perfect hashing lets you handle any value of L; but restricting the set of values can be very hard in practice, and you must know them all in advance, or degrade to what amounts to a database of string -> hash mappings that you add to as new strings are encountered.
For this exercise the universal hash is optimal, as we wish to minimise the probability of any collision; i.e., for any input, the probability of it having output x from a set of possibilities R is 1/R.
Note that doing an optimal job on the hashing (and the internal bucketing) is quite hard, but you should expect the built-in types to be reasonable, if not always ideal.
In this example I have avoided the question of closed vs. open addressing. This does have some bearing on the probabilities involved, but not significantly.
A hash algorithm isn't supposed to guarantee uniqueness. Given that there are far more potential strings (26^n for length n, even ignoring special characters, spaces, capitalization, non-English characters, etc.) than there are places in your hashtable, there's no way such a guarantee could be fulfilled. It's only supposed to guarantee a good distribution.
If your key is a string (e.g., in a Dictionary), then its GetHashCode() will be used. That's a 32-bit integer. Hashtable defaults to a load factor of 1 key per bucket and increases the number of buckets to maintain that load factor. So if you do see collisions, they should tend to occur around reallocation boundaries (and decrease shortly after a reallocation).
