I need to generate a unique id for files of up to 200-300 MB. The condition is that the algorithm should be quick; it should not take much time. I am selecting the files from a desktop and calculating a hash value like this:
HMACSHA256 myhmacsha256 = new HMACSHA256(key);
byte[] hashValue = myhmacsha256.ComputeHash(fileStream);
fileStream is a handle to the file, used to read its content. This method is going to take a lot of time for obvious reasons.
Does Windows generate a key for a file for its own bookkeeping that I could use directly?
Is there any other way to identify whether two files are the same, instead of matching file names, which is not very foolproof?
MD5.Create().ComputeHash(fileStream);
Alternatively, I'd suggest looking at this rather similar question.
How about generating a hash from the info that's readily available from the file itself? I.e., concatenate:
File Name
File Size
Created Date
Last Modified Date
and create your own?
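For example, here is a minimal sketch of that idea in C# (the MetadataId class name is mine, and SHA-256 is just one possible choice of hash):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class MetadataId
{
    // Builds an identifier from metadata only; the file content is never read.
    public static string FromFile(string path)
    {
        var info = new FileInfo(path);
        string combined = info.Name + "|" + info.Length + "|" +
                          info.CreationTimeUtc.Ticks + "|" +
                          info.LastWriteTimeUtc.Ticks;

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(combined));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}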
Computing hashes and comparing them requires reading both files completely. My suggestion is to first check the file sizes; if they are identical, then go through the files byte by byte.
If you want a "quick and dirty" check, I would suggest looking at CRC-32. It is extremely fast (the algorithm simply involves doing XOR with table lookups), and if you aren't too concerned about collision resistance, a combination of the file size and the CRC-32 checksum over the file data should be adequate. 28.5 bits are required to represent the file size (that gets you to 379M bytes), which means you get a checksum value of effectively just over 60 bits. I would use a 64-bit quantity to store the file size, for future proofing, but 32 bits would work too in your scenario.
If collision resistance is a consideration, then you pretty much have to use one of the tried-and-true-yet-unbroken cryptographic hash algorithms. However, I would still concur with what Devils child wrote and also include the file size as a separate (readily accessible) part of the hash; if the sizes don't match, there is no chance that the file content can be the same, so in that case the computationally intensive hash calculation can be skipped.
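As a rough illustration of the size-plus-CRC-32 idea: classic .NET Framework has no built-in CRC-32 (newer .NET offers one in the System.IO.Hashing package), so the sketch below includes a small table-driven implementation and packs the file size into the high 32 bits of a 64-bit identifier. The QuickId name is mine:

using System.IO;

static class QuickId
{
    static readonly uint[] Table = BuildTable();

    static uint[] BuildTable()
    {
        var table = new uint[256];
        for (uint i = 0; i < 256; i++)
        {
            uint c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) != 0 ? 0xEDB88320u ^ (c >> 1) : c >> 1;
            table[i] = c;
        }
        return table;
    }

    // Packs the file size (high 32 bits) and a CRC-32 of the content
    // (low 32 bits) into one 64-bit identifier; fine for files under 4 GB.
    public static ulong SizeAndCrc32(string path)
    {
        uint crc = 0xFFFFFFFFu;
        long length;
        using (var stream = File.OpenRead(path))
        {
            length = stream.Length;
            var buffer = new byte[81920];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                for (int i = 0; i < read; i++)
                    crc = Table[(crc ^ buffer[i]) & 0xFF] ^ (crc >> 8);
        }
        return ((ulong)(uint)length << 32) | (crc ^ 0xFFFFFFFFu);
    }
}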
We are trying to convert the text "HELLOWORLDTHISISALARGESTRINGCONTENT" into a smaller text. While doing it with an MD5 hash we get 16 bytes, but since it is one-way encryption we are not able to decrypt it. Is there any other way to convert this large string to a smaller one and revert back to the same data? If so, please let us know how to do it.
Thanks in advance.
Most compression algorithms won't be able to do much with a sequence that short (or may actually make it bigger) - so no: there isn't much you can do to magically shrink it. Your best bet would probably be to just generate a GUID, store the full value keyed against the GUID (in a database or whatever), and then use the short value as a one-time key to look up the long value (and then erase the record).
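A minimal sketch of that lookup approach, with an in-memory dictionary standing in for whatever database you actually use (the class and method names are illustrative only):

using System;
using System.Collections.Generic;

static class ShortKeyStore
{
    // Stand-in for a database table: maps the short key to the full value.
    static readonly Dictionary<Guid, string> Store = new Dictionary<Guid, string>();

    public static Guid Put(string fullValue)
    {
        var key = Guid.NewGuid();
        Store[key] = fullValue;
        return key;              // hand out this short key instead of the full string
    }

    public static string TakeOnce(Guid key)
    {
        string value = Store[key];
        Store.Remove(key);       // one-time usage: erase the record after lookup
        return value;
    }
}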
It heavily depends on the input data. In general - the worst case - you can't lessen the size of a string through compression if the input data is not long enough and has a high entropy.
Hashing is the wrong approach as a hashing function tries to map a large input data to a short one, but it does not guarantee (by itself) that you can't find a second set of data to map to the same string.
What you can try to do is to implement a compression algorithm or a lookup table.
Compression can be done by ziplib or any other compression library (just google for it). The lookup approach requires a second place to store the lookup information. For example, when you get the first input string, you map it to the number 1 and save the information "1 maps to {input data}" somewhere else. For every subsequent data set you add another mapping entry. If the input data set is finite, this approach may save you space.
I hope this question is not too vague. I am working on an RFID project and I am using passive tags. These tags store only 4 bytes of data, 32 bits. I am trying to store more information as a string in the tag's data bank. I searched the internet for string compression algorithms but didn't find any of them suitable. Someone please guide me through this issue. How can I save more data in this 4-byte data bank? Should I use some other storage strategy, and if yes, then what? Moreover, I am using C# on a handheld Windows CE device.
I'd appreciate it if someone could help me...
It depends on your tag. For example, the Alien tag (http://www.alientechnology.com/docs/products/Alien-Technology-Higgs-3-ALN-9662-Short.pdf) has EPC memory; I think you are using the EPC memory, but you can also use the User Memory in your tag. You don't have to compress anything, just use your User Memory. Furthermore, technically I would rather not save much data on the tag itself: I use my own 32-bit coding and map it to the full data in my software, saving the data on my hard disk. It is safer too.
There is obviously no compression that can reduce arbitrary 16-byte values to 4-byte values. That's mathematically impossible; check the pigeonhole principle for details.
Store the actual data in some kind of database. Have the 4 bytes encode an integer that acts as a key for the row you want to refer to, for example by using an auto-increment primary key, or an index into an array. Works with up to 4 billion rows.
If you have less than 2^32 strings, simply enumerate them and then save the strings index (in your "dictionary") inside your 4 byte "Data Bank".
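A minimal in-memory sketch of that enumeration idea (in practice the table would live in a database or file in the host software rather than on the device; the names here are illustrative only):

using System.Collections.Generic;

static class TagIndex
{
    static readonly List<string> Values = new List<string>();                     // index -> full string
    static readonly Dictionary<string, int> Lookup = new Dictionary<string, int>();

    // Returns the 4-byte integer to write to the tag's data bank.
    public static int Enroll(string fullData)
    {
        int index;
        if (!Lookup.TryGetValue(fullData, out index))
        {
            index = Values.Count;
            Values.Add(fullData);
            Lookup[fullData] = index;
        }
        return index;
    }

    // Resolves a value read back from a tag.
    public static string Resolve(int index)
    {
        return Values[index];
    }
}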
A compression scheme can't guarantee such high compression ratios.
The only way I can think of with 32-bits is to store an int in the 32-bits, and construct a local/remote URL out of it, which points to the actual data.
You could also make the stored value point to entries in a local look-up table on the device.
Unless you know a lot about the format of your string, it is impossible to do this. This is evident from the pigeonhole principle: you have a theoretical 2^128 different 16-byte strings, but only 2^32 different values to choose from.
In other words, no compression algorithm will guarantee that an arbitrary string in your possible input set will map to a 4-byte value in the output set.
It may be possible to devise an algorithm which will work in your particular case, but unless your data set is sufficiently restricted (at most 1 in 79,228,162,514,264,337,593,543,950,336 possible strings may be valid) and has a meaningful structure, then your only option is to store some mapping externally.
I have a (what seems like) a large task at hand.
I need to go through different archive volumes of multiple folders (we're talking terabytes of data). Within each folder is a .pst file. Some of these folders (and therefore files) may be exactly the same (name or data within the file). I want to be able to compare more than 2 files at once (if possible) to see if any duplicates are found.
Once the duplicates are found, I need to delete them and keep the originals and then eventually extract all the unique emails.
I know there are programs out there that can find duplicates, but I'm not sure what arguments they would need to pass in these files and I don't know if they can handle such large volumes of data.
I'd like to program in either C# or VB. I'm at a loss on where I should start. Any suggestions??
Ex...
m:\mail\name1\name.pst
m:\mail\name2\name.pst (same exact data as the one above)
m:\mail\name3\anothername.pst (duplicate file to the other 2)
If you just want to remove entire duplicate files the task is very simple to implement.
You will have to go through all your folders and hash the contents of each file. The hash produced has a fixed number of bits (e.g. 32 to 256 bits). If two file hashes are equal, there is an extremely high probability (depending on the collision resistance of your hash function, read: number of bits) that the respective files are identical.
Of course, now the implementation is up to you (I am not a C# or VB programmer), but I would suggest something like the following pseudo-code (next I explain each step and give you links demonstrating how to do it in C#):
do {
    file_byte_array = get_file_contents_into_byte_array(file);    // 1
    hash = get_hash_from_byte_array(file_byte_array);              // 2
    if (hashtable_has_elem(hashtable, hash))                       // 3
        remove_file(file);                                         // 4
    else                                                           // 5
        hashtable_insert_elem(hashtable, hash, file);              // 6
} while (there_are_files_to_evaluate);                             // 7
This logic should be executed over all of your .pst files. At line 1 (I assume you have your file opened) you write all the contents of your file into a byte array.
Once you have the byte array of your file, you must hash it using a hash function (line 2). You have plenty of hash function implementations to choose from. In some implementations you must break the file into blocks and hash each block's contents (e.g. here, here and here). Breaking your file into parts may be the only option if your files are really huge and do not fit in memory. On the other hand, there are many functions which accept a whole stream (e.g. here, here (an example very similar to your problem), here, here, but I would advise the super fast MurmurHash3). If you have efficiency requirements, stay away from cryptographic hash functions, as they are much heavier and you do not need cryptographic properties to perform your task.
Finally, after computing the hash you just need some way to save the hashes and compare them, in order to find identical hashes (read: identical files) and delete them (lines 3-6). I propose the use of a hash table or a dictionary, where the identifier (the object you use to perform lookups) is the file hash and the File object is the entry value.
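Since the answer above is pseudo-code only, here is roughly how the same loop might look in C#. SHA-256 stands in for whichever hash function you end up choosing, and the streaming ComputeHash overload avoids loading a whole .pst file into memory:

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class DuplicateRemover
{
    public static void RemoveDuplicates(IEnumerable<string> pstFiles)
    {
        var seen = new Dictionary<string, string>();     // hash -> first file with that content

        using (var sha = SHA256.Create())
        {
            foreach (string path in pstFiles)
            {
                string hash;
                using (var stream = File.OpenRead(path))
                    hash = BitConverter.ToString(sha.ComputeHash(stream));

                if (seen.ContainsKey(hash))
                    File.Delete(path);                   // duplicate content: delete it
                else
                    seen[hash] = path;                   // first occurrence: keep it
            }
        }
    }
}

Directory.EnumerateFiles(root, "*.pst", SearchOption.AllDirectories) is one way to supply the file list, and before actually deleting you could apply the extra checks from the notes below.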
Notes:
Remember: the more bits the hash value has, the lower the probability of collisions. If you want to know more about collision probabilities in hash functions, read this excellent article. You must pay attention to this topic since your objective is to delete files. If you have a collision, you will delete a file which is not in fact identical and you will lose it forever. There are many tactics to identify collisions, which you can combine and add to your algorithm (e.g. compare the size of your files, compare file content at random positions, use more than one hash function). My advice would be to use all these tactics. If you use two hash functions, then for two files to be considered identical they must have equal hash values for each hash function:
file1, file2;
file1_hash1 = hash_function1(file1);
file2_hash1 = hash_function1(file2);
file1_hash2 = hash_function2(file1);
file2_hash2 = hash_function2(file2);
if(file1_hash1 == file2_hash1 &&
file1_hash2 == file2_hash2)
// file1 is_duplicate_of file2;
else
// file1 is_NOT_duplicate_of file2;
I would work through the process of finding duplicates by first recursively finding all of the PST files, then matching on file length, then filtering by a fixed prefix of bytes, and finally performing a full hash or byte comparison to get actual matches.
Recursively building the list and finding potential matches can be as simple as this:
Func<DirectoryInfo, IEnumerable<FileInfo>> recurse = null;
recurse = di => di.GetFiles("*.pst")
.Concat(di.GetDirectories()
.SelectMany(cdi => recurse(cdi)));
var potentialMatches =
recurse(new DirectoryInfo(@"m:\mail"))
.ToLookup(fi => fi.Length)
.Where(x => x.Skip(1).Any());
The potentialMatches query gives you a complete series of potential matches by file size.
I would then use the following functions (whose implementation I'll leave to you) to filter this list further.
Func<FileInfo, FileInfo, int, bool> prefixBytesMatch = /* your implementation */
Func<FileInfo, FileInfo, bool> hashMatch = /* your implementation */
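A rough sketch of how those two placeholders might be filled in (byte-by-byte prefix reads and MD5 are arbitrary choices; this needs System.IO, System.Linq and System.Security.Cryptography):

Func<FileInfo, FileInfo, int, bool> prefixBytesMatch = (a, b, prefixLength) =>
{
    // Compare the first prefixLength bytes of both files, byte by byte.
    using (var sa = a.OpenRead())
    using (var sb = b.OpenRead())
    {
        for (int i = 0; i < prefixLength; i++)
        {
            int byteA = sa.ReadByte();        // returns -1 at end of file
            int byteB = sb.ReadByte();
            if (byteA != byteB) return false;
            if (byteA == -1) break;           // both files ended at the same point
        }
    }
    return true;
};

Func<FileInfo, FileInfo, bool> hashMatch = (a, b) =>
{
    // Full-content hash comparison; only reached by files that already
    // matched on length and prefix.
    using (var md5 = MD5.Create())
    {
        byte[] ha, hb;
        using (var sa = a.OpenRead()) ha = md5.ComputeHash(sa);
        using (var sb = b.OpenRead()) hb = md5.ComputeHash(sb);
        return ha.SequenceEqual(hb);
    }
};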
By limiting the matches by file length and then by a prefix of bytes you will significantly reduce the computation of hashes required for your very large files.
I hope this helps.
Let's say I'm trying to generate a monster for use in a roleplaying game from an arbitrary piece of input data. Think Barcode Battler or a more-recent iPod game whose name escapes me.
It seems to me like the most straightforward way to generate a monster would be to use a hash function on the input data (say, an MP3 file) and use that hash value to pick from some predetermined set of monsters, or use pieces of the hash value to generate statistics for a custom monster.
The question is, are there obvious methods for taking an arbitrary piece of input data and hashing it to one of a fixed set of values? The primary goal of hashing algorithms is, after all, to avoid collisions. Instead, I'm suggesting that we want to guarantee them - that, given a predetermined set of 100 monsters, we want any given MP3 file to map to one of them.
This question isn't bound to a particular language, but I'm working in C#, so that would be my preference for discussion. Thanks!
Hash the file using any hash function of your choice, convert the result into an integer, and take the result modulo 100.
monsterId = hashResult % 100;
Note that if you later decide to add a new monster and change the code to % 101, nearly all hashes will suddenly map to different monsters.
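A small C# sketch of that approach; MD5 and taking the first four bytes of the hash are arbitrary choices, and the MonsterPicker name is illustrative only:

using System;
using System.IO;
using System.Security.Cryptography;

static class MonsterPicker
{
    // e.g. int monsterId = MonsterPicker.PickMonster(@"C:\music\somefile.mp3", 100);
    public static int PickMonster(string path, int monsterCount)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            byte[] hash = md5.ComputeHash(stream);

            // Interpret the first four bytes of the hash as an integer,
            // then fold it into the fixed range of monster ids.
            int value = BitConverter.ToInt32(hash, 0);
            return Math.Abs(value % monsterCount);
        }
    }
}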
Okay, that's a very nice question. I would say: don't use a hash, because it won't give the player a nice way to predict patterns. From cognitive theory we know that one thing that makes games interesting is that the player can learn by trial and error. So if the player gives as input an image of a red dragon, and then another image of a red dragon with slightly different pixels, he would like the same monster to appear, right? If you use hashes, that would not be the case.
Instead, I would recommend doing something much simpler. Imagine that your raw piece of input is just a byte[]; it is itself already a list of numbers. Unfortunately it's only a list of numbers from 0 to 255, so if you, for example, take the average, you get one number from 0 to 255. You could map that to a number of monsters already; if you need more, you can read pairs of bytes and compose an Int16, and that way you will be able to go up to 65536 possible monsters :)
You can use the MD5, SHA1, or SHA2 hash of a file as a unique fingerprint for the file. Each successive hash function gives you a larger fingerprint with a lower chance of collisions, and each can be obtained with library functions already in the base class libraries.
In truth you could probably hash a much smaller portion of the file, for instance the first 1-3MB of the file and still get a fairly unique fingerprint, without the expense of processing a larger file (like an AVI).
Look in the System.Security.Cryptography namespace for the MD5CryptoServiceProvider class for an example of how to generate an MD5 hash from a byte sequence.
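As a rough illustration of hashing only a prefix of the file (the class name and the 1 MB prefix length are arbitrary choices):

using System;
using System.IO;
using System.Security.Cryptography;

static class PartialFingerprint
{
    public static string FirstMegabyteMd5(string path)
    {
        const int prefixSize = 1024 * 1024;            // hash only the first 1 MB
        var buffer = new byte[prefixSize];

        using (var stream = File.OpenRead(path))
        {
            int total = 0, read;
            while (total < prefixSize &&
                   (read = stream.Read(buffer, total, prefixSize - total)) > 0)
                total += read;

            using (var md5 = new MD5CryptoServiceProvider())
            {
                byte[] hash = md5.ComputeHash(buffer, 0, total);
                return BitConverter.ToString(hash).Replace("-", "");
            }
        }
    }
}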
Edit: if you don't mind the hash colliding relatively often, you can use CRC-2, 4, 6, 8, 16 or 32, which will collide fairly frequently (especially CRC-2 :)) but will always be the same for the same file. It is easy to generate.
I have a 1GB file containing pairs of string and long.
What's the best way of reading it into a Dictionary, and how much memory would you say it requires?
File has 62 million rows.
I've managed to read it using 5.5GB of ram.
Say 22 bytes overhead per Dictionary entry, that's 1.5GB.
long is 8 bytes, that's 500MB.
Average string length is 15 chars, each char 2 bytes, that's 2GB.
Total is about 4GB, where does the extra 1.5 GB go to?
The initial Dictionary allocation takes 256MB.
I've noticed that every 10 million rows I read consume about 580 MB, which fits quite nicely with the above calculation, but somewhere around the 6000th line, memory usage grows from 260 MB to 1.7 GB; that's my missing 1.5 GB. Where does it go?
Thanks.
It's important to understand what's happening when you populate a Hashtable. (The Dictionary uses a hash table as its underlying data structure.)
When you create a new Hashtable, .NET makes an array containing 11 buckets, which are linked lists of dictionary entries. When you add an entry, its key gets hashed, the hash code gets mapped on to one of the 11 buckets, and the entry (key + value + hash code) gets appended to the linked list.
At a certain point (and this depends on the load factor used when the Hashtable is first constructed), the Hashtable determines, during an Add operation, that it's encountering too many collisions, and that the initial 11 buckets aren't enough. So it creates a new array of buckets that's twice the size of the old one (not exactly; the number of buckets is always prime), and then populates the new table from the old one.
So there are two things that come into play in terms of memory utilization.
The first is that, every so often, the Hashtable needs to use twice as much memory as it's presently using, so that it can copy the table during resizing. So if you've got a Hashtable that's using 1.8GB of memory and it needs to be resized, it's briefly going to need to use 3.6GB, and, well, now you have a problem.
The second is that every hash table entry has about 12 bytes of overhead: pointers to the key, the value, and the next entry in the list, plus the hash code. For most uses, that overhead is insignificant, but if you're building a Hashtable with 100 million entries in it, well, that's about 1.2GB of overhead.
You can overcome the first problem by using the overload of the Dictionary's constructor that lets you provide an initial capacity. If you specify a capacity big enough to hold all of the entries you're going to be adding, the Hashtable won't need to be rebuilt while you're populating it. There's pretty much nothing you can do about the second.
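For example (62 million is the row count mentioned in the question):

// Needs System.Collections.Generic. Pre-sizing to the expected entry count
// avoids the repeated grow-and-copy cycles described above.
var dictionary = new Dictionary<string, long>(62000000);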
Everyone here seems to be in agreement that the best way to handle this is to read only a portion of the file into memory at a time. Speed, of course, is determined by which portion is in memory and what parts must be read from disk when a particular piece of information is needed.
There is a simple method for deciding which parts are best kept in memory:
Put the data into a database.
A real one, like MSSQL Express, or MySql or Oracle XE (all are free).
Databases cache the most commonly used information, so it's just like reading from memory. And they give you a single access method for in-memory or on-disk data.
Maybe you can convert that 1 GB file into a SQLite database with two columns key and value. Then create an index on key column. After that you can query that database to get the values of the keys you provided.
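A rough sketch of that, assuming the System.Data.SQLite ADO.NET provider (any other SQLite package would look similar; the table and key names are placeholders):

using System.Data.SQLite;   // the System.Data.SQLite ADO.NET provider (an assumption)

using (var connection = new SQLiteConnection("Data Source=pairs.db"))
{
    connection.Open();

    // One-off conversion: create the two-column table and an index on key.
    var create = new SQLiteCommand(
        "CREATE TABLE IF NOT EXISTS pairs (key TEXT, value INTEGER);" +
        "CREATE INDEX IF NOT EXISTS idx_pairs_key ON pairs(key);",
        connection);
    create.ExecuteNonQuery();

    // Later, look up single values without ever holding the whole file in memory.
    var query = new SQLiteCommand("SELECT value FROM pairs WHERE key = @k", connection);
    query.Parameters.AddWithValue("@k", "someKey");     // "someKey" is a placeholder
    object value = query.ExecuteScalar();                // null if the key is absent
}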
Thinking about this, I'm wondering why you'd need to do it... (I know, I know... I shouldn't wonder why, but hear me out...)
The main problem is that there is a huge amount of data that needs to be presumably accessed quickly... The question is, will it essentially be random access, or is there some pattern that can be exploited to predict accesses?
In any case, I would implement this as a sliding cache. E.g. I would load as much as feasibly possible into memory to start with (with the selection of what to load based as much on my expected access pattern as possible) and then keep track of accesses to elements by time last accessed.
If I hit something that wasn't in the cache, then it would be loaded and replace the oldest item in the cache.
This would result in the most commonly used stuff being accessible in memory, but would incur additional work for cache misses.
In any case, without knowing a little more about the problem, this is merely a 'general solution'.
It may be that just keeping it in a local instance of a sql db would be sufficient :)
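For what it's worth, a very rough sketch of such a sliding (least-recently-used) cache; the capacity and the load-from-disk delegate are placeholders you would tune to your access pattern:

using System;
using System.Collections.Generic;

class SlidingCache<TKey, TValue>
{
    private readonly int capacity;
    private readonly Func<TKey, TValue> loadFromDisk;        // called on a cache miss
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> byAge =
        new LinkedList<KeyValuePair<TKey, TValue>>();        // most recently used at the front

    public SlidingCache(int capacity, Func<TKey, TValue> loadFromDisk)
    {
        this.capacity = capacity;
        this.loadFromDisk = loadFromDisk;
    }

    public TValue Get(TKey key)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (map.TryGetValue(key, out node))
        {
            byAge.Remove(node);                              // refresh its age
            byAge.AddFirst(node);
            return node.Value.Value;
        }

        if (map.Count >= capacity)                           // evict the oldest entry
        {
            var oldest = byAge.Last;
            byAge.RemoveLast();
            map.Remove(oldest.Value.Key);
        }

        TValue value = loadFromDisk(key);                    // the cache-miss penalty
        map[key] = byAge.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
        return value;
    }
}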
You'll need to specify the file format, but if it's just something like name=value, I'd do:
Dictionary<string,long> dictionary = new Dictionary<string,long>();
using (TextReader reader = File.OpenText(filename))
{
string line;
while ((line = reader.ReadLine()) != null)
{
string[] bits = line.Split('=');
// Error checking would go here
long value = long.Parse(bits[1]);
dictionary[bits[0]] = value;
}
}
Now, if that doesn't work we'll need to know more about the file - how many lines are there, etc?
Are you using 64 bit Windows? (If not, you won't be able to use more than 3GB per process anyway, IIRC.)
The amount of memory required will depend on the length of the strings, number of entries etc.
I am not familiar with C#, but if you're having memory problems you might need to roll your own memory container for this task.
Since you want to store it in a dict, I assume you need it for fast lookup?
You have not clarified which one should be the key, though.
Let's hope you want to use the long values for keys. Then try this:
Allocate a buffer that's as big as the file. Read the file into that buffer.
Then create a dictionary with the long values (which are 64-bit values in C#) as keys, with their values being a 32-bit offset into the buffer.
Now browse the data in the buffer like this:
Find the next key-value pair. Calculate the offset of its value in the buffer. Now add this information to the dictionary, with the long as the key and the offset as its value.
That way, you end up with a dictionary which might take maybe 10-20 bytes per record, and one larger buffer which holds all your text data.
At least with C++, this would be a rather memory-efficient way, I think.
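Translated to C#, the idea might look roughly like this. It assumes the file holds one string=long pair per line (as in the earlier answer), that File.ReadAllBytes supplies the single large buffer, and the OffsetIndex name is illustrative only:

using System.Collections.Generic;
using System.Text;

static class OffsetIndex
{
    // Usage (assumed):
    //   byte[] buffer = File.ReadAllBytes(path);               // the single big buffer
    //   Dictionary<long, int> index = OffsetIndex.Build(buffer);
    //   string s = OffsetIndex.ValueAt(buffer, index[someKey]);

    // Maps each long key to the byte offset of its string inside the raw buffer,
    // so the strings are never kept as separate .NET objects.
    public static Dictionary<long, int> Build(byte[] buffer)
    {
        var index = new Dictionary<long, int>();
        int lineStart = 0;

        for (int i = 0; i <= buffer.Length; i++)
        {
            if (i == buffer.Length || buffer[i] == (byte)'\n')
            {
                int lineLength = i - lineStart;
                if (lineLength > 0)
                {
                    string line = Encoding.UTF8.GetString(buffer, lineStart, lineLength)
                                               .TrimEnd('\r');
                    int eq = line.IndexOf('=');
                    if (eq > 0)
                        index[long.Parse(line.Substring(eq + 1))] = lineStart;
                }
                lineStart = i + 1;
            }
        }
        return index;
    }

    // Re-reads the string part on demand, straight out of the buffer.
    public static string ValueAt(byte[] buffer, int offset)
    {
        int end = offset;
        while (end < buffer.Length && buffer[end] != (byte)'=')
            end++;
        return Encoding.UTF8.GetString(buffer, offset, end - offset);
    }
}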
Can you convert the 1G file into a more efficient indexed format, but leave it as a file on disk? Then you can access it as needed and do efficient lookups.
Perhaps you can memory map the contents of this (more efficient format) file, then have minimum ram usage and demand-loading, which may be a good trade-off between accessing the file directly on disc all the time and loading the whole thing into a big byte array.
Loading a 1 GB file in memory at once doesn't sound like a good idea to me. I'd virtualize the access to the file by loading it in smaller chunks only when the specific chunk is needed. Of course, it'll be slower than having the whole file in memory, but 1 GB is a real mastodon...
Don't read a 1 GB file into memory; even though you have 8 GB of physical RAM, you can still run into plenty of problems (based on personal experience).
I don't know what you need to do, but find a workaround: read the file partially and process it in pieces. If that doesn't work, then consider using a database.
If you choose to use a database, you might be better served by a dbm-style tool, like Berkeley DB for .NET. They are specifically designed to represent disk-based hashtables.
Alternatively you may roll your own solution using some database techniques.
Suppose your original data file looks like this (dots indicate that string lengths vary):
[key2][value2...][key1][value1..][key3][value3....]
Split it into index file and values file.
Values file:
[value1..][value2...][value3....]
Index file:
[key1][value1-offset]
[key2][value2-offset]
[key3][value3-offset]
Records in index file are fixed-size key->value-offset pairs and are ordered by key.
Strings in values file are also ordered by key.
To get the value for key(N) you would binary-search for the key(N) record in the index file, then read the string from the values file starting at value(N)-offset and ending before value(N+1)-offset.
The index file can be read into an in-memory array of structs (less overhead and much more predictable memory consumption than a Dictionary), or you can do the search directly on disk.
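A rough sketch of the on-disk binary search over the index file; the 16-byte record layout (an 8-byte key followed by an 8-byte value offset) is an assumption, as is the IndexSearch name:

using System.IO;

static class IndexSearch
{
    const int RecordSize = 16;        // 8-byte key + 8-byte value offset (assumed layout)

    // Returns the value offset for the given key, or -1 if the key is not present.
    public static long FindValueOffset(string indexPath, long key)
    {
        using (var stream = File.OpenRead(indexPath))
        using (var reader = new BinaryReader(stream))
        {
            long low = 0, high = stream.Length / RecordSize - 1;

            while (low <= high)
            {
                long mid = (low + high) / 2;
                stream.Seek(mid * RecordSize, SeekOrigin.Begin);

                long candidateKey = reader.ReadInt64();
                long valueOffset = reader.ReadInt64();

                if (candidateKey == key) return valueOffset;
                if (candidateKey < key) low = mid + 1;
                else high = mid - 1;
            }
            return -1;
        }
    }
}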