I have a large file, roughly 400 GB in size, generated daily by an external closed system. It is a binary file with the following format:
byte[8]byte[4]byte[n]
Where n is equal to the int32 value of byte[4].
This file has no delimiters; to read the whole file you just repeat until EOF, with each "item" represented as byte[8]byte[4]byte[n].
The file looks like
byte[8]byte[4]byte[n]byte[8]byte[4]byte[n]...EOF
byte[8] is a 64-bit number representing a period of time, expressed as .NET Ticks. I need to sort this file but can't seem to figure out the quickest way to do so.
Presently, I read through the file once, loading the Ticks and the start and end positions of each byte[n] into a struct in a List. After this, I sort the List in memory by the Ticks property, then open a BinaryReader, seek to each position in Ticks order, read the byte[n] value, and write it to an external file.
At the end of the process I end up with a sorted binary file, but it takes FOREVER. I am using C# .NET and a pretty beefy server, but disk IO seems to be an issue.
Server Specs:
2x 2.6 GHz Intel Xeon (Hex-Core with HT) (24-threads)
32GB RAM
500GB RAID 1+0
2TB RAID 5
I've looked all over the internet and can only find examples where a huge file is 1GB (makes me chuckle).
Does anyone have any advice?
A great way to speed up this kind of file access is to memory-map the entire file into address space and let the OS take care of reading whatever bits of the file it needs to. So do the same thing you're doing right now, except read from memory instead of using a BinaryReader/seek/read.
You've got lots of main memory, so this should provide pretty good performance (as long as you're using a 64-bit OS).
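For what it's worth, here is a minimal sketch of that approach, assuming the record layout from your question; the path, struct, and method names are placeholders rather than your actual code, and a 64-bit process is assumed for a mapping that large.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.IO.MemoryMappedFiles;

    class MmapIndexer
    {
        struct Entry { public long Ticks; public long Offset; public int Length; }

        // Builds a (ticks, offset, length) index through a memory-mapped view instead
        // of BinaryReader seeks; the OS pages the file in as it is touched.
        static List<Entry> BuildIndex(string path)
        {
            long fileLength = new FileInfo(path).Length;
            var index = new List<Entry>();

            using (var mmf = MemoryMappedFile.CreateFromFile(
                       path, FileMode.Open, null, 0, MemoryMappedFileAccess.Read))
            using (var view = mmf.CreateViewAccessor(0, fileLength, MemoryMappedFileAccess.Read))
            {
                long pos = 0;
                while (pos < fileLength)
                {
                    long ticks = view.ReadInt64(pos);            // byte[8]: .NET Ticks
                    int payloadLength = view.ReadInt32(pos + 8); // byte[4]: length of byte[n]
                    index.Add(new Entry { Ticks = ticks, Offset = pos + 12, Length = payloadLength });
                    pos += 12 + payloadLength;                   // jump to the next record
                }
            }

            index.Sort((a, b) => a.Ticks.CompareTo(b.Ticks));
            return index;
        }
    }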
Use merge sort.
It's online and parallelizes well.
http://en.wikipedia.org/wiki/Merge_sort
If you can learn Erlang or Go, they could be very powerful and scale extremely well, given that you have 24 threads. Use async I/O and a merge sort.
And since you have 32 GB of RAM, load as much as you can into memory, sort it there, then write it back to disk.
I would do this in several passes. On the first pass, I would create a list of ticks, then distribute them evenly into many (hundreds?) buckets. If you know ahead of time that the ticks are evenly distributed, you can skip this initial pass. On a second pass, I would split the records into these few hundred separate files of about the same size (these much smaller files represent groups of ticks in the order that you want). Then I would sort each file separately in memory. Then concatenate the files.
It is somewhat similar to the hashsort (I think).
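If it helps, here is a rough, untested sketch of that second pass in C#, using the record layout from the question. The bucket boundaries ("cutoffs"), file names, and directory are assumptions, and System and System.IO are required.

    // Distribute records into bucket files by tick range so each bucket can later be
    // sorted in memory on its own. "cutoffs" (ascending tick boundaries) would come
    // from the first pass described above.
    static void DistributeIntoBuckets(string inputPath, long[] cutoffs, string bucketDir)
    {
        var writers = new BinaryWriter[cutoffs.Length + 1];
        for (int i = 0; i < writers.Length; i++)
            writers[i] = new BinaryWriter(File.Create(Path.Combine(bucketDir, "bucket" + i + ".bin")));

        try
        {
            using (var reader = new BinaryReader(File.OpenRead(inputPath)))
            {
                long length = reader.BaseStream.Length;
                while (reader.BaseStream.Position < length)
                {
                    long ticks = reader.ReadInt64();        // byte[8]
                    int payloadLength = reader.ReadInt32(); // byte[4]
                    byte[] payload = reader.ReadBytes(payloadLength);

                    // Pick the bucket whose tick range contains this record.
                    int bucket = Array.BinarySearch(cutoffs, ticks);
                    if (bucket < 0) bucket = ~bucket;

                    writers[bucket].Write(ticks);
                    writers[bucket].Write(payloadLength);
                    writers[bucket].Write(payload);
                }
            }
        }
        finally
        {
            foreach (var w in writers) w.Dispose();
        }
    }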
Related
I am really stumped by this problem and as a result I have stopped working on it for a while. I work with really large pieces of data. I get approximately 200 GB of .txt data every week. The data can range up to 500 million lines, and a lot of those lines are duplicates; I would guess only 20 GB is unique. I have had several custom programs made, including hash-based and external duplicate removers, but none seem to work. The latest one used a temp database but took several days to remove the data.
The problem with all the programs is that they crash after a certain point, and after spending a large amount of money on them I thought I would come online and see if anyone can help. I understand this has been answered on here before, and I have spent the last 3 hours reading about 50 threads on here, but none seem to have the same problem as me, i.e. huge datasets.
Can anyone recommend anything for me? It needs to be super accurate and fast, and preferably not memory based, as I only have 32 GB of RAM to work with.
The standard way to remove duplicates is to sort the file and then do a sequential pass to remove duplicates. Sorting 500 million lines isn't trivial, but it's certainly doable. A few years ago I had a daily process that would sort 50 to 100 gigabytes on a 16 GB machine.
By the way, you might be able to do this with an off-the-shelf program. Certainly the GNU sort utility can sort a file larger than memory. I've never tried it on a 500 GB file, but you might give it a shot. You can download it along with the rest of the GNU Core Utilities. That utility has a --unique option, so you should be able to just sort --unique input-file > output-file. It uses a technique similar to the one I describe below. I'd suggest trying it on a 100 megabyte file first, then slowly working up to larger files.
With GNU sort and the technique I describe below, it will perform a lot better if the input and temporary directories are on separate physical disks. Put the output either on a third physical disk, or on the same physical disk as the input. You want to reduce I/O contention as much as possible.
There might also be a commercial (i.e. pay) program that will do the sorting. Developing a program that will sort a huge text file efficiently is a non-trivial task. If you can buy something for a few hundred dollars, you're probably money ahead if your time is worth anything.
If you can't use a ready-made program, then . . .
If your text is in multiple smaller files, the problem is easier to solve. You start by sorting each file, removing duplicates from those files, and writing the sorted temporary files that have the duplicates removed. Then run a simple n-way merge to merge the files into a single output file that has the duplicates removed.
If you have a single file, you start by reading as many lines as you can into memory, sorting those, removing duplicates, and writing a temporary file. You keep doing that for the entire large file. When you're done, you have some number of sorted temporary files that you can then merge.
In pseudocode, it looks something like this:
    fileNumber = 0
    while not end-of-input
        load as many lines as you can into a list
        sort the list
        filename = "file" + fileNumber
        write sorted list to filename, optionally removing duplicates
        fileNumber = fileNumber + 1
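A minimal C# rendering of that pseudocode, assuming plain text lines and a chunk size that fits comfortably in memory; chunkSize, tempDir, and the file naming are illustrative.

    // Requires: using System; using System.Collections.Generic; using System.IO; using System.Linq;
    static List<string> WriteSortedChunks(string inputPath, string tempDir, int chunkSize)
    {
        var tempFiles = new List<string>();
        var chunk = new List<string>(chunkSize);
        int fileNumber = 0;

        foreach (string line in File.ReadLines(inputPath))
        {
            chunk.Add(line);
            if (chunk.Count >= chunkSize)
            {
                tempFiles.Add(FlushChunk(chunk, tempDir, fileNumber++));
                chunk.Clear();
            }
        }
        if (chunk.Count > 0)
            tempFiles.Add(FlushChunk(chunk, tempDir, fileNumber));

        return tempFiles;
    }

    static string FlushChunk(List<string> chunk, string tempDir, int fileNumber)
    {
        chunk.Sort(StringComparer.Ordinal);
        string path = Path.Combine(tempDir, "file" + fileNumber);
        // Distinct() drops duplicates within the chunk, as suggested in the pseudocode.
        File.WriteAllLines(path, chunk.Distinct());
        return path;
    }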
You don't really have to remove the duplicates from the temporary files, but if your unique data is really only 10% of the total, you'll save a huge amount of time by not outputting duplicates to the temporary files.
Once all of your temporary files are written, you need to merge them. From your description, I figure each chunk that you read from the file will contain somewhere around 20 million lines. So you'll have maybe 25 temporary files to work with.
You now need to do a k-way merge. That's done by creating a priority queue. You open each file, read the first line from each file and put it into the queue along with a reference to the file that it came from. Then, you take the smallest item from the queue and write it to the output file. To remove duplicates, you keep track of the previous line that you output, and you don't output the new line if it's identical to the previous one.
Once you've output the line, you read the next line from the file that the one you just output came from, and add that line to the priority queue. You continue this way until you've emptied all of the files.
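Here is a sketch of that k-way merge with duplicate removal, assuming .NET 6+ for PriorityQueue<TElement, TPriority> (the element is the reader's index, the priority is its current line); on an older framework you would substitute your own heap.

    // Requires: using System.Collections.Generic; using System.IO; using System.Linq;
    static void MergeSortedFiles(IList<string> sortedFiles, string outputPath)
    {
        var readers = sortedFiles.Select(f => new StreamReader(f)).ToArray();
        var queue = new PriorityQueue<int, string>(StringComparer.Ordinal);

        // Prime the queue with the first line of every temporary file.
        for (int i = 0; i < readers.Length; i++)
        {
            string first = readers[i].ReadLine();
            if (first != null) queue.Enqueue(i, first);
        }

        using (var output = new StreamWriter(outputPath))
        {
            string previous = null;
            while (queue.TryDequeue(out int fileIndex, out string line))
            {
                // Skip exact duplicates of the line we just wrote.
                if (line != previous)
                {
                    output.WriteLine(line);
                    previous = line;
                }

                // Refill from the file the line came from.
                string next = readers[fileIndex].ReadLine();
                if (next != null) queue.Enqueue(fileIndex, next);
            }
        }

        foreach (var r in readers) r.Dispose();
    }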
I published a series of articles some time back about sorting a very large text file. It uses the technique I described above. The only thing it doesn't do is remove duplicates, but that's a simple modification to the methods that output the temporary files and the final output method. Even without optimizations, the program performs quite well. It won't set any speed records, but it should be able to sort and remove duplicates from 500 million lines in less than 12 hours. Probably much less, considering that the second pass is only working with a small percentage of the total data (because you removed duplicates from the temporary files).
One thing you can do to speed up the program is to operate on smaller chunks and sort one chunk in a background thread while you're loading the next chunk into memory. You end up having to deal with more temporary files, but that's really not a problem. The heap operations are slightly slower, but that extra time is more than recaptured by overlapping the input and output with the sorting. You end up getting the I/O essentially for free. At typical hard drive speeds, loading 500 gigabytes will take somewhere in the neighborhood of two and a half to three hours.
Take a look at the article series. It's a number of different, mostly small, articles that take you through the entire process I describe, and it presents working code. I'm happy to answer any questions you might have about it.
I am no specialist in such algorithms, but if it is textual data (or numbers, it doesn't matter), you can try reading your big file and writing it out into several files keyed by the first two or three characters: all lines starting with "aaa" go to aaa.txt, all lines starting with "aab" go to aab.txt, and so on. You'll get lots of files, and within each one the data are in an equivalence relation: any duplicate of a line is in the same file as the line itself. Now just process each file in memory and you're done.
Again, not sure that it will work, but I'd try this approach first...
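A rough, untested sketch of that idea follows; the three-character prefix, the hex-encoded bucket names, and the directory layout are arbitrary choices, and with many distinct prefixes you may need to cap the number of writers kept open at once.

    // Requires: using System.Collections.Generic; using System.IO; using System.Linq;
    static void DeduplicateByPrefix(string inputPath, string bucketDir, string outputPath)
    {
        var writers = new Dictionary<string, StreamWriter>();
        foreach (string line in File.ReadLines(inputPath))
        {
            string prefix = line.Length >= 3 ? line.Substring(0, 3) : line;
            string key = string.Concat(prefix.Select(c => ((int)c).ToString("x2"))); // file-system-safe name
            if (!writers.TryGetValue(key, out StreamWriter writer))
            {
                writer = new StreamWriter(Path.Combine(bucketDir, key + ".bucket"));
                writers[key] = writer;
            }
            writer.WriteLine(line);
        }
        foreach (var w in writers.Values) w.Dispose();

        // Each bucket is small enough to de-duplicate in memory.
        using (var output = new StreamWriter(outputPath))
            foreach (string bucket in Directory.EnumerateFiles(bucketDir, "*.bucket"))
                foreach (string line in File.ReadLines(bucket).Distinct())
                    output.WriteLine(line);
    }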
I have a method for creating a zip file in my project and it works perfectly. I want to know whether there is any way to estimate the approximate time for creating that zip file. I know about Stopwatch, but I don't think I can use that for my requirement. Any ideas?
This is really impossible to answer.
The amount of time a ZIP process will need depends on many factors, for instance:
The compressibility of the file(s) to compress. Case in point: XML files zip very nicely, MP3 files hardly at all.
The number of files to compress.
The algorithm/implementation you use.
Whether the PC you are using is also doing other work (especially I/O).
...
The best you can do is ZIP a portion of the total data (say, 10%), then extrapolate to get an estimated time, then re-evaluate that estimate, say, every 10% of data or so.
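For illustration, a sketch of that extrapolation with Stopwatch; allFiles, GetSample, and CompressFiles are hypothetical placeholders for whatever your project actually uses.

    // Compress a sample of the input, time it, and scale up linearly.
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    CompressFiles(GetSample(allFiles, fraction: 0.10)); // hypothetical helpers: 10% of the data
    stopwatch.Stop();

    // Linear extrapolation; re-evaluate the estimate as more of the data is processed.
    TimeSpan estimatedTotal = TimeSpan.FromSeconds(stopwatch.Elapsed.TotalSeconds / 0.10);
    Console.WriteLine("Estimated total time: " + estimatedTotal);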
In my C# application I have to read a huge number of binary files. On the first run, reading those files using FileStream and BinaryReader takes a long time, but the second time you run the app, reading the files is 4 times faster.
After reading this post "Slow reading hundreds of files" I decided to precache the binary files.
After reading this other post "How can I check if a program is running for the first time?", my app now can detect if it is the first time it is running then I precache the files by using this simple technique "Caching a binary file in C#".
Is there another way of precaching a huge number of binary files?
Edit:
This is how I read and parse the files
// Open the file and hand the reader to the parser.
f_strm = new FileStream(location, FileMode.Open, FileAccess.Read);
readBinary = new BinaryReader(f_strm);
Parse(readBinary);
The Parse() function just contains a switch statement that I use to parse the data.
I don't do anything more complicated. As an example, I tried to read and parse 10,000 binary files of 601 KB each; it took 39 seconds and about 589,000 cycles to read and parse the files.
When I run the app again, it takes only about 45,000 cycles and 1.5 seconds to read and parse.
Edit:
By "huge" amount of files I mean millions of files. It's not always the case, but most of the time I have to deal with at least 10.000 files. The size of those files can be between 600Ko and 700MB.
Just read them once and discard the results. That puts them into the OS cache and makes future reads very fast. Using the OS cache is automatic and very safe.
Or, make yourself a Dictionary<string, byte[]> where you store the file contents keyed by the file path. Be careful not to exhaust available memory or your app will fail or become very slow due to paging.
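A tiny sketch of both suggestions; dataDirectory and the file pattern are assumptions.

    // Reading each file once warms the OS file cache; the dictionary is the optional
    // in-memory cache, keyed by path. Watch the total size against available RAM.
    var cache = new Dictionary<string, byte[]>();
    foreach (string path in Directory.EnumerateFiles(dataDirectory, "*.bin"))
    {
        byte[] contents = File.ReadAllBytes(path); // the first read populates the OS cache
        cache[path] = contents;                    // drop this line if you only want the OS cache
    }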
How can I measure the characteristics of file (hard-disk) I/O? For example, on a machine with a hard disk (with speed X), an i7 CPU (or whatever number of cores), and Y amount of RAM (with Z Hz BIOS), what would be (on a Windows OS):
Optimum number of files that can be written to the HD in parallel?
Optimum number of files that can be read from HD in parallel?
File-system facilities that help with faster writes. (For example: is there a feature or tool that lets you write batches of binary data to different sectors (or disks) and then bind them together as a file? I don't know much about the underlying file I/O in the OS, but it would be reasonable for such tools to exist!)
If there are such tools (as in the previous point), are they available in .NET too?
I want to write large files (streamed over the web or from another source) as fast (and as parallel) as possible. I am coding this in C#, and it acts like a download manager: if streaming gets interrupted, it can carry on later.
The answer (as so often) depends on your usage. The whole operating system is one big tradeoff between different use scenarios. For the NTFS filesystem one could mention the block size set to 4 KB, NTFS storing files smaller than the block size in the MFT, size of files, number of files, fragmentation, etc.
If you are planning to write large files, then a block size of 64 KB may be good. That is, if you plan to read large amounts of data. If you read smaller amounts of data, then smaller sizes are good. The OS works in 4 KB pages, so 4 KB is good. Compression (and encryption?) as well as SQL and Exchange only work on 4 KB pages (IIRC).
If you write small files (<4 KB) they will be stored inside the MFT so you don't have to make "an extra jump". This is especially useful in write operations (reads may have the MFT cached). The MFT stores files in sequences (i.e. blocks 1000-1010, 2000-2010), so fragmentation will make the MFT bigger. Writing files to disk in parallel is one of the main causes of fragmentation; the other is deleting files. You may pre-allocate the required size for a file and Windows will try to find a suitable place on the disk to counter fragmentation. There are also real-time defragmentation programs like O&O Defrag.
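As a small illustration of the pre-allocation point: expectedSizeInBytes stands for whatever size you know up front, and this is a sketch rather than a guarantee of contiguous allocation.

    // Reserve the file's full size before streaming data in, giving NTFS a chance
    // to pick a single suitable region and reduce fragmentation.
    using (var fs = new FileStream("output.bin", FileMode.Create, FileAccess.Write))
    {
        fs.SetLength(expectedSizeInBytes);
        // ...write the actual data afterwards...
    }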
Windows maps a binary stream pretty much directly to the physical location on the disk, so using different read/write methods will not yield as much of a performance boost as other factors. For maximum speed, programs use techniques such as mapping the file directly into memory. See http://en.wikipedia.org/wiki/Memory-mapped_file
There is an option in Windows (under Device Manager, Hard disks) to increase caching on the disk. This is dangerous, as it could damage the filesystem if the computer bluescreens or loses power, but it gives a big performance boost when writing smaller files (and on all writes). If the disk is busy this is especially valuable, as the seek time will decrease. Windows uses what is called the elevator algorithm, which basically means it moves the hard-disk heads back and forth over the surface, serving any application in the direction it is moving.
Hope this helps. :)
There are some text files (records) that I need to access using C#.NET, but those files are larger than 1 GB (the minimum size is 1 GB).
What do I need to do?
What are the factors I need to concentrate on?
Can someone give me an idea of how to overcome this situation?
EDIT:
Thanks for the fast responses. Yes, they are fixed-length records. These text files come from a local company (their last month's transaction records).
Is it possible to access these files like normal text files (using a normal file stream)?
And
How about memory management?
Expanding on CasperOne's answer
Simply put, there is no way to reliably put a 100 GB file into memory at one time. On a 32-bit machine there is simply not enough addressing space. On a 64-bit machine there is enough addressing space, but in the time it would take to actually get the file into memory, your user will have killed your process out of frustration.
The trick is to process the file incrementally. The base System.IO.Stream class is designed to process a variable (and possibly infinite) stream in distinct quantities. It has several Read methods that will only progress down a stream a specific number of bytes. You will need to use these methods in order to divide up the stream.
I can't give more information because your scenario is not specific enough. Can you give us more details on your record delimiters or some sample lines from the file?
Update
If they are fixed length records then System.IO.Stream will work just fine. You can even use File.Open() to get access to the underlying Stream object. Stream.Read has an overload that requests the number of bytes to be read from the file. Since they are fixed length records this should work well for your scenario.
As long as you don't call ReadAllText() and instead use the Stream.Read() methods which take explicit byte arrays, memory won't be an issue. The underlying Stream class will take care not to put the entire file into memory (that is of course, unless you ask it to :) ).
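As a hedged sketch of what that looks like for fixed-length records; recordLength, path, and ProcessRecord are placeholders, not details from your question.

    // Requires: using System.IO;
    const int recordLength = 128; // assumed record size
    byte[] buffer = new byte[recordLength];

    using (var stream = File.Open(path, FileMode.Open, FileAccess.Read))
    {
        while (true)
        {
            // Stream.Read may return fewer bytes than requested, so loop until the
            // buffer holds a whole record or the file ends.
            int offset = 0;
            while (offset < recordLength)
            {
                int read = stream.Read(buffer, offset, recordLength - offset);
                if (read == 0) break; // end of file
                offset += read;
            }
            if (offset < recordLength) break; // no more complete records
            ProcessRecord(buffer);            // hypothetical per-record handler
        }
    }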
You aren't specifically listing the problems you need to overcome. A file can be 100GB and you can have no problems processing it.
If you have to process the file as a whole then that is going to require some creative coding, but if you can simply process sections of the file at a time, then it is relatively easy to move to the location in the file you need to start from, process the data you need to process in chunks, and then close the file.
More information here would certainly be helpful.
What are the main problems you are having at the moment? The big thing to remember is to think in terms of streams - i.e. keep the minimum amount of data in memory that you can. LINQ is excellent at working with sequences (although there are some buffering operations you need to avoid, such as OrderBy).
For example, here's a way of handling simple records from a large file efficiently (note the iterator block).
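The example itself isn't reproduced above; a minimal sketch of such an iterator block, assuming simple line-based records, might look like this. The point is that yield return keeps only one record in memory at a time, so LINQ queries over the sequence stay streaming.

    // Requires: using System.Collections.Generic; using System.IO;
    static IEnumerable<string> ReadLines(string path)
    {
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line; // the caller sees one record at a time; nothing is buffered
            }
        }
    }

    // Usage (streaming LINQ over a huge file), e.g.:
    // int longLines = ReadLines(path).Where(l => l.Length > 80).Count();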
For performing multiple aggregates/analysis over large data from files, consider Push LINQ in MiscUtil.
Can you add more context to the problems you are thinking of?
Expanding on JaredPar's answer.
If the file is a binary file (i.e. ints stored as 4 bytes, fixed-length strings, etc.) you can use the BinaryReader class. That's easier than pulling out n bytes and then trying to interpret them yourself.
Also note that the Read method on System.IO.Stream does not guarantee to fill your buffer. If you ask for 100 bytes it may return fewer than that, even without having reached the end of the file.
The BinaryReader.ReadBytes method will block until it reads the requested number of bytes or reaches the end of the file, whichever comes first.
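A tiny illustration of that difference; path is a placeholder, and with a plain FileStream short reads are rare in practice, but the Stream contract allows them.

    // Requires: using System.IO;
    using (var stream = File.OpenRead(path))
    using (var reader = new BinaryReader(stream))
    {
        byte[] a = new byte[100];
        int got = stream.Read(a, 0, 100); // the contract allows got < 100 even before end of file

        byte[] b = reader.ReadBytes(100); // keeps reading until 100 bytes or end of file;
                                          // b.Length < 100 only at end of file
    }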
Nice collaboration lads :)
Hey Guys, I realize that this post hasn't been touched in a while, but I just wanted to post a site that has the solution to your problem.
http://thedeveloperpage.wordpress.com/c-articles/using-file-streams-to-write-any-size-file-introduction/
Hope it helps!
-CJ