I am trying to let users upload large files. Before I upload a file, I want to chunk it up. Each chunk needs to be a C# object. The reason is logging: it's a long story, but I need to create actual C# objects that represent each file chunk. Regardless, I'm trying the following approach:
public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    List<FileChunk> chunks = new List<FileChunk>();
    if (fileBytes.Length > 0)
    {
        FileChunk chunk = new FileChunk();
        for (int i = 0; i < (fileBytes.Length / 512); i++)
        {
            chunk.Number = (i + 1);
            chunk.Offset = (i * 512);
            chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
            chunks.Add(chunk);
            chunk = new FileChunk();
        }
    }
    return chunks;
}
Unfortunately, this approach seems to be incredibly slow. Does anyone know how I can improve the performance while still creating objects for each chunk?
thank you
I suspect this is going to hurt a little:
chunk.Bytes = fileBytes.Skip(chunk.Offset).Take(512).ToArray();
Try this instead:
byte[] buffer = new byte[512];
Buffer.BlockCopy(fileBytes, chunk.Offset, buffer, 0, 512);
chunk.Bytes = buffer;
(Code not tested)
The reason this code is likely slow is that Skip doesn't do anything special for arrays (though it could). That means every pass through your loop iterates over the first 512*n items in the array, which gives you O(n^2) performance where you should just be seeing O(n).
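For completeness, here's a sketch of the original loop with that change applied (untested; it assumes the same FileChunk class and, like the original, ignores a trailing partial chunk):

public static List<FileChunk> GetAllForFile(byte[] fileBytes)
{
    const int ChunkSize = 512;
    var chunks = new List<FileChunk>();
    for (int i = 0; i < fileBytes.Length / ChunkSize; i++)
    {
        var chunk = new FileChunk();
        chunk.Number = i + 1;
        chunk.Offset = i * ChunkSize;

        // Copy the slice directly instead of walking the array with Skip/Take.
        var buffer = new byte[ChunkSize];
        Buffer.BlockCopy(fileBytes, chunk.Offset, buffer, 0, ChunkSize);
        chunk.Bytes = buffer;

        chunks.Add(chunk);
    }
    return chunks;
}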
Try something like this (untested code):
public static List<FileChunk> GetAllForFile(string fileName)
{
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        int i = 0;
        while (stream.Position < stream.Length)
        {
            int size = (int)Math.Min(512, stream.Length - stream.Position);
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = new byte[size];
            stream.Read(chunk.Bytes, 0, size);
            chunks.Add(chunk);
            i++;
        }
    }
    return chunks;
}
The above code skips several steps in your process, preferring to read the bytes from the file directly.
Note that, if the file is not an even multiple of 512, the last chunk will contain less than 512 bytes.
Same as Robert Harvey's answer, but using a BinaryReader; that way I don't need to specify an offset. If you use a BinaryWriter on the other end to reassemble the file, you won't need the Offset member of FileChunk.
public static List<FileChunk> GetAllForFile(string fileName)
{
    var chunks = new List<FileChunk>();
    using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    using (BinaryReader reader = new BinaryReader(stream))
    {
        int i = 0;
        bool eof = false;
        while (!eof)
        {
            var chunk = new FileChunk();
            chunk.Number = i;
            chunk.Offset = (i * 512);
            chunk.Bytes = reader.ReadBytes(512);
            if (chunk.Bytes.Length > 0) { chunks.Add(chunk); } // don't add an empty trailing chunk
            i++;
            if (chunk.Bytes.Length < 512) { eof = true; }
        }
    }
    return chunks;
}
Have you thought about what you're going to do to compensate for packet loss and data corruption?
Since you mentioned that loading is taking a long time, I would use asynchronous file reading to speed up the loading process. The hard disk is the slowest component of a computer. Google uses asynchronous reads and writes in Google Chrome to improve its load times. I had to do something like this in C# at a previous job.
The idea would be to spawn several asynchronous requests over different parts of the file. Then, as each request completes, take the byte array and create your FileChunk objects 512 bytes at a time. There are several benefits to this:
If you run this in a separate thread, the whole program won't be stuck waiting for the large file to load.
You can process a byte array, creating FileChunk objects, while the hard disk is still fulfilling read requests on other parts of the file.
You will save RAM if you limit the number of pending read requests you can have. This means less paging to the hard disk and more efficient use of the RAM and CPU cache, which speeds up processing further.
You would want to use the following methods in the FileStream class.
[HostProtectionAttribute(SecurityAction.LinkDemand, ExternalThreading = true)]
public virtual IAsyncResult BeginRead(
    byte[] buffer,
    int offset,
    int count,
    AsyncCallback callback,
    Object state
)

public virtual int EndRead(
    IAsyncResult asyncResult
)
Also this is what you will get in the asyncResult:
// Extract the FileStream (state) out of the IAsyncResult object
FileStream fs = (FileStream) ar.AsyncState;
// Get the result
Int32 bytesRead = fs.EndRead(ar);
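Tying it together, a rough sketch of driving the chunking with BeginRead/EndRead could look like the following. This is untested and simplified: ProcessBlock is a hypothetical helper that would slice the buffer into 512-byte FileChunk objects, and a real version would need error handling and probably more than one outstanding buffer.

using System;
using System.IO;
using System.Threading;

class AsyncReadSketch
{
    // Hypothetical helper: slice the block into 512-byte FileChunk objects here.
    static void ProcessBlock(byte[] block, int count) { /* ... */ }

    static void Main()
    {
        var fs = new FileStream("bigfile.dat", FileMode.Open, FileAccess.Read,
                                FileShare.Read, 4096, useAsync: true);
        var buffer = new byte[512 * 128];        // 128 chunks per read request
        var done = new ManualResetEvent(false);

        AsyncCallback onRead = null;
        onRead = ar =>
        {
            var stream = (FileStream)ar.AsyncState; // same cast as above
            int bytesRead = stream.EndRead(ar);
            if (bytesRead > 0)
            {
                ProcessBlock(buffer, bytesRead);    // build FileChunks for this block
                stream.BeginRead(buffer, 0, buffer.Length, onRead, stream); // chain the next read
            }
            else
            {
                stream.Dispose();
                done.Set();
            }
        };

        fs.BeginRead(buffer, 0, buffer.Length, onRead, fs);
        done.WaitOne(); // wait until the chain of reads reaches end of file
    }
}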
Here is some reference material for you to read.
This is a code sample of working with Asynchronous File I/O Models.
This is a MS documentation reference for Asynchronous File I/O.
Related
My program reads x bytes from a file, checks whether they are all zeros, repeats the process for 20,000 files, and keeps a list of the files that have non-zero bytes.
Trying to monitor performance, I made the number of bytes it checks for each file definable (byteSize).
The problem is that the first run of the program it takes ~5 minutes for it to complete (byteSize = 8192), but if I run it again it takes only 10 seconds, even if I close and restart the program, so the only cause that comes to my mind is that the byte array remains in memory.
The BinaryReader is inside a "using" statement, so as far as I know it should close the stream after the loop completes. So why does the byte array remain? How can I delete it? I need to do that to measure actual performance each time I run the program.
byte[] readByte = new byte[byteSize];
for (int i = 0; i < readCycles; i++)
{
    using (BinaryReader reader = new BinaryReader(new FileStream(file, FileMode.Open, FileAccess.Read)))
    {
        reader.BaseStream.Seek(8192 + i * byteSize, SeekOrigin.Begin);
        reader.Read(readByte, 0, byteSize);
    }
    foreach (byte b in readByte)
    {
        if (b != 0)
        {
            allZeros = false;
            break;
        }
        else
            allZeros = true;
    }
    if (allZeros == false) break;
}
This almost certainly has nothing to do with anything .NET is doing - it'll be the file system transparently caching for you.
To test this, change your code to just use a FileStream and simply loop over the file, reading it into a buffer and ignoring the data:
using (var stream = File.OpenRead(...))
{
    var buffer = new byte[16384];
    while (stream.Read(buffer, 0, buffer.Length) > 0)
    {
    }
}
I'm sure you'll see the same result - the first read will be relatively slow, then it'll be very fast.
I have a list of floats to write to a file. The code below does the job, but it is synchronous.
List<float> samples = GetSamples();
using (FileStream stream = File.OpenWrite("somefile.bin"))
using (BinaryWriter binaryWriter = new BinaryWriter(stream, Encoding.Default, true))
{
    foreach (var sample in samples)
    {
        binaryWriter.Write(sample);
    }
}
I want to do the operation asynchronously, but the BinaryWriter does not support async operations, which is understandable since it only writes a few bytes at a time. But the operation is dominated by file I/O, and I think it can and should be asynchronous.
I tried writing to a MemoryStream with the BinaryWriter and, when that finished, copying the MemoryStream to the FileStream with CopyToAsync; however, this caused a performance degradation (in total time) of up to 100% with big files.
How can I convert the whole operation to asynchronous?
Normal write operations usually end up being completed asynchronously anyway. The OS accepts writes immediately into the write cache, and flushes it to disk at some later time. Your application isn't blocked by the actual disk writes.
Of course, if you are writing to a removable drive then write cache is typically disabled and your program will be blocked.
I recommend dramatically reducing the number of operations by transferring a large block at a time. To wit:
1. Allocate a new T[BlockSize] of your desired block size.
2. Allocate a new byte[BlockSize * sizeof(T)].
3. Use List<T>.CopyTo(index, buffer, 0, buffer.Length) to copy a batch out of the list into the T[].
4. Use Buffer.BlockCopy to get the data into the byte[].
5. Write the byte[] to your stream in a single operation.
6. Repeat 3-5 until you reach the end of the list. Be careful with the final batch, which may be a partial block (see the sketch below).
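A minimal sketch of those steps, reusing the samples list from the question and assuming it runs inside an async method, might look like this (opening the FileStream with useAsync: true is my choice, so the writes are genuinely asynchronous; untested):

const int BlockSize = 4096;                          // floats per block
float[] block = new float[BlockSize];                // step 1
byte[] bytes = new byte[BlockSize * sizeof(float)];  // step 2

using (var stream = new FileStream("somefile.bin", FileMode.Create, FileAccess.Write,
                                   FileShare.None, 4096, useAsync: true))
{
    for (int index = 0; index < samples.Count; index += BlockSize)
    {
        int count = Math.Min(BlockSize, samples.Count - index);      // final batch may be partial
        samples.CopyTo(index, block, 0, count);                      // step 3
        Buffer.BlockCopy(block, 0, bytes, 0, count * sizeof(float)); // step 4
        await stream.WriteAsync(bytes, 0, count * sizeof(float));    // step 5
    }
}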
Your memory stream approach makes sense; just make sure to write in batches rather than waiting for the memory stream to grow to the full size of the file and then writing it all at once.
Something like this should work fine:
var data = new float[10 * 1024];
var helperBuffer = new byte[4096];
using (var fs = File.Create(@"D:\Temp.bin"))
using (var ms = new MemoryStream(4096))
using (var bw = new BinaryWriter(ms))
{
    var iteration = 0;
    foreach (var sample in data)
    {
        bw.Write(sample);
        iteration++;
        if (iteration == 1024)
        {
            iteration = 0;
            ms.Position = 0;
            ms.Read(helperBuffer, 0, 1024 * 4);
            await fs.WriteAsync(helperBuffer, 0, 1024 * 4).ConfigureAwait(false);
            ms.Position = 0; // rewind so the next batch overwrites the same 4 KB
        }
    }
}
This is just sample code - make sure to handle errors properly etc.
Sometimes, these helper classes are anything but helpful.
Try this:
List<float> samples = GetSamples();
using (FileStream stream = File.OpenWrite("somefile.bin"))
{
    foreach (var sample in samples)
    {
        await stream.WriteAsync(BitConverter.GetBytes(sample), 0, 4);
    }
}
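Note that File.OpenWrite does not open the file for overlapped I/O, so WriteAsync on that stream may not be truly asynchronous at the OS level. If you want that, a variant like this (untested) does the same job but opens the stream with useAsync: true:

List<float> samples = GetSamples();
using (var stream = new FileStream("somefile.bin", FileMode.Create, FileAccess.Write,
                                   FileShare.None, 4096, useAsync: true))
{
    foreach (var sample in samples)
    {
        await stream.WriteAsync(BitConverter.GetBytes(sample), 0, 4);
    }
}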
OK, I made a C# winform app, it's a File_Splitter_Joiner.
You just give it a file and it splits it for you to a number of pieces you specify.
The splitting is done in a separate thread.
Everything was working fine until I sliced a 1 GB file!
In the task manager, I saw that my program started consuming 1 gigabyte of memory and my computer almost died!
Not just that: when slicing finished, the memory consumption didn't change!
(I don't know if this means the garbage collector isn't working, although I'm pretty sure I lost all references to whatever was holding the big data chunks, so it should work.)
Here's the Splitter constructor (just to give you a better idea):
public FileSplitter(string FileToSplitPath, string PiecesFolder, int NumberOfPieces, int PieceSize, SplittingMethod Method)
{
    FileToSplitInfo = new FileInfo(FileToSplitPath);
    this.FileToSplitPath = FileToSplitPath;
    this.PiecesFolder = PiecesFolder;
    this.NumberOfPieces = NumberOfPieces;
    this.PieceSize = PieceSize;
    this.Method = Method;
    SplitterThread = new Thread(Split);
}
And here is the method that did the actual splitting:
(I'm still a newbie, so what you're about to see 'may not' be done in the best way ever possible, I'm just learning here)
private void Split()
{
    int remainingSize = 0;
    int remainingPos = -1;
    bool isNumberOfPiecesEqualInSize = true;
    int fileSize = (int)FileToSplitInfo.Length; // FileToSplitInfo is a FileInfo object
    if (fileSize % PieceSize != 0)
    {
        remainingSize = fileSize % PieceSize;
        remainingPos = fileSize - remainingSize;
        isNumberOfPiecesEqualInSize = false;
    }
    byte[] fileBytes = new byte[fileSize];
    var _fs = File.Open(FileToSplitPath, FileMode.Open);
    BinaryReader br = new BinaryReader(_fs);
    br.Read(fileBytes, 0, fileSize);
    br.Close();
    _fs.Close();
    for (int i = 0, index = 0; i < NumberOfPieces; i++, index += PieceSize)
    {
        var fs = File.Create(PiecesFolder + "\\" + Path.GetFileName(FileToSplitPath) + "." + (i + 1).ToString());
        var bw = new BinaryWriter(fs);
        bw.Write(fileBytes, index, PieceSize);
        if (i == NumberOfPieces - 1 && !isNumberOfPiecesEqualInSize && Method == SplittingMethod.NumberOfPieces)
            bw.Write(fileBytes, remainingPos, remainingSize);
        bw.Close();
        fs.Close();
    }
    MessageBox.Show("File has been splitted successfully!");
    SplitterThread.Abort();
}
Now, before reading the bytes of the file via a BinaryReader, I was reading them via the File.ReadAllBytes method. It worked fine with small files, but I got an OutOfMemory exception when I dealt with our big guy; I don't know why I didn't get that exception when I read the bytes via a BinaryReader.
(That was an in-between question.)
So, the main question is: how can I load big files (gigs speaking) in a way that doesn't consume so much memory? I mean, how can I make my program not consume all that memory?
And how can I free the used memory after the splitting is done?
(I actually used
bw.Dispose(); fs.Dispose();
instead of
bw.Close(); fs.Close();
and it was the same.)
I know the question might not make sense, because when we load something it goes into our memory, not somewhere else. But the reason I asked it like that is that I used another splitting/joining program (not written by me) just to see whether it had the same problem. I loaded the file, the program consumed about 5 megs of RAM, and when I started splitting it used about 10 megs!!
Now that is a VERY big difference... Probably that app was written in C/C++...
So to sum up, who sucks? Is it my code, and if so how can I fix it? Or is it C# when it comes to performance?
Thank you SOOO much for anything you could hook me up with :)
The following 2 lines will kill you:
int fileSize = (int)FileToSplitInfo.Length; // a FileInfo object
...
byte[] fileBytes = new byte[fileSize];
Your code will fail when the size is over Int32.MaxValue. It's unnecessary anyway; just use long fileSize = FileToSplitInfo.Length;
Even with that correction, the code will fail when there is not enough contiguous memory. Fragmentation (of the LOH) will bring you down sooner or later.
You allocate memory for the entire file, but you only need PieceSize bytes at a time.
You don't even need to know the fileSize, just
byte[] pieceBuffer = new byte[PieceSize];
while (true)
{
    int nBytes = br.Read(pieceBuffer, 0, pieceBuffer.Length);
    if (nBytes == 0)
        break;
    // write this piece, the length is nBytes
}
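Filling in the commented-out step, a rough sketch of the whole streaming split might look like this (untested; it reuses the PieceSize, PiecesFolder, and FileToSplitPath members from the question and writes one output file per piece):

byte[] pieceBuffer = new byte[PieceSize];
using (var input = File.OpenRead(FileToSplitPath))
{
    int pieceNumber = 0;
    while (true)
    {
        int nBytes = input.Read(pieceBuffer, 0, pieceBuffer.Length);
        if (nBytes == 0)
            break;

        pieceNumber++;
        string piecePath = Path.Combine(PiecesFolder,
            Path.GetFileName(FileToSplitPath) + "." + pieceNumber);

        using (var output = File.Create(piecePath))
        {
            output.Write(pieceBuffer, 0, nBytes); // write only what was actually read
        }
    }
}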
There are different aspects that can be made better:
If you are working with a big file, why read it all into an array first and then write it out to other files? Just write to the new file while you read from the old one.
Use using to guarantee disposal of the streams in any case, whether an exception is thrown or not.
If you begin to work with really big files, like 1 GB or even more, I would recommend looking at memory-mapped files. You get a big memory-consumption benefit, at some cost in performance (see the sketch below).
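For what it's worth, a minimal sketch of walking a large file through memory-mapped views might look like this (untested; the path and piece size are made up):

using System;
using System.IO;
using System.IO.MemoryMappedFiles;

long pieceSize = 64 * 1024;
var fileInfo = new FileInfo(@"C:\temp\bigfile.bin");

using (var mmf = MemoryMappedFile.CreateFromFile(fileInfo.FullName, FileMode.Open))
{
    for (long offset = 0; offset < fileInfo.Length; offset += pieceSize)
    {
        long size = Math.Min(pieceSize, fileInfo.Length - offset);

        // Each view maps only this piece of the file into the address space.
        using (var view = mmf.CreateViewStream(offset, size, MemoryMappedFileAccess.Read))
        {
            var piece = new byte[(int)size];
            view.Read(piece, 0, piece.Length);
            // write the piece out, hash it, etc.
        }
    }
}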
I'm trying to design a simple application to be used for calculating a file's CRC32/md5/sha1/sha256/sha384/sha512, and I've run into a bit of a roadblock. This is being done in C#.
I would like to be able to do this as efficiently as possible, so my original thought was to read the file into a MemoryStream first before processing, but I soon found out that very large files cause me to run out of memory very quickly. So it would seem that I have to use a FileStream instead. The problem, as I see it, is that only one hash function can be run at a time, and doing so with a FileStream will take a while for each hash to complete.
How might I go about reading a small bit of a file into memory, processing it with all 6 algorithms, and then moving on to another chunk... Or does hashing not work that way?
This was my original attempt at reading a file into memory. It failed when I tried to read a CD image into memory prior to running the hashing algorithms on the memorystream:
private void ReadToEndOfFile(string filename)
{
    if (File.Exists(filename))
    {
        FileInfo fi = new FileInfo(filename);
        FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
        byte[] buffer = new byte[16 * 1024];
        //double step = Math.Floor((double)fi.Length / (double)100);
        this.toolStripStatusLabel1.Text = "Reading File...";
        this.toolStripProgressBar1.Maximum = (int)(fs.Length / buffer.Length);
        this.toolStripProgressBar1.Value = 0;
        using (MemoryStream ms = new MemoryStream())
        {
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                ms.Write(buffer, 0, read);
                this.toolStripProgressBar1.Value += 1;
            }
            _ms = ms;
        }
    }
}
You're most of the way there, you just don't need to read the whole thing into memory at once.
All of the hashes in .NET derive from the HashAlgorithm class. It has two methods of interest: TransformBlock and TransformFinalBlock. So you should be able to read a chunk of your file, feed it into the TransformBlock method of whichever hashes you want to use, and then move on to the next chunk. Just remember to call TransformFinalBlock for your last chunk from the file, as that finalizes the computation so you can read the byte array containing the hash from the Hash property.
For now, I would just do each hash one at a time until it's working, then worry about running the hashes concurrently (using something like the Task Parallel Library).
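A rough sketch of that idea, with the algorithms hard-coded and no error handling, might look like this (untested):

using System.IO;
using System.Security.Cryptography;

// CRC32 is not built into the framework, so it is omitted here.
var algorithms = new HashAlgorithm[]
{
    MD5.Create(), SHA1.Create(), SHA256.Create(), SHA384.Create(), SHA512.Create()
};

var buffer = new byte[64 * 1024];
using (var fs = File.OpenRead("image.iso")) // file name is just an example
{
    int read;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        foreach (var algorithm in algorithms)
            algorithm.TransformBlock(buffer, 0, read, null, 0);
    }
}

// Finalize each hash; after this the Hash property holds the result.
foreach (var algorithm in algorithms)
{
    algorithm.TransformFinalBlock(new byte[0], 0, 0);
    byte[] hash = algorithm.Hash;
    // convert to hex, display, etc.
}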
Hash algorithms are designed in a way that you can calculate the hash value incrementally. You can find a C#/.NET example for that here. You can easily modify the provided code to update multiple hash algorithm instances in each step.
This might be a great opportunity to get your feet wet with the TPL data flow objects. Read the file in one thread and post the data to a BroadcastBlock<T>. The BroadcastBlock<T> will be linked to 6 different ActionBlock<T> instances. Each ActionBlock<T> will correspond to one of your 6 hash strategies.
var broadcast = new BroadcastBlock<byte[]>(x => x);
var strategy1 = new ActionBlock<byte[]>(input => DoHash(input, SHA1.Create()));
var strategy2 = new ActionBlock<byte[]>(input => DoHash(input, MD5.Create()));
// Create the other 4 strategies.

broadcast.LinkTo(strategy1);
broadcast.LinkTo(strategy2);
// Link the other 4.

using (var fs = File.Open(@"yourfile.txt", FileMode.Open, FileAccess.Read))
using (var br = new BinaryReader(fs))
{
    // PeekChar can choke on binary data, so check the stream position instead.
    while (fs.Position < fs.Length)
    {
        broadcast.Post(br.ReadBytes(1024 * 16));
    }
}
The BroadcastBlock<T> will forward each chunk of data to all linked ActionBlock<T> instances.
Since your question focused more on how to get this all to occur concurrently I will leave the implementation of DoHash up to you.
private void DoHash(byte[] input, HashAlgorithm algorithm)
{
    // You will need to implement this.
}
I have a huge file in which I have to insert certain characters at a specific location. What is the easiest way to do that in C# without rewriting the whole file?
Filesystems do not support "inserting" data in the middle of a file. If you really have a need for a file that can be written to in a sorted kind of way, I suggest you look into using an embedded database.
You might want to take a look at SQLite or BerkeleyDB.
Then again, you might be working with a text file or a legacy binary file. In that case your only option is to rewrite the file, at least from the insertion point up to the end.
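If you do end up rewriting, a simple (if not fast) sketch of inserting by rewriting into a temporary file could look like this; the path, offset, and inserted text are made up, and for a text file you would insert the encoded bytes:

using System;
using System.IO;
using System.Text;

string path = @"C:\temp\huge.dat";   // example path
string tempPath = path + ".tmp";
long insertAt = 1024;                // example insertion offset
byte[] toInsert = Encoding.UTF8.GetBytes("inserted text");

using (var source = File.OpenRead(path))
using (var target = File.Create(tempPath))
{
    var buffer = new byte[64 * 1024];
    int read;

    // 1. Copy everything before the insertion point.
    long remaining = insertAt;
    while (remaining > 0 &&
           (read = source.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining))) > 0)
    {
        target.Write(buffer, 0, read);
        remaining -= read;
    }

    // 2. Write the new data.
    target.Write(toInsert, 0, toInsert.Length);

    // 3. Copy the rest of the original file.
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        target.Write(buffer, 0, read);
}

// 4. Replace the original with the rewritten file.
File.Delete(path);
File.Move(tempPath, path);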
I would look at the FileStream class to do random I/O in C#.
You will probably need to rewrite the file from the point where you insert the changes to the end. You might be best off always writing to the end of the file and using tools such as sort and grep to get the data out in the desired order. I am assuming you are talking about a text file here, not a binary file.
There is no way to insert characters into a file without rewriting it. In C# it can be done with any of the Stream classes. If the files are huge, I would recommend using the GNU Core Utils from your C# code. They are the fastest. I used to handle very large text files with the core utils (of sizes 4 GB, 8 GB or more). Commands like head, tail, split, csplit, cat, shuf, shred, and uniq really help a lot in text manipulation.
For example, if you need to put some chars into a 2 GB file, you can use split -b BYTECOUNT, put the output into a file, append the new text to it, and then add the rest of the content. This should supposedly be faster than any other way.
Hope it works. Give it a try.
You can use random access to write to specific locations of a file, but you won't be able to do it in text format; you'll have to work with bytes directly.
If you know the specific location to which you want to write the new data, use the BinaryWriter class:
using (BinaryWriter bw = new BinaryWriter (File.Open (strFile, FileMode.Open)))
{
    string strNewData = "this is some new data";
    byte[] byteNewData = new byte[strNewData.Length];

    // copy contents of string to byte array
    for (var i = 0; i < strNewData.Length; i++)
    {
        byteNewData[i] = Convert.ToByte (strNewData[i]);
    }

    // write new data to file
    bw.Seek (15, SeekOrigin.Begin); // seek to position 15
    bw.Write (byteNewData, 0, byteNewData.Length);
}
You may take a look at this project:
Win Data Inspector
Basically, the code is the following:
// this.Stream is the stream in which you insert data
{
    long position = this.Stream.Position;
    long length = this.Stream.Length;
    MemoryStream ms = new MemoryStream();
    this.Stream.Position = 0;
    DIUtils.CopyStream(this.Stream, ms, position, progressCallback);
    ms.Write(data, 0, data.Length);
    this.Stream.Position = position;
    DIUtils.CopyStream(this.Stream, ms, this.Stream.Length - position, progressCallback);
    this.Stream = ms;
}
#region Delegates
public delegate void ProgressCallback(long position, long total);
#endregion
DIUtils.cs
public static void CopyStream(Stream input, Stream output, long length, DataInspector.ProgressCallback callback)
{
    long totalsize = input.Length;
    long byteswritten = 0;
    const int size = 32768;
    byte[] buffer = new byte[size];
    int read;
    int readlen = length < size ? (int)length : size;
    while (length > 0 && (read = input.Read(buffer, 0, readlen)) > 0)
    {
        output.Write(buffer, 0, read);
        byteswritten += read;
        length -= read;
        readlen = length < size ? (int)length : size;
        if (callback != null)
            callback(byteswritten, totalsize);
    }
}
Depending on the scope of your project, you may want to load each line of text from your file into a table-like data structure. Much like a database table, you could then insert at a specific location at any given moment without having to read in, modify, and write out the entire text file each time. This is given the fact that your data is "huge", as you put it. You would still recreate the file, but at least you create a scalable solution in this manner.
It may be "possible", depending on how the filesystem stores files, to quickly insert (i.e., add additional) bytes in the middle. If it is remotely possible, it may only be feasible to do so a full block at a time, and only by either doing low-level modification of the filesystem itself or by using a filesystem-specific interface.
Filesystems are not generally designed for this operation. If you need to quickly do inserts you really need a more general database.
Depending on your application a middle ground would be to bunch your inserts together, so you only do one rewrite of the file rather than twenty.
You will always have to rewrite the remaining bytes from the insertion point. If this point is at 0, then you will rewrite the whole file. If it is 10 bytes before the last byte, then you will rewrite the last 10 bytes.
In any case there is no function to directly support "insert to file". But the following code can do it accurately.
var sw = new Stopwatch();
var ab = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ";

// create
var fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
fs.Seek(0, SeekOrigin.Begin);
for (var i = 0; i < 40000000; i++) fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
fs.Dispose();

// insert
fs = new FileStream(@"d:\test.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite, 262144, FileOptions.None);
sw.Restart();
byte[] b = new byte[262144];
long target = 10, offset = fs.Length - b.Length;
while (offset != 0)
{
    if (offset < 0)
    {
        offset = b.Length - target;
        b = new byte[offset];
    }
    fs.Position = offset; fs.Read(b, 0, b.Length);
    fs.Position = offset + target; fs.Write(b, 0, b.Length);
    offset -= b.Length;
}
fs.Position = target; fs.Write(ASCIIEncoding.ASCII.GetBytes(ab), 0, ab.Length);
sw.Stop();
Console.WriteLine("{0} ms", sw.Elapsed.TotalMilliseconds);
To gain better performance for file I/O, play with "magic" power-of-two numbers like in the code above. The creation of the file uses a buffer of 262144 bytes (256 KB), which does not help at all. The same buffer for the insertion does the "performance job", as you can see from the Stopwatch results if you run the code. A draft test on my PC gave the following results:
13628.8 ms for creation and 3597.0971 ms for insertion.
Note that the target byte for insertion is 10, meaning that almost the whole file was rewritten.
Why don't you put a pointer to the end of the file (literally, a value four bytes past the current size of the file) at the insertion point, and then, at the end of the file, write the length of the inserted data followed by the data you want to insert? For example, if you have a string in the middle of the file and you want to insert a few characters in the middle of that string, you can overwrite four characters of the string with a pointer to the end of the file, and then write those four characters at the end together with the characters you originally wanted to insert. It's all about how you order the data. Of course, you can only do this if you are writing the whole file format yourself, i.e., you are not using other codecs.