How to access individual items in a serialized array? - C#

I want to store an array of timestamps in a binary flat file. One of my requirements is that I can later access individual timestamps for efficient queries without having to read and deserialize the entire array first (I use a binary search algorithm that finds the file positions of a start and an end timestamp, which in turn determine which bytes between them are read and deserialized, because the binary file can be multiple gigabytes in size).
Obviously, the simple but slow way is to use BitConverter.GetBytes(timestamp) to convert each timestamp to bytes and store them in the file. I can then access each item in the file individually and use my custom binary search algorithm to find the timestamp that matches the desired one.
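For illustration, here is a minimal sketch of the kind of lookup I mean, over fixed-width 8-byte timestamps stored in sorted order (the method name and the tick representation are just placeholders):
static long FindPosition(FileStream file, long targetTicks)
{
    var buffer = new byte[sizeof(long)];
    long lo = 0, hi = file.Length / sizeof(long) - 1;
    while (lo <= hi)
    {
        long mid = lo + (hi - lo) / 2;
        file.Seek(mid * sizeof(long), SeekOrigin.Begin);
        file.Read(buffer, 0, buffer.Length);
        long ticks = BitConverter.ToInt64(buffer, 0);
        if (ticks < targetTicks) lo = mid + 1;
        else if (ticks > targetTicks) hi = mid - 1;
        else return mid * sizeof(long); // byte offset of the match
    }
    return lo * sizeof(long); // insertion point if not found
}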
However, I found that BinaryFormatter is incredibly efficient (multiple times faster than protobuf-net or any other serializer I tried) at serializing and deserializing value-type arrays, so I tried serializing an array of timestamps into binary form. Apparently, though, that prevents me from accessing individual timestamps in the file without first deserializing the entire array.
Is there a way to still access individual items in binary form after having serialized an entire array of items via BinaryFormatter?
Here is a code snippet that demonstrates what I mean:
var sampleArray = new int[5] { 1, 2, 3, 4, 5 };
var serializedSingleValueArray = sampleArray.SelectMany(x => BitConverter.GetBytes(x)).ToArray();
var serializedArrayofSingleValues = Serializers.BinarySerializeToArray(sampleArray);
var deserializesToCorrectValue = BitConverter.ToInt32(serializedSingleValueArray, 0); // value = 1 (ok)
var wrongDeserialization = BitConverter.ToInt32(serializedArrayofSingleValues, 0); // value = 256 (???)
Here is the serialization function:
public static byte[] BinarySerializeToArray(object toSerialize)
{
    // Formatter is a shared BinaryFormatter instance
    using (var stream = new MemoryStream())
    {
        Formatter.Serialize(stream, toSerialize);
        return stream.ToArray();
    }
}
Edit: I do not need to concern myself with memory consumption or file size; those are currently far from being the bottleneck. The speed of serialization and deserialization is the bottleneck for me with multi-gigabyte binary files and hence very large arrays of primitives.

If your problem is just "how to convert an array of structs to byte[]", you have other options than BitConverter. BitConverter is for single values; the Buffer class is for arrays.
double[] d = new double[100];
d[4] = 1235;
d[8] = 5678;
byte[] b = new byte[800]; // 100 doubles * 8 bytes each
Buffer.BlockCopy(d, 0, b, 0, d.Length * sizeof(double));
// just to test it works
double[] d1 = new double[100];
Buffer.BlockCopy(b, 0, d1, 0, d.Length * sizeof(double));
This does a byte-level copy without converting anything and without iterating over items.
You can write this byte array directly to your stream (not a StreamWriter, not a Formatter):
stream.Write(b, 0, 800);
That's definitely the fastest way to write to a file, even though it involves a complete copy; any other conceivable method will probably also read each item and store it somewhere before it goes to the file.
If this is the only thing you write to your file, you don't need to write the array length into the file; you can use the file length for that.
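For example (a sketch; path is assumed to hold the file path):
// Number of doubles in the file, derived from the file length alone.
long doubleCount = new FileInfo(path).Length / sizeof(double);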
To read the 100th double value in the file:
file.Seek(100 * sizeof(double), SeekOrigin.Begin);
byte[] tmp = new byte[8];
file.Read(tmp, 0, 8);
double value = BitConverter.ToDouble(tmp, 0);
Here, for a single value, you can use BitConverter.
This is the solution for .NET Framework, C# <= 7.0. For .NET Standard/.NET Core, C# 8.0, you have more options with Span<T>, which gives you access to the underlying memory without copying the data.
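For example, MemoryMarshal.Cast can reinterpret a byte buffer as doubles without copying (a sketch; the path variable is an assumption):
// requires System.Runtime.InteropServices
byte[] bytes = File.ReadAllBytes(path);
ReadOnlySpan<double> doubles = MemoryMarshal.Cast<byte, double>(bytes);
double fifth = doubles[4]; // no copy, just another view of the same memory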

BitConverter is not a "slow" option; it's just a way to convert everything to a byte[] sequence. It is actually not costly, since it merely interprets the memory differently. Compute the position in the file, load 8 bytes, convert them to a DateTime, and you are done.
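A sketch of that lookup, assuming each timestamp was written as the 8-byte result of DateTime.ToBinary() (path and index are placeholders):
using (var file = File.OpenRead(path))
{
    file.Seek(index * 8, SeekOrigin.Begin); // index of the desired timestamp
    var tmp = new byte[8];
    file.Read(tmp, 0, 8);
    DateTime timestamp = DateTime.FromBinary(BitConverter.ToInt64(tmp, 0));
}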
You should do this only with simply structured files, and with simply structured files you don't need a binary formatter. Just load/save your one array to one file; that way you can be sure your file positions can be computed. In other words: save your array yourself, date by date, and then you can also load it date by date. Writing with one processing style and reading with another is always a bad idea.

Related

How to import and read large binary file data in C#?

I have a large binary file that contains different data types. I can access single records in the file, but I am not sure how to loop over the binary values and load them into a memory stream byte by byte. I have been using BinaryReader:
BinaryReader binReader = new BinaryReader(File.Open(fileName, FileMode.Open));
Encoding ascii = Encoding.ASCII;
string authorName = binReader.ReadString();
Console.WriteLine(authorName);
Console.ReadLine();
But this won't work since I have a large file with different data types. Put simply, I need to read the file byte by byte and then interpret that data, whether it's a string or whatever else. I would appreciate any thoughts that can help.
This will very much depend on what format the file is in. Each byte in the file might represent different things, or it might just represent values from a large array, or some mix of the two.
You need to know what the format looks like to be able to read it, since binary files are not self-descriptive. Reading a simple object might look like
var authorName = binReader.ReadString();
var publishDate = DateTime.FromBinary(binReader.ReadInt64());
...
If you have a list of items it is common to use a length prefix. Something like
var numItems = binReader.ReadInt32();
for (int i = 0; i < numItems; i++)
{
    var title = binReader.ReadString();
    ...
}
You would then typically create one or more objects from the data that can be used in the rest of the application. I.e.
new Bibliography(authorName, publishDate, books);
If this is a format you do not control, I hope you have a detailed specification. Otherwise this is kind of a lost cause for anything but the kludgiest solutions.
If there is more data than can fit in memory you need some kind of streaming mechanism. I.e. read one item, do some processing of the item, save the result, read the next item, etc.
If you do control the format I would suggest alternatives that are easier to manage. I have used protobuf-net, and I find it quite easy to use, but there are other alternatives. The common way to use these kinds of libraries is to create a class for the data and add attributes to the fields that should be stored. The library can manage serialization/deserialization automatically, and usually handles things like inheritance and changes to the format in an easy way.
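For illustration, a minimal protobuf-net contract might look like this (the class and its fields are hypothetical, mirroring the bibliography example above):
using ProtoBuf;

[ProtoContract]
public class Bibliography
{
    [ProtoMember(1)] public string AuthorName { get; set; }
    [ProtoMember(2)] public DateTime PublishDate { get; set; }
    [ProtoMember(3)] public List<string> Books { get; set; }
}
// Serialize and deserialize through a stream:
// Serializer.Serialize(stream, bibliography);
// var restored = Serializer.Deserialize<Bibliography>(stream);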
Here's a simple bit of code that shows the most basic way of doing it.
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace binary_read
{
    class Program
    {
        private static readonly int bufferSize = 1024;

        static async Task Main(string[] args)
        {
            var bytesRead = 0;
            var totalBytes = 0;
            using (var stream = File.OpenRead(args.First()))
            {
                do
                {
                    var buffer = new byte[bufferSize];
                    bytesRead = await stream.ReadAsync(buffer, 0, bufferSize);
                    totalBytes += bytesRead;
                    // Process buffer
                } while (bytesRead > 0);
                Console.WriteLine($"Processed {totalBytes} bytes.");
            }
        }
    }
}
The main bit to take note of is within the using block.
Firstly, when working with files/streams/sockets it's best to use using if possible to deterministically clean up after yourself.
Then it's really just a matter of calling Read/ReadAsync on the stream if you're just after the raw data. However there are various 'readers' that provide an abstraction to make working with certain formats easier.
So if you know that you're going to be reading ints, doubles and strings, then you can use the BinaryReader and its ReadInt32/ReadInt64, ReadDouble and ReadString methods.
If you're reading into a struct, then you can read the properties in a loop as suggested by #JonasH above. Or use the method in this answer.
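For example, reading a sequence of fixed-layout records in a loop might look like this (a sketch; the field types and path are assumptions):
using (var reader = new BinaryReader(File.OpenRead(path)))
{
    while (reader.BaseStream.Position < reader.BaseStream.Length)
    {
        var timestamp = reader.ReadInt64();
        var value = reader.ReadDouble();
        // ... process the record ...
    }
}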

loop for reading different data types & sizes off very large byte array from file

I have a raw byte stream stored on a file (rawbytes.txt) that I need to parse and output to a CSV-style text file.
The input of raw bytes (when read as characters/long/int etc.) looks something like this:
A2401028475764B241102847576511001200C...
Parsed, it should look like:
OutputA.txt
(Field1,Field2,Field3) - heading
A,240,1028475764
OutputB.txt
(Field1,Field2,Field3,Field4,Field5) - heading
B,241,1028475765,1100,1200
OutputC.txt
C,...//and so on
Essentially, it's a hex-dump-style input of bytes that is continuous without any line terminators or gaps between data that needs to be parsed. The data, as seen above, consists of different data types one after the other.
Here's a snippet of my code. Because no field contains a comma, there is no need for quoting (i.e. a CSV wrapper), so I'm simply using TextWriter to create the CSV-style text file as follows:
if (File.Exists(fileName))
{
    using (BinaryReader reader = new BinaryReader(File.Open(fileName, FileMode.Open)))
    {
        inputCharIdentifier = reader.ReadChar();
        switch (inputCharIdentifier)
        {
            case 'A':
                field1 = reader.ReadUInt64();
                field2 = reader.ReadUInt64();
                field3 = reader.ReadChars(10);
                string strtmp = new string(field3);
                //and so on
                using (TextWriter writer = File.AppendText("outputA.txt"))
                {
                    writer.WriteLine(field1 + "," + field2 + "," + strtmp);
                }
                break;
            case 'B':
                //code...
My question is simple: how do I use a loop to read through the entire file? Generally it exceeds 1 GB (which rules out File.ReadAllBytes and the methods suggested at Best way to read a large file into a byte array in C#?). I considered using a while loop, but PeekChar is not suitable here. Also, cases A, B and so on have different-sized input; in other words, A might be 40 bytes total while B is 50 bytes. So a fixed-size buffer, say inputBuf[1000], or [50] for instance (if they were all the same size), wouldn't work well either, AFAIK.
Any suggestions? I'm relatively new to C# (2 months) so please be gentle.
You could read the file byte by byte, appending each byte to a currentBlock byte array until you find the next block. If the byte identifies a new block, you can parse currentBlock using your switch/case trick and then start a new currentBlock with the character just read.
This approach works even if the identifier of the next block is longer than one byte; in that case you just parse currentBlock[0, currentBlock.Length - lengthOfNextIdInBytes]. In other words, you read a little too much, but you parse only what is needed and use what is left as the base for the next currentBlock.
If you want more speed you can read the file in chunks of X bytes, but apply the same logic.
You said "The issue is that the data is not 100% kosher - i.e. there are situations where I need to separately deal with the possibility that the character I expect to identify each block is not in the right place." Building a currentBlock should still work, though. The code will surely have some complications, maybe something like a nextBlock, but I'm guessing here without knowing what incorrect data you have to deal with.

Data from byte array

I'm trying to read the bytes in the stream at each frame. I want to be able to read the position and timestamp information stored in a file I have created. The stream is a stream of recorded skeleton data, in an encoded binary format:
Stream recordStream;
byte[] results;
using (FileStream SourceStream = File.Open(@".....\Stream01.recorded", FileMode.Open))
{
    if (SourceStream.CanRead)
    {
        results = new byte[recordStream.Length];
        SourceStream.Read(results, 0, (int)recordStream.Length);
    }
}
The file should be read, and the Read method should read the current sequence of bytes before advancing the position in the stream. Is there a way to pull out the data I want (position and timestamp) and save it in separate variables before the position advances? Would using a BinaryReader give me the ability to do this?
BinaryReader br1 = new BinaryReader(recordStream);
I have saved the file as .recorded. I have also saved it as .txt to see what the file contains, but since it is encoded, it is not readable.
Update:
I tried running the code with breakpoints to see if it enters the function with my BinaryReader, and it crashes with "ArgumentException was unhandled: Stream was not readable" on the BinaryReader declaration and initialization:
BinaryReader br1 = new BinaryReader(recordStream);
The file type was .recorded.
You did not provide any information about the format of the data you are trying to read.
However, using the BinaryReader is exactly what you need to do.
It exposes methods to read data from the stream and convert them to various types.
Consider the following example:
var filename = "pathtoyourfile";
using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var x = reader.ReadByte();
    var y = reader.ReadInt16();
    var z = reader.ReadBytes(10);
}
It really depends on the format of your data though.
Update
Even though I feel I've already provided all the information you need, let's use your data.
You say each record in your data starts with
[long: timestamp][int: framenumber]
using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var timestamp = reader.ReadInt64();
    var frameNumber = reader.ReadInt32();
    // At this point you have the timestamp and the frame number.
    // You can now do whatever you want with them and decide whether or not
    // to continue; after that you just continue reading.
}
How you continue reading depends on the format of the remaining part of the records.
If all fields in a record have a specific length, then you either (depending on the choice you made knowing the values of the timestamp and the frame number) continue reading all the fields of that record, or you simply advance to the position in the stream that contains the next record. For example, if each record is 100 bytes long and you want to skip the rest of this record after you got the first two fields:
stream.Seek(88, SeekOrigin.Current);
// 88 because the first two fields take 12 bytes -> (100 - (8 + 4))
If the records have variable lengths the solution is similar, but you'll have to take into account the lengths of the various fields (which should be defined by length fields preceding the variable-length fields).
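For instance, if the variable part carries a length prefix, skipping a record might look like this (a sketch, reusing the reader from above):
long timestamp = reader.ReadInt64();
int frameNumber = reader.ReadInt32();
int payloadLength = reader.ReadInt32();                     // length prefix of the variable part
reader.BaseStream.Seek(payloadLength, SeekOrigin.Current);  // skip the payload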
As for knowing whether the first 8 bytes really do represent a timestamp, there's no real way of knowing for sure... remember, in the end the stream just contains a series of individual bytes that have no meaning whatsoever except for the meaning given to them by your file format. Either you have to revise the file format, or you can try checking whether the value of 'timestamp' in the example above even makes sense.
Is this a file format you have defined yourself? If so, perhaps you are making it too complicated and might want to look at solutions such as Google Protocol Buffers or Apache Thrift.
If this is still not what you are looking for, you will have to redefine your question.
Based on your comments:
You need to know the exact definition of the entire file. You create a struct based on this file format (note the explicit layout attribute, which FieldOffset requires):
[StructLayout(LayoutKind.Explicit)]
struct YourFileFormat
{
    [FieldOffset(0)]
    public long Timestamp;
    [FieldOffset(8)]
    public int FrameNumber;
    [FieldOffset(12)]
    //.. etc..
}
Then, using a BinaryReader, you can either read each field individually for each frame:
// assume br is an instantiated BinaryReader..
YourFileFormat file = new YourFileFormat();
file.Timestamp = br.ReadInt64();
file.FrameNumber = br.ReadInt32();
// etc..
Or, you can read the entire file in and have the marshalling classes copy everything into the struct for you:
byte[] fileContent = br.ReadBytes(Marshal.SizeOf(typeof(YourFileFormat))); // sizeof(YourFileFormat) would require an unsafe context
GCHandle gcHandle = GCHandle.Alloc(fileContent, GCHandleType.Pinned); // or pin it via the "fixed" keyword in an unsafe context
file = (YourFileFormat)Marshal.PtrToStructure(gcHandle.AddrOfPinnedObject(), typeof(YourFileFormat));
gcHandle.Free();
However, this assumes you know the exact size of the file. With this method, each frame (assuming you know how many there are) would have to be a fixed-size array within this struct for it to work.
Bottom line: unless you know the size of what you want to skip, you can't hope to get the data you require from the file.

C# - Creating byte array of unknown size?

I'm trying to create a class to manage the opening of a certain file. I would like one of the properties to be a byte array of the file, but I don't know how big the file is going to be. I tried declaring the byte array as:
public byte[] file;
...but it won't allow me to set it the ways I've tried. br is my BinaryReader:
file = br.ReadBytes(br.BaseStream.Length);
br.Read(file,0,br.BaseStream.Length);
Neither way works. I assume it's because I have not initialized my byte array, but I don't want to give it a size if I don't know the size. Any ideas?
Edit: Alright, I think it's because the BinaryReader's BaseStream Length is a long, but its read methods take Int32 counts. If I cast the 64-bit values to 32 bits, is it possible I will lose bytes with larger files?
I had no problems reading a file stream:
byte[] file;
var br = new BinaryReader(new FileStream("c:\\Intel\\index.html", FileMode.Open));
file = br.ReadBytes((int)br.BaseStream.Length);
Your code doesn't compile because the Length property of BaseStream is of type long, but you are trying to use it as an int. Implicit casts that might lead to data loss are not allowed, so you have to cast it to int explicitly.
Update
Just bear in mind that the code above aims to highlight your original problem and should not be used as it is. Ideally, you would use a buffer to read the stream in chunks. Have a look at this question and the solution suggested by Jon Skeet
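The idea there is roughly the following (a sketch of reading a stream of unknown length in chunks):
public static byte[] ReadFully(Stream input)
{
    var buffer = new byte[16 * 1024]; // read in 16 KB chunks
    using (var ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            ms.Write(buffer, 0, read);
        return ms.ToArray();
    }
}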
You can't create an array of unknown size.
byte[] file = new byte[br.BaseStream.Length];
PS: for larger files you will have to read chunks of bytes repeatedly.
BinaryReader.ReadBytes returns a byte[]. There is no need to initialize a byte array because that method already does so internally and returns the complete array to you.
If you're looking to read all the bytes from a file, there's a handy method in the File class:
http://msdn.microsoft.com/en-us/library/system.io.file.readallbytes.aspx

Which is faster: reading binary data or plain text data?

I have some data whose exact structure I know. It has to be written to files second by second.
The structs contain fields of double, but with different names, and the same number of structs has to be written to the file every second.
The thing is...
Which is the better approach when it comes to reading the data:
1- Convert the structs to bytes, then write them while indexing the byte that marks the end of each second
2- Write CSV data and index the byte that marks the end of each second
The data is requested on a random basis from the file.
So in both cases I will set the position of the FileStream to the byte of the requested second.
In the first case I will use the following for each struct in that second to get the whole data:
_filestream.Read(buffer, 0, buffer.Length);
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
oReturn = (object)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), _oType);
handle.Free(); // release the pinned handle after copying
The previous approach is applied X times, because there are around 100 structs every second.
In the second case I will use string.Split(',') and then fill in the data accordingly, since I know the exact order of my data:
file.Read(buffer, 0, buffer.Length);
string val = System.Text.ASCIIEncoding.ASCII.GetString(buffer);
string[] row = val.Split(',');
Edit: using the profiler does not show a difference, but I cannot simulate the exact real-life scenario because the file size might get really huge. I am looking for theoretical information for now.
