I'm trying to read the bytes in the stream at each frame.
I want to be able to read the position and timestamp information that is stored in a file I have created.
The stream is recorded skeleton data in an encoded binary format:
Stream recordStream;
byte[] results;

using (FileStream SourceStream = File.Open(@".....\Stream01.recorded", FileMode.Open))
{
    if (SourceStream.CanRead)
    {
        results = new byte[recordStream.Length];
        SourceStream.Read(results, 0, (int)recordStream.Length);
    }
}
The file should be read, and the Read method should read the current sequence of bytes before advancing the position in the stream.
Is there a way to pull out the data I want (position and timestamp) from the bytes read and save it in separate variables before the position advances?
Could using a BinaryReader give me the capability to do this?
BinaryReader br1 = new BinaryReader(recordStream);
I have saved the file as .recorded. I have also saved it as .txt to see what it contains, but since it is encoded, it is not human-readable.
Update:
I tried running the code with breakpoints to see if it enters the function with my BinaryReader, and it crashes with "ArgumentException was unhandled: Stream was not readable" on the BinaryReader declaration and initialization:
BinaryReader br1 = new BinaryReader(recordStream);
The file type was .recorded.
You did not provide any information about the format of the data you are trying to read.
However, using the BinaryReader is exactly what you need to do.
It exposes methods to read data from the stream and convert them to various types.
Consider the following example:
var filename = "pathtoyourfile";
using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var x = reader.ReadByte();
    var y = reader.ReadInt16();
    var z = reader.ReadBytes(10);
}
It really depends on the format of your data though.
Update
Even though I feel I've already provided all the information you need, let's use your data.
You say each record in your data starts with
[long: timestamp][int: framenumber]
using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    var timestamp = reader.ReadInt64();
    var frameNumber = reader.ReadInt32();

    // At this point you have the timestamp and the frame number.
    // You can now do whatever you want with them and decide whether or not
    // to continue; after that, you just continue reading.
}
How you continue reading depends on the format of the remaining part of the records.
If all fields in a record have a fixed length, then you either continue reading all the fields of that record (depending on the choice you made knowing the values of the timestamp and the frame number), or you simply advance to the position in the stream that contains the next record. For example, if each record is 100 bytes long and you want to skip the record after reading the first two fields:
stream.Seek(88, SeekOrigin.Current);
// 88 because the first two fields take 12 bytes: 100 - (8 + 4) = 88
If the records have a variable length, the solution is similar, but you'll have to take into account the lengths of the various fields (which should be defined by length fields preceding the variable-length fields).
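For instance, if the variable part of each record carries a length prefix, skipping or reading it might look like this (a sketch only; the payload length field and the SkipThisFrame/ProcessPayload helpers are assumptions, not part of your format):

using (var stream = File.Open(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    while (stream.Position < stream.Length)
    {
        var timestamp = reader.ReadInt64();
        var frameNumber = reader.ReadInt32();
        var payloadLength = reader.ReadInt32(); // assumed length prefix for the variable part

        if (SkipThisFrame(timestamp, frameNumber)) // hypothetical predicate
        {
            // Jump over the variable part without reading it into memory.
            stream.Seek(payloadLength, SeekOrigin.Current);
        }
        else
        {
            var payload = reader.ReadBytes(payloadLength);
            ProcessPayload(payload); // hypothetical handler
        }
    }
}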
As for knowing whether the first 8 bytes really do represent a timestamp, there's no real way of knowing for sure... remember, in the end the stream just contains a series of individual bytes that have no meaning whatsoever except for the meaning given to them by your file format. Either you have to revise the file format, or you can try checking whether the value of 'timestamp' in the example above even makes sense.
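A minimal sanity check, assuming purely for illustration that the 8 bytes are .NET ticks (if the resulting date is nonsense, the assumption or the offset is wrong):

var timestamp = reader.ReadInt64();

// Assumption: the value is DateTime ticks. A plausible recording date
// suggests the offset and the interpretation are correct.
if (timestamp >= DateTime.MinValue.Ticks && timestamp <= DateTime.MaxValue.Ticks)
{
    Console.WriteLine($"As ticks: {new DateTime(timestamp)}");
}
else
{
    Console.WriteLine("Not a valid tick count; the field is probably something else.");
}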
Is this a file format you have defined yourself? If so, perhaps you are making it too complicated and might want to look at solutions such as Google Protocol Buffers or Apache Thrift.
If this is still not what you are looking for, you will have to redefine your question.
Based on your comments:
You need to know the exact definition of the entire file. You create a struct based on this file format:
[StructLayout(LayoutKind.Explicit)]
struct YourFileFormat
{
    [FieldOffset(0)]
    public long Timestamp;

    [FieldOffset(8)]
    public int FrameNumber;

    // [FieldOffset(12)]
    // .. etc ..
}
Then, using a BinaryReader, you can either read each field individually for each frame:
// assume br is an instantiated BinaryReader..
YourFileFormat file = new YourFileFormat();
file.Timestamp = br.ReadInt64();
file.FrameNumber = br.ReadInt32();
// etc..
Or, you can read the entire file in and have the marshalling classes copy everything into the struct for you:
byte[] fileContent = br.ReadBytes(Marshal.SizeOf(typeof(YourFileFormat)));
GCHandle gcHandle = GCHandle.Alloc(fileContent, GCHandleType.Pinned); // or pin it via the "fixed" keyword in an unsafe context
YourFileFormat file = (YourFileFormat)Marshal.PtrToStructure(gcHandle.AddrOfPinnedObject(), typeof(YourFileFormat));
gcHandle.Free();
However, this assumes you know the exact size of the file. For this to work, though, each frame (assuming you know how many there are) can be declared as a fixed-size array within the struct, as sketched below.
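For illustration, a fixed-size frame blob inside the struct could be declared like this (a sketch; the 1024-byte frame size is an assumption):

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct YourFileFormat
{
    [FieldOffset(0)]
    public long Timestamp;

    [FieldOffset(8)]
    public int FrameNumber;

    // Assumption: each frame is a fixed 1024 bytes.
    [FieldOffset(12)]
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 1024)]
    public byte[] FrameData;
}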
Bottom line: unless you know the size of what you want to skip, you can't hope to get the data you require from the file.
Related
I am using the code below to read binary data from a text file and divide it into small chunks. I want to do the same with a text file containing alphanumeric data, which is obviously not working with the binary reader. Which reader would be best to achieve that (stream, string, or text), and how do I implement it in the following code?
public static IEnumerable<IEnumerable<byte>> ReadByChunk(int chunkSize)
{
    IEnumerable<byte> result;
    int startingByte = 0;
    do
    {
        result = ReadBytes(startingByte, chunkSize);
        startingByte += chunkSize;
        yield return result;
    } while (result.Any());
}

public static IEnumerable<byte> ReadBytes(int startingByte, int byteToRead)
{
    byte[] result;
    using (FileStream stream = File.Open(@"C:\Users\file.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
    using (BinaryReader reader = new BinaryReader(stream))
    {
        int bytesToRead = Math.Max(Math.Min(byteToRead, (int)reader.BaseStream.Length - startingByte), 0);
        reader.BaseStream.Seek(startingByte, SeekOrigin.Begin);
        result = reader.ReadBytes(bytesToRead);
    }
    return result;
}
I can only help you get the general process figured out:
String/text is the second-worst data format to read, write, or process. It should be reserved exclusively for output towards and input from the user. It has some serious issues as a storage and retrieval format.
If you have to transmit, store, or retrieve something as text, make sure you use a fixed encoding and culture format (usually invariant) at all endpoints. You do not want to run into issues with those two.
The worst data format is raw binary. But there is a special 0th place for raw binary that you have to interpret into text, to then process further. To quote the most important parts of what I linked on encodings:
It does not make sense to have a string without knowing what encoding it uses. [...]
If you have a string, in memory, in a file, or in an email message, you have to know what encoding it is in or you cannot interpret it or display it to users correctly.
Almost every stupid “my website looks like gibberish” or “she can’t read my emails when I use accents” problem comes down to one naive programmer who didn’t understand the simple fact that if you don’t tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends. There are over a hundred encodings and above code point 127, all bets are off.
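In practice, that means naming the encoding explicitly wherever text enters or leaves your program. A minimal sketch for the chunked-text case above (UTF-8 and the path are assumptions; use whatever encoding the file was actually written with):

using System.IO;
using System.Text;

// Read text in fixed-size character chunks with an explicitly chosen encoding.
using (var reader = new StreamReader(@"C:\Users\file.txt", Encoding.UTF8))
{
    var buffer = new char[1024];
    int charsRead;
    while ((charsRead = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        string chunk = new string(buffer, 0, charsRead);
        // process the chunk of text here
    }
}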
I have a large binary file that contains different data types. I can access single records in the file, but I am not sure how to loop over the binary values and load them into a memory stream byte by byte.
I have been using BinaryReader:
BinaryReader binReader = new BinaryReader(File.Open(fileName, FileMode.Open));
Encoding ascii = Encoding.ASCII;
string authorName = binReader.ReadString();
Console.WriteLine(authorName);
Console.ReadLine();
but this won't work, since I have a large file with different data types.
Simply put, I need to read the file byte by byte and then interpret that data, whether it's a string or whatever else.
I would appreciate any thoughts that can help.
This will very much depend on what format the file is in. Each byte in the file might represent different things, or it might just represent values from a large array, or some mix of the two.
You need to know what the format looks like to be able to read it, since binary files are not self-descriptive. Reading a simple object might look like
var authorName = binReader.ReadString();
var publishDate = DateTime.FromBinary(binReader.ReadInt64());
...
If you have a list of items it is common to use a length prefix. Something like
var numItems = binReader.ReadInt32();
for (int i = 0; i < numItems; i++)
{
    var title = binReader.ReadString();
    ...
}
You would then typically create one or more objects from the data that can be used in the rest of the application. I.e.
new Bibliography(authorName, publishDate, books);
If this is a format you do not control, I hope you have a detailed specification. Otherwise this is kind of a lost cause for anything but the kludgiest solutions.
If there is more data than can fit in memory you need some kind of streaming mechanism. I.e. read one item, do some processing of the item, save the result, read the next item, etc.
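A sketch of that streaming pattern, reusing the record layout from above (fileName is a placeholder and ProcessRecord stands in for whatever you do with each item):

using (var stream = File.OpenRead(fileName))
using (var binReader = new BinaryReader(stream))
{
    // Only one record is held in memory at a time.
    while (stream.Position < stream.Length)
    {
        var authorName = binReader.ReadString();
        var publishDate = DateTime.FromBinary(binReader.ReadInt64());
        ProcessRecord(authorName, publishDate); // hypothetical processing step
    }
}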
If you do control the format I would suggest alternatives that are easier to manage. I have used protobuf.Net, and I find it quite easy to use, but there are other alternatives. The common way to use these kinds of libraries is to create a class for the data, and add attributes for the fields that should be stored. The library can manage serialization/deserialization automatically, and usually handle things like inheritance and changes to the format in an easy way.
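A minimal protobuf-net sketch of that approach (class and file names are made up for the example):

using System;
using System.IO;
using ProtoBuf;

[ProtoContract]
public class Bibliography
{
    [ProtoMember(1)] public string AuthorName { get; set; }
    [ProtoMember(2)] public DateTime PublishDate { get; set; }
}

class Demo
{
    static void Main()
    {
        var bib = new Bibliography { AuthorName = "A. Author", PublishDate = DateTime.UtcNow };

        // The library handles the wire format; no hand-written field reads or writes.
        using (var file = File.Create("bibliography.bin"))
            Serializer.Serialize(file, bib);

        using (var file = File.OpenRead("bibliography.bin"))
            bib = Serializer.Deserialize<Bibliography>(file);
    }
}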
Here's a simple bit of code that shows the most basic way of doing it.
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace binary_read
{
    class Program
    {
        private static readonly int bufferSize = 1024;

        static async Task Main(string[] args)
        {
            var bytesRead = 0;
            var totalBytes = 0;
            using (var stream = File.OpenRead(args.First()))
            {
                do
                {
                    var buffer = new byte[bufferSize];
                    bytesRead = await stream.ReadAsync(buffer, 0, bufferSize);
                    totalBytes += bytesRead;
                    // Process buffer
                } while (bytesRead > 0);

                Console.WriteLine($"Processed {totalBytes} bytes.");
            }
        }
    }
}
The main bit to take note of is within the using block.
Firstly, when working with files/streams/sockets it's best to use using if possible to deterministically clean up after yourself.
Then it's really just a matter of calling Read/ReadAsync on the stream if you're just after the raw data. However there are various 'readers' that provide an abstraction to make working with certain formats easier.
So if you know that you're going to be reading ints, doubles, and strings, then you can use the BinaryReader and its ReadIntxx/ReadDouble/ReadString methods.
If you're reading into a struct, then you can read the properties in a loop as suggested by #JonasH above. Or use the method in this answer.
I want to store an array of timestamps in a binary flat file. One of my requirements is that I can access individual timestamps later for efficient queries, without having to read and deserialize the entire array first. (I use a binary search algorithm that finds the file positions of a start and an end timestamp, which in turn determine which bytes are read and deserialized between those two timestamps, because the entire binary file can be multiple gigabytes in size.)
Obviously, the simple but slow way is to use BitConverter.GetBytes(timestamp) to convert each timestamp to bytes and to then store them in the file. I can then access each item individually in the file and use my custom binary search algorithm to find the timestamp that matches with the desired timestamp.
However, I found that BinaryFormatter is incredibly efficient (multiple times faster than protobuf-net and any other serializer I tried) regarding serialization/deserialization of value-type arrays. Hence I attempted to serialize an array of timestamps into binary form. However, apparently that will now prevent me from accessing individual timestamps in the file without having to deserialize the entire array first.
Is there a way to still access individual items in binary form after having serialized an entire array of items via BinaryFormatter?
Here is some code snippet that demonstrates what I mean:
var sampleArray = new int[5] { 1,2,3,4,5};
var serializedSingleValueArray = sampleArray.SelectMany(x => BitConverter.GetBytes(x)).ToArray();
var serializedArrayofSingleValues = Serializers.BinarySerializeToArray(sampleArray);
var deserializesToCorrectValue = BitConverter.ToInt32(serializedSingleValueArray, 0); //value = 1 (ok)
var wrongDeserialization = BitConverter.ToInt32(serializedArrayofSingleValues, 0); //value = 256 (???)
Here the serialization function:
public static byte[] BinarySerializeToArray(object toSerialize)
{
    using (var stream = new MemoryStream())
    {
        Formatter.Serialize(stream, toSerialize);
        return stream.ToArray();
    }
}
Edit: I do not need to concern myself with efficient memory consumption or file sizes as those are currently by far not the bottlenecks. It is the speed of serialization and deserialization that is the bottleneck for me with multi-gigabyte large binary files and hence very large arrays of primitives.
If your problem is just "how to convert an array of structs to byte[]", you have other options than BitConverter. BitConverter is for single values; the Buffer class is for arrays.
double[] d = new double[100];
d[4] = 1235;
d[8] = 5678;
byte[] b = new byte[800];
Buffer.BlockCopy(d, 0, b, 0, d.Length*sizeof(double));
// just to test it works
double[] d1 = new double[100];
Buffer.BlockCopy(b, 0, d1, 0, d.Length * sizeof(double));
This does a byte-level copy without converting anything and without iterating over items.
You can put this byte array directly to your stream (not a StreamWriter, not a Formatter)
stream.Write(b, 0, 800);
That's definitely the fastest way to write to a file, even though it involves a complete copy; any other conceivable method will also read each item and store it somewhere before it goes to the file.
If this is the only thing you write to your file, you don't need to store the array length in the file; you can use the file length for this.
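For example (a sketch, assuming the file contains nothing but the doubles):

// The number of items follows directly from the file length.
long itemCount = stream.Length / sizeof(double);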
To read the double value at index 100 in the file:
file.Seek(100 * sizeof(double), SeekOrigin.Begin);
byte[] tmp = new byte[8];
file.Read(tmp, 0, 8);
double value = BitConverter.ToDouble(tmp, 0);
Here, for single value, you can use BitConverter.
This is the solution for .NET Framework, C# <= 7.0
For .NET Standard/.NET Core, C# 8.0 you have more options with Span<T>, which gives you access to the internal memory, without copying the Data.
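For example, a sketch of the Span<T> route (requires .NET Core or the System.Memory package; GetFileBytes is a placeholder for however you obtained the bytes):

using System;
using System.Runtime.InteropServices;

byte[] b = GetFileBytes(); // placeholder: bytes read from the file

// Reinterpret the bytes in place as doubles; no copying involved.
ReadOnlySpan<double> doubles = MemoryMarshal.Cast<byte, double>(b);
double fifth = doubles[4];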
BitConverter is not a "slow" option; it's just a way to convert anything to a byte[] sequence. This is actually not costly; it just interprets the memory differently.
Compute the position in the file, load 8 bytes, convert them to a DateTime, and you are done.
You should do this only with simply structured files, and with simply structured files you don't need a binary formatter. Just load/save your one array to one file. That way you can be sure your file positions can be computed.
In other words: save your array yourself, date by date, and then you can also load it date by date.
Writing with one processing style and reading with another is always a bad idea.
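Applied to the timestamp question, a symmetric write/read pair might look like this (a sketch using BinaryWriter/BinaryReader and assuming DateTime ticks as the on-disk representation; timestamps and index are assumed variables):

// Write: one fixed-size 8-byte record per timestamp.
using (var stream = File.Create("timestamps.bin"))
using (var writer = new BinaryWriter(stream))
{
    foreach (DateTime ts in timestamps)
        writer.Write(ts.Ticks);
}

// Read: seek straight to the record you want; nothing else is deserialized.
using (var stream = File.OpenRead("timestamps.bin"))
using (var reader = new BinaryReader(stream))
{
    stream.Seek(index * sizeof(long), SeekOrigin.Begin);
    DateTime value = new DateTime(reader.ReadInt64());
}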
The application I'm attempting to create would read the binary code of any file and create a file with the exact same binary code, creating a copy.
While writing a program that reads a file and writes it somewhere else, I was running into encoding issues, so I hypothesize that reading as straight binary will overcome this.
The file being read into the application is important, as after I get this to work I will add additional functionality to search within or manipulate the file's data as it is read.
Update:
I'd like to thank everyone who took the time to answer; I now have a working solution. Wolfwyrd's answer was exactly what I needed.
BinaryReader will handle reading the file into a byte buffer. BinaryWriter will handle dumping those bytes back out to another file. Your code will be something like:
using (var binReader = new System.IO.BinaryReader(System.IO.File.OpenRead("PATHIN")))
using (var binWriter = new System.IO.BinaryWriter(System.IO.File.OpenWrite("PATHOUT")))
{
    byte[] buffer = new byte[512];
    int bytesRead;
    while ((bytesRead = binReader.Read(buffer, 0, 512)) != 0)
    {
        // Write only the bytes actually read; the final read may fill
        // the buffer only partially.
        binWriter.Write(buffer, 0, bytesRead);
    }
}
Here we cycle a 512-byte buffer and immediately write it out to the other file. You will need to choose a sensible size for your own buffer (there's nothing stopping you from reading the entire file if it's reasonably sized). As you mentioned doing pattern matching, you will need to consider the case where a pattern overlaps a buffered read if you do not load the whole file into a single byte array.
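One way to handle that overlap is to carry the last patternLength - 1 bytes of each buffer into the next scan; a sketch (the naive byte-by-byte scan is for illustration only):

using System;
using System.IO;

static long FindPattern(Stream stream, byte[] needle)
{
    var buffer = new byte[512];
    var carry = new byte[0];
    long offset = 0; // absolute position of the start of the current window
    int bytesRead;

    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Window = leftover tail of the previous buffer + the new bytes,
        // so a match straddling two reads is still found.
        var window = new byte[carry.Length + bytesRead];
        Buffer.BlockCopy(carry, 0, window, 0, carry.Length);
        Buffer.BlockCopy(buffer, 0, window, carry.Length, bytesRead);

        for (int i = 0; i + needle.Length <= window.Length; i++)
        {
            int j = 0;
            while (j < needle.Length && window[i + j] == needle[j]) j++;
            if (j == needle.Length) return offset + i; // absolute match position
        }

        // Keep the last needle.Length - 1 bytes for the next iteration.
        int keep = Math.Min(needle.Length - 1, window.Length);
        offset += window.Length - keep;
        carry = new byte[keep];
        Buffer.BlockCopy(window, window.Length - keep, carry, 0, keep);
    }
    return -1; // not found
}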
This SO Question has more details on best practices on reading large files.
Look at MemoryStream and BinaryReader/BinaryWriter:
http://www.dotnetperls.com/memorystream
http://msdn.microsoft.com/en-us/library/system.io.binaryreader.aspx
http://msdn.microsoft.com/en-us/library/system.io.binarywriter.aspx
Have a look at using BinaryReader Class
Reads primitive data types as binary values in a specific encoding.
and maybe BinaryReader.ReadBytes Method
Reads the specified number of bytes from the current stream into a
byte array and advances the current position by that number of bytes.
also BinaryWriter Class
Writes primitive types in binary to a stream and supports writing
strings in a specific encoding.
Another good example is C# - Copying Binary Files, for instance one char at a time:
using (BinaryWriter writer = new BinaryWriter(File.OpenWrite("target")))
using (BinaryReader reader = new BinaryReader(File.OpenRead("source")))
{
    var nextChar = reader.Read();
    while (nextChar != -1)
    {
        writer.Write(Convert.ToChar(nextChar));
        nextChar = reader.Read();
    }
}
The application I'm attempting to create would read the binary code of any file and create a file with the exact same binary code, creating a copy.
Is this for academic purposes? Or do you actually just want to copy a file?
If the latter, you'll want to just use the System.IO.File.Copy method.
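For a plain copy that's a one-liner (paths here are placeholders):

// Copies the file byte for byte; 'true' overwrites an existing target.
System.IO.File.Copy(@"C:\data\source.bin", @"C:\data\target.bin", true);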
I'm new to programming in general (my understanding of programming concepts is still growing), so this question is about learning; please provide enough info for me to learn, but not so much that I can't. Thank you.
(I would also like input on how to make the code reusable within the project.)
The goal of the project I'm working on consists of:
Read binary file.
I have known offsets I need to read to find a particular chunk of data from within this file.
First offset is the first 4 bytes (offset for the end of my chunk).
Second offset is 16 bytes from the end of the file; I read 4 bytes there (gives the size of the chunk in hex).
Third offset is the 4 bytes following the previous one; I read 4 bytes (offset for the start of the chunk in hex).
Locate parts in the chunk to modify by searching ASCII text as well as offsets.
Now I have the start offset, end offset, and size of my chunk.
This should allow me to read bytes from the file into a byte array and know the size of the array ahead of time.
(Questions: 1. Is knowing the size important, other than for verification? 2. Is reading part of a file into a byte array in order to change bytes and overwrite that part of the file the best method?)
So far I have managed to read the offsets from the file using BinaryReader on a MemoryStream. I then locate the chunk of data I need and read that into a byte array.
I'm stuck in several ways:
What are the best practices for binary reading/writing?
What's the best storage convention for the data that is read?
When I need to modify bytes, how do I go about that?
Should I be using FileStream?
Since you want to both read and write, it makes sense to use the FileStream class directly (using FileMode.Open and FileAccess.ReadWrite). See FileStream on MSDN for a good overall example.
You do need to know the number of bytes that you are going to be reading from the stream. See the FileStream.Read documentation.
Fundamentally, you have to read the bytes into memory at some point if you're going to use and later modify their contents. So you will have to make an in-memory copy (using the Read method is the right way to go if you're reading a variable-length chunk at a time).
As for best practices, always dispose your streams when you're done; e.g.:
using (var stream = File.Open(FILE_NAME, FileMode.Open, FileAccess.ReadWrite))
{
//Do work with the FileStream here.
}
If you're going to do a large amount of work, you should be doing the work asynchronously. (Let us know if that's the case.)
And, of course, check the FileStream.Read documentation and also the FileStream.Write documentation before using those methods.
Reading bytes is best done by pre-allocating an in-memory array of bytes with the length that you're going to read, then reading those bytes. The following will read the chunk of bytes that you're interested in, let you do work on it, and then replace the original contents (assuming the length of the chunk hasn't changed):
EDIT: I've added a helper method to do work on the chunk, per the comments on variable scope.
using (var stream = File.Open(FILE_NAME, FileMode.Open, FileAccess.ReadWrite))
{
    var chunk = new byte[numOfBytesInChunk];
    var offsetOfChunkInFile = stream.Position; // It sounds like you've already calculated this.
    stream.Read(chunk, 0, numOfBytesInChunk);

    DoWorkOnChunk(ref chunk);

    stream.Seek(offsetOfChunkInFile, SeekOrigin.Begin);
    stream.Write(chunk, 0, numOfBytesInChunk);
}

private void DoWorkOnChunk(ref byte[] chunk)
{
    // TODO: Any mutation done here to the data in 'chunk' will be written out to the stream.
}