How to write file header using FileStream in C#

I am creating my own video file format and would like to write out a file header and frame headers.
At the moment I just have placeholders defined as such:
byte[] fileHeader = new byte[FILE_HEADER_SIZE * sizeof(int)];
byte[] frameHeader = new byte[FRAME_HEADER_SIZE * sizeof(int)];
I write them out using the following for the file header:
fsVideoWriter.Write(fileHeader, 0, FILE_HEADER_SIZE);
and this for the frame headers:
fsVideoWriter.Write(frameHeader, 0, FRAME_HEADER_SIZE);
Now that I actually need to make proper use of these headers, I'm not sure this is the most convenient way to write them, since it won't be easy to read the individual fields back from the headers into separate variables.
I thought about doing something like the following:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct FileHeader
{
    public int x;
    public int y;
    public int z;
    // etc. etc.
}
I would like to define it in such a way that I can upgrade it easily as the file format evolves (e.g. by including a version number). Is this the recommended way to define a file/frame header? If so, how should I read/write it using the .NET FileStream class? If this is not the recommended way, please suggest the proper way to do this; maybe someone has already created a generic video-file-related class that handles this sort of thing?

I settled upon the following solution:
Writing out file header
public static bool WriteFileHeader(FileStream fileStream, FileHeader fileHeader)
{
    try
    {
        byte[] buffer = new byte[FILE_HEADER_SIZE];
        GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        Marshal.StructureToPtr(fileHeader, gch.AddrOfPinnedObject(), false);
        gch.Free();
        fileStream.Seek(0, SeekOrigin.Begin);
        fileStream.Write(buffer, 0, FILE_HEADER_SIZE);
        return true;
    }
    catch (Exception)
    {
        throw;
    }
}
Reading in file header
public static bool ReadFileHeader(FileStream fileStream, out FileHeader fileHeader)
{
    try
    {
        byte[] buffer = new byte[FILE_HEADER_SIZE];
        fileStream.Seek(0, SeekOrigin.Begin);
        fileStream.Read(buffer, 0, FILE_HEADER_SIZE);
        GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        fileHeader = (FileHeader)Marshal.PtrToStructure(gch.AddrOfPinnedObject(), typeof(FileHeader));
        gch.Free();
        // test for valid data
        bool isSuccessful = IsValidHeader(fileHeader);
        return isSuccessful;
    }
    catch (Exception)
    {
        throw;
    }
}
I used a similar approach for the frame headers as well. The idea is basically to make use of byte buffers and Marshal.
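Since the frame headers follow the same pattern, the marshalling can be pulled into a generic helper. The sketch below is not part of the original solution; it is just the same byte-buffer-plus-Marshal idea written once for any blittable struct (it assumes using System.IO and System.Runtime.InteropServices):
public static void WriteStruct<T>(FileStream fileStream, T value) where T : struct
{
    // Allocate a buffer exactly the size of the marshalled struct.
    byte[] buffer = new byte[Marshal.SizeOf(typeof(T))];
    GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        Marshal.StructureToPtr(value, gch.AddrOfPinnedObject(), false);
    }
    finally
    {
        gch.Free();
    }
    fileStream.Write(buffer, 0, buffer.Length);
}

public static T ReadStruct<T>(FileStream fileStream) where T : struct
{
    byte[] buffer = new byte[Marshal.SizeOf(typeof(T))];
    int read = 0;
    // Stream.Read may return fewer bytes than requested, so loop until the buffer is full.
    while (read < buffer.Length)
    {
        int n = fileStream.Read(buffer, read, buffer.Length - read);
        if (n == 0) throw new EndOfStreamException("Unexpected end of file while reading header.");
        read += n;
    }
    GCHandle gch = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        return (T)Marshal.PtrToStructure(gch.AddrOfPinnedObject(), typeof(T));
    }
    finally
    {
        gch.Free();
    }
}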

You may want to try the BinaryFormatter Class. But it is more or less a black box. If you need precise control of your file format, you can write your own Formatter and use it to serialize your header object.

Related

How to get the created file bytes from CreateDocument in an Android Document Provider (C# Xamarin)

I am trying to get to grips with creating a custom document provider on Android Xamarin.
I have got an example base template from here...
https://learn.microsoft.com/en-us/samples/xamarin/monodroid-samples/storageprovider/
This creates a dummy provider using the internal memory of the phone. I would instead like to swap this out to use a cloud provider and read/write the data to cloud storage.
I have managed to get it reading OK, but when I try to save, I cannot work out how to get the data so that I can save it to the cloud.
When you press save on the phone it hits the following method in the example provider code.
public override string CreateDocument(string parentDocumentId, string mimeType, string displayName)
{
    Log.Verbose(TAG, "createDocument");
    File parent = GetFileForDocId(parentDocumentId);
    var file = new File(parent, displayName);
    try
    {
        file.CreateNewFile();
        file.SetWritable(true);
        file.SetReadable(true);
    }
    catch (IOException)
    {
        throw new FileNotFoundException("Failed to create document with name " +
            displayName + " and documentId " + parentDocumentId);
    }
    return GetDocIdForFile(file);
}
If I then step past this, it goes off to external code and then returns into this method; I am assuming the file creation perhaps goes on in between these steps?
public override ICursor QueryDocument(string documentId, string[] projection)
{
    Log.Verbose(TAG, "queryDocument");
    // Create a cursor with the requested projection, or the default projection.
    var result = new MatrixCursor(ResolveDocumentProjection(projection));
    IncludeFile(result, documentId, null);
    return result;
}
I would like to be able to get the byte array of the file being saved, so I can then call the API to upload to the cloud server.
Is this possible, and does anyone have any pointers/examples on how to do this? I have tried getting the bytes like this:
var bytes = fullyReadFileToBytes(file);
using the following method, but this always returns 0 bytes. I am assuming that CreateDocument just creates a placeholder for the file rather than writing its contents here. Is there another method I can override, called after CreateDocument, where I can get this byte array?
public byte[] fullyReadFileToBytes(File f)
{
    int size = (int)f.Length();
    byte[] bytes = new byte[size];
    byte[] tmpBuff = new byte[size];
    FileInputStream fis = new FileInputStream(f);
    try
    {
        int read = fis.Read(bytes, 0, size);
        if (read < size)
        {
            int remain = size - read;
            while (remain > 0)
            {
                read = fis.Read(tmpBuff, 0, remain);
                System.Array.Copy(tmpBuff, 0, bytes, size - remain, read);
                remain -= read;
            }
        }
    }
    catch (IOException)
    {
        throw;
    }
    finally
    {
        fis.Close();
    }
    return bytes;
}
Any help would be appreciated.
Thanks in advance
I've found the answer: the file is only properly written after its handle is closed.
So if you read the bytes in the OnClose method, you get back the file data you need.
class MyOnCloseListener : Java.Lang.Object, ParcelFileDescriptor.IOnCloseListener
{
    string documentID;

    public MyOnCloseListener(string documentId)
    {
        documentID = documentId;
    }

    public void OnClose(Java.IO.IOException e)
    {
        // Update the file with the cloud server. The client is done writing.
        var file = GetFileForDocId(documentID);
        var bytes = fullyReadFileToBytes(file);
        // do stuff here to save to cloud provider
    }
}
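For reference, the listener only fires if it is passed in when the document is opened. Below is a sketch of how OpenDocument in the provider might wire it up, assuming the Xamarin binding exposes the four-argument ParcelFileDescriptor.Open overload (available from API level 19) and the same GetFileForDocId helper as the sample:
public override ParcelFileDescriptor OpenDocument(string documentId, string mode, CancellationSignal signal)
{
    var file = GetFileForDocId(documentId);
    var accessMode = ParcelFileDescriptor.ParseMode(mode);

    if (mode.Contains("w"))
    {
        // Register the close listener so the upload runs once the client finishes writing.
        var handler = new Handler(Context.MainLooper);
        return ParcelFileDescriptor.Open(file, accessMode, handler, new MyOnCloseListener(documentId));
    }

    // Read-only access does not need the listener.
    return ParcelFileDescriptor.Open(file, accessMode);
}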

Efficient reading structured binary data from a file

I have the following code fragment that reads a binary file and validates it:
FileStream f = File.OpenRead("File.bin");
MemoryStream memStream = new MemoryStream();
memStream.SetLength(f.Length);
f.Read(memStream.GetBuffer(), 0, (int)f.Length);
f.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(f);
Single prevVal = 0;
do
{
    r.ReadUInt32();
    var val = r.ReadSingle();
    if (prevVal != 0)
    {
        var diff = Math.Abs(val - prevVal) / prevVal;
        if (diff > 0.25)
            Console.WriteLine("Bad!");
    }
    prevVal = val;
}
while (f.Position < f.Length);
It unfortunately works very slowly, and I am looking to improve this. In C++, I would simply read the file into a byte array and then recast that array as an array of structures:
struct S {
    int a;
    float b;
};
How would I do this in C#?
Define a struct (possibly a readonly struct) with explicit layout ([StructLayout(LayoutKind.Explicit)]) that is precisely the same as your C++ code, then do one of the following:
1. Open the file as a memory-mapped file and get a pointer to the data; use either unsafe code on the raw pointer, or Unsafe.AsRef<YourStruct> on the data and Unsafe.Add<> to iterate (see the sketch after this answer).
2. Open the file as a memory-mapped file and get a pointer to the data; create a custom memory over the pointer (of your T), and iterate over the span.
3. Open the file as a byte[]; create a Span<byte> over the byte[], then use MemoryMarshal.Cast<,> to create a Span<YourType>, and iterate over that.
4. Open the file as a byte[]; use fixed to pin the byte[] and get a byte* pointer; use unsafe code to walk the pointer.
5. Something involving "pipelines": a Pipe that is the buffer, maybe using StreamConnection on a FileStream for filling the pipe, and a worker loop that dequeues from the pipe; complication: the buffers can be discontiguous and may split at inconvenient places; solvable, but subtle code is required whenever the first span isn't at least 8 bytes.
(or some combination of those concepts)
Any of those should work much like your C++ version. The 4th is simple, but for very large data you probably want to prefer memory-mapped files.
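As a rough sketch of the memory-mapped option (without unsafe code), using MemoryMappedFile and the view accessor's generic Read; the Data struct here is an assumption matching the question's C++ layout of an int followed by a float:
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Data
{
    public uint Dummy;   // the unused UInt32 in each record
    public float Val;    // the value being validated
}

static class MappedScan
{
    static void Main()
    {
        long length = new FileInfo("File.bin").Length;
        int stride = Marshal.SizeOf(typeof(Data));

        using (var mmf = MemoryMappedFile.CreateFromFile("File.bin", FileMode.Open, null, 0, MemoryMappedFileAccess.Read))
        using (var view = mmf.CreateViewAccessor(0, length, MemoryMappedFileAccess.Read))
        {
            float prev = 0;
            for (long offset = 0; offset + stride <= length; offset += stride)
            {
                view.Read(offset, out Data d);   // copies one record out of the mapping
                if (prev != 0 && Math.Abs(d.Val - prev) / prev > 0.25)
                    Console.WriteLine("Bad!");
                prev = d.Val;
            }
        }
    }
}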
This is what we use (compatible with older versions of C#):
public static T[] FastRead<T>(FileStream fs, int count) where T : struct
{
    int sizeOfT = Marshal.SizeOf(typeof(T));
    long bytesRemaining = fs.Length - fs.Position;
    long wantedBytes = count * sizeOfT;
    long bytesAvailable = Math.Min(bytesRemaining, wantedBytes);
    long availableValues = bytesAvailable / sizeOfT;
    long bytesToRead = (availableValues * sizeOfT);

    if ((bytesRemaining < wantedBytes) && ((bytesRemaining - bytesToRead) > 0))
    {
        Debug.WriteLine("Requested data exceeds available data and partial data remains in the file.");
    }

    T[] result = new T[availableValues];
    GCHandle gcHandle = GCHandle.Alloc(result, GCHandleType.Pinned);
    try
    {
        uint bytesRead;
        if (!ReadFile(fs.SafeFileHandle, gcHandle.AddrOfPinnedObject(), (uint)bytesToRead, out bytesRead, IntPtr.Zero))
        {
            throw new IOException("Unable to read file.", new Win32Exception(Marshal.GetLastWin32Error()));
        }
        Debug.Assert(bytesRead == bytesToRead);
    }
    finally
    {
        gcHandle.Free();
    }

    GC.KeepAlive(fs);
    return result;
}

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1415:DeclarePInvokesCorrectly")]
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool ReadFile
(
    SafeFileHandle hFile,
    IntPtr lpBuffer,
    uint nNumberOfBytesToRead,
    out uint lpNumberOfBytesRead,
    IntPtr lpOverlapped
);
NOTE: This only works for structs that contain only blittable types, of course. And you must control the layout with [StructLayout] (e.g. LayoutKind.Explicit with FieldOffset, or LayoutKind.Sequential with Pack = 1) to ensure that the struct layout is identical to the binary format of the data in the file.
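For illustration, a call site might look like this (assuming the method above is in scope and a blittable struct such as the Data struct defined in the solution below):
// Read up to 1000 records of type Data starting at the current position.
using (FileStream fs = File.OpenRead("File.bin"))
{
    Data[] records = FastRead<Data>(fs, 1000);
    Console.WriteLine("Read {0} records", records.Length);
}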
For recent versions of C#, you can use Span as mentioned by Marc in the other answer!
Thank you everyone for very helpful comments and answers. Given this input, this is my preferred solution:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Data
{
    public UInt32 dummy;
    public Single val;
}

static void Main(string[] args)
{
    byte[] byteArray = File.ReadAllBytes("File.bin");
    ReadOnlySpan<Data> dataArray = MemoryMarshal.Cast<byte, Data>(new ReadOnlySpan<byte>(byteArray));
    Single prevVal = 0;
    foreach (var v in dataArray)
    {
        if (prevVal != 0)
        {
            var diff = Math.Abs(v.val - prevVal) / prevVal;
            if (diff > 0.25)
                Console.WriteLine("Bad!");
        }
        prevVal = v.val;
    }
}
It indeed works much faster than the original implementation.
You are actually not using the MemoryStream at all currently. Your BinaryReader accesses the file directly. To have the BinaryReader use the MemoryStream instead:
Replace
f.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(f);
...
while (f.Position < f.Length);
with
memStream.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(memStream);
...
while(r.BaseStream.Position < r.BaseStream.Length)

FileStream.CopyTo(Net.ConnectStream): what happens internally?

This code works fine. My question is: what happens within the Net.ConnectStream when I use the CopyTo() method?
System.Net.HttpWebRequest request; // already created elsewhere
using (FileStream fileStream = new FileStream("C:\\myfile.txt", FileMode.Open))
{
    using (Stream str = request.GetRequestStream())
    {
        fileStream.CopyTo(str);
    }
}
More specifically: what happens to the data?
1. Is it written into memory and then uploaded? (What about big files?)
2. Is it written to the network directly? (How does that work?)
Thanks for your answers
It creates a byte[] buffer and calls Read on the source and Write on the destination until the source has no more data.
So when doing this with big files you don't need to be concerned about running out of memory, because you'll only allocate as much as the buffer size, which is 81,920 bytes by default.
Here's the actual implementation:
public void CopyTo(Stream destination)
{
    // ... a bunch of argument validation stuff (omitted)
    this.InternalCopyTo(destination, 81920);
}

private void InternalCopyTo(Stream destination, int bufferSize)
{
    byte[] array = new byte[bufferSize];
    int count;
    while ((count = this.Read(array, 0, array.Length)) != 0)
    {
        destination.Write(array, 0, count);
    }
}
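If the default buffer size isn't what you want, Stream also exposes an overload that takes the buffer size explicitly, for example:
// Copy using a 1 MB buffer instead of the default 81,920 bytes.
fileStream.CopyTo(str, 1024 * 1024);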

WCF service - support for streaming files with Range: bytes support?

I have a WCF service that can return a stream via a WebGet. This is working fine so far.
But what I would like to implement is support for the Range header, so that only parts of the file are returned.
This is my code this far:
public System.IO.Stream GetStream(string mElementID)
{
    // build the filePath
    FileInfo file = GetFile(mElementID);
    try
    {
        FileStream videoStream = File.OpenRead(file.FullName);
        if (WebOperationContext.Current.IncomingRequest.Headers.AllKeys.Contains("Range"))
        {
            long startRange = ...; // get the start range from the header
            long endRange = ...;   // get the end range from the header
            videoStream.Position = startRange;
            // how can I set the end of the range?
            //TODO: Don't forget to add the Content-Range header to the response!
        }
        WebOperationContext.Current.OutgoingResponse.ContentType = GetMimeType(file);
        WebOperationContext.Current.OutgoingResponse.Headers.Add("Accept-Ranges", "bytes");
        return videoStream;
    }
    catch (FileNotFoundException) { }
    catch (IOException)
    {
        throw;
    }
    // throw a 404
    throw new WebFaultException(System.Net.HttpStatusCode.NotFound);
}
I just create a FileStream, and return that. Now I wonder what is the best way to get a range of that Stream.
I think I could set videoStream.Position to the start value of the range, but what is the best way to return only a part of the file, from somewhere in the file to somewhere else in the file?
Do I have to create a MemoryStream and write the relevant bytes into that?
The files that are streamed here are video files, so can be quite big.
You can do as you've suggested yourself. With the FileStream, set the position to the start of the range, create a byte array whose length is the length of the range you want, and then do
videoStream.Read(myByteArray, 0, myByteArray.Length)
(Note that the second and third parameters of Read are the offset and count within the buffer, not within the file, so positioning within the file itself is done through Position or Seek.)
Once you've read into the buffer (byte array) you can place it into a new MemoryStream (which has an overloaded constructor that accepts a byte array). You can then return that MemoryStream.
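To make that concrete, here is a rough sketch of the range handling inside GetStream. The header parsing is deliberately simplified (it only handles a single "bytes=start-end" range), and the 206/Content-Range handling reflects what a well-behaved HTTP response should do rather than tested WCF code:
// Inside GetStream, once the Range header is known to be present:
string rangeHeader = WebOperationContext.Current.IncomingRequest.Headers["Range"]; // e.g. "bytes=0-1023"
string[] parts = rangeHeader.Replace("bytes=", "").Split('-');
long total = videoStream.Length;
long start = long.Parse(parts[0]);
long end = (parts.Length > 1 && parts[1].Length > 0) ? long.Parse(parts[1]) : total - 1;

// Read only the requested slice into a buffer.
byte[] buffer = new byte[end - start + 1];
videoStream.Position = start;
int bytesRead = videoStream.Read(buffer, 0, buffer.Length);
videoStream.Close();

var response = WebOperationContext.Current.OutgoingResponse;
response.StatusCode = System.Net.HttpStatusCode.PartialContent; // 206
response.Headers.Add("Content-Range",
    string.Format("bytes {0}-{1}/{2}", start, start + bytesRead - 1, total));

// Return the slice as a new stream; for very large ranges you would want to chunk this instead.
return new MemoryStream(buffer, 0, bytesRead);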

File.ReadAllBytes Code Refactoring

I came across this piece of code today:
public static byte[] ReadContentFromFile(String filePath)
{
    FileInfo fi = new FileInfo(filePath);
    long numBytes = fi.Length;
    byte[] buffer = null;
    if (numBytes > 0)
    {
        try
        {
            FileStream fs = new FileStream(filePath, FileMode.Open);
            BinaryReader br = new BinaryReader(fs);
            buffer = br.ReadBytes((int)numBytes);
            br.Close();
            fs.Close();
        }
        catch (Exception e)
        {
            System.Console.WriteLine(e.StackTrace);
        }
    }
    return buffer;
}
My first thought is to refactor it down to this:
public static byte[] ReadContentFromFile(String filePath)
{
    return File.ReadAllBytes(filePath);
}
System.IO.File.ReadAllBytes is documented as:
Opens a binary file, reads the contents of the file into a byte array, and then closes the file.
... but am I missing some key difference?
The original code returns a null reference if the file is empty, and won't throw an exception if it can't be read. Personally I think it's better to return an empty array, and to not swallow exceptions, but that's the difference between refactoring and redesigning I guess.
Oh, also, if the file length is changed between finding out the length and reading it, then the original code will read the original length. Again, I think the File.ReadAllBytes behaviour is better.
What do you want to happen if the file doesn't exist?
That's basically the same method if you add the try {...} catch{...} block. The method name, ReadContentFromFile, further proves the point.
Wait a minute... isn't that something a unit test should tell you?
In this case, no, you are not missing anything at all from a file-operation standpoint. Just be aware that dropping the exception handling will change the behavior of the system.
It is a streamlined way of reading the bytes of a file.
NOTE: if you need to set any custom options on the read, then you would need the long form, for example as sketched below.
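For instance, if another process still has the file open for writing and you need explicit sharing flags, the one-liner won't do and you fall back to something along these lines (a sketch; the sharing mode is just an illustration):
public static byte[] ReadContentFromFile(string filePath)
{
    // FileShare.ReadWrite lets us read a file that another process still has open for writing.
    using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var ms = new MemoryStream())
    {
        fs.CopyTo(ms);
        return ms.ToArray();
    }
}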
