How to put three files into one in C#?

I'm trying to make a map for a game that I'm planning to create. The map consists of two data files and a picture file.
I want to put them together into a single file, and I only want to use the default libraries.
How can I do this and still be able to separate them later?
One solution would be compression, but I couldn't find a way to compress multiple files using the GZipStream class.

You could use SharpZipLib to create a ZIP file.

Did you consider embedding the files as resources in the assembly (or in a separate assembly)?
A lot depends on the reasons why you want to group them.
Compression will cost time and CPU power.
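For illustration, a minimal sketch of reading an embedded resource, assuming the files were added to the project with Build Action set to Embedded Resource (the resource name "MyGame.Maps.level1.dat" is a hypothetical placeholder):

using System.IO;
using System.Reflection;

// Resource names follow the pattern <DefaultNamespace>.<Folder>.<FileName>.
Assembly assembly = Assembly.GetExecutingAssembly();
using (Stream stream = assembly.GetManifestResourceStream("MyGame.Maps.level1.dat"))
using (var reader = new BinaryReader(stream))
{
    byte[] data = reader.ReadBytes((int)stream.Length);
    // ... use the map data ...
}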

I think you should consider embedding the resources in the assembly as Erno suggests.
But if you really want to pack them into a single file, you could do so by simply writing the length of each stream before the stream itself. You can then read the length prefix and return the next that-many bytes as a Stream. Rough reading/writing methods are below; the target stream could additionally be gzipped. Note that ReadNextStream below buffers each entire file in memory and assumes that no file is larger than int.MaxValue bytes.
But I would not recommend using just the standard libraries.
static void Append(Stream source, Stream target)
{
    BinaryWriter writer = new BinaryWriter(target);
    BinaryReader reader = new BinaryReader(source);
    // Length prefix: tells the reader where this stream ends.
    writer.Write((long)source.Length);
    byte[] buffer = new byte[1024];
    int read;
    do
    {
        // Copy the source in 1 KB chunks until Read returns 0 (end of stream).
        read = reader.Read(buffer, 0, buffer.Length);
        writer.Write(buffer, 0, read);
    }
    while (read > 0);
    writer.Flush();
}
static Stream ReadNextStream(Stream packed)
{
    BinaryReader reader = new BinaryReader(packed);
    // Read the length prefix written by Append; the cast assumes
    // no file is larger than int.MaxValue bytes.
    int streamLength = (int)reader.ReadInt64();
    // ReadBytes loops internally until it has the requested count,
    // unlike a single Read call, which may return fewer bytes.
    byte[] buffer = reader.ReadBytes(streamLength);
    return new MemoryStream(buffer);
}
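For example, a rough sketch of packing the three map files and unpacking them again with the methods above (the file names are hypothetical):

// Packing: append each part, length-prefixed, to one target file.
using (FileStream target = File.Create("map.pack"))
{
    foreach (string name in new[] { "data1.bin", "data2.bin", "picture.png" })
        using (FileStream source = File.OpenRead(name))
            Append(source, target);
}

// Unpacking: read the parts back in the same order.
using (FileStream packed = File.OpenRead("map.pack"))
{
    Stream data1 = ReadNextStream(packed);
    Stream data2 = ReadNextStream(packed);
    Stream picture = ReadNextStream(packed);
}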

Gzip compression only works on one file (it only ever has). You could try ZIP, 7-Zip or some other archive format that allows multiple files. Alternatively, you can TAR the files together first, which was common practice with compress, the scheme gzip was invented to replace.
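If you can target .NET 4.5 or later, the framework itself ships ZIP support in System.IO.Compression (ZipFile needs a reference to the System.IO.Compression.FileSystem assembly). A minimal sketch, with hypothetical file names:

using System.IO.Compression;

// Create a ZIP archive containing the three map files.
using (ZipArchive archive = ZipFile.Open("map.zip", ZipArchiveMode.Create))
{
    archive.CreateEntryFromFile("data1.bin", "data1.bin");
    archive.CreateEntryFromFile("data2.bin", "data2.bin");
    archive.CreateEntryFromFile("picture.png", "picture.png");
}

// Extract the files again later.
ZipFile.ExtractToDirectory("map.zip", "unpacked");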

I had a similar question a while ago here about saving two XML files in one file.
See my answer with code.
"I ended up writing my own Stream, which can be thought of as a multistream. It allows you to treat one stream as multiple streams in succession. i.e. pass a multistream to an xml parser (or anything else) and it'll read up to a marker, which says 'this is the end of the stream'. If you then pass that same stream to another xml parser, it'll read from that marker, to the next one or EOF"
Your basic usage would be as follows (a rough code sketch follows the two lists):
Writing:
Open File Stream
Create MultiStream passing in File Stream in constructor
Write data file to multistream
Call write end of stream marker on multistream
Write 2nd data file to multistream
Call write end of stream marker on multistream
Save picture to multistream
Close multistream
Close file stream
Reading:
Open File Stream
Create MultiStream passing in File Stream in constructor
Read data file
Call advance to next stream on multistream
Read 2nd data file
Call advance to next stream on multistream
Read image (Image.FromStream() etc.)
Close multistream
Close file stream
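In code, the steps above would look roughly like this. MultiStream comes from the linked answer; the method names WriteStreamSeparator and AdvanceToNextStream, and the data-file helpers, are hypothetical stand-ins for that implementation:

// Writing
using (FileStream file = File.Create("map.pack"))
using (var multi = new MultiStream(file))
{
    WriteDataFile1(multi);               // first data file
    multi.WriteStreamSeparator();        // end-of-stream marker
    WriteDataFile2(multi);               // second data file
    multi.WriteStreamSeparator();
    picture.Save(multi, System.Drawing.Imaging.ImageFormat.Png);
}

// Reading
using (FileStream file = File.OpenRead("map.pack"))
using (var multi = new MultiStream(file))
{
    ReadDataFile1(multi);
    multi.AdvanceToNextStream();
    ReadDataFile2(multi);
    multi.AdvanceToNextStream();
    Image picture = Image.FromStream(multi);
}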

Related

Do I need to 'rewind' a stream before reading what was written to it?

With this code:
using (var stream = new MemoryStream())
{
    thumbnail.Save(stream); // you get the idea
    stream.Position = 0; // <- is this needed?
    WriteStreamToDisk(stream);
}
If I have a method writing to a memory stream, and then I want to write that stream to disk, do I need to set the position to 0?
Or, do streams have different read / write pointers?
A stream has only a single position, which is used for both reading and writing. So, assuming that...
thumbnail.Save(stream) doesn't rewind the stream after it's done writing to it, and
WriteStreamToDisk(stream) doesn't rewind the stream before it starts reading from it,
then yes, you will need to rewind the stream yourself.
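A quick, self-contained way to convince yourself that there is only one position:

using (var stream = new MemoryStream())
{
    stream.Write(new byte[] { 1, 2, 3 }, 0, 3);
    Console.WriteLine(stream.Position); // 3 - writing advanced the position
    Console.WriteLine(stream.ReadByte()); // -1 - reads now see "end of stream"
    stream.Position = 0; // rewind
    Console.WriteLine(stream.ReadByte()); // 1 - the data is readable again
}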

Can't Use Stream.Write method while playing same file in MediaElement

So let's say I have a file of which around 2 MB has already been downloaded and written, and which is being played using a MediaElement. While the media is playing, I want to download and write the rest of the file.
If I use this method, I get an IOException indicating the file is already in use.
using (Stream WriteStream = new FileStream(filename, FileMode.OpenOrCreate))
{
    WriteStream.Seek(seekpos, SeekOrigin.Begin);
    WriteStream.Write(buffer, 0, buffer.Length);
    WriteStream.Close();
}
But if I use this method, it works fine.
FileStream1 = new System.IO.FileStream(filename, System.IO.FileMode.Append, System.IO.FileAccess.Write, System.IO.FileShare.ReadWrite);
FileStream1.Write(buffer, 0, buffer.Length);
So I could use the second method, but I want to be able to seek and write at certain positions, which I can't do with the second method. So is there any way I can use the first method? Does it have something to do with the FileMode or FileAccess?
Thanks :)
The closest you might get is what's known as a synchronized stream: essentially, multiple threads acting on the same stream. You'd have to get the locking issue resolved, especially since you may have no way of making the MediaElement open the file with a shared lock.
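For reference, the BCL's synchronized wrapper is a one-liner. This sketch only shows the call; it does not by itself solve the file-sharing problem with MediaElement:

// Wraps a stream so that every operation takes a lock first,
// making the wrapper safe to use from multiple threads.
Stream shared = Stream.Synchronized(new FileStream(filename,
    FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite));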
Another approach might be to write to one file while the MediaElement plays from another. When the MediaElement is done with file A, play B while the download streams into a new file C. Repeat. Then, at the end, merge them together.
Never mind, I figured it out. The missing piece was FileShare.ReadWrite, which lets this stream coexist with the handle the MediaElement already has open. Note that FileMode.Append does not allow seeking before the end of the file, so FileMode.OpenOrCreate is used instead; with that I can seek and write at any position:
FileStream1 = new System.IO.FileStream(filename, System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.Write, System.IO.FileShare.ReadWrite);
FileStream1.Seek(seekpos, SeekOrigin.Begin);
FileStream1.Write(buffer, 0, buffer.Length);

Delete last 4 bytes of a file without opening the file?

This is a question I was asked in an interview, and I still can't find a way to do it:
Suppose I have a .txt file and I want to delete the last 4 characters from its content without opening the file. The first question is: is it really doable? If yes, what is the way to do it?
I guess you can't read the content of the file. So if you can "open" it with write-only access, you could do:
using (var fileStream = File.Open("initDoc.txt", FileMode.Open, FileAccess.Write))
{
    fileStream.SetLength(fileStream.Length - 4);
}
Of course you would need additional checks to make sure you are subtracting the correct number of bytes depending on the encoding, not subtracting more than the length etc.
If you can't use FileMode.Open, you can use an overload of the FileStream constructor that takes a SafeFileHandle. To acquire a SafeFileHandle to a file, you need to use C# interop. In the example below I have wrapped the interop code that gets a file handle in a class called "UnmanagedFileLoader":
var unmanagedFileLoader = new UnmanagedFileLoader("initDoc.txt");
using (var fileStream = new FileStream(unmanagedFileLoader.Handle, FileAccess.Write))
{
    fileStream.SetLength(fileStream.Length - 4);
}
The UnmanagedFileLoader internally uses the unmanaged CreateFile function to open an existing file with write permissions:
handleValue = CreateFile(Path, GENERIC_WRITE, 0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
For more info on how to acquire a SafeFileHandle, check out this link:
http://msdn.microsoft.com/en-us/library/microsoft.win32.safehandles.safefilehandle%28v=vs.110%29.aspx
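For reference, a sketch of the interop declaration such a wrapper needs. The constants and the shape of UnmanagedFileLoader follow the MSDN sample linked above; treat this as an outline rather than production code:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class UnmanagedFileLoader
{
    const uint GENERIC_WRITE = 0x40000000;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(string lpFileName, uint dwDesiredAccess,
        uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    public SafeFileHandle Handle { get; private set; }

    public UnmanagedFileLoader(string path)
    {
        // Open an existing file for writing via the Win32 API.
        Handle = CreateFile(path, GENERIC_WRITE, 0, IntPtr.Zero,
            OPEN_EXISTING, 0, IntPtr.Zero);
        if (Handle.IsInvalid)
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
    }
}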
If you want to skip the FileStream approach entirely, a third way would be to use StreamReader and StreamWriter: read the file with StreamReader, minus the last 4 characters, and then write the result with StreamWriter. But I would still recommend the FileStream examples above.
EDIT: I assume "opening the file" means "getting a handle to the file".
Sure it's possible:
Open a handle to the drive that contains the file
Get the file system type
Scan the structure that contains information about all files: the MFT (for NTFS), the FAT records, etc.
Find the entry that corresponds to your file
Update the entry (write) by subtracting 4 from the value that stores the "file size" information :)
If your concern is about reading all the data for a long file: that isn't necessary. If we assume you really do mean bytes, simply:
using (var file = File.Open(path, FileMode.Open, FileAccess.Write))
{
    file.SetLength(file.Length - 4);
}
This does not read the contents of the file.
If you mean characters, then you need to think very carefully about the encoding - 4 characters is not necessarily 4 bytes.
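A quick illustration of why the byte count matters (a sketch; the string is arbitrary):

// Four characters, but not four bytes once encoded.
string lastFour = "日本語!";
int utf8Bytes = System.Text.Encoding.UTF8.GetByteCount(lastFour);     // 10
int utf16Bytes = System.Text.Encoding.Unicode.GetByteCount(lastFour); // 8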

Why does gzip/deflate compressing a small file result in many trailing zeroes?

I'm using the following code to compress a small (~4kB) HTML file in C#.
byte[] fileBuffer = ReadFully(inFile, ResponsePacket.maxResponsePayloadLength); // Read the entire requested HTML file into a memory buffer
inFile.Close(); // Close the requested HTML file
byte[] payload;
using (MemoryStream compMS = new MemoryStream()) // Create a new memory stream to hold the compressed HTML data
{
    using (GZipStream gzip = new GZipStream(compMS, CompressionMode.Compress)) // Create a new GZip object pointing to the empty memory stream
    {
        gzip.Write(fileBuffer, 0, fileBuffer.Length); // Compress the file buffer and write it to the empty memory stream
        gzip.Close(); // Close the GZip object
    }
    payload = compMS.GetBuffer(); // Write the compressed file buffer data in the memory stream to a byte buffer
}
The resulting compressed data is about 2k, but about half of it is just zeroes. This is for a very bandwidth sensitive application (which is why I'm bothering to compress 4kB in the first place), so the extra 1kB of zeroes is wasted valuable space. My best guess would be that the compression algorithm is padding out the data to a block boundary. If so, is there any way to override this behavior or change the block size? I get the same results with vanilla .NET GZipStream and zlib's GZipStream, as well as DeflateStream.
Wrong MemoryStream method. GetBuffer() returns the underlying buffer, which is always at least as large as the data in the stream. It is very efficient because no copy needs to be made.
But you need the ToArray() method here. Or use the Length property to know how many bytes of the buffer are valid.
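Applied to the code above, the fix is a one-line change:

payload = compMS.ToArray(); // copies exactly Length bytes, no zero padding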

Reliable way to convert a file to a byte[]

I found the following code on the web:
private byte[] StreamFile(string filename)
{
    FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
    // Create a byte array of file stream length
    byte[] ImageData = new byte[fs.Length];
    // Read block of bytes from stream into the byte array
    fs.Read(ImageData, 0, System.Convert.ToInt32(fs.Length));
    // Close the File Stream
    fs.Close();
    return ImageData; // return the byte data
}
Is it reliable enough to use to convert a file to byte[] in C#, or is there a better way to do this?
byte[] bytes = System.IO.File.ReadAllBytes(filename);
That should do the trick. ReadAllBytes opens the file, reads its contents into a new byte array, then closes it. Here's the MSDN page for that method.
byte[] bytes = File.ReadAllBytes(filename)
or ...
var bytes = File.ReadAllBytes(filename)
Not to repeat what everyone has already said, but keep the following cheat sheet handy for file manipulation:
System.IO.File.ReadAllBytes(filename);
File.Exists(filename)
Path.Combine(folderName, restOfThePath);
Path.GetFullPath(path); // converts a relative path to absolute one
Path.GetExtension(path);
All these answers with .ReadAllBytes(). Another, similar (I won't say duplicate, since they were trying to refactor their code) question was asked on SO here: Best way to read a large file into a byte array in C#?
A comment was made on one of the posts regarding .ReadAllBytes():
File.ReadAllBytes throws OutOfMemoryException with big files (tested with 630 MB file
and it failed) – juanjo.arana Mar 13 '13 at 1:31
A better approach, to me, would be something like this, with BinaryReader:
public static byte[] FileToByteArray(string fileName)
{
    byte[] fileData = null;
    using (FileStream fs = File.OpenRead(fileName))
    using (var binaryReader = new BinaryReader(fs)) // dispose the reader as well
    {
        fileData = binaryReader.ReadBytes((int)fs.Length);
    }
    return fileData;
}
But that's just me...
Of course, this all assumes you have the memory to handle the byte[] once it is read in, and I didn't put in the File.Exists check to ensure the file is there before proceeding, as you'd do that before calling this code.
Looks good enough as a generic version. You can modify it to meet your needs, if they're specific enough.
Also test for exceptions and error conditions, such as the file not existing or not being readable.
You can also do the following to save some space:
byte[] bytes = System.IO.File.ReadAllBytes(filename);
Others have noted that you can use the built-in File.ReadAllBytes. The built-in method is fine, but it's worth noting that the code you post above is fragile for two reasons:
Stream is IDisposable - you should place the FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read) initialization in a using statement to ensure the file is closed. Failure to do so may mean that the stream remains open if a failure occurs, which will mean the file remains locked - and that can cause other problems later on.
fs.Read may read fewer bytes than you request. In general, the .Read method of a Stream instance will read at least one byte, but not necessarily all bytes you ask for. You'll need to write a loop that retries reading until all bytes are read. This page explains this in more detail.
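A minimal sketch of such a loop, assuming the whole length fits in memory:

static byte[] ReadFully(Stream stream, int length)
{
    byte[] buffer = new byte[length];
    int offset = 0;
    while (offset < length)
    {
        // Read may return fewer bytes than requested; keep going
        // until the buffer is full or the stream ends early.
        int read = stream.Read(buffer, offset, length - offset);
        if (read == 0)
            throw new EndOfStreamException("Stream ended before all bytes were read.");
        offset += read;
    }
    return buffer;
}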
string filePath = @"D:\MiUnidad\testFile.pdf";
// await requires an async method; ReadAllBytesAsync is available from .NET Core 2.0 / .NET Standard 2.1 onwards.
byte[] bytes = await System.IO.File.ReadAllBytesAsync(filePath);