I'm using this code:
for (int i = 0; i < 3; ++i)
{
List<int> tl = new List<int>();
tl.Add(5);
tl.Add(4);
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Append))
{
var bFormatter = new BinaryFormatter();
bFormatter.Serialize(fileStream, tl);
//fileStream.Close();
}
var list = new List<int>();
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Open))
{
var bFormatter = new BinaryFormatter();
//while (fileStream.Position != fileStream.Length)
//{
// list.Add((int)bFormatter.Deserialize(fileStream));
//}
list = (List<int>)bFormatter.Deserialize(fileStream);
//fileStream.Close();
}
}
I expect the .dat file to contain
5 4 5 4 5 4
but it contains only
5 4
The following code also returns only 5 4:
List<int> tl = new List<int>();
tl.Add(5);
tl.Add(4);
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Append))
{
var bFormatter = new BinaryFormatter();
bFormatter.Serialize(fileStream, tl);
}
tl.Clear();
tl.Add(3);
tl.Add(2);
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Append))
{
var bFormatter = new BinaryFormatter();
bFormatter.Serialize(fileStream, tl);
}
var list = new List<int>();
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Open))
{
var bFormatter = new BinaryFormatter();
list = (List<int>)bFormatter.Deserialize(fileStream);
}
It looks like it deserializes only the first portion that was appended. Why doesn't the data append?
UPDATE:
So the solution is:
var list = new List<int>();
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Open))
{
var bFormatter = new BinaryFormatter();
while (fileStream.Position != fileStream.Length)
{
var t = (List<int>)(bFormatter.Deserialize(fileStream));
list.AddRange(t);
}
}
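(This works because each Serialize call writes one complete, self-contained object graph to the stream, so repeated Deserialize calls simply walk the stream one graph at a time until end-of-file.)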
You are appending three lists of ints, one after the other, and only reading the first one back. I think your intent may be to append to a (single) existing list, in which case you'd have to (see the sketch after this list):
Read your list back into memory
Add your new elements
Write the list back out to the file in overwrite (not append) mode
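A minimal sketch of that read-modify-rewrite approach, assuming the same file path as in the question:
var list = new List<int>();
// 1. Read the existing list back into memory, if the file exists yet.
if (File.Exists(@"C:\file.dat"))
{
    using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Open))
    {
        list = (List<int>)new BinaryFormatter().Deserialize(fileStream);
    }
}
// 2. Add the new elements.
list.Add(5);
list.Add(4);
// 3. Write the whole list back out, overwriting the old file (FileMode.Create truncates).
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Create))
{
    new BinaryFormatter().Serialize(fileStream, list);
}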
BinaryFormatter is not documented as being appendable. As it happens, you can usually get away with deserializing multiple times until you reach EOF (and merging manually), but there are other serializers that are explicitly designed to be appendable. For example, protocol buffers is an appendable format: concatenating two serialized streams is identical to merging the objects. Further, if the outer element is a list, appending to the file is identical to adding to the composed list.
With protobuf-net, this is just:
List<int> list = null;
for (int i = 0; i < 3; ++i)
{
List<int> tl = new List<int>();
tl.Add(5);
tl.Add(4);
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Append))
{
Serializer.Serialize(fileStream, tl);
}
using (var fileStream = new FileStream(@"C:\file.dat", FileMode.Open))
{
list = Serializer.Deserialize<List<int>>(fileStream);
}
}
At the end of each loop iteration, list (i.e. after deserialization) has 2, then 4, then 6 elements.
As Marc Gravell mentioned, BinaryFormatter isn't appendable, which means you'll need to reserialize every time you need to modify the file.
Example:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;
class Program
{
const string file = @"C:\temp\file.dat";
static void Main()
{
for (int i = 0; i < 3; ++i)
{
List<int> tl = new List<int>();
tl.Add(5);
tl.Add(4);
AppendToDisk(tl);
}
var list = ReadFromDisk<int>();
foreach (var item in list)
{
Console.Write(item);
}
}
private static void AppendToDisk<T>(IEnumerable<T> collection)
{
var existing = ReadFromDisk<T>().ToList();
existing.AddRange(collection);
PersistToDisk(existing);
}
private static void PersistToDisk<T>(ICollection<T> value)
{
var bFormatter = new BinaryFormatter();
// File.Create both creates the file when missing and truncates existing
// content, so stale bytes from a previous, longer write cannot survive
// (File.OpenWrite would leave them in place).
using (var stream = File.Create(file))
{
bFormatter.Serialize(stream, value);
}
}
private static ICollection<T> ReadFromDisk<T>()
{
if (!File.Exists(file)) return Enumerable.Empty<T>().ToArray();
var bFormatter = new BinaryFormatter();
using (var stream = File.OpenRead(file))
{
return (ICollection<T>)bFormatter.Deserialize(stream);
}
}
}
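Assuming the file does not already exist when the program starts, this prints 545454: three appended pairs, written without separators by Console.Write.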
using (var fileStream = new FileStream("data.bin", FileMode.Append, FileAccess.Write))
using (var bw1 = new BinaryWriter(fileStream))
{
bw1.Write(jmeno);
bw1.Write(date);
bw1.Write(rodnecislo);
bw1.Write(sarze);
}
using (var fileStream = new FileStream("data.bin", FileMode.Open, FileAccess.Read))
using (var br1 = new BinaryReader(fileStream))
{
string readname = br1.ReadString();
int readdate = br1.ReadInt32();
long readcislo = br1.ReadInt64();
long readsarze = br1.ReadInt64();
Console.WriteLine(readname);
Console.WriteLine(readdate);
Console.WriteLine(readcislo);
Console.WriteLine(readsarze);
}
This is what the BinaryWriter code looks like. I don't know how to make the console write out this whole binary file.
OK, then here is how to do it:
using (var fileStream = new FileStream("data.bin", FileMode.Open, FileAccess.Read))
using (var br1 = new BinaryReader(fileStream))
{
while (br1.BaseStream.Position != br1.BaseStream.Length)
{
string readname = br1.ReadString();
int readdate = br1.ReadInt32();
long readcislo = br1.ReadInt64();
long readsarze = br1.ReadInt64();
Console.WriteLine(readname);
Console.WriteLine(readdate);
Console.WriteLine(readcislo);
Console.WriteLine(readsarze);
}
}
For an explanation of the while condition, see e.g. here: Position is the current offset into the stream and Length is its total size in bytes, so the loop stops exactly at end-of-file (this requires a seekable stream).
I have a bunch of JPG images as byte arrays. I want to add these to a zip file, turn the zip file into a byte array, and pass it somewhere else. In a method, I have this code:
var response = //some response object that will hold a byte array
using (var ms = new MemoryStream())
{
using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, true))
{
var i = 1;
foreach (var image in images) // some collection that holds byte arrays.
{
var entry = zipArchive.CreateEntry(i + ".jpg");
using (var entryStream = entry.Open())
using (var compressStream = new MemoryStream(image))
{
compressStream.CopyTo(entryStream);
}
i++;
}
response.ZipFile = ms.ToArray();
}
using (var fs = new FileStream(@"C:\Users\MyName\Desktop\image.zip", FileMode.Create))
{
ms.Position = 0;
ms.CopyTo(fs);
}
}
return response;
Now, I've added a FileStream near the bottom to write it to a zip file right away for testing purposes. This works; I get a zip file with 1 or more images in it on my desktop. However, response.ZipFile cannot be made into a valid zip file in the same way. I have tried this:
using (var ms2 = new MemoryStream(response.ZipFile))
using (var fs = new FileStream(@"C:\Users\Bara\Desktop\image.zip", FileMode.Create))
{
ms2.Position = 0;
ms2.CopyTo(fs);
}
But that creates a zip file that cannot be opened.
What I'm trying to do: turn response.ZipFile into an array that can be turned into a working zip file again. What am I doing wrong in this code?
How do you know that ZipArchive's Dispose doesn't write more to the underlying stream?
You should move this line to be after disposing the ZipArchive:
response.ZipFile = ms.ToArray();
Full code:
var response = //some response object that will hold a byte array
using (var ms = new MemoryStream())
{
using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, true))
{
var i = 1;
foreach (var image in images) // some collection that holds byte arrays.
{
var entry = zipArchive.CreateEntry(i + ".jpg");
using (var entryStream = entry.Open())
using (var compressStream = new MemoryStream(image))
{
compressStream.CopyTo(entryStream);
}
i++;
}
}
response.ZipFile = ms.ToArray();
}
return response;
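With that change the round trip works. As a quick sanity check (hypothetical verification code, not part of the fix), the byte array can be reopened as an archive and its entries listed:
using (var ms2 = new MemoryStream(response.ZipFile))
using (var archive = new ZipArchive(ms2, ZipArchiveMode.Read))
{
    foreach (var entry in archive.Entries)
    {
        Console.WriteLine(entry.FullName); // should print 1.jpg, 2.jpg, ...
    }
}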
My application serializes an object using Json.Net, compresses the resulting JSON, then saves it to a file. The application can also load an object back from one of these files. These objects can be tens of MB in size, and I'm concerned about memory usage, due to the way the existing code creates large strings and byte arrays:
public void Save(MyClass myObject, string filename)
{
var json = JsonConvert.SerializeObject(myObject);
var bytes = Compress(json);
File.WriteAllBytes(filename, bytes);
}
public MyClass Load(string filename)
{
var bytes = File.ReadAllBytes(filename);
var json = Decompress(bytes);
var myObject = JsonConvert.DeserializeObject<MyClass>(json);
return myObject;
}
private static byte[] Compress(string s)
{
var bytes = Encoding.Unicode.GetBytes(s);
using (var ms = new MemoryStream())
{
using (var gs = new GZipStream(ms, CompressionMode.Compress))
{
gs.Write(bytes, 0, bytes.Length);
gs.Close();
return ms.ToArray();
}
}
}
private static string Decompress(byte[] bytes)
{
using (var msi = new MemoryStream(bytes))
{
using (var mso = new MemoryStream())
{
using (var gs = new GZipStream(msi, CompressionMode.Decompress))
{
gs.CopyTo(mso);
return Encoding.Unicode.GetString(mso.ToArray());
}
}
}
}
I was wondering if the Save/Load methods could be replaced with streams? I've found examples of using streams with Json.Net but am struggling to get my head around how to fit in the additional compression stuff.
JsonSerializer has methods to deserialize from a JsonTextReader and to serialize to a StreamWriter, both of which can be created on top of any sort of stream, including a GZipStream. Using them, you can create the following extension methods:
public static partial class JsonExtensions
{
// Buffer sized as recommended by Bradley Grainger, https://faithlife.codes/blog/2012/06/always-wrap-gzipstream-with-bufferedstream/
// But anything smaller than 85,000 bytes should be OK, since objects larger than that go on the large object heap. See:
// https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/large-object-heap
const int BufferSize = 8192;
// Disable writing of BOM as per https://datatracker.ietf.org/doc/html/rfc8259#section-8.1
static readonly Encoding DefaultEncoding = new UTF8Encoding(false);
public static void SerializeToFileCompressed(object value, string path, JsonSerializerSettings settings = null)
{
using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))
SerializeCompressed(value, fs, settings);
}
public static void SerializeCompressed(object value, Stream stream, JsonSerializerSettings settings = null)
{
using (var compressor = new GZipStream(stream, CompressionMode.Compress))
using (var writer = new StreamWriter(compressor, DefaultEncoding, BufferSize))
{
var serializer = JsonSerializer.CreateDefault(settings);
serializer.Serialize(writer, value);
}
}
public static T DeserializeFromFileCompressed<T>(string path, JsonSerializerSettings settings = null)
{
using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
return DeserializeCompressed<T>(fs, settings);
}
public static T DeserializeCompressed<T>(Stream stream, JsonSerializerSettings settings = null)
{
using (var compressor = new GZipStream(stream, CompressionMode.Decompress))
using (var reader = new StreamReader(compressor))
using (var jsonReader = new JsonTextReader(reader))
{
var serializer = JsonSerializer.CreateDefault(settings);
return serializer.Deserialize<T>(jsonReader);
}
}
}
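Usage is then just a pair of calls; myObject, MyClass, and the path below are placeholders for your own object, type, and location:
// Hypothetical usage of the extension methods above.
JsonExtensions.SerializeToFileCompressed(myObject, @"C:\temp\myObject.json.gz");
var roundTripped = JsonExtensions.DeserializeFromFileCompressed<MyClass>(@"C:\temp\myObject.json.gz");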
See Performance Tips: Optimize Memory Usage in the Json.NET documentation.
For those looking for an idea of how to use the extensions from @dbc in UWP apps, I modified the code to this, where the StorageFile is a file you have access to write to:
public static async Task SerializeToFileCompressedAsync(object value, StorageFile file, JsonSerializerSettings settings = null)
{
using (var stream = await file.OpenStreamForWriteAsync())
SerializeCompressed(value, stream, settings);
}
public static void SerializeCompressed(object value, Stream stream, JsonSerializerSettings settings = null)
{
using (var compressor = new GZipStream(stream, CompressionMode.Compress))
using (var writer = new StreamWriter(compressor))
{
var serializer = JsonSerializer.CreateDefault(settings);
serializer.Serialize(writer, value);
}
}
public static async Task<T> DeserializeFromFileCompressedAsync<T>(StorageFile file, JsonSerializerSettings settings = null)
{
using (var stream = await file.OpenStreamForReadAsync())
return DeserializeCompressed<T>(stream, settings);
}
public static T DeserializeCompressed<T>(Stream stream, JsonSerializerSettings settings = null)
{
using (var compressor = new GZipStream(stream, CompressionMode.Decompress))
using (var reader = new StreamReader(compressor))
using (var jsonReader = new JsonTextReader(reader))
{
var serializer = JsonSerializer.CreateDefault(settings);
return serializer.Deserialize<T>(jsonReader);
}
}
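Calling these from UWP code might then look like this (a sketch; the file name and MyClass are placeholders, and the file is created in the app's local folder via Windows.Storage):
// Hypothetical usage; assumes using Windows.Storage; is in scope.
StorageFile file = await ApplicationData.Current.LocalFolder
    .CreateFileAsync("myObject.json.gz", CreationCollisionOption.ReplaceExisting);
await SerializeToFileCompressedAsync(myObject, file);
var restored = await DeserializeFromFileCompressedAsync<MyClass>(file);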
I'm trying to create a ZIP archive with multiple text files as follows:
Dictionary<string, string> Values = new Dictionary<string, string>();
using (var memoryStream = new MemoryStream())
{
string zip = @"C:\Temp\ZipFile.zip";
foreach (var item in Values)
{
using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
{
var file = archive.CreateEntry(item.Key + ".txt");
using (var entryStream = file.Open())
using (var streamWriter = new StreamWriter(entryStream))
{
streamWriter.Write(item.Value);
}
}
}
using (var fileStream = new FileStream(zip, FileMode.Create))
{
memoryStream.Seek(0, SeekOrigin.Begin);
memoryStream.CopyTo(fileStream);
}
}
However, the ZIP is created with only the last text file. What's wrong?
You are creating the ZipArchive on each iteration. Swapping the foreach and the using should solve it:
Dictionary<string, string> Values = new Dictionary<string, string>();
using (var memoryStream = new MemoryStream())
{
string zip = @"C:\Temp\ZipFile.zip";
using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
{
foreach (var item in Values)
{
var file = archive.CreateEntry(item.Key + ".txt");
using (var entryStream = file.Open())
using (var streamWriter = new StreamWriter(entryStream))
{
streamWriter.Write(item.Value);
}
}
}
using (var fileStream = new FileStream(zip, FileMode.Create))
{
memoryStream.Seek(0, SeekOrigin.Begin);
memoryStream.CopyTo(fileStream);
}
}
Each time your foreach loop runs, it opens the archive with ZipArchiveMode.Create. That is the problem: a new zip is generated every time, with only the last text file as its content. Making an exception for each loop run after the first one, so that only the first pass uses Create, would also solve it; see the sketch below.
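A minimal sketch of that idea, keeping the foreach on the outside and switching to ZipArchiveMode.Update (which preserves existing entries) after the first pass; the single-using version above is still simpler:
using (var memoryStream = new MemoryStream())
{
    bool first = true;
    foreach (var item in Values)
    {
        // Create on the first pass; Update afterwards so existing entries survive.
        var mode = first ? ZipArchiveMode.Create : ZipArchiveMode.Update;
        using (var archive = new ZipArchive(memoryStream, mode, true))
        {
            var entry = archive.CreateEntry(item.Key + ".txt");
            using (var entryStream = entry.Open())
            using (var streamWriter = new StreamWriter(entryStream))
            {
                streamWriter.Write(item.Value);
            }
        }
        first = false;
    }
    // memoryStream now contains all entries, as with the single-using version.
}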
None of the similar questions are quite what I'm looking for!
What's wrong with the following code?
files is a text array of file contents, and fileNames is the corresponding filename array.
This code always fails at the second-to-last line with the Save method, but I can't see why the stream would be closed!
result = new MemoryStream();
using (ZipFile zipFile = new ZipFile())
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
using (MemoryStream fs = new MemoryStream(bytes))
{
zipFile.AddEntry(fileNames[i], fs);
}
}
zipFile.Save(result);
}
Thanks for any help - getting desperate here!
This is my solution, based on @spender's first comment, although his solution posted below is possibly nicer.
try
{
result = new MemoryStream();
List<Stream> streams = new List<Stream>();
if (files.Count > 0)
{
using (ZipFile zipFile = new ZipFile())
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
streams.Add(new MemoryStream(bytes));
zipFile.AddEntry(fileNames[i], streams[i]);
}
zipFile.Save(result);
}
}
}
catch (Exception)
{
throw;
}
It seems that calling Save is the point at which the source streams are read. This means you have to keep them undisposed until after the save. Abandon the using statement in this case, as it is impossible to extend its scope beyond the loop. Instead, collect your IDisposables and dispose of them once the save is completed.
result = new MemoryStream();
using (ZipFile zipFile = new ZipFile())
{
List<IDisposable> memStreams = new List<IDisposable>();
try
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
MemoryStream fs = new MemoryStream(bytes);
zipFile.AddEntry(fileNames[i], fs);
memStreams.Add(fs);
}
zipFile.Save(result);
}
finally
{
foreach(var x in memStreams)
{
x.Dispose();
}
}
}
@spender's answer is spot on, but as my 2 cents: if you want something closer to the using pattern, you can use a small helper class:
public class DisposableBucket : IDisposable
{
readonly List<IDisposable> listOfDisposables = new List<IDisposable>();
public TClass Using<TClass>(TClass disposable) where TClass : IDisposable
{
listOfDisposables.Add(disposable);
return disposable;
}
public void Dispose()
{
foreach (var listOfDisposable in listOfDisposables)
{
listOfDisposable.Dispose();
}
}
}
How to use it:
result = new MemoryStream();
using(var bucket = new DisposableBucket())
{
using (var zipFile = new ZipFile())
{
for (int i = 0; i < files.Count(); i++)
{
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
Byte[] bytes = encoding.GetBytes(files[i]);
var fs = bucket.Using(new MemoryStream(bytes));
zipFile.AddEntry(fileNames[i], fs);
}
zipFile.Save(result);
}
}