I'm reading a file inside a zip file using streams, like this:
public static Stream ReadFile(Models.Document document, string path, string password, string fileName)
{
    FileStream fsIn = System.IO.File.OpenRead(Path.Combine(path, $"{document.guid}.crypt"));
    var zipFile = new ZipFile(fsIn)
    {
        //Password = password,
        IsStreamOwner = true
    };
    var zipEntry = zipFile.GetEntry(fileName);
    //zipEntry.AESKeySize = 256;
    Stream zipStream = zipFile.GetInputStream(zipEntry);
    return zipStream;
}
I'm having trouble closing the FileStream fsIn: it's no longer accessible after ReadFile returns, but if I close it inside the method, the zipStream I'm returning is closed along with it.
How can I close fsIn and still read from the stream returned by my method?
You should change your return type to an object that contains both the parent FileStream and the inner entry stream, so that the consumer of your function can close both streams when it's done. Alternatively, you could subclass Stream with your own type that assumes ownership of both streams, but the former is probably simpler, as otherwise you'd have to proxy all of Stream's methods.
Like so:
sealed class ReadFileResult : IDisposable
// (use a `sealed` class with a non-virtual Dispose method for simpler compliance with IDisposable)
{
    private readonly FileStream zipFileStream;
    public Stream InnerFileStream { get; }

    internal ReadFileResult( FileStream zipFileStream, Stream innerFileStream )
    {
        this.zipFileStream = zipFileStream;
        this.InnerFileStream = innerFileStream;
    }

    public void Dispose()
    {
        this.InnerFileStream.Dispose();
        this.zipFileStream.Dispose();
    }
}
public static ReadFileResult ReadFile(Models.Document document, string path, string password, string fileName)
{
    // ...
    return new ReadFileResult( zipFileStream: fsIn, innerFileStream: zipStream );
}
Consumed like so:
void Foo()
{
    using( ReadFileResult rfr = ReadFile( ... ) )
    {
        Stream s = rfr.InnerFileStream;
        // ..
    }
}
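The same ownership pattern works with any archive library. Here is a minimal, self-contained sketch using the BCL's System.IO.Compression (not SharpZipLib, which the question uses); ZipArchive plays the role of ZipFile, and all names here are illustrative:

```csharp
using System;
using System.IO;
using System.IO.Compression;

// The wrapper owns the archive (which owns the underlying FileStream)
// and the entry stream; disposing it releases everything.
sealed class ZipEntryResult : IDisposable
{
    private readonly ZipArchive archive;   // owns the underlying FileStream
    public Stream EntryStream { get; }

    internal ZipEntryResult(ZipArchive archive, Stream entryStream)
    {
        this.archive = archive;
        this.EntryStream = entryStream;
    }

    public void Dispose()
    {
        this.EntryStream.Dispose();
        this.archive.Dispose();            // also disposes the FileStream
    }
}

static class ZipReader
{
    public static ZipEntryResult ReadEntry(string zipPath, string entryName)
    {
        FileStream fsIn = File.OpenRead(zipPath);
        // leaveOpen: false => disposing the archive disposes fsIn as well,
        // the equivalent of SharpZipLib's IsStreamOwner = true
        var archive = new ZipArchive(fsIn, ZipArchiveMode.Read, leaveOpen: false);
        ZipArchiveEntry entry = archive.GetEntry(entryName)
            ?? throw new FileNotFoundException(entryName);
        return new ZipEntryResult(archive, entry.Open());
    }
}
```

With SharpZipLib the shape is identical; only the archive type and the entry-stream call change.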
I've got a function that makes something equivalent to a web request, and it returns a formatted CSV. My goal is to now import this data into CsvHelper. However, I can't seem to get CSVParser to read from static text, only from a stream.
I could write the output to a file then read it back, but I feel that doesn't make much sense here.
I'm not tied down at all to CsvHelper, however I can't seem to find a CSV library that supports this behavior. How should I do this?
var csvString = functionThatReturnsCsv()
/* as string:
columnA,columnB
dataA,dataB
*/
// my goal
???.parse(csvString)
You can convert the string to a Stream in-memory and then use that as the source for your CSV reader:
public static Stream StringAsStream(string value)
{
    return StringAsStream(value, System.Text.Encoding.UTF8);
}

public static Stream StringAsStream(string value, System.Text.Encoding encoding)
{
    var bytes = encoding.GetBytes(value);
    return new MemoryStream(bytes);
}
Usage:
using (var stream = StringAsStream("hello"))
{
    // csv reading code here
}
or
using (var stream = StringAsStream("hello", Encoding.ASCII))
{
    // csv reading code here
}
Note If you are reading from a source that can return a Stream (like a web request), you should use that Stream rather than doing this.
You could use StringReader. The CsvReader constructor takes a TextReader argument rather than a Stream. If you did have a stream instead of a string, just replace StringReader with StreamReader.
public static void Main(string[] args)
{
    using (var reader = new StringReader(FunctionThatReturnsCsv()))
    using (var csv = new CsvReader(reader))
    {
        var results = csv.GetRecords<Foo>().ToList();
    }
}

public static string FunctionThatReturnsCsv()
{
    return "columnA,columnB\ndataA,dataB";
}

public class Foo
{
    public string columnA { get; set; }
    public string columnB { get; set; }
}
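If you'd rather avoid a library for input this simple, a StringReader loop is enough. A minimal sketch (names are illustrative), assuming no quoted fields, embedded commas, or escaped newlines; for anything beyond that, stick with CsvHelper:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class TinyCsv
{
    // Parses a header row plus data rows into dictionaries keyed by column.
    // Assumes plain comma-separated values with no quoting or escaping.
    public static List<Dictionary<string, string>> Parse(string csv)
    {
        var rows = new List<Dictionary<string, string>>();
        using var reader = new StringReader(csv);
        string[] header = reader.ReadLine()?.Split(',') ?? Array.Empty<string>();
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            var fields = line.Split(',');
            var row = new Dictionary<string, string>();
            for (int i = 0; i < header.Length && i < fields.Length; i++)
                row[header[i]] = fields[i];
            rows.Add(row);
        }
        return rows;
    }
}
```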
Using Azure storage, I'm writing to a blob using a stream. I have a method something like this:
public async Task<BlobStreamContainer> GetBlobStreamAsync(string filename, string contentType = "text/csv")
{
    var blob = container.GetBlockBlobReference($"(unknown)--{Guid.NewGuid().ToString()}.csv");
    blob.Properties.ContentType = contentType;
    return new BlobStreamContainer(blob.Uri.ToString(), await blob.OpenWriteAsync());
}
Where BlobStreamContainer is just a simple object so I can keep track of the filename and the stream together:
public class BlobStreamContainer : IDisposable
{
    public CloudBlobStream Stream { get; private set; }
    public string Filename { get; private set; }

    public BlobStreamContainer(string filename, CloudBlobStream stream)
    {
        Stream = stream;
        Filename = filename;
    }

    public void Dispose()
    {
        Stream?.Dispose();
    }
}
And then I use it something like this:
using (var blobStream = await GetBlobStreamAsync(filename))
using (var outputStream = new StreamWriter(blobStream.Stream))
using (var someInputStream = ...)
{
    try
    {
        outputStream.WriteLine("write some stuff...");
        //....processing
        if (someCondition)
        {
            throw new MyException("can't write the file");
        }
        //....more processing
        outputStream.Flush();
    }
    catch (MyException e)
    {
        // what to do here? I want to stop writing
        // and remove any trace of the file in azure
        throw; // let the higher ups handle this
    }
}
Where someCondition is something I know beforehand (obviously there's more going on, involving processing an input stream and writing out as I go). If everything is fine, this works great. My problem is figuring out the best way to handle the case where an exception is thrown during writing.
I tried just deleting the file in the catch like this:
DeleteBlob(blobStream.Filename);
where:
public void DeleteBlob(string filename)
{
    var blob = container.GetBlobReference(filename);
    blob.Delete();
}
But the problem is that the file might not have been created yet, so this throws a Microsoft.WindowsAzure.Storage.StorageException telling me the file wasn't found (and then the file ends up getting created anyway!).
So what would be the cleanest way to handle this?
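One way to sidestep the not-found exception is to give the cleanup "delete if exists" semantics, which the Azure storage SDK exposes directly as CloudBlob.DeleteIfExists / DeleteIfExistsAsync. Here is a sketch of that shape with a local file standing in for the blob (the helper names are illustrative, not part of any SDK): on failure, the cleanup tolerates a target that was never created, then rethrows.

```csharp
using System;
using System.IO;

static class FailSafeWriter
{
    // Writes via a callback; if the callback throws, a best-effort cleanup
    // removes whatever was created. The existence check gives the cleanup
    // "delete if exists" semantics, so it never throws for a file that was
    // never created - the same idea as CloudBlob.DeleteIfExists for blobs.
    public static void WriteOrCleanUp(string path, Action<StreamWriter> write)
    {
        try
        {
            using var output = new StreamWriter(path);
            write(output);
        }
        catch
        {
            if (File.Exists(path))  // tolerate "never created"
                File.Delete(path);
            throw;                  // let the higher ups handle this
        }
    }
}
```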
My code:
public class UplaodedFile
{
    public UploadedFile File = null;
    public string Description = null;
    public string OriginalFileName = null;
    public byte[] inputStream;

    public UplaodedFile(UploadedFile file, string desc, string FileName, byte[] inputStream)
    {
        File = file;
        Description = desc;
        OriginalFileName = FileName;
        inputStream = inputStream;
    }
}
I am creating an object as below:
UplaodedFile uploadedfile = new UplaodedFile(uploaded_file, description, originalFileName, file_contents);
and when I try to access the uploadedfile.inputStream, I am getting null.
What am I doing wrong?
You're referring to the constructor's argument inside the constructor.
Instead of
inputStream = inputStream;
You need to write
this.inputStream = inputStream;
By writing
inputStream = inputStream;
you only assign the parameter inputStream to itself. Since your class field is also named inputStream, you have to tell the compiler to target the field by using this:
this.inputStream = inputStream;
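The shadowing is easy to demonstrate in isolation. A minimal sketch with a hypothetical class (not from the question's code) whose constructor parameter shadows a field of the same name:

```csharp
using System;

class Shadowed
{
    public byte[] inputStream;  // field

    // The parameter shadows the field: without `this.`, the assignment
    // targets the parameter and the field stays null.
    public Shadowed(byte[] inputStream, bool qualify)
    {
        if (qualify)
            this.inputStream = inputStream; // field receives the value
        else
#pragma warning disable CS1717 // intentional self-assignment for demonstration
            inputStream = inputStream;      // parameter assigned to itself
#pragma warning restore CS1717
    }
}
```

The compiler even flags the self-assignment with warning CS1717, which is a good hint that something is wrong.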
Is it possible to write protobuf-serialized content directly to a SharpZipLib stream? When I try, it looks like the provided stream isn't filled with the data from protobuf. Later I need to get the deserialized entity back from the provided zip stream.
My code looks like this:
private byte[] ZipContent(T2 content)
{
    const short COMPRESSION_LEVEL = 4; // 0-9
    const string ENTRY_NAME = "DefaultEntryName";
    byte[] result = null;

    if (content == null)
        return result;

    IStreamSerializerProto<T2> serializer = this.GetSerializer(content.GetType());

    using (MemoryStream outputStream = new MemoryStream())
    {
        using (ZipOutputStream zipOutputStream = new ZipOutputStream(outputStream))
        {
            zipOutputStream.SetLevel(COMPRESSION_LEVEL);
            ZipEntry entry = new ZipEntry(ENTRY_NAME);
            entry.DateTime = DateTime.Now;
            zipOutputStream.PutNextEntry(entry);
            serializer.Serialize(zipOutputStream, content);
        }
        result = outputStream.ToArray();
    }
    return result;
}
private class ProtobufStreamSerializer<T3> : IStreamSerializerProto<T3>
{
    public ProtobufStreamSerializer()
    {
        ProtoBuf.Serializer.PrepareSerializer<T3>();
    }

    public void Serialize(Stream outputStream, T3 content)
    {
        Serializer.Serialize(outputStream, content);
    }

    public T3 Deserialize(Stream inputStream)
    {
        T3 deserializedObj;
        using (inputStream)
        {
            deserializedObj = ProtoBuf.Serializer.Deserialize<T3>(inputStream);
        }
        return deserializedObj;
    }
}
Sample of a class which I'm trying to serialize:
[Serializable]
[ProtoContract]
public class Model
{
    [XmlElement("ModelCode")]
    [ProtoMember(1)]
    public int ModelCode { get; set; }
    ...
}
This is the problem, I believe (in the original code in the question):
public void Serialize(Stream outputStream, T3 content)
{
    using (var stream = new MemoryStream())
    {
        Serializer.Serialize(stream, content);
    }
}
You're completely ignoring outputStream and instead writing the data to a new MemoryStream, which is then discarded.
I suspect you just want:
public void Serialize(Stream outputStream, T3 content)
{
    Serializer.Serialize(outputStream, content);
}
I'd also suggest removing the using statement from your Deserialize method: I'd expect the caller to be responsible for disposing of the input stream when they're finished with it. Your method can be simplified to:
public T3 Deserialize(Stream inputStream)
{
    return ProtoBuf.Serializer.Deserialize<T3>(inputStream);
}
The code (with the edit pointed out by Jon) looks fine. Here it is working:
static void Main()
{
    var obj = new Bar { X = 123, Y = "abc" };
    var wrapper = new Foo<Bar>();
    var blob = wrapper.ZipContent(obj);
    var clone = wrapper.UnzipContent(blob);
}
where Bar is:
[ProtoContract]
class Bar
{
    [ProtoMember(1)]
    public int X { get; set; }
    [ProtoMember(2)]
    public string Y { get; set; }
}
and Foo<T> is your class (I didn't know the name), where I have added:
public T2 UnzipContent(byte[] data)
{
    using (var ms = new MemoryStream(data))
    using (var zip = new ZipInputStream(ms))
    {
        var entry = zip.GetNextEntry();
        var serializer = this.GetSerializer(typeof(T2));
        return serializer.Deserialize(zip);
    }
}
Also, note that compression is double-edged. In the example above, the underlying size (i.e. if we just write to a MemoryStream) is 7 bytes; ZipOutputStream "compresses" those 7 bytes up to 179 bytes, because the zip container's overhead dwarfs such a small payload. Compression works best on larger objects, usually when there is lots of text content.
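That overhead is easy to measure with any compressed container. A small sketch using the BCL's GZipStream (not SharpZipLib, but the trade-off is the same; a zip container adds even more per-entry overhead than gzip's fixed header and footer):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static class CompressionOverheadDemo
{
    // Compresses a payload and returns the compressed size in bytes,
    // to show that container overhead can exceed tiny payloads while
    // large repetitive payloads shrink dramatically.
    public static int CompressedSize(byte[] payload)
    {
        using var buffer = new MemoryStream();
        using (var gzip = new GZipStream(buffer, CompressionLevel.Optimal))
            gzip.Write(payload, 0, payload.Length);
        return buffer.ToArray().Length; // ToArray is valid after disposal
    }
}
```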
I have this method I created:
public static bool DeleteFile(FileInfo fileInfo)
{
    try
    {
        fileInfo.Delete();
        return true;
    }
    catch (Exception exception)
    {
        LogManager.LogError(exception);
        return false;
    }
}
Now I wrote the following unit test:
[TestMethod]
public void DeleteFileSuccessFul()
{
    string fileName = "c:\\Temp\\UnitTest3.txt";
    FileInfo fileInfo = new FileInfo(fileName);
    File.Create(Path.Combine(fileName));
    bool success = FileActions.DeleteFile(fileInfo);
    Assert.IsTrue(success);
}
The test fails on bool success = FileActions.DeleteFile(fileInfo); because the file is in use by another process.
How can I change my test so it works?
You have to call the Dispose method on the FileStream object returned by File.Create to release the handle to that file:
[TestMethod]
public void DeleteFileSuccessFul()
{
    string fileName = "c:\\Temp\\UnitTest3.txt";
    FileInfo fileInfo = new FileInfo(fileName);
    using (File.Create(Path.Combine(fileName)))
    {
    }
    bool success = FileActions.DeleteFile(fileInfo);
    Assert.IsTrue(success);
}
UPDATE: the using block provides a convenient syntax that ensures the Dispose method of an IDisposable object gets called after leaving the scope of the block, even if an exception occurs. The code above could equivalently be written with a try-finally block:
[TestMethod]
public void DeleteFileSuccessFul()
{
    string fileName = "c:\\Temp\\UnitTest3.txt";
    FileInfo fileInfo = new FileInfo(fileName);
    FileStream fileStream = null;
    try
    {
        fileStream = File.Create(Path.Combine(fileName));
    }
    finally
    {
        if (fileStream != null)
            fileStream.Dispose();
    }
    bool success = FileActions.DeleteFile(fileInfo);
    Assert.IsTrue(success);
}