I'm saving an uploaded image using this code:
using (var fileStream = File.Create(savePath))
{
stream.CopyTo(fileStream);
}
When the image is saved to its destination folder, it's empty, 0 KB. What could possibly be wrong here? I've checked the stream.Length before copying and it's not empty.
There is nothing wrong with your code. The fact that you say "I've checked the stream.Length before copying and it's not empty" makes me wonder about the stream's position before copying.
If you've already consumed the source stream once, then although the stream isn't zero length, its position may be at the end of the stream, so there is nothing left to copy.
If the stream is seekable (which it will be for a MemoryStream or a FileStream and many others), try putting
stream.Position = 0
just before the copy. This resets the stream position to the beginning, meaning the whole stream will be copied by your code.
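For example (a sketch based on the code in the question, assuming stream is the uploaded image's stream):
if (stream.CanSeek)
    stream.Position = 0; // rewind the already-consumed source stream

using (var fileStream = File.Create(savePath))
{
    stream.CopyTo(fileStream);
}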
I would recommend resetting the source stream's position before calling CopyTo():
stream.Position = 0;
Also make sure to call Flush() on the destination afterwards, to avoid an empty file after the copy:
fileStream.Flush();
This problem started for me after migrating my project from .NET Core 1 to 2.2.
I fixed the issue by setting the Position of my FileStream to zero.
using (var fileStream = new FileStream(savePath, FileMode.Create))
{
fileStream.Position = 0;
await imageFile.CopyToAsync(fileStream);
}
I want to convert an MP3 file to PCM using MP3Sharp (https://github.com/ZaneDubya/MP3Sharp) in a web app, where the MP3 file is passed in as an IFormFile.
It works if I save the file to disk first like this ...
using (Stream fileStream = new FileStream("file.mp3", FileMode.Create))
{
await file.CopyToAsync(fileStream);
}
MP3Stream stream = new MP3Stream("file.mp3");
... but when I try to do it as a stream, without writing to a file, it doesn't work:
using (var fileStream = new MemoryStream())
{
await file.CopyToAsync(fileStream);
MP3Stream stream = new MP3Stream(fileStream);
}
The MP3Stream constructor throws this exception:
MP3Sharp.MP3SharpException: 'Unhandled channel count rep: -1 (allowed values are 1-mono and 2-stereo).'
... Any ideas on what I'm doing wrong?
After your code does file.CopyToAsync(fileStream), the MemoryStream's read/write position points past the written MP3 data, at the end of the MemoryStream.
When the MP3Stream then tries to read from the MemoryStream, it only finds that the end of the MemoryStream has been reached (because the read/write position is already at the end) and throws an exception.[1]
Thus, after copying the MP3 data into the MemoryStream, set the MemoryStream's read/write position back to where it was before you copied the MP3 data into it (in the case of your example code, that's the beginning of the MemoryStream, position 0):
await file.CopyToAsync(fileStream);
fileStream.Position = 0;
Side note: Like FileStream and MemoryStream, MP3Stream is also a Stream and therefore an IDisposable, too. And as you already did with the FileStream and the MemoryStream, you should use a using statement for the MP3Stream as well.
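Putting both points together, the relevant part of your code might look like this (a sketch, using the names from your question):
using (var fileStream = new MemoryStream())
{
    await file.CopyToAsync(fileStream);
    fileStream.Position = 0; // rewind before handing the stream to MP3Stream

    using (MP3Stream stream = new MP3Stream(fileStream))
    {
        // read the decoded PCM data from stream here
    }
}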
[1] The exception you got from the MP3Sharp library is misleading, and as such is kind of a bug in the library. When attempting to read a byte from a stream that is already at its end, the Stream.ReadByte method returns -1 to indicate end-of-stream. As is apparent from the exception message, the MP3Sharp library does not properly treat the -1 value as meaning "the end of the stream has been reached and no (further) data could be read", but misinterprets it as a channel count value.
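To illustrate the Stream.ReadByte contract (a minimal sketch):
var ms = new MemoryStream(new byte[] { 42 });
ms.ReadByte();         // returns 42
int b = ms.ReadByte(); // returns -1: end-of-stream, not a data byte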
Hope you're all doing well!
Let's say I'm downloading a quite large file from an HTTP API endpoint. The API returns application/octet-stream, i.e. I have an HttpContent in my download method.
When I use
using (FileStream fs = new FileStream(somepath, FileMode.Create))
{
// this operation takes a few seconds to write to disk
await httpContent.CopyToAsync(fs);
}
As soon as the FileStream is created, I see the file on the file system at the given path, although it is 0 KB at that point; when CopyToAsync() finishes executing, the file size is as expected.
The problem is that there's another service constantly polling the folder where the above files are saved, and it often picks up 0 KB files, or sometimes even partial files (this seems to be the case when I use WriteAsync(byte[])).
Is there a way to not create the file on the file system until it's ready to be saved...?
One weird workaround I could think of was:
using (var memStream = new MemoryStream())
{
await httpContent.CopyToAsync(memStream);
using (FileStream file = new FileStream(destFilePath, FileMode.Create, FileAccess.Write))
{
memStream.Position = 0;
await memStream.CopyToAsync(file);
}
}
I copy the HttpContent over to a MemoryStream and then copy the MemoryStream over to a FileStream... this seems to have worked, but there's a cost in memory consumption...
Another workaround I could think of was to first save the file into a secondary location and, when the operation is complete, move the file over to the primary folder.
Thank you in Advance,
Johny
I ended up saving the file into a temporary folder and, when the operation is complete, moving the downloaded file to my primary folder. Since the move is atomic, I do not have this issue anymore.
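The pattern looks roughly like this (a sketch; tempFolder and destFilePath are placeholder paths, and File.Move is only atomic when source and destination are on the same volume):
string tempPath = Path.Combine(tempFolder, Path.GetRandomFileName());
using (var fs = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
{
    await httpContent.CopyToAsync(fs);
}
// The file only becomes visible in the watched folder once it is complete.
File.Move(tempPath, destFilePath);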
Thank you for those who commented!
A helper method to turn a string into a zipped up text file:
public static System.Net.Mail.Attachment CreateZipAttachmentFromString(string content, string filename)
{
using (MemoryStream memoryStream = new MemoryStream())
{
using (ZipArchive zipArchive = new ZipArchive(memoryStream, ZipArchiveMode.Update))
{
ZipArchiveEntry zipArchiveEntry = zipArchive.CreateEntry(filename);
using (StreamWriter streamWriter = new StreamWriter(zipArchiveEntry.Open()))
{
streamWriter.Write(content);
}
}
MemoryStream memoryStream2 = new MemoryStream(memoryStream.ToArray(), false);
return new Attachment(memoryStream2, filename + ".zip", MediaTypeNames.Application.Zip);
}
}
I was really hoping to avoid turning the first memory stream into an array, making another memory stream over it to read it, and passing that to the attachment. My logic was: why copy X megabytes to another place in memory and establish another stream pointing at the copy, when it's essentially just what we started out with? It's the multi-megabyte equivalent of redundancy like if (myBool == true).
So I figured instead I would Seek back to the start of the first memory stream and let the attachment read that, or establish another MemoryStream over the buffer of the first, with the offset and length parameters set so it would know what to read.
Neither of these approaches works out, because it seems that ZipArchive only pushes data into the memory stream (in my case, at least) when control falls out of the using block and the ZipArchive is disposed. Disposing it also disposes the MemoryStream, after which nearly everything (other than ToArray() and GetBuffer()) throws ObjectDisposedException.
Ultimately, I can't seek the stream or get its length after the ZipArchive pumps data into it, and before it pumps the data in, the position is usually zero and the length is definitely zero, so those values are useless.
Is there a nice, optimal way, short of configuring my own over-large buffer (which would make the MemoryStream non-expandable), to avoid burning up around 2x the memory of the archive with this method?
Most well-designed streams and stream users in .NET have an additional boolean parameter that can be used to instruct them to leave the "base stream" (a terrible name) open when disposing.
This is ZipArchive's constructor:
public ZipArchive(
Stream stream,
ZipArchiveMode mode,
bool leaveOpen
)
There is no need for a second MemoryStream. You need to do two things:
1. Ensure that the MemoryStream is not disposed before the last usage point. This is harmless: disposing a MemoryStream does nothing helpful and, for compatibility reasons, can never do anything in the future. The .NET Framework has a very high compatibility bar; they often don't even dare to rename fields.
2. Seek to offset zero.
So remove the using around the MemoryStream and use the ctor for ZipArchive that allows you to leave the stream open.
Since the Attachment you are returning makes use of the MemoryStream, you can't dispose it before exiting the method. Again, this is harmless. The only negative point is that the code becomes less obvious.
There's an entirely different approach: you can write your own Stream class that creates the bytes on demand. That way there is no need to buffer the string and ZIP bytes at all. This is much more work, of course. And it does not change the fact that the whole string must sit in memory at once, so it's still not an O(1) space solution.
public static System.Net.Mail.Attachment CreateZipAttachmentFromString(string content, string filename)
{
MemoryStream memoryStream = new MemoryStream();
// leaveOpen: true keeps memoryStream usable after the ZipArchive is disposed
using (ZipArchive zipArchive = new ZipArchive(memoryStream, ZipArchiveMode.Update, leaveOpen: true))
{
ZipArchiveEntry zipArchiveEntry = zipArchive.CreateEntry(filename);
using (StreamWriter streamWriter = new StreamWriter(zipArchiveEntry.Open()))
{
streamWriter.Write(content);
}
}
memoryStream.Position = 0; // rewind so the Attachment reads the archive from the start
return new Attachment(memoryStream, filename + ".zip", MediaTypeNames.Application.Zip);
}
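Side note: the Attachment takes ownership of the stream you pass in, so disposing the Attachment (or the MailMessage that contains it) later will dispose the MemoryStream for you.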
I am using the following code to decompress a GZipStream (using the DotNetZip library), where fs is a FileStream pointing to a .gz file (opened with FileMode.Open, FileAccess.Read, FileShare.ReadWrite):
using (var gz = new GZipStream(fs, CompressionMode.Decompress)) {
using (var sr = new StreamReader(gz)) {
header = sr.ReadLine();
}
}
But if the file is not read to the end (which I prefer when it's not needed, as the file can be huge), it throws
ZlibException("Bad CRC32 in GZIP trailer. (actual(EC084966)!=expected(8FC3EF16))")
on the first closing bracket (actually when trying to Close() the StreamReader).
Now if I call ReadToEnd() before closing the StreamReader (or read all lines using a while (!sr.EndOfStream) loop), it works.
I have observed the same behaviour with a 500 MB and a 200 kB compressed file, so it seems it is not related to file size.
Your insight is very welcome!
Here is a link to a simple dedicated test project.
It works with System.IO.Compression.GZipStream, so this is very strange.
As a conjecture, I suspect that since the CRC block is at the end of the file, aborting the read means DotNetZip cannot verify the integrity while disposing the stream, and it therefore throws the exception.
However, this would not explain why it works with System.IO.Compression.GZipStream.
I found the relevant part of the DotNetZip source code here; it seems they check that the stream was read to the end (see // Make sure we have read to the end of the stream), and they do compute a CRC32, as the exception message shows one.
Check to make sure the drive you're writing to is not out of space. I had this error and it took me a while, but I figured out I was really out of space.
I'm trying to zip a memory stream into another memory stream so I can upload it to a REST API. image is the initial memory stream containing a TIFF image.
WebRequest request = CreateWebRequest(...);
request.ContentType = "application/zip";
MemoryStream zip = new MemoryStream();
GZipStream zipper = new GZipStream(zip, CompressionMode.Compress);
image.CopyTo(zipper);
zipper.Flush();
request.ContentLength = zip.Length; // zip.Length is returning 0
Stream reqStream = request.GetRequestStream();
zip.CopyTo(reqStream);
request.GetResponse().Close();
zip.Close();
To my understanding, anything I write to the GZipStream will be compressed and written to whatever stream was passed into its constructor. When I copy the image stream into zipper, it appears nothing is actually copied (image is 200+ MB). This is my first experience with GZipStream, so it's likely I'm missing something; any advice as to what would be greatly appreciated.
EDIT:
Something I should note that was a problem for me: in the above code, image's position was at the very end of the stream... Thus, when I called image.CopyTo(zipper);, nothing was copied due to the position.
[Edited to remove incorrect info on GZipStream and its constructor args, and updated with the real answer :)]
After you've copied to the zipper, you need to shift the position of the MemoryStream back to zero, because the zipper writing to the memory stream advances its "cursor" just as reading advances the source stream's:
WebRequest request = CreateWebRequest(...);
request.ContentType = "application/zip";
MemoryStream zip = new MemoryStream();
GZipStream zipper = new GZipStream(zip, CompressionMode.Compress);
image.CopyTo(zipper);
zipper.Flush();
zip.Position = 0; // reset the zip position as this will have advanced when written to.
...
One other thing to note: GZipStream is not seekable, so calling .Length on it will throw an exception.
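Also, the compressed data is only fully written out when the GZipStream is closed, so a more reliable variant (a sketch of the question's code, using the leaveOpen constructor so disposing the zipper doesn't close the MemoryStream) is:
MemoryStream zip = new MemoryStream();
using (var zipper = new GZipStream(zip, CompressionMode.Compress, leaveOpen: true))
{
    image.Position = 0;   // rewind the source first (see the question's edit)
    image.CopyTo(zipper);
}                         // disposing the GZipStream writes the final compressed blocks
zip.Position = 0;         // rewind before copying to the request stream
request.ContentLength = zip.Length;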
I don't know anything about C# and its libraries, but I would try using Close instead of (or after) Flush first.
(Java's GZIPOutputStream has the same problem: it doesn't properly flush until Java 7.)
See this example:
http://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream.flush.aspx#Y300
You shouldn't be calling Flush on the stream.