We have a code snippet that converts a Stream to a byte[] and later displays it as an image in an ASPX page.
The problem is that the image is displayed the first time the page loads, but not on later requests (reload etc.).
The only difference I observed is that the Stream position of 'input' (in ConvertStreamtoByteArray) is 0 the first time, and > 0 on subsequent calls. How do I fix this?
context.Response.Clear();
context.Response.ContentType = "image/pjpeg";
context.Response.BinaryWrite(ConvertStreamtoByteArray(imgStream));
context.Response.End();
private static byte[] ConvertStreamtoByteArray(Stream input)
{
    var buffer = new byte[16 * 1024];
    using (var ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
I think the source is: Creating a byte array from a stream
I think the code snippet is from the above link; everything matches except the method name.
You're (most likely) holding a reference to imgStream, so the same stream is being used every time ConvertStreamtoByteArray is called.
The problem is that streams track their Position. This starts at 0 when the stream is new, and ends up at the end when you read the entire stream.
Usually the solution in this case is to set the Position back to 0 prior to copying the content of the stream.
In your case, you should probably: 1) convert imgStream to a byte array the first time it's needed, 2) cache this byte array rather than the stream, 3) dispose of and throw away imgStream, and 4) pass the byte array to the Response from this point onwards.
See, this is what happens when you copypasta code from the internets. Weird stuff like this, repeatedly converting the same stream to a byte array (waste of time!), and you end up not using the framework to do your work for you. Manually copying streams is so 2000s.
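A minimal sketch of those four steps (the _cachedImageBytes field and GetImageBytes helper are illustrative names, not from the original code):

```csharp
// Sketch: convert the stream once, cache the bytes, dispose the stream.
// _cachedImageBytes and GetImageBytes are hypothetical names.
private static byte[] _cachedImageBytes;

private static byte[] GetImageBytes(Stream imgStream)
{
    if (_cachedImageBytes == null)
    {
        using (var ms = new MemoryStream())
        {
            imgStream.CopyTo(ms); // .NET 4.0+, replaces the manual copy loop
            _cachedImageBytes = ms.ToArray();
        }
        imgStream.Dispose(); // the stream is no longer needed
    }
    return _cachedImageBytes;
}
```

The handler would then call context.Response.BinaryWrite(GetImageBytes(imgStream)); and every request after the first serves the cached bytes without touching the stream.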
I am working on simulating a filesystem, and I am having a difficult time reading and writing bytes to/from the FileStream. I am aiming to toggle the first bit to '1', indicating that it does in fact have data in it. I have set up a test scenario to represent what I am trying to achieve.
The problem is that it appears to turn the bit on and write it to _Filestream; however, when I go to read it out, I do not see my change.
_Filestream = new FileStream(volumeName, FileMode.Open);
_Filestream.Seek(0, SeekOrigin.Begin);

// Test lines
byte[] testAsBytes = new byte[_DirectoryUnitSize];
testAsBytes[0] = 1;
byte[] newDirectoryByteArray = new byte[_DirectoryUnitSize];
_Filestream.Write(testAsBytes, 0, newDirectoryByteArray.Length);
_Filestream.Flush();

int bytesRead;
byte[] buffer = new byte[64];
char[] charBuffer = new char[64];
List<byte> data = new List<byte>();
while ((bytesRead = _Filestream.Read(buffer, 0, buffer.Length)) > 0) {
    if (!string.IsNullOrEmpty(Encoding.ASCII.GetString(buffer, 0, bytesRead))) {
        data = Encoding.ASCII.GetBytes(charBuffer, 0, 1).ToList();
    }
}
You have several issues here:
1. Why do you create another byte[] array, newDirectoryByteArray? It seems unnecessary; you can simply write
_Filestream.Write(testAsBytes, 0, testAsBytes.Length);
2. When you write to a file, the cursor moves along with it. If you want to use the same FileStream to read from the file, you must seek to your desired location. That is, before _Filestream.Read(...) you must call, for example, _Filestream.Seek(0, SeekOrigin.Begin);
From MSDN
If the write operation is successful, the current position of the
stream is advanced by the number of bytes written. If an exception
occurs, the current position of the stream is unchanged.
The line data = Encoding.ASCII.GetBytes(charBuffer, 0, 1).ToList(); will always give you 0, because no data is ever written to charBuffer. Your data is in buffer, which will contain 1 at index 0 and 0 at all other indices.
You can see this with Console.WriteLine(buffer[0]);, which will output 1, while Console.WriteLine(data[0]); will output 0.
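A corrected sketch of the write-then-read sequence (a MemoryStream stands in here for the question's _Filestream, and directoryUnitSize for _DirectoryUnitSize):

```csharp
// Corrected sequence: write, flush, seek back to the start, then read.
// A MemoryStream stands in for the question's _Filestream.
Stream fs = new MemoryStream();
int directoryUnitSize = 16; // stand-in for _DirectoryUnitSize

byte[] testAsBytes = new byte[directoryUnitSize];
testAsBytes[0] = 1;

fs.Seek(0, SeekOrigin.Begin);
fs.Write(testAsBytes, 0, testAsBytes.Length);
fs.Flush();

// Without this Seek, Read() would start where Write() left off
// and immediately return 0 bytes.
fs.Seek(0, SeekOrigin.Begin);

byte[] buffer = new byte[64];
int bytesRead = fs.Read(buffer, 0, buffer.Length);
Console.WriteLine(buffer[0]); // prints 1
```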
This question already has answers here:
How to get all data from NetworkStream
(8 answers)
Closed 4 years ago.
I am trying to simply download an object from my bucket using C#, just like in the S3 examples, and I can't figure out why the stream isn't entirely copied to my byte array. Only the first 8192 bytes are copied instead of the whole stream.
I have tried with an Amazon.S3.AmazonS3Client and with an Amazon.S3.Transfer.TransferUtility, but in both cases only the first bytes are actually copied into the buffer.
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
byte[] content = new byte[stream.Length];
stream.Read(content, 0, content.Length);
// Here content should contain all the data from the stream, but only the first 8192 bytes are actually populated.
}
When debugging, I see the stream type is Amazon.Runtime.Internal.Util.Md5Stream, and inside the stream, before calling Read() the property CurrentPosition = 0. After the call, CurrentPosition becomes 8192, which seems to indeed indicate only the first 8K of data was read. The total Length of the stream is 104042.
If I make more calls to stream.Read(), I see more data gets read and CurrentPosition increases in value. But CurrentPosition is not a public property, and I cannot access it in my code to make a while() loop (and having to code such loops to read all the data seems a bit clunky).
Why are only the first 8K read in my code? How should I proceed to read the entire stream?
I tried calling stream.Flush(), but it did not fix the problem.
EDIT 1
I have modified my code so it does the following:
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
byte[] content = new byte[stream.Length];
var bytesRead = 0;
while (bytesRead < stream.Length)
bytesRead += stream.Read(content, bytesRead, content.Length - bytesRead);
}
And it works. But still looks clunky. Is it normal I have to do this?
EDIT 2
Final solution is to create a MemoryStream of the correct size and then call CopyTo(). So no clunky loop anymore and no risk of infinite loop if Read() starts returning 0 before the whole stream has been read:
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
using (var memoryStream = new MemoryStream((int)stream.Length))
{
stream.CopyTo(memoryStream);
var myBuffer = memoryStream.GetBuffer();
}
}
stream.Read() returns the number of bytes read. You can keep track of the total number of bytes read until you have reached the end of the file (content.Length).
You could also just loop until the returned value is 0, which means the end of the stream has been reached and no more bytes are left.
You will need to keep track of the current offset into your content buffer so that you are not overwriting data on each call.
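That loop can be wrapped in a small helper so the caller doesn't repeat it (ReadFully is a hypothetical name, not part of the AWS SDK):

```csharp
using System;
using System.IO;

// Reads exactly count bytes, looping until Read() has delivered them all.
// Throws if the stream ends before count bytes arrive.
static byte[] ReadFully(Stream stream, int count)
{
    byte[] content = new byte[count];
    int totalRead = 0;
    while (totalRead < count)
    {
        int read = stream.Read(content, totalRead, count - totalRead);
        if (read == 0)
            throw new EndOfStreamException("Stream ended before all bytes were read.");
        totalRead += read;
    }
    return content;
}
```

This also avoids the infinite-loop risk of the EDIT 1 version: if Read() ever returns 0 early, the helper throws instead of spinning.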
I hope someone here will be able to help me out with this.
What I'm trying to do is decompress a zlib-compressed file in C# using ZlibNet. (I've also tried DotNetZip and SharpZipLib.)
The problem is that it will decompress only the first 256 KB, or rather the first 262144 bytes.
Here's my Decompress method, taken from here:
public static byte[] Decompress(byte[] gzip)
{
    using (var stream = new Ionic.Zlib.ZlibStream(new MemoryStream(gzip), Ionic.Zlib.CompressionMode.Decompress))
    {
        var outStream = new MemoryStream();
        const int size = 999999; // Playing around with various sizes didn't help
        byte[] buffer = new byte[size];
        int read;
        while ((read = stream.Read(buffer, 0, size)) > 0)
        {
            outStream.Write(buffer, 0, read);
            read = 0;
        }
        return outStream.ToArray();
    }
}
Basically, read gets set to 262144 the first time the while loop executes, the loop writes, and then on the next pass read gets set to 0, making the loop exit and the function return outStream as an array (even though there are still bytes left to be read!).
Thanks in advance to anyone who could help with this!
Upon further inspection of the originally packed data, it turns out that the script responsible for (de)compressing the data in the original application would split the zlib stream of a file into chunks of 262144 bytes each.
This is why the various libraries I tested always stopped at 262144 bytes: it was the end of that zlib stream, but not the end of the file it was supposed to extract. (Each zlib stream was also separated by a 32-bit unsigned int that indicated the number of bytes the next zlib stream would contain.)
My only guess is that they did this so that if they had a very large file, they wouldn't need to load all of it into memory for decompression. (But that's just a guess.)
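Assuming the layout described above (a 32-bit length prefix in front of each zlib chunk), the extraction loop might look roughly like this; the little-endian length field and exact layout are assumptions about that particular file format:

```csharp
using System;
using System.IO;

// Sketch: read length-prefixed zlib chunks and concatenate the
// decompressed output. Endianness and layout are assumptions.
static byte[] DecompressChunked(Stream packed)
{
    var output = new MemoryStream();
    var lengthBytes = new byte[4];
    while (packed.Read(lengthBytes, 0, 4) == 4)
    {
        uint chunkLength = BitConverter.ToUInt32(lengthBytes, 0);
        var chunk = new byte[chunkLength];
        int total = 0;
        while (total < chunkLength) // read the full compressed chunk
        {
            int read = packed.Read(chunk, total, (int)chunkLength - total);
            if (read == 0) throw new EndOfStreamException();
            total += read;
        }
        using (var z = new Ionic.Zlib.ZlibStream(
            new MemoryStream(chunk), Ionic.Zlib.CompressionMode.Decompress))
        {
            z.CopyTo(output); // append this chunk's decompressed bytes
        }
    }
    return output.ToArray();
}
```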
I have a file stream whose content is read from disk:
Stream input = new FileStream("filename", FileMode.Open);
This stream is to be passed to a third-party library which, after reading the stream, leaves the Stream's position pointer at the end of the file (as usual).
My requirement is not to load the file from disk every time; instead I want to maintain a MemoryStream that will be used each time.
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[32768];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
    }
}
I have tried the above code. It works the very first time to copy the input stream to the output stream, but subsequent calls to CopyStream will not work, because the source's Position is at the end of the stream after the first call.
Are there other alternatives that copy the content of the source stream to another stream irrespective of the source stream's current Position?
Also, this code needs to run in a thread-safe manner in a multi-threaded environment.
You can use the .NET 4.0 Stream.CopyTo to copy your stream to a MemoryStream. The MemoryStream has a Position property you can use to move its position back to the beginning.
var ms = new MemoryStream();
using (Stream file = File.OpenRead(@"filename"))
{
    file.CopyTo(ms);
}
ms.Position = 0;
To make a thread safe solution, you can copy the content to a byte array, and make a new MemoryStream wrapping the byte array for each thread that need access:
byte[] fileBytes = ms.ToArray();
var ms2 = new MemoryStream(fileBytes);
You should check the input stream's CanSeek property. If that returns false, you can only read it once anyway. If CanSeek returns true, you can set the position to zero and copy away.
if (input.CanSeek)
{
input.Position = 0;
}
You may also want to store the old position and restore it after copying.
ETA: Passing the same instance of a Stream around is not the safest thing to do. E.g. you can't be sure the Stream wasn't disposed when you get it back. I'd suggest copying the FileStream to a MemoryStream at the beginning, but only storing the byte content of the latter by calling ToArray(). When you need to pass a Stream somewhere, just create a new one each time with new MemoryStream(byte[]).
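Combining both answers, the thread-safe pattern might be sketched like this (the SharedFileContent class and its member names are illustrative, not from the question):

```csharp
using System.IO;

// Load the file once, keep only the bytes (never mutated after load),
// and hand each consumer its own MemoryStream over the shared array.
public class SharedFileContent
{
    private readonly byte[] _fileBytes;

    public SharedFileContent(string path)
    {
        _fileBytes = File.ReadAllBytes(path); // one disk read, ever
    }

    public Stream OpenStream()
    {
        // Each caller gets an independent Position, and disposing this
        // stream does not affect other threads.
        return new MemoryStream(_fileBytes, writable: false);
    }
}
```

Because the byte array is read-only after construction, no locking is needed; each thread's Position lives in its own MemoryStream wrapper.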
Edit: Solution is at bottom of post
I am trying my luck with reading binary files. Since I don't want to rely on byte[] AllBytes = File.ReadAllBytes(myPath), because the binary file might be rather big, I want to read small portions of the same size (which fits nicely with the file format to read) in a loop, using what I would call a "buffer".
public void ReadStream(MemoryStream ContentStream)
{
    byte[] buffer = new byte[sizePerHour];
    for (int hours = 0; hours < NumberHours; hours++)
    {
        int t = ContentStream.Read(buffer, 0, sizePerHour);
        SecondsToAdd = BitConverter.ToUInt32(buffer, 0);
        // further processing of my byte[] buffer
    }
}
My stream contains all the bytes I want, which is a good thing. But when I enter the loop, several things cease to work.
My int t is 0, although I would presume that ContentStream.Read() would copy information from the stream into my byte array, but that isn't the case.
I tried buffer = ContentStream.GetBuffer(), but that results in my buffer containing all of my stream, a behaviour I wanted to avoid by reading into a buffer.
Also, resetting the stream to position 0 before reading did not help, nor did specifying an offset for my Stream.Read(), which means I am lost.
Can anyone point me to reading small portions of a stream to a byte[]? Maybe with some code?
Thanks in advance
Edit:
Pointing me to the right direction was the answer, that .Read() returns 0 if the end of stream is reached. I modified my code to the following:
public void ReadStream(MemoryStream ContentStream)
{
    byte[] buffer = new byte[sizePerHour];
    ContentStream.Seek(0, SeekOrigin.Begin); // Added this line
    for (int hours = 0; hours < NumberHours; hours++)
    {
        int t = ContentStream.Read(buffer, 0, sizePerHour);
        SecondsToAdd = BitConverter.ToUInt32(buffer, 0);
        // further processing of my byte[] buffer
    }
}
And everything works like a charm. I had initially reset the stream to its origin on every iteration over hours while also giving an offset. Moving the "set to beginning" part outside my loop and leaving the offset at 0 did the trick.
Read returns zero if the end of the stream is reached. Are you sure that your memory stream has the content you expect? I've tried the following and it works as expected:
// Create the source of the memory stream.
UInt32[] source = { 42, 4711 };
List<byte> sourceBuffer = new List<byte>();
Array.ForEach(source, v => sourceBuffer.AddRange(BitConverter.GetBytes(v)));

// Read the stream.
using (MemoryStream contentStream = new MemoryStream(sourceBuffer.ToArray()))
{
    byte[] buffer = new byte[sizeof(UInt32)];
    int t;
    do
    {
        t = contentStream.Read(buffer, 0, buffer.Length);
        if (t > 0)
        {
            UInt32 value = BitConverter.ToUInt32(buffer, 0);
        }
    } while (t > 0);
}
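An alternative sketch for the hourly-record case uses BinaryReader, which handles the buffering itself (the HourlyReader class is illustrative; sizePerHour, NumberHours and SecondsToAdd mirror the fields from the question):

```csharp
using System;
using System.IO;
using System.Text;

// Sketch: read fixed-size hourly records via BinaryReader and pull
// the leading UInt32 out of each record.
public class HourlyReader
{
    public int sizePerHour = 8;
    public int NumberHours = 2;
    public uint SecondsToAdd;

    public void ReadStream(MemoryStream contentStream)
    {
        contentStream.Seek(0, SeekOrigin.Begin);
        // leaveOpen: true so disposing the reader keeps the stream usable
        using (var reader = new BinaryReader(contentStream, Encoding.UTF8, true))
        {
            for (int hour = 0; hour < NumberHours; hour++)
            {
                byte[] record = reader.ReadBytes(sizePerHour);
                SecondsToAdd = BitConverter.ToUInt32(record, 0);
                // further processing of the record
            }
        }
    }
}
```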