zipping memory stream in silverlight - c#

I'm using SLSharpZipLib to try to compress a byte[] before sending it over the network to a server. The byte[] contains JPEG data, which is already compressed by the JPEG encoder.
You may ask: if JPEG already compresses the image, why compress it further? Well, because I tried it and it worked.
Here is what happened:
I wrote the bytes in the byte[] to a txt file. The size of the txt file was ~5k; I compressed it with WinZip and the resulting file was ~2k, so that's about a 50% reduction in file size. However, when I compress the byte[] itself with SLSharpZipLib, the reduction in size is minimal.
Here is the code I used:
MemoryStream msCompressed = new MemoryStream();
GZipOutputStream gzCompressed = new GZipOutputStream(msCompressed);
gzCompressed.SetLevel(9);
// allframes is a byte array.
gzCompressed.Write(allframes, 0, allframes.Length);
gzCompressed.Finish();
gzCompressed.IsStreamOwner = false;
gzCompressed.Close();
// I used byte[] compresseddata = msCompressed.ToArray() before, but thought I'd try this too.
msCompressed.Seek(0, SeekOrigin.Begin);
byte[] compresseddata = new byte[msCompressed.Length];
msCompressed.Read(compresseddata, 0, compresseddata.Length);
==================================================================================
From debugging the code, I can see that the difference in size between allframes.Length and compresseddata.Length is minimal. But if that same data is written to a text file and zipped with WinZip, its size is reduced by 50%.
This is how I write the same data to a txt file:
TextWriter tw = new StreamWriter(MainPage.fs); // fs is a FileStream.
foreach (byte b in allframes)
{
    tw.Write(b);
}
===============================================================================
Am I doing something wrong? Am I misunderstanding something?
Thanks up front :)

You are not comparing like with like.
There is no point in compressing JPEG image data, as it is already compressed. And writing it out to a text file won't give you the same file size as writing it to a binary file: TextWriter.Write(byte) writes each byte as its decimal digits, so a value like 255 becomes the three characters "255". That inflates the text file to roughly 2-3x the raw data, and those repetitive digit sequences are exactly what WinZip then compresses away, which is why the zipped text file lands back near the size of the raw byte[].
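A minimal sketch (hypothetical file names; any JPEG will do) that shows the difference between the two write paths:
using System.IO;

byte[] jpegBytes = File.ReadAllBytes("photo.jpg"); // assumption: any JPEG on disk

// Binary write: one byte on disk per byte in the array.
File.WriteAllBytes("binary.bin", jpegBytes);

// Text write: each byte becomes its decimal digits, so the file grows
// 2-3x and fills with repetitive digit patterns that zip very well.
using (var tw = new StreamWriter("text.txt"))
{
    foreach (byte b in jpegBytes)
        tw.Write(b);
}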

Probably not. I would imagine WinZip has a better compression implementation than SLSharpZipLib. You can try varying the compression level, but other than that, I would try different Silverlight-compatible zip libraries.
JPEG, as you've correctly pointed out, is already a highly compressed format, so finding a compression algorithm that can squeeze out further redundancy is going to be difficult.
Best regards,

Related

How to Read .DSS format audio files into Byte array

In my application, I read .DSS format audio files into a byte array with the following code:
byte[] bt = File.ReadAllBytes(Filepath);
but I am unable to get meaningful data into the bytes, even though the file plays fine in an audio player.
How can I read these files into a byte array?
I am attaching a snapshot of what bt contains; it shows 255 for all bytes.
TIA
To ensure this is not an issue with File.ReadAllBytes, try reading the file using a stream, like this:
using (var fileStream = new FileStream(FilePath, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[fileStream.Length];
    fileStream.Read(buffer, 0, (int)fileStream.Length);
    // use buffer;
}
UPDATE: as that is not working either, there is probably an issue with your file. Try to find any process that may be locking and using it at the moment. Also, open the file with any hex editor and see if there really is any meaningful data present. I'd also create a clean testing app/sandbox to check whether the code works there.
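If no hex editor is at hand, a throwaway sketch like the following (the path is hypothetical) prints the first bytes for inspection:
using System;
using System.IO;

// Dump the first 64 bytes as hex to see whether the file really is
// all 0xFF or has meaningful data after the header.
byte[] head = new byte[64];
using (var fs = File.OpenRead(@"C:\temp\sample.dss")) // hypothetical path
{
    int read = fs.Read(head, 0, head.Length);
    Console.WriteLine(BitConverter.ToString(head, 0, read));
}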
Well, the DSS format is proprietary, and you'll likely not find a lot of information about it.
255, or 0xFF, is commonly used in DSS files to indicate that a byte is not in use. You will see many of them in the header of the DSS file; later, in the audio part, they become more sparse.
That means a value of 255 in the region of bytes 83-97 which you show does NOT mean that something went wrong.
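Rather than eyeballing individual bytes, a quick sanity check (a sketch reusing bt from the question) is to count how much of the buffer is actually 0xFF:
using System;
using System.Linq;

// If nearly every byte is 0xFF, something went wrong reading the file;
// if it is only the header region, this is normal DSS padding.
int ffCount = bt.Count(b => b == 0xFF);
Console.WriteLine(ffCount + " of " + bt.Length + " bytes are 0xFF");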

What's the best way or lib for creating a thumbnail from a memory stream (video) in C#?

I have to create multiple thumbnails and save them to Amazon S3 in C#. I can't find a library that can create a thumbnail from a memory stream; they all want a source file. But I can't have a file source: I only have the stream from the upload, and I don't want to save the file locally.
Thanks for your help!
Simple C# code can do this for you; you can create a thumbnail from a byte array.
First convert your memory stream object to a byte array with the following code:
byte[] bytes = memoryStream.ToArray();
and then you can use the method below:
public byte[] MakeThumbnail(byte[] myImage, int thumbWidth, int thumbHeight)
{
    using (MemoryStream ms = new MemoryStream())
    using (Image source = Image.FromStream(new MemoryStream(myImage)))
    using (Image thumbnail = source.GetThumbnailImage(thumbWidth, thumbHeight, null, IntPtr.Zero))
    {
        // Re-encode the thumbnail as PNG and return the raw bytes.
        thumbnail.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
        return ms.ToArray();
    }
}
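For example (the 120x90 size is arbitrary):
byte[] bytes = memoryStream.ToArray();
byte[] thumbPng = MakeThumbnail(bytes, 120, 90); // then upload thumbPng to S3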
I don't find library which can create thumbnail from memory stream only from source file
Really? Because I do not need a library for that; I can do it with standard C#.
System.Drawing.Bitmap has a constructor accepting a stream.
But can't have file source,
Actually, you can: it is called a temporary file, and there is a whole mechanism for that in .NET (and Windows). Really.
But it is not needed. Read the manual, create a bitmap (by wrapping your byte array in a MemoryStream), and it should work.
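A minimal sketch of that approach; DecodeImage is a hypothetical helper name, and the bytes are assumed to hold an already-encoded image (e.g. a frame extracted as JPEG or PNG):
using System.Drawing;
using System.IO;

static Bitmap DecodeImage(byte[] imageBytes)
{
    // Wrap the uploaded bytes in a MemoryStream and decode directly;
    // no temporary file needed. GDI+ expects the stream to stay alive
    // for the bitmap's lifetime, so copy into a fresh Bitmap first.
    using (var ms = new MemoryStream(imageBytes))
    using (var decoded = new Bitmap(ms))
    {
        return new Bitmap(decoded);
    }
}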

how to convert a jpeg in an array into a bitmap

I have a functioning application in C#/.NET that currently accepts raw image data in a Bayer format from a set of embedded cameras and converts it to JPEG images. To save transmission time, I have modified the embedded devices to encode the images as JPEGs prior to transmission. I'm an experienced embedded programmer but a total C#/.NET noob. I have managed to modify the application to save the arrays to file with a JPEG name using this snippet (the offset of 5 is to skip header data in the transmission frame):
FileStream stream = File.Create(fileName);
BinaryWriter writer = new BinaryWriter(stream);
writer.Write(multiBuff.msgData, 5, multiBuff.dataSize - 5);
writer.Close();
The files open up fine, but now I want to treat the data as a bitmap without having to save & load from file. I tried the following on the data array:
MemoryStream stream = new MemoryStream(data);
BinaryReader reader = new BinaryReader(stream);
byte[] headerData = reader.ReadBytes(5);
Bitmap bmpImage = new Bitmap(stream);
But this throws a "parameter is not valid" exception. As a newbie, I'm a little overwhelmed by all the classes and methods for images, and it seems like what I'm doing should be commonplace, but I can't find any examples in the usual places. Any ideas?
I think you are looking for Bitmap.FromStream() :
Bitmap bmpImage = (Bitmap)Bitmap.FromStream(stream);
Actually, using new Bitmap(stream) should have worked as well; the exception means that the data in the stream does not constitute a valid image. Are you sure the JPEG is valid? Can you save it to disk and open it, e.g. in Paint, to test?
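A quick sketch of that test, reusing data and the 5-byte header offset from the question (the output path is just an example):
using System.IO;

// Write only the payload after the 5-byte header, then try opening the
// file in Paint or another viewer to confirm the JPEG is valid.
using (var fs = File.Create(@"C:\temp\test.jpg")) // hypothetical path
{
    fs.Write(data, 5, data.Length - 5);
}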
You use the Image class.
Image image;
using (MemoryStream stream = new MemoryStream(data))
using (Image decoded = Image.FromStream(stream))
{
    // GDI+ requires the source stream to stay open for the image's
    // lifetime, so clone the decoded image before the stream is disposed.
    image = new Bitmap(decoded);
}
FYI, it didn't work because reader.ReadBytes(5) returns the first 5 bytes of the stream, not the bytes after position 5.
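If the intent is simply to skip the 5-byte header, one sketch (using data from the question) is to point the MemoryStream at the payload directly:
using System.Drawing;
using System.IO;

// A view over data that starts after the 5-byte transmission header;
// no BinaryReader required.
using (var stream = new MemoryStream(data, 5, data.Length - 5))
using (var bmpImage = new Bitmap(stream))
{
    // use bmpImage while the stream is still open
}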

Why does gzip/deflate compressing a small file result in many trailing zeroes?

I'm using the following code to compress a small (~4kB) HTML file in C#.
byte[] fileBuffer = ReadFully(inFile, ResponsePacket.maxResponsePayloadLength); // Read the entire requested HTML file into a memory buffer
inFile.Close(); // Close the requested HTML file

byte[] payload;
using (MemoryStream compMS = new MemoryStream()) // Create a new memory stream to hold the compressed HTML data
{
    using (GZipStream gzip = new GZipStream(compMS, CompressionMode.Compress)) // Create a new GZip object pointing to the empty memory stream
    {
        gzip.Write(fileBuffer, 0, fileBuffer.Length); // Compress the file buffer and write it to the empty memory stream
        gzip.Close(); // Close the GZip object
    }
    payload = compMS.GetBuffer(); // Write the compressed file buffer data in the memory stream to a byte buffer
}
The resulting compressed data is about 2k, but about half of it is just zeroes. This is for a very bandwidth sensitive application (which is why I'm bothering to compress 4kB in the first place), so the extra 1kB of zeroes is wasted valuable space. My best guess would be that the compression algorithm is padding out the data to a block boundary. If so, is there any way to override this behavior or change the block size? I get the same results with vanilla .NET GZipStream and zlib's GZipStream, as well as DeflateStream.
Wrong MemoryStream method. GetBuffer() returns the underlying buffer, which is always as large as or larger than the data in the stream: the stream grows its buffer in chunks, and the unused capacity at the end is all zeros. It is very efficient because no copy needs to be made.
But you need the ToArray() method here. Or use the Length property and send only that many bytes from the buffer.
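A sketch of the corrected tail of the code above (fileBuffer as in the question):
using System.IO;
using System.IO.Compression;

byte[] payload;
using (MemoryStream compMS = new MemoryStream())
{
    using (GZipStream gzip = new GZipStream(compMS, CompressionMode.Compress))
    {
        gzip.Write(fileBuffer, 0, fileBuffer.Length);
    } // disposing the GZipStream flushes the final compressed block

    // ToArray() copies exactly compMS.Length bytes - no zero padding.
    payload = compMS.ToArray();
}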

Reliable way to convert a file to a byte[]

I found the following code on the web:
private byte[] StreamFile(string filename)
{
    FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);

    // Create a byte array of file stream length
    byte[] ImageData = new byte[fs.Length];

    // Read block of bytes from stream into the byte array
    fs.Read(ImageData, 0, System.Convert.ToInt32(fs.Length));

    // Close the File Stream
    fs.Close();

    return ImageData; // return the byte data
}
Is it reliable enough to use to convert a file to byte[] in c#, or is there a better way to do this?
byte[] bytes = System.IO.File.ReadAllBytes(filename);
That should do the trick. ReadAllBytes opens the file, reads its contents into a new byte array, then closes it. Here's the MSDN page for that method.
byte[] bytes = File.ReadAllBytes(filename)
or ...
var bytes = File.ReadAllBytes(filename)
Not to repeat what everyone has already said, but keep the following cheat sheet handy for file manipulation:
System.IO.File.ReadAllBytes(filename);
File.Exists(filename)
Path.Combine(folderName, resOfThePath);
Path.GetFullPath(path); // converts a relative path to absolute one
Path.GetExtension(path);
All these answers with .ReadAllBytes(). Another, similar question (I won't say duplicate, since they were trying to refactor their code) was asked on SO here: Best way to read a large file into a byte array in C#?
A comment was made on one of the posts regarding .ReadAllBytes():
File.ReadAllBytes throws OutOfMemoryException with big files (tested with 630 MB file and it failed) – juanjo.arana Mar 13 '13 at 1:31
A better approach, to me, would be something like this, with BinaryReader:
public static byte[] FileToByteArray(string fileName)
{
    byte[] fileData = null;
    using (FileStream fs = File.OpenRead(fileName))
    {
        var binaryReader = new BinaryReader(fs);
        fileData = binaryReader.ReadBytes((int)fs.Length);
    }
    return fileData;
}
But that's just me...
Of course, this all assumes you have the memory to handle the byte[] once it is read in, and I didn't put in the File.Exists check to ensure the file is there before proceeding, as you'd do that before calling this code.
Looks good enough as a generic version. You can modify it to meet your needs, if they're specific enough.
Also test for exceptions and error conditions, such as the file not existing or not being readable.
You can also do the following to save some space:
byte[] bytes = System.IO.File.ReadAllBytes(filename);
Others have noted that you can use the built-in File.ReadAllBytes. The built-in method is fine, but it's worth noting that the code you posted above is fragile for two reasons:
Stream is IDisposable - you should place the FileStream fs = new FileStream(filename, FileMode.Open,FileAccess.Read) initialization in a using clause to ensure the file is closed. Failure to do this may mean that the stream remains open if a failure occurs, which will mean the file remains locked - and that can cause other problems later on.
fs.Read may read fewer bytes than you request. In general, the .Read method of a Stream instance will read at least one byte, but not necessarily all bytes you ask for. You'll need to write a loop that retries reading until all bytes are read. This page explains this in more detail.
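A minimal sketch of such a retry loop (the helper name ReadFileFully is hypothetical):
using System.IO;

public static byte[] ReadFileFully(string filename)
{
    using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        byte[] data = new byte[fs.Length];
        int offset = 0;
        // Stream.Read may return fewer bytes than requested, so loop
        // until the buffer is full (or the stream ends unexpectedly).
        while (offset < data.Length)
        {
            int read = fs.Read(data, offset, data.Length - offset);
            if (read == 0)
                throw new EndOfStreamException("File ended before expected length.");
            offset += read;
        }
        return data;
    }
}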
string filePath = @"D:\MiUnidad\testFile.pdf";
byte[] bytes = await System.IO.File.ReadAllBytesAsync(filePath);
