How to read .DSS format audio files into a byte array - C#

In my application I read .DSS format audio files into a byte array with the following code:
byte[] bt = File.ReadAllBytes(Filepath);
but I am not getting meaningful data into the bytes, even though the file plays fine in an audio player.
How can I read these files into a byte array? I am attaching a snapshot of what bt contains: it shows 255 for all bytes.
TIA

To make sure this is not an issue with File.ReadAllBytes, try reading the file with a stream, like this:
using (var fileStream = new FileStream(FilePath, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[fileStream.Length];
    // Note: Read may return fewer bytes than requested; for large files,
    // loop until the buffer is full.
    int bytesRead = fileStream.Read(buffer, 0, buffer.Length);
    // use buffer;
}
UPDATE: Since that isn't working either, there is probably an issue with the file itself. Check whether another process is locking and using it at the moment. Also, open the file in a hex editor and see whether any meaningful data is actually present. I'd also create a clean test app/sandbox to verify.
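If a hex editor isn't handy, a quick snippet like this (a sketch reusing the Filepath variable from your code; the 64-byte dump length is arbitrary) prints the first bytes so you can inspect them directly:
using System;
using System.IO;

// Dump the first 64 bytes of the file as hex, 16 per line.
byte[] bt = File.ReadAllBytes(Filepath);
for (int i = 0; i < Math.Min(bt.Length, 64); i++)
{
    Console.Write("{0:X2} ", bt[i]);
    if ((i + 1) % 16 == 0) Console.WriteLine();
}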

Well, the DSS format is proprietary, and you'll likely not find a lot of information about it.
255 or 0xFF is commonly used in DSS files to indicate that a byte is not in use. You will see many of them in the header of the DSS file; later, in the audio part, they become more sparse.
That means a value of 255 in the region of bytes 83-97, which you show, does NOT mean that something went wrong.

Related

Zlib compression incompatible between C and C# implementations

I'm trying to decompress data that was compressed with the zlib algorithm, in C#, using the two most established zlib-compatible libraries, and I get a similar exception from both.
Using DotNetZip:
Ionic.Zlib.ZlibException: Bad state (invalid stored block lengths)
Using Zlib.Net:
inflate: invalid stored block lenghts
But using the same data as input to the zlib-flate command on Linux, with only default parameters, works great: it decompresses without any warnings and the output is correct:
zlib-flate -uncompress < ./dbgZlib
Any suggestions on how I can decompress this data in C#, or why decompression actually fails in this case?
Compressed data as hex:
root@localhost:~# od -t x1 -An ./dbgZlib | tr -d '\n '
789c626063520b666060606262d26160d05307329999e70a6400e93c2066644080cf8c938c0c0c4d0d0d0d2d839c437c02dcfd0c0c0c11d28ea121013e7e41860ce18e210640e06810141669c080051840012eb970d790800090f99eee409ea189025e806c8e8b5354a89b13d81c136ca60f3a000e5fd6af0fb14a3221873e96400506374cd6c7d52dc8d98980657e7e06460ace0a4ce86e80da9f0249030edf816c16481ab06b60404f03931169c0cdc728c0db0fd928681a3042a481480347336c6e21320d78fb8155195a9090067ca3420387771a400a546aa70100000000ffff
Compressed data as base64:
root@localhost:~# base64 ./dbgZlib
eJxiYGNSC2ZgYGBiYtJhYNBTBzKZmecKZADpPCBmZECAz4yTjAwMTQ0NDS2DnEN8Atz9DAwMEdKO
oSEBPn5BhgzhjiEGQOBoEBQWacCABRhAAS65cNeQgACQ+Z7uQJ6hiQJegGyOi1NUqJsT2BwTbKYP
OgAOX9avD7FKMiGHPpZABQY3TNbH1S3I2YmAZX5+BkYKzgpM6G6A2p8CSQMO34FsFkgasGtgQE8D
kxFpwM3HKMDbD9koaBowQqSBSANHM2xuITINePuBVRlakJAGfKNCA4d3GkAKVGqnAQAAAAD//w==
The data after decompression, encoded with base64, looks like this:
root@localhost:~# zlib-flate -uncompress < ./dbgZlib | base64
AAYCJlMAAAACAgIsAAAuJwAAAAMDnRBoAAAAbgAAAAEAAAAAAAAAAAAAAPMBkjIwMTUxMTE5UkNU
TFBHTjAwMQAAAAAAAAAAAABBVVRQTE5SMQBXQVQwMDAwQTBSVlkwAAAAAAAAAAAAAAAAAAAAAAAA
AAAwMDAwMDAwMAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMFdFVFBQTFBHTklHMDAwMTQgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAwMAAAAAAAAAAAAABEQlpVRkIAAAAAMDQAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAX14QAAAAAAAAAA
AAAAAAAAAAAAAAAAAAIBAAAAAAAAAAAAAABBVDAwMDBBMFJWWTAAAAAAAAAAUExOAAAAAAAAAABM
RUZSQ0IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATk4wMiBDIAIAAAAAAAAAAAAAAAAA
AAABBfXhAAAAAGQCAgIsAABA9wAAAAQDnRBoAAA+gAAAAAEAAAAAAAAAAAAAAPMBkzIwMTUxMTE5
UkNGTDJQS04AAAAAAAAAAAAAAABBVVRQTE5SMgBXQVQwMDAwQTBZMEE2AAAAAAAAAAAAAAAAAAAA
AAAAAAAwMDAwMDAwMAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMFdFVFBQTFBLTjAwMDAwMTggICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAwMAAAAAAAAAAAAABETVpVUUIAAAAAMDQAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAX14QAAAAAA
AAAAAAAAAAAAAAAAAAAAAAIBAAAAAAAAAAAAAABBVDAwMDBBMFkwQTYAAAAAAAAAUExOAAAAAAAA
AABMRUZSQ0IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATk4wMiBDIAIAAAAAAAAAAAAA
AAAAAAABBfXhAAAAAGQ=
The problem is that you are using zlib-flate as a general-purpose compression tool, which, according to its manpage, you should not do:
This program should not be used as a general purpose compression tool.
Use something like gzip(1) instead.
So perhaps you should follow the instructions given by your tools and not use them for things they are not intended for. Use gzip and System.IO.Compression.GZipStream instead; it's much simpler, especially when you're looking for cross-platform compatible compression.
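For illustration, a minimal GZipStream round-trip (just a sketch; the method names are mine) looks like this:
using System.IO;
using System.IO.Compression;

static byte[] GZipCompress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var gz = new GZipStream(output, CompressionMode.Compress))
        {
            gz.Write(data, 0, data.Length);
        } // disposing the GZipStream flushes it and writes the GZIP footer
        return output.ToArray();
    }
}

static byte[] GZipDecompress(byte[] data)
{
    using (var input = new MemoryStream(data))
    using (var gz = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gz.CopyTo(output);
        return output.ToArray();
    }
}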
That said...
The reason that you can't inflate the data is that it lacks a correct GZIP header. If you add the right header to it you will get something that can be decompressed.
For instance:
public static byte[] DecompressZLibRaw(byte[] bCompressed)
{
    byte[] bHdr = new byte[] { 0x1F, 0x8B, 0x08, 0, 0, 0, 0, 0 };

    using (var sOutput = new MemoryStream())
    using (var sCompressed = new MemoryStream())
    {
        sCompressed.Write(bHdr, 0, bHdr.Length);
        sCompressed.Write(bCompressed, 0, bCompressed.Length);
        sCompressed.Position = 0;

        using (var decomp = new GZipStream(sCompressed, CompressionMode.Decompress))
        {
            decomp.CopyTo(sOutput);
        }
        return sOutput.ToArray();
    }
}
Adding the header makes all the difference.
NB: a real GZIP header is 10 bytes, and the two bytes that are not stripped from your source land where the compression-flags and source-file-system bytes of the header normally go. In the compressed data you present they are invalid values. Additionally, the file footer is abbreviated to 5 bytes instead of 8. None of this is actually required for decompression, which probably has a lot to do with why the manpage says not to use this for general compression.
The stream you provided is not complete. It appears that you ended it with a Z_SYNC_FLUSH or Z_FULL_FLUSH in your C# code instead of a Z_FINISH, as you're supposed to. That is what causes the error; if you terminate the stream properly, you won't have a problem.
zlib-flate is simply ignoring that error.
If you are not in control of the generation of the stream, you can still use zlib to decompress what's there. You just need to use it at a lower level, where you operate on blocks of data and retrieve whatever decompressed output is available for the input provided.
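If you cannot regenerate the stream, a workaround along these lines may help in C# (a sketch, under the assumption that the stream really does end at the byte-aligned Z_SYNC_FLUSH marker 00 00 FF FF, as the hex dump above suggests): append an empty final deflate block to terminate the stream, strip the two-byte zlib header, and inflate the rest with DeflateStream:
using System.IO;
using System.IO.Compression;

static byte[] InflateUnterminatedZlib(byte[] zlibData)
{
    using (var raw = new MemoryStream())
    {
        // Skip the 2-byte zlib header (e.g. 0x78 0x9C); DeflateStream
        // expects raw deflate data without the zlib wrapper.
        raw.Write(zlibData, 2, zlibData.Length - 2);

        // Append an empty final deflate block (BFINAL=1, fixed Huffman,
        // immediate end-of-block) so the inflater sees a terminated stream.
        raw.WriteByte(0x03);
        raw.WriteByte(0x00);
        raw.Position = 0;

        using (var inflater = new DeflateStream(raw, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            inflater.CopyTo(output);
            return output.ToArray();
        }
    }
}
Note that the zlib Adler-32 trailer is absent from such a stream, so there is no integrity check on the result.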

Zipping a memory stream in Silverlight

I'm using SLSharpZipLib to try to compress a byte[] before sending it over the network to a server. The byte[] contains JPEG data, which is already compressed by the JPEG encoder.
You may ask: if JPEG already compresses the image, why compress it more? Well, because I tried it and it worked.
Here is what happened:
I wrote the bytes in the byte[] to a txt file. The size of the txt file was ~5k; I compressed it with WinZip and the resulting file was ~2k, so that's about a 50% reduction in file size. However, when I try to compress the byte[] itself with SLSharpZipLib, the reduction in size is minimal.
Here is the code I used:
MemoryStream msCompressed = new MemoryStream();
GZipOutputStream gzCompressed = new GZipOutputStream(msCompressed);
gzCompressed.SetLevel(9);

// allframes is a byte array.
gzCompressed.Write(allframes, 0, allframes.Length);
gzCompressed.Finish();
gzCompressed.IsStreamOwner = false;
gzCompressed.Close();

// I used byte[] compresseddata = msCompressed.ToArray(), but thought I'd try this too.
msCompressed.Seek(0, SeekOrigin.Begin);
byte[] compresseddata = new byte[msCompressed.Length];
msCompressed.Read(compresseddata, 0, compresseddata.Length);
==================================================================================
From debugging the code, I can see that the difference in size between allframes.Length and compresseddata.Length is minimal, but if that same data is written to a text file and zipped with WinZip, its size is reduced by 50%.
This is how I write the same data to a txt file:
TextWriter tw = new StreamWriter(MainPage.fs); // fs is a FileStream.
foreach (byte b in allframes)
{
    tw.Write(b);
}
===============================================================================
Am I doing something wrong? Am I misunderstanding something?
Thanks up front :)
You are not comparing like with like.
There is no point in compressing JPEG image data, as it is compressed already. And writing the data out through a TextWriter is not the same as writing it to a binary file: tw.Write(b) writes each byte as its decimal text representation, and that highly redundant ASCII is what WinZip was able to compress by 50%.
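To see the difference, here is a small sketch (file names are illustrative) contrasting the two ways of writing the same byte[]:
using System.IO;

static void WriteBothWays(byte[] allframes)
{
    // Text: each byte becomes its 1-3 character decimal representation,
    // e.g. the byte 0xFF is written as the three characters "255".
    using (var tw = new StreamWriter("frames.txt"))
    {
        foreach (byte b in allframes)
            tw.Write(b);
    }

    // Binary: the bytes are stored as-is.
    File.WriteAllBytes("frames.bin", allframes);
}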
Probably not. I would imagine WinZip has a better zip implementation than SLSharpZipLib. You can try varying the compression level, but beyond that I would try other Silverlight-compatible zip libraries.
JPEG, as you've correctly pointed out, is already a highly compressed file type, so finding a compression algorithm that can squeeze out further redundancy is going to be difficult.
Best regards,

Reliable way to convert a file to a byte[]

I found the following code on the web:
private byte[] StreamFile(string filename)
{
    FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);

    // Create a byte array of file stream length
    byte[] ImageData = new byte[fs.Length];

    // Read block of bytes from stream into the byte array
    fs.Read(ImageData, 0, System.Convert.ToInt32(fs.Length));

    // Close the File Stream
    fs.Close();

    return ImageData; // return the byte data
}
Is this reliable enough to use for converting a file to a byte[] in C#, or is there a better way to do it?
byte[] bytes = System.IO.File.ReadAllBytes(filename);
That should do the trick. ReadAllBytes opens the file, reads its contents into a new byte array, then closes it. Here's the MSDN page for that method.
byte[] bytes = File.ReadAllBytes(filename);
or ...
var bytes = File.ReadAllBytes(filename);
Not to repeat what everyone has already said, but keep the following cheat sheet handy for file manipulations:
System.IO.File.ReadAllBytes(filename);
File.Exists(filename)
Path.Combine(folderName, restOfThePath);
Path.GetFullPath(path); // converts a relative path to absolute one
Path.GetExtension(path);
All these answers with .ReadAllBytes(). Another, similar question (I won't say duplicate, since they were trying to refactor their code) was asked on SO here: Best way to read a large file into a byte array in C#?
A comment was made on one of the posts regarding .ReadAllBytes():
File.ReadAllBytes throws OutOfMemoryException with big files (tested with 630 MB file
and it failed) – juanjo.arana Mar 13 '13 at 1:31
A better approach, to me, would be something like this, with BinaryReader:
public static byte[] FileToByteArray(string fileName)
{
    byte[] fileData = null;

    using (FileStream fs = File.OpenRead(fileName))
    {
        var binaryReader = new BinaryReader(fs);
        fileData = binaryReader.ReadBytes((int)fs.Length);
    }
    return fileData;
}
But that's just me...
Of course, this all assumes you have the memory to handle the byte[] once it is read in, and I didn't put in the File.Exists check to ensure the file is there before proceeding, as you'd do that before calling this code.
Looks good enough as a generic version. You can modify it to meet your needs, if they're specific enough.
Also test for exceptions and error conditions, such as the file not existing or not being readable, etc.
You can also do the following to save some space:
byte[] bytes = System.IO.File.ReadAllBytes(filename);
Others have noted that you can use the built-in File.ReadAllBytes. The built-in method is fine, but it's worth noting that the code you post above is fragile for two reasons:
Stream is IDisposable - you should place the FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read) initialization in a using clause to ensure the file is closed. Failure to do this may mean that the stream remains open if a failure occurs, which will mean the file remains locked - and that can cause other problems later on.
fs.Read may read fewer bytes than you request. In general, the .Read method of a Stream instance will read at least one byte, but not necessarily all bytes you ask for. You'll need to write a loop that retries reading until all bytes are read. This page explains this in more detail.
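A sketch of such a loop (assuming a seekable stream whose length fits in an int; the method name is mine):
using System.IO;

static byte[] ReadFully(string fileName)
{
    using (FileStream fs = File.OpenRead(fileName))
    {
        byte[] data = new byte[fs.Length];
        int offset = 0;
        while (offset < data.Length)
        {
            // Read may return fewer bytes than requested; keep reading
            // until the buffer is full or the stream ends early.
            int read = fs.Read(data, offset, data.Length - offset);
            if (read == 0)
                throw new EndOfStreamException("File ended prematurely.");
            offset += read;
        }
        return data;
    }
}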
string filePath = @"D:\MiUnidad\testFile.pdf";
byte[] bytes = await System.IO.File.ReadAllBytesAsync(filePath);

C# - Bogus data when loading a *.wav file to byte array

I am trying to load a *.wav file into a byte array using C# 3.0 and .NET 3.5, like this:
var fs = File.Open(filedialog.FileName, FileMode.Open, FileAccess.Read);
long numBytes = new FileInfo(filedialog.FileName).Length;
BinaryReader br = new BinaryReader(fs);
byte[] bytes = br.ReadBytes((int)numBytes);
From byte[58] to the end (~50k bytes), all values are 127 or 128 (I guess the first ~58 bytes are header stuff?).
The wave file plays fine in Windows Media Player and other players, and I am sure there is nothing wrong with it (it was recorded with the Sound Recorder in Windows XP).
Wave file info:
Bit rate: 176 kbps
Audio sample size: 8 bit
Audio sample rate: 22 kHz
Audio format: PCM
When I try to play the byte stream using the .NET SoundPlayer, it sounds terrible :-)
Any ideas?
[SOLVED]
This was not the problem after all, so I'll have to continue my search for the real bug.
The code looks all right, as far as I can see.
You could try the simpler code:
byte[] bytes = File.ReadAllBytes(filedialog.FileName);
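Incidentally, 8-bit PCM samples are unsigned with silence at 128 (0x80), so long runs of 127/128 usually just mean near-silent audio rather than bogus data. If you want to check exactly where the header ends before interpreting samples, a sketch like this (standard RIFF chunk layout, not specific to your file; the method name is mine) walks the chunks to find the start of the "data" chunk:
using System;
using System.IO;
using System.Text;

static long FindWavDataOffset(string path)
{
    using (var br = new BinaryReader(File.OpenRead(path)))
    {
        br.ReadBytes(12); // "RIFF", overall size, "WAVE"
        while (true)
        {
            // Each chunk is a 4-char id, a 4-byte size, then the body.
            string id = Encoding.ASCII.GetString(br.ReadBytes(4));
            int size = br.ReadInt32();
            if (id == "data")
                return br.BaseStream.Position; // first audio sample
            br.BaseStream.Seek(size, SeekOrigin.Current); // skip chunk body
        }
    }
}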

Joining Binary files that have been split via download

I am trying to join a number of binary files that were split during download. The requirement stemmed from the project http://asproxy.sourceforge.net/. In this project the author allows you to download files by providing a URL.
The problem is that my server does not have enough memory to keep a file larger than 20 MB in memory. To solve this, I modified the code to not download files larger than 10 MB; if the file is larger, the user can download the first 10 MB, then continue the download and hopefully get the second 10 MB. I have all of this working, except that when the user joins the downloaded pieces, I end up with corrupt files; as far as I can tell, something is either being added or removed during the download.
I currently join the files by reading all of them and then writing them to one file. This should work, since I am reading and writing in bytes. The code I used to join the files is listed here: http://www.geekpedia.com/tutorial201_Splitting-and-joining-files-using-C.html
I do not have the exact code with me at the moment; as soon as I am home I will post it if anyone is willing to help out.
Please let me know if I am missing anything, or if there is a better way to do this, i.e. what I could use as an alternative to a memory stream. The source code for the original project, which I modified, can be found here: http://asproxy.sourceforge.net/download.html (note that I am using version 5.0). The file I modified is called WebDataCore.cs; I changed line 606 so that it only loads until 10 MB of data have been read, then continues execution.
Let me know if there is anything I missed.
Thanks
You shouldn't split for memory reasons... the reason to split is usually to avoid having to re-download everything in case of failure. If memory is an issue, you are doing it wrong: you shouldn't be buffering the whole file in memory, for example.
The easiest way to download a file is simply:
using (WebClient client = new WebClient())
{
    client.DownloadFile(remoteUrl, localPath);
}
Re your split/join code: again, the problem is that you are buffering everything in memory; File.ReadAllBytes is a bad idea unless you know you have small files. What you should have is something like:
byte[] buffer = new byte[8192]; // why not...
int read;
while ((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
{
    outStream.Write(buffer, 0, read);
}
This uses a moderate buffer to pump data between the two as a stream. A lot more efficient. The loop says:
try to read some data (at most, the buffer size)
(this will read at least 1 byte, or we have reached the end of the stream)
if we read something, write that many bytes from the buffer to the output
In the end I found that by using an FTP request I was able to get around the memory issue, and the file is saved correctly.
Thanks for all the help
That example loads each entire chunk into memory; instead, you could do something like this:
int bufSize = 1024 * 32;
byte[] buffer = new byte[bufSize];

using (FileStream outputFile = new FileStream(OutputFileName, FileMode.OpenOrCreate,
    FileAccess.Write, FileShare.None, bufSize))
{
    foreach (string inputFileName in inputFiles)
    {
        // Open each chunk read-only and stream it into the output file.
        using (FileStream inputFile = new FileStream(inputFileName, FileMode.Open,
            FileAccess.Read, FileShare.Read, buffer.Length))
        {
            int bytesRead = 0;
            while ((bytesRead = inputFile.Read(buffer, 0, buffer.Length)) != 0)
            {
                outputFile.Write(buffer, 0, bytesRead);
            }
        }
    }
}
