I have a C# .NET program that writes 1 integer and 3 strings to a file using BinaryWriter.Write().
Now I am programming in Java (for Android; I'm new to Java), and I have to access the data that was previously written to the file by C#.
I tried using DataInputStream.readInt() and DataInputStream.readUTF(), but I can't get proper results. I usually get a UTFDataFormatException:
java.io.UTFDataFormatException: malformed input around byte 21
or the string and int I get are wrong...
FileInputStream fs = new FileInputStream(strFilePath);
DataInputStream ds = new DataInputStream(fs);
int i;
String str1,str2,str3;
i=ds.readInt();
str1=ds.readUTF();
str2=ds.readUTF();
str3=ds.readUTF();
ds.close();
What is the proper way of doing this?
I wrote a quick example of how to read .NET's BinaryWriter format in Java here.
Excerpt from the link:
/**
 * Get a string from the binary stream. The length prefix is encoded as follows:
 * if len <= 0x7F, it is encoded on one byte as b0 = len;
 * if len <= 0x3FFF, it is encoded on 2 bytes as b0 = (len & 0x7F) | 0x80, b1 = len >> 7;
 * if len <= 0x1FFFFF, it is encoded on 3 bytes as b0 = (len & 0x7F) | 0x80,
 * b1 = ((len >> 7) & 0x7F) | 0x80, b2 = len >> 14; and so on.
 *
 * @param is the input stream
 * @return the decoded string
 * @throws IOException on read failure
 */
public static String getString(final InputStream is) throws IOException {
    int len = getStringLength(is);
    byte[] buffer = new byte[len];
    int off = 0;
    while (off < len) { // a single read() may return fewer bytes than requested
        int n = is.read(buffer, off, len - off);
        if (n < 0) {
            throw new IOException("EOF");
        }
        off += n;
    }
    return new String(buffer, "UTF-8"); // BinaryWriter encodes strings as UTF-8 by default
}
/**
 * Strings are prefixed with a variable-length field that tells you the
 * size of the string in bytes. The prefix is encoded in a 7-bit format
 * where the 8th bit tells you whether to continue: if the 8th bit is set,
 * you need to read the next byte.
 *
 * @param is the input stream
 * @return the decoded length
 * @throws IOException on read failure
 */
public static int getStringLength(final InputStream is) throws IOException {
    int count = 0;
    int shift = 0;
    boolean more = true;
    while (more) {
        int b = is.read();
        if (b < 0) {
            throw new IOException("EOF while reading string length");
        }
        count |= (b & 0x7F) << shift;
        shift += 7;
        more = (b & 0x80) != 0; // high bit set means another length byte follows
    }
    return count;
}
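For reference, the C# writer side would look something like the sketch below (the file name and values are invented for illustration). Two things to keep in mind: BinaryWriter.Write(int) emits 4 little-endian bytes, whereas Java's DataInputStream.readInt() expects big-endian; and BinaryWriter.Write(string) emits the 7-bit encoded length prefix described above followed by UTF-8 bytes, which is why readUTF() (which expects a 2-byte big-endian length and modified UTF-8) fails on it.
using System.IO;

// Hypothetical writer side: 1 int followed by 3 strings.
using (var bw = new BinaryWriter(File.Open("data.bin", FileMode.Create)))
{
    bw.Write(42);        // 4 bytes, little-endian (Java's readInt() is big-endian!)
    bw.Write("first");   // 7-bit encoded length prefix, then UTF-8 bytes
    bw.Write("second");
    bw.Write("third");
}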
As its name implies, BinaryWriter writes in binary format; .NET binary format, to be precise. Since Java is not a .NET language, it has no built-in way of reading it. You have to use an interoperable format.
You can choose an existing format, like xml or json or any other interop format.
Or you can create your own, provided your data is simple enough to allow it (as seems to be the case here). Just write a string to your file (using a StreamWriter, for instance) in a format you control, then read the file from Java as a string and parse it, as in the sketch below.
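For instance, a minimal sketch of this text-based approach (the file name and the one-value-per-line layout are arbitrary choices):
using (var sw = new StreamWriter("data.txt"))
{
    // One value per line; Java can read this back with a BufferedReader
    // and parse each line (Integer.parseInt for the first one).
    sw.WriteLine(42);
    sw.WriteLine("first");
    sw.WriteLine("second");
    sw.WriteLine("third");
}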
There is a very good explanation of the format used by BinaryWriter in this question right here. It should be possible to read the data with a ByteArrayInputStream and write a simple translator.
Related
I am trying to read a WAV file into a buffer array in C# but am having some problems. I am using a file stream to manage the audio file. Here is what I have...
FileStream WAVFile = new FileStream(@"test.wav", FileMode.Open);
//Buffer for the wave file...
BinaryReader WAVreader = new BinaryReader(WAVFile);
//Read information from the header.
chunkID = WAVreader.ReadInt32();
chunkSize = WAVreader.ReadInt32();
RiffFormat = WAVreader.ReadInt32();
...
channels = WAVreader.ReadInt16();
samplerate = WAVreader.ReadInt32();
byteRate = WAVreader.ReadInt32();
blockAllign = WAVreader.ReadInt16();
bitsPerSample = WAVreader.ReadInt16();
dataID = WAVreader.ReadInt32();
dataSize = WAVreader.ReadInt32();
The above is reading data from the WAV file header. Then I have this:
musicalData = WAVreader.ReadBytes(dataSize);
...to read the actual sample data but this is only 26 bytes for 60 seconds of audio. Is this correct?
How would I go about converting the byte[] array to double[]?
This code should do the trick. It converts a wave file to a normalized double array (-1 to 1), but it should be trivial to make it an int/short array instead (remove the /32768.0 bit and add 32768 instead). The right[] array will be set to null if the loaded wav file is found to be mono.
I can't claim it's completely bulletproof (there are potential off-by-one errors), but after creating a 65536-sample array and generating a wave from -1 to 1, none of the samples appear to go 'through' the ceiling or floor.
// convert two bytes to one double in the range -1 to 1
static double bytesToDouble(byte firstByte, byte secondByte)
{
// convert two bytes to one short (little endian)
short s = (short)((secondByte << 8) | firstByte);
// convert to range from -1 to (just below) 1
return s / 32768.0;
}
// Returns left and right double arrays. 'right' will be null if sound is mono.
public void openWav(string filename, out double[] left, out double[] right)
{
byte[] wav = File.ReadAllBytes(filename);
// Determine if mono or stereo
int channels = wav[22]; // Forget byte 23 as 99.999% of WAVs are 1 or 2 channels
// Get past all the other sub chunks to get to the data subchunk:
int pos = 12; // First Subchunk ID from 12 to 16
// Keep iterating until we find the data chunk (bytes 0x64 0x61 0x74 0x61, i.e. 100 97 116 97 in decimal)
while(!(wav[pos]==100 && wav[pos+1]==97 && wav[pos+2]==116 && wav[pos+3]==97))
{
pos += 4;
int chunkSize = wav[pos] + wav[pos + 1] * 256 + wav[pos + 2] * 65536 + wav[pos + 3] * 16777216;
pos += 4 + chunkSize;
}
pos += 8;
// Pos is now positioned to start of actual sound data.
int samples = (wav.Length - pos)/2; // 2 bytes per sample (16 bit sound mono)
if (channels == 2)
{
samples /= 2; // 4 bytes per sample (16 bit stereo)
}
// Allocate memory (right will be null if only mono sound)
left = new double[samples];
if (channels == 2)
{
right = new double[samples];
}
else
{
right = null;
}
// Write to double array/s:
int i=0;
while (pos < wav.Length)
{
left[i] = bytesToDouble(wav[pos], wav[pos + 1]);
pos += 2;
if (channels == 2)
{
right[i] = bytesToDouble(wav[pos], wav[pos + 1]);
pos += 2;
}
i++;
}
}
If you're open to using a third-party library, then assuming your WAV file contains 16-bit PCM (which is the most common), you can use NAudio to read it out into a byte array, and then copy that into an array of 16-bit integers for convenience. If it is stereo, the samples will be interleaved left, right.
using (WaveFileReader reader = new WaveFileReader("myfile.wav"))
{
Assert.AreEqual(16, reader.WaveFormat.BitsPerSample, "Only works with 16 bit audio");
byte[] buffer = new byte[reader.Length];
int read = reader.Read(buffer, 0, buffer.Length);
short[] sampleBuffer = new short[read / 2];
Buffer.BlockCopy(buffer, 0, sampleBuffer, 0, read);
}
I personally try to avoid using third-party libraries as much as I can, but the option is still there if you'd like the code to look cleaner and be easier to maintain.
It's been a good 10-15 years since I last touched WAVE file processing, but contrary to the first impression most people get (a simple fixed-size header followed by PCM-encoded audio data), WAVE files are RIFF format files, which are a bit more complex.
Instead of re-engineering RIFF file processing and all its various cases, I would suggest using interop and calling the APIs that deal with the RIFF file format.
You can see an example of how to open a file and get at the data buffer (and the meta information describing that buffer) in this example. It's in C++, but it shows the use of the mmioOpen, mmioRead, mmioDescend and mmioAscend APIs that you would need to get your hands on a proper audio buffer.
I'm trying to read from an accelerometer over BLE from my Arduino device. The only problem is that I'm not sure how to convert it back into a readable string value.
My Arduino sketch (part of it) looks like this:
if((x - lastX) > threshold) {
STATUS = "MOVING";
toggleIsMoving();
} else if((y - lastY) > threshold) {
STATUS = "MOVING";
toggleIsMoving();
} else if((z - lastZ) > threshold) {
STATUS = "MOVING";
toggleIsMoving();
} else {
STATUS = "STOPPED";
toggleIsStopped();
}
lastX = x;
lastY = y;
lastZ = z;
Serial.print(STATUS);
The hex value that I receive in my Xamarin Android application (through BLE) is this:
A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C
My current implementation is:
public static string FromHex(string hex) {
hex = hex.Replace("-", "");
byte[] raw = new byte[hex.Length / 2];
for (int i = 0; i < raw.Length; i++)
raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
return Encoding.ASCII.GetString(raw);
}
This result is:
?������MOVING?<
Why is this happening and how can I convert this back to a readable string in C#?
Update 1
I did some research in the bean-sdk source code and found this:
/**
* Represents a LightBlue Serial Transport Message
*
* Defined as:
*
* [1 byte] - Length (Message ID + Payload)
* [1 byte] - Reserved
* [2 byte] BE - Message ID
* [0-64 bytes] LE - Payload
* [2 bytes] LE - CRC (Everything before CRC)
*
* @param messageId
* @param definition
*/
Update 2
Finally got it working by implementing the protocol mentioned in update 1. The actual payload is located inside the message, so it needs to be extracted from the hex string.
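A minimal sketch of that extraction, assuming the layout from update 1 and assuming the leading 0xA0 byte is a transport header that precedes the length byte (that assumption matches the packet above, but verify against your device):
using System;
using System.Text;

// Hypothetical extraction for a packet like "A0-08-00-00-00-4D-4F-56-49-4E-47-CE-3C".
// Layout assumed: [header][length][reserved][message ID x2][payload...][CRC x2].
public static string ExtractPayload(string hex)
{
    string[] parts = hex.Split('-');
    byte[] raw = new byte[parts.Length];
    for (int i = 0; i < parts.Length; i++)
        raw[i] = Convert.ToByte(parts[i], 16);

    int length = raw[1];            // length = message ID (2 bytes) + payload
    int payloadLength = length - 2; // strip the message ID
    return Encoding.ASCII.GetString(raw, 5, payloadLength); // yields "MOVING"
}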
Anything over 0x7F is NOT ASCII. The first byte is 0xA0, which in both Unicode and ISO Latin-1 is NBSP (a non-breaking space). The sequence A0-08-00-00-00 is probably meaningful, but not as plain text. You should have a look at the specs of your accelerometer and see what it is actually sending so you can interpret it correctly.
EDIT
Also, since in your code you set the variable STATUS to "MOVING", it could be that the characters before and after MOVING are spurious, non-text bytes.
I have used the following:
private static string hexToASCII(string hexValue)
{
StringBuilder output = new StringBuilder("");
for (int i = 0; i < hexValue.Length; i += 2)
{
string str = hexValue.Substring(i, 2);
output.Append((char)Convert.ToInt32(str, 16));
}
return output.ToString();
}
Reference: converted from a Java function at Convert Hex to ASCII and ASCII to Hex
I understand how to read 8-bit, 16-bit & 32-bit samples (PCM & floating-point) from a .wav file, since (conveniently) the .Net Framework has an in-built integral type for those exact sizes. But, I don't know how to read (and store) 24-bit (3 byte) samples.
How can I read 24-bit audio? Is there maybe some way I can alter my current method (below) for reading 32-bit audio to solve my problem?
private List<float> Read32BitSamples(FileStream stream, int sampleStartIndex, int sampleEndIndex)
{
var samples = new List<float>();
var bytes = ReadChannelBytes(stream, Channels.Left, sampleStartIndex, sampleEndIndex); // Reads bytes of a single channel.
if (audioFormat == WavFormat.PCM) // audioFormat determines whether to process sample bytes as PCM or floating point.
{
for (var i = 0; i < bytes.Length / 4; i++)
{
samples.Add(BitConverter.ToInt32(bytes, i * 4) / 2147483648f);
}
}
else
{
for (var i = 0; i < bytes.Length / 4; i++)
{
samples.Add(BitConverter.ToSingle(bytes, i * 4));
}
}
return samples;
}
Reading (and storing) 24-bit samples is very simple. Now, as you've rightly said, a 3-byte integral type does not exist within the framework, which leaves you with two choices: either create your own type, or pad your 24-bit samples by inserting an empty byte (0) at the start of each sample's byte array, making them 32-bit samples so that you can use an int to store and manipulate them.
I will explain and demonstrate how to do the latter (which is, in my opinion, also the simpler approach).
First, we must look at how a 24-bit sample would be stored within an int:
                MSB      2nd MSB  2nd LSB  LSB
24-bit sample:  11001101 01101001 01011100 00000000
32-bit sample:  11001101 01101001 01011100 00101001
MSB = Most Significant Byte, LSB = Least Significant Byte.
As you can see, the LSB of the padded 24-bit sample is 0, so all you have to do is declare a byte[] with 4 elements, then read the 3 bytes of the sample into the array starting at element 1, so that your array looks like the one below (this is effectively a bit shift by 8 places to the left):
myArray[0]: 00000000
myArray[1]: 01011100
myArray[2]: 01101001
myArray[3]: 11001101
Once your byte array is full you can pass it to BitConverter.ToInt32(myArray, 0); you will then need to shift the sample 8 places to the right to get it into its proper 24-bit integral representation (from -8388608 to 8388607), then divide by 8388608 to have it as a floating-point value.
So, putting that all together, you should end up with something like this.
Note: I wrote the following code with the intention of it being easy to follow, so this will not be the most performant method; for a faster solution, see the second snippet below this one.
private List<float> Read24BitSamples(FileStream stream, int startIndex, int endIndex)
{
var samples = new List<float>();
var bytes = ReadChannelBytes(stream, Channels.Left, startIndex, endIndex);
var temp = new List<byte>();
var paddedBytes = new byte[bytes.Length / 3 * 4];
// Pad each 24-bit sample to 32 bits (effectively bit shifting 8 places to the left).
for (var i = 0; i < bytes.Length; i += 3)
{
temp.Add(0); // LSB
temp.Add(bytes[i]); // 2nd LSB
temp.Add(bytes[i + 1]); // 2nd MSB
temp.Add(bytes[i + 2]); // MSB
}
// BitConverter requires collection to be an array.
paddedBytes = temp.ToArray();
temp = null;
bytes = null;
for (var i = 0; i < paddedBytes.Length / 4; i++)
{
samples.Add(BitConverter.ToInt32(paddedBytes, i * 4) / 2147483648f); // Skip the bit shift and just divide: since the sample has been shifted 8 places to the left, we divide by 2147483648, not 8388608.
}
return samples;
}
For a faster¹ implementation you can do the following instead:
private List<float> Read24BitSamples(FileStream stream, int startIndex, int endIndex)
{
var bytes = ReadChannelBytes(stream, Channels.Left, startIndex, endIndex);
var samples = new float[bytes.Length / 3];
for (var i = 0; i < bytes.Length; i += 3)
{
samples[i / 3] = (bytes[i] << 8 | bytes[i + 1] << 16 | bytes[i + 2] << 24) / 2147483648f;
}
return samples.ToList();
}
¹ After benchmarking the above code against the previous method, this solution is approximately 450% to 550% faster.
I'm trying to convert this C printf to C#
printf("%c%c",(x>>8)&0xff,x&0xff);
I've tried something like this:
int x = 65535;
char[] chars = new char[2];
chars[0] = (char)(x >> 8 & 0xFF);
chars[1] = (char)(x & 0xFF);
But I'm getting different results.
I need to write the result to a file
so I'm doing this:
tWriter.Write(chars);
Maybe that is the problem.
Thanks.
In .NET, char variables are stored as unsigned 16-bit (2-byte) numbers ranging in value from 0 through 65535, so a char is not a raw byte. Use bytes instead:
int x = (int)0xA0FF; // use differing high and low bytes for testing
byte[] bytes = new byte[2];
bytes[0] = (byte)(x >> 8); // high byte
bytes[1] = (byte)(x); // low byte
If you're going to use a BinaryWriter then just do two writes:
bw.Write((byte)(x>>8));
bw.Write((byte)x);
Keep in mind that you just performed a Big Endian write. If this is to be read as an 16-bit integer by something that expects it in Little Endian form, swap the writes around.
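For example, the little-endian equivalent just swaps the two lines:
bw.Write((byte)x);        // low byte first
bw.Write((byte)(x >> 8)); // then high byte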
OK, I got it using Mitch Wheat's suggestion and changing the TextWriter to a BinaryWriter.
Here is the code:
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(System.IO.File.Open(#"C:\file.ext", System.IO.FileMode.Create));
int x = 65535;
byte[] bytes = new byte[2];
bytes[0] = (byte)(x >> 8);
bytes[1] = (byte)(x);
bw.Write(bytes);
Thanks to everyone.
Especially to Mitch Wheat.
I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
byte[] nByte = BitConverter.GetBytes((byte)i);
foreach (byte b in nByte)
{
byteList.Add(b);
}
}
for (int g = 256; g <= 1000; g++)
{
UInt16 st = Convert.ToUInt16(g);
byte[] xByte = BitConverter.GetBytes(st);
foreach (byte c in xByte)
{
byteList.Add(c);
}
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
uint num = (uint) value;
while (num >= 0x80)
{
this.Write((byte) (num | 0x80));
num = num >> 7;
}
this.Write((byte) num);
}
(taken from System.IO.BinaryWriter.Write(String)).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
byte num3;
int num = 0;
int num2 = 0;
do
{
if (num2 == 0x23)
{
throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
}
num3 = this.ReadByte();
num |= (num3 & 0x7f) << num2;
num2 += 7;
}
while ((num3 & 0x80) != 0);
return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;
namespace EncodedNumbers
{
class Program
{
protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
{
uint num = (uint)value;
while (num >= 0x80)
{
bin.Write((byte)(num | 0x80));
num = num >> 7;
}
bin.Write((byte)num);
}
static void Main(string[] args)
{
MemoryStream ms = new MemoryStream();
BinaryWriter bin = new BinaryWriter(ms);
for(int i = 1; i < 1000; i++)
{
Write7BitEncodedInt(bin, i);
}
byte[] data = ms.ToArray();
int size = data.Length;
Console.WriteLine("Total # of Bytes = " + size);
Console.ReadLine();
}
}
}
The total size I get is 1871 bytes for numbers 1-1000.
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
// Note: assumes value != 0; for a value of 0 every byte is zero and 'skip' would run past the start of the array.
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
byte[] bytes = BitConverter.GetBytes(value);
int skip = bytes.Length-1;
while (bytes[skip] == 0)
{
skip--;
}
for (int i = 0; i <= skip; i++)
{
bin.Write(bytes[i]);
}
}
This ignores any bytes that are zero, scanning from MSB to LSB, so for 0-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1743 bytes (as opposed to 1871 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store the numbers above 255 in one byte. The easiest way would be to use a short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values), as sketched below.
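A minimal sketch of that packing, assuming all values fit in 10 bits (0-1023) and writing each value MSB-first into a bit buffer (the method name is made up):
// Hypothetical: pack 10-bit values into a byte array, MSB first.
static byte[] Pack10Bit(int[] values)
{
    byte[] packed = new byte[(values.Length * 10 + 7) / 8];
    int bitPos = 0;
    foreach (int v in values)
    {
        for (int b = 9; b >= 0; b--, bitPos++)
        {
            if (((v >> b) & 1) != 0)
                packed[bitPos / 8] |= (byte)(1 << (7 - bitPos % 8));
        }
    }
    return packed;
}
For the numbers 1-1000 that is 10,000 bits, or 1250 bytes, and the fixed width means the receiver can decode it unambiguously.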
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
byte[] nByte = BitConverter.GetBytes(i);
foreach(byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
This will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd have to add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class or by ginning up a Huffman encoding implementation using an agreed-upon set of frequencies.
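For example, here is a sketch using the built-in GZipStream over plain 4-byte integers (whether it actually beats the variable-length encodings above depends on the data):
using System.IO;
using System.IO.Compression;

static byte[] CompressNumbers()
{
    // Write the plain, fixed-width 4-byte integers first.
    var raw = new MemoryStream();
    var bw = new BinaryWriter(raw);
    for (int i = 1; i <= 1000; i++)
        bw.Write(i); // 4 bytes each, trivially decodable on the other end

    byte[] data = raw.ToArray();

    // Then gzip the whole buffer before putting it in the UDP packet.
    var output = new MemoryStream();
    using (var gzip = new GZipStream(output, CompressionMode.Compress))
        gzip.Write(data, 0, data.Length);
    return output.ToArray();
}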