Succinct way to write a mixture of chars and bytes? - c#

I'm trying to write an index file that follows the format of a preexisting (and immutable) text file.
The file is fixed-length, with 11 bytes of string (in ASCII) followed by a 4-byte integer, for a total of 15 bytes per line.
Perhaps I'm being a bit dim, but is there a simple way to do this? I get the feeling I need to open up two streams to write one line - one for the string and one for the bytes - but that feels wrong.
Any hints?

You can use BitConverter to convert between an int/long and an array of bytes. This way you would be able to write eleven bytes followed by four bytes, followed by eleven more bytes, and so on.
byte[] intBytes = BitConverter.GetBytes(intValue); // returns 4-byte array
Converting to bytes: BitConverter.GetBytes(int).
Converting back to int: BitConverter.ToInt32(byte[], int).
If you are developing a cross-platform solution, keep in mind the following note from the documentation (thanks to uriDium for the comment):
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
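Putting it together, here is a minimal sketch of writing one 15-byte record through a single stream (the file name "index.dat", the key, and the offset value are assumptions for illustration):
using System;
using System.IO;
using System.Text;

class IndexWriter
{
    static void Main()
    {
        string key = "CUSTOMER01";  // assumed example key, padded to exactly 11 chars below
        int offset = 123456;        // assumed example value for the 4-byte field

        using (var fs = new FileStream("index.dat", FileMode.Append))
        {
            byte[] keyBytes = Encoding.ASCII.GetBytes(key.PadRight(11).Substring(0, 11));
            fs.Write(keyBytes, 0, 11);                           // 11 bytes of ASCII
            byte[] offsetBytes = BitConverter.GetBytes(offset);  // 4 bytes, machine endianness
            fs.Write(offsetBytes, 0, 4);
        }
    }
}
One stream is enough: the string is just bytes once you run it through Encoding.ASCII.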

Related

c#: How to represent a large data packet easily?

I have an array of bytes which I want to represent with some kind of structure so that using these bytes will be easier.
Currently the bytes come into an array in my application as:
Bytes 0-3: header.
Bytes 4-10: a representation of an ASCII number.
Bytes 11-17: another representation of an ASCII number.
Bytes 18-1000: binary data.
Is there a way to represent this so that I can, for example, just use something like MyArray.Header, MyArray.Number1, MyArray.Number2, MyArray.Data?
I think a Struct could handle this but I am not sure how to define it or how to use it afterwards. The bytes will be coming in via a network continuously.
Thanks for any help.
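One common approach is a thin wrapper class that exposes the slices of the buffer as properties. A sketch only, based on the layout above; the class name PacketView is an assumption, and the number getters assume the 7 ASCII characters are decimal digits:
using System;
using System.Text;

class PacketView
{
    private readonly byte[] _buffer;

    public PacketView(byte[] buffer) { _buffer = buffer; }

    // Bytes 0-3: header.
    public byte[] Header
    {
        get { byte[] h = new byte[4]; Array.Copy(_buffer, 0, h, 0, 4); return h; }
    }

    // Bytes 4-10: a number encoded as 7 ASCII characters (assumed decimal digits).
    public long Number1
    {
        get { return long.Parse(Encoding.ASCII.GetString(_buffer, 4, 7)); }
    }

    // Bytes 11-17: another number encoded as 7 ASCII characters.
    public long Number2
    {
        get { return long.Parse(Encoding.ASCII.GetString(_buffer, 11, 7)); }
    }

    // Bytes 18 onward: the binary payload.
    public byte[] Data
    {
        get
        {
            byte[] d = new byte[_buffer.Length - 18];
            Array.Copy(_buffer, 18, d, 0, d.Length);
            return d;
        }
    }
}
Usage would then be close to what you describe: var p = new PacketView(buffer); then p.Header, p.Number1, p.Number2, p.Data.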

What exactly is byte[] in C#?

In C#, byte is the data type for 8-bit unsigned integers, so a byte[] should be an array of integers between 0 and 255, just like a char[] is an array of characters.
But most of the time when I encounter a byte[], I see it used as a contiguous chunk of memory storing a raw representation of data.
How do these two relate to each other?
thanks
Well, a byte as a data type is exactly what you already said: an unsigned integer between 0 and 255. Furthermore, this type needs exactly - believe it or not - one byte of memory, hence the name. This is why most readers that consume data byte by byte store that information in a structure that fits the size of a byte exactly: the byte data type.
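A minimal sketch of that idea: the same byte[] is just raw memory until you pick an interpretation (the values here are arbitrary examples):
using System;
using System.Text;

class ByteArrayDemo
{
    static void Main()
    {
        byte[] raw = { 0x48, 0x69, 0x21, 0x00 };  // four 8-bit values

        // Interpreted as ASCII characters:
        Console.WriteLine(Encoding.ASCII.GetString(raw, 0, 3));  // "Hi!"

        // Interpreted as a 32-bit integer (machine endianness applies):
        Console.WriteLine(BitConverter.ToInt32(raw, 0));  // 2189640 on little-endian
    }
}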

Why doesn't my 32-bit integer convert into a float properly?

Background
First of all, I have some hexadecimal data... 0x3AD3FFD6. I have chosen to represent this data as an array of bytes as follows:
byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };
I attempt to convert this array of bytes into its single-precision floating point value by executing the following code:
float floatNumber = 0;
floatNumber = BitConverter.ToSingle(numBytes, 0);
I have calculated the expected value using an online IEEE 754 converter and got the following result:
0.0016174268
I would expect the output of the C# code to produce the same thing, but instead I am getting something like...
-1.406E+14
Question
Can anybody explain what is going on here?
The bytes are in the wrong order. BitConverter uses the endianness of the underlying system (the computer architecture), so always make sure your byte order matches it.
Quick Answer: You've got the order of the bytes in your numBytes array backwards.
Since you're programming in C#, I assume you are running on an Intel processor, and Intel processors are little-endian; that is, they store (and expect) the least-significant byte first. In your numBytes array you are putting the most-significant byte first.
BitConverter doesn't so much convert byte array data as interpret it as another base data type. Think of physical memory holding a byte array:
b0 | b1 | b2 | b3.
To interpret that byte array as a single-precision float, one must know the endianness of the machine, i.e. whether the LSByte is stored first or last. It may seem natural for the LSByte to come last, because many of us read numbers that way, but for little-endian (Intel) processors, that's backwards.
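A minimal sketch of the fix, using the values from the question: reverse the array when the machine is little-endian before handing it to BitConverter:
using System;

class FloatDemo
{
    static void Main()
    {
        byte[] numBytes = { 0x3A, 0xD3, 0xFF, 0xD6 };  // most-significant byte first

        // BitConverter follows the machine's byte order, so on a
        // little-endian machine the bytes must be reversed first.
        if (BitConverter.IsLittleEndian)
            Array.Reverse(numBytes);

        float floatNumber = BitConverter.ToSingle(numBytes, 0);
        Console.WriteLine(floatNumber);  // 0.0016174268
    }
}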

Converting byte array to hexadecimal value using BitConverter class in c#?

I'm trying to convert a byte array into a hexadecimal string using the BitConverter class.
long hexValue = 0X780B13436587;
byte[] byteArray = BitConverter.GetBytes ( hexValue );
string hexResult = BitConverter.ToString ( byteArray );
Now if I execute the above code line by line, the hexResult string is not what I expected. I thought it would be the same as hexValue (i.e. 780B13436587h), but what I get is different. Am I missing something? Correct me if I'm wrong.
Thanks!
Endianness.
BitConverter uses CPU-endianness, which for most people means: little-endian. When humans write numbers, we tend to write big-endian (broadly speaking: you write the thousands, then hundreds, then tens, then the digits). For a CPU, big-endian means that the most-significant byte is first and the least-significant byte is last. However, unless you're using an Itanium, your CPU is probably little-endian, which means that the most-significant byte is last, and the least-significant byte is first. The CPU is implemented such that this doesn't matter unless you are peeking inside raw memory - it will ensure that numeric and binary arithmetic still works the way you expect. However, BitConverter works by peeking inside raw memory - hence you see the reversed data.
If you want the value in big-endian format, then you'll need to either:
- do it manually, emitting the bytes in big-endian order, or
- check the BitConverter.IsLittleEndian value and, if true, reverse the input bytes or reverse the output.
If you look closely, the bytes in the output from BitConverter are reversed.
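For example, a minimal sketch of the reverse-the-bytes option, using the value from the question:
using System;

class HexDemo
{
    static void Main()
    {
        long hexValue = 0x780B13436587;
        byte[] byteArray = BitConverter.GetBytes(hexValue);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(byteArray);  // put the most-significant byte first
        Console.WriteLine(BitConverter.ToString(byteArray));  // 00-00-78-0B-13-43-65-87
    }
}
Note the two leading 00 bytes: a long is 8 bytes, so the 6 significant bytes of the value are padded out.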
To get the hex string for a number directly, you can use the Convert class:
Convert.ToString(hexValue, 16); // "780b13436587" - no byte reversal
It is the same number but reversed.
BitConverter.ToString can return the string representation in reversed order:
http://msdn.microsoft.com/en-us/library/3a733s97(v=vs.110).aspx
"All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian."

Binary file format: why did the system read all 4 bytes and display the value correctly as 25736483?

Using the C# BinaryWriter class I am writing 2 words to a file:
bw.Write(i);
bw.Write(s);
where i is an integer with value 25736483 and s is a string with the value "I am happy".
I then read the file back and output the values to a TextBox:
iN = br.ReadInt32();
newS = br.ReadString();
this.textBox1.Text = iN.ToString() + newS;
The integer will be stored in 4 bytes and the string in another 11 bytes (a 1-byte length prefix plus 10 characters). When we call ReadInt32, how did the system know that it had to consume 4 bytes and not only 1 byte? Why did the system read all 4 bytes and display the value correctly as 25736483?
You told it to read an Int32 with ReadInt32, and an Int32 is 4 bytes (32 bits). So you said "Read a four byte (32-bit) integer", and it did exactly what you told it to do.
The same thing happened when you used bw.Write(i); - as you said, i is an integer, and you told it to write an integer. Since an int in C# is always 4 bytes (32 bits), it wrote 4 bytes.
Integers are 32 bits in C#, so when you passed an int to Write, you got the overload that writes 4 bytes. The ReadInt32 method knows to read four bytes because 32 bits is four bytes, so that's what it always does.
The Write method has an overload for each variable type that it knows how to write. This overload knows exactly how many bytes it takes to store the value in the stream.
You can see every specific overload in the documentation for BinaryWriter.Write.
When you read the data back, you call the matching Read method for the type you are reading (ReadInt32, ReadString, and so on), which likewise knows how many bytes that type occupies.
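As a sketch, the whole round trip looks like this (a MemoryStream stands in for the file):
using System;
using System.IO;
using System.Text;

class RoundTripDemo
{
    static void Main()
    {
        var ms = new MemoryStream();
        using (var bw = new BinaryWriter(ms, Encoding.UTF8, leaveOpen: true))
        {
            bw.Write(25736483);      // Int32 overload: exactly 4 bytes
            bw.Write("I am happy");  // string overload: 1 length-prefix byte + 10 chars
        }

        ms.Position = 0;
        using (var br = new BinaryReader(ms))
        {
            int iN = br.ReadInt32();        // consumes exactly 4 bytes
            string newS = br.ReadString();  // reads the length prefix, then the chars
            Console.WriteLine(iN + " " + newS);  // 25736483 I am happy
        }
    }
}
The string's length prefix is how ReadString knows where the string ends; the fixed size of Int32 is how ReadInt32 knows to take exactly 4 bytes.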
