C#: How to represent a large data packet easily?

I have an array of bytes which I want to represent with some kind of structure so that using these bytes will be easier.
Currently the bytes come into an array in my application as:
Bytes 0-3: header,
Bytes 4-10: a representation of an ASCII number,
Bytes 11-17: another representation of an ASCII number,
Bytes 18-1000: binary data.
Is there a way to represent this so that I can, for example, just use something like MyArray.Header, MyArray.Number1, MyArray.Number2, MyArray.Data?
I think a struct could handle this, but I am not sure how to define it or how to use it afterwards. The bytes will be coming in via the network continuously.
Thanks for any help.
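One way to get that dotted-access syntax is a small wrapper class over the incoming buffer. This is a minimal sketch, assuming the byte layout described above; the class name `Packet` and the property names are illustrative, and the two 7-byte fields are assumed to hold ASCII digits:

```csharp
using System;
using System.Text;

// Hypothetical wrapper over one received buffer, following the layout
// described in the question (offsets are assumptions from that layout).
class Packet
{
    private readonly byte[] _buffer;

    public Packet(byte[] buffer)
    {
        if (buffer.Length < 18)
            throw new ArgumentException("Buffer too short for this layout.");
        _buffer = buffer;
    }

    // Bytes 0-3: header
    public byte[] Header
    {
        get { var h = new byte[4]; Array.Copy(_buffer, 0, h, 0, 4); return h; }
    }

    // Bytes 4-10: ASCII digits, parsed to an int
    public int Number1
    {
        get { return int.Parse(Encoding.ASCII.GetString(_buffer, 4, 7)); }
    }

    // Bytes 11-17: another ASCII number
    public int Number2
    {
        get { return int.Parse(Encoding.ASCII.GetString(_buffer, 11, 7)); }
    }

    // Bytes 18 to end: binary payload
    public byte[] Data
    {
        get
        {
            var d = new byte[_buffer.Length - 18];
            Array.Copy(_buffer, 18, d, 0, d.Length);
            return d;
        }
    }
}
```

Then, for each buffer received from the network, `new Packet(buffer)` gives you `packet.Header`, `packet.Number1`, and so on, without copying until a property is actually read.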

Related

How to change wav bits per sample from 16 to 32 in .NET?

I want to change a source wav file's bits per sample from 16 to 32.
To do this I update:
raw data: convert each 2 bytes to a short, then cast to float, then write the float's bytes (instead of the initial two bytes)
the data length field, by doubling
bits per sample, by doubling
the byte rate field, by doubling
the block align parameter, by doubling
After saving the wav file, a lot of noise appears and the sound becomes very loud.
What am I doing wrong?
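One likely cause (offered as a guess, not a confirmed diagnosis): 32-bit float WAV samples are expected in the range -1.0 to 1.0, and the header's format code must change from 1 (PCM) to 3 (IEEE float). Casting a short straight to float leaves values in -32768..32767, which a player interprets as wildly clipped audio. A sketch of the per-sample conversion under those assumptions:

```csharp
using System;

// Convert 16-bit PCM sample bytes to 32-bit IEEE-float sample bytes.
// Assumes little-endian input and that the WAV header is updated
// separately (format code 1 -> 3, plus the size fields doubled).
static byte[] Convert16To32(byte[] pcm16)
{
    var result = new byte[pcm16.Length * 2];
    for (int i = 0; i < pcm16.Length; i += 2)
    {
        short sample = BitConverter.ToInt16(pcm16, i);
        float normalized = sample / 32768f;           // scale into [-1, 1)
        byte[] f = BitConverter.GetBytes(normalized); // 4 bytes per sample
        Buffer.BlockCopy(f, 0, result, i * 2, 4);
    }
    return result;
}
```

The division by 32768f is the step that removes the "very loud" symptom; without it every sample is thousands of times too large.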

What exactly is byte[] in C#?

In C#, byte is the data type for 8-bit unsigned integers, so a byte[] should be an array of integers that are between 0 and 255, just like a char[] is an array of characters.
But most of time when I encounter byte[], I see byte[] is used as a contiguous chunk of memory for storing raw representation of data.
How do these two relate to each other?
thanks
Well, a byte as a data type is exactly what you already said: an unsigned integer between 0 and 255. Furthermore, this type occupies exactly one byte of memory, hence the name. This is why most readers that read byte by byte store that information in a type that fits exactly one byte: the byte data type.
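The two views are really the same thing. A short illustration of a byte[] acting both as "raw memory" and as an array of small integers:

```csharp
using System;
using System.Text;

// An int laid out as its 4 raw bytes:
byte[] raw = BitConverter.GetBytes(25736483);
Console.WriteLine(raw.Length);   // 4
// Each element is just a number between 0 and 255:
Console.WriteLine(raw[0]);

// A string laid out as its raw ASCII bytes:
byte[] text = Encoding.ASCII.GetBytes("Hi");
Console.WriteLine(text[0]);      // 72, the ASCII code of 'H'
```

Whether you treat the array as "numbers" or as "a chunk of memory" is purely a matter of how you interpret the same bytes afterwards.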

Binary file format: why did the system read all the 4 bytes and displayed the value correctly as 25736483

Using the C# BinaryWriter class, I am writing two values to a file:
bw.Write(i);
bw.Write(s);
where i is an integer with value 25736483, and s is a string with value "I am happy".
I am reading the file again and outputting the value to a TextBox:
iN = br.ReadInt32();
newS = br.ReadString();
this.textBox1.Text = iN.ToString() + newS;
The integer will be stored in 4 bytes and the string in another 11 bytes. When we call ReadInt32, how did the system know that it has to read 4 bytes and not only 1 byte? Why did the system read all 4 bytes and display the value correctly as 25736483?
You told it to read an Int32 with ReadInt32, and an Int32 is 4 bytes (32 bits). So you said "Read a four byte (32-bit) integer", and it did exactly what you told it to do.
The same thing happened when you used bw.Write(i); - as you said, i is an integer, and you told it to write an integer. Since a C# int is always 4 bytes (32 bits), regardless of platform, it wrote 4 bytes.
When you passed an integer to Write, overload resolution picked the overload that writes 4 bytes. The ReadInt32 method knows to read four bytes because 32 bits is four bytes, so that's what it always does.
The Write method has an overload for each variable type that it knows how to write. This overload knows exactly how many bytes it takes to store the value in the stream.
The full list of overloads is documented on BinaryWriter.Write.
When you read data you call the correct version of .Read for the data type you are trying to read, which is also specifically coded to know how many bytes are in that type.
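A minimal round trip showing the pairing described above, using a MemoryStream in place of the file (the values are the ones from the question):

```csharp
using System;
using System.IO;

var ms = new MemoryStream();
var bw = new BinaryWriter(ms);
bw.Write(25736483);        // int overload: writes exactly 4 bytes
bw.Write("I am happy");    // string overload: length prefix, then the chars
bw.Flush();

ms.Position = 0;
var br = new BinaryReader(ms);
int iN = br.ReadInt32();       // reads back exactly 4 bytes
string newS = br.ReadString(); // reads the prefix, then that many chars
Console.WriteLine(iN.ToString() + newS); // 25736483I am happy
```

The string occupies 11 bytes here: one length-prefix byte plus the 10 characters, which matches the "another 11 bytes" in the question.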

Succinct way to write a mixture of chars and bytes?

I'm trying to write an index file that follows the format of a preexisting (and immutable) text file.
The file is fixed length, with 11 bytes of string (in ASCII) followed by 4 bytes of long for a total of 15 bytes per line.
Perhaps I'm being a bit dim, but is there a simple way to do this? I get the feeling I need to open up two streams to write one line - one for the string and one for the bytes - but that feels wrong.
Any hints?
You can use BitConverter to convert between an int/long and an array of bytes. This way you would be able to write eleven bytes followed by four bytes, followed by eleven more bytes, and so on.
byte[] intBytes = BitConverter.GetBytes(intValue); // returns a 4-byte array
Converting to bytes: BitConverter.GetBytes(int).
Converting back to int: BitConverter.ToInt32(byte[], int)
If you are developing a cross-platform solution, keep in mind the following note from the documentation (thanks to uriDium for the comment):
The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.
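Both fields can go through a single stream, so no second stream is needed. A sketch of writing one 15-byte record, assuming the 4-byte field holds an Int32 (note that a C# long is 8 bytes, so a 4-byte field on disk corresponds to an int); the method and parameter names are illustrative:

```csharp
using System;
using System.IO;
using System.Text;

// Write one fixed-length record: 11 ASCII bytes, then a 4-byte int.
// Assumes key is at most 11 characters; shorter keys are space-padded.
static void WriteRecord(Stream stream, string key, int offset)
{
    byte[] keyBytes = Encoding.ASCII.GetBytes(key.PadRight(11)); // exactly 11 bytes
    stream.Write(keyBytes, 0, 11);
    byte[] offsetBytes = BitConverter.GetBytes(offset);          // exactly 4 bytes
    stream.Write(offsetBytes, 0, 4);
}
```

If the preexisting file was written big-endian, the endianness caveat above applies and the four offset bytes would need reversing on a little-endian machine.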

Sending HEX values over a packet in C#

I currently have the following:
N.Sockets.UdpClient UD;
UD = new N.Sockets.UdpClient();
UD.Connect("xxx.xxx.xxx.xxx", 5255);
UD.Send( data, data.Length );
How would I send data in hex? I cannot just save it straight into a Byte array.
Hex is just an encoding. It's simply a way of representing a number. The computer works with bits and bytes only -- it has no notion of "hex".
So any number, whether represented in hex or decimal or binary, can be encoded into a series of bytes:
var data = new byte[] { 0xFF };
And any hex string can be converted into a number (using, e.g., int.Parse with NumberStyles.HexNumber).
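Concretely, a hex string with two characters per byte can be turned into the byte array you hand to Send. A small sketch (the helper name is illustrative):

```csharp
using System;

// Convert a hex string such as "DEADBEEF" into { 0xDE, 0xAD, 0xBE, 0xEF }.
// Assumes an even-length string of valid hex digits.
static byte[] HexToBytes(string hex)
{
    var bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16); // base 16
    return bytes;
}
```

The resulting array can be passed directly to UD.Send(data, data.Length).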
Things get more interesting when a number exceeds one byte: Then there has to be an agreement of how many bytes will be used to represent the number, and the order they should be in.
In C#, ints are 4 bytes. Internally, depending on the endianness of the CPU, the most significant byte (highest-valued digits) might be stored first (big-endian) or last (little-endian). Typically, big-endian is used as the standard for communication over the network (remember the sender and receiver might have CPUs with different endianness). But, since you are sending the raw bytes manually, I'll assume you are also reading the raw bytes manually on the other end; if that's the case, you are of course free to use any arbitrary format you like, providing that the client can understand that format unambiguously.
To encode an int in big-endian order, you can do:
int num = unchecked((int)0xdeadbeef); // 0xdeadbeef doesn't fit in an int without unchecked
var unum = (uint)num; // convert to uint so >> doesn't sign-extend negative numbers
var data = new[] {
    (byte)(unum >> 24),
    (byte)(unum >> 16),
    (byte)(unum >> 8),
    (byte)unum
};
Be aware that some packets might never reach the client (this is the main practical difference between TCP and UDP), possibly leading to misinterpretation of the bytes. You should take steps to improve the robustness of your message-sending (e.g. by adding a checksum, and ignoring values whose checksums are invalid or missing).
