Converting an int into an array of 2 bytes in C#

I apologize in advance if my question is not clear enough; I don't have a lot of experience with C# and I've run into an odd problem.
I am trying to convert an int into an array of two bytes (for example: take 2210 and get 0x08, 0xA2), but all I'm getting is 0x00, 0xA2, and I can't figure out why. I would highly appreciate any advice.
(I've tried reading other questions regarding this matter, but couldn't find a helpful answer.)
My code:
profile_number = GetProfileName(); // gets the profile number as an int
profile_num[0] = (byte) ((profile_number & 0xFF00));
profile_num[1] = (byte) ((profile_number & 0x00FF));
profile_checksum = CalcProfileChecksum();
//Note: I'm referring to a 2-byte array, so the answer to the question regarding 4-byte arrays does not help me.

You need to shift the 1st byte:
//profile_num[0] = (byte) ((profile_number & 0xFF00));
profile_num[0] = (byte) ((profile_number & 0xFF00) >> 8);
profile_num[1] = (byte) ((profile_number & 0x00FF));
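As a quick sanity check (a standalone sketch, assuming profile_number fits in 16 bits and profile_num is a 2-byte array):
int profile_number = 2210;                                 // 0x08A2
byte[] profile_num = new byte[2];
profile_num[0] = (byte)((profile_number & 0xFF00) >> 8);   // 0x08
profile_num[1] = (byte)(profile_number & 0x00FF);          // 0xA2
Console.WriteLine(BitConverter.ToString(profile_num));     // prints "08-A2"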

First I thought that this was the simplest way:
public static byte[] IntToByteArray(int value)
{
    return (new BigInteger(value)).ToByteArray();
}
But I realised that ToByteArray returns only the bytes that are needed: if the value is small (under 256), a single byte is returned. Another thing I noticed is that the bytes in the returned array are in reverse order, so the rightmost (least significant) byte comes first. So I came up with a small revision:
public static byte[] IntToByteArrayUsingBigInteger(int value, int numberOfBytes)
{
    var res = (new BigInteger(value)).ToByteArray().Reverse().ToArray();
    if (res.Length == numberOfBytes)
        return res;
    byte[] result = new byte[numberOfBytes];
    if (res.Length > numberOfBytes)
        Array.Copy(res, res.Length - numberOfBytes, result, 0, numberOfBytes);
    else
        Array.Copy(res, 0, result, numberOfBytes - res.Length, res.Length);
    return result;
}
I know this doesn't compare with the performance of bitwise operations, but for the sake of learning new things, and if you prefer to use the high-level classes that .NET provides instead of going low level with bitwise operators, I think it's a nice alternative.
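A short usage sketch for the method above (the values are illustrative, not from the question; the method itself needs using System.Numerics; for BigInteger and using System.Linq; for Reverse()):
byte[] two = IntToByteArrayUsingBigInteger(2210, 2);
Console.WriteLine(BitConverter.ToString(two));    // "08-A2"
byte[] four = IntToByteArrayUsingBigInteger(2210, 4);
Console.WriteLine(BitConverter.ToString(four));   // "00-00-08-A2"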


Getting High/Low Byte from ushort results in incorrect result

So I'm currently writing a library that will help me interface with a fingerprint scanner over a USB port (this one, in fact), which is just a resold Zhiantec scanner (documentation here).
The problem I'm running into is this: the documentation specifies that the Header, Package Length and Checksum bytes are to be transferred high byte first. Not a big deal; after a quick Google search I found this answer by Jon Skeet showing exactly how to do this. I then put this into two small helper methods that look like this:
public static class ByteHelper
{
    // Low/high byte arithmetic:
    // byte upper = (byte) (number >> 8);
    // byte lower = (byte) (number & 0xff);
    public static byte[] GetBytesOrderedLowHigh(ushort toBytes)
    {
        return new[] {(byte) (toBytes & 0xFF), (byte) (toBytes >> 8)};
    }

    public static byte[] GetBytesOrderedHighLow(ushort toBytes)
    {
        return new[] {(byte) (toBytes >> 8), (byte) (toBytes & 0xFF)};
    }
}
Which I'm testing to see if they do the correct thing with this code:
// Expected Output '0A-00', actual '00-0A'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedHighLow(10)));
// Expected Output '00-0A', actual '0A-00'
Console.WriteLine(BitConverter.ToString(ByteHelper.GetBytesOrderedLowHigh(10)));
But I'm getting the wrong output (see the comments above the Console.WriteLine statements). Can anyone explain why it's doing this and how to fix it?
The results you get are correct.
Your LowHigh method swaps the two bytes:
00-0A becomes 0A-00
Your HighLow method only converts the ushort into a byte array without changing the order:
00-0A stays 00-0A
Here is a step-by-step example of your logic, with some extra output for better understanding:
ulong x = 0x0A0B0C;
Console.WriteLine(x.ToString("X6"));
// We're only interested in last 2 Bytes:
ulong x2 = x & 0xFFFF;
// Now let's get those last 2 bytes:
byte upperByte = (byte)(x2 >> 8); // Shift right by 8 bits -> the low byte gets dropped
Console.WriteLine(upperByte.ToString("X2"));
byte lowerByte = (byte)(x2 & 0xFF); // Only last Byte is left
Console.WriteLine(lowerByte.ToString("X2"));
// The question is: What do you want to do with it? Switch or leave it in this order?
// leave them in the current order:
Console.WriteLine(BitConverter.ToString(new byte[] { upperByte, lowerByte }));
// switch positions:
Console.WriteLine(BitConverter.ToString(new byte[] { lowerByte, upperByte }));
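For reference (a quick sanity check, not part of the original answer), with x = 0x0A0B0C the snippet prints:
0A0B0C
0B
0C
0B-0C
0C-0B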

C# Explanation of stream string example

I am using a Microsoft example for interprocess communication. In the example there are two methods for reading/writing a string to/from a stream. The code sends the length of the string being streamed as part of the data. I need similar code, but need to make some modifications. An explanation of the highlighted lines would be helpful.
In WriteString(), they take the length of the byte array being written and divide it by 256. The opposite is done in ReadString(), but an explanation as to why 256 is used would be great. Then, it writes another byte by taking the length and ANDing it with 255. I don't understand the reasoning for this either; I'm thinking it shifts the value, but I don't really understand why that is needed. And then in ReadString() it does a += on the length by reading a byte. An explanation of this would be really helpful. I'm new to streaming and just want to understand exactly what is happening and why.
public string ReadString()
{
    int len = 0;
    // the next two lines
    len = ioStream.ReadByte() * 256;
    len += ioStream.ReadByte();
    byte[] inBuffer = new byte[len];
    ioStream.Read(inBuffer, 0, len);
    return streamEncoding.GetString(inBuffer);
}

public int WriteString(string outString)
{
    byte[] outBuffer = streamEncoding.GetBytes(outString);
    int len = outBuffer.Length;
    if (len > UInt16.MaxValue)
    {
        len = (int)UInt16.MaxValue;
    }
    // the next two lines
    ioStream.WriteByte((byte)(len / 256));
    ioStream.WriteByte((byte)(len & 255));
    ioStream.Write(outBuffer, 0, len);
    ioStream.Flush();
    return outBuffer.Length + 2;
}
This code is bad, find a new tutorial.
The 256 stuff is there to convert the low 2 bytes of the length integer to bytes in order to serialize/deserialize them. This is not how it's normally done. Use BinaryReader/BinaryWriter, or code based on shifts and bitwise AND rather than multiplication.
Dividing by 256 is equivalent to x >> 8; note that this only works for non-negative integers. x & 255 takes the lowest byte, which could simply be (byte)x. Sometimes people write x % 256 for this, which is not idiomatic and has problems with signedness.
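As a quick worked example (illustrative numbers, not from the answer): for len = 2210, len / 256 = 8 (0x08) and len & 255 = 162 (0xA2); reading the two bytes back gives 8 * 256 + 162 = 2210.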
Good code would be new byte[] { (byte)(x >> 8), (byte)(x >> 0) } and x = bytes[0] << 8 | bytes[1]. Much simpler, faster and idiomatic. I like writing >> 0, which does nothing, for the sake of symmetry; it's optimized away. This might seem ridiculous for 16-bit ints, but with longer ints there are 4 or 8 components and having one of them slightly off seems like needless inconsistency.
ioStream.Read(inBuffer, 0, len);
is a bug because it assumes the read completes in one chunk. You need a loop, or again BinaryReader.
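As a rough sketch of what that might look like with BinaryReader/BinaryWriter (names and parameters are illustrative, not the Microsoft sample's; uses System.IO and System.Text; also note BinaryWriter writes the ushort prefix little-endian, unlike the sample's high-byte-first format, which is fine as long as both ends agree on the convention):
public int WriteString(Stream ioStream, Encoding streamEncoding, string outString)
{
    byte[] outBuffer = streamEncoding.GetBytes(outString);
    ushort len = (ushort)Math.Min(outBuffer.Length, (int)ushort.MaxValue);
    var writer = new BinaryWriter(ioStream);
    writer.Write(len);                   // 2-byte length prefix, little-endian
    writer.Write(outBuffer, 0, len);
    writer.Flush();
    return len + 2;
}

public string ReadString(Stream ioStream, Encoding streamEncoding)
{
    var reader = new BinaryReader(ioStream);
    ushort len = reader.ReadUInt16();            // matches the 2-byte prefix above
    byte[] inBuffer = reader.ReadBytes(len);     // ReadBytes keeps reading until len bytes or end of stream
    return streamEncoding.GetString(inBuffer);
}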
if (len > UInt16.MaxValue)
{
len = (int)UInt16.MaxValue;
}
I'm going to use this opportunity to warn anyone who reads this: Microsoft .NET sample code often is of extremely poor quality. Read it with great skepticism.

Defining a bit[] array in C#

Currently I'm working on a solution for a prime-number calculator/checker. The algorithm is already working and very efficient (0.359 seconds for the first 9012330 primes). Here is a part of the upper region where everything is declared:
const uint anz = 50000000;
uint a = 3, b = 4, c = 3, d = 13, e = 12, f = 13, g = 28, h = 32;
bool[,] prim = new bool[8, anz / 10];
uint max = 3 * (uint)(anz / (Math.Log(anz) - 1.08366));
uint[] p = new uint[max];
Now I wanted to go to the next level and use ulongs instead of uints to cover a larger range (you can see that already), and that is where I ran into my problem: the bool array.
As everybody should know, a bool takes up a whole byte, which wastes a lot of memory when creating the array... So I'm searching for a more resource-friendly way to do that.
My first idea was a bit array -> not a byte array! <- to store the bools, but I haven't figured out how to do that yet. So if someone has ever done something like this, I would appreciate any kind of tips and solutions. Thanks in advance :)
You can use the BitArray collection:
http://msdn.microsoft.com/en-us/library/system.collections.bitarray(v=vs.110).aspx
MSDN Description:
Manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0).
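As a minimal sketch of how a BitArray could replace a bool array in a sieve (this is a plain one-bit-per-number sieve under my own assumptions, not the questioner's 8-row layout; the name limit is illustrative):
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        const int limit = 50000000;
        var composite = new BitArray(limit + 1);        // one bit per number instead of one byte

        for (int i = 2; (long)i * i <= limit; i++)
        {
            if (composite[i]) continue;                 // i is prime, so mark its multiples
            for (long j = (long)i * i; j <= limit; j += i)
                composite[(int)j] = true;
        }

        int count = 0;
        for (int i = 2; i <= limit; i++)
            if (!composite[i]) count++;

        Console.WriteLine(count + " primes up to " + limit);
    }
}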
You can (and should) use well tested and well known libraries.
But if you're looking to learn something (as it seems to be the case) you can do it yourself.
Another reason you may want to use a custom bit array is to use the hard drive to store the array, which comes in handy when calculating primes. To do this you'd need to further split addr, for example lowest 3 bits for the mask, next 28 bits for 256MB of in-memory storage, and from there on - a file name for a buffer file.
Yet another reason for a custom bit array is to compress memory use when specifically searching for primes. After all, more than half of your bits will be 'false' because the numbers corresponding to them are even, so in fact you can both speed up your calculation AND reduce memory requirements if you don't store the even bits at all. You can do that by changing the way addr is interpreted, as sketched just below. Furthermore, you can also exclude numbers divisible by 3 (only 2 out of every 6 numbers have a chance of being prime), thus reducing memory requirements by 60% compared to a plain bit array.
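A tiny illustration of that remapping (the helper name is made up for the example): bit i stands for the odd number 2*i + 1, so for an odd n the address is simply n >> 1.
static long OddIndex(long n) => n >> 1;   // 3 -> 1, 5 -> 2, 7 -> 3, ...
// bits[OddIndex(n)] then marks the odd number n; even numbers never get a bit.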
Notice the use of shift and bitwise operators to make the code a bit more efficient:
byte mask = (byte)(1 << (int)(addr & 7)); for example is equivalent to
byte mask = (byte)(1 << (int)(addr % 8));
and addr >> 3 is equivalent to addr / 8.
Testing shift/bitwise operators vs division shows 2.6s vs 4.8s in favor of shift/bitwise for 200000000 operations.
Here's the code:
void Main()
{
    var barr = new BitArray(10);
    barr[4] = true;
    Console.WriteLine("Is it " + barr[4]);
    Console.WriteLine("Is it Not " + barr[5]);
}

public class BitArray
{
    private readonly byte[] _buffer;

    public bool this[long addr]
    {
        get
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            byte val = _buffer[(int)(addr >> 3)];
            bool bit = (val & mask) == mask;
            return bit;
        }
        set
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            int offs = (int)(addr >> 3);
            if (value)
                _buffer[offs] = (byte)(_buffer[offs] | mask);   // set the bit
            else
                _buffer[offs] = (byte)(_buffer[offs] & ~mask);  // clear the bit
        }
    }

    public BitArray(long size)
    {
        _buffer = new byte[size / 8 + 1]; // one byte holds 8 bools; the spare +1 avoids dealing with rounding
    }
}

Microsoft BigInteger goes "negative" when I import from an array

Suppose I have an array that ends with the most significant bit of the most significant byte equaling 1.
My understanding is that if this is the case, BigInteger will treat this as a negative number, by design.
BigInteger numberToShorten = new BigInteger(toEncode);
if (numberToShorten.Sign == -1)
{
    // problem with two's complement, or the most significant bit of the last byte is equal to 1
    throw new Exception("Unexpected negative number");
}
To solve this problem, I think I need to add a dummy zero byte to the array prior to converting it. I can easily do this using Array.Resize().
My question is, how should I test whether that bit is indeed equal to 1?
I'm pretty weak on my boolean logic right now, and am thinking I need to AND the values and test for equality, but I'm not able to get the syntax correct in C#. Something like this:
byte temp = toEncode[toEncode.Length - 1];
if (temp == ???)
{
    Array.Resize(ref toEncode, toEncode.Length + 1);
}
If the number to be converted to BigInteger is always positive then you don't actually need to test at all; just appending a zero byte will always work correctly.
For completeness, checking whether the MSB of any one byte b is set is done with
if ((b & 0x80) == 0x80) ...
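Putting that together with the Array.Resize() approach from the question (a hedged sketch; toEncode is the little-endian byte array being converted):
if (toEncode.Length > 0 && (toEncode[toEncode.Length - 1] & 0x80) != 0)
{
    Array.Resize(ref toEncode, toEncode.Length + 1);   // the appended byte defaults to 0x00
}
BigInteger numberToShorten = new BigInteger(toEncode); // now always non-negative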
Found the answer at the bottom of this MSDN article (I think)
ulong originalNumber = UInt64.MaxValue;
byte[] bytes = BitConverter.GetBytes(originalNumber);
if (originalNumber > 0 && (bytes[bytes.Length - 1] & 0x80) > 0)
{
    byte[] temp = new byte[bytes.Length];
    Array.Copy(bytes, temp, bytes.Length);
    bytes = new byte[temp.Length + 1];
    Array.Copy(temp, bytes, temp.Length);
}
BigInteger newNumber = new BigInteger(bytes);
Console.WriteLine("Converted the UInt64 value {0:N0} to {1:N0}.",
                  originalNumber, newNumber);
// The example displays the following output:
//    Converted the UInt64 value 18,446,744,073,709,551,615 to 18,446,744,073,709,551,615.
I'm not sure this is necessary; have you tried it? In any case, assuming that temp contains a byte and you are trying to test whether that byte ends in 1, this is how you should do it:
if ((temp & 0x01) == 1)
{
    ...
}
That amounts to:
Temp:        ???????z
0x01:        00000001
Temp & 0x01: 0000000z
Where z represents the bit you are trying to test, and you can now just test whether the result is equal to 1 or not.
If you are trying to test whether the byte starts with 1, do:
if ((temp & 0x80) == 0x80) // 0x80 is 10000000 in binary
{
    ...
}
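For example (an illustrative value, not from the question): if temp = 0xA2 (1010 0010 in binary), then temp & 0x01 == 0, so the lowest bit is not set, while temp & 0x80 == 0x80, so the highest bit is set.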

Incorrect value converting hexadecimal numbers to UInt C# [duplicate]

This question already has answers here:
Why does BinaryReader.ReadUInt32() reverse the bit pattern?
(6 answers)
Closed 9 years ago.
I am trying to read a binary file in C#, but I am facing a problem.
I declared the following:
public static readonly UInt32 NUMBER = 0XCAFEBABE;
Then while reading from the very beginning of the file I am asking to read the first 4 bytes (already tried different ways, but this is the simplest):
UInt32 num = in_.ReadUInt32(); // in_ is a BinaryReader
The first 4 bytes of the file are CA, FE, BA and BE (in hex), but when I convert them to a UInt32 I get a different value: NUMBER is 3405691582, while num is 3199925962.
I also tried to do this:
byte[] f2 = {0xCA, 0xFE, 0xBA, 0xBE};
and the result of doing BitConverter.ToUInt32(new byte[]{0xCA, 0xFE, 0xBA, 0xBE},0) is 3199925962.
Can anyone help me?
This is because of the little endianness of your machine. See BitConverter.IsLittleEndian property to check this.
Basically, numbers are stored in reverse byte order, compared to how you would write them down. We write the most significant number on the left, but the (little endian) PC stores the least significant byte on the left. Thus, the result you're getting is really 0xBEBAFECA (3199925962 decimal) and not what you expected.
You can convert using bit shifting operations:
uint value = (uint)(f2[0] << 24 | f2[1] << 16 | f2[2] << 8 | f2[3]);
There are many more ways to convert, including IPAddress.NetworkToHostOrder as I4V pointed out, f2.Reverse(), etc.
For your specific code, I believe this would be most practical:
uint num = (uint)IPAddress.NetworkToHostOrder(in_.ReadInt32());
This may result in an arithmetic overflow, however, so it may cause problems with the /checked compiler option or the checked keyword (neither is very common).
If you want to deal with these situations and get even cleaner code, wrap it in an extension method:
public static uint ReadUInt32NetworkOrder(this BinaryReader reader)
{
    unchecked
    {
        return (uint)IPAddress.NetworkToHostOrder(reader.ReadInt32());
    }
}
That's what is called byte order:
var result1 = BitConverter.ToUInt32(new byte[] { 0xCA, 0xFE, 0xBA, 0xBE }, 0);
//3199925962
var result2 = BitConverter.ToUInt32(new byte[] { 0xBE, 0xBA, 0xFE, 0xCA }, 0);
//3405691582
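As an aside (not from the original answers), newer .NET versions also offer System.Buffers.Binary.BinaryPrimitives, which reads either byte order directly:
using System;
using System.Buffers.Binary;

class Program
{
    static void Main()
    {
        byte[] f2 = { 0xCA, 0xFE, 0xBA, 0xBE };
        Console.WriteLine(BinaryPrimitives.ReadUInt32BigEndian(f2));    // 3405691582 (0xCAFEBABE)
        Console.WriteLine(BinaryPrimitives.ReadUInt32LittleEndian(f2)); // 3199925962 (0xBEBAFECA)
    }
}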
