Convert to lower and upper bits - c#

I have a device, and the user sets some values in the GUI, such as 630, 330, etc. I need to pass these values over I2C as bytes. For example, 583 will be 02 47 in hex; those two bytes go into two byte variables, and I need to call Set(byte lower, byte upper). So the requirement is to convert an int or double value to 2 bytes.
I tried :
ushort R1x = (ushort)Rx;         // truncate to 16 bits
byte upper = (byte)(R1x >> 8);   // high byte
byte lower = (byte)(R1x & 0xff); // low byte
What I need is lower = 47 and upper = 02. Instead I am getting lower = 0 and upper = 247. May I know what I am doing wrong?

For Rx = 247 the code gives upper = 0 and lower = 247 (note that this is the reverse of what you reported), because ushort is a 16-bit value and 247 fits in 8 bits. That's why the upper 8 bits are zero (they are not needed to hold 247) and the lower bits hold the whole number, which is 247, or 00000000 11110111 in binary.
The first number which will give you non-zero upper bits is 256 (00000001 00000000), for which:
upper = 1
lower = 0
To get lower = 47 and upper = 2 as decimal values you need to reverse the process: write 2 as an 8-bit binary number (00000010) and put those 8 bits in the upper half of a 16-bit number, then put 47 (00101111) in the lower 8 bits. This gives 00000010 00101111 in binary, which is equal to 559 decimal.
Note, however, that 583 is 0x0247 in hex, so for Rx = 583 your code already produces upper = 0x02 and lower = 0x47 (71 decimal) - exactly the "02 47" you asked for, as hex rather than decimal. So if the values you are seeing don't suit your protocol, you've made a mistake somewhere else.
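For reference, here is a minimal round-trip sketch of the split for the 583 example (variable names are illustrative, not from the question):

ushort r = 583;                  // 583 decimal == 0x0247
byte upper = (byte)(r >> 8);     // 0x02
byte lower = (byte)(r & 0xFF);   // 0x47 == 71 decimal
ushort back = (ushort)((upper << 8) | lower); // recombines to 583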

How can I store 12-bit values in a ushort?

I've got a stream coming from a camera that is set to a 12-bit pixel format.
My question is: how can I store the pixel values in an array?
Before, I was taking pictures with a 16-bit pixel format and stored the values in a ushort array; now that I have changed to 12-bit, the same full-size image is displayed as four images next to one another on the screen.
When I have the camera set to an 8-bit pixel format I store the data in a byte array, but what should I use when having it at 12-bit?
Following on from my comment, we can process the incoming stream in 3-byte "chunks", each of which give 2 pixels.
// for a "chunk" of incoming array a[0], a[1], a[2]
ushort pixel1 = (ushort)((a[0] << 4) | (a[1] >> 4));   // a[0] is the top 8 bits, high nibble of a[1] is the bottom 4
ushort pixel2 = (ushort)(((a[1] & 0x0F) << 8) | a[2]); // low nibble of a[1] is the top 4 bits, a[2] is the bottom 8
(Assuming big-endian)
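Putting that together, a minimal sketch for unpacking a whole buffer might look like this (assuming the three-bytes-per-two-pixels, big-endian packing described above; the method name is illustrative):

ushort[] UnpackPixels12(byte[] packed)
{
    // every 3 input bytes hold two 12-bit pixels
    ushort[] pixels = new ushort[packed.Length / 3 * 2];
    for (int i = 0, p = 0; i + 2 < packed.Length; i += 3)
    {
        pixels[p++] = (ushort)((packed[i] << 4) | (packed[i + 1] >> 4));
        pixels[p++] = (ushort)(((packed[i + 1] & 0x0F) << 8) | packed[i + 2]);
    }
    return pixels;
}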
The smallest memory size you can allocate is one byte (8 bits). That means that if you need 12 bits of data to store one pixel in your frame array, you should use a ushort and leave the top 4 bits unused. That's why it's more efficient to design this kind of thing around powers of two (1, 2, 4, 8, 16, 32, 64, 128, etc.).

How do I interpret a WORD or Nibble as a number in C#?

I don't come from a low-level development background, so I'm not sure how to convert the below instruction to an integer...
Basically I have a microprocessor which tells me which I/Os are active or inactive. I send the device an ASCII command and it replies with a WORD describing which of the 15 I/Os are open/closed... here's the instruction:
Unit Answers "A0001/" for only DIn0 on, "A????/" for All Inputs Active.
Awxyz/ - w = High Nibble of the MSB, z = Low Nibble of the LSB, each sent as an ASCII character from 0 to ? (0001 = 1, 1111 = ?).
At the end of the day I just want to be able to convert it back into a number which will tell me which of the 15 (or 16?) inputs are active.
I have something hooked up to the 15th I/O port, and the reply I get is "A8000", if that helps?
Can someone clear this up for me please?
You can use the BitConverter class to convert an array of bytes to the integer format you need.
If you're getting 16 bits, convert them to a UInt16.
C# does not define the endianness. That is determined by the hardware you are running on. Intel and AMD processors are little-endian. You can learn the endian-ness of your platform using BitConverter.IsLittleEndian. If the computer running .NET and the hardware providing the data do not have the same endian-ness, you would have to swap the two bytes.
byte[] inputFromHardware = { 126, 42 };
ushort value = BitConverter.ToUInt16(inputFromHardware, 0); // start reading at offset 0
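If the device's byte order doesn't match the machine's, a minimal sketch of the swap might be (assuming the device sends the high byte first):

byte[] inputFromHardware = { 126, 42 };
if (BitConverter.IsLittleEndian)
{
    // device sent high byte first, so reverse the pair before converting
    Array.Reverse(inputFromHardware, 0, 2);
}
ushort value = BitConverter.ToUInt16(inputFromHardware, 0);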
which of the 15 (or 16?) inputs are active
If the bits are hardware flags, it is plausible that all 16 are used to mean something. When bits are used to represent a signed number, one of the bits is used to represent positive vs. negative. Since the hardware is providing status bits, none of the bits should be interpreted as a sign.
If you want to know if a specific bit is active, you can use a bit mask along with the & operator. For example, the binary mask
0000 0000 0000 0100
corresponds to the hex number
0x0004
To learn if the third bit from the right is toggled on, use
bool thirdBitFromRightIsOn = ((value & 0x0004) != 0);
UPDATE
If the manufacturer says the value 8000 (I assume hex) represents Channel 15 being active, look at it like this:
Your bit mask
1000 0000 0000 0000 (binary)
   8    0    0    0 (hex)
Based on that info from the manufacturer, the left-most bit corresponds to Channel 15.
You can use that mask like:
bool channel15IsOn = ((value & 0x8000) != 0);
A second choice (but for nibbles only) is to use the Math library like this:
string x = "A0001/";
int y = 0;
for (int i = 3; i >= 0; i--)
{
    if (x[4 - i] == '1')
        y += (int)Math.Pow(2, i);
}
By using the bracket indexer to treat the string as an array of characters, you can hard-code the conversion from bits to an integer.
It is a novice solution, but I think it's a proper one too.
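If you want the whole 16-bit word rather than one nibble, here is a sketch that decodes all four characters, assuming (as the instruction's examples suggest) that each character is '0' plus the nibble value, which is why 1111 comes through as '?' ('0' + 15 == '?' in ASCII):

string reply = "A8000/";
ushort status = 0;
for (int i = 1; i <= 4; i++)
{
    // each character encodes one nibble as '0' + value
    status = (ushort)((status << 4) | (reply[i] - '0'));
}
bool channel15IsOn = (status & 0x8000) != 0;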

BitArray returns bits the wrong way around?

This code:
BitArray bits = new BitArray(new byte[] { 7 });
foreach (bool bit in bits)
{
    Console.Write(bit ? 1 : 0);
}
Gives me the following output:
11100000
Shouldn't it be the other way around? Like this:
00000111
I am aware that there is little and big endian, although those terms only refer to the position of bytes. As far as I know, they don't affect bits.
The documentation for BitArray states:
"The first byte in the array represents bits 0 through 7, the second byte represents bits 8 through 15, and so on. The Least Significant Bit of each byte represents the lowest index value: bytes[0] & 1 represents bit 0, bytes[0] & 2 represents bit 1, bytes[0] & 4 represents bit 2, and so on."
When indexing bits, the convention is to start at the least significant end, which is the right side when written in binary notation. However, when enumerating the array, you start at index 0, so they are printed out left-to-right instead of right-to-left. That's why it looks backwards.
For example, the word 01011010 00101101 (90 45) would be indexed as:

 0  1  0  1  1  0  1  0  -  0  0  1  0  1  1  0  1
15 14 13 12 11 10  9  8     7  6  5  4  3  2  1  0
And you would pass it to the constructor as new byte[] { 45, 90 } since you pass it least-significant first. When printed out, it would display in index order as: 1011010001011010, which is the reverse of the original binary notation.
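You can check this enumeration order directly (a quick verification snippet, not from the original answer):

BitArray word = new BitArray(new byte[] { 45, 90 }); // BitArray lives in System.Collections
foreach (bool b in word)
    Console.Write(b ? 1 : 0); // prints 1011010001011010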
The documentation is not explicit about it, but the iterator iterates from the LSB to the MSB. That sounds reasonable to me (personally!), unless you are printing out the bits. I had a look at the BitArray.GetEnumerator method.
No, it's a bit array, not a numeric value represented as bits.
It's just like any regular array, with some methods added for bit operations. Just as with an array of int, you wouldn't expect it to be in reverse order; it simply stores the bits position by position.
Eg:
Numbers (in a byte) converted to a BitArray would come out like:
2 = 01000000
5 = 10100000
8 = 00010000
etc.
It just stores each bit by position; the positions are not weighted the way you would expect from a binary numeric value.
Here is a link describing the constructor you are using:
http://msdn.microsoft.com/en-us/library/b3d1dwck.aspx
The key point is:
"The number in the first values array element represents bits 0 through 31, the second number in the array represents bits 32 through 63, and so on. The Least Significant Bit of each integer represents the lowest index value: values[0] & 1 represents bit 0, values[0] & 2 represents bit 1, values[0] & 4 represents bit 2, and so on."
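If you want the bits in conventional MSB-first order, one simple sketch is to walk the indexes in reverse:

BitArray bits = new BitArray(new byte[] { 7 });
for (int i = bits.Count - 1; i >= 0; i--)
    Console.Write(bits[i] ? 1 : 0); // prints 00000111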

Working with depth data - Kinect

I just started learning about Kinect through some quick start videos and was trying out the code to work with depth data.
However, I am not able to understand how the distance is calculated using bit-shifting, or the various other formulas that are employed while working with this depth data.
http://channel9.msdn.com/Series/KinectSDKQuickstarts/Working-with-Depth-Data
Are these Kinect-specific particulars explained in the documentation? Any help would be appreciated.
Thanks
Pixel depth
When you don't have the Kinect set up to detect players, the depth data is simply an array of bytes, with two bytes representing a single depth measurement.
So, just like in a 16-bit color image, each sixteen bits represents a depth rather than a color.
If the array were for a hypothetical 2x2 pixel depth image, you might see: [0x12 0x34 0x56 0x78 0x91 0x23 0x45 0x67] which would represent the following four pixels:
AB
CD
A = (0x34 << 8) + 0x12
B = (0x78 << 8) + 0x56
C = (0x23 << 8) + 0x91
D = (0x67 << 8) + 0x45
The << 8 simply moves that byte into the upper 8 bits of the 16-bit number; it's the same as multiplying it by 256. The whole 16-bit numbers become 0x3412, 0x7856, 0x2391, 0x6745. You could instead write A = 0x34 * 256 + 0x12.
In simpler terms, it's like saying I have 329 items and 456 thousands of items: to get the total, I multiply the 456 by 1,000 and add it to the 329. The Kinect has broken the whole number up into two pieces, and you simply have to add them together. I could "shift" the 456 to the left by 3 zero digits, which is the same as multiplying by 1,000, giving 456000. So shifting and multiplying are the same thing for powers of 10, and in binary, powers of 2 work the same way: 8 bits covers 256 values, so shifting left by 8 is the same as multiplying by 256.
And that would be your four pixel depth image - each resulting 16 bit number represents the depth at that pixel.
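In C#, reconstructing a whole frame might look like this (a sketch assuming the two-bytes-per-pixel, low-byte-first layout above; depthFrame is an illustrative name for the raw byte array):

ushort[] depths = new ushort[depthFrame.Length / 2];
for (int i = 0; i < depths.Length; i++)
{
    // the second byte of each pair is the high byte, as in the examples above
    depths[i] = (ushort)((depthFrame[2 * i + 1] << 8) | depthFrame[2 * i]);
}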
Player depth
When you select to show player data it becomes a little more interesting. The bottom three bits of the whole 16 bit number tell you the player that number is part of.
To simplify things, ignore the complicated method they use to get the remaining 13 bits of depth data; just do the above and then strip off the lower three bits:
A = (0x34 << 8) + 0x12
B = (0x78 << 8) + 0x56
C = (0x23 << 8) + 0x91
D = (0x67 << 8) + 0x45
Ap = A % 8
Bp = B % 8
Cp = C % 8
Dp = D % 8
A = A / 8
B = B / 8
C = C / 8
D = D / 8
Now pixel A has player Ap and depth A. The % operator gets the remainder of the division: take A, divide it by 8, and the remainder is the player number, while the result of the division is the depth. So after A = A / 8, A contains just the depth.
If you don't need player support, at least at the beginning of your development, skip this and just use the first method. If you do need player support, though, this is one of many ways to get it. There are faster methods, but the compiler usually turns the above division and remainder (modulus) operations into more efficient bitwise logic operations so you don't need to worry about it, generally.
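For reference, the same split in C# using bitwise operators (a sketch; this is equivalent to the division and modulus above):

ushort raw = depths[i];   // one 16-bit value assembled as shown earlier
int player = raw & 0x07;  // same as raw % 8: the bottom three bits
int depth = raw >> 3;     // same as raw / 8: the remaining thirteen bits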

Why does an arithmetic shift halve a number only in SOME cases?

Hey, I'm teaching myself about bitwise operations, and I saw somewhere on the internet that an arithmetic shift (>>) by one halves a number. I wanted to test it:
44 >> 1 returns 22, ok
22 >> 1 returns 11, ok
11 >> 1 returns 5, and not 5.5, why?
Another Example:
255 >> 1 returns 127
127 >> 1 returns 63 and not 63.5, why?
Thanks.
The bit shift operator doesn't actually divide by 2. Instead, it moves the bits of the number to the right by the number of positions given on the right hand side. For example:
00101100 = 44
00010110 = 44 >> 1 = 22
Notice how the bits in the second line are the same as the line above, merely shifted one place to the right. Now look at the second example:
00001011 = 11
00000101 = 11 >> 1 = 5
This is exactly the same operation as before; however, the last bit is shifted off the right-hand edge and disappears, which is why the result is 5 rather than 5.5. Because of this behavior, the right-shift operator is generally equivalent to dividing by two and then throwing away any remainder or fractional portion.
11 in binary is 1011
11 >> 1
means you shift your binary representation to the right by one step.
1011 >> 1 = 101
Then you have 101 in binary which is 1*1 + 0*2 + 1*4 = 5.
If you had done 11 >> 2 you would have got 10 in binary as a result, i.e. 2 (1*2 + 0*1).
Shifting right by 1 transforms $\sum_{i=0}^{n} A_i 2^i$ into $\sum_{i=0}^{n-1} A_{i+1} 2^i$; that's why if your number is even (i.e. $A_0 = 0$) it is divided exactly by two, while for an odd number the $A_0$ term is simply discarded.
Integers have no concept of fractional numbers; the shift returns the truncated (int) value.
11 = 1011 in binary. Shift to the right and you have 101, which is 5 in decimal.
Bit shifting is the same as multiplication or division by 2^n, except that a right shift rounds towards negative infinity rather than towards zero the way integer division does (for non-negative values the two agree). In floating-point arithmetic, bit shifting is not permitted.
Internally, bit shifting, well, shifts bits; the rounding simply means that bits falling off the edge are removed (it does not calculate the precise value and then round it). The new bits that appear on the opposite edge are always zeroes for left shifts, and also for right shifts of positive values. For negative values, one bits are shifted in on the left-hand side, so that the value stays negative (see how two's complement works) and the arithmetic definition above still holds true.
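A quick C# check of that rounding difference (results noted in comments):

int a = -11;
Console.WriteLine(a >> 1); // -6: arithmetic shift rounds towards negative infinity
Console.WriteLine(a / 2);  // -5: integer division rounds towards zero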
In most statically-typed languages, the return type of the operation is e.g. "int". This precludes a fractional result, much like integer division.
(There are better answers about what's 'under the hood', but you don't need to understand those to grok the basics of the type system.)
