I just started learning about Kinect through some quick start videos and was trying out the code to work with depth data.
However, I am not able to understand how the distance is being calculated using bit shifting, or the various other formulas that are used while working with this depth data.
http://channel9.msdn.com/Series/KinectSDKQuickstarts/Working-with-Depth-Data
Are these Kinect-specific details explained in the documentation somewhere? Any help would be appreciated.
Thanks
Pixel depth
When you don't have the Kinect set up to detect players, the depth data is simply an array of bytes, with two bytes representing a single depth measurement.
So, just like in a 16 bit color image, each sixteen bits represent a depth rather than a color.
If the array were for a hypothetical 2x2 pixel depth image, you might see: [0x12 0x34 0x56 0x78 0x91 0x23 0x45 0x67] which would represent the following four pixels:
AB
CD
A = (0x34 << 8) + 0x12
B = (0x78 << 8) + 0x56
C = (0x23 << 8) + 0x91
D = (0x67 << 8) + 0x45
The << 8 simply moves that byte into the upper 8 bits of a 16 bit number; it's the same as multiplying it by 256. The whole 16 bit numbers become 0x3412, 0x7856, 0x2391, 0x6745. You could instead write A = 0x34 * 256 + 0x12.
In simpler terms, it's like saying I have 329 individual items and 456 thousands of items. To get the total, I multiply the 456 by 1,000 and add the 329. The Kinect has broken each whole number up into two pieces like that, and you simply have to put them back together. I could also "shift" the 456 to the left by 3 zero digits, making it 456000, which is the same as multiplying by 1,000. So shifting and multiplying by a power of 10 are the same thing. In binary, powers of 2 play the same role: shifting left by 8 bits is the same as multiplying by 2^8 = 256.
And that would be your four pixel depth image - each resulting 16 bit number represents the depth at that pixel.
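For example, here is a minimal sketch of that conversion using the hypothetical 2x2 frame from above (the array and variable names are placeholders, not the SDK's):

// Sketch: convert a raw depth byte array into 16-bit depth values.
byte[] rawDepth = { 0x12, 0x34, 0x56, 0x78, 0x91, 0x23, 0x45, 0x67 };
ushort[] depths = new ushort[rawDepth.Length / 2];

for (int i = 0; i < depths.Length; i++)
{
    // the low byte comes first, the high byte second
    depths[i] = (ushort)((rawDepth[2 * i + 1] << 8) + rawDepth[2 * i]);
}
// depths now holds { 0x3412, 0x7856, 0x2391, 0x6745 }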
Player depth
When you select to show player data, it becomes a little more interesting. The bottom three bits of the whole 16 bit number tell you which player that pixel belongs to.
To simplify things, ignore the complicated method they use to get the remaining 13 bits of depth data, and just do the above, and steal the lower three bits:
A = (0x34 << 8) + 0x12
B = (0x78 << 8) + 0x56
C = (0x23 << 8) + 0x91
D = (0x67 << 8) + 0x45
Ap = A % 8
Bp = B % 8
Cp = C % 8
Dp = D % 8
A = A / 8
B = B / 8
C = C / 8
D = D / 8
Now pixel A has player Ap and depth A. The % operator gives the remainder of the division: take A, divide it by 8, and the remainder is the player number. The result of the division itself is the depth, so after A = A / 8 the player bits are gone and A contains only the depth.
If you don't need player support, at least at the beginning of your development, skip this and just use the first method. If you do need player support, though, this is one of many ways to get it. There are faster methods, but the compiler usually turns the above division and remainder (modulus) operations into more efficient bitwise logic operations so you don't need to worry about it, generally.
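For reference, the same split can be written with bitwise operators instead of / and % (a sketch; the value is just the example pixel A from above):

// Sketch: split a raw 16-bit value into player index (low 3 bits) and depth.
ushort raw = 0x3412;       // example pixel A
int player = raw & 0x07;   // same as raw % 8
int depth  = raw >> 3;     // same as raw / 8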
Related
We are working on modding an old 16-bit era video game cartridge. We are hoping to inject some of our own Sprites into the game to dip our toes in the water.
To do so, we are developing an app to both display the Sprites and convert new ones to hex (to make them easier to inject).
The game stores individual pixels as 2-byte hexadecimal values (0x0000~0xFFFF). The game uses bitwise shifts to establish the individual Red, Green, and Blue colors. We had to fall back on some old documentation from the Sprite Resources community to confirm this, and it confirmed the use of two masks.
We have the display function working perfectly. The function receives the hex value and returns an array with the 3 values: R, G, B.
In the group, we do not have anyone particularly good with bitwise shifts. We are looking for help turning the 3 "int" colors back into the original single 2-byte hex value.
ANSWERED!! THANKS
First, are you sure you want to use ~ in your calculation here:
colorRGB0R = ~(((HexPixelValue >> 11) & PixelMask1) << 3);
colorRGB0G = ~(((HexPixelValue >> 5) & PixelMask2) << 2);
colorRGB0B = ~((HexPixelValue & PixelMask1) << 3);
The math appears fine except for that. Maybe the commented part does something with the values, but I'm not quite sure why you're inverting them. In any case...
Basically, you are working with a 565 16 bit color then. Rather than that bitmask, it is a lot easier to understand if you write the bit layout of the 16 bit value like this: rrrrrggg gggbbbbb as it visualizes which bits you want to set with what values.
Meaning that red and blue are 5 bit values (0-31) and green is a 6 bit value (0-63). However, since color values are meant to be in the range 0-255, after extracting the bits you have to scale them up to that range. The scaling here is done by bit shifting as well (left by 3 for red and blue, left by 2 for green).
To reconstruct the 16 bit value, you can do something like this:
int ToHex(int red, int green, int blue)
{
    if (red < 0 || red >= 256)
        throw new ArgumentOutOfRangeException(nameof(red));
    if (green < 0 || green >= 256)
        throw new ArgumentOutOfRangeException(nameof(green));
    if (blue < 0 || blue >= 256)
        throw new ArgumentOutOfRangeException(nameof(blue));

    // red & 0xF8 cuts off the bottom 3 bits to be safe
    // green & 0xFC cuts off the bottom 2 bits
    // blue needs to be shifted to the right anyway, so that cuts off its bottom 3 bits
    return ((red & 0xF8) << 8) |
           ((green & 0xFC) << 3) |
           (blue >> 3);
}
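For completeness, the matching decode (16-bit value back to R, G, B) without the inversion might look something like this, assuming the rrrrrggg gggbbbbb layout above:

int[] FromHex(int hexPixelValue)
{
    int red   = ((hexPixelValue >> 11) & 0x1F) << 3; // top 5 bits, scaled back up
    int green = ((hexPixelValue >> 5)  & 0x3F) << 2; // middle 6 bits, scaled back up
    int blue  = (hexPixelValue         & 0x1F) << 3; // bottom 5 bits, scaled back up
    return new[] { red, green, blue };
}

Running a value through FromHex and then ToHex gives the original 16-bit value back, since only the bits the 565 format actually stores are involved.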
I've got a stream coming from a camera that is set to a 12 bit pixel format.
My question is: how can I store the pixel values in an array?
Before, I was taking pictures with a 16 bit pixel format and stored the values in a ushort array. Now that I have changed to 12 bit, I get the same full size image displayed as four images next to one another on the screen.
When I have the camera set to 8 bit pixel format I store the data in a byte array, but what should I use when having it at 12 bit?
Following on from my comment, we can process the incoming stream in 3-byte "chunks", each of which gives 2 pixels.
// for a "chunk" of incoming array a[0], a[1], a[2]
ushort pixel1 = ((ushort)a[0] << 4) | ((a[1] >> 4) & 0xFF);
ushort pixel2 = ((ushort)(a[1] & 0xFF) << 4) | a[2];
(Assuming big-endian)
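Putting that together, a sketch of unpacking a whole frame into a ushort array (the method name is made up; it assumes the buffer length is a multiple of 3 and the big-endian packing above):

// Sketch: unpack a 12-bit packed buffer (3 bytes per 2 pixels) into 16-bit pixels.
ushort[] Unpack12Bit(byte[] buffer)
{
    ushort[] pixels = new ushort[buffer.Length / 3 * 2];
    for (int i = 0, p = 0; i + 2 < buffer.Length; i += 3)
    {
        pixels[p++] = (ushort)((buffer[i] << 4) | (buffer[i + 1] >> 4));
        pixels[p++] = (ushort)(((buffer[i + 1] & 0x0F) << 8) | buffer[i + 2]);
    }
    return pixels;
}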
The smallest unit of memory you can allocate is one byte (8 bits). That means that if you need 12 bits of data to store one pixel in your frame array, you should use ushort and just leave the top 4 bits unused. That's why it's more convenient to design this kind of thing around powers of two
(1, 2, 4, 8, 16, 32, 64, 128, etc.)
I've read this post, and in part 2) Use Layers of Leosori's answer he uses a bit shift to get the bit mask. I would like an explanation of how bit shifting works (I didn't find my answer in the manual either).
In the example it is shown how to cast only on layer 8:
int layerMask = 1 << 8;
// This would cast rays only against colliders in layer 8.
So, how can I use bit shift to get the bit mask of layers 9 and 10 at the same time?
In my project I have some ray casts on my player to be able to know if he sees some specific objects (layer 10). If the objects are behind a wall (layer 9) the player shouldn't be able to see them. I would like to raycast on both layers and test whether hit.collider.gameObject.tag is "seekObjects". I know there are other solutions to do this, but I would like to understand how bit shifting works.
Manipulating individual bits is mainly done using the &, |, ~ and <</>> operators.
Example (with bytes):
// single value
byte a = 1;              // 00000001
// shift "a" 3 bits left
byte b = (byte)(a << 3); // 00001000
// combine a and b with bitwise or (|)
byte c = (byte)(a | b);  // 00001001
So in your case, to get bit 9 and bit 10 set, do:
int layerMask = ( 1 << 9 ) | ( 1 << 10 );
Notice that we're using | and not ||, which is logical or.
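Applied to your raycast, the combined mask would be used something like this (a sketch only; the script name, the 100f distance and the tag spelling are assumptions based on your description):

using UnityEngine;

public class SeekObjectCheck : MonoBehaviour
{
    void Update()
    {
        // Raycast only against layers 9 (walls) and 10 (seek objects).
        int layerMask = (1 << 9) | (1 << 10);

        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, 100f, layerMask))
        {
            // If a wall is hit first, the tag check fails and the object counts as hidden.
            if (hit.collider.gameObject.CompareTag("seekObjects"))
            {
                Debug.Log("Player can see the object");
            }
        }
    }
}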
I have a device, and some values that the user sets in the GUI, which will be something like 630, 330, etc. I need to pass these values over I2C as bytes. For example, 583 will be 02 47 in hex, i.e. two byte variables, and I need to call Set(byte lower, byte upper). So the requirement is to convert an int or double value into 2 bytes.
I tried :
ushort R1x = (ushort)Rx;
byte upper = (byte)(R1x >> 8);
byte lower = (byte)(R1x & 0xff);
What I needed is lower = 47 and upper = 02.
Instead it is giving lower = 0 and upper = 247. May I know what I am doing wrong?
Those values suggest that Rx was 247 at that point (with the names swapped in your report: for Rx = 247 the code produces upper = 0 and lower = 247). ushort is a 16 bit value, and 247 fits in 8 bits, so the upper 8 bits are zero (not needed to hold 247) and the lower 8 bits hold the whole number, which is 247 or 00000000 11110111 in binary.
The first number which will give you non-zero upper bits is 256 (00000001 00000000), for which:
upper = 1
lower = 0
To get the result you want, work backwards: upper = 0x02 and lower = 0x47 together make the 16 bit value 00000010 01000111 in binary, which is 0x0247, i.e. 583 decimal. So the conversion code you posted is correct: it does produce upper = 02 and lower = 47 when Rx is 583.
Since you're not seeing that, Rx can't actually be 583 when this code runs, and the mistake is somewhere earlier, before the conversion.
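To see this for yourself, a quick sketch of the split for both inputs:

// Sketch: splitting a ushort into upper and lower bytes.
ushort a = 583;                  // 0x0247
byte upperA = (byte)(a >> 8);    // 0x02
byte lowerA = (byte)(a & 0xFF);  // 0x47

ushort b = 247;                  // 0x00F7
byte upperB = (byte)(b >> 8);    // 0
byte lowerB = (byte)(b & 0xFF);  // 247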
Hey, I'm self-learning about bitwise operations, and I saw somewhere on the internet that an arithmetic shift (>>) by one halves a number. I wanted to test it:
44 >> 1 returns 22, ok
22 >> 1 returns 11, ok
11 >> 1 returns 5, and not 5.5, why?
Another Example:
255 >> 1 returns 127
127 >> 1 returns 63 and not 63.5, why?
Thanks.
The bit shift operator doesn't actually divide by 2. Instead, it moves the bits of the number to the right by the number of positions given on the right hand side. For example:
00101100 = 44
00010110 = 44 >> 1 = 22
Notice how the bits in the second line are the same as the line above, merely
shifted one place to the right. Now look at the second example:
00001011 = 11
00000101 = 11 >> 1 = 5
This is exactly the same operation as before; the result is 5 because the last bit is shifted off the right end and disappears. Because of this behavior, the right-shift operator is generally equivalent to dividing by two and then throwing away any remainder or fractional part.
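For example (in C#, but the same holds in most languages):

int a = 11 >> 1;     // 5: the low bit (the remainder) is simply discarded
int b = 11 / 2;      // 5: integer division throws away the remainder too
double c = 11 / 2.0; // 5.5: only floating-point division keeps the fraction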
11 in binary is 1011
11 >> 1
means you shift your binary representation to the right by one step.
1011 >> 1 = 101
Then you have 101 in binary which is 1*1 + 0*2 + 1*4 = 5.
If you had done 11 >> 2 you would have got 10 in binary as the result, i.e. 2 (1*2 + 0*1).
Shifting right by 1 transforms $\sum_{i=0}^{n} A_i 2^i$ into $\sum_{i=0}^{n-1} A_{i+1} 2^i$; that's why, if your number is even (i.e. $A_0 = 0$), it is exactly divided by two.
Binary integers have no concept of fractional values. It's returning the truncated (int) value.
11 = 1011 in binary. Shift to the right and you have 101, which is 5 in decimal.
Bit shifting is the same as multiplication or division by 2^n, except that the result is rounded down (towards negative infinity) to an integer; for non-negative values that is just dropping the fractional part. Bit shifting is not permitted on floating-point types.
Internally, bit shifting, well, shifts bits, and the rounding simply means that bits falling off the edge get removed (it does not calculate the precise value and then round it). The new bits appearing on the opposite edge are always zeroes for left shifts, and also for right shifts of positive values. For right shifts of negative values, one bits are appended on the left hand side, so that the value stays negative (see how two's complement works) and the arithmetic definition above still holds.
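For example, in C# (where >> on a negative int is an arithmetic shift):

int a = -5 >> 1; // -3: shifting rounds towards negative infinity
int b = -5 / 2;  // -2: integer division rounds towards zero instead
int c =  5 >> 1; //  2: for positive values both behave the same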
In most statically-typed languages, the return type of the operation is e.g. "int". This precludes a fractional result, much like integer division.
(There are better answers about what's 'under the hood', but you don't need to understand those to grok the basics of the type system.)