I am sure this is as sweet and plain as butter, but I am not able to get it, or even find an explanation.
It is related to colours in .NET. I have taken some sample code from the internet and am trying to understand it. It takes a uint as an argument and does something to return a, r, g and b byte values. The method goes as:
private Color UIntToColor(uint color)
{
byte a = (byte)(color >> 24);
byte r = (byte)(color >> 16);
byte g = (byte)(color >> 8);
byte b = (byte)(color >> 0);
return Color.FromArgb(a, r, g, b);
}
So what is >> here? For example:
color = 4278190335 // (blue color)
After processing
a = 255
r = 0
g = 0
b = 255
So can anyone help me to understand this?
It's the right-shift operator.
Basically, it shifts all the bits of the first operand to the right. The second operand specifies how far the bits are shifted. For example:
uint value = 240; // this can be represented as 11110000
uint shift2 = value >> 2; // shift2 now equals 00111100
uint shift4 = value >> 4; // shift4 now equals 00001111
A good article on the subject is here.
It's in the docs
Right here
So, if you convert your value of 4278190335 to hex (because it's easier to see what's going on), you get 0xFF0000FF.
So this line:
byte a = (byte)(color >> 24);
Will shift 0xFF0000FF 24 bits to the right to give you 0x000000FF. If you cast that to a byte, you will truncate off the most significant bits and end up with 0xFF or 255.
So you should be able to figure out what the other 3 lines do.
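To make the whole picture concrete, here's a minimal sketch (the tuple-returning helper name is mine, not from the original code) that unpacks all four channels the same way:

```csharp
// Sketch: unpack a packed ARGB uint into its four channels.
// Casting to byte keeps only the lowest 8 bits, which is why no
// explicit "& 0xFF" mask is needed after each shift.
static (byte A, byte R, byte G, byte B) UnpackArgb(uint color)
{
    byte a = (byte)(color >> 24); // 0xFF0000FF >> 24 = 0x000000FF -> 255
    byte r = (byte)(color >> 16); // 0xFF0000FF >> 16 = 0x0000FF00 -> low byte 0x00
    byte g = (byte)(color >> 8);  // 0xFF0000FF >> 8  = 0x00FF0000 -> low byte 0x00
    byte b = (byte)(color >> 0);  // 0xFF0000FF            -> low byte 0xFF = 255
    return (a, r, g, b);
}
```

Calling `UnpackArgb(4278190335u)` yields (255, 0, 0, 255), matching the a/r/g/b values in the question.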
>> is the shift-right operator.
Related
I want to subtract two hexadecimals in C#. How can I do that?
Something like this :
#7ffffff - #000123
Hexadecimal literals are prefixed with 0x as in 0x7fffff. So you could assign each value to an integer and subtract them as such:
int color1 = 0x7fffff;
int color2 = 0x000123;
int difference = color1 - color2;
This is admittedly a naïve approach which will not work in a lot of cases, although given your comment, I'm thinking it will be sufficient. Consider what would happen if you subtracted a color with a larger component from one with a smaller component, as in:
0xFF20FF - 0x003000 = 0xFEF0FF
I'm thinking that you would not want to borrow 1 from red, which effectively adds 0x100 to the green component. In a situation like that, I might want the result to be 0xFF00FF. In that case, you would want a method that subtracts the individual color elements with a floor of zero, as in:
int SubtractColors( int color1, int color2 )
{
int red = Math.Max( 0,( color1 >> 16 ) - ( color2 >> 16 ) );
int green = Math.Max( 0, ( ( color1 >> 8 ) & 0xFF ) - ( ( color2 >> 8 ) & 0xFF ) );
int blue = Math.Max( 0, ( color1 & 0xFF ) - ( color2 & 0xFF ) );
return ( red << 16 ) + ( green << 8 ) + blue;
}
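A quick sanity check of the clamped behaviour described above (the function body is the one from this answer, reproduced so the snippet is self-contained):

```csharp
using System;

// Clamped per-channel subtraction: each channel floors at zero
// instead of borrowing from the next channel up.
static int SubtractColors(int color1, int color2)
{
    int red = Math.Max(0, (color1 >> 16) - (color2 >> 16));
    int green = Math.Max(0, ((color1 >> 8) & 0xFF) - ((color2 >> 8) & 0xFF));
    int blue = Math.Max(0, (color1 & 0xFF) - (color2 & 0xFF));
    return (red << 16) + (green << 8) + blue;
}

// Green would need a borrow (0x20 - 0x30), so it clamps to zero:
int result = SubtractColors(0xFF20FF, 0x003000);
Console.WriteLine(result.ToString("X6")); // FF00FF
```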
We’re doing some “bit bashing” here, which a lot of less experienced programmers are not familiar with. If the code above doesn’t entirely make sense, you may want to learn about:
>> right shift operator
<< left shift operator
& bitwise AND operator
Console.WriteLine(7 << 4);
Console.WriteLine(7 >> (32 - 4));
For some reason the second expression returns 0 instead of 112, but they should both be equal; they should both return 112.
UPDATE:
It's known that (x << n) == (x >> (32 - n)).
Your ideas?
I don't really understand what you expect to see here:
7 << 4 is a left shift (like multiplication): 7 * 2^4 = 7 * 16 = 112.
On the other hand,
7 >> (32 - 4) is a right shift (like division): 7 / 2^28, which truncated to an integer is 0.
As for why Console.WriteLine picks the int overload: you are operating on int values, so the result the CLR expects is an int.
So the result is correct.
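Just to see the two expressions from the question side by side:

```csharp
using System;

int a = 7 << 4;        // 7 * 2^4  = 112
int b = 7 >> (32 - 4); // 7 / 2^28 = 0; the shifted-out bits are discarded

Console.WriteLine(a); // 112
Console.WriteLine(b); // 0
```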
(x << n) == (x >> (32 - n))
This is only true if a circular shift is being performed, which isn't the case in C#. In C#, the bits are simply lost once they are shifted right past the lowest bit position.
//Seven = 00000111
Console.WriteLine(7 >> 1); //00000011
Console.WriteLine(7 >> 2); //00000001
Console.WriteLine(7 >> 3); //00000000
Console.WriteLine(7 >> 4); //00000000
//.
//.
//.
Console.WriteLine(7 >> 28); //00000000
Explained in more detail here:
Is there a way to perform a circular bit shift in C#?
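If a rotation really is what's wanted, it can be built by combining the two shifts with OR. A sketch (on .NET Core 3.0 and later there is also System.Numerics.BitOperations.RotateLeft, which does the same thing):

```csharp
// Rotate left: bits shifted out the top re-enter at the bottom.
// Using uint makes the right shift logical (zero-fill), which is
// what a rotation needs.
static uint RotateLeft(uint value, int count)
{
    // C# masks 32-bit shift counts with & 31, so count in 0..31 is safe.
    return (value << count) | (value >> (32 - count));
}
```

With rotation, the wrapped-around low bits combine with the left shift, so RotateLeft(7u, 4) == 112 even though 7 >> 28 alone is 0.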
I'm reading some values from a single byte. I'm told in the user-manual that this one byte contains 3 different values. There's a table that looks like this:
I interpret that as meaning precision takes up 3 bits, scale takes up 2, and size takes up 3, for a total of 8 (1 byte).
What I'm not clear on is:
1 - Why is it labeled 7 through 0 instead of 0 through 7 (something to do with significance maybe?)
2 - How do I extract the individual values out of that one byte?
It is customary to number bits in a byte according to their significance: bit x represents 2^x. According to this numbering scheme, the least significant bit gets number zero, the next bit is number one, and so on.
Getting individual bits requires a shift and a masking operation:
var size = (v >> 0) & 7;
var scale = (v >> 3) & 3;
var precision = (v >> 5) & 7;
Shift by the number of bits to the right of the field that you need to get (a shift by zero is a no-op; I added it for illustration purposes).
Mask with the highest number that fits in the number of bits that you would like to get: 1 for one bit, 3 for two bits, 7 for three bits, 2^x-1 for x bits.
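A worked round trip under the layout described in the question (precision in bits 7–5, scale in bits 4–3, size in bits 2–0; the sample values 4/2/5 are mine):

```csharp
// Pack: precision = 4 (binary 100), scale = 2 (10), size = 5 (101)
byte v = (byte)((4 << 5) | (2 << 3) | 5); // 0b100_10_101 = 0x95

// Unpack with the shift-and-mask pattern from the answer:
var size = (v >> 0) & 7;      // 5
var scale = (v >> 3) & 3;     // 2
var precision = (v >> 5) & 7; // 4
```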
You can do shifts and masks, or you can use the BitArray class (http://msdn.microsoft.com/en-us/library/system.collections.bitarray.aspx) or the related BitVector32 struct.
Example with BitVector32:
BitVector32 bv = new BitVector32(0);
var size = BitVector32.CreateSection(7);
var scale = BitVector32.CreateSection(3, size);
var precision = BitVector32.CreateSection(7, scale);
bv[size] = 5;
bv[scale] = 2;
bv[precision] = 4;
Potayto, potahto.
You'd use shifts and masks to strip out the undesired bits, like so:
byte b = something; // b is our byte
int size = b & 0x7;
int scale = (b >> 3) & 0x3;
int precision = (b >> 5) & 0x7;
1. Yes, the most significant bit is usually written first. The left-most bit is labeled 7 because, when the byte is interpreted as an integer, that bit has value 2^7 (= 128) when it is set.
This is completely natural and is in fact exactly the same as how you write decimal numbers (most significant digit first). For example, the number 356 is (3 × 10^2) + (5 × 10^1) + (6 × 10^0).
2. For completion, as mentioned in other answers you can extract the individual values using the bit shift and bitwise-and operators as follows:
int size = x & 7;
int scale = (x >> 3) & 3;
int precision = (x >> 5) & 7;
Important note: this assumes that the individual values are to be interpreted as positive integers. If the values could be negative then this won't work correctly. Given the names of your variables, this is unlikely to be a problem here.
You can do this via bitwise arithmetic:
uint precision = (thatByte & 0xe0) >> 5,
scale = (thatByte & 0x18) >> 3,
size = thatByte & 7;
Does C# have something analogous to C++'s CHAR_BIT?
Update:
Basically, I'm trying to compute abs without branching, here is the C++ version:
// Compute the integer absolute value (abs) without branching
int v; // we want to find the absolute value of v
unsigned int r; // the result goes here
int const mask = v >> sizeof(int) * CHAR_BIT - 1;
r = (v ^ mask) - mask;
Here is my C# version:
private int Abs(int value)
{
int mask = value >> sizeof(int) * 8 - 1;
return ((value ^ mask) - mask);
}
Strangely, this also works:
private int Abs(int value)
{
int mask = value >> sizeof(int) * sizeof(byte) - 1;
return ((value ^ mask) - mask);
}
If you consider byte in C# to be the equivalent of C++'s char then the closest equivalent of CHAR_BIT is 8. In C# a byte is guaranteed to be exactly 8 bits.
The equivalent is the literal 8, because a byte is always 8 bits long.
You don't need it, because size of primitive C# types is fixed. It's guaranteed that int is a 32-bit integer, long is 64-bit integer and so on.
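Putting the constant 8 in place of CHAR_BIT, the C# version boils down to a shift by 31. A sketch:

```csharp
// Branchless abs: for a 32-bit int, value >> 31 is an arithmetic shift,
// so mask is 0 for non-negative values and -1 (all ones) for negative ones.
// (value ^ 0) - 0 == value; (value ^ -1) - (-1) == ~value + 1 == -value.
static int AbsNoBranch(int value)
{
    int mask = value >> 31; // sizeof(int) * 8 - 1
    return (value ^ mask) - mask;
}
```

Note that, like the C++ trick, this overflows for int.MinValue: in an unchecked context it returns int.MinValue rather than throwing the way Math.Abs does.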
I'm sure it's quite easy, but Google didn't help...
Here is the task: I have two byte arrays (as ARGB) representing my images. They are the same size.
What operation should I perform (byte by byte) to overlay one image onto the other?
The second image has some transparency, which must be taken into account.
To be clear, I'm looking for code like this:
byte[] result = new byte[first.Length];
for (i = 0; i < first.Length; i++)
{
result[i] = first[i] !!%SOMETHING%!! second[i];
}
Simple guesses like bitwise OR (I know, that's stupid ;) ) don't work.
Thanks for your answers.
Edit: I can't use the standard library because of security issues (all these strange manipulations occur in Silverlight).
Assuming that you are in fact working with bitmaps, you'll likely find it easier to just let the library do this for you.
The System.Drawing.Graphics class has a CompositingMode property that can be set to either SourceCopy (the default - overwrites the background colour) or SourceOver (blends with the background color).
See MSDN: How to Use Compositing Mode to Control Alpha Blending for more detail.
If you just want the raw math, alpha blending is pretty simple. For an alpha value a between 0.0 and 1.0, the result should be:
(aOld * oldValue) + ((1 - aOld) * aNew * newValue)
Where oldValue is the previous value before overlay, newValue is what you want to overlay with, and aOld and aNew are the old and new alpha values respectively. You obviously need to do this calculation for the R, G, and B values separately.
See also: Alpha Compositing (wiki link) for a more thorough explanation.
Update: I think it should be easy to figure out how to adapt this to the code in the OP, but I guess not everybody's a math person.
I'm going to assume that the byte[] is a repeating sequence of A, R, G, B values (so Length would be a multiple of 4). If that's not the case, then you'll have to adapt this code to whatever storage format you're using.
byte[] result = new byte[first.Length];
for(i = 0; i < first.Length; i += 4)
{
byte a1 = first[i];
byte a2 = second[i];
byte r1 = first[i+1];
byte r2 = second[i+1];
byte g1 = first[i+2];
byte g2 = second[i+2];
byte b1 = first[i+3];
byte b2 = second[i+3];
// the int arithmetic must be cast back down to byte
byte a = (byte)(a1 + (255 - a1) * a2 / 255);
byte r = (byte)(r1 * a1 / 255 + r2 * (255 - a1) * a2 / 65025);
byte g = (byte)(g1 * a1 / 255 + g2 * (255 - a1) * a2 / 65025);
byte b = (byte)(b1 * a1 / 255 + b2 * (255 - a1) * a2 / 65025);
result[i] = a;
result[i+1] = r;
result[i+2] = g;
result[i+3] = b;
}
I think you have the right idea. The operation you use depends on what you want for the output. Here are some operations that are useful:
average - a common way to combine
minimum
maximum
bitwise replace
xor
or
add
subtract
multiply image 1's value by (image 2's value scaled 0 to 1). This puts more of image 1 in the bright places of image 2 and less in the dark places.
Try them out and see which you like best, or better yet, let the user select.
You can probably add or OR the transparency bytes, and use one of the other operations for each of the three colors.