I was looking at the source code of a project, and I noticed the following statement (both keyByte and codedByte are of type byte):
return (byte)(keyByte - codedByte);
I'm now trying to understand what the result would be in cases where keyByte is smaller than codedByte, so the subtraction yields a negative integer.
After some experiments casting negative integers in the range [-255, -1] to byte, I got the following results:
byte result = (byte) (-6); // result = 250
byte result = (byte) (-50); // result = 206
byte result = (byte) (-17); // result = 239
byte result = (byte) (-20); // result = 236
So, provided that -256 < a < 0, I was able to determine the result by:
result = 256 + a;
My question is: should I always expect this to be the case?
Yes, that will always be the case (i.e. it is not simply dependent on your environment or compiler, but is defined as part of the C# language spec). See http://msdn.microsoft.com/en-us/library/aa691349(v=vs.71).aspx:
In an unchecked context, the result is truncated by discarding any high-order bits that do not fit in the destination type.
The next question is, if you take away the high-order bits of a negative int between -256 and -1, and read it as a byte, what do you get? This is what you've already discovered through experimentation: it is 256 + x.
Note that endianness does not matter because we're discarding the high-order (or most significant) bits, not the "first" 24 bits. So regardless of which end we took it from, we're left with the least significant byte that made up that int.
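To see the truncation concretely, here is a quick illustration printing the bit patterns in hex:

int x = -6;                                            // two's complement: 0xFFFFFFFA
byte b = (byte)x;                                      // keep only the least significant byte: 0xFA
Console.WriteLine("{0:X8} -> {1:X2} ({2})", x, b, b);  // prints: FFFFFFFA -> FA (250)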
Yes. Remember, there's no such thing as "-" in the domain of a .Net "Byte":
http://msdn.microsoft.com/en-us/library/e2ayt412.aspx
Because Byte is an unsigned type, it cannot represent a negative number. If you use the unary minus (-) operator on an expression that evaluates to type Byte, Visual Basic converts the expression to Short first. (Note: substitute any CLR/.Net language for "Visual Basic".)
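The same promotion happens in C#: unary minus on a byte operand widens it to int first. A quick illustration:

byte b = 5;
var negated = -b;                          // the byte operand is promoted to int
Console.WriteLine(negated.GetType().Name); // Int32
Console.WriteLine(negated);                // -5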
ADDENDUM:
Here's a sample app:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace TestByte
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = -255; i < 256; i++)
            {
                byte b = (byte)i;
                System.Console.WriteLine("i={0}, b={1}", i, b);
            }
        }
    }
}
And here's the resulting output:
testbyte|more
i=-255, b=1
i=-254, b=2
i=-253, b=3
i=-252, b=4
i=-251, b=5
...
i=-2, b=254
i=-1, b=255
i=0, b=0
i=1, b=1
...
i=254, b=254
i=255, b=255
Here is an algorithm that performs the same logic as casting to byte, to help you understand it:
For positives:
byte bNum = (byte)(iNum % 256);
For negatives:
byte bNum = (byte)(256 + (iNum % 256));
It's like searching for the k that brings x + 256k into the range 0 ... 255. There can be only one such k, and the result it produces is the same as the result of casting to byte.
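As a quick sanity check, here is a small loop (with the remainder normalized so a single expression covers both signs) verifying that the formula agrees with the cast:

for (int i = -1000; i <= 1000; i++)
{
    byte viaCast = unchecked((byte)i);
    byte viaMod = (byte)(((i % 256) + 256) % 256); // always lands in 0..255
    if (viaCast != viaMod)
        Console.WriteLine("Mismatch at {0}", i);   // never prints
}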
Another way of looking at it is as if it "cycles around the byte value range":
Let's take iNum = -712 and define bNum = 0.
We shall do iNum++; bNum--; until iNum == 0:
iNum = -712;
bNum = 0;
iNum++; // -711
bNum--; // 255 (cycles to the maximum value)
iNum++; // -710
bNum--; // 254
... // And so on, as if the iNum value is being *consumed* within the byte value range cycle.
This is, of course, just an illustration to see how logically it works.
This is what happens in an unchecked context. You could say that the runtime (or the compiler, if the Int32 that you cast to Byte is known at compile time) adds or subtracts 256 as many times as is needed until it finds a representable value.
In a checked context, an exception (or compile-time error) results. See http://msdn.microsoft.com/en-us/library/khy08726.aspx
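A minimal demonstration of the two contexts:

int i = -1;
byte ok = unchecked((byte)i);     // 255: high-order bits discarded
try
{
    byte boom = checked((byte)i); // throws: -1 is not representable in a byte
}
catch (OverflowException ex)
{
    Console.WriteLine(ex.Message);
}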
Yes - unless you get an exception.
.NET defines all arithmetic operations only on 4-byte and larger data types. So the only non-obvious point is how converting an int to a byte works.
For a conversion from one integral type to another, the result of the conversion depends on the overflow-checking context (says the ECMA 334 standard, Section 13.2.1).
So, in the following context
checked
{
    return (byte)(keyByte - codedByte);
}
you will see a System.OverflowException. Whereas in the following context:
unchecked
{
    return (byte)(keyByte - codedByte);
}
you are guaranteed to always see the results that you expect: adding or subtracting any multiple of 256 from the difference does not change the resulting byte; for example, (byte)(2 - 255) yields 3.
This is true regardless of how the hardware represents signed values. The CLR standard (ECMA 335) specifies, in Section 12.1, that the Int32 type is a "32-bit two's-complement signed value". (Well, that also matches all platforms on which .NET or mono is currently available anyway, so one could almost guess that it would work anyway, but it is good to know that the practice is supported by the language standard and portable.)
Some teams do not want to specify overflow checking contexts explicitly, because they have a policy of checking for overflows early in the development cycle, but not in released code. In these cases you can safely do byte arithmetic like this (masking keeps the intermediate value in 0..255, so the cast cannot overflow even in a checked context; a plain % 256 would leave negative differences negative and still throw):
return (byte)((keyByte - codedByte) & 0xFF);
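A minimal sketch of that in use (Decode is a hypothetical wrapper name, not from the original project):

static byte Decode(byte keyByte, byte codedByte)
{
    checked
    {
        // The mask keeps the intermediate value in 0..255, so the cast cannot throw.
        return (byte)((keyByte - codedByte) & 0xFF);
    }
}

Decode(2, 255) returns 3, matching the unchecked cast of (2 - 255).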
Related
I am trying to reverse engineer a serial port device that uses HDLC for its packet format. Based on the documentation, the packet should contain a bitwise inversion of the command (first 4 bytes), which in this case is "HELO". Monitoring the serial port while using the original program shows what the bitwise inversion should be:
HELO -> b7 ba b3 b0
READ -> ad ba be bb
The problem is, I am not getting values even remotely close.
public object checksum
{
    get
    {
        var cmdDec = (int)Char.GetNumericValue((char)this.cmd);
        return (cmdDec ^ 0xffffffff);
    }
}
You have to work with bytes, not with chars:
string source = "HELO";
// Encoding.ASCII: I assume that the command line has ASCII encoded commands only
byte[] result = Encoding.ASCII
.GetBytes(source)
.Select(b => unchecked((byte)~b)) // unchecked: ~b returns int; can exceed byte.MaxValue
.ToArray();
Test (let's represent the result as hexadecimal):
// b7 ba b3 b0
Console.Write(string.Join(" ", result.Select(b => b.ToString("x2"))));
Char is not a byte. You should use bytes instead of chars.
So this.cmd is an array of bytes? You could use BitConverter.ToUInt32():
PSEUDO: (you might fix some casting)
public uint checksum
{
    get
    {
        var cmdDec = BitConverter.ToUInt32(this.cmd, 0);
        return (cmdDec ^ 0xffffffff);
    }
}
If this.cmd is a string, you can get a byte array from it with Encoding.UTF8.GetBytes(string).
Your bitwise inversion isn't doing what you think it's doing. Take the following, for example:
int i = 5;
var j = i ^ 0xFFFFFFFF;
var k = ~i;
The first expression performs the inversion the way you are doing it, by XOR-ing the number with a maximum value. The second uses the C# bitwise-NOT ~ operator.
After running this code, j will be a long value equal to 4294967290, while k will be an int value equal to -6. Their lower 32 bits have the same binary representation, but j carries another 32 zero bits on top. There's also the obvious problem of them being completely different numbers, so any math performed on the values will differ depending on which one you use.
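To tie this back to the original question, inverting within a single byte reproduces the captured values; a quick check for 'H':

byte h = (byte)'H';                          // 0x48
byte inverted = unchecked((byte)~h);         // 0xB7, the b7 seen on the wire
Console.WriteLine(inverted.ToString("x2"));  // prints: b7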
Is there any fast way to check if two doubles have the same sign? Assume the two doubles cannot be 0.
Potential solutions:
a*b > 0: One floating-point multiply and one comparison.
(a>0) == (b>0): Three comparisons.
Math.Sign(a) == Math.Sign(b): Two function calls and one comparison.
Speed comparison:
It's about what you'd expect (see experimental setup at the bottom):
a*b > 0: 0.42 ± 0.02s
(a>0) == (b>0): 0.49 ± 0.01s
Math.Sign(a) == Math.Sign(b): 1.11 ± 0.9s
Important notes:
As noted by greybeard in the comments, method 1 is susceptible to problems if the values multiply to something smaller than Double.Epsilon. Unless you can guarantee that the product is always larger than this, you should probably go with method 2.
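A tiny example of that failure mode, with values chosen so the product underflows to zero:

double a = 1e-200, b = 1e-200;          // same sign
Console.WriteLine(a * b > 0);           // False: 1e-400 underflows to 0.0
Console.WriteLine((a > 0) == (b > 0));  // True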
Experimental setup:
The following code was run 16 times on http://rextester.com/.
public static void Main(string[] args)
{
    double a = 1e-273;
    double b = a;
    bool equiv = false;

    for (int i = 0; i < 100000000; ++i)
    {
        equiv = THE_COMPARISON; // replaced with each expression under test
        b += a;
    }

    Console.WriteLine(equiv);
}
The simplest and fastest way for IEEE 754 I know of is just XOR-ing the sign bits (the MSBs) of both numbers. Here is a small C# example (note the inlining attribute to avoid function-call overhead):
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private unsafe static bool fpu_cmpsign(double a, double b)
{
    byte* aa = (byte*)(&a); // view the bytes of a as 8-bit integral values
    byte* bb = (byte*)(&b); // view the bytes of b as 8-bit integral values
    return ((aa[7] ^ bb[7]) & 128) != 128;
}
Here are the results for the +/- sign combinations:

a  b  result
-  -  1
-  +  0
+  -  0
+  +  1
The idea is simple: the sign is stored in the highest bit (MSB), and XOR returns 1 for unequal bits, so XOR the MSBs of both numbers together and negate the output. The [7] just accesses the highest byte of the double as an 8-bit integral type, so the CPU's ALU can be used instead of the FPU. If your platform stores bytes in the opposite order (MSByte first rather than LSByte first), use [0] instead.
So all that is needed is one 8-bit XOR to compare the sign bits, one 8-bit AND to extract the sign-bit difference, and a comparison to negate the result.
You can use unions instead of pointers, and use the native bit width of your platform, to get the best performance.
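If unsafe code is not an option, the same sign-bit XOR can be written in managed code through BitConverter (a sketch; not benchmarked here):

// XOR the raw bit patterns; the result is non-negative exactly
// when the two sign bits are equal.
static bool SameSign(double a, double b) =>
    (BitConverter.DoubleToInt64Bits(a) ^ BitConverter.DoubleToInt64Bits(b)) >= 0;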
You could use:
if (Math.CopySign(x, y) == x)
(Math.CopySign is available from .NET Core 3.0; copysign is its C equivalent.)
I am trying to understand why BigInteger is throwing an overflow exception. I tried to visualize this by converting the BigInteger to a byte[] and iteratively incrementing the shift until I see where the exception occurs.
Should I be able to bit-shift >> a byte[], or is C# simply not able to?
Code causing an exception
uint amountToShift2 = 12;
BigInteger num = new BigInteger(-126);
uint compactBitsRepresentation = (uint)(num >> (int)amountToShift2);
Regarding your edited question with:
uint amountToShift2 = 12;
BigInteger num = new BigInteger(-126);
uint compactBitsRepresentation = (uint)(num >> (int)amountToShift2);
The bit shift works OK and produces a BigInteger of value -1 (negative one).
But the conversion to uint throws an exception because -1 is outside the range of a uint. The conversion from BigInteger to uint does not "wrap around" modulo 2^32, but simply throws.
You can get around that with:
uint compactBitsRepresentation = (uint)(int)(num >> (int)amountToShift2);
which will not throw in unchecked context (which is the usual context).
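Putting it together (requires using System.Numerics;):

BigInteger num = new BigInteger(-126);
BigInteger shifted = num >> 12;          // -1: BigInteger shifts are arithmetic
uint bits = (uint)(int)shifted;          // 0xFFFFFFFF, no exception
Console.WriteLine(bits.ToString("X8"));  // prints: FFFFFFFF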
There are no >> or << bit-shift operators for byte arrays in C#. You need to write such code by hand (pay attention to bits that fall off).
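A minimal sketch of such hand-written code, assuming the array stores the most significant byte first and shifting right by one bit:

static byte[] ShiftRightOneBit(byte[] input)
{
    var output = new byte[input.Length];
    int carry = 0;
    for (int i = 0; i < input.Length; i++)
    {
        output[i] = (byte)((input[i] >> 1) | (carry << 7));
        carry = input[i] & 1; // the bit that falls off, carried into the next byte
    }
    return output;
}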
Something tells me that the >> operator won't work with reference types like arrays, rather it works with primitive types.
Your ints are actually represented as a series of bytes, so, say,
int i = 6;
i is represented as
00000000000000000000000000000110
i >> 1 shifts all the bits one place to the right, changing it to
00000000000000000000000000000011
or 3
If you really need to shift the byte array, it shouldn't be too terribly hard to define your own method to move all the items of the array over 1 slot. It will have O(n) time complexity though.
Math.Ceiling returns double because a double can store much bigger numbers.
However, if I'm sure that the int type is capable of storing the result, how should I convert? Is it safe to cast: (int)Math.Ceiling(...)?
If you are sure that you do not exceed the capacity of int, it should be perfectly safe to do
int myInt = (int)Math.Ceiling(...);
If you are not sure about the bound, you could go with long instead of int.
From C++ practice, I would use the following. It's guaranteed to get the correct result even when ceiling returns 99.99999...8 or 100.000000...1:
var result = (int)(Math.Ceiling(value) + 0.5);
The code below should work too, if you trust its implementation:
var result = Convert.ToInt32(Math.Ceiling(value));
If it's all about speed, then Math.Ceiling for int inputs and output is quite slow; the fastest is an inline expression (2.4 seconds vs. 33 ms, see the benchmarks below).
Warning: only valid for positive value and divisor values.
A) Modulus
Here's one I came up with, that has obviously also been found by C/C++ developers before:
var ceilingResult = (value / divisor) + (value % divisor == 0 ? 0 : 1);
From my own benchmark of 10M iterations, Math.Ceiling takes ~2.4 seconds. Calling this expression inside a named function takes ~380 ms and having it as a direct inline expression takes ~33ms.
B) Simple arithmetic only
Also consider using the suggestion from #mafu
var ceilingResult = (value + divisor - 1) / divisor;
See this 470x upvoted C++ answer for reference and validation. Also https://stackoverflow.com/a/4175152/887092.
C) DivRem
While looking at this answer, https://stackoverflow.com/a/14878734/887092, I noticed the comment that reminded me about DivRem CPU instructions. See https://learn.microsoft.com/en-us/dotnet/api/system.math.divrem?view=netframework-4.8. Math.DivRem should get resolved down to such a CPU instruction.
var quotient = Math.DivRem(value, divisor, out long remainder);
var ceilingResult = quotient + (remainder == 0 ? 0 : 1);
[I have not tested this.] See https://stackoverflow.com/a/924160/887092 for potential edge cases (where negative int numbers are used).
Further optimisations might be possible for this - maybe with casting. In this answer, https://stackoverflow.com/a/924160/887092, an if-conditional statement is used - they are about the same.
Performance of the 3:
Modulus: Has two operations that are added, but also a conditional branch.
Arithmetic: Has some additional mathematical operations in a sequence.
DivRem: Builds on the Modulus approach. If C# does resolve Math.DivRem to a single CPU instruction, this might be faster. Further optimisations might also be possible.
I'm not sure how these would perform on various architectures. But now you have options to explore.
If you would like Math.Floor for Int inputs and Output, it's even easier:
var floorResult = (value / divisor);
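A quick way to convince yourself that the three expressions agree with Math.Ceiling for positive inputs (per the warning above):

for (long value = 1; value <= 1000; value++)
for (long divisor = 1; divisor <= 64; divisor++)
{
    long expected = (long)Math.Ceiling((double)value / divisor);
    long byModulus = (value / divisor) + (value % divisor == 0 ? 0 : 1);
    long byArithmetic = (value + divisor - 1) / divisor;
    long q = Math.DivRem(value, divisor, out long r);
    long byDivRem = q + (r == 0 ? 0 : 1);
    if (expected != byModulus || expected != byArithmetic || expected != byDivRem)
        Console.WriteLine("Mismatch at {0}/{1}", value, divisor); // never prints
}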
I'd go with
int x = (int)Math.Ceiling(0.9); // 1
If you are uncertain, you can always add an if statement and check whether the number you get back is higher than int.MaxValue.
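For example, a range guard along those lines (value stands for whatever double you are converting):

double d = Math.Ceiling(value);
if (d >= int.MinValue && d <= int.MaxValue)
{
    int x = (int)d; // safe: d fits in an int
}
else
{
    // handle the out-of-range case
}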
int oInt = Convert.ToInt32(Math.Ceiling(value));
Since Math.Ceiling returns a double and you want to convert it to an int, use the Convert class. Example:
double[] values = { Double.MinValue, -1.38e10, -1023.299, -12.98,
                    0, 9.113e-16, 103.919, 17834.191, Double.MaxValue };
int result;

foreach (double value in values)
{
    try
    {
        result = Convert.ToInt32(value);
        Console.WriteLine("Converted the {0} value '{1}' to the {2} value {3}.",
                          value.GetType().Name, value,
                          result.GetType().Name, result);
    }
    catch (OverflowException)
    {
        Console.WriteLine("{0} is outside the range of the Int32 type.", value);
    }
}
// -1.79769313486232E+308 is outside the range of the Int32 type.
// -13800000000 is outside the range of the Int32 type.
// Converted the Double value '-1023.299' to the Int32 value -1023.
// Converted the Double value '-12.98' to the Int32 value -13.
// Converted the Double value '0' to the Int32 value 0.
// Converted the Double value '9.113E-16' to the Int32 value 0.
// Converted the Double value '103.919' to the Int32 value 104.
// Converted the Double value '17834.191' to the Int32 value 17834.
// 1.79769313486232E+308 is outside the range of the Int32 type.
How do I use logical operators to determine if a bit is set, or is bit-shifting the only way?
I found this question that uses bit shifting, but I would think I can just AND out my value.
For some context, I'm reading a value from Active Directory and trying to determine if it is a Schema Base Object. I think my problem is a syntax issue, but I'm not sure how to correct it.
foreach (DirectoryEntry schemaObjectToTest in objSchema.Children)
{
    var resFlag = schemaObjectToTest.Properties["systemFlags"].Value;

    // if bit 10 is set then can't be made confidential.
    if (resFlag != null)
    {
        byte original = Convert.ToByte(resFlag);
        byte isFlag_Schema_Base_Object = Convert.ToByte(2);
        var result = original & isFlag_Schema_Base_Object;
        if (result > 0)
        {
            // A non-zero result indicates that the bit was found
        }
    }
}
When I look at the debugger:
resFlag is an object{int} and the value is 0x00000010.
isFlag_Schema_Base_Object, is 0x02
resFlag is 0x00000010 which is 16 in decimal, or 10000 in binary. So it seems like you want to test bit 4 (with bit 0 being the least significant bit), despite your comment saying "if bit 10 is set".
If you do need to test bit 4, then isFlag_Schema_Base_Object needs to be initialised to 16, which is 0x10.
Anyway, you are right - you don't need to do bit shifting to see if a bit is set, you can AND the value with a constant that has just that bit set, and see if the result is non-zero.
If the bit is set:

original                    xxx1xxxx
AND
isFlag_Schema_Base_Object   00010000
------------------------------------
=                           00010000 (non-zero)

But if the bit isn't set:

original                    xxx0xxxx
AND
isFlag_Schema_Base_Object   00010000
------------------------------------
=                           00000000 (zero)
Having said that, it might be clearer to initialise isFlag_Schema_Base_Object using the value 1<<4, to make it clear that you're testing whether bit 4 is set.
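For example, using the 0x10 value actually seen in the debugger (the constant name is just illustrative):

const int SchemaBaseObjectFlag = 1 << 4;  // bit 4 => 0x10 == 16

int systemFlags = 0x10;                   // the value observed in resFlag
bool isSet = (systemFlags & SchemaBaseObjectFlag) != 0; // true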
If you know which bit to check and you're dealing with ints, you can use BitVector32.
// BitVector32 lives in System.Collections.Specialized; Enumerable in System.Linq.
int yourValue = 5;
BitVector32 bv = new BitVector32(yourValue);
int bitPositionToCheck = 3;

// Chain CreateMask calls so each mask doubles the previous one (1, 2, 4, ...).
int mask = Enumerable.Range(0, bitPositionToCheck)
                     .Aggregate(0, (prev, _) => BitVector32.CreateMask(prev));
bool isSet = bv[mask];
Using bitshifting is probably cleaner than using CreateMask. But it's there :)