Any fast way to check if two doubles have the same sign? - C#

Any fast way to check if two doubles have the same sign? Assume the two doubles cannot be 0.

Potential solutions:
a*b > 0: One floating-point multiply and one comparison.
(a>0) == (b>0): Three comparisons.
Math.Sign(a) == Math.Sign(b): Two function calls and one comparison.
Speed comparison:
It's about what you'd expect (see experimental setup at the bottom):
a*b > 0: 0.42 ± 0.02s
(a>0) == (b>0): 0.49 ± 0.01s
Math.Sign(a) == Math.Sign(b): 1.11 ± 0.09s
Important notes:
As noted by greybeard in the comments, method 1 is susceptible to problems if the product of the values underflows below Double.Epsilon: a*b then rounds to zero and the test fails even though the signs match. Unless you can guarantee that the product is always larger than this, you should probably go with method 2.
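For instance (my example, not from the original post), two tiny same-signed values defeat method 1 because their product underflows to zero:
double a = 1e-200;
double b = 1e-200;                      // same sign as a
Console.WriteLine(a * b > 0);           // False: 1e-400 underflows to 0.0
Console.WriteLine((a > 0) == (b > 0));  // True: method 2 is unaffected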
Experimental setup:
The following code was run 16 times on http://rextester.com/.
public static void Main(string[] args)
{
    double a = 1e-273;
    double b = a;
    bool equiv = false;
    for (int i = 0; i < 100000000; ++i) {
        equiv = THE_COMPARISON;
        b += a;
    }
    Console.WriteLine(equiv);
}

The simplest and fastest way for IEEE 754 I know of is just XOR-ing the sign (MSB) bits of both numbers. Here is a small C# example (note the inlining to avoid function-call overhead):
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private unsafe static bool fpu_cmpsign(double a, double b)
{
    byte* aa = (byte*)(&a); // points to a as an 8-bit integral type
    byte* bb = (byte*)(&b); // points to b as an 8-bit integral type
    return ((aa[7] ^ bb[7]) & 128) != 128;
}
Here are the results for the +/- sign combinations:
a b result
- - 1
- + 0
+ - 0
+ + 1
The idea is simple. The sign is stored in the highest bit (MSB), and XOR returns 1 for unequal bits, so XOR the MSBs of both numbers together and negate the output. The [7] just accesses the highest byte of the double as an 8-bit integral type, so the CPU's ALU can be used instead of the FPU. If your platform stores the MSByte first (big-endian) rather than the LSByte first, use [0] instead.
So all the comparison needs is one 8-bit XOR to combine the sign bits, one 8-bit AND to extract the sign-bit result, and one comparison to negate it.
You can use unions instead of pointers, and the native bit width of your platform, to get the best performance.
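As a safe-code variation on the same idea (my sketch, not part of the original answer), BitConverter.DoubleToInt64Bits exposes the IEEE 754 bit pattern without unsafe pointers; XOR-ing the two patterns leaves a non-negative long exactly when the sign bits match:
static bool SameSign(double a, double b)
{
    // The XOR is negative iff its MSB (the doubles' XORed sign bits) is set.
    return (BitConverter.DoubleToInt64Bits(a) ^ BitConverter.DoubleToInt64Bits(b)) >= 0;
}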

You could use:
if (copysign(x, y) == x)
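copysign here is C's math.h function; in C# the equivalent (available since .NET Core 3.0) is Math.CopySign, so a sketch of the same test would be:
// Math.CopySign(x, y) returns the magnitude of x with the sign of y,
// so it equals x exactly when x and y share a sign (given x != 0).
if (Math.CopySign(x, y) == x) { /* same sign */ }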


Bit reverse numbers by N bits

I am trying to find a simple algorithm that reverses the bits of a number up to N number of bits. For example:
For N = 2:
01 -> 10
11 -> 11
For N = 3:
001 -> 100
011 -> 110
101 -> 101
The only thing I keep finding is how to bit-reverse a full byte, but that's only going to work for N = 8 and that's not always what I need.
Does anyone know an algorithm that can do this bitwise operation? I need to do many of them for an FFT, so I'm looking for something that can be very optimised too.
Here is a C# implementation of the bitwise reverse operation:
public uint Reverse(uint a, int length)
{
    uint b = 0b_0;
    for (int i = 0; i < length; i++)
    {
        b = (b << 1) | (a & 0b_1);
        a = a >> 1;
    }
    return b;
}
The code above first shifts the output value to the left, adds the lowest bit of the input to the output, and then shifts the input to the right, repeating until all bits are done. Here are some samples:
uint a = 0b_1100;
uint b = Reverse(a, 4); //should be 0b_0011;
And
uint a = 0b_100;
uint b = Reverse(a, 3); //should be 0b_001;
This implementation's time complexity is O(N), where N is the length of the input.
Here's a small look-up table solution that's good for (2<=N<=32).
For N==8, I think everyone agrees that a 256 byte array lookup table is the way to go. Similarly, for N from 2 to 7, you could create 4, 8, ... 128 lookup byte arrays.
For N==16, you could flip each byte and then reorder the two bytes. Similarly, for N==24, you could flip each byte and then reorder things (which would leave the middle one flipped but in the same position). It should be obvious how N==32 would work.
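For illustration (my sketch, not from the original answer), here is the N==16 case built on a hypothetical 256-entry table rev8 holding each byte reversed:
// Precompute rev8 once: rev8[b] is the 8-bit reversal of byte b.
byte[] rev8 = new byte[256];
for (int i = 0; i < 256; i++)
    for (int bit = 0; bit < 8; bit++)
        if ((i & (1 << bit)) != 0)
            rev8[i] |= (byte)(1 << (7 - bit));

// N == 16: flip each byte via the table, then swap the two bytes.
ushort Reverse16(ushort x) => (ushort)((rev8[x & 0xFF] << 8) | rev8[x >> 8]);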
For N==9, think of it as three 3-bit numbers (flip each of them, reorder them and then do some masking and shifting to get them in the right position). For N==10, it's two 5-bit numbers. For N==11, it's two 5-bit numbers on either side of a center bit that doesn't change. The same for N==13 (two 6-bit numbers around an unchanging center bit). For a prime like N==23, it would be a pair of 8-bit numbers around a center 7-bit number.
For the odd numbers between 24 and 32 it gets more complicated. You probably need to consider five separate numbers. Consider N==29: that could be four 7-bit numbers around an unchanging center bit. For N==31, it would be a center bit surrounded by a pair of 8-bit numbers and a pair of 7-bit numbers.
That said, that's a ton of complicated logic, and it would be a bear to test. It might be faster than @MuhammadVakili's bit-shifting solution (it certainly would be for N<=8), but it might not. I suggest you go with his solution.
Using string manipulation?
static void Main(string[] args)
{
    uint number = 269;
    int numBits = 4;
    string strBinary = Convert.ToString(number, 2).PadLeft(32, '0');
    Console.WriteLine($"{number}");
    Console.WriteLine($"{strBinary}");
    string strBitsReversed = new string(strBinary.Substring(strBinary.Length - numBits, numBits).ToCharArray().Reverse().ToArray());
    string strBinaryModified = strBinary.Substring(0, strBinary.Length - numBits) + strBitsReversed;
    uint numberModified = Convert.ToUInt32(strBinaryModified, 2);
    Console.WriteLine($"{strBinaryModified}");
    Console.WriteLine($"{numberModified}");
    Console.Write("Press Enter to Quit.");
    Console.ReadLine();
}
Output:
269
00000000000000000000000100001101
00000000000000000000000100001011
267
Press Enter to Quit.

Is ampersand followed by int.MaxValue rounding down?

I have a piece of C# code that another developer copied from a blog post, which is used to encode/obfuscate an integer. The code contains some syntax that I am unfamiliar with. It looks like it might be rounding down the result of the calculation to prevent it from exceeding the maximum size of an integer; if that is the case, I am worried that two input values could potentially produce the same output. The obfuscated values need to be unique, so I'm worried about using this code without understanding how it works.
This is a simplified version of the code:
public static int DecodeNumber(int input)
{
    return (input * PrimeInverse) & int.MaxValue;
}
So my question is: what is the meaning of the ampersand in this context, and will this code produce an output that is unique to the input?
No, there is no "rounding" going on here. This is a sneaky way of truncating the most significant bit when multiplication results in overflow.
According to the documentation, int.MaxValue is 2,147,483,647, which is 0x7FFFFFFF in hex. Performing a bitwise AND with this value simply clears out the most significant bit.
Since the intention of the code is to use int.MaxValue for its binary pattern, rather than for its numeric value as the highest int representable by Int32, I would recommend either using the 0x7FFFFFFF constant explicitly, or computing it with a ~ expression:
return (input * PrimeInverse) & ~(1 << 31);
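A quick sanity check (mine, not from the answer) that these spellings all denote the same bit pattern:
Console.WriteLine(0x7FFFFFFF == int.MaxValue);  // True
Console.WriteLine(~(1 << 31) == int.MaxValue);  // True: 1 << 31 is the sign bit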
The ampersand is the bitwise AND operator. The numbers on either side of the operator are considered in binary format, and a logical AND is performed on the bits of the same significance.
int.MaxValue equals 2,147,483,647. The result of this operation is explained below:
operation:
a = x & int.MaxValue;
result:
if x is non-negative, then a = x;
if x is negative, then a = x + 2,147,483,648.
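To see it concretely (my example), take a negative x and watch the sign bit get cleared:
int x = -5;                           // 0xFFFFFFFB in two's complement
Console.WriteLine(x & int.MaxValue);  // 2147483643, i.e. -5 + 2,147,483,648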
EDIT:
Logical Operations:
Logical operations like AND, OR, XOR, etc. are defined to work on Boolean (logical) values, which can be either 1 or 0. The result of an AND operation between two logical variables is 1 if and only if both variables are equal to 1. This is shown below:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
The bitwise AND operator on numbers works using the basic AND operation. First, the two numbers on either side of the operator are considered in binary format. If they do not have the same number of digits, zeros are added to the left of the shorter number until they do. Then the digits of the same significance are ANDed one by one as explained above, and each result is written in the place of the same significance, constructing the final result. The bitwise AND of 12 and 7 is shown below; 12 is 1100 in binary format and 7 is 0111.
12 = 0b1100
7 = 0b0111
12 & 7 = ?
1 1 0 0 &
0 1 1 1
----------
0 1 0 0 = 4

Calculating bitwise inversion of char

I am trying to reverse engineer a serial port device that uses HDLC for its packet format. Based on the documentation, the packet should contain a bitwise inversion of the command (first 4 bytes), which in this case is "HELO". Monitoring the serial port while using the original program shows what the bitwise inversion should be:
HELO -> b7 ba b3 b0
READ -> ad ba be bb
The problem is, I am not getting values even remotely close.
public object checksum
{
    get
    {
        var cmdDec = (int)Char.GetNumericValue((char)this.cmd);
        return (cmdDec ^ 0xffffffff);
    }
}
You have to work with bytes, not with chars:
string source = "HELO";

// Encoding.ASCII: I assume that the command line has ASCII encoded commands only
byte[] result = Encoding.ASCII
    .GetBytes(source)
    .Select(b => unchecked((byte)~b)) // unchecked: ~b returns int; can exceed byte.MaxValue
    .ToArray();
Test (let's represent the result as hexadecimals)
// b7 ba b3 b0
Console.Write(string.Join(" ", result.Select(b => b.ToString("x2"))));
Char is not a byte. You should use bytes instead of chars.
So this.cmd is an array of bytes? You could use BitConverter.ToUInt32().
PSEUDO (you might need to fix some casting):
public uint checksum
{
    get
    {
        var cmdDec = BitConverter.ToUInt32(this.cmd, 0);
        return (cmdDec ^ 0xffffffff);
    }
}
If this.cmd is a string, you can get a byte array from it with Encoding.UTF8.GetBytes(string).
Your bitwise inversion isn't doing what you think it's doing. Take the following, for example:
int i = 5;
var j = i ^ 0xFFFFFFFF;
var k = ~i;
The first example performs the inversion the way you are doing it, by XOR-ing the number with a max value. The second uses the C# bitwise-NOT ~ operator.
After running this code, j will be a long value equal to 4294967290, while k will be an int value equal to -6. Their binary representation will be the same, but j will include another 32 bits of 0's to go along with it. There's also the obvious problem of them being completely different numbers, so any math performed on the values will be completely different depending on what you are using.
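Concretely (my illustration, reusing the variables above), the low 32 bits agree even though the types and values differ:
int i = 5;
long j = i ^ 0xFFFFFFFF;                    // 4294967290 (System.Int64)
int k = ~i;                                 // -6 (System.Int32)
Console.WriteLine(unchecked((int)j) == k);  // True: identical low 32 bits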

C#: The result of casting a negative integer to a byte

I was a looking at the source code of a project, and I noticed the following statement (both keyByte and codedByte are of type byte):
return (byte)(keyByte - codedByte);
I'm trying now to understand what would the result be in cases where keyByte is smaller than codedByte, which results in a negative integer.
After some experiments to understand the result of casting a negative integer which has a value in the range [-255 : -1], I got the following results:
byte result = (byte) (-6); // result = 250
byte result = (byte) (-50); // result = 206
byte result = (byte) (-17); // result = 239
byte result = (byte) (-20); // result = 236
So, provided that -256 < a < 0, I was able to determine the result by:
result = 256 + a;
My question is: should I always expect this to be the case?
Yes, that will always be the case (i.e. it is not simply dependent on your environment or compiler, but is defined as part of the C# language spec). See http://msdn.microsoft.com/en-us/library/aa691349(v=vs.71).aspx:
In an unchecked context, the result is truncated by discarding any high-order bits that do not fit in the destination type.
The next question is, if you take away the high-order bits of a negative int between -256 and -1, and read it as a byte, what do you get? This is what you've already discovered through experimentation: it is 256 + x.
Note that endianness does not matter because we're discarding the high-order (or most significant) bits, not the "first" 24 bits. So regardless of which end we took it from, we're left with the least significant byte that made up that int.
Yes. Remember, there's no such thing as "-" in the domain of a .Net "Byte":
http://msdn.microsoft.com/en-us/library/e2ayt412.aspx
Because Byte is an unsigned type, it cannot represent a negative
number. If you use the unary minus (-) operator on an expression that
evaluates to type Byte, Visual Basic converts the expression to Short
first. (Note: substitute any CLR/.Net language for "Visual Basic")
ADDENDUM:
Here's a sample app:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace TestByte
{
class Program
{
static void Main(string[] args)
{
for (int i = -255; i < 256; i++)
{
byte b = (byte)i;
System.Console.WriteLine("i={0}, b={1}", i, b);
}
}
}
}
And here's the resulting output:
testbyte|more
i=-255, b=1
i=-254, b=2
i=-253, b=3
i=-252, b=4
i=-251, b=5
...
i=-2, b=254
i=-1, b=255
i=0, b=0
i=1, b=1
...
i=254, b=254
i=255, b=255
Here is an algorithm that performs the same logic as casting to byte, to help you understand it:
For positives:
byte bNum = (byte)(iNum % 256);
For negatives:
byte bNum = (byte)(256 + (iNum % 256));
It's like searching for the k which causes x + 256k to be in the range 0 ... 255. There can only be one such k, and the value it produces is the result of casting to byte.
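A quick check of that formula in C# (my example; note that % of a negative int is itself negative, which is why the 256 is added):
int iNum = -712;
byte viaCast = unchecked((byte)iNum);          // 56
byte viaFormula = (byte)(256 + (iNum % 256));  // 256 + (-200) = 56
Console.WriteLine(viaCast == viaFormula);      // True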
Another way of looking at it is as if it "cycles around" the byte value range:
Let's use iNum = -712 again, and define bNum = 0.
We shall do iNum++; bNum--; until iNum == 0:
iNum = -712;
bNum = 0;
iNum++; // -711
bNum--; // 255 (cycles to the maximum value)
iNum++; // -710
bNum--; // 254
... // And so on, as if the iNum value is being *consumed* within the byte value range cycle.
This is, of course, just an illustration to see how logically it works.
This is what happens in an unchecked context. You could say that the runtime (or the compiler, if the Int32 that you cast to Byte is known at compile time) adds or subtracts 256 as many times as needed until it finds a representable value.
In a checked context, an exception (or compile-time error) results. See http://msdn.microsoft.com/en-us/library/khy08726.aspx
Yes - unless you get an exception.
.NET defines all arithmetic operations only on 4 byte and larger data types. So the only non-obvious point is how converting an int to a byte works.
For a conversion from an integral type to another integral type, the result of conversion depends on overflow checking context (says the ECMA 334 standard, Section 13.2.1).
So, in the following context
checked
{
    return (byte)(keyByte - codedByte);
}
you will see a System.OverflowException. Whereas in the following context:
unchecked
{
    return (byte)(keyByte - codedByte);
}
you are guaranteed to always see the results that you expect, regardless of whether you add a multiple of 256 to the difference; for example, (byte)(2 - 255) yields 3.
This is true regardless of how the hardware represents signed values. The CLR standard (ECMA 335) specifies, in Section 12.1, that the Int32 type is a "32-bit two's-complement signed value". (Well, that also matches all platforms on which .NET or mono is currently available anyway, so one could almost guess that it would work anyway, but it is good to know that the practice is supported by the language standard and portable.)
Some teams do not want to specify overflow checking contexts explicitly, because they have a policy of checking for overflows early in the development cycle, but not in released code. In these cases you can safely do byte arithmetic like this:
return (byte)((keyByte - codedByte + 256) % 256);
(The + 256 matters: in C#, the % of a negative number is negative, so without it the cast could still overflow in a checked context.)

Fastest way to sum digits in a number

Given a large number, e.g. 9223372036854775807 (Int64.MaxValue), what is the quickest way to sum the digits?
Currently I am ToStringing and reparsing each char into an int:
num.ToString().Sum(c => int.Parse(new String(new char[] { c })));
Which is surely horrifically inefficient. Any suggestions?
And finally, how would you make this work with BigInteger?
Thanks
Well, another option is:
int sum = 0;
while (value != 0)
{
    long remainder; // value is a long here, so the remainder must be too
    value = Math.DivRem(value, 10, out remainder);
    sum += (int)remainder;
}
BigInteger has a DivRem method as well, so you could use the same approach.
Note that I've seen DivRem not be as fast as doing the same arithmetic "manually", so if you're really interested in speed, you might want to consider that.
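A sketch of that BigInteger variant (mine, not from the answer; BigInteger.DivRem returns the quotient and passes the remainder out, and the value is assumed non-negative):
using System.Numerics;

static int SumDigits(BigInteger value)
{
    int sum = 0;
    while (value != 0)
    {
        value = BigInteger.DivRem(value, 10, out BigInteger remainder);
        sum += (int)remainder;
    }
    return sum;
}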
Also consider a lookup table with (say) 1000 elements precomputed with the sums:
int sum = 0;
while (value != 0)
{
    long remainder;
    value = Math.DivRem(value, 1000, out remainder);
    sum += lookupTable[remainder];
}
That would mean fewer iterations, but each iteration has an added array access...
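The table itself is cheap to build (my sketch): each entry is just the digit sum of its index.
// Precompute digit sums for 0..999 once, up front.
int[] lookupTable = new int[1000];
for (int i = 0; i < 1000; i++)
    lookupTable[i] = i % 10 + (i / 10) % 10 + i / 100;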
Nobody has discussed the BigInteger version. For that I'd look at 10^1, 10^2, 10^4, 10^8 and so on until you find the last 10^(2^n) that is less than your value. Take your number div and mod 10^(2^n) to come up with 2 smaller values. Wash, rinse, and repeat recursively. (You should keep your iterated squares of 10 in an array, and in the recursive part pass along the information about the next power to use.)
With a BigInteger with k digits, dividing by 10 is O(k). Therefore finding the sum of the digits with the naive algorithm is O(k^2).
I don't know what C# uses internally, but the non-naive algorithms out there for multiplying or dividing a k-bit integer by a k-bit integer all work in time O(k^1.6) or better (most are much, much better, but have an overhead that makes them worse for "small big integers"). In that case preparing your initial list of powers and splitting once takes time O(k^1.6). This gives you 2 problems of size O((k/2)^1.6) = 2^(-0.6) O(k^1.6). At the next level you have 4 problems of size O((k/4)^1.6) for another 2^(-1.2) O(k^1.6) of work. Add up all of the terms and the powers of 2 turn into a geometric series converging to a constant, so the total work is O(k^1.6).
This is a definite win, and the win will be very, very evident if you're working with numbers in the many thousands of digits.
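A minimal sketch of that recursion (my code, not the answerer's; for simplicity it rebuilds the splitting power on each call instead of passing the array of iterated squares along):
using System.Numerics;

static int SumDigits(BigInteger value) // assumes value >= 0
{
    if (value < 10)
        return (int)value;
    // Find the largest power p in the chain 10, 10^2, 10^4, ... with p*p <= value,
    // so the two halves of the split are roughly the same size.
    BigInteger p = 10;
    while (p * p <= value)
        p *= p;
    BigInteger high = BigInteger.DivRem(value, p, out BigInteger low);
    return SumDigits(high) + SumDigits(low);
}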
Yes, it's probably somewhat inefficient. I'd probably just repeatedly divide by 10, adding together the remainders each time.
The first rule of performance optimization: Don't divide when you can multiply instead. The following function will take four-digit numbers 0-9999 and do what you ask. The intermediate calculations are larger than 16 bits. We multiply the number by 1/10000 and take the result as a Q16 fixed-point number. Digits are then extracted by multiplication by 10 and taking the integer part.
#define TEN_OVER_10000 ((1<<25)/1000 + 1) // .001 Q25

int sum_digits(unsigned int n)
{
    int c;
    int sum = 0;
    n = (n * TEN_OVER_10000) >> 9; // n*10/10000 Q16
    for (c = 0; c < 4; c++)
    {
        printf("Digit: %d\n", n >> 16);
        sum += n >> 16;
        n = (n & 0xffff) * 10; // next digit
    }
    return sum;
}
This can be extended to larger sizes, but it's tricky. You need to ensure that the rounding in the fixed-point calculation always works correctly. I also did 4-digit numbers so the intermediate result of the fixed-point multiply would not overflow.
Int64 BigNumber = 9223372036854775807;
String BigNumberStr = BigNumber.ToString();
int Sum = 0;
foreach (Char c in BigNumberStr)
    Sum += (byte)c;

// 48 is the ASCII value of zero
// remove it in one step rather than in the loop
Sum -= 48 * BigNumberStr.Length;
Instead of int.Parse, why not subtract '0' from each digit character to get its actual value?
Remember, '9' - '0' = 9, so you should be able to do this in order k (the length of the number). The subtraction is just one operation, so it should not slow things down.
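In LINQ form (my one-liner, same idea without the parsing):
int sum = num.ToString().Sum(c => c - '0');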
