Initially I thought Math.Sign would be the proper way to go, but after running a test it seems that it treats -0.0 and +0.0 the same.
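A quick demonstration:
Console.WriteLine(Math.Sign(-0.0)); // 0
Console.WriteLine(Math.Sign(0.0));  // 0 - no way to tell them apart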
Here's a grotty hack way of doing it:
private static readonly long NegativeZeroBits =
    BitConverter.DoubleToInt64Bits(-0.0);

public static bool IsNegativeZero(double x)
{
    return BitConverter.DoubleToInt64Bits(x) == NegativeZeroBits;
}
Basically that's testing for the exact bit pattern of -0.0, but without having to hardcode it.
After a bit of searching I finally made it to Section 7.7.2 of the C# specification and came up with this solution.
private static bool IsNegativeZero(double x)
{
    return x == 0.0 && double.IsNegativeInfinity(1.0 / x);
}
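A quick sanity check of the trick (dividing 1.0 by a zero yields a signed infinity):
Console.WriteLine(IsNegativeZero(-0.0)); // True:  1.0 / -0.0 is -Infinity
Console.WriteLine(IsNegativeZero(0.0));  // False: 1.0 / 0.0 is +Infinity
Console.WriteLine(IsNegativeZero(-1.0)); // False: -1.0 != 0.0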
Negative zero has the sign bit set. Thus:
public static bool IsNegativeZero(double value) {
    if (value != 0) return false;
    int index = BitConverter.IsLittleEndian ? 7 : 0;
    return BitConverter.GetBytes(value)[index] == 0x80;
}
Edit: as the OP pointed out, this doesn't work in Release mode. The x86 JIT optimizer takes the if() statement seriously and loads zero directly rather than loading value. Which is indeed more performant. But that causes the negative zero to be lost. The code needs to be de-tuned to prevent this:
public static bool IsNegativeZero(double value) {
    int index = BitConverter.IsLittleEndian ? 7 : 0;
    if (BitConverter.GetBytes(value)[index] != 0x80) return false;
    return value == 0;
}
This is quite typical behavior for the x86 jitter, by the way; it doesn't handle corner cases well when it optimizes floating-point code. The x64 jitter is much better in that respect. Although there's arguably no worse corner case than giving meaning to negative zero. Be forewarned.
x == 0 && 1 / x < 0
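Wrapped as a method (this assumes x is a double, so 1 / x is floating-point division rather than integer division):
static bool IsNegativeZero(double x)
{
    // 1 / -0.0 is double.NegativeInfinity, which is < 0;
    // 1 / +0.0 is double.PositiveInfinity, which is not.
    return x == 0 && 1 / x < 0;
}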
Here's another hack. It takes advantage of the fact that Equals on a struct will do a bitwise comparison instead of calling Equals on its members:
struct Negative0
{
    double val;

    public static bool Equals(double d)
    {
        return new Negative0 { val = -0d }.Equals(new Negative0 { val = d });
    }
}
Negative0.Equals(0); // false
Negative0.Equals(-0.0); // true
More generally, you can do,
bool IsNegative(double value)
{
    const ulong SignBit = 0x8000000000000000;
    return ((ulong)BitConverter.DoubleToInt64Bits(value) & SignBit) == SignBit;
}
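Note that this tests the sign bit itself, so it returns true for any value with the sign bit set, not just negative zero (newer versions of .NET also expose double.IsNegative, which performs the same test):
IsNegative(-0.0); // true
IsNegative(0.0);  // false
IsNegative(-1.5); // true; for -0.0 specifically, combine: value == 0 && IsNegative(value)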
or alternatively, if you prefer,
[StructLayout(LayoutKind.Explicit)]
private struct DoubleULong
{
    [FieldOffset(0)]
    public double Double;
    [FieldOffset(0)]
    public readonly ulong ULong;
}

bool IsNegative(double value)
{
    var du = new DoubleULong { Double = value };
    return ((du.ULong >> 62) & 2) == 2;
}
The latter gives an approximate 50% performance improvement in debug builds, but once compiled in release mode and run from the command line there is no significant difference.
I couldn't get a performance improvement out of unsafe code either, but this may be due to my inexperience.
Related
When the value is more than 10 digits long, an error occurs and the number stored in the int variable becomes negative.
Why is this happening and how can I solve the problem?
This is my code:
UnityWebRequest www = new UnityWebRequest("https://api.hypixel.net/skyblock/bazaar");
www.downloadHandler = new DownloadHandlerBuffer();
yield return www.SendWebRequest();
JSONNode itemsData2 = JSON.Parse(www.downloadHandler.text);
unixtimeOnline = itemsData2["lastUpdated"];
Debug.Log(unixtimeOnline);
// output -2147483648
tl;dr
Simply use ulong instead of int for unixtimeOnline
ulong unixtimeOnline = itemsData2["lastUpdated"];
What happened?
As was already mentioned, int (also known as System.Int32) has 32 bits.
The int.MaxValue is
2147483647
no int can be higher than that. What you get is basically an integer overflow.
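In other words (C# arithmetic is unchecked by default, so the addition simply wraps around):
int max = int.MaxValue;               // 2147483647
int wrapped = unchecked(max + 1);     // -2147483648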
From the JSON.Parse I suspect you are using SimpleJson
and if you have
int unixtimeOnline = itemsData2["lastUpdated"];
it will implicitly use
public static implicit operator int(JSONNode d)
{
    return (d == null) ? 0 : d.AsInt;
}
which uses AsInt
public virtual int AsInt
{
    get { return (int)AsDouble; }
    set { AsDouble = value; }
}
which is a problem because a double can hold values far larger than int.MaxValue, so the cast back to int overflows:
so when you simply do
double d = 2147483648.0;
int example = (int)d;
you will again get
-2147483648
What you want
You want to use a type that supports larger numbers, e.g.
long: goes up to
9,223,372,036,854,775,807
and is actually what system time ticks are usually stored as (see e.g. DateTime.Ticks),
or, since your time value is probably never negative anyway, directly use the unsigned one:
ulong: goes up to
18,446,744,073,709,551,615
Solution
Long story short: there are implicit conversions for the other numeric types, so all you need to do is use
ulong unixtimeOnline = itemsData2["lastUpdated"];
and it will use AsULong instead
public static implicit operator ulong(JSONNode d)
{
    return (d == null) ? 0 : d.AsULong;
}
which now correctly uses
public virtual ulong AsULong
{
    get
    {
        ulong val = 0;
        if (ulong.TryParse(Value, out val))
            return val;
        return 0;
    }
    set
    {
        Value = value.ToString();
    }
}
As the comment says, you will need to use a long variable type.
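For example (a sketch; this assumes your SimpleJSON version exposes the node's raw string through its Value property, so parsing explicitly avoids relying on a built-in long conversion):
long unixtimeOnline = long.Parse(itemsData2["lastUpdated"].Value);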
Is there a way to detect whether code is executing in a checked or unchecked context? I could use something like this to check...
private static readonly int IntMaxValue = int.Parse(int.MaxValue.ToString());

private static bool IsChecked()
{
    try {
        // IntMaxValue is deliberately initialized at runtime (via Parse) so the
        // compiler can't constant-fold the addition and reject it at compile time.
        var i = (IntMaxValue + 1);
        return false;
    }
    catch (OverflowException) {
        return true;
    }
}
... but that's a lot of overhead in a tight loop, throwing and catching just to detect it. Is there a lighter way to do this?
EDIT for more context...
struct NarrowChar
{
    private readonly Byte b;

    public static implicit operator NarrowChar(Char c) => new NarrowChar(c);

    public NarrowChar(Char c)
    {
        if (c > Byte.MaxValue)
        {
            if (IsChecked())
                throw new OverflowException();
            b = 0; // since ideally I don't want to have a non-sensical value
        }
        else
        {
            b = (Byte)c;
        }
    }
}
If the answer is just 'no', don't be afraid to simply say that :)
So the answer seems to be 'no', but I figured out the solution for my particular problem. It could be useful to someone else who ends up in this situation.
public NarrowChar(Char c) {
    var b = (Byte)c;
    this.b = (c & 255) != c ? (Byte)'?' : b;
}
First we "probe" the checked/unchecked context by just trying the cast. If we're checked, the overflow exception is thrown by the (Byte) c. If we're unchecked, the bit mask and comparison to c tells us if there was an overflow in the casting. In our particular case, we want the semantics of NarrowChar such that a Char that won't fit in a Byte gets set to ?; just like if you transcode a String of ™ to ISO-8759-1 or ASCII you get ?.
Doing the cast first is important to the semantics. Inlining b will break that "probing" behaviour.
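To illustrate the intended semantics (hypothetical usage of the struct above):
NarrowChar ok = 'A'; // fits in a byte, stored as-is
NarrowChar tm = '™'; // compiled checked (/checked): throws OverflowException
                     // compiled unchecked: stored as '?'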
I need to get the bit width of a type. How can I do that?
For example how can I write a function as follows?
int x = 30;
Type t = x.GetType();
bool sign = IsSignedType(t); // int is signed type, so it's true
int width = GetWidth(t); // 32
For the size, you can use Marshal.SizeOf and multiply by the number of bits in a byte (hint: 8), though for the built-in value types it is probably easy enough and certainly faster to just use a case statement.
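For example (the sizeof operator works too for the built-in value types):
int bits = Marshal.SizeOf(typeof(int)) * 8; // 4 bytes * 8 = 32
int bits2 = sizeof(int) * 8;                // same, resolved at compile time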
For the sign, I'd think bool sign = (x == Math.Abs(x)); would do, though note that this tests whether the value is non-negative, not whether its type is signed.
EDIT:
To determine if it is a signed number, there is no built-in method, but there are only 5 unsigned types to check:
using System;
using System.Runtime.InteropServices;

public static class Application
{
    public enum SignedEnum : int
    {
        Foo,
        Boo,
        Zoo
    }

    public enum UnSignedEnum : uint
    {
        Foo,
        Boo,
        Zoo
    }

    public static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Int32)) * 8); // 32
        Console.WriteLine(5.IsSigned());                      // True
        Console.WriteLine(((UInt32)5).IsSigned());            // False
        Console.WriteLine((SignedEnum.Zoo).IsSigned());       // True
        Console.WriteLine((UnSignedEnum.Zoo).IsSigned());     // False
        Console.ReadLine();
    }
}
public static class NumberHelper
{
    public static Boolean IsSigned<T>(this T value) where T : struct
    {
        return value.GetType().IsSigned();
    }

    public static Boolean IsSigned(this Type t)
    {
        return !(
            t.Equals(typeof(Byte)) ||
            t.Equals(typeof(UIntPtr)) ||
            t.Equals(typeof(UInt16)) ||
            t.Equals(typeof(UInt32)) ||
            t.Equals(typeof(UInt64)) ||
            (t.IsEnum && !Enum.GetUnderlyingType(t).IsSigned())
        );
    }
}
@ChrisShain answers the first part correctly. Assuming you can guarantee that t is a numeric type, to tell whether the type is signed you should be able to use expression trees to dynamically read the MaxValue constant field on t, convert it to a bit array, and check whether it uses the sign bit (or just use bit-shift magic to test it without the conversion). I haven't done it this way, but it should be doable. If you want an example, I can work through it.
Or do it the easy way with a switch statement (or series of ifs) like everyone else does.
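A minimal sketch of that easy way, switching over Type.GetTypeCode (this assumes "signed" means the signed built-in numeric types, including the floating-point ones):
static bool IsSignedType(Type t)
{
    switch (Type.GetTypeCode(t))
    {
        case TypeCode.SByte:
        case TypeCode.Int16:
        case TypeCode.Int32:
        case TypeCode.Int64:
        case TypeCode.Single:
        case TypeCode.Double:
        case TypeCode.Decimal:
            return true;
        default:
            return false;
    }
}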
I am trying to convert a decimal to an integer safely.
Something like
public static bool Decimal.TryConvertToInt32(decimal val, out int result)
this will return false if it cannot convert to an integer, and true w/ successful output if it can.
This is to avoid catching the OverflowException in decimal.ToInt32 method. What is the easiest way to do this?
Here:
public static bool TryConvertToInt32(decimal val, out int intval)
{
    if (val > int.MaxValue || val < int.MinValue)
    {
        intval = 0; // assignment required for out parameter
        return false;
    }
    intval = Decimal.ToInt32(val);
    return true;
}
I would write an extension method for class decimal like this:
public static class Extensions
{
    public static bool TryConvertToInt32(this decimal decimalValue, out int intValue)
    {
        intValue = 0;
        if ((decimalValue >= int.MinValue) && (decimalValue <= int.MaxValue))
        {
            intValue = Convert.ToInt32(decimalValue);
            return true;
        }
        return false;
    }
}
You can use it in that way:
if (decimalNumber.TryConvertToInt32(out intValue))
{
    Debug.WriteLine(intValue.ToString());
}
Compare the decimal against int.MinValue and int.MaxValue prior to the conversion.
What's wrong with using Int32.TryParse(string)?
Why are you trying to avoid catching the OverflowException? It is there for a reason and you should totally catch it where you call Decimal.ToInt32(). Exceptions are used widely throughout the framework and users should catch them. The Try methods can help you around them to make code tighter and cleaner, but where the framework doesn't have a suitable method (Decimal.TryConvertToInt32() in this case) catching OverflowException is the appropriate thing to do. It is actually more clear than making an extension class or writing your own separate static method (both of those involve writing your own code where the framework is already giving you this functionality).
First of all, please excuse any typo, English is not my native language.
Here's my question. I'm creating a class that represents approximate values as such:
public sealed class ApproximateValue
{
    public double MaxValue { get; private set; }
    public double MinValue { get; private set; }
    public double Uncertainty { get; private set; }
    public double Value { get; private set; }

    public ApproximateValue(double value, double uncertainty)
    {
        if (uncertainty < 0) { throw new ArgumentOutOfRangeException("uncertainty", "Value must be positive or equal to 0."); }
        this.Value = value;
        this.Uncertainty = uncertainty;
        this.MaxValue = this.Value + this.Uncertainty;
        this.MinValue = this.Value - this.Uncertainty;
    }
}
I want to use this class for uncertain measurements, like x = 8.31246 +/- 0.0045 for example, and perform calculations on these values.
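For instance, with the measurement above:
var x = new ApproximateValue(8.31246, 0.0045);
// x.MinValue == 8.30796, x.MaxValue == 8.31696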
I want to overload operators in this class. I don't know how to implement the >, >=, <= and < operators... The first thing I thought of is something like this:
public static bool? operator >(ApproximateValue a, ApproximateValue b)
{
    if (a == null || b == null) { return null; }
    if (a.MinValue > b.MaxValue) { return true; }
    else if (a.MaxValue < b.MinValue) { return false; }
    else { return null; }
}
However, in the last case, I'm not satisfied with this 'null' as the accurate result is not 'null'. It may be 'true' or it may be 'false'.
Is there any object in .NET 4 that would help implement this feature that I am not aware of, or am I going about this the correct way? I was also thinking about using an object instead of a boolean, one that would define under what circumstances a value is superior to another, rather than implementing comparison operators, but I feel it's a bit too complex for what I'm trying to achieve...
I'd probably do something like this. I'd implement IComparable<ApproximateValue> and then define <, >, <=, and >= according to the result of CompareTo():
public int CompareTo(ApproximateValue other)
{
    // if other is null, we are greater by default in .NET, so return 1.
    if (other == null)
    {
        return 1;
    }

    // this is > other
    if (MinValue > other.MaxValue)
    {
        return 1;
    }

    // this is < other
    if (MaxValue < other.MinValue)
    {
        return -1;
    }

    // "same"-ish
    return 0;
}
// ReferenceEquals is used for the null checks so the operators don't
// recursively invoke the user-defined operator == below.
public static bool operator <(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(left, null) ? !ReferenceEquals(right, null) : left.CompareTo(right) < 0;
}

public static bool operator >(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(right, null) ? !ReferenceEquals(left, null) : right.CompareTo(left) < 0;
}

public static bool operator <=(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(left, null) || left.CompareTo(right) <= 0;
}

public static bool operator >=(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(right, null) || right.CompareTo(left) <= 0;
}

public static bool operator ==(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(left, null) ? ReferenceEquals(right, null) : left.CompareTo(right) == 0;
}

public static bool operator !=(ApproximateValue left, ApproximateValue right)
{
    return ReferenceEquals(left, null) ? !ReferenceEquals(right, null) : left.CompareTo(right) != 0;
}
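A quick illustration of the resulting semantics, with hypothetical values:
var a = new ApproximateValue(8.31246, 0.0045); // [8.30796, 8.31696]
var b = new ApproximateValue(8.4, 0.01);       // [8.39, 8.41]
Console.WriteLine(a < b);  // True: the ranges don't overlap
Console.WriteLine(a == b); // False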
This is one of the rare cases where it may make more sense to define a value type (struct), which then eliminates the null case concern. You can also modify MinValue and MaxValue to be computed properties (just implement a get method that computes the result) rather than storing them upon construction.
On a side note, comparison of approximate values is itself an approximate operation, so you need to consider the use cases for your data type; are you only intending to use comparison to determine when the ranges are non-overlapping? It really depends on the meaning of your type. Is this intended to represent a data point from a normally distributed data set, where the uncertainty is some number of standard deviations for the sampling? If so, it might make more sense for a comparison operation to return a numeric probability (which couldn't be called through the comparison operator, of course.)
It looks to me like you also need to check whether a.MaxValue == b.MinValue; in your current implementation that would return null, which seems incorrect. It should return either true or false, based on how you want the spec to actually work. I'm not sure of any built-in .NET functionality for this, so I believe you are going about it the correct way.
return a.Value - a.Uncertainty > b.Value + b.Uncertainty
I wouldn't really mess with the semantics of >: I think bool? is a dangerous return type here. That said, given the uncertainty, you could return true if a is more likely to be > b.
It seems to me that you're trying to implement some form of ternary logic, because you want the result of applying the operators to be either True, False or Indeterminate. The problem with doing that is that you really cannot combine the built-in boolean values with your indeterminate value. So whilst you could do some limited form of comparison of two ApproximateValues, I think it's inappropriate to use bool as the result of these comparisons, because that implies the result can be freely combined with other expressions that produce bool values, and the possibility of an indeterminate value undermines that. For example, it makes no sense to do the following when the result of the operation on the left of the OR is indeterminate.
ApproximateValue approx1 = ...;
ApproximateValue approx2 = ...;
bool someBool = ...;

bool result = approx1 > approx2 || someBool;
So, in my opinion, I don't think that it's a good idea to implement the comparisons as operators at all if you want to retain the indeterminacy. The solutions offered here eliminate the indeterminacy, which is fine, but not what was originally specified.