public static double[] ParseDoubleArray(MWArray array)
{
    // MATLAB returns numeric data as a 2-D array; flatten it into a 1-D double[].
    var vector2d = (array as MWNumericArray).ToArray() as double[,];
    var vector1d = new double[vector2d.Length];
    System.Buffer.BlockCopy(vector2d, 0, vector1d, 0, vector2d.Length * sizeof(double));
    return vector1d;
}
This is my function for getting a double[] from an MWArray. However, when I do this:
prepImage.RawData = Array.ConvertAll(prepRawData, Convert.ToUInt16);
I sometimes get an exception because MATLAB returns doubles that are too big for the conversion.
Has anyone come across this issue? I can clamp the numbers, but is there another solution?
UInt16, as its name implies, holds unsigned 16-bit integers (values from 0 to 65535). The double structure, on the other hand, ranges from -1.79769313486232e308 to 1.79769313486232e308.
The issue here is that your MATLAB code returns either a negative value or a positive value greater than 65535. MATLAB will also assign NaN to any uninitialized value, which is likewise invalid for UInt16.
To fix your problem, either make sure that your MATLAB code really only returns values in the 0 to 65535 range, or change the data structure on the C# side to something other than UInt16.
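One way to handle it without changing the MATLAB side is to clamp on conversion instead of letting Convert.ToUInt16 throw. A minimal sketch, assuming prepRawData is the double[] produced by ParseDoubleArray and prepImage.RawData is a ushort[]:
prepImage.RawData = Array.ConvertAll<double, ushort>(prepRawData, d =>
{
    if (double.IsNaN(d)) return (ushort)0;            // MATLAB NaN -> 0 (pick whatever sentinel suits you)
    if (d <= ushort.MinValue) return ushort.MinValue; // negative values clamp to 0
    if (d >= ushort.MaxValue) return ushort.MaxValue; // oversized values clamp to 65535
    return (ushort)Math.Round(d);                     // in-range values round to nearest, like Convert.ToUInt16
});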
I'm trying to understand the difference between some data types and the conversions between them.
public static void ExplicitTypeConversion2()
{
    long longValue = long.MaxValue;
    float floatValue = float.MaxValue;
    int integerValue = (int)longValue;
    int integerValue2 = (int)floatValue;
    Console.WriteLine(integerValue);
    Console.WriteLine(integerValue2);
}
When I run that code block, it outputs:
-1
-2147483648
I thought that if the value you want to assign to an int is bigger than the int can hold, you get the minimum value of int (-2147483648).
As far as I know, long.MaxValue is much bigger than the maximum value of an int, but if I cast long.MaxValue to int, it returns -1.
What is the difference between these two casts? I thought the first one was also supposed to return -2147483648 instead of -1.
"if the value you want to assign to an integer, bigger than that integer can keep, returns minimum value of integer"
That's not a rule. The relevant rules are:
For integer types in an unchecked context (i.e. the default):
If the source type is larger than the destination type, then the source value is truncated by
discarding its “extra” most significant bits. The result is then treated as a value of the destination
type.
For float->int in an unchecked context:
The value is rounded towards zero to the nearest integral value. If this integral value is within
the range of the destination type, then this value is the result of the conversion.
Otherwise, the result of the conversion is an unspecified value of the destination type.
Chopping the 32 leading bits off of 0x7FFFFFFFFFFFFFFF gives 0xFFFFFFFF, aka -1.
You were never promised you would get int.MinValue for that out-of-range float->int cast, but you get it anyway because it's easy to implement: x64's conversion instruction cvtss2si produces 0x80000000 for out-of-range results, and similarly fistp (the old x87 conversion instruction used by the 32-bit JIT) stores "the integer indefinite value", which is also 0x80000000.
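A small sketch demonstrating both rules; as noted above, the out-of-range float result is just what current x86/x64 runtimes happen to produce, not something the spec guarantees:
long longValue = long.MaxValue;               // 0x7FFFFFFFFFFFFFFF
int fromLong = unchecked((int)longValue);     // low 32 bits kept: 0xFFFFFFFF == -1

float floatValue = float.MaxValue;            // ~3.4E+38, far outside int's range
int fromFloat = unchecked((int)floatValue);   // unspecified by the spec; 0x80000000 on x86/x64

Console.WriteLine(fromLong);   // -1
Console.WriteLine(fromFloat);  // -2147483648 (int.MinValue) on x86/x64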
The binary value of long.MaxValue is 0111...111111 (a zero followed by 63 ones). When you cast it to int, you keep only the lowest 32 bits, 111...11111. That is -1 in decimal, because int is signed and two's complement applies.
Let me explain:
long longValue=long.MaxValue;
float floatValue = float.MaxValue;
int integerValue = (int) longValue;
int integerValue2 = (int)floatValue;
The maximum value of long is 9,223,372,036,854,775,807, or 0x7FFFFFFFFFFFFFFF. Casting it to int keeps only the low 32 bits, 0xFFFFFFFF, which interpreted as a two's-complement signed value is -1 in decimal.
The maximum value of float, on the other hand, is 3.40282347E+38, which is far outside the range of int, so the rounding rule above does not define a result; on x86/x64 the conversion produces the bit pattern 0x80000000, which is int.MinValue, -2147483648 in decimal.
All of this applies to signed integers; the results are different for unsigned ones.
Reference:
https://msdn.microsoft.com/en-us/library/system.int64.maxvalue(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.single.maxvalue(v=vs.110).aspx
I have been looking for a way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or they have misleading titles (they are really about SQL Server or some other database, not C#), or they have no answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale() and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    // arrange
    var number = 12345.67890M;

    // act
    var scale = number.Scale();
    var precision = number.Precision();

    // assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said to convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is by removing the fraction point (i.e. use the Decimal Constructor to reconstruct the decimal number without the scale mentioned above) and then use the logarithm:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        // We will use false for the sign (false = positive), because we don't care about it.
        // We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
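For example, with the GetScale/GetPrecision extensions above in scope:
decimal number = 12345.67890M;
Console.WriteLine(number.GetScale());     // 5  -- the trailing zero counts, because decimal stores it
Console.WriteLine(number.GetPrecision()); // 10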
First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are 2 fundamental ways to determine each digit (and thus, their number):
get+interpret the meaningful parts
calculate mathematically
The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatInfo) is actually a reliable way since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it is the minimal number, so set it to 0 (the docs are unclear on this), and trailing zeros are printed all right if there are any.
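As a minimal sketch of the string route; this variant sidesteps the format settings by using the invariant culture and the default representation, which keeps a decimal's stored trailing zeros:
using System.Globalization;

decimal value = 12345.67890M;
string s = value.ToString(CultureInfo.InvariantCulture);    // "12345.67890" -- trailing zero preserved
int dot = s.IndexOf('.');
int scale = dot < 0 ? 0 : s.Length - dot - 1;               // 5
int precision = s.Replace("-", "").Replace(".", "").Length; // 10 (leading zeros would also be counted here)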
The semantics of the GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;)). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
flags:
bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
(thus (flags>>16)&0xFF is the raw value of this field)
bit 31 - sign (doesn't concern us)
as you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
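Here is one way to turn that description into code (a sketch using BigInteger to hold the 96-bit integer; for 12345.67890M the unscaled integer is 1234567890 and the exponent field is 5):
using System.Numerics;

decimal value = 12345.67890M;
int[] bits = decimal.GetBits(value);

// Exponent field (bits 16 to 23 of the flags word) = number of fractional decimal digits.
int scale = (bits[3] >> 16) & 0xFF;                  // 5

// Reassemble |high|mid|low| into the 96-bit unscaled integer.
BigInteger unscaled = ((BigInteger)(uint)bits[2] << 64)
                    | ((BigInteger)(uint)bits[1] << 32)
                    | (uint)bits[0];                 // 1234567890

int totalDigits = unscaled.ToString().Length;        // 10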
Racil's answer gives you the decimal's internal scale value, which is correct, although if the internal representation ever changes it'll get interesting.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        string decStr = value.ToString().Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
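For example:
var info = 12345.67890M.GetInfo();
Console.WriteLine(info);   // Scale=5, Length=10
Note that GetInfo parses value.ToString() under the current culture, so in a culture whose decimal separator is ',' the IndexOf(".") test would never match; passing CultureInfo.InvariantCulture to ToString() avoids that.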
So I've got a nice convoluted piece of C# code that deals with substitution into mathematical equations. It's working almost perfectly. However, when given the equation (x - y + 1) / z and values x=2 y=0 z=5, it fails miserably and inexplicably.
The problem is not that the values are passed to the function wrong. That's fine. The problem is that no matter what type I use, C# seems to think that 3/5=0.
Here's the piece of code in question:
public static void TrapRule(string[] args)
{
    // ...
    string equation = args[0];
    int ordinates = Convert.ToInt32(args[1]);
    int startX = Convert.ToInt32(args[2]);
    int endX = Convert.ToInt32(args[3]);

    double difference = (endX - startX + 1) / ordinates;
    // ...
}
It gets passed args as:
args[0] = Pow(6,[x])
args[1] = 5
args[2] = 0
args[3] = 2
(Using NCalc, by the way, so the Pow() function gets evaluated by that - which works fine.)
The result? difference = 0.
The same thing happens when using float, and when trying simple math:
Console.Write((3 / 5));
produces the same result.
What's going on?
The / operator looks at its operands, and when it discovers that they are two integers it performs integer division and returns an integer. If you want to get a double value back, you need to cast one of the two operands to a double:
double difference = (endX - startX + 1) / (double)ordinates;
You can find a more formal explanation in the C# reference
They're called integers. Integers don't store any fractional part of a number. Moreover, when you divide one integer by another integer... the result is still an integer.
So when you take 3 / 5 in integer land, you can't store the .6 result. All you have left is 0. The fractional part is always truncated, never rounded. Most programming languages work this way.
For something like this, I'd recommend working in the decimal type, instead.
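A short illustration of the difference, using the numbers from the question:
Console.WriteLine(3 / 5);                    // 0   -- int / int: the fractional part is truncated
Console.WriteLine(3 / 5.0);                  // 0.6 -- one double operand gives floating-point division
Console.WriteLine(3m / 5m);                  // 0.6 -- decimal division also keeps the fraction
Console.WriteLine((2 - 0 + 1) / (double)5);  // 0.6 -- the TrapRule expression with the cast applied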
Can anyone explain in a simple way what the code below does?
public unsafe static float sample()
{
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
    return *(float*)(&result); // don't know what for... please explain
}
Note: the above code uses an unsafe method.
I'm having a hard time with it because I don't understand the difference between its return value and the return value below:
return (float)(result);
Is it necessary to use an unsafe method if you're returning *(float*)(&result)?
On .NET a float is represented as an IEEE binary32 single-precision floating-point number stored in 32 bits. Apparently the code constructs this number by assembling the bits into an int and then casts it to a float using unsafe code. The cast is what in C++ terms is called a reinterpret_cast, where no conversion is done when the cast is performed - the bits are just reinterpreted as a new type.
The number assembled is 4019999A in hexadecimal or 01000000 00011001 10011001 10011010 in binary:
The sign bit is 0 (it is a positive number).
The exponent bits are 10000000 (or 128) resulting in the exponent 128 - 127 = 1 (the fraction is multiplied by 2^1 = 2).
The fraction bits are 00110011001100110011010 which, if nothing else, almost have a recognizable pattern of zeros and ones.
The float returned has the exact same bits as 2.4 converted to floating point and the entire function can simply be replaced by the literal 2.4f.
The final zero that sort of "breaks the bit pattern" of the fraction is simply the result of rounding: 2.4 has an infinitely repeating binary fraction, and rounding it to the nearest representable 23-bit fraction bumps the last bits up to ...010.
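A quick way to confirm that bit pattern without unsafe code (assuming a little-endian machine, which is what BitConverter reflects on typical x86/x64 hardware):
byte[] bytes = BitConverter.GetBytes(2.4f);
Console.WriteLine(string.Join(", ", bytes));                      // 154, 153, 25, 64 -- the bytes assembled in sample()
Console.WriteLine(BitConverter.ToInt32(bytes, 0).ToString("X8")); // 4019999A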
So what is the difference between a regular cast and this weird "unsafe cast"?
Assume the following code:
int result = 0x4019999A; // 1075419546
float normalCast = (float) result;
float unsafeCast = *(float*) &result; // Only possible in an unsafe context
The first cast takes the integer 1075419546 and converts it to its floating point representation, e.g. 1075419546f. This involves computing the sign, exponent and fraction bits required to represent the original integer as a floating point number. This is a non-trivial computation that has to be done.
The second cast is more sinister (and can only be performed in an unsafe context). The &result takes the address of result, returning a pointer to the location where the integer 1075419546 is stored. The pointer dereferencing operator * can then be used to retrieve the value pointed to by the pointer. Using *&result would retrieve the integer stored at that location, but by first casting the pointer to a float* (a pointer to a float), a float is instead read from the memory location, resulting in the float 2.4f being assigned to unsafeCast. So the narrative of *(float*)&result is: give me a pointer to result, assume the pointer is a pointer to a float, and retrieve the value pointed to by that pointer.
As opposed to the first cast, the second cast doesn't require any computation. It just shoves the 32 bits stored in result into unsafeCast (which conveniently is also 32 bits wide).
In general, performing a cast like that can fail in many ways, but by using unsafe you are telling the compiler that you know what you are doing.
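Printing both values from the snippet above makes the contrast obvious (the unsafe line still needs an unsafe context):
Console.WriteLine(normalCast);   // the float approximation of 1075419546, roughly 1.07542E+09 (exact text varies by runtime)
Console.WriteLine(unsafeCast);   // 2.4 -- the same 32 bits reinterpreted as a float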
If I'm interpreting what the method is doing correctly, this is a safe equivalent:
public static float sample()
{
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
    byte[] data = BitConverter.GetBytes(result);
    return BitConverter.ToSingle(data, 0);
}
As has been said already, it is re-interpreting the int value as a float.
This looks like an optimization attempt. Instead of doing floating-point calculations you are doing integer calculations on the integer representation of a floating-point number.
Remember, floats are stored as binary values just like ints.
After the calculation is done you are using pointers and casting to convert the integer into the float value.
This is not the same as casting the value to a float. That will turn the int value 1 into the float 1.0. In this case you turn the int value into the floating point number described by the binary value stored in the int.
It's quite hard to explain properly. I will look for an example. :-)
Edit:
Look here: http://en.wikipedia.org/wiki/Fast_inverse_square_root
Your code is basically doing the same as described in this article.
Re : What is it doing?
It is taking the value of the bytes stored in the int and instead interpreting those bytes as a float (without conversion).
Fortunately, floats and ints have the same data size of 4 bytes.
Because Sarge Borsch asked, here's the 'Union' equivalent:
[StructLayout(LayoutKind.Explicit)]
struct ByteFloatUnion
{
    [FieldOffset(0)] internal byte byte0;
    [FieldOffset(1)] internal byte byte1;
    [FieldOffset(2)] internal byte byte2;
    [FieldOffset(3)] internal byte byte3;
    [FieldOffset(0)] internal float single;
}

public static float sample()
{
    ByteFloatUnion result;
    result.single = 0f;
    result.byte0 = 154;
    result.byte1 = 153;
    result.byte2 = 25;
    result.byte3 = 64;
    return result.single;
}
As others have already described, it's treating the bytes of an int as if they were a float.
You might get the same result without using unsafe code like this:
public static float sample()
{
    int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
    return BitConverter.ToSingle(BitConverter.GetBytes(result), 0);
}
But then it won't be very fast any more and you might as well use floats/doubles and the Math functions.
I'm writing a datalog parser for a robot controller, and what's coming in from the data log is a number in the range 0 - 65535 (which is a 16-bit unsigned integer, if I'm not mistaken). I'm trying to convert that to a signed 16-bit integer to display to the user (since that was the actual datatype before the logger changed it).
Can someone give me a hand?
Example:
What the values should be
(0, -1, -2, -3, -4)
What the values are
(0, 65535, 65534, 65533, 65532)
Have you tried explicit casting?
UInt16 x = 65535;
var y = (Int16)x; // y = -1
Using unchecked here avoids a crash if [X] Check for Arithmetic Overflow is on:
UInt16 x = 65535;
Int16 y = unchecked((Int16)x);
Or like this:
Just check whether the UInt16 value is greater than 32767; if it is, subtract 65536 to get the Int16 value, otherwise use it as-is.
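A minimal sketch applying either approach to a batch of logged values (the array contents are the sample values from the question):
ushort[] logged = { 0, 65535, 65534, 65533, 65532 };

// Reinterpret each 16-bit pattern as signed; unchecked avoids an OverflowException
// when "Check for Arithmetic Overflow" is enabled for the project.
short[] signed = Array.ConvertAll<ushort, short>(logged, v => unchecked((short)v));
// signed is { 0, -1, -2, -3, -4 }

// The arithmetic version from the last answer gives the same result and never overflows:
short[] signed2 = Array.ConvertAll<ushort, short>(logged, v => (short)(v > 32767 ? v - 65536 : v));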