The following code works fine in C#.
Int32 a, b;
Int16 c;
a = 0x7FFFFFFF;
b = a & 0xFFFF;
c = (Int16)b;
But this code crashes with an OverflowException in VB.NET.
Dim a, b As Int32
Dim c As Int16
a = &H7FFFFFFF
b = a And &HFFFF
c = CType(b, Int16)
Both code snippets seem the same to me. What is the difference and how can I get the C# code converted to VB.NET?
From MSDN:
For the arithmetic, casting, or conversion operation to throw an OverflowException, the operation must occur in a checked context. By default, arithmetic operations and overflows in Visual Basic are checked; in C#, they are not. If the operation occurs in an unchecked context, the result is truncated by discarding any high-order bits that do not fit into the destination type.
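To see the same narrowing in both contexts, here is a small C# sketch of the behavior the documentation describes (not from the original posts):
int a = 0x7FFFFFFF;
int b = a & 0xFFFF;              // 65535
short c1 = unchecked((short)b);  // -1: high-order bits discarded (C#'s default behavior)
short c2 = checked((short)b);    // throws OverflowException, like VB.NET's default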
EDIT:
If you're going to port code from C# to VB.NET, you may be interested in the differences between them. Also compare the compiler settings and, where needed, explicitly set them to match the C# defaults.
First up: My understanding of this is that CType(b, Int16) isn't the same as (Int16)b. One is a conversion of type (CType) and the other is a cast. (Int16)b equates to DirectCast(b, Int16) rather than CType(b, Int16).
The difference between the two (as noted on MSDN) is that CType succeeds as long as there is a valid conversion; DirectCast, however, requires the run-time type of the object to be the same, so all you're doing is telling the compiler at design time that the object already is of that type rather than asking it to convert to that type.
See: http://msdn.microsoft.com/en-us/library/7k6y2h6x(VS.71).aspx
The underlying problem, though, is that you're trying to convert a 32-bit integer into a 16-bit integer, which is a lossy (narrowing) conversion. Converting from 16 bit to 32 bit is allowed because it's lossless; converting from 32 bit to 16 bit can discard significant bits. For why it works in C# you can see @Roman's answer - it relates to the fact that C# doesn't check for overflow by default.
The result of &H7FFFFFFF And &HFFFF is 65535 (UInt16.MaxValue). UInt16 runs from 0 to 65535, but you're trying to cram that value into an Int16, which runs from -32768 to 32767, and as you can see that isn't going to work. The fact that this particular value would fit into a UInt16 is coincidental; adding two 32-bit integers and trying to cram the result into a 16-bit integer (Short) would frequently overflow, so I would say this is an inherently dangerous operation.
Have you tried using DirectCast(b, Int16)? CType is not the same as a C# cast.
Here's an article comparing the performance of DirectCast and CType, as well as going into more detail to when either should be used.
http://www.cnblogs.com/liujq007/archive/2010/12/04/1896059.html
Summary:
To an unsigned type: just use the And operator, or the 2nd method.
Dim a As Byte = CByte(300 And &HFF)
To a signed type: shift left by n bits, then shift right by n bits, which sign-extends the value. Here n = (sizeof(type1) - sizeof(type2)) * 8; in VB, use Len(New type) instead of sizeof(type).
Dim a As Short = CShort(34042 << 16 >> 16)
You can find the details at the link above.
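If you need the same sign-extension trick on the C# side, the equivalent is (a sketch, using n = 16 for Int32 to Int16):
short a = (short)((34042 << 16) >> 16);  // -31494: the low 16 bits of 34042, sign-extended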
?CType(b, Int16)
Constant expression not representable in type 'Short'.
?b
65535
?directcast(b, Int16)
Value of type 'Integer' cannot be converted to 'Short'.
?int16.TryParse(b.ToString(), c)
False
You can truncate this kind of overflow with a structure.
Imports System.Runtime.InteropServices
...
<StructLayout(LayoutKind.Explicit)> _
Public Structure int3216
    ' On the little-endian platforms .NET runs on, offset 0 holds the low-order word.
    <FieldOffset(0)> Public i32 As Int32
    <FieldOffset(0)> Public i16low As Int16
    <FieldOffset(2)> Public i16high As Int16
End Structure
...
Dim _i3216 As int3216
_i3216.i32 = a And &HFFFF
c = _i3216.i16low   ' -1: the low 16 bits reinterpreted as a signed Int16
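For comparison, a rough C# equivalent of the same trick (type and field names are mine; like the VB version, it assumes a little-endian platform, which is what .NET uses in practice):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct Int3216
{
    [FieldOffset(0)] public int I32;     // full 32-bit value
    [FieldOffset(0)] public short Low;   // low-order 16 bits (little-endian)
    [FieldOffset(2)] public short High;  // high-order 16 bits (little-endian)
}

// Usage:
var u = new Int3216 { I32 = 0x7FFFFFFF & 0xFFFF };  // 65535
short c = u.Low;                                     // -1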
I came across this question when looking for a way to convert to a Short and get the overflowed result without the overflow error. I found a solution here:
http://bytes.com/topic/visual-basic-net/answers/732622-problems-typecasting-vb-net
about halfway down the page is this:
The old, VB "Proper" trick of "side-stepping" out to Hexadecimal and
back again still works!
Dim unsigned as UInt16 = 40000
Dim signed as Int16 = CShort(Val("&H" & Hex(unsigned)))
it seems to work pretty slick!
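For what it's worth, the C# counterpart of that round trip is simply an unchecked cast (a sketch, not from the linked thread):
ushort unsigned = 40000;
short signed = unchecked((short)unsigned);  // -25536: the same 16 bits, reinterpreted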
Related
There are already solutions to this problem for small numbers:
Here: Difference between 2 numbers
Here: C# function to find the delta of two numbers
Here: How can I find the difference between 2 values in C#?
I'll summarise the answer to them all:
Math.Abs(a - b)
The problem is that when the numbers are large this gives the wrong answer (because the subtraction overflows). Worse still, if (a - b) equals Int32.MinValue then Math.Abs throws an exception (because -Int32.MinValue is Int32.MaxValue + 1, which doesn't fit in an Int32):
System.OverflowException occurred
HResult=0x80131516
Message=Negating the minimum value of a twos complement number is
invalid.
Source=mscorlib
StackTrace:
   at System.Math.AbsHelper(Int32 value)
   at System.Math.Abs(Int32 value)
Its specific nature leads to difficult-to-reproduce bugs.
Maybe I'm missing some well known library function, but is there any way of determining the difference safely?
As suggested by others, use BigInteger, defined in System.Numerics (you'll need to reference the System.Numerics assembly and import the namespace).
Then you can just do:
BigInteger a = new BigInteger();
BigInteger b = new BigInteger();
// Assign values to a and b somewhere in here...
// Then just use included BigInteger.Abs method
BigInteger result = BigInteger.Abs(a - b);
Jeremy Thompson's answer is still valid, but note that the BigInteger type includes an absolute value method (BigInteger.Abs), so there shouldn't be any need for special logic. Also, Math.Abs has no overload that accepts a BigInteger, so it will give you grief if you try to pass one in.
Keep in mind there are caveats to using BigInteger. For a ludicrously large number the runtime has to allocate memory for it, so you may run into out-of-memory exceptions. On the flip side, BigInteger is convenient because the memory allotted to it grows dynamically as the number gets larger.
Check out the microsoft reference here for more info: https://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx
The question is, how do you want to hold the difference between two large numbers? If you're calculating the difference between two signed long (64-bit) integers, for example, and the difference will not fit into a signed long integer, how do you intend to store it?
long a = (1L << 62) + 1000;
long b = -(1L << 62);
long dif = a - b; // Overflow, bit truncation
The difference between a and b is wider than 64 bits, so when it's stored into a long integer, its high-order bits are truncated, and you get a strange value for dif.
In other words, you cannot store all possible differences between signed integer values of a given width into a signed integer of the same width. (You can only store half of all of the possible values; the other half require an extra bit.)
Your options are to either use a wider type to hold the difference (which won't help you if you're already using the widest integer type, long), or to use a different arithmetic type. If you need more than 64 bits of signed precision, you'll probably need to use BigInteger.
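For the specific case of Int32 inputs, another option is to widen to Int64 before subtracting, which avoids the overflow entirely; a minimal sketch (the helper name is mine):
static long Difference(int a, int b)
{
    // Widening first means the subtraction can never overflow, and the
    // result can never be long.MinValue, so Math.Abs is always safe here.
    return Math.Abs((long)a - b);
}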
BigInteger was introduced in .NET 4.0.
There are some open-source implementations available for earlier versions of the .NET Framework; however, you'd be wise to go with the standard type.
If Math.Abs still gives you grief, you can implement the equivalent yourself: if the result is negative (a - b < 0), simply negate it to drop the sign.
Also, have you tried using Double? It holds much larger values, although at the cost of precision for large integers.
Here's an alternative that might be interesting to you, but it stays very much within the confines of a particular int size. This example uses Int32 and bitwise operators to compute the difference and then its absolute value. It tolerates your scenario where a - b equals the minimum int value: it simply returns that minimum value (there isn't much else you can do without casting to a larger data type). I don't think this is as good an answer as using BigInteger, but it is fun to play with if nothing else:
static int diff(int a, int b)
{
    // Bits where a and b differ
    int xorResult = (a ^ b);
    // Subtracting only the differing bits gives the same value as a - b
    int diff = (a & xorResult) - (b & xorResult);
    // Branchless absolute value: (x + (x >> 31)) ^ (x >> 31)
    return (diff + (diff >> 31)) ^ (diff >> 31);
}
Here are some cases I ran it through to play with the behavior:
Console.WriteLine(diff(13, 14)); // 1
Console.WriteLine(diff(11, 9)); // 2
Console.WriteLine(diff(5002000, 2346728)); // 2655272
Console.WriteLine(diff(int.MinValue, 0)); // Should be 2147483648, but int data type can't go that large. Actual result will be -2147483648.
First, this question has related posts:
Why Int32 maximum value is 0x7FFFFFFF?
However, I want to know why the hexadecimal value is always treated as an unsigned quantity.
See the following snippet:
byte a = 0xFF; //No error (byte is an unsigned type).
short b = 0xFFFF; //Error! (even though both types are 16 bits).
int c = 0xFFFFFFFF; //Error! (even though both types are 32 bits).
long d = 0xFFFFFFFFFFFFFFFF; //Error! (even though both types are 64 bits).
The reason for the error is because the hexadecimal values are always treated as unsigned values, regardless of what data-type they are stored as. Hence, the value is 'too large' for the data-type described.
For instance, I expected:
int c = 0xFFFFFFFF;
To store the value:
-1
And not the value:
4294967295
Simply because int is a signed type.
So, why is it that the hexadecimal values are always treated as unsigned even if the sign type can be inferred by the data-type used to store them?
How can I store these bits into these data-types without resorting to the use of ushort, uint, and ulong?
In particular, how can I achieve this for the long data-type considering I cannot use a larger signed data-type?
What's going on is that a literal is intrinsically typed. 0.1 is a double, which is why you can't say float f = 0.1. You can cast a double to a float (float f = (float)0.1), but you may lose precision. Similarly, the literal 0xFFFFFFFF is intrinsically a uint. You can cast it to an int, but that's after it has been interpreted by the compiler as a uint. The compiler doesn't use the variable to which you are assigning it to figure out its type; its type is defined by what sort of literal it is.
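If the goal is just to get those bit patterns into the signed types without declaring any unsigned variables, an unchecked cast of the literal does it (a small sketch; the compiler accepts the out-of-range constants inside unchecked):
short b = unchecked((short)0xFFFF);             // -1
int c = unchecked((int)0xFFFFFFFF);             // -1
long d = unchecked((long)0xFFFFFFFFFFFFFFFF);   // -1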
They are treated as unsigned numbers, because that is what the language specification says to do.
I have this code in C++:
#define dppHRESULT(Code) \
MAKE_HRESULT(1, 138, (Code))
long x = dppHRESULT(101);
result being x = -2138439579.
MAKE_HRESULT is a Windows macro, defined as
#define MAKE_HRESULT(sev,fac,code) \
((HRESULT) (((unsigned long)(sev)<<31) | ((unsigned long)(fac)<<16) | ((unsigned long)(code))) )
I need to replicate this in C#. So I wrote this code:
public static long MakeHResult(uint facility, uint errorNo)
{
// Make HR
uint result = (uint)1 << 31;
result |= (uint)facility << 16;
result |= (uint)errorNo;
return (long) result;
}
And call like:
// Should return type be long actually??
long test = DppUtilities.MakeHResult(138, 101);
But I get a different result: test = 2156527717.
Why? Can someone please help me replicate that C++ function also in C#? Such that I get similar output on similar inputs?
Alternative implementation.
If I use this implementation
public static long MakeHResult(ulong facility, ulong errorNo)
{
// Make HR
long result = (long)1 << 31;
result |= (long)facility << 16;
result |= (long)errorNo;
return (long) result;
}
this works on input 101.
But if I input -1, then C++ returns -1 as result while C# returns 4294967295. Why?
I would really appreciate some help as I am stuck with it.
I've rewritten the function to be the C# equivalent.
static int MakeHResult(uint facility, uint errorNo)
{
// Make HR
uint result = 1U << 31;
result |= facility << 16;
result |= errorNo;
return unchecked((int)result);
}
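A quick check of that translation against the C++ numbers from the question (usage sketch):
int test = MakeHResult(138, 101);
Console.WriteLine(test);  // -2138439579, matching the C++ macro's output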
C# is more strict about signed/unsigned conversions, whereas the original C code didn't pay any mind to it. Mixing signed and unsigned types usually leads to headaches.
As Ben Voigt mentions in his answer, there is a difference in type naming between the two languages. long in C is actually int in C#. They both refer to 32-bit types.
The U in 1U means "this is an unsigned integer." (Brief refresher: signed types can store negative numbers, unsigned types cannot.) All the arithmetic in this function is done unsigned, and the final value is simply cast to a signed value at the end. This is the closest approximation to the original C macro posted.
unchecked is required here because otherwise C# will not allow you to convert the value if it's out of range of the target type, even if the bits are identical. Switching between signed and unsigned will generally require this if you don't mind that the values differ when you deal with negative numbers.
In Windows C++ compilers, long is 32-bits. In C#, long is 64-bits. Your C# conversion of this code should not contain the type keyword long at all.
SaxxonPike has provided the correct translation, but his explanation(s) are missing this vital information.
Your intermediate result is a 32-bit unsigned integer. In the C++ version, the cast is to a signed 32-bit value, resulting in the high bit being reinterpreted as a sign bit. SaxxonPike's code does this as well. The result is negative if the intermediate value had its most significant bit set.
In the original code in the question, the cast is to a 64-bit signed version, which preserves the old high bit as a normal binary digit, and adds a new sign bit (always zero). Thus the result is always positive. Even though the low 32-bits exactly match the 32-bit result in C++, in the C# version returning long, what would be the sign bit in C++ isn't treated as a sign bit.
In the new attempt in the question, the same thing happens (sign bit in the 64-bit number is always zero), but it happens in intermediate calculations instead of at the end.
You're calculating it inside an unsigned type (uint). So shifts are going to behave accordingly. Try using int instead and see what happens.
The clue here is that 2156527717 as an unsigned int is the same as -2138439579 as a signed int. They are literally the same bits.
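You can demonstrate that directly with a small sketch:
uint asUnsigned = 2156527717u;
int asSigned = unchecked((int)asUnsigned);
Console.WriteLine(asSigned);  // -2138439579: the same 32 bits, reinterpreted as signed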
I have a 32 bit int and I want to address only the lower half of this variable. I know I can convert to bit array and to int16, but is there any more straight forward way to do that?
If you want only the lower half, you can just cast it: (Int16)my32BitInt
In general, if you're extending/truncating bit patterns like this, then you do need to be careful about signed types - unsigned types may cause fewer surprises.
As mentioned in the comments - if you've enclosed your code in a 'checked' context, or changed your compiler options so that the default is 'checked', then you can't truncate a number like this without an exception being thrown if there are any non-zero bits being discarded - in that situation you'd need to do:
(UInt16)(my32BitInt & 0xffff)
(The option of using signed types is gone in this case, because you'd have to use & 0x7fff which then preserves only 15 bits)
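A small sketch of both options (variable names are mine):
int my32BitInt = 0x12345678;

// unchecked forces truncation to the low 16 bits even if the project is
// compiled with overflow checking on; a bare (short) cast would throw here
// in a checked context because 0x12345678 doesn't fit in a short.
short lowSigned = unchecked((short)my32BitInt);      // 0x5678

// Masking first keeps the value in 0..65535, so the cast to UInt16 never
// discards non-zero bits and is safe in either context.
ushort lowUnsigned = (ushort)(my32BitInt & 0xFFFF);  // 0x5678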
Just use this function:
Convert.ToInt16()
or just cast:
(Int16)valueasint
(Note that Convert.ToInt16 throws an OverflowException if the value doesn't fit in an Int16.)
You can use an explicit conversion to Int16, like:
(Int16)2;
but be careful when you do that, because Int16 can't hold all possible Int32 values.
For example, this won't work:
(Int16)2147483647;
because Int16 can hold 32767 as its maximum value. You can use the unchecked (C# Reference) keyword in such cases.
If you force an unchecked operation, a cast should work:
int r = 0xF000001;
short trimmed = unchecked((short) r);
This will truncate the value of r to fit in a short.
If the value of r should always fit in a short, you can just do a normal cast and let an exception be thrown.
If you need a 16-bit value and you happen to know something specific, such as that the number will never be less than zero, you could use a UInt16. That conversion looks like:
int x = 0;
UInt16 value = (UInt16)x;
This gives you the full non-negative range 0 to 65,535.
Well, first, make sure you actually want to have the value signed. uint and ushort are there for a reason. Then:
ushort ret = (ushort)(val & ((1 << 16) - 1));
What is the difference between int, System.Int16, System.Int32 and System.Int64 other than their sizes?
Each integer type has a different range of values it can store:
Type Capacity
Int16 -- (-32,768 to +32,767)
Int32 -- (-2,147,483,648 to +2,147,483,647)
Int64 -- (-9,223,372,036,854,775,808 to +9,223,372,036,854,775,807)
As stated by James Sutherland in his answer:
int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
'an integer', Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it's safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.
The resulting code will be identical: the difference is purely one of
readability or code appearance.
The only real difference here is the size. All of the int types here are signed integer values which have varying sizes
Int16: 2 bytes
Int32 and int: 4 bytes
Int64 : 8 bytes
There is one small difference between Int64 and the rest. On a 32 bit platform assignments to an Int64 storage location are not guaranteed to be atomic. It is guaranteed for all of the other types.
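If that atomicity caveat matters for your code, the usual remedy is Interlocked; a hedged sketch (class and member names are mine):
using System.Threading;

class Counter
{
    private long _value;

    // On a 32-bit runtime, a plain read or write of a long can observe a
    // torn value; Interlocked guarantees atomic 64-bit access.
    public long Read()
    {
        return Interlocked.Read(ref _value);
    }

    public void Add(long amount)
    {
        Interlocked.Add(ref _value, amount);
    }
}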
int
It is a primitive data type defined in C#.
It maps to the FCL type Int32.
It is a value type and represents the System.Int32 struct.
It is signed and takes 32 bits.
It has a minimum value of -2147483648 and a maximum value of +2147483647.
Int16
It is an FCL type.
In C#, short maps to Int16.
It is a value type and represents the System.Int16 struct.
It is signed and takes 16 bits.
It has a minimum value of -32768 and a maximum value of +32767.
Int32
It is an FCL type.
In C#, int maps to Int32.
It is a value type and represents the System.Int32 struct.
It is signed and takes 32 bits.
It has a minimum value of -2147483648 and a maximum value of +2147483647.
Int64
It is an FCL type.
In C#, long maps to Int64.
It is a value type and represents the System.Int64 struct.
It is signed and takes 64 bits.
It has a minimum value of -9,223,372,036,854,775,808 and a maximum value of +9,223,372,036,854,775,807.
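A quick way to confirm those ranges from code (a sketch):
Console.WriteLine("Int16: {0} to {1}", short.MinValue, short.MaxValue);
Console.WriteLine("Int32: {0} to {1}", int.MinValue, int.MaxValue);
Console.WriteLine("Int64: {0} to {1}", long.MinValue, long.MaxValue);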
According to the book 'CLR via C#' by Jeffrey Richter (one of the contributors to .NET Framework development):
int is a primitive type allowed by the C# compiler, whereas Int32 is the Framework Class Library type (available across languages that abide by CLS). In fact, int translates to Int32 during compilation.
Also,
In C#, long maps to System.Int64, but in a different programming
language, long could map to Int16 or Int32. In fact, C++/CLI does
treat long as Int32.
In fact, most (.NET) languages won't even treat long as a keyword and won't
compile code that uses it.
I have seen this author, and much of the standard literature on .NET, prefer the FCL types (i.e., Int32) to the language-specific primitive names (i.e., int), mainly because of such interoperability concerns.
They tell you what size of value can be stored in an integer variable. To remember the sizes you can think in terms of :-) 2 beers (2 bytes), 4 beers (4 bytes) or 8 beers (8 bytes).
Int16 :- 2 beers/bytes = 16 bits = 2^16 = 65,536 values = range -32,768 to 32,767
Int32 :- 4 beers/bytes = 32 bits = 2^32 = 4,294,967,296 values = range -2,147,483,648 to 2,147,483,647
Int64 :- 8 beers/bytes = 64 bits = 2^64 = 18,446,744,073,709,551,616 values = range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
In short, you cannot store a value greater than 32,767 in an Int16, greater than 2,147,483,647 in an Int32, or greater than 9,223,372,036,854,775,807 in an Int64.
To understand the above calculation, you can check out this video: int16 vs int32 vs int64.
A very important note on the 16, 32 and 64 types:
if you run this query...
Array.IndexOf(new Int16[]{1,2,3}, 1)
you are supposed to get zero (0), because you are asking: is 1 within the array of 1, 2 or 3?
If you get -1 as the answer, it means 1 is not within the array of 1, 2 or 3.
Well check out what I found:
All the following should give you 0 and not -1
(I've tested this in all framework versions 2.0, 3.0, 3.5, 4.0)
C#:
Array.IndexOf(new Int16[]{1,2,3}, 1) = -1 (not correct)
Array.IndexOf(new Int32[]{1,2,3}, 1) = 0 (correct)
Array.IndexOf(new Int64[]{1,2,3}, 1) = 0 (correct)
VB.NET:
Array.IndexOf(new Int16(){1,2,3}, 1) = -1 (not correct)
Array.IndexOf(new Int32(){1,2,3}, 1) = 0 (correct)
Array.IndexOf(new Int64(){1,2,3}, 1) = -1 (not correct)
So my point is, for Array.IndexOf comparisons, only trust Int32!
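For the Int16 cases, the usual explanation is overload resolution rather than a framework bug: the literal 1 is an Int32, so the generic Array.IndexOf<T>(T[], T) can't be inferred and the call binds to the non-generic Array.IndexOf(Array, Object), where a boxed Int32 never equals a boxed Int16. Casting the value you search for restores the expected result (a C# sketch):
short[] values = { 1, 2, 3 };
Console.WriteLine(Array.IndexOf(values, 1));         // -1: binds to IndexOf(Array, Object)
Console.WriteLine(Array.IndexOf(values, (short)1));  // 0: binds to the generic overload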
EDIT: This isn't quite true for C#, a tag I missed when I answered this question - if there is a more C# specific answer, please vote for that instead!
They all represent integer numbers of varying sizes.
However, there's a very very tiny difference.
int16, int32 and int64 all have a fixed size.
The size of an int depends on the architecture you are compiling for - the C spec only defines an int as being larger than or equal to a short. In practice it's the width of the processor you're targeting, which is probably 32 bits, but you should know that it might not be.
Nothing. The sole difference between the types is their size (and, hence, the range of values they can represent).
int and int32 are one and the same (32-bit integer)
int16 is short int (2 bytes or 16-bits)
int64 is the long datatype (8 bytes or 64-bits)
They are indeed synonymous; however, I found one small difference between them:
1) You cannot use Int32 as the underlying type when declaring an enum:
enum Test : Int32
{ XXX = 1 // gives you a compilation error
}
enum Test : int
{ XXX = 1 // works fine
}
2) Int32 comes from the System namespace. If you remove the using System; directive, you will get a compilation error for Int32, but not for int.
The answers above are about right: int, Int16, Int32 and Int64 differ in their data-holding capacity. But here is why compilers have to deal with these sizes - it is to solve the potential Year 2038 problem. Check out the link to learn more about it.
https://en.wikipedia.org/wiki/Year_2038_problem
Int = Int32 --> original long type
Int16 --> original int
Int64 --> new data type that became available after 64-bit systems
"int" is only available for backward compatibility. We should really be using the new int types to make our programs more precise.
---------------
One more thing I noticed along the way: there is no class named Int analogous to Int16, Int32 and Int64. All the helpful functions like TryParse for integers come from Int32.TryParse.