Implement function from C++ in C# (MAKE_HRESULT - Windows macro)

I have the following code in C++:
#define dppHRESULT(Code) \
MAKE_HRESULT(1, 138, (Code))
long x = dppHRESULT(101);
The result is x = -2138439579.
MAKE_HRESULT is a Windows macro, defined as:
#define MAKE_HRESULT(sev,fac,code) \
((HRESULT) (((unsigned long)(sev)<<31) | ((unsigned long)(fac)<<16) | ((unsigned long)(code))) )
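(Working through the macro with these inputs: sev = 1, fac = 138, code = 101 gives 0x80000000 | 0x008A0000 | 0x00000065 = 0x808A0065, which is -2138439579 when read as a signed 32-bit value.)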
I need to replicate this in C#. So I wrote this code:
public static long MakeHResult(uint facility, uint errorNo)
{
    // Make HR
    uint result = (uint)1 << 31;
    result |= (uint)facility << 16;
    result |= (uint)errorNo;
    return (long)result;
}
And call it like this:
// Should return type be long actually??
long test = DppUtilities.MakeHResult(138, 101);
But I get a different result: test = 2156527717.
Why? Can someone please help me replicate that C++ macro in C#, such that I get the same output for the same inputs?
Alternative implementation.
If I use this implementation
public static long MakeHResult(ulong facility, ulong errorNo)
{
    // Make HR
    long result = (long)1 << 31;
    result |= (long)facility << 16;
    result |= (long)errorNo;
    return (long)result;
}
this works on input 101.
But if I pass in -1, C++ returns -1 as the result while C# returns 4294967295. Why?
I would really appreciate some help as I am stuck with it.

I've rewritten the function to be the C# equivalent.
static int MakeHResult(uint facility, uint errorNo)
{
    // Make HR
    uint result = 1U << 31;
    result |= facility << 16;
    result |= errorNo;
    return unchecked((int)result);
}
C# is more strict about signed/unsigned conversions, whereas the original C code didn't pay any mind to it. Mixing signed and unsigned types usually leads to headaches.
As Ben Voigt mentions in his answer, there is a difference in type naming between the two languages: long in C and C++ on Windows corresponds to int in C#. Both refer to 32-bit types.
The U in 1U means "this is an unsigned integer." (Brief refresher: signed types can store negative numbers, unsigned types cannot.) All the arithmetic in this function is done unsigned, and the final value is simply cast to a signed value at the end. This is the closest approximation to the original C macro posted.
unchecked is required here because otherwise C# will not allow you to convert the value if it's out of range of the target type, even if the bits are identical. Switching between signed and unsigned will generally require this if you don't mind that the values differ when you deal with negative numbers.
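A quick sanity check with the numbers from the question (a minimal sketch; assumes the method above is in scope):

int test = MakeHResult(138, 101);
Console.WriteLine(test);                   // -2138439579, matching the C++ result
Console.WriteLine(unchecked((uint)test));  // 2156527717, the same 32 bits viewed unsigned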

In Windows C++ compilers, long is 32 bits. In C#, long is 64 bits. Your C# conversion of this code should not contain the type keyword long at all.
SaxxonPike has provided the correct translation, but his explanation(s) are missing this vital information.
Your intermediate result is a 32-bit unsigned integer. In the C++ version, the cast is to a signed 32-bit value, resulting in the high bit being reinterpreted as a sign bit. SaxxonPike's code does this as well. The result is negative if the intermediate value had its most significant bit set.
In the original code in the question, the cast is to a 64-bit signed version, which preserves the old high bit as a normal binary digit, and adds a new sign bit (always zero). Thus the result is always positive. Even though the low 32-bits exactly match the 32-bit result in C++, in the C# version returning long, what would be the sign bit in C++ isn't treated as a sign bit.
In the new attempt in the question, the same thing happens (sign bit in the 64-bit number is always zero), but it happens in intermediate calculations instead of at the end.
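A small sketch of the difference (the hex constant is just MAKE_HRESULT(1, 138, 101) written out):

uint bits = 0x808A0065;                   // MAKE_HRESULT(1, 138, 101)
Console.WriteLine(unchecked((int)bits));  // -2138439579: bit 31 is reinterpreted as the sign bit
Console.WriteLine((long)bits);            // 2156527717: bit 31 is just an ordinary binary digit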

You're calculating it inside an unsigned type (uint). So shifts are going to behave accordingly. Try using int instead and see what happens.
The clue here is that 2156527717 as an unsigned int is the same as -2138439579 as a signed int. They are literally the same bits.
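You can verify that in one line:

Console.WriteLine(unchecked((int)2156527717u));  // prints -2138439579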

Related

Is there an easy way to convert from a 32-bit integer to a 16-bit integer?

I have a 32-bit int and I want to address only the lower half of this variable. I know I can convert to a bit array and then to an Int16, but is there a more straightforward way to do that?
If you want only the lower half, you can just cast it: (Int16)my32BitInt
In general, if you're extending/truncating bit patterns like this, then you do need to be careful about signed types - unsigned types may cause fewer surprises.
As mentioned in the comments - if you've enclosed your code in a 'checked' context, or changed your compiler options so that the default is 'checked', then you can't truncate a number like this without an exception being thrown if there are any non-zero bits being discarded - in that situation you'd need to do:
(UInt16)(my32BitInt & 0xffff)
(The option of using signed types is gone in this case, because you'd have to use & 0x7fff which then preserves only 15 bits)
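A sketch of both situations (my32BitInt is just a stand-in value):

int my32BitInt = 0x12345678;
short lower = unchecked((short)my32BitInt);              // 0x5678; works even inside a checked block
ushort lowerU = checked((ushort)(my32BitInt & 0xffff));  // 0x5678; in range, so checked is happy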
Just use this function:
Convert.ToInt16()
or just cast:
(Int16)valueAsInt
(Note that Convert.ToInt16 throws an OverflowException when the value is out of range, whereas the cast truncates in an unchecked context.)
You can use an explicit conversion to Int16 like this:
(Int16)2;
but be careful when you do that, because Int16 can't hold all possible Int32 values.
For example, this won't work:
(Int16)2147483683;
because Int16 can hold 32767 as its maximum value. You can use the unchecked (C# Reference) keyword in such cases.
If you force an unchecked operation, a cast should work:
int r = 0xF000001;
short trimmed = unchecked((short) r);
This will truncate the value of r to fit in a short.
If the value of r should always fit in a short, you can just do a normal cast and let an exception be thrown.
If you need a 16-bit value and you happen to know something specific, like that the number will never be negative, you could use a UInt16 value. That conversion looks like:
int x = 0;
UInt16 value = (UInt16)x;
This gives you the full positive 16-bit range, 0 to 65535.
Well, first, make sure you actually want to have the value signed. uint and ushort are there for a reason. Then:
ushort ret = (ushort)(val & ((1 << 16) - 1));

Bitwise left shift in Python and C#

Why does a bitwise left shift produce different values in Python and C#?
Python:
>>> 2466250752<<1
4932501504L
C#:
System.Console.Write((2466250752 << 1).ToString()); // output is 637534208
You are overflowing the 32-bit (unsigned) integer in C#.
In Python, all integers are arbitrarily sized, which means an integer will expand to whatever size is required. Note that I added the underscores below for readability; Python's hex() doesn't print them:
>>> a = 2466250752
>>>
>>> hex(a)
'0x9300_0000L'
>>>
>>> hex(a << 1)
'0x1_2600_0000L'
^-------------- Note the additional place
In C#, uint is only 32-bits. When you shift left, you are exceeding the size of the integer, causing overflow.
Before shifting, the value fills all 32 bits: 0x9300_0000. After shifting (and truncation back to 32 bits), only 0x2600_0000 remains. Notice that the result does not have the leading 1 that Python showed; that bit was shifted out of the 32-bit range.
To get around this limitation in this case, you can use a ulong, which is 64 bits instead of 32. This will work for values up to 2^64 - 1.
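For example (a minimal sketch):

ulong a = 2466250752;
Console.WriteLine(a << 1);  // 4932501504, same as Python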
Python makes sure your integers don't overflow, while C# allows for overflow (but throws an exception on overflow in a checked context). In practice this means you can treat Python integers as having infinite width, while a C# int or uint is always 4 bytes.
Notice in your Python example that the value 4932501504L has a trailing L, which means long integer. Python 2 automatically performs math in long integers (whose width is limited only by available memory, unlike C#'s long, which is 8 bytes) when overflow would otherwise occur in int values. You can see the rationale behind this idea in PEP 237.
EDIT: To get the Python result in C#, you cannot use a plain int or long - those types have limited size. One type whose size is limited only by memory is BigInteger. It will be slower than int or long for arithmetic, so I wouldn't recommend using it on every application, but it can come in handy.
As an example, you can write almost the same code as in C#, with the same result as in Python:
Console.WriteLine(new BigInteger(2466250752) << 1);
// output is 4932501504
This works for arbitrary shift sizes. For instance, you can write
Console.WriteLine(new BigInteger(2466250752) << 1000);
// output is 26426089082476043843620786304598663584184261590451906619194221930186703343408641580508146166393907795104656740341094823575842096015719243506448572304002696283531880333455226335616426281383175835559603193956495848019208150304342043576665227249501603863012525070634185841272245152956518296810797380454760948170752
Of course, this would overflow a long.

How to set endianness when converting to or from hex strings

To convert an integer to a hex formatted string I am using ToString("X4") like so:
int target = 250;
string hexString = target.ToString("X4");
To get an integer value from a hex formatted string I use the Parse method:
int answer = int.Parse(data, System.Globalization.NumberStyles.HexNumber);
However the machine that I'm exchanging data with puts the bytes in reverse order.
To keep with the sample data: if I want to send the value 250, I need the string "FA00" (not "00FA", which is what hexString contains). Likewise, if I receive "FA00", I need to convert that to 250, not 64000.
How do I set the endianness of these two conversion methods?
Marc's answer seems, by virtue of having been accepted, to have addressed the OP's original issue. However, it's not really clear to me from the question text why. That still seems to require swapping of bytes, not pairs of bytes as Marc's answer does. I'm not aware of any reasonably common scenario where swapping bits 16 at a time makes sense or is useful.
For the stated requirements, IMHO it would make more sense to write this:
int target = 250; // 0x00FA
// swap the bytes of target
target = ((target << 8) | (target >> 8)) & 0xFFFF;
// target now is 0xFA00
string hexString = target.ToString("X4");
Note that the above assumes we're actually dealing with 16-bit values, stored in a 32-bit int variable. It will handle any input in the 16-bit range (note the need to mask off the upper 16 bits, as they get set to non-zero values by the << operator).
If swapping 32-bit values, one would need something like this:
int target = 250; // 0x00FA
// swap the bytes of target
target = (int)((int)((target << 24) & 0xff000000) |
               ((target << 8) & 0xff0000) |
               ((target >> 8) & 0xff00) |
               ((target >> 24) & 0xff));
// target now is 0xFA000000
string hexString = target.ToString("X8");
Again, masking is required to isolate the bits we are moving to specific positions. Casting the << 24 result back to int before or-ing with the other three bytes is needed because 0xff000000 is a uint (UInt32) literal and causes the & expression to be extended to long (Int64). Otherwise, you'll get compiler warnings with each of the | operators.
In any case, as this comes up most often in networking scenarios, it is worth noting that .NET provides helper methods that can assist with this operation: HostToNetworkOrder() and NetworkToHostOrder(). In this context, "network order" is always big-endian, and "host order" is whatever byte order is used on the computer hosting the current process.
If you know that you are receiving data that's big-endian, and you want to be able to interpret it as correct values in your process, you can call NetworkToHostOrder(). Likewise, if you need to send data in a context where big-endian is expected, you can call HostToNetworkOrder().
These methods work only with the three basic integer types: Int16, Int32, and Int64 (in C#, short, int, and long, respectively). They also return the same type passed to them, so you have to be careful about sign extension. The original example in the question could be solved like this:
int target = 250; // 0x00FA
// swap the bytes of target
target = IPAddress.HostToNetworkOrder((short)target) & 0xFFFF;
// target now is 0xFA00
string hexString = target.ToString("X4");
Once again, masking is required because otherwise the short value returned by the method will be sign-extended to 32 bits. If bit 15 (i.e. 0x8000) is set in the result, then the final int value would otherwise have its highest 16 bits set as well. This could be addressed without masking simply by using more appropriate data types for the variables (e.g. short when the data is expected to be signed 16-bit values).
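For completeness, here is a sketch of the reverse direction from the question: turning a received "FA00" back into 250. The unchecked cast is needed because 0xFA00 is out of range for short:

short raw = unchecked((short)ushort.Parse("FA00", System.Globalization.NumberStyles.HexNumber));
int answer = IPAddress.NetworkToHostOrder(raw) & 0xFFFF;
Console.WriteLine(answer);  // 250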
Finally, I will note that HostToNetworkOrder() and NetworkToHostOrder(), since they only ever swap bytes, are equivalent to each other: they both swap bytes when the machine architecture is little-endian†. And indeed, the .NET implementation of NetworkToHostOrder() simply calls HostToNetworkOrder(). There are two methods mainly so that the .NET API matches the original BSD sockets API, which included functions like htons() and ntohs(); that API in turn provided both directions of conversion mainly so that it was clear in code whether one was receiving data from the network or sending data to it.
† And do nothing when the machine architecture is big-endian…they aren't useful as generalized byte-swapping functions. Rather, the expectation is that the network protocol will always be big-endian, and these functions are used to ensure the data bytes are swapped to match whatever the machine architecture is.
That isn't an inbuilt option. So either do string work to swap the characters around, or do some bit-shifting, e.g.:
int otherEndian = (value << 16) | (int)((uint)value >> 16);
(The cast back to int keeps the | operator from promoting the mixed int/uint operands to long.)

Why is Count not an unsigned integer? [duplicate]

Possible Duplicates:
Why does .NET use int instead of uint in certain classes?
Why is Array.Length an int, and not an uint
I've always wondered why .Count isn't an unsigned integer instead of a signed one.
For example, take ListView.SelectedItems.Count. The number of elements can't be less than 0, so why is it a signed int?
If I try to test if there are elements selected, I would like to test
if (ListView.SelectedItems.Count == 0) {}
but because it's a signed integer, I have to test
if (ListView.SelectedItems.Count <= 0) {}
or is there any case when .Count could be < 0 ?
Unsigned integer is not CLS-compliant (Common Language Specification)
For more info on CLS compliant code, see this link:
http://msdn.microsoft.com/en-us/library/bhc3fa7f.aspx
Maybe because the uint data type is not part of the CLS (Common Language Specification), as not all .NET languages support it.
Here is very similar thread about arrays:
Why is Array.Length an int, and not an uint
It's not CLS compliant, largely to allow wider support from different languages.
A signed int offers ease in porting code from C or C++ that uses pointer arithmetic.
Count can be part of an expression where the overall value can be negative. In particular, count has a direct relationship to indices, where valid indices are always in the range [0, Count - 1], but negative results are used e.g. by some binary search methods (including those provided by the BCL) to reflect the position where a new item should be inserted to maintain order.
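For example, Array.BinarySearch in the BCL returns the bitwise complement of the insertion index when the item isn't found, which only works because the return type is signed:

int[] sorted = { 1, 3, 5 };
int pos = Array.BinarySearch(sorted, 4);  // 4 is absent: returns ~2, i.e. -3
if (pos < 0)
    Console.WriteLine("not found; insert at index " + ~pos);  // insert at index 2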
Let’s look at this from a practical angle.
For better or worse, signed ints are the normal sort of ints in use in .NET. It was also normal to use signed ints in C and C++. So, most variables are declared to be int rather than unsigned int unless there is a good reason otherwise.
Converting between an unsigned int and a signed int has issues and is not always safe.
On a 32-bit system it is not possible for a collection to have anywhere close to 2^32 items in it, so a signed int is big enough in all cases.
On a 64-bit system, an unsigned int does not gain you much; in most cases a signed int is still big enough, and otherwise you need to use a 64-bit int. (I expect that none of the standard collections would cope well with anywhere near 2^31 items on a 64-bit system!)
Therefore given that using an unsigned int has no clear advantage, why would you use an unsigned int?
In VB.NET, the normal looping construct (a For/Next loop) will execute the loop with values up to and including the maximum value specified, unlike C, which can easily loop with values strictly below the upper limit. Thus, it is often necessary to specify a loop as e.g. "For I = 0 To Array.Length - 1"; if Array.Length were unsigned and zero, that would cause an obvious problem. Even in C, one benefits from being able to say "for (i = Array.Length - 1; i >= 0; --i)". Sometimes I think it would be useful to have a 31-bit integer type which would support widening casts to both signed and unsigned int, but I've never heard of a language supporting such.
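The wraparound hazard described above is easy to demonstrate in C# (a minimal sketch):

uint count = 0;
Console.WriteLine(count - 1);  // 4294967295, not -1: a loop bounded by "Count - 1" would run wild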

Difference between casting in C# and VB.NET

The following code works fine in C#.
Int32 a, b;
Int16 c;
a = 0x7FFFFFFF;
b = a & 0xFFFF;
c = (Int16)b;
But this code crashes with an OverflowException in VB.NET.
Dim a, b As Int32
Dim c As Int16
a = &H7FFFFFFF
b = a And &HFFFF
c = CType(b, Int16)
Both code snippets seem the same to me. What is the difference and how can I get the C# code converted to VB.NET?
From MSDN:
For the arithmetic, casting, or conversion operation to throw an OverflowException, the operation must occur in a checked context. By default, arithmetic operations and overflows in Visual Basic are checked; in C#, they are not. If the operation occurs in an unchecked context, the result is truncated by discarding any high-order bits that do not fit into the destination type.
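You can see the same difference inside C# alone by forcing each context explicitly (a sketch using the values from the question):

int b = 0x7FFFFFFF & 0xFFFF;             // 65535
Console.WriteLine(unchecked((short)b));  // -1: truncated, the C# default behavior
short c = checked((short)b);             // throws OverflowException, the VB.NET default behavior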
EDIT:
If you're going to port code from C# to VB.NET, you may be interested in the differences between them. Also, compare compiler settings and explicitly set them to match C#'s defaults where needed.
First up: My understanding of this is that CType(b, Int16) isn't the same as (Int16)b. One is a conversion of type (CType) and the other is a cast. (Int16)b equates to DirectCast(b, Int16) rather than CType(b, Int16).
The difference between the two (as noted on MSDN) is that CType succeeds so long as there is a valid conversion, however, DirectCast requires the run-time type of the object to be the same, and as such, all you're doing is telling the compiler at design time that this object is of that type rather than telling it to convert to that type.
See: http://msdn.microsoft.com/en-us/library/7k6y2h6x(VS.71).aspx
The underlying problem though is that you're trying to convert a 32-bit integer into a 16-bit integer, which is a narrowing, potentially lossy conversion. Converting from 16 bit to 32 bit is allowed because it's lossless; converting from 32 bit down to 16 bit can discard the high-order bits. For why it works in C#, see Roman's answer: C# doesn't check for overflow by default.
The value of &H7FFFFFFF And &HFFFF is UInt16.MaxValue (65535). UInt16 runs from 0 to 65535; you're trying to cram that into Int16, which runs from -32768 to 32767, which as you can see isn't going to work. The fact that this value might fit into a UInt16 is coincidental: adding two 32-bit integers and trying to cram the result into a 16-bit integer (short) would frequently overflow, so I would say this is an inherently dangerous operation.
Have you tried using DirectCast(b, Int16)? CType is not the same as a C# cast.
Here's an article comparing the performance of DirectCast and CType, as well as going into more detail to when either should be used.
http://www.cnblogs.com/liujq007/archive/2010/12/04/1896059.html
Summary:
To convert to an unsigned type: just And away the unwanted bits, as in the first example below.
Dim a As Byte = CByte(300 And &HFF)
To convert to a signed type: left-shift by n bits, then arithmetic right-shift by n bits, which sign-extends the high bit. Here n = (sizeof(type1) - sizeof(type2)) * 8; in VB, use Len(New type) instead of sizeof(type).
Dim a As Short = CShort(34042 << 16 >> 16)
You can find the details at the link above.
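The same shift trick can be written in C# (a sketch; the left shift simply discards the high bits, since shift operators are never overflow-checked):

int v = 34042;
short a = (short)(v << 16 >> 16);  // sign-extends bit 15
Console.WriteLine(a);              // -31494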
In the Immediate window, all the obvious approaches fail:
?CType(b, Int16)
Constant expression not representable in type 'Short'.
?b
65535
?directcast(b, Int16)
Value of type 'Integer' cannot be converted to 'Short'.
?int16.TryParse(b.ToString(), c)
False
You can truncate this kind of overflow with a structure.
<StructLayout(LayoutKind.Explicit)> _
Public Structure int3216
    <FieldOffset(0)> Public i32 As Int32
    <FieldOffset(0)> Public i16low As Int16   ' offset 0 holds the low word on little-endian x86
    <FieldOffset(2)> Public i16high As Int16
End Structure
...
Dim _i3216 As int3216
_i3216.i32 = a And &HFFFF
c = _i3216.i16low
I came across this question when looking for a way to convert to a Short, getting the truncated (overflowed) result without the overflow error. I found a solution here:
http://bytes.com/topic/visual-basic-net/answers/732622-problems-typecasting-vb-net
about halfway down the page is this:
The old, VB "Proper" trick of "side-stepping" out to Hexadecimal and
back again still works!
Dim unsigned as UInt16 = 40000
Dim signed as Int16 = CShort(Val("&H" & Hex(unsigned)))
it seems to work pretty slick!
