I have converted some C# decrypt/encrypt functions to VB.NET. When I test them, the C# code prints the result shown below, but the VB.NET code throws an exception. Could you explain how C# arrives at this result?
Both snippets were tested in VS 2010 targeting the .NET Framework 4.0.
C# Code
class Program
{
static void Main(string[] args)
{
byte bytTen = 10;
int aa = 1527870874;
int bb = 28904;
int cc = 35756;
Console.WriteLine((bytTen + aa) * bb + cc);
Console.ReadKey();
}
}
Result: 726329420
VB.NET Code
Module Module1
Sub Main()
Dim bytTen As Byte = 10
Dim aa As Integer = 1527870874, bb As Integer = 28904, cc As Integer = 35756
Console.WriteLine((bytTen + aa) * bb + cc)
Console.ReadKey()
End Sub
End Module
Result: Arithmetic operation resulted in an overflow.
The C# code is running as unchecked code (where integer overflow is ignored).
The VB code is running as checked, where the runtime detects an integer overflow and throws an exception.
To get the same result in VB, you need to check the project-level option "Remove integer overflow checks" on the "Advanced Compiler Settings" options via the "Compile" tab of the project options.
C# by default removes integer overflow checks (but this can also be changed on the C# project options), while VB by default has integer overflow checks.
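To see both behaviours side by side in C# (a minimal sketch using the values from the question), you can wrap the same expression in unchecked and checked blocks:
byte bytTen = 10;
int aa = 1527870874, bb = 28904, cc = 35756;
// Default C# behaviour: the intermediate multiplication silently wraps around, printing 726329420.
Console.WriteLine(unchecked((bytTen + aa) * bb + cc));
// Same behaviour as default VB.NET: throws System.OverflowException.
Console.WriteLine(checked((bytTen + aa) * bb + cc));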
Related
I have imported an unmanaged .dll into my project. There is no documentation left for it,
and the original working code is in VB6, so I am trying to make the C# code match the VB6 code as closely as possible.
PROBLEM
I don't know how to convert the following code to C#...
Dim ATQ As String * 10
Dim Uid As String * 10
Dim MultiTag As String * 10
NOTE
Q: Some users have asked whether I really need fixed-length strings.
A: I already tried plain string in C#, but no result is ever written back into these variables. I suspect the signature I used for the DllImport function is wrong, so I want to declare the variables the same way VB6 did, because I don't know exactly what the correct signature should be.
TRIAL & ERROR
I tried all of the following, but it does not work (still no result is written into these variables):
Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString ATQ = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(10);
Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString Uid = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(10);
Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString MultiTag = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(10);
You can use Microsoft.VisualBasic.Compatibility:
using Microsoft.VisualBasic.Compatibility;
var ATQ = new VB6.FixedLengthString(10);
var Uid = new VB6.FixedLengthString(10);
var MultiTag = new VB6.FixedLengthString(10);
But it is marked as obsolete and is specifically not supported in 64-bit processes, so it is better to write your own class that replicates the behaviour: truncate when a value that is too long is assigned, pad on the right with spaces when a shorter value is assigned, and initialise an "uninitialised" value (like the declarations above) to null characters.
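If you go that route, a minimal sketch of such a replacement could look like this (my own illustration, not the compatibility library's implementation; the class name is made up):
// Illustrative stand-in for VB6.FixedLengthString.
public sealed class FixedString
{
    private readonly int _length;
    private string _value;

    public FixedString(int length)
    {
        _length = length;
        _value = new string('\0', length);      // "uninitialised" value is all null characters
    }

    public FixedString(int length, string value) : this(length)
    {
        Value = value;
    }

    public string Value
    {
        get { return _value; }
        set
        {
            if (value == null) value = string.Empty;
            _value = value.Length >= _length
                ? value.Substring(0, _length)    // truncate values that are too long
                : value.PadRight(_length);       // pad short values on the right with spaces
        }
    }
}
Declaring var ATQ = new FixedString(10); then mirrors the VB6 declaration, though it does not by itself answer the marshalling question.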
Sample code from LINQPad (which I could not get to accept using Microsoft.VisualBasic.Compatibility, possibly because the namespace is marked obsolete, though I have no proof of that):
var U = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(5);
var S = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(5,"Test");
var L = new Microsoft.VisualBasic.Compatibility.VB6.FixedLengthString(5,"Testing");
Func<string,string> p0=(s)=>"\""+s.Replace("\0","\\0")+"\"";
p0(U.Value).Dump();
p0(S.Value).Dump();
p0(L.Value).Dump();
U.Value="Test";
p0(U.Value).Dump();
U.Value="Testing";
p0(U.Value).Dump();
which has this output:
"\0\0\0\0\0"
"Test "
"Testi"
"Test "
"Testi"
Another suggestion is to declare them as ordinary strings:
string ATQ;
string Uid;
string MultiTag;
One difference is that in VB6 the String * 10 syntax creates fixed-length strings, so if that is what the DLL expects, the padding behaviour of a plain string will differ.
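If the underlying issue is that the unmanaged function writes into fixed-size buffers, one common P/Invoke pattern is to declare the parameters as pre-sized StringBuilder objects. The DLL name, entry point and parameter list below are purely hypothetical placeholders, since the real signature is not known:
using System.Runtime.InteropServices;
using System.Text;

static class NativeMethods
{
    // Hypothetical signature: the real DLL, function name and parameters are unknown.
    [DllImport("SomeReader.dll", CharSet = CharSet.Ansi)]
    public static extern int ReadTag(StringBuilder atq, StringBuilder uid, StringBuilder multiTag);
}

static class Demo
{
    static void Main()
    {
        // Pre-size the buffers so the native code has room to write into them.
        var atq = new StringBuilder(10);
        var uid = new StringBuilder(10);
        var multiTag = new StringBuilder(10);
        int rc = NativeMethods.ReadTag(atq, uid, multiTag);
        System.Console.WriteLine(rc + ": " + atq);
    }
}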
What is the equivalent of (byte) in VB.NET:
C#:
uint value = 1161;
byte data = (byte)value;
// data is now 137
VB.NET:
Dim value As UInteger = 1161
Dim data1 As Byte = CType(value, Byte)
Dim data2 As Byte = CByte(value)
Exception: Arithmetic operation resulted in an overflow.
How can I achieve the same result as in C#?
By default, C# does not check for integer overflows, but VB.NET does.
You get the same exception in C# if you e.g. wrap your code in a checked block:
checked
{
uint value = 1161;
byte data = (byte)value;
}
In your VB.NET project properties, enable Configuration Properties => Optimizations => Remove Integer Overflow Checks, and your VB.NET code will work exactly like your C# code.
Integer overflow checks are then disabled for your entire project, but that's usually not a problem.
Try first chopping the most significant bytes off the number, then converting it to Byte:
Dim value As UInteger = 1161
Dim data1 As Byte = CType(value And 255, Byte)
Dim data2 As Byte = CByte(value And 255)
To get just the least significant byte, you can do the rather hackalicious
Dim data1 = BitConverter.GetBytes(value)(0)
It's explicit, and you wouldn't need to disable overflow checking.
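For comparison, a small C# sketch of the same two ideas (masking and BitConverter); both stay inside the Byte range, so they work even when overflow checking is switched on:
uint value = 1161;                                  // 0x489
checked
{
    byte data1 = (byte)(value & 0xFF);              // mask first, then cast: 137
    byte data2 = BitConverter.GetBytes(value)[0];   // least significant byte on a little-endian machine: 137
    Console.WriteLine("{0} {1}", data1, data2);
}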
I need the following function converted to VB.NET, but I'm not sure how to handle the statement
res = (uint)((res * 0x21) + c);
Complete function:
private static uint convert(string input)
{
uint res = 0;
foreach (int c in input)
res = (uint)((res * 0x21) + c);
return res;
}
I created the following, but I get an overflow error:
Private Shared Function convert(ByVal input As String) As UInteger
Dim res As UInteger = 0
For Each c In input
res = CUInt((res * &H21) + Asc(c)) ' also tried AscW
Next
Return res
End Function
What am I missing? Can someone explain the details?
Your code is correct. The calculation overflows after just a few characters, since res grows by roughly a factor of 0x21 with each iteration (it is not the conversion of the character that causes the overflow, it is the unsigned integer arithmetic that overflows).
C# by default allows integer operations to overflow; VB doesn't. You can disable the overflow check in the VB project settings, though I would try not to rely on this. Is there a reason this particular piece of C# has to be ported? After all, you can effortlessly mix C# and VB libraries.
Here is a useful online converter: http://www.developerfusion.com/tools/convert/csharp-to-vb/
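If you do decide to keep the routine in C# instead of converting it (for instance in a small class library that the VB.NET project references), you can wrap the arithmetic in an explicit unchecked block so that the wrap-around no longer depends on how the project is compiled. A minimal sketch:
private static uint convert(string input)
{
    uint res = 0;
    foreach (int c in input)
    {
        // Explicitly allow wrap-around, independent of the project's overflow-check setting.
        res = unchecked((uint)((res * 0x21) + c));
    }
    return res;
}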
I have a VB class library I built from an existing VB class which wraps an unmanaged DLL. The VB class library contains the DLL functions and various structs and types associated with the DLL functions.
I am using the class lib in a C# project and one of the functions in the class lib requires me to pass a struct as an argument. This is where I am running into trouble.
Here is the VB code for the DLL:
Declare Auto Function CtSetVRegister Lib "Ctccom32v2.dll" _
(ByVal ConnectID As Integer, ByRef Storage As CT_VARIANT) As Integer
Here is the VB struct:
<StructLayout(LayoutKind.Sequential, Pack:=1)> _
Public Structure CT_VARIANT
Dim vRegister As Integer 'Variant Register desired
Dim type As Integer 'Format want results returned in
Dim precision As Integer 'Precision desired for floating point conversions
Dim flags As Integer 'Specially defined flags, 0 for normal, (indirection, etc.)
Dim cmd As Integer 'Special commands, 0 for normal operation
Dim taskHandle As Integer 'Alternate task handle for local task register access, 0 = default public
Dim slength As Integer 'Length of bytes returned in stringVar, not include null
Dim indexCol As Integer 'Column (X) selection, base 0
Dim indexRow As Integer 'Row (X) selection base 0
Dim IntegerIntVar As Integer '32 bit signed integer storage
Dim FloatVar As Single '32 bit float
Dim DoubleVar As Double '64 bit double in Microsoft format
<MarshalAs(UnmanagedType.ByValArray, SizeConst:=223)> _
Public stringVar() As Byte 'null terminated ASCII string of bytes (1 to 224)
End Structure
The C# method I am writing requires me to set the necessary values in the struct and then pass those to the DLL function:
private void btnWriteVReg_Click(object sender, System.EventArgs e)
{
int results;
CTC_Lib.Ctccom32v2.CT_VARIANT Var;
Var.vRegister = int.Parse(txtVRegToRead.Text);
Var.cmd = 0;
Var.flags = 0;
Var.FloatVar = 0;
Var.IntegerIntVar = 0;
Var.DoubleVar = 0;
Var.precision = 6;
writeStatus.Text = "";
Var.type = CTC_Lib.Ctccom32v2.CT_VARIANT_INTEGER;
Var.IntegerIntVar = Convert.ToInt32(txtVRegVal.Text);
Var.taskHandle = 0;
results = CTC_Lib.Ctccom32v2.CtSetVRegister(CTconnection,ref Var);
if ((results == SUCCESS))
{
writeStatus.Text = "SUCCESS";
}
else
{
writeStatus.Text = "ERROR";
}
}
I get the error:
Use of unassigned local variable 'Var'
I am a bit puzzled as to how to properly pass the struct 'Var' to the VB Class library.
You must create an instance of Var first. A struct local only counts as definitely assigned once every one of its fields has been assigned, and the handler sets only some of them before passing Var by ref, so the compiler reports it as unassigned. Initialise it explicitly:
CTC_Lib.Ctccom32v2.CT_VARIANT Var = new CTC_Lib.Ctccom32v2.CT_VARIANT();
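A minimal sketch of the corrected start of the handler (the CTC_Lib.Ctccom32v2 names come from the question; whether stringVar must be allocated before the call depends on the DLL and the marshaller, so that line is only a precaution):
// Create the struct so the variable is definitely assigned and every field starts at its default.
CTC_Lib.Ctccom32v2.CT_VARIANT Var = new CTC_Lib.Ctccom32v2.CT_VARIANT();

// Precaution (assumption): give the ByValArray field a buffer matching its declared SizeConst.
Var.stringVar = new byte[223];

Var.vRegister = int.Parse(txtVRegToRead.Text);
Var.type = CTC_Lib.Ctccom32v2.CT_VARIANT_INTEGER;
Var.IntegerIntVar = Convert.ToInt32(txtVRegVal.Text);
Var.precision = 6;
// ... set the remaining fields as in the original handler ...
int results = CTC_Lib.Ctccom32v2.CtSetVRegister(CTconnection, ref Var);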
I am trying to convert some VB6 code to C# and I am struggling a bit.
I have looked at the page below and other similar ones, but I am still stumped.
Why use hex?
vb6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code; it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (i = 1; i <= 8; i++)
{
Cal = Cal + Convert.Hex(Xor1 ^ Xor2);
}
The error is on this line:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex conversion to work would be appreciated.
I suspect it's my lack of understanding of the .Hex call on line 3 above and the "&H" on lines 1 and 2 above.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary, using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there is no need to prefix the value with "&H". The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: it converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex, simply does not exist.
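Putting that together, the loop from the question might look like this in C# (a sketch only; it assumes Xor1 and Xor2 already hold the integer values, since the lines that computed them were edited out of the question):
string Cal = "";
int Xor1 = 0;   // assumed to be computed elsewhere
int Xor2 = 0;   // assumed to be computed elsewhere
for (int i = 1; i <= 8; i++)
{
    // number -> hexadecimal string, replacing the non-existent Convert.Hex
    Cal += (Xor1 ^ Xor2).ToString("X");
}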