I am really stumped on this one. In C# there is a hexadecimal constant representation format, as below:
int a = 0xAF2323F5;
Is there a binary constant representation format?
Nope, no binary literals in C#. You can of course parse a string in binary format using Convert.ToInt32, but I don't think that would be a great solution.
int bin = Convert.ToInt32( "1010", 2 );
As of C# 7 you can represent a binary literal value in code:
private static void BinaryLiteralsFeature()
{
    var employeeNumber = 0b00100010; // binary equivalent of the whole number 34; underlying data type defaults to System.Int32
    Console.WriteLine(employeeNumber); // prints 34 on the console.
    long empNumberWithLongBackingType = 0b00100010; // here the backing data type is long (System.Int64)
    Console.WriteLine(empNumberWithLongBackingType); // prints 34 on the console.
    int employeeNumber_WithCapitalPrefix = 0B00100010; // 0b and 0B prefixes are equivalent.
    Console.WriteLine(employeeNumber_WithCapitalPrefix); // prints 34 on the console.
}
Further information can be found here.
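As a side note (my addition, not from the original answer): C# 7 also accepts the _ digit separator inside binary literals, which makes longer values easier to scan:
var readableEmployeeNumber = 0b0010_0010; // still 34; the separators are purely cosmetic
Console.WriteLine(readableEmployeeNumber); // prints 34 on the console.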
You could use an extension method:
public static int ToBinary(this string binary)
{
    return Convert.ToInt32(binary, 2);
}
However, whether this is wise I'll leave up to you (given the fact it will operate on any string).
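For completeness, a quick usage sketch of the extension method above, illustrating that caveat (it compiles against any string):
int ten = "1010".ToBinary();   // 10
int boom = "hello".ToBinary(); // compiles fine, but throws FormatException at runtime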
Since Visual Studio 2017, binary literals like 0b00001 are supported.
Related
A MAC address (Wikipedia article) is typically formatted as six groups of two hexadecimal digits separated by colons, like 14:10:9F:D4:04:1A.
In C#, it can be passed around as a string, while some libraries manipulate it as a UInt64 (the ulong keyword in C#).
Question
What is the relationship between the string, the hex representation, and the ulong, and how can I go from one to the other?
MAC Address is HEX
As correctly described here:
The MAC address is very nearly a hex string. In fact, if you remove the ':' characters, you have a hex string.
14:10:9F:D4:04:1A literally means 0x14109FD4041A, only easier to read.
string to UInt64 and back
A MAC address is made up of 6 bytes (48 bits), fitting in a UInt64 with 2 bytes to spare. Leaving aside the MSB vs. LSB ordering complication, you can use the two methods below:
Format into a string
using System;
using System.Linq;
public static string MAC802DOT3(ulong macAddress)
{
    return string.Join(":",
        BitConverter.GetBytes(macAddress).Reverse()
            .Select(b => b.ToString("X2"))).Substring(6);
}
// usage: var s = MAC802DOT3(0x14109fd4041a);
// var s = MAC802DOT3(22061633504282);
// s becomes "14:10:9F:D4:04:1A"
Convert to an integer
public static ulong MAC802DOT3(string macAddress)
{
    string hex = macAddress.Replace(":", "");
    return Convert.ToUInt64(hex, 16);
}
// usage: var m = MAC802DOT3("14:10:9F:D4:04:1A");
// m becomes 22061633504282 (0x14109fd4041a)
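Combining the two helpers shows the conversion round-trips (a small usage sketch built on the methods above):
ulong m = MAC802DOT3("14:10:9F:D4:04:1A"); // 22061633504282 (0x14109fd4041a)
string s = MAC802DOT3(m);                  // "14:10:9F:D4:04:1A" again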
I have an integer value. I want to convert it to a Base 64 value. I tried the following code.
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It gives the output "ewAAAA==", with 8 characters.
I convert the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of the previous code is 7B.
If we do the maths the outcomes should be the same. How do they differ? How can I get the same value in Base 64 as well? (I found the above Base 64 conversion on the internet.)
The question is rather unclear... "How do they differ?" - well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is doing an arithmetic representation; one is a byte serialization format - very different
one is using little-endian arithmetic (assuming a standard CPU), the other is using big-endian arithmetic - the byte dump sketch below makes this visible
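To make the serialization and endianness points concrete, here is a small dump (my illustration, not part of the original answer) of what each representation of 123 actually contains:
byte[] bytes = BitConverter.GetBytes(123);         // on a little-endian CPU: 7B 00 00 00
Console.WriteLine(BitConverter.ToString(bytes));   // "7B-00-00-00"
Console.WriteLine(Convert.ToBase64String(bytes));  // "ewAAAA==" - serializes all four bytes, trailing zeros included
Console.WriteLine(123.ToString("X"));              // "7B" - arithmetic representation, no trailing zero bytes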
To get a comparable base-64 result, you probably need to code it manually (since Convert only supports bases 2, 8, 10, and 16 for arithmetic converts). Perhaps (note: not optimized):
using System;
using System.Text;

static void Main()
{
    string b64 = ConvertToBase64Arithmetic(123);
}

// uint because I don't care to worry about sign
static string ConvertToBase64Arithmetic(uint i)
{
    const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    StringBuilder sb = new StringBuilder();
    do
    {
        sb.Insert(0, alphabet[(int)(i % 64)]);
        i = i / 64;
    } while (i != 0);
    return sb.ToString();
}
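If I have followed the alphabet indexing correctly, ConvertToBase64Arithmetic(123) returns "B7": 123 = 1 × 64 + 59, index 1 is 'B' and index 59 is '7'.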
What can be the reason for the problem? My method returns incorrect int values. When I give it a hex value of AB or DC or something similar it returns int = 0, but when I give it hex = 22 it returns int = 22 (though the int should be 34 in this case).
public int StatusBit(int Xx, int Rr) {
    int Number;
    int.TryParse(GetX(Xx, Rr), out Number);
    return Number;
}
I tried to use Number = Convert.ToInt32(GetX(Xx,Rr)); but it gives the same result, only null instead of 0 for anything that includes letters.
Use Convert.ToInt32(string, int) instead. That way you can specify the base the number should be interpreted in. E.g.
return Convert.ToInt32(GetX(Xx, Rr), 16);
(You also don't check the return value of TryParse which would give a hint that the parse failed.)
If you expect both decimal and hexadecimal numbers you need to branch according to how the number looks and use either base 10 or base 16. E.g. if your hexadecimal numbers always start with 0x you could use something along the following lines:
string temp = GetX(Xx, Rr);
return Convert.ToInt32(temp, temp.StartsWith("0x") ? 16 : 10);
But that would depend on how (if at all) you would distinguish the two. If everything is hexadecimal then there is no such need, of course.
Use NumberStyles.HexNumber:
using System;
using System.Globalization;
class Test
{
    static void Main()
    {
        string text = "22";
        int value;
        int.TryParse(text, NumberStyles.HexNumber,
                     CultureInfo.InvariantCulture, out value);
        Console.WriteLine(value); // Prints 34
    }
}
Do you really want to silently return 0 if the value can't be parsed, by the way? If not, use the return value of int.TryParse to determine whether the parsing succeeded or not. (That's the reason it's returning 0 for "AB" in your original code.)
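A minimal sketch of what acting on that return value might look like (the variable names are illustrative):
if (int.TryParse(text, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out int value))
{
    Console.WriteLine(value);
}
else
{
    Console.WriteLine($"'{text}' is not a valid hex number.");
}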
int.TryParse parses a base 10 integer.
Use Convert.ToUInt32(hex, 16) instead.
Here is my solution:
kTemp = int.Parse(xcc, System.Globalization.NumberStyles.HexNumber);
Above, kTemp is an integer and xcc is a string.
xcc can be anything like FE, 10BA, FE0912... that is to say, xcc is a string of hex characters of any length.
Beware: I don't get the 0x prefix with my hex strings.
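Worth noting: NumberStyles.HexNumber does not accept a 0x prefix, so if your strings might carry one, strip it first. A small sketch (the variable names are just illustrative):
string xcc = "0xFE0912";
string digits = xcc.StartsWith("0x", StringComparison.OrdinalIgnoreCase) ? xcc.Substring(2) : xcc;
int kTemp = int.Parse(digits, System.Globalization.NumberStyles.HexNumber); // 16648466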
I am trying to convert some vb6 code to c# and I am struggling a bit.
I have looked at this page below and others similar, but am still stumped.
Why use hex?
VB6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
    Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code - it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (i = 1; i <= 8; i++)
{
    Cal = Cal + Convert.Hex(Xor1 ^ Xor2);
}
The errors are:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex to convert would be appreciated.
I suspect it's my lack of understanding of the .Hex on line 3 above and the "&H" on lines 1/2 above.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary, using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there's no need to prefix the value with "&H". The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: It converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex simply does not exist.
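Putting those two pieces together, a minimal sketch of how the loop in the question might be translated (Xor1 and Xor2 are assumed to be ints computed by the code elided from the question):
string Cal = "";
for (int i = 1; i <= 8; i++)
{
    Cal += (Xor1 ^ Xor2).ToString("X"); // ToString("X") plays the role of VB6's Hex()
}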
I am working with a Fortran program that expects floating point numbers to be input using Fortran's E format specifier, which is scientific notation, except the mantissa must be between 0 and 1. So instead of:
"3147.3" --> "3.1473E3",
it needs
"3147.3" --> "0.31473E4".
I am unable to modify the Fortran program, as it works with a few other programs that are also particular.
It would appear that the C# E format string would give me the former. Is there any simple way to achieve the latter in C#?
You could specify a custom format like so.
var num = 3147.3;
num.ToString("\\0.#####E0"); // "0.31473E4"
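If I read the custom numeric format rules correctly, \\0 in the C# source becomes \0 at run time, and in a custom numeric format string the backslash escapes the '0' so it is emitted as a literal character rather than treated as a digit placeholder; with no placeholders before the decimal point, the mantissa is scaled to lie below 1 and the literal zero is simply printed in front of it.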
I think that you are solving a non-existent problem. It is true that the default of the Fortran E output specifier has a leading zero before the decimal point (this can be modified). But when the E specifier is used for input it is very tolerant and does not require the leading zero -- if you have a decimal point in the number and the number fits within the columns specified by the format, it will work.
Here is an example Fortran program, and an example input file.
program test_format
    real :: num1, num2, num3
    open (unit=16, file="numbers_3.txt", status='old', access='sequential', form='formatted', action='read')
    read (16, 1010) num1
    read (16, 1010) num2
    read (16, 1010) num3
1010 format (E9.5)
    write (*, *) num1, num2, num3
    stop
end program test_format
and the sample input with three different cases:
3.1473E3
0.31473E4
3147.3
I tested the program with gfortran and Intel ifort. The output was:
3147.300 3147.300 3147.300
So when performing input using Fortran's E format specifier, it is not necessary that the digit before the decimal point be zero. It is not even necessary that the input value use E-notation!
Edit / P.S. I translated the program to the fixed-form source layout of FORTRAN 77 and compiled it with g77 -- it read the three test numbers just fine. The E-format has been flexible for input for a long time -- probably since FORTRAN IV, perhaps longer.
The representation of floats and doubles is defined in IEEE 754 / IEC 60559:1989. You should look for libraries to extract the mantissa and exponent. Then you could divide by ten to move the decimal point and subtract the corresponding number of steps from the exponent to form your solution.
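A minimal sketch of pulling the raw IEEE 754 fields out of a double without an external library (my illustration, not part of the original answer):
double value = 3147.3;
long bits = BitConverter.DoubleToInt64Bits(value);
int sign = (int)((bits >> 63) & 1);
int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  // unbias the 11-bit exponent
long mantissa = bits & 0xFFFFFFFFFFFFFL;            // the 52 stored fraction bits (the implicit leading 1 is not stored)
Console.WriteLine($"sign={sign} exponent={exponent} mantissa=0x{mantissa:X13}");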
You could take something similar to Jeff M's solution, and implement it via an extension method:
public static class DoubleExtensions
{
    public static string ToFortranDouble(this double value)
    {
        return value.ToString("\\0.#####E0");
    }
}

class Program
{
    static void Main(string[] args)
    {
        string fortranValue = 3147.3.ToFortranDouble();
        System.Console.WriteLine(fortranValue);
    }
}
Or for something a little more complicated (not sure how much precision Fortran floats/doubles give):
public static class DoubleExtensions
{
    public static string ToFortranDouble(this double value)
    {
        return value.ToFortranDouble(4);
    }

    public static string ToFortranDouble(this double value, int precision)
    {
        // build a custom format string like "\0.####E0" with the requested number of digit placeholders
        return value.ToString(
            string.Format("\\0.{0}E0", new string('#', precision))
        );
    }
}
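For what it's worth, a quick usage sketch under these assumptions:
Console.WriteLine(3147.3.ToFortranDouble());  // "0.3147E4" with the default precision of 4
Console.WriteLine(3147.3.ToFortranDouble(5)); // "0.31473E4"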