A MAC address (Wikipedia article) is typically formatted as six groups of two hexadecimal digits separated by colons, like 14:10:9F:D4:04:1A.
In C#, it can be passed around as a string, while some libraries manipulate it as a UInt64 (ulong).
Question
What is the relationship between the string, the hex representation, and the ulong, and how can I go from one to the other?
MAC Address is HEX
As correctly described here:
The MAC address is very nearly a hex string. In fact, if you remove the ':' characters, you have a hex string.
14:10:9F:D4:04:1A literally means 0x14109FD4041A, only easier to read.
string to UInt64 and back
A MAC address is made up of 6 bytes, 48 bits, fitting in a UInt64 with 2 bytes to spare. Leaving out the MSB vs. LSB ordering complication, you can use the two methods below:
Format into a string
using System;
using System.Linq;
public static string MAC802DOT3(ulong macAddress)
{
    // On a typical little-endian machine GetBytes yields the low byte first; Reverse puts the
    // bytes in display order and Substring(6) drops the leading "00:00:" from the two unused high bytes.
    return string.Join(":",
        BitConverter.GetBytes(macAddress).Reverse()
            .Select(b => b.ToString("X2"))).Substring(6);
}
// usage: var s = MAC802DOT3(0x14109fd4041a);
// var s = MAC802DOT3(22061633504282);
// s becomes "14:10:9F:D4:04:1A"
Convert to an integer
public static ulong MAC802DOT3(string macAddress)
{
    string hex = macAddress.Replace(":", "");
    return Convert.ToUInt64(hex, 16);
}
// usage: var m = MAC802DOT3("14:10:9F:D4:04:1A");
// m becomes 22061633504282 (0x14109fd4041a)
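Round-tripping through the two overloads gives back the original value:
ulong m = MAC802DOT3("14:10:9F:D4:04:1A");    // 22061633504282, i.e. 0x14109FD4041A
string s = MAC802DOT3(m);                      // "14:10:9F:D4:04:1A" again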
Related
Is there any case where this test could fail? Would it fail on a big-endian machine? Is ByteArrayToHexString little-endian, and why? (The characters are written from left to right, so it seems it must be big-endian.)
[Fact]
public void ToMd5Bytes2_ValidValue_Converted()
{
    // Arrange
    // Act
    var bytes = new byte[] { 16, 171, 205, 239 };
    var direct = ByteArrayToHexString(bytes);
    var bitConverter = BitConverter.ToString(bytes).Replace("-", string.Empty);
    var convert = Convert.ToHexString(bytes);

    // Assert
    Assert.Equal(direct, bitConverter);
    Assert.Equal(bitConverter, convert);
}
public static string ByteArrayToHexString(byte[] Bytes)
{
    StringBuilder Result = new StringBuilder(Bytes.Length * 2);
    string HexAlphabet = "0123456789ABCDEF";

    foreach (byte B in Bytes)
    {
        Result.Append(HexAlphabet[(int)(B >> 4)]);
        Result.Append(HexAlphabet[(int)(B & 0xF)]);
    }

    return Result.ToString();
}
Only multi-byte values are affected by endianness, like Int32.
Arrays of bytes are not - they're already a defined sequence of single bytes. Of course, it matters how you obtained the byte array: if it is the result of converting a multi-byte value, you must have done that conversion with the appropriate endianness. Otherwise you'd have to reverse each slice of your array which originally represented a multi-byte value.
Also, endianness does not happen on the bit level, a misconception I see every now and then.
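A small demonstration of that distinction, using the same bytes as in the test (the first comment assumes a typical little-endian machine):
int value = 0x10ABCDEF;
byte[] fromInt = BitConverter.GetBytes(value);
Console.WriteLine(BitConverter.ToString(fromInt));   // "EF-CD-AB-10" here; "10-AB-CD-EF" on a big-endian machine
byte[] bytes = { 16, 171, 205, 239 };
Console.WriteLine(Convert.ToHexString(bytes));       // "10ABCDEF" on every architecture - index order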
In the comments, you mentioned the remarks section of the BitConverter.ToString documentation:
All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian.
Looking at the reference source, I do not see where endianness is having an effect on its operation on byte arrays. This comment is either outdated or misleading, and I've opened an issue on it.
I need to convert some chars to int values, including values bigger than 256. This is my function to convert an int to a char; I need to reverse it:
public static string chr(int number)
{
    return ((char)number).ToString();
}
The function below doesn't work - it only returns values 0-255, and I need ord(chr(i)) == i:
public static int ord(string str)
{
    return Encoding.Unicode.GetBytes(str)[0];
}
The problem is that your ord function truncates the character of the string to its first byte, as interpreted by the Encoding.Unicode (UTF-16) encoding. This expression
Encoding.Unicode.GetBytes(str)[0]
// ^^^
returns the initial element of a byte array, so it is bound to stay within the 0..255 range.
You can fix your ord method as follows:
public static int Ord(string str) {
    var bytes = Encoding.Unicode.GetBytes(str);
    return BitConverter.ToChar(bytes, 0);
}
Demo
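For example, combined with the chr method from the question:
string s = chr(0x1033);       // a character well above the single-byte range
Console.WriteLine(Ord(s));    // 4147, i.e. 0x1033 - no longer truncated to one byte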
Since you don't care much about encodings and you directly cast an int to a char in your chr() function, why don't you simply try the other way around?
Console.WriteLine((int)'\x1033');
Console.WriteLine((char)(int)("\x1033"[0]) == '\x1033');
Console.WriteLine(((char)0x1033) == '\x1033');
char is 2 bytes long (UTF-16 encoding) in C#
char c1 = '\u1033';                  // example initialization; any character works, including ones above 255
int i = System.Convert.ToInt32(c1);  // could be greater than 255 (here 4147, i.e. 0x1033)
char c2 = System.Convert.ToChar(i);  // c2 == c1
System.Convert on MSDN : https://msdn.microsoft.com/en-us/library/system.convert(v=vs.110).aspx
Characters and bytes are not the same thing in C#. The conversion between char and int is a simple one: (char)intValue or (int)myString[x].
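A quick illustration (the sample character is arbitrary):
string s = "Ω";       // 'Ω' is U+03A9
int i = (int)s[0];    // 937 - well above 255
char c = (char)i;     // 'Ω' again; the round trip is lossless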
Say you have a byte variable, and you assign it the decimal value of 102.
(byte myByteVariable = 102;)
Why is that possible? Shouldn't you have to provide an actual 8-bit value instead, to avoid confusion? And how do I set a byte value by supplying bits instead?
And how do I set a byte value by supplying bits instead?
If you are setting the byte value explicitly, you can do so using the prefix 0x to indicate that you are doing so in hexadecimal, as an encoding for groups of 4 bits. E.g. the hex value of decimal 102 is 0x66, which is equivalent to 0110 0110.
But for supplying bits directly, Marc Gravell provides an option here for converting from binary to int using the Convert class, and similarly you can convert the string representation of a binary value into a byte using the Convert.ToByte method as:
byte b = Convert.ToByte("01100110", 2);
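If you are on C# 7.0 or later (an assumption about your compiler version), you can also write the bits directly with a binary literal:
byte b2 = 0b0110_0110;   // binary literal with a digit separator; same value as 0x66, i.e. 102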
byte is an integral type which represents a number, not a bit field. So 102 is a normal value for it, because it's in the range of byte values ([0; 255]). If you want to manipulate the individual bits, consider using BitArray or BitVector32.
A byte can store 8 bits (that is values from 0 to 255). A short stores 16 bits, an int 32, etc. Each integer type in C# simply allows the storage of a wider range of numbers. C# allows you to assign a whole number to any of these types.
In order to set each bit individually, you will need to use bitwise operators.
I actually wrote a library a while ago to handle this, allowing you to set each bit of the byte using data[x]. It is very similar to the BitArray class (which I'm not sure if I knew about when I made it)
The main idea of it is:
private byte data;

public void SetBit(int index, bool value)
{
    if (value)
        data = (byte)(data | (1 << index));
    else
        data = (byte)(data & ~(1 << index));
}

public bool GetBit(int index)
{
    return (data & (1 << index)) != 0;
}
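For example, setting bits 1, 2, 5 and 6 recreates the value 102 from the question (a minimal usage sketch; ByteBits is just a made-up name for a small class containing the field and methods above):
var bits = new ByteBits();          // hypothetical wrapper around the byte field above
bits.SetBit(1, true);               // data is now 0000 0010
bits.SetBit(2, true);               // 0000 0110
bits.SetBit(5, true);               // 0010 0110
bits.SetBit(6, true);               // 0110 0110 == 102
Console.WriteLine(bits.GetBit(6));  // True
Console.WriteLine(bits.GetBit(0));  // False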
I have the following String:
String characters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
I need to create two strings from it:
A string obtained simply by reordering the characters;
A string obtained by selecting 10 characters and reordering them.
So for (1) I would get, for example:
String characters = "jkDEF56789hisGHIbdefpqraXYZ1234txyzABCcglmnoRSTUVWuvwJKLMNOPQ0";
And for (2) I would get, for example:
String shortList = "8GisIbH9hd";
THE PROBLEM
I could just convert the string to a char array and order it randomly, e.g. by a Guid.
However, I want to specify some kind of key (maybe a Guid?) such that, for a given key, the result of the reordering and of selecting the shortList is always the same.
Does this make sense?
You could convert your GUID string to an int array of its ASCII/UTF/whatever codes, like here:
Getting The ASCII Value of a character in a C# string.
Then iterate over this array with something along these lines (a rough sketch):
string res="";
for(elem in intconvertedGUIDstring) res+= characters[elem%(characters.count)];
For task (2) you could reverse your characters string, e.g. like here: Best way to reverse a string,
and use Substring() to truncate it before running it through the same procedure.
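Putting it together, a rough self-contained sketch of that idea (the key value is made up; note it deterministically picks characters rather than producing a true permutation):
using System;
using System.Linq;

string characters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
string key = "d3b07384-d9a0-4f6b-bb7a-0d5c2e8f9a11";   // any Guid string, used as the key

// map each character code of the key onto the characters string
string res = new string(key.Select(c => characters[c % characters.Length]).ToArray());

// task (2): truncate to 10 characters
string shortList = res.Substring(0, 10);

Console.WriteLine(res);         // the same key always produces the same output
Console.WriteLine(shortList);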
You can use a hash function with a good distribution value as seed for comparison between elements. Here's a sample:
static ulong GetHash(char value, ulong seed)
{
    ulong hash = seed * 3074457345618258791ul;
    hash += value;
    hash *= 3074457345618258799ul;
    return hash;
}
And use this function for comparison:
static void Main()
{
    var seed = 53ul;
    var str = "ABCDEFHYUXASPOIMNJH";
    var shuffledStr = new string(str.OrderBy(x => GetHash(x, seed)).ToArray());
    Console.WriteLine(shuffledStr);
}
Now every time you order by seed 53 you'll get the same result, and if you seed by 54 you'll get a different result.
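For the 10-character short list (requirement 2), the same seed-based ordering can be reused - just take the first 10 characters of the shuffled result:
var shortList = new string(str.OrderBy(x => GetHash(x, seed)).Take(10).ToArray());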
I have an integer value. I want to convert it to a base-64 value. I tried the following code.
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It gives the output "ewAAAA==", which is 8 characters long.
I convert the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of the previous code is 7B.
Mathematically both represent the same number, so how do they differ? How can I get the corresponding value in base 64 as well? (I found the above base-64 conversion on the internet.)
The question is rather unclear... "How do they differ?" - well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is doing an arithmetic representation; one is a byte serialization format - very different
one is using little-endian arithmetic (assuming a standard CPU), the other is using big-endian arithmetic
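To see the serialization point concretely (the byte values in the comment assume a little-endian machine):
byte[] b = BitConverter.GetBytes(123);          // { 0x7B, 0x00, 0x00, 0x00 } - the four little-endian bytes of the Int32
Console.WriteLine(Convert.ToBase64String(b));   // "ewAAAA==" - base-64 of those 4 bytes, not an arithmetic base-64 number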
To get a comparable base-64 result, you probably need to code it manually (since Convert only supports bases 2, 8, 10 and 16 for arithmetic conversions). Perhaps (note: not optimized):
static void Main()
{
    string b64 = ConvertToBase64Arithmetic(123);
}

// uint because I don't care to worry about sign
static string ConvertToBase64Arithmetic(uint i)
{
    const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    StringBuilder sb = new StringBuilder();

    do
    {
        sb.Insert(0, alphabet[(int)(i % 64)]);
        i = i / 64;
    } while (i != 0);

    return sb.ToString();
}
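For reference, this yields the arithmetic base-64 counterpart of the hex result above:
// usage: var b64 = ConvertToBase64Arithmetic(123);
//        b64 becomes "B7" (1 * 64 + 59 == 123), analogous to the base-16 "7B"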