byte ToString without converting from hexadecimal - C#

I am using C# to read information coming out of a scale, and I am getting back 6 bytes of data. The last two contain the weight, in hexadecimal. The way it is set up, if you append byte 5 onto byte 4 and convert to decimal, you get the correct weight.
I am trying to do this by calling ToString on the bytes and appending them, but ToString automatically converts them from hexadecimal to decimal. This happens before I can append them, so I am getting incorrect weights.
Is there any way to convert a byte to a string without it being formatted from hexadecimal to decimal for you?

Use the X format string when calling ToString on your bytes to keep them in hexadecimal. You can append a number to X to specify the number of "digits" you want.
byte b = 0x0A;
b.ToString("X"); // A
b.ToString("X2"); // 0A

Related

C# Writing hex string from textbox as bytes

I am trying to convert hex values from a textbox string (e.g. ffff) to 0xffff as an int (this way I can use BinaryWriter to write FFFF as 2 bytes to a file).
I actually used this:
string hextoconvert = Convert.ToInt32(textBox1.Text).ToString("X8");
(But again, I wasn't sure how to convert the string 0002045E to the int 0x0002045E (as 4 bytes).)
If that isn't the right idea then what should I use to convert hex values that the user puts in a textbox TO BYTES?
int.Parse(hexString, System.Globalization.NumberStyles.AllowHexSpecifier);
This worked, thanks m.rogalski!
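For completeness, a minimal sketch of the full round trip - parse the textbox, then write the value as 2 bytes (the file path is made up; note that BinaryWriter writes multi-byte values little-endian, so for a value like 0x0002045E the bytes come out reversed in the file):
int value = int.Parse(textBox1.Text, System.Globalization.NumberStyles.AllowHexSpecifier);
using (var writer = new System.IO.BinaryWriter(System.IO.File.Open(@"D:\out.bin", System.IO.FileMode.Create)))
{
    writer.Write((ushort)value); // ffff -> bytes FF FF
}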

Converting int to hex but not string

Now I know that converting an int to hex is simple, but I have an issue here.
I have an int that I want to convert to hex and then add another hex value to it.
The simple solution is int.ToString("X"), but once my int is turned to hex it is also turned into a string, so I can't add anything to it until it is turned back into an int again.
So my question is: is there a way to turn an int to hex without it also being turned into a string? I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
I mean a quick way such as int.ToString("X") but without the int being turned into a string.
No.
Look at it this way. What is the difference between these two?
var i = 10;
var i = 0xA;
As a value, they are exactly the same. As a representation, the first one is decimal notation and the second one is hexadecimal notation. The X you used is the hexadecimal format specifier, which generates the hexadecimal notation of that numeric value.
Be aware that you can parse this hexadecimal notation string back to an integer anytime you want.
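For example, a quick round trip (nothing here is lossy):
int i = 10;
string hex = i.ToString("X"); // "A"
int back = Convert.ToInt32(hex, 16); // 10 again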
C# convert integer to hex and back again
There is no need to convert. The number ten is ten whether you write it in binary or hex; the representation differs depending on the base you write it in, but the value is the same. So just add another integer to your integer, and convert the final result to a hex string when you need it.
Take an example. Assume you have
int x = 10 + 10; // answer is 20 or 0x14 in Hex.
Now, if instead you wrote
int x = 0x0A + 0x0A; // x == 0x14
The result would still be 0x14. See?
Numeric 10 and 0x0A have the same value; they are just written in different bases.
A hexadecimal string, though, is a different beast.
In the above case that would be the string "0x14".
For the computer, this is stored as '0', 'x', '1', '4' - four separate characters (or the bytes representing those characters in some encoding). An integer, by contrast, is stored as a single number (encoded in binary form).
I guess you are missing the point of what hex and int are. They both represent numbers: 1, 2, 3, 4, etc. There are different ways to write a number - as a decimal int or in hexadecimal - but in the end they are the same numbers. For example, 5 + 5 = 10 (int) or A (hex), but it is the same number; only the views of it differ.
Hex is just a way to represent a number. The same statement is true for the decimal and binary number systems, although, with the exception of some custom-made number types (BigNums etc.), everything will be stored as binary as long as it is an integer (by integer I mean not a floating-point number). What you really want to do is probably perform calculations on integers and then print them as hex, which has already been described in this topic: C# convert integer to hex and back again
The short answer: no, and there is no need.
The integer one hundred and seventy-nine (179) is B3 in hex, 179 in base-10, 10110011 in base-2, and 20122 in base-3. The base doesn't change the value of the number: B3, 179, 10110011, and 20122 are all the same number, just represented differently. As long as you do your mathematical operations on numbers in the same base, it doesn't matter what that base is.
So in your case with hex numbers: they can contain characters such as 'A', 'B', 'C', and so on, so when you get a value whose hex representation contains a letter, it has to be a string, since letters are not ints. To do what you want, it is best to convert both numbers to regular ints, do the math, and convert back to hex afterwards. The reason is that if you want to add (or whatever operation) numbers while they look like hex, you would have to change the behavior of the desired operator on strings, which is a hassle.
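A short sketch of that parse, compute, format-at-the-end pattern (the input strings are made up):
int a = Convert.ToInt32("B3", 16); // 179
int b = Convert.ToInt32("0A", 16); // 10
int sum = a + b; // 189, plain integer math
string result = sum.ToString("X"); // "BD"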

Converting byte array to hexadecimal value using BitConverter class in C#?

I'm trying to convert a byte array into a hexadecimal value using the BitConverter class.
long hexValue = 0X780B13436587;
byte[] byteArray = BitConverter.GetBytes ( hexValue );
string hexResult = BitConverter.ToString ( byteArray );
Now if I execute the above code line by line, this is what I see:
I thought the hexResult string would be the same as hexValue (i.e. 780B13436587h), but what I get is different. Am I missing something? Correct me if I'm wrong.
Thanks!
Endianness.
BitConverter uses CPU-endianness, which for most people means: little-endian. When humans write numbers, we tend to write big-endian (broadly speaking: you write the thousands, then hundreds, then tens, then the digits). For a CPU, big-endian means that the most-significant byte is first and the least-significant byte is last. However, unless you're using an Itanium, your CPU is probably little-endian, which means that the most-significant byte is last, and the least-significant byte is first. The CPU is implemented such that this doesn't matter unless you are peeking inside raw memory - it will ensure that numeric and binary arithmetic still works the way you expect. However, BitConverter works by peeking inside raw memory - hence you see the reversed data.
If you want the value in big-endian format, then you'll need to either:
do it manually in big-endian order, or
check the BitConverter.IsLittleEndian value, and if true, either reverse the input bytes or reverse the output.
If you look closely, the bytes in the output from BitConverter are reversed.
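A minimal sketch of the reverse-the-output approach (the two leading zero bytes appear because a long is 8 bytes):
long hexValue = 0x780B13436587;
byte[] byteArray = BitConverter.GetBytes(hexValue);
if (BitConverter.IsLittleEndian)
    Array.Reverse(byteArray); // flip to big-endian order
string hexResult = BitConverter.ToString(byteArray); // "00-00-78-0B-13-43-65-87"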
To get the hex string for a number directly, you can use the Convert class (note that this produces lowercase digits):
Convert.ToString(hexValue, 16); // "780b13436587"
It is the same number, but reversed.
BitConverter.ToString can return the string representation in reversed order:
http://msdn.microsoft.com/en-us/library/3a733s97(v=vs.110).aspx
"All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian."

File.WriteAllBytes does not change file to binary 10101011

I have some confusion about the function File.WriteAllBytes.
I read from an image file using
byte[] b = System.IO.File.ReadAllBytes(textBox1.Text);
and then I wrote the read data back to a text file to see how it looks:
System.IO.File.WriteAllBytes(@"D:\abc.txt", b);
But the contents of abc.txt are not pure binary (10101011); instead they appear as:
ëžÕwN±k›“ùIRA=Ï¥Dh﬒ȪÊj:³0Æî(À÷«3ÚÉid¤n•O<‰-ª#–¢)cùY³Ö˜K„TûËEÇóþ}wtÑ+²=£v*NÌ!\ äji;âíÇ8ÿ ?犴ö¬€Áç#µ:+ŠVÜ„©³Û?çù~VèÖ·ÂËSŠE7RH8}GJGfT?Ý?çüÿ œÌÊR"6­ÓŠY¬Š¬L§|n¹> ÷’ÃU{D®t­vE!3** Ý× õ¨ã(¨qžO§ùÿ >Ó¥¤…K€#N{ñM(ÊÅ€ûÃŒRtj/²Æ¤¶¹RÁŽxqþÏó#KŒîn皘æ0C/-Ž1Mu>oÊ }é5(­Q¢i±pIôÀôÿ ?çÒÂB-á.ãï©Ú}êB®æÇÌyÿ ?çüU¥mã$”ã
‚DiFQ¸'µ,ARGLäc¯4%ËŸÃœsŸóù~H 3d‚zŠ‡Ø........................................
Are the binary 1s and 0s getting converted to some other number system comprising so many symbols?
A text viewer like Notepad will try to interpret the bytes as text; it will probably interpret them as Unicode.
If you want to see the bytes as digits rather than as characters, read in the image as bytes and convert the byte array to a string. For a hexadecimal representation (two digits per byte) you could use:
public static string ByteArrayToString(byte[] ba)
{
string hex = BitConverter.ToString(ba);
return hex.Replace("-","");
}
The conversion function is copied from here (accepted answer). Remember, this string will no longer be interpretable as an image; it will simply be a large string of hexadecimal digits.
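If you literally want the 0's and 1's rather than hex, a small variation (the method name here is made up) converts each byte to its 8-digit base-2 representation instead:
public static string ByteArrayToBinaryString(byte[] ba)
{
    var sb = new System.Text.StringBuilder(ba.Length * 8);
    foreach (byte b in ba)
        sb.Append(Convert.ToString(b, 2).PadLeft(8, '0')); // e.g. 86 -> "01010110"
    return sb.ToString();
}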
Each byte in the file consists of 8 bits. When you use ReadAllBytes, you get an array of byte instances, where each byte represents a number between 0 and 255 (inclusive). One human-readable representation of the number 86 is 01010110. When you use WriteAllBytes, however, it writes the sequence of bytes in their original form. Notepad then loads the file and displays each byte as a single character (or, in some encodings, treats multiple bytes as a single character). If you were instead to write the text "01010110" to a file so that Notepad shows those digits, you would actually end up writing 8 bytes, not 8 bits, like this, where each set of 8 bits represents the character '0' or '1':
00110000 00110001 00110000 00110001 00110000 00110001 00110001 00110000

Efficient Hex Manipulation

I have a byte array represented by hex values; these are time durations. The data can be converted to integer values and multiplied by a constant to get the timings. The decoded data will be saved to a file as a series of hex strings. What would be an efficient way of manipulating hex values?
I was looking at performance issues when dealing with data formats, as I have to work with more than one format at different stages (calculations, data display, etc.). Most examples show the conversion from byte[] to a hex string ("1A 3C D4") and vice versa, but I was looking for an alternative, which is to convert to Int16 and use a char[] array.
You don't have a byte array representing hex values. You have a byte array representing numbers. The base you represent a number in is only relevant when you're representing it.
To put it a different way: if you thought of your byte array as representing decimal integers instead, how do you imagine it would be different? Is my height different if I represent it in feet and inches instead of metres?
Now, if you're trying to represent 16-bit numbers, I'd suggest that using a byte array is a bad idea. Use a ushort[] or short[] instead, as those are 16-bit values. If you're having trouble getting the data into an array like that, please give details... likewise if you have any other problems with the manipulation. Just be aware that until you're writing the data out as text, there's really no such concept as which base it's in, as far as the computer is concerned.
(Note that this is different for floating point values, where the data really would be different between a decimal and a double, for example... there, the base of representation is part of the data format. It's not for integers. Alternatively, you can think of all integers as just being binary until you decide to format them as text...)
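For instance, a sketch of getting byte pairs into a ushort[] and applying the constant (assuming each pair is a little-endian 16-bit value; the 0.5 scaling constant is invented):
byte[] data = { 0x1A, 0x3C, 0xD4, 0x01 }; // sample raw durations
ushort[] values = new ushort[data.Length / 2];
Buffer.BlockCopy(data, 0, values, 0, data.Length); // reinterpret byte pairs as 16-bit values
double timing = values[0] * 0.5; // 0x3C1A = 15386, so 7693
string hex = values[0].ToString("X4"); // "3C1A", only formatted when written out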
From MSDN:
The hexadecimal ("X") format specifier
converts a number to a string of
hexadecimal digits. The case of the
format specifier indicates whether to
use uppercase or lowercase characters
for hexadecimal digits that are
greater than 9. For example, use "X"
to produce "ABCDEF", and "x" to
produce "abcdef". This format is
supported only for integral types.
The precision specifier indicates the
minimum number of digits desired in
the resulting string. If required, the
number is padded with zeros to its
left to produce the number of digits
given by the precision specifier.
byte x = 60;
string hex = String.Format("0x{0:X4}", x);
Console.WriteLine(hex); // prints "0x003C"
