Question on C# data types

Were Binary and ASCII data types defined in C# 2.0?
I want to detect whether several variables are Binary, ASCII, or integer (sbyte, byte, short, ushort, long, ulong) types.
I can use typeof(sbyte) etc., but something like typeof(binary) or typeof(ascii) fails to compile.
Is there a typeof-style way to detect whether a variable is a Binary or ASCII type?
[update]
Format   Code     Octal
Binary   001000   10
ASCII    010000   20

Normally you would store text data in a string, and binary data in a byte[]. Do not try to store binary data in a string by applying an arbitrary encoding. For example, if you try to use Encoding.UTF8.GetString(binaryData) then it's highly likely that you won't be able to get the original data back again.
If you need to store binary data in a string, you should use something like Convert.ToBase64String (and then Convert.FromBase64String) which stores 3 bytes of binary data in each 4 characters in the string.
If you need to store data in a string and the type of the original data, you should either store the type separately, or keep a compound form such as "type:data". Sample strings might then be:
int:10
string:10 // The digits 1 and 0, but originally as text, not a number
binary:hFed234= // Base-64-encoded data
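The Base64 round trip described above can be sketched like this (the sample bytes and the "type:data" split are illustrative, not from the original answer):

```csharp
using System;

class Base64RoundTrip
{
    static void Main()
    {
        byte[] original = { 0x84, 0x11, 0xF7, 0xDB, 0x6D }; // arbitrary binary data
        string stored = "binary:" + Convert.ToBase64String(original);

        // Split the compound "type:data" form back apart.
        int colon = stored.IndexOf(':');
        string type = stored.Substring(0, colon);
        byte[] restored = Convert.FromBase64String(stored.Substring(colon + 1));

        Console.WriteLine(type);             // binary
        Console.WriteLine(restored.Length);  // 5 - every original byte survives
    }
}
```

Unlike an arbitrary text encoding, Base64 is lossless for any byte sequence, which is why the restored array always matches the original.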

Have a look at System.Text.Encoding and System.Text.Decoder.

You need to attempt to parse it into a fitting data type, trying the candidates in priority order.
Something like:
1) Try to parse it as an integer; if that fails, continue
2) Try to parse it as text; if that fails, continue
3) Save it as binary
To autodetect encoding see Determine a string's encoding in C#
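One possible sketch of that priority chain (the Classify helper and its sample inputs are hypothetical; a strict UTF-8 decoder stands in for "is this text"):

```csharp
using System;
using System.Text;

class DetectKind
{
    // Classify a payload as integer, text, or binary, in that priority order.
    static string Classify(byte[] data)
    {
        string text;
        try
        {
            // Strict UTF-8 decoding: throws on invalid byte sequences.
            text = new UTF8Encoding(false, true).GetString(data);
        }
        catch (DecoderFallbackException)
        {
            return "binary"; // 3) not decodable as text, keep as raw bytes
        }

        if (long.TryParse(text, out _))
            return "integer"; // 1) parses as a whole number
        return "text";        // 2) valid text, but not a number
    }

    static void Main()
    {
        Console.WriteLine(Classify(Encoding.UTF8.GetBytes("1234")));  // integer
        Console.WriteLine(Classify(Encoding.UTF8.GetBytes("hello"))); // text
        Console.WriteLine(Classify(new byte[] { 0xFF, 0xFE, 0x00 })); // binary
    }
}
```

Note that 0xFF and 0xFE can never appear in valid UTF-8, so the strict decoder reliably rejects that last payload.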

Related

C# - Store a decimal in file and read back

Is it possible to store a decimal value in JSON format (string,object) within a text file and subsequently retrieve it as a decimal without having to do any type casting / parsing etc?
For example the file may also have doubles stored within, these should not be parsed as decimals.
The file would contain no additional information to determine the variable type:
"num1": 4,
"num2": 5.4, (as double)
"num3": 563.2334 (as decimal)
Yes, it is possible using a BinaryWriter:
using(var bw = new BinaryWriter(File.OpenWrite("myFile.txt")))
{
bw.Write(1234.01m); // 16 bytes are actually written to the file; a double is 8 bytes long
bw.Write((double)5);
}
This way you will be able to avoid parsing a textual representation and load the values right into memory:
using(var br = new BinaryReader(File.OpenRead("myFile.txt")))
{
var myDecimal = br.ReadDecimal();//1234.01
var myDouble = br.ReadDouble();//5
}
You don't need any additional data. Decimal and Double have different sizes in memory:
Decimal is 16 bytes
Double is 8 bytes
So if you want to store your values in JSON just to be able to extract them quickly, don't do it. It is probably not worth it, and this is why:
JSON is a text format that has to be parsed either way; it will be parsed/converted even if you store the values inside a byte array. If you prefer speed over readability, use another serialization framework, for example protobuf.

C# Save user input as binary

So I just learned how to convert ints to binary with Convert.ToString(int , 2);
Now I can use this so ask a user for a number and then save it with
int name = int.Parse(Console.ReadLine());
and then convert it with
Convert.ToString(name , 2);
to get the number in binary.
But I'm wondering if the user could input it in binary and then I could save it as is, to later convert the user input into decimal with
Convert.ToInt32(binary,2).ToString();
I know this isn't correct, but just to give you an idea of what I'm looking for, here is an example: instead of int you could use binary, like
binary name = binary.Parse(Console.ReadLine());
You will read a string input from Console.ReadLine - but you can just use Convert.ToInt32(val, 2) to convert a base 2 string to a number.
If you really want to create a binary number type, you will need to define a struct with implicit or explicit conversion to an int, but this seems unnecessary for your task. The key point here is that an int is base-agnostic: it holds the value itself, regardless of which base the string that represented it was written in.
I think you're slightly confused: when you "convert ints to binary", all you're actually doing is converting the int into a string containing its binary representation. It's not actually binary; it's just a string of 1s and 0s that represents a binary number.
To accomplish your goal, all you need to do is:
string binary = Console.ReadLine();
And the user would need to enter "binary" such as 0100. Again, I put binary in quotes because it's still just a string, not a number. If you want the user to enter a number, like 4, and store it as a binary string, just do:
string binary = Convert.ToString(int.Parse(Console.ReadLine()), 2);
which will read the user input (4), convert it to an integer, and then convert it to its binary string representation (100; note that Convert.ToString does not pad with leading zeros).
To read it back to an int, do Convert.ToInt32(binary, 2);
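The whole round trip can be sketched like this (the hard-coded input stands in for Console.ReadLine(), so the example is self-contained):

```csharp
using System;

class BinaryInput
{
    static void Main()
    {
        // Simulated user input; in the real program this would be Console.ReadLine().
        string input = "0100";

        int value = Convert.ToInt32(input, 2);    // parse the base-2 string -> 4
        string back = Convert.ToString(value, 2); // format it back -> "100"

        Console.WriteLine(value); // 4
        Console.WriteLine(back);  // 100 (leading zeros are not preserved)
    }
}
```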

Converting byte array to hexadecimal value using BitConverter class in c#?

I'm trying to convert a byte array into a hexadecimal value using the BitConverter class.
long hexValue = 0X780B13436587;
byte[] byteArray = BitConverter.GetBytes ( hexValue );
string hexResult = BitConverter.ToString ( byteArray );
Now if I execute the above code line by line, the hexResult I see is not what I expected.
I thought the hexResult string would be the same as hexValue (i.e. 780B13436587), but what I get is different. Am I missing something? Correct me if I'm wrong.
Thanks!
Endianness.
BitConverter uses CPU-endianness, which for most people means: little-endian. When humans write numbers, we tend to write big-endian (broadly speaking: you write the thousands, then hundreds, then tens, then the digits). For a CPU, big-endian means that the most-significant byte is first and the least-significant byte is last. However, unless you're using an Itanium, your CPU is probably little-endian, which means that the most-significant byte is last, and the least-significant byte is first. The CPU is implemented such that this doesn't matter unless you are peeking inside raw memory - it will ensure that numeric and binary arithmetic still works the way you expect. However, BitConverter works by peeking inside raw memory - hence you see the reversed data.
If you want the value in big-endian format, then you'll need to:
do it manually in big-endian order
check the BitConverter.IsLittleEndian value, and if true:
either reverse the input bytes
or reverse the output
If you look closely, the bytes in the output from BitConverter are reversed.
To get the hex-string for a number, you use the Convert class:
Convert.ToString(hexValue, 16);
It is the same number but reversed.
BitConverter.ToString can return the string representation in reversed byte order:
http://msdn.microsoft.com/en-us/library/3a733s97(v=vs.110).aspx
"All the elements of value are converted. The order of hexadecimal strings returned by the ToString method depends on whether the computer architecture is little-endian or big-endian."

Happy 3501.ToString("X") day!

In C#, is there a way to convert an int value to a hex value without using the .ToString("X") method?
Your question is plain wrong (no offense intended). A number has one single value. Hex, decimal, binary, octal, etc. are just different representations of the same integral number. Int32 is agnostic when it comes to what representation you choose to write it with.
So when you ask:
is there a way to convert an int value to a hex value
you are asking something that doesn't make sense. A valid question would be: is there any way to write an integer in hexadecimal representation that doesn't involve using .ToString("X")?
The answer is: not really. One way or another (directly by you or not), .ToString("X") or some other flavor of ToString() will be called to format the string representing the value.
And when you think of hexadecimal as a representation (a formatted string) of a given number, then .ToString() does make sense.
Use Convert.ToString( intValue, 16 );
It can be used to convert between any common numeric base, i.e., binary, octal, decimal and hexadecimal.
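A quick sketch of Convert.ToString across the bases it supports (2, 8, 10, 16), with .ToString("X2") shown for comparison:

```csharp
using System;

class BaseConversions
{
    static void Main()
    {
        int value = 255;
        Console.WriteLine(Convert.ToString(value, 2));  // 11111111
        Console.WriteLine(Convert.ToString(value, 8));  // 377
        Console.WriteLine(Convert.ToString(value, 16)); // ff  (lowercase, no padding)
        Console.WriteLine(value.ToString("X2"));        // FF  (uppercase, padded)
    }
}
```

Note that Convert.ToString always produces lowercase hex digits, whereas the "X"/"x" format specifier lets you choose the case and a minimum width.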

Efficient Hex Manipulation

I have a byte array represented by hex values, these are time durations. The data could be converted to integer values and multiplied by a constant to get the timings. The decoding of the data will be saved to a file as a series of hex strings. What would be an efficient way of manipulating hex values?
I was looking at performance issues when dealing with data formats, as I have to work with more than one format at different stages (calculations, data display, etc.). Most examples show the conversion from byte[] to a hex string ("1A 3C D4") and vice versa, but I was looking for an alternative, which is to convert to Int16 and use a char[] array.
You don't have a byte array representing hex values. You have a byte array representing numbers. The base you represent a number in is only relevant when you're representing it.
To put it a different way: if you thought of your byte array as representing decimal integers instead, how do you imagine it would be different? Is my height different if I represent it in feet and inches instead of metres?
Now, if you're trying to represent 16-bit numbers, I'd suggest that using a byte array is a bad idea. Use a ushort[] or short[] instead, as those are 16-bit values. If you're having trouble getting the data into an array like that, please give details... likewise if you have any other problems with the manipulation. Just be aware that until you're writing the data out as text, there's really no such concept as which base it's in, as far as the computer is concerned.
(Note that this is different for floating point values, where the data really would be different between a decimal and a double, for example... there, the base of representation is part of the data format. It's not for integers. Alternatively, you can think of all integers as just being binary until you decide to format them as text...)
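Getting the byte array into a ushort[], as suggested above, could be sketched like this (the sample bytes are hypothetical 16-bit durations; BitConverter interprets them in the CPU's byte order, little-endian on most machines):

```csharp
using System;

class BytesToUShorts
{
    static void Main()
    {
        // Hypothetical input: raw bytes holding two 16-bit little-endian durations.
        byte[] raw = { 0x3C, 0x00, 0x10, 0x27 }; // 0x003C = 60, 0x2710 = 10000

        ushort[] durations = new ushort[raw.Length / 2];
        for (int i = 0; i < durations.Length; i++)
            durations[i] = BitConverter.ToUInt16(raw, i * 2);

        // From here on, work with the numbers; format as hex only for output.
        Console.WriteLine(durations[0]); // 60
        Console.WriteLine(durations[1]); // 10000
    }
}
```

The manipulation (multiplying by a timing constant, etc.) then happens on plain numbers; the hex form only matters at the point where the results are written out as text.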
From MSDN:
The hexadecimal ("X") format specifier converts a number to a string of hexadecimal digits. The case of the format specifier indicates whether to use uppercase or lowercase characters for hexadecimal digits that are greater than 9. For example, use "X" to produce "ABCDEF", and "x" to produce "abcdef". This format is supported only for integral types.
The precision specifier indicates the minimum number of digits desired in the resulting string. If required, the number is padded with zeros to its left to produce the number of digits given by the precision specifier.
byte x = 60;
string hex = String.Format("0x{0:X4}", x);
Console.WriteLine(hex); // prints "0x003C"
