Is it possible to store a decimal value in JSON format (string, object) within a text file and subsequently retrieve it as a decimal without having to do any type casting / parsing etc.?
For example, the file may also have doubles stored within; these should not be parsed as decimals.
The file would contain no additional information to determine the variable type:
"num1": 4,
"num2": 5.4, (as double)
"num3": 563.2334 (as decimal)
Yes, it is possible with a BinaryWriter:
using (var bw = new BinaryWriter(File.OpenWrite("myFile.txt")))
{
    bw.Write(1234.01m); // actually writes 16 bytes to the file; a double is 8 bytes long
    bw.Write((double)5);
}
This way you avoid parsing a textual representation and load the values straight into memory:
using (var br = new BinaryReader(File.OpenRead("myFile.txt")))
{
    var myDecimal = br.ReadDecimal(); // 1234.01
    var myDouble = br.ReadDouble();   // 5
}
You don't need any additional data. Decimal and Double have different sizes in memory:
Decimal is 16 bytes
Double is 8 bytes
So if you really want to store your values in JSON just to be able to extract them fast: don't do it. It is probably not worth it, and here is why:
JSON is a text format that has to be parsed either way, and the values will be parsed/converted even if you store them inside a byte array. If you prefer speed over readability, use another serialization framework, for example protobuf.
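For instance, a minimal sketch using the protobuf-net package (the Record type, member numbers, and file name below are invented for illustration) that round-trips a decimal and a double without any text parsing in your own code:
// requires the protobuf-net package: using ProtoBuf; using System.IO;
[ProtoContract]
class Record
{
    [ProtoMember(1)] public double Num2 { get; set; }  // stored as a double
    [ProtoMember(2)] public decimal Num3 { get; set; } // stored as a decimal
}

using (var stream = File.Create("myFile.bin"))
    Serializer.Serialize(stream, new Record { Num2 = 5.4, Num3 = 563.2334m });

using (var stream = File.OpenRead("myFile.bin"))
{
    var record = Serializer.Deserialize<Record>(stream);
    // record.Num3 comes back as a decimal; no string parsing in your code
}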
I'm trying to parse a binary STereoLithography file (.stl) in .NET (C#) which has 32-bit floating-point numbers (IEEE 754) in it.
I need to parse those numbers and then later store them in string representation in a PovRay script, which is a plain text file.
I tried this in nodejs using the readFloatLE function, which gives me back a Number (a double-precision value).
In .NET I only found the BitConverter.ToSingle function, which reads the binary 32 bits and gives me a float with less decimal precision (7) than the nodejs parsing.
The nodejs parsing gives a PovRay script with numbers like: -14.203535079956055
While .NET only gives me: -14.2035351
So, how do I parse the binary 32 bits to a Double in .net to get the higher precision?
[Edit]
Using the answer from taffer: casting the converted float to a double and then using the 'round-trip' format specifier for the string representation.
Compared to the nodejs output there are still minor rounding differences, but those are in the 13th to 16th decimals.
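In code, that approach looks roughly like this (a sketch assuming the standard binary STL layout of an 80-byte header, a 32-bit triangle count, and 12 floats plus a 16-bit attribute per triangle; the file name is made up):
// requires: using System.IO; using System.Text;
var sb = new StringBuilder();
using (var br = new BinaryReader(File.OpenRead("model.stl")))
{
    br.ReadBytes(80);                     // skip the binary STL header
    uint triangleCount = br.ReadUInt32(); // number of triangles
    for (uint i = 0; i < triangleCount; i++)
    {
        for (int j = 0; j < 12; j++)      // normal (3 floats) + 3 vertices x 3 floats
        {
            double value = br.ReadSingle();             // 32-bit float widened to double
            sb.Append(value.ToString("R")).Append(' '); // round-trip text for the script
        }
        br.ReadUInt16();                  // attribute byte count, ignored here
    }
}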
You did not lose any precision. Unlike JavaScript, the default .NET ToString uses general number formatting, which may truncate the last digits.
But if you use the round-trip format specifier you get the exact result:
var d = double.Parse("-14.203535079956055");
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
Edit
In JavaScript there is no 32-bit floating-point number type; every number is a double. So even if you use the readFloatLE function, the 32 bits are read and widened to a double. See the C# code below, which demonstrates what actually happens:
var d = (double)float.Parse("-14.2035351");
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
Or even more precisely if you read the numbers from a byte buffer:
var d = (double)BitConverter.ToSingle(new byte[] { 174,65,99,193 }, 0);
Console.WriteLine(d.ToString("R")); // displays -14.203535079956055
So I just learned how to convert ints to binary with Convert.ToString(int, 2);
Now I can use this to ask a user for a number and then save it with
int name = int.Parse(Console.ReadLine());
and then convert it with
Convert.ToString(name, 2);
to get the number in binary.
But I'm wondering whether the user could input it in binary, and I could save it as is, to later convert the user input into decimal with
Convert.ToInt32(binary, 2).ToString();
I know this isn't correct, but just to give you an idea of what I'm looking for, here is an example: instead of int you could use binary, like
binary name = binary.Parse(Console.ReadLine());
You will read a string input from Console.ReadLine, but you can just use Convert.ToInt32(val, 2) to convert a base-2 string to a number.
If you really want to create a binary number type, you will need to define a struct with an implicit or explicit conversion to int, but this seems unnecessary for your task. The key point here is that an int is base agnostic: it is just the value, whatever base the string that produced it was written in.
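Purely for illustration, such a wrapper might be sketched like this (the Binary struct below is invented, not a built-in type):
struct Binary
{
    private readonly int value;
    private Binary(int value) { this.value = value; }

    // parse a base-2 string such as "0100"
    public static Binary Parse(string s)
    {
        return new Binary(Convert.ToInt32(s, 2));
    }

    // let a Binary be used wherever an int is expected
    public static implicit operator int(Binary b) { return b.value; }

    public override string ToString() { return Convert.ToString(value, 2); }
}

// usage: Binary name = Binary.Parse(Console.ReadLine());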
I think you're slightly confused: when you "convert ints to binary" all you're actually doing is converting the int into a string containing its binary representation. It's not actually binary; it's just a string of 1s and 0s that represents a binary number.
To accomplish your goal, all you need to do is:
string binary = Console.ReadLine();
And the user would need to enter "binary" such as 0100. Again, I put binary in quotes because it's still just a string, not a number. If you want the user to enter a number, like 4, and store its binary representation as a string, just do:
string binary = Convert.ToString(int.Parse(Console.ReadLine()), 2);
which will read the user input (4), convert it to an integer, and then convert it to its binary string representation ("100"; note that Convert.ToString does not pad with leading zeros).
To read it back to an int, do Convert.ToInt32(binary, 2);
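Putting the two directions together, a small sketch (the sample value is just an example):
string binary = Console.ReadLine();       // user types e.g. "0100"
int value = Convert.ToInt32(binary, 2);   // 4
string back = Convert.ToString(value, 2); // "100"
Console.WriteLine(value);
Console.WriteLine(back);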
I am using C# to read information coming out of a scale and I am getting back 6 bytes of data. The last two contain the weight, in hexadecimal. The way it is set up is that if you append byte 5 onto byte 4 and convert to decimal you will get the correct weight.
I am trying to do this right now by using ToString on the bytes and appending them, but ToString automatically converts them from hexadecimal to decimal. This happens before I can append them, so I am getting incorrect weights.
Is there any way to convert a byte to a string without it being formatted from hexadecimal to decimal for you?
Use the X format string when calling ToString on your bytes to keep them in hexadecimal. You can append a number to X to specify the number of "digits" you want.
byte b = 0x0A;
b.ToString("X"); // A
b.ToString("X2"); // 0A
We are rewriting some applications previously developed in Visual FoxPro and redeveloping them in .NET (using C#).
Here is our scenario:
Our application uses smartcards. We read in data from a smartcard which has a name and a number. The name comes back fine as readable text, but the number, in this case '900', comes back as a 2-byte character representation (131 & 132) and looks like this: ƒ„
Those 2 special characters can be seen in the extended ASCII table. As you can see, the 2 bytes are 131 and 132, and they can vary, as there is no single standard extended ASCII table (as far as I can tell from reading some of the posts on here).
So the smart card was previously written to using the BINTOC function in VFP, and therefore the 900 was written to the card as ƒ„. Within FoxPro those 2 special characters can be converted back into integer format using the CTOBIN function, another built-in FoxPro function.
So (finally getting to the point), so far we have been unable to convert those 2 special characters back to an int (900), and we are wondering if it is possible in .NET to read the character representation of an integer back into an actual integer.
Or is there a way to rewrite the logic of those 2 VFP functions in C#?
UPDATE:
After some fiddling we realise that to get 900 into 2 bytes we need to convert 900 into a 16-bit binary value, and then convert that 16-bit binary value into a decimal value.
So, as above, we are receiving back 131 and 132, with their corresponding binary values being 10000011 (decimal value 131) and 10000100 (decimal value 132).
When we concatenate these 2 values to '1000001110000100' it gives the decimal value 33668; however, if we remove the leading 1 and transform '000001110000100' to decimal, it gives the correct value of 900.
Not too sure why this is, though.
Any help would be appreciated.
It looks like VFP is storing your value as a signed 16-bit (short) integer. It seems to have a strange changeover point for the negative numbers, but it adds 128 to 8-bit numbers and 32768 to 16-bit numbers.
So converting your 16-bit numbers from the string should be as easy as reading the value as a 16-bit integer and then subtracting 32768 from it. If you have to do this manually, multiply the first number by 256 and add the second number to get the stored value, then subtract 32768 from that number to get your value.
Examples:
131 * 256 = 33536
33536 + 132 = 33668
33668 - 32768 = 900
You could try using the C# conversions as per http://msdn.microsoft.com/en-us/library/ms131059.aspx and http://msdn.microsoft.com/en-us/library/tw38dw27.aspx to do at least some of the work for you but if not it shouldn't be too hard to code the above manually.
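In C#, that manual path is just a few lines (byte values taken from the example above):
byte high = 131;               // first byte read from the card
byte low = 132;                // second byte
int stored = high * 256 + low; // 33668
int value = stored - 32768;    // 900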
It's a few years late, but here's a working example.
// Interprets the bytes as reversed (least significant byte first) unsigned binary.
// Requires: using System.Linq;
public ulong CharToBin(byte[] s)
{
    // accept 1 to 8 bytes; anything else returns 0
    if (s == null || s.Length < 1 || s.Length > 8)
        return 0ul;

    var v = s.Select(c => (ulong)c).ToArray();
    var result = 0ul;
    var multiplier = 1ul;
    for (var i = 0; i < v.Length; i++)
    {
        if (i > 0)
            multiplier *= 256ul; // each subsequent byte is worth 256 times more
        result += v[i] * multiplier;
    }
    return result;
}
This is a VFP 8 and earlier equivalent for CTOBIN, which covers your scenario. You should be able to write your own BINTOC based on the code above. VFP 9 added support for multiple options like non-reversed binary data, currency and double data types, and signed values. This sample only covers reversed unsigned binary like older VFP supported.
Some notes:
The code supports 1, 2, 4, and 8-byte values, which covers all unsigned numeric values up to System.UInt64.
Before casting the result down to your expected numeric type, you should verify the ceiling. For example, if you need an Int32, then check the result against Int32.MaxValue before you perform the cast.
The sample avoids the complexity of string encoding by accepting a byte array. You would need to understand which encoding was used to read the string, then apply that same encoding to get the byte array before calling this function. In the VFP world, this is frequently Encoding.ASCII, but it depends on the application.
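Based on the same layout, a rough sketch of the inverse (a BINTOC-style helper producing the reversed, unsigned byte order that CharToBin above expects; the method name is made up):
public byte[] BinToChar(ulong value, int size)
{
    // size = number of bytes to emit: 1, 2, 4 or 8
    var result = new byte[size];
    for (var i = 0; i < size; i++)
    {
        result[i] = (byte)(value & 0xFF); // least significant byte first ("reversed")
        value >>= 8;
    }
    return result;
}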
Are Binary and ASCII data types defined in C# 2.0?
I plan to detect whether several variables are of Binary, ASCII, or Integer (sbyte, byte, short, ushort, long, ulong) type.
I can use typeof(sbyte) etc.
But I cannot write typeof(binary) or typeof(ascii).
What I need is something like typeof to detect whether a variable is of Binary or ASCII type.
[update]
format    code      Octal
Binary    001000    10
ASCII     010000    20
Normally you would store text data in a string, and binary data in a byte[]. Do not try to store binary data in a string by applying an arbitrary encoding. For example, if you try to use Encoding.UTF8.GetString(binaryData) then it's highly likely that you won't be able to get the original data back again.
If you need to store binary data in a string, you should use something like Convert.ToBase64String (and then Convert.FromBase64String) which stores 3 bytes of binary data in each 4 characters in the string.
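For example, a short round-trip sketch:
byte[] binaryData = { 0x84, 0x41, 0x07 };               // arbitrary sample bytes
string stored = Convert.ToBase64String(binaryData);     // safe to keep in a string
byte[] roundTripped = Convert.FromBase64String(stored); // identical to binaryData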
If you need to store data in a string and the type of the original data, you should either store the type separately, or keep a compound form such as "type:data". Sample strings might then be:
int:10
string:10 // The digits 1 and 0, but originally as text, not a number
binary:hFed234= // Base-64-encoded data
Have a look at System.Text.Encoding and System.Text.Decoder.
You need to attempt to parse it into a fitting data type, trying the candidates in priority order.
Something like this (a rough sketch follows the list):
1) Try to parse integer, if fail continue
2) Try to parse text, if fail continue
3) Save binary
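A rough sketch of that chain (the detection rules and the byte-array input are assumptions; adjust them to your data):
// requires: using System.Linq; using System.Text;
object Classify(byte[] data)
{
    string text = Encoding.ASCII.GetString(data);

    int number;
    if (int.TryParse(text, out number))
        return number;                            // 1) parses as an integer

    if (data.All(b => b >= 0x20 && b < 0x7F))
        return text;                              // 2) printable ASCII, keep as text

    return data;                                  // 3) otherwise keep the raw binary
}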
To autodetect encoding see Determine a string's encoding in C#