We are rewriting some applications previously developed in Visual FoxPro, redeveloping them in .NET (using C#).
Here is our scenario:
Our application uses smartcards. We read data from a smartcard which holds a name and a number. The name comes back fine as readable text, but the number, in this case 900, comes back as a 2-byte character representation (131 and 132) and looks like this: ƒ„
Those two special characters can be seen in the extended ASCII table. As you can see, the two bytes are 131 and 132, and they can vary, as there is no single standard extended ASCII table (as far as I can tell from reading some of the posts on here).
So the smartcard was previously written using the BINTOC function in VFP, and therefore 900 was written to the card as ƒ„. Within FoxPro, those two special characters can be converted back into integer format using the CTOBIN function, another built-in FoxPro function.
So (finally getting to the point): so far we have been unable to convert those two special characters back to an int (900), and we are wondering whether it is possible in .NET to read the character representation of an integer back into an actual integer.
Or is there a way to rewrite the logic of those 2 VFP functions in C#?
UPDATE:
After some fiddling we realised that to get 900 into 2 bytes, 900 needs to be converted into a 16-bit binary value, and each of its bytes then read back as a decimal value.
So, as above, we are receiving 131 and 132, whose corresponding binary values are 10000011 (decimal 131) and 10000100 (decimal 132).
When we concatenate these two values into '1000001110000100', it gives the decimal value 33668; however, if we remove the leading 1 and convert '000001110000100' to decimal, it gives the correct value of 900...
Not too sure why this is, though...
Any help would be appreciated.
It looks like VFP is storing your value as a signed 16-bit (short) integer. It seems to have a strange changeover point for the negative numbers, but it adds 128 to 8-bit numbers and 32768 to 16-bit numbers.
So converting your 16-bit numbers from the string should be as easy as reading the two bytes as a 16-bit integer and then subtracting 32768. If you have to do this manually, multiply the first byte by 256 and add the second byte to get the stored value, then subtract 32768 to get your value.
Examples:
131 * 256 = 33536
33536 + 132 = 33668
33668 - 32768 = 900
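If that arithmetic holds, the round trip can be sketched in a few lines (Python here for illustration; the 32768 bias and big-endian byte order come from the worked example above, not from VFP documentation, and the function names are mine):

```python
def ctobin_equiv(b: bytes) -> int:
    # Read the 2 card bytes as big-endian, then remove the 32768 bias
    return int.from_bytes(b, "big") - 32768

def bintoc_equiv(n: int) -> bytes:
    # Add the bias back and emit the 2 bytes the card would store
    return (n + 32768).to_bytes(2, "big")

print(ctobin_equiv(bytes([131, 132])))  # the card's ƒ„ pair → 900
print(bintoc_equiv(900))                # → b'\x83\x84', i.e. 131 and 132
```

The same bias trick is what makes the stored strings sort in numeric order, which is presumably why VFP does it.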
You could try using the C# conversions per http://msdn.microsoft.com/en-us/library/ms131059.aspx and http://msdn.microsoft.com/en-us/library/tw38dw27.aspx to do at least some of the work for you, but if not, it shouldn't be too hard to code the above manually.
It's a few years late, but here's a working example.
public ulong CharToBin(byte[] s)
{
    if (s == null || s.Length < 1 || s.Length > 8)
        return 0ul;

    var result = 0ul;
    var multiplier = 1ul;

    for (var i = 0; i < s.Length; i++)
    {
        if (i > 0)
            multiplier *= 256ul;
        result += s[i] * multiplier;
    }

    return result;
}
This is a VFP 8 and earlier equivalent for CTOBIN, which covers your scenario. You should be able to write your own BINTOC based on the code above. VFP 9 added support for multiple options like non-reversed binary data, currency and double data types, and signed values. This sample only covers reversed unsigned binary like older VFP supported.
Some notes:
The code supports 1-, 2-, 4-, and 8-byte values, which covers all unsigned numeric values up to System.UInt64.
Before casting the result down to your expected numeric type, you should verify the ceiling. For example, if you need an Int32, then check the result against Int32.MaxValue before you perform the cast.
The sample avoids the complexity of string encoding by accepting a byte array. You would need to understand which encoding was used to read the string, then apply that same encoding to get the byte array before calling this function. In the VFP world, this is frequently Encoding.ASCII, but it depends on the application.
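For what it's worth, the loop above is just a reversed (least-significant-byte-first) unsigned read, which most languages have built in. A Python sketch of the same accumulation, next to the one-liner it reduces to:

```python
def char_to_bin(s: bytes) -> int:
    # Same accumulation as the C# loop: byte 0 is least significant
    if not s or len(s) > 8:
        return 0
    result, multiplier = 0, 1
    for i, b in enumerate(s):
        if i > 0:
            multiplier *= 256
        result += b * multiplier
    return result

# Equivalent built-in: int.from_bytes(s, "little")
print(char_to_bin(bytes([0x84, 0x83])))  # → 33668
```

Note the byte order: for the big-endian card data in the question you would reverse the bytes first.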
I'm working with a binary file (3d model file for an old video game) in C#. The file format isn't officially documented, but some of it has been reverse-engineered by the game's community.
I'm having trouble understanding how to read/write the 4-byte floating point values. I was provided this explanation by a member of the community:
For example, the bytes EE 62 ED FF represent the value -18.614.
The bytes are little endian ordered. The first 2 bytes represent the decimal part of the value, and the last 2 bytes represent the whole part of the value.
For the decimal part, 62 EE converted to decimal is 25326. This represents the fraction out of 65536, i.e. 25326/65536. Thus, divide 25326 by 65536 and you'll get 0.386.
For the whole part, FF ED converted to decimal is 65517. 65517 represents the whole number -19 (which is 65517 - 65536).
This makes the value -19 + .386 = -18.614.
This explanation mostly makes sense, but I'm confused by 2 things:
Does the magic number 65536 have any significance?
BinaryWriter.Write(-18.613f) writes the bytes as 79 E9 94 C1, so my assumption is the binary file I'm working with uses its own proprietary method of storing 4-byte floating point values (i.e. I can't use C#'s float interchangably and will need to encode/decode the values first)?
Firstly, this isn't a floating-point number, it's a fixed-point number.
Note: a fixed-point number has a specific number of bits (or digits) reserved for the integer part (the part to the left of the decimal point).
Does the magic number 65536 have any significance?
It's the maximum number of values an unsigned 16-bit number can hold, or 2^16. Yes, it's significant, because the number you are working with is 2 × 16-bit values encoded as integral and fractional components.
so my assumption is the binary file I'm working with uses its own
proprietary method of storing 4-byte floating point values
Nope, wrong again: floating-point values in .NET adhere to the IEEE Standard for Floating-Point Arithmetic (IEEE 754) technical standard.
When you use BinaryWriter.Write(float), it basically just shifts the bits into bytes and writes them to the Stream:
uint TmpValue = *(uint *)&value;
_buffer[0] = (byte) TmpValue;
_buffer[1] = (byte) (TmpValue >> 8);
_buffer[2] = (byte) (TmpValue >> 16);
_buffer[3] = (byte) (TmpValue >> 24);
OutStream.Write(_buffer, 0, 4);
If you want to read and write this special value, you will need to do the same thing: read and write the bytes and convert them yourself.
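As a sketch of that conversion (Python for illustration), the four bytes can be read as a single little-endian signed 32-bit integer and divided by 65536, which is the standard way to decode 16.16 fixed point; the example bytes are the ones from the question:

```python
import struct

def read_fixed_16_16(raw: bytes) -> float:
    # One little-endian signed 32-bit read, then scale down by 2^16
    (n,) = struct.unpack("<i", raw)
    return n / 65536

def write_fixed_16_16(value: float) -> bytes:
    # Scale up by 2^16, round to the nearest representable step
    return struct.pack("<i", round(value * 65536))

val = read_fixed_16_16(bytes.fromhex("EE62EDFF"))
print(round(val, 3))  # → -18.614
```

Treating the whole 4 bytes as one signed integer gives the same result as the two-halves explanation above, because the sign carry from the whole part falls out naturally.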
This should be a built-in value unique to the game.
It is more like a fraction value, where 62 EE represents the fractional part of the value and FF ED represents the whole-number part.
The whole-number part is easy to understand, so I'm not going to explain it.
The explanation of the fractional part is:
For every 2 bytes, there are 65536 possibilities (0 ~ 65535).
256 × 256 = 65536, hence the magic number 65536.
And the game itself must have a built-in algorithm to divide the first 2 bytes by 65536.
Choosing any number other than this would waste memory space and reduce the accuracy of the values that can be represented.
Of course, it all depends on what kind of accuracy the game wishes to present.
I have a line in my function that calculates the sum of two digits.
I get the sum with this syntax:
sum += get2DigitSum((acctNumber[0] - '0') * 2);
which multiplies the number at index 0 by 2.
public static int get2DigitSum(int num)
{
    return (num / 10) + (num % 10);
}
Let's say we have the number 9 at index 0. If I have acctNumber[0] - '0', it passes the 9 into the other function. But if I don't have the - '0' after acctNumber[0], it passes 12. I don't understand why I get the wrong result if I don't use - '0'.
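A quick sketch of what's going on (Python here; ord() plays the role of reading the char as its ASCII number):

```python
digit_char = '9'

# With the subtraction: the character '9' becomes the number 9
digit = ord(digit_char) - ord('0')

def get_2_digit_sum(num: int) -> int:
    # Sum of the tens digit and the ones digit
    return num // 10 + num % 10

print(get_2_digit_sum(digit * 2))            # 9 * 2 = 18 → 1 + 8 = 9
print(get_2_digit_sum(ord(digit_char) * 2))  # 57 * 2 = 114 → 11 + 4 = 15, wrong
```

Without the subtraction, the function receives the character code (57 for '9') rather than the digit, so every result is shifted.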
The text "0" and the number 0 are not at all equal to a computer.
The character '0' has ASCII code 48 (0x30 in hex), so to convert the character '0' into the number 0 you need to subtract 48. In C and most languages based on it, this can be written as subtracting the character '0', which has the numerical value 48.
The beauty is that the character '1' has ASCII code 49, so subtracting 48 (or the character '0') gives 49 - 48 = 1, and so on.
So the important part is: computers are not only sensitive to data (patterns of bits in some part of the machine) but also to the interpretation of that data. In your case, interpreting it as text and interpreting it as a number are not the same; they differ by 48, which you need to get rid of with a subtraction.
Because you are passing acctNumber[0] to get2DigitSum.
get2DigitSum accepts an integer, but acctNumber[0] is not an integer; it holds a char, which represents a character with an integer value.
Therefore, you need to subtract '0' to get the integer.
'0' to '9' have ASCII values of 48 to 57.
When you subtract two char values, their ASCII values actually get subtracted. That's why you need to subtract '0'.
Internally, all characters are represented as numbers; they only get converted into nice pictograms during display.
Now, the digits 0-9 are ASCII codes 48-57. Basically they are offset by +48. Past 57 you find the English alphabet in lowercase and then uppercase, and before that various operators and even a bunch of unprintable characters.
Normally you would not be doing this kind of math at all. You would feed the whole string into a Parse() or TryParse() function and then work with the parsed numbers. There are a few cases where you would not do that and instead go for "math with characters":
you did not know about Parse and integers when you wrote it
you want to support arbitrarily sized numbers in your calculations. This is a common beginner approach (the proper way is BigInteger).
you might be doing stuff like sorting mixed letter/number strings by the fully interpreted number (so 01 would come before 10), the same way Windows sorts files with numbers in them
you do not have a prewritten parse function, like I did back when I started learning C++ in 2000
I am working with text readable files which are exported from a client's systems that use a custom XML-like structure. I need to be able to parse and extract data from large numbers of these files with no documentation on how they are structured.
I have mostly worked out the file structure, however I am struggling with how values have been encoded. I can manually look up in the system the correct values as a comparison. Some examples:
Export Data = System Value
D411E848 = 500000
D40F86A = 100000
D41086A = 200000
I'm fairly sure the leading "D" is a token indicating the field is a decimal or double value, since all numeric fields start with "D" and all text fields start with "S". The following "4" may also be part of the field data type, as all numeric fields seem to start with "D4".
However, converting any combination of the export data value from hex to decimal does not yield the correct result.
Any ideas how to do the conversion?
Extra data mappings:
Value Export File
1 D3FF
2 D4
3 D4008
4 D401
5 D4014
6 D4018
7 D401C
8 D402
9 D4022
10 D4024
100 D4059
1000 D408F4
100000 D40F86A
500000 D411E848
500001 D411E8484
500002 D411E8488
500003 D411E848C
500004 D411E849
500005 D411E8494
500006 D411E8498
500007 D411E849C
500008 D411E84A
500009 D411E84A4
500010 D411E84A8
Seems like a normal, but truncated, IEEE 754 64-bit (double precision) number.
0x408F400000000000 = 1000
408F4 (truncated)
D408F4 (prefixed with D)
0x411E848000000000 = 500000
411E848 (truncated)
D411E848 (prefixed with D)
Try converting it with the following website as a reference: http://www.binaryconvert.com/result_double.html?decimal=053048048048048048
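That hypothesis is easy to check mechanically: strip the leading D, right-pad the remaining hex digits with zeros to 16 characters (8 bytes), and reinterpret the result as a big-endian IEEE 754 double. A Python sketch (the function name is mine):

```python
import struct

def decode_export(field: str) -> float:
    # Drop the 'D' marker, pad the truncated hex back to a full 8-byte double
    padded = field[1:].ljust(16, "0")
    (value,) = struct.unpack(">d", bytes.fromhex(padded))
    return value

print(decode_export("D408F4"))    # → 1000.0
print(decode_export("D411E848"))  # → 500000.0
print(decode_export("D3FF"))      # → 1.0
```

This also explains the short "D3FF" for 1: the double 1.0 is 0x3FF0000000000000, and all its trailing zeros were truncated away.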
I can see the pattern, starting from 2. Here are the steps to get the decimal value from your custom format.
1. Skip D4 at the beginning of the string.
2. If LEN() < 3, pad with 0s to get a string at least 3 characters long.
3. Take the first 2 characters of the string and convert them from hex to decimal.
4. Add 1 to the number from step 3.
5. Take the rest of the string, skipping the first 2 characters.
6. Convert the text from step 5 from hex to decimal.
7. Calculate POW(16, LEN(T)), where T is the text from step 5.
8. Calculate X / Y, where X is the number from step 6 and Y is the number from step 7.
9. Calculate the final result: POW(2, X) * (1 + Y), where X comes from step 4 and Y comes from step 8.
It may look quite complicated, but it's actually quite simple.
I've created an Excel Web App spreadsheet with the results of all these steps for your sample inputs: http://sdrv.ms/1bO0wnz
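The steps above can be sketched directly (Python for illustration). As noted, this only covers values of 2 and up, where the exponent field of the underlying double begins with hex 4:

```python
def decode_steps(field: str) -> float:
    t = field[2:].ljust(3, "0")          # steps 1-2: skip "D4", pad to 3 chars
    x = int(t[:2], 16) + 1               # steps 3-4: the power-of-two exponent
    rest = t[2:]                         # step 5: remaining hex digits
    y = int(rest, 16) / 16 ** len(rest)  # steps 6-8: fractional part in [0, 1)
    return 2 ** x * (1 + y)              # step 9: final result

print(decode_steps("D4059"))   # → 100.0
print(decode_steps("D408F4"))  # → 1000.0
print(decode_steps("D4"))      # → 2.0
```

It gives the same answers as the truncated-double reading, because steps 3-4 are just recovering the biased exponent and steps 5-8 the mantissa.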
I don't come from a low-level development background, so I'm not sure how to convert the below instruction to an integer...
Basically I have a microprocessor which tells me which IO's are active or inactive. I send the device an ASCII command and it replies with a WORD about which of the 15 I/O's are open/closed... here's the instruction:
Unit Answers "A0001/" for only DIn0 on, "A????/" for All Inputs Active
Awxyz/ - w=High Nibble of MSB in 0 to ? ASCII Character 0001=1, 1111=?, z=Low Nibble of LSB.
At the end of the day I just want to be able to convert it back into a number which will tell me which of the 15 (or 16?) inputs are active.
I have something hooked up to the 15th I/O port, and the reply I get is "A8000", if that helps?
Can someone clear this up for me please?
You can use the BitConverter class to convert an array of bytes to the integer format you need.
If you're getting 16 bits, convert them to a UInt16.
C# does not define the endianness. That is determined by the hardware you are running on. Intel and AMD processors are little-endian. You can learn the endian-ness of your platform using BitConverter.IsLittleEndian. If the computer running .NET and the hardware providing the data do not have the same endian-ness, you would have to swap the two bytes.
byte[] inputFromHardware = { 126, 42 };
ushort value = BitConverter.ToUInt16( inputFromHardware, 0 );
which of the 15 (or 16?) inputs are active
If the bits are hardware flags, it is plausible that all 16 are used to mean something. When bits are used to represent a signed number, one of the bits is used to represent positive vs. negative. Since the hardware is providing status bits, none of the bits should be interpreted as a sign.
If you want to know if a specific bit is active, you can use a bit mask along with the & operator. For example, the binary mask
0000 0000 0000 0100
corresponds to the hex number
0x0004
To learn if the third bit from the right is toggled on, use
bool thirdBitFromRightIsOn = ((value & 0x0004) != 0);
UPDATE
If the manufacturer says the value 8000 (I assume hex) represents Channel 15 being active, look at it like this:
Your bit mask
1000 0000 0000 0000 (binary)
8 0 0 0 (hex)
Based in that info from the manufacturer, the left-most bit corresponds to Channel 15.
You can use that mask like:
bool channel15IsOn = ((value & 0x8000) != 0);
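Putting the pieces together, here's a sketch (Python for illustration) that decodes a reply like "A8000/" on the assumption, per the device docs quoted above, that each of the four characters after the 'A' encodes one nibble as ASCII '0' plus the nibble value (so 1111 becomes '?'):

```python
def decode_reply(reply: str) -> int:
    # 4 nibble characters follow the leading 'A'
    value = 0
    for c in reply[1:5]:
        value = (value << 4) | (ord(c) - ord('0'))
    return value

word = decode_reply("A8000/")
print(hex(word))           # → 0x8000
print(word & 0x8000 != 0)  # channel 15 active → True
```

With the word decoded, any channel can be tested with a shifted mask: `word & (1 << channel)`.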
A second choice (but for Nibbles only) is to use the Math library like this:
string x = "A0001/";
int y = 0;
for (int i = 3; i >= 0; i--)
{
    if (x[4 - i] == '1')
        y += (int)Math.Pow(2, i);
}
Using the ability to treat a string as an array of characters just by using the brackets, and comparing each character directly, you can hardcode the conversion from bits to an integer.
It is a novice solution, but I think it's a proper one too.
Part of my application data contains a set of 9 ternary (base-3) "bits". To keep the data compact for the database, I would like to store that data as a single short. Since 3^9 < 2^15 I can represent any possible 9 digit base-3 number as a short.
My current method is to work with it as a string of length 9. I can read or set any digit by index, and it is nice and easy. To convert it to a short though, I am currently converting to base 10 by hand (using a shift-add loop) and then using Int16.Parse to convert it back to a binary short. To convert a stored value back to the base 3 string, I run the process in reverse. All of this takes time, and I would like to optimize it if at all possible.
What I would like to do is always store the value as a short, and read and set ternary bits in place. Ideally, I would have functions to get and set individual digits from the binary in place.
I have tried playing with some bit shifts and mod functions, but haven't quite come up with the right way to do this. I'm not even sure whether it is possible without going through the full conversion.
Can anyone give me any bitwise arithmetic magic that can help out with this?
public class Base3Handler
{
    private static int[] idx = {1, 3, 9, 27, 81, 243, 729, 729*3, 729*9, 729*81};

    public static byte ReadBase3Bit(short n, byte position)
    {
        if (position > 8)
            throw new Exception("Out of range...");
        return (byte)((n % idx[position + 1]) / idx[position]);
    }

    public static short WriteBase3Bit(short n, byte position, byte newBit)
    {
        byte oldBit = ReadBase3Bit(n, position);
        return (short)(n + (newBit - oldBit) * idx[position]);
    }
}
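The same read/write-in-place idea, sketched in Python with integer division and modulo doing the digit extraction:

```python
POW3 = [3 ** i for i in range(10)]  # 1, 3, 9, ..., 19683

def read_trit(n: int, position: int) -> int:
    # Base-3 digit at 'position', least significant first
    return (n // POW3[position]) % 3

def write_trit(n: int, position: int, trit: int) -> int:
    # Replace one base-3 digit without disturbing the others
    return n + (trit - read_trit(n, position)) * POW3[position]

n = 0
n = write_trit(n, 8, 2)  # most significant trit
n = write_trit(n, 0, 1)
print(n)                 # → 13123  (2*6561 + 1)
```

The subtraction in write_trit is what lets the update work in place: it cancels the old digit's contribution before adding the new one.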
These are small numbers. Store them as you wish, efficiently in memory, but then use a table lookup to convert from one form to another as needed.
You can't do bit operations on ternary values. You need to use multiply, divide, and modulo to extract and combine values.
To use bit operations you would need to limit the packing to 8 ternary digits per short (i.e. 2 bits each).