How to convert unpacked decimal back to COMP-3? [closed] - c#

I previously asked a question about converting COMP fields, for which I did not get an answer.
I hope Stack Overflow can help me with this question.
I have succeeded in converting COMP-3 to decimal. I need your help converting the unpacked decimal back to COMP-3, in any high-level programming language, but preferably in Java or C#/.NET.

In packed decimal, -123 is represented as X'123d' (the last nibble, c, d or f, being the sign).
One of the simplest ways to handle packed decimal is to simply convert the bytes to a hex string (or vice versa, as required) and then use normal string manipulation. This may not be the most efficient approach, but it is easy to implement.
So converting an integer (value) to packed decimal looks roughly like this (note: I have not tested the code):
String sign = "c";
if (value < 0) {
sign = "d";
value = -1 * value;
}
String val = value + "d"
byte[] comp3Bytes = new BigInteger(val, 16).toByteArray();
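Since the question asks for Java or C#, here is a minimal, untested C# sketch of the same hex-string approach; the Pack/Unpack method names are mine, not from any library:

using System;
using System.Globalization;

static class Comp3
{
    // Pack an integer (e.g. -123) into COMP-3 bytes via a hex string
    public static byte[] Pack(long value)
    {
        string sign = value < 0 ? "d" : "c";
        string digits = Math.Abs(value).ToString() + sign;
        if (digits.Length % 2 != 0) digits = "0" + digits; // pad to a whole number of bytes
        byte[] packed = new byte[digits.Length / 2];
        for (int i = 0; i < packed.Length; i++)
            packed[i] = byte.Parse(digits.Substring(i * 2, 2), NumberStyles.HexNumber);
        return packed;
    }

    // Unpack COMP-3 bytes back to a long, again via a hex string
    public static long Unpack(byte[] packed)
    {
        string hex = BitConverter.ToString(packed).Replace("-", "");
        char signNibble = char.ToLower(hex[hex.Length - 1]);
        long magnitude = long.Parse(hex.Substring(0, hex.Length - 1));
        return signNibble == 'd' ? -magnitude : magnitude;
    }
}

For example, Comp3.Pack(-123) yields { 0x12, 0x3D }, and Comp3.Unpack reverses it.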
The following is some example code for converting to/from COMP-3.
To retrieve a packed decimal from an array of bytes see method getMainframePackedDecimal
in http://record-editor.svn.sourceforge.net/viewvc/record-editor/Source/JRecord/src/net/sf/JRecord/Common/Conversion.java?revision=3&view=markup
and to set a packed decimal see
setField in http://record-editor.svn.sourceforge.net/viewvc/record-editor/Source/JRecord/src/net/sf/JRecord/Types/TypePackedDecimal.java?revision=3&view=markup
Both routines take an array of bytes, a start position, and either a field length or a field position.
There are other examples of doing this on the web (JavaRanch, I think, has code for doing the conversion as well); do a bit of googling.

Converting zoned decimal to comp-3 is quite easy -- flip the nibbles of the low byte and strip off the high nibble of all other bytes.
Consider the number 12345 -- in packed decimal notation, that would be x'12345C' or x'12345F' (both C and F are +; A, B and D are -). When you converted it to zoned decimal, you flipped the nibbles of the low byte and inserted an "F" in the high nibble of every other digit, turning it into x'F1F2F3F4C5'.
To convert it back, you just reverse the process. Using Java, that would look like:
byte[] myDecimal = { (byte) 0xF1, (byte) 0xF2, (byte) 0xF3, (byte) 0xF4, (byte) 0xC5 };
byte[] myPacked = new byte[3];
// Even digit's low nibble moves to the high nibble and merges with the odd digit's low nibble
myPacked[0] = (byte) (((myDecimal[0] & 0b00001111) << 4) | (myDecimal[1] & 0b00001111));
myPacked[1] = (byte) (((myDecimal[2] & 0b00001111) << 4) | (myDecimal[3] & 0b00001111));
// Last byte gets flipped for the sign: digit into the high nibble, sign zone into the low nibble
myPacked[2] = (byte) (((myDecimal[4] & 0b00001111) << 4) | ((myDecimal[4] >> 4) & 0b00001111));

When I have messed with COMP-3 in the past with Java I ended up writing a method to read in the bytes and convert them to a number. I don't think I ever had to write COMP-3 out, but I assume I would use the same logic in reverse.

Related

How to convert float value to byte in c# [duplicate]

I want to convert the value 794.328247 to a byte.
I used Convert.ToByte(794.328247), but it shows the following error:
Error: Value was either too large or too small for an unsigned byte.
Can anyone help me resolve this issue?
Thanks.
You probably want BitConverter.GetBytes:
byte[] byteArray = BitConverter.GetBytes(794.328247f);
Note that this produces an array of bytes: a float is 32 bits, so it needs 4 bytes to store it (the f suffix matters -- without it the literal is a double and you get 8 bytes). Do the reverse with BitConverter.ToSingle.
The alternative is to truncate the float: var b = (byte) 794.328247f;, but this is usually not a good idea since a byte has a far smaller range of values than a float. In some cases you might need to first normalize the input value to some range of values before truncating, i.e. remap values from, say, 750-800 to the 0-255 range, as in the sketch below.
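A minimal sketch of that remapping idea (the 750-800 range is just the example above, and ToByteInRange is a made-up helper name):

using System;

class Remap
{
    // Linearly remap a value from [min, max] onto a byte in [0, 255], clamping out-of-range input
    static byte ToByteInRange(float value, float min, float max)
    {
        float t = (value - min) / (max - min); // position within the range, 0.0 .. 1.0
        t = Math.Max(0f, Math.Min(1f, t));     // clamp
        return (byte)(t * 255f + 0.5f);        // round to nearest
    }

    static void Main()
    {
        Console.WriteLine(ToByteInRange(794.328247f, 750f, 800f)); // 226
    }
}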

How to convert double value to two byte array in c# [closed]

I am developing a Windows Forms application. I have a double value, 1.5, that I need to convert into a two-byte array. But when I convert it using BitConverter.GetBytes, I get 8 bytes of data. Please refer to my code below.
double d = 1.5;
byte[] array = BitConverter.GetBytes(d);
A double is an 8-byte value, so if you have an arbitrary double there's no hope for you; however, if you have some restrictions, e.g. the value is in the [0..100] range and has at most 2 digits after the decimal point, you can encode it:
// presuming source in [0..100] with at most 2 digit after decimal point
double source = 1.5;
// short is 2 byte value
short encoded = (short) (source * 100 + 0.5);
byte[] array = BitConverter.GetBytes(encoded);
// decode back to double
double decoded = encoded / 100.0;
A double is a 64-bit value that just so happens to be 8 bytes. There's nothing that can be done about this.
A double is 8 bytes in length, so unless you want a specific sub-array out of the 8 you're getting (you probably don't), then this is the wrong way to go about it.
You can cast your variable to a single-precision float. That will of course lose some precision, but you will get 4 bytes instead of 8.
If this is still unacceptable, you need to have an implementation of a half-precision float, which sadly doesn't come out-of-the-box with .NET.
I found an implementation here:
http://sourceforge.net/p/csharp-half/code/HEAD/tree/System.Half
You can use it like this:
var half = (Half)yourDouble;
var bytes = Half.GetBytes(half); // 2 bytes

Splitting a 200 bit hexadecimal bitmask [closed]

I have a 200-bit bitmask stored as a hexadecimal value.
In order to apply the & bitwise operator, I have to first convert the hex to an integer, but 200 bits is too big for a uint64. So my question is: how do I split my bitmask into 4 different hexadecimal values without losing data?
That way I can also split my 200-bit data and then compare every chunk of data with the corresponding chunk of the bitmask without altering the result.
You can use the BigInteger from System.Numerics (it's a separate assembly):
BigInteger bi = BigInteger.Parse("01ABC000000000000000000000000000000000", System.Globalization.NumberStyles.HexNumber);
VERY IMPORTANT: prepend a "0" before the hex number, because BigInteger.Parse("F", NumberStyles.HexNumber) == -1, while BigInteger.Parse("0F", NumberStyles.HexNumber) == 15.
BigInteger implements the "classical" logical operators (&, |, ^).
Requires .NET 4.0.
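A small sketch of applying a 200-bit mask this way (the two 50-digit hex strings are made-up placeholders):

using System;
using System.Globalization;
using System.Numerics;

class MaskDemo
{
    static void Main()
    {
        // the leading "0" keeps the values from being parsed as negative
        BigInteger mask = BigInteger.Parse(
            "0FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF", NumberStyles.HexNumber);
        BigInteger data = BigInteger.Parse(
            "0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF012", NumberStyles.HexNumber);
        BigInteger masked = data & mask; // & works across all 200 bits at once
        Console.WriteLine(masked.ToString("X"));
    }
}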
The most efficient way of achieving this is to write a class that can store and do binary operations on 200 bits of data, take strings as input, etc.
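A sketch of that chunked approach, storing the value in four 64-bit words (the splitting scheme and class name are my own assumptions):

using System;
using System.Globalization;

class Bitmask200
{
    // split an up-to-256-bit hex string into four 64-bit words, most significant first
    static ulong[] Split(string hex)
    {
        hex = hex.PadLeft(64, '0'); // 64 hex digits = 256 bits
        var words = new ulong[4];
        for (int i = 0; i < 4; i++)
            words[i] = ulong.Parse(hex.Substring(i * 16, 16), NumberStyles.HexNumber);
        return words;
    }

    static void Main()
    {
        ulong[] mask = Split("FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF");
        ulong[] data = Split("123456789ABCDEF0123456789ABCDEF0123456789ABCDEF012");
        for (int i = 0; i < 4; i++)
            Console.WriteLine("{0:X16}", data[i] & mask[i]); // compare chunk by chunk
    }
}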

Convert byte[] to single [closed]

Problem:
I am trying to convert a byte[] to a single. I've tried using BitConverter.ToSingle() and it doesn't give the desired result.
The content of the array is: { 0, 0, 0, 100 }.
The desired output is 100; I know an int would work for this, but I just chose that number for easy debugging. I have also tried moving the 100 into every possible position in the array, with no luck.
My output always looks like 9.3345534545E or something similar with different digits.
Any ideas?
IEEE-754 types (Single and Double -- float and double in C#) do not have a trivial binary representation, so 0x00 0x00 0x00 0x64 does not represent the value 0x64 (100 in decimal).
The actual raw, binary representation of IEEE-754 values is rather complicated, and setting them and performing the conversion from integer to IEEE-754 by hand really isn't worth the effort (unless it's a learning exercise). It's best to let the library/platform or even the processor do it for you:
Because your value is an integer value, you need to convert it into Int32 first, and then use the Convert class (or a simple compiler cast which will perform the type conversion under-the-hood).
Int32 val = BitConverter.ToInt32( yourArray, 0 ); // assuming the array is in the platform's byte order (little-endian on x86)
Single s1 = (Single)val;
Single s2 = Convert.ToSingle( val );
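A small end-to-end sketch for the exact array in the question; note that { 0, 0, 0, 100 } is the big-endian representation of 100, so on a little-endian machine the bytes must be reversed first (an assumption about where the array came from):

using System;

class ByteArrayToSingle
{
    static void Main()
    {
        byte[] yourArray = { 0, 0, 0, 100 }; // big-endian representation of 100

        if (BitConverter.IsLittleEndian)
            Array.Reverse(yourArray); // make the byte order match the platform

        int val = BitConverter.ToInt32(yourArray, 0); // 100
        float s = val; // implicit int -> float conversion
        Console.WriteLine(s); // prints 100
    }
}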

How does the computer convert between types [closed]

So a common question you see on SO is how to convert between type X and type Z, but I want to know: how does the computer do this?
For example, how does it take an int out of a string?
My theory is that a string is a char array at its core, so it's going index by index and checking each character against the ASCII table. If it falls within the range of digits, it's added to the integer. Does it happen at an even lower level than this? Is there bitmasking taking place? How does this happen?
Disclaimer: not for school, just curious.
This question can only be answered when restricting the types to a somewhat manageable subset. To do so, let us consider the three interesting types: strings, integers and floats.
The only other truly different basic type is a pointer, which is not usually converted in any meaningful manner (even the NULL check is not actually a conversion, but a special built-in semantic for the 0 literal).
int to float and vice versa
Converting integers to floats and vice versa is simple, since modern CPUs provide an instruction to deal with that case directly.
string to integer type
Conversion from string to integer is fairly simple, because no precision errors can occur. Indeed, any string is just a sequence of code points (which may or may not be represented by char or wchar_t), and the common method works along the lines of the following:
#include <limits>
#include <stdexcept>
#include <string>

unsigned parse_unsigned(const std::string& str) {
    unsigned result = 0;
    for (size_t i = 0; i < str.size(); ++i) {
        unsigned c = str[i] - static_cast<unsigned>('0');
        if (c > 9) { // not a decimal digit
            if (i) return result; // ok: the integer ended here
            else throw std::invalid_argument("no integer found");
        }
        if ((std::numeric_limits<unsigned>::max() - c) / 10 < result)
            throw std::overflow_error("integer overflow");
        result = result * 10 + c;
    }
    return result;
}
If you wish to consider things like additional bases (e.g. strings like 0x123 as a hexadecimal representation) or negative values, it obviously requires a few more tests, but the basic algorithm stays the same.
int to string
As expected, this basically works in reverse: an implementation will repeatedly take the remainder of a division by 10 and then divide by 10. Since this gives the digits in reverse order, one can either print into a buffer from the back or reverse the result again (see the sketch below).
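A minimal C# sketch of that loop (C# rather than the C++ used above, to match the rest of this page; IntToString is a made-up name):

static string IntToString(uint value)
{
    if (value == 0) return "0";
    var buffer = new System.Text.StringBuilder();
    while (value > 0)
    {
        buffer.Append((char)('0' + (int)(value % 10))); // least significant digit first
        value /= 10;
    }
    char[] digits = buffer.ToString().ToCharArray();
    System.Array.Reverse(digits); // the digits came out backwards
    return new string(digits);
}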
string to floating point type
Parsing strings to a double (or float) is significantly more complex, since the conversion is supposed to happen with the highest possible accuracy. The basic idea here is to read the number as a string of digits while only remembering where the dot was and what the exponent is. Then, you would assemble the mantissa from this information (which basically is a 53 bit integer) and the exponent and assemble the actual bit pattern for the resulting number. This would then be copied into your target value.
While this approach works perfectly fine, there are literally dozens of different approaches in use, all varying in performance, correctness and robustness.
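A deliberately naive C# sketch of the digits-plus-dot-position idea; it scales with double arithmetic rather than assembling the mantissa bit pattern, so it does not achieve the correctly-rounded result real implementations aim for:

// parse a simple "123.456"-style string: collect the digits as an integer
// and remember how many of them came after the dot
static double NaiveParseDouble(string s)
{
    long digits = 0;
    int digitsAfterDot = 0;
    bool seenDot = false;
    foreach (char ch in s)
    {
        if (ch == '.') { seenDot = true; continue; }
        digits = digits * 10 + (ch - '0');
        if (seenDot) digitsAfterDot++;
    }
    // scale back down by the number of fractional digits
    return digits / System.Math.Pow(10, digitsAfterDot);
}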
Actual implementations
Note that actual implementations may have to do one more important (and horribly ugly) thing, which is locale handling. For example, in the German locale the "," is the decimal point and not the thousands separator, so pi is roughly "3,1415926535".
Perl string to double
TCL string to double
David M. Gay AT&T Paper string to double, double to string and source code
Boost Spirit
