What's the best way of doing variable-length encoding of an unsigned integer value in C#?
The actual intent is to append a variable-length encoded integer (as bytes) to a file header.
For example: the "Content-Length" HTTP header.
I have written some code which does that already. Can this be achieved with some changes to the logic below?
A method I have used, which makes smaller values use fewer bytes, is to encode 7 bits of data + 1 bit of overhead per byte.
The encoding works only for positive values starting with zero, but can be modified if necessary to handle negative values as well.
The way the encoding works is like this:
Grab the lowest 7 bits of your value and store them in a byte; this is what you're going to output.
Shift the value 7 bits to the right, getting rid of those 7 bits you just grabbed.
If the value is non-zero (i.e. after you shifted away the 7 bits), set the high bit of the byte you're going to output before you output it.
Output the byte.
If the value is non-zero (i.e. the same check that resulted in setting the high bit), go back and repeat the steps from the start.
To decode:
Start at bit-position 0
Read one byte from the file
Store whether the high bit is set, and mask it away
OR in the rest of the byte into your final value, at the bit-position you're at
If the high bit was set, increase the bit-position by 7, and repeat the steps, skipping the first one (don't reset the bit-position)
bit:      39    32 31    24 23    16 15     8  7     0
value:            |DDDDDDDD|CCCCCCCC|BBBBBBBB|AAAAAAAA|
encoded: |0000DDDD|xDDDDCCC|xCCCCCBB|xBBBBBBA|xAAAAAAA| (note: stored in reverse order)
As you can see, the encoded value might occupy one additional byte that is just half-way used, due to the overhead of the control bits. If you expand this to a 64-bit value, the additional byte will be completely used, so there will still only be one byte of extra overhead.
Note: Since the encoding stores values one byte at a time, always in the same order, big- or little-endian systems will not change the layout of this. The least significant byte is always stored first, etc.
Ranges and their encoded size:
0 - 127 : 1 byte
128 - 16,383 : 2 bytes
16,384 - 2,097,151 : 3 bytes
2,097,152 - 268,435,455 : 4 bytes
268,435,456 - max int32 : 5 bytes
Here are C# implementations of both:
void Main()
{
using (FileStream stream = new FileStream(@"c:\temp\test.dat", FileMode.Create))
using (BinaryWriter writer = new BinaryWriter(stream))
writer.EncodeInt32(123456789);
using (FileStream stream = new FileStream(@"c:\temp\test.dat", FileMode.Open))
using (BinaryReader reader = new BinaryReader(stream))
reader.DecodeInt32().Dump();
}
// Define other methods and classes here
public static class Extensions
{
/// <summary>
/// Encodes the specified <see cref="Int32"/> value with a variable number of
/// bytes, and writes the encoded bytes to the specified writer.
/// </summary>
/// <param name="writer">
/// The <see cref="BinaryWriter"/> to write the encoded value to.
/// </param>
/// <param name="value">
/// The <see cref="Int32"/> value to encode and write to the <paramref name="writer"/>.
/// </param>
/// <exception cref="ArgumentNullException">
/// <para><paramref name="writer"/> is <c>null</c>.</para>
/// </exception>
/// <exception cref="ArgumentOutOfRangeException">
/// <para><paramref name="value"/> is less than 0.</para>
/// </exception>
/// <remarks>
/// See <see cref="DecodeInt32"/> for how to decode the value back from
/// a <see cref="BinaryReader"/>.
/// </remarks>
public static void EncodeInt32(this BinaryWriter writer, int value)
{
if (writer == null)
throw new ArgumentNullException("writer");
if (value < 0)
throw new ArgumentOutOfRangeException("value", value, "value must be 0 or greater");
do
{
byte lower7bits = (byte)(value & 0x7f);
value >>= 7;
if (value > 0)
lower7bits |= 128;
writer.Write(lower7bits);
} while (value > 0);
}
/// <summary>
/// Decodes a <see cref="Int32"/> value from a variable number of
/// bytes, originally encoded with <see cref="EncodeInt32"/> from the specified reader.
/// </summary>
/// <param name="reader">
/// The <see cref="BinaryReader"/> to read the encoded value from.
/// </param>
/// <returns>
/// The decoded <see cref="Int32"/> value.
/// </returns>
/// <exception cref="ArgumentNullException">
/// <para><paramref name="reader"/> is <c>null</c>.</para>
/// </exception>
public static int DecodeInt32(this BinaryReader reader)
{
if (reader == null)
throw new ArgumentNullException("reader");
bool more = true;
int value = 0;
int shift = 0;
while (more)
{
byte lower7bits = reader.ReadByte();
more = (lower7bits & 128) != 0;
value |= (lower7bits & 0x7f) << shift;
shift += 7;
}
return value;
}
}
You should first make a histogram of your values. If the distribution is random (that is, every bin of your histogram has a count close to the others), then you'll not be able to encode them more efficiently than their plain binary representation.
If your histogram is unbalanced (that is, if some values are more present than others), then it might make sense to choose an encoding that uses fewer bits for those values, while using more bits for the other - unlikely - values.
For example, if the numbers you need to encode are 2x more likely to be smaller than 15 bits than larger, you can use the 16th bit to say so and only store/send 16 bits (if it's zero, then the upcoming bytes will form a 16-bit number that can fit in a 32-bit number).
If it's 1, then the upcoming 25 bits will form a 32-bit number.
You lose one bit here, but because the larger case is unlikely, in the end, over a lot of numbers, you win more bits.
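For concreteness, here is a minimal sketch of that two-length idea, assuming a System.IO.BinaryWriter and my own choice of bit budget (15 data bits in the short form, a full 32-bit value in the long form); the method name and exact layout are illustrative only, not the precise scheme described above.

    // Sketch only: one flag bit chooses between a 2-byte short form and a 5-byte long form.
    static void WriteTwoLength(System.IO.BinaryWriter writer, uint value)
    {
        if (value < (1u << 15))
        {
            // Short form: flag bit 0 + 15 data bits (high bit of the first byte stays 0).
            writer.Write((byte)(value >> 8));
            writer.Write((byte)(value & 0xFF));
        }
        else
        {
            // Long form: a flag byte with the high bit set, then the full 32-bit value.
            writer.Write((byte)0x80);
            writer.Write(value);   // BinaryWriter writes this little-endian
        }
    }

If most values really do fit in 15 bits, most of them cost 2 bytes instead of 4, at the price of an extra flag byte for the rare large ones.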
Obviously, this is a trivial case, and the extension of this to more than 2 cases is the Huffman algorithm, which assigns a "code word" that is close to optimal based on the probability of each number appearing.
There's also the arithmetic coding algorithm that does this too (and probably others).
In all cases, there is no solution that can store random values more efficiently than what's already being done in computer memory.
You have to think about how long and how hard the implementation of such a solution will be, compared to the savings you'll get in the end, to know if it's worth it. The language itself is not relevant here.
If small values are more common than large ones, you can use Golomb coding.
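To make that concrete, here is a minimal Rice-coding sketch (the power-of-two special case of Golomb coding); the method name, the bit-string output, and the parameter choice are mine, purely for illustration.

    // Rice code: unary quotient (value >> k ones, then a zero), followed by the k-bit remainder.
    // Small values produce short codes; assumes a non-negative input.
    static string RiceEncode(uint value, int k)
    {
        var bits = new System.Text.StringBuilder();
        uint quotient = value >> k;
        for (uint i = 0; i < quotient; i++)
            bits.Append('1');                 // unary part
        bits.Append('0');                     // terminator
        for (int bit = k - 1; bit >= 0; bit--)
            bits.Append((value >> bit) & 1);  // k-bit binary remainder
        return bits.ToString();
    }

For k = 2, the value 9 encodes as "11001": two 1s for the quotient, a 0 terminator, then the remainder 01.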
I know this question was asked quite a few years ago; however, for MIDI developers I thought I'd share some code from a personal MIDI project I'm working on. The code block is based on a segment from the book Maximum MIDI by Paul Messick (this example is a tweaked version for my own needs; however, the concept is all there...).
public struct VariableLength
{
// Variable Length byte array to int
public VariableLength(byte[] bytes)
{
int index = 0;
int value = 0;
byte b;
do
{
value = (value << 7) | ((b = bytes[index]) & 0x7F);
index++;
} while ((b & 0x80) != 0);
Length = index;
Value = value;
Bytes = new byte[Length];
Array.Copy(bytes, 0, Bytes, 0, Length);
}
// Variable Length int to byte array
public VariableLength(int value)
{
Value = value;
byte[] bytes = new byte[4];
int index = 0;
int buffer = value & 0x7F;
while ((value >>= 7) > 0)
{
buffer <<= 8;
buffer |= 0x80;
buffer += (value & 0x7F);
}
while (true)
{
bytes[index] = (byte)buffer;
index++;
if ((buffer & 0x80) > 0)
buffer >>= 8;
else
break;
}
Length = index;
Bytes = new byte[index];
Array.Copy(bytes, 0, Bytes, 0, Length);
}
// Number of bytes used to store the variable length value
public int Length { get; private set; }
// Variable Length Value
public int Value { get; private set; }
// Bytes representing the integer value
public byte[] Bytes { get; private set; }
}
How to use:
public void Example()
{
//Convert an integer into a variable length byte array
int varLenVal = 480;
VariableLength v = new VariableLength(varLenVal);
byte[] bytes = v.Bytes;
//Convert a variable length byte array back into an integer
byte[] varLenByte = new byte[2] { 131, 96 };
VariableLength w = new VariableLength(varLenByte);
int result = w.Value; // 480
}
As Grimbly pointed out, there exist BinaryReader.Read7BitEncodedInt and BinaryWriter.Write7BitEncodedInt. However, these are protected methods that one cannot call on a BinaryReader or -Writer object from outside.
However, what you can do is take the internal implementation and copy it from the reader and the writer:
public static int Read7BitEncodedInt(this BinaryReader br) {
// Read out an Int32 7 bits at a time. The high bit
// of the byte when on means to continue reading more bytes.
int count = 0;
int shift = 0;
byte b;
do {
// Check for a corrupted stream. Read a max of 5 bytes.
// In a future version, add a DataFormatException.
if (shift == 5 * 7) // 5 bytes max per Int32, shift += 7
throw new FormatException("Format_Bad7BitInt32");
// ReadByte handles end of stream cases for us.
b = br.ReadByte();
count |= (b & 0x7F) << shift;
shift += 7;
} while ((b & 0x80) != 0);
return count;
}
public static void Write7BitEncodedInt(this BinaryWriter br, int value) {
// Write out an int 7 bits at a time. The high bit of the byte,
// when on, tells reader to continue reading more bytes.
uint v = (uint)value; // support negative numbers
while (v >= 0x80) {
br.Write((byte)(v | 0x80));
v >>= 7;
}
br.Write((byte)v);
}
When you include this code in any static class of your project, you'll be able to use the methods on any BinaryReader/BinaryWriter object. They've only been slightly modified to make them work outside of their original classes (for example, by changing ReadByte() to br.ReadByte()). The comments are from the original source.
BinaryReader.Read7BitEncodedInt Method
BinaryWriter.Write7BitEncodedInt Method
I'm trying to create a function that mathematically wraps a number into a specified range, underflowing back up or overflowing back down as necessary. I think I was able to get this to work when the numbers are all positive (after taking out Math.Abs, which I used to make negative numbers positive), but ranges that go negative, or negative values, fail. I want to solve this with maths but can't figure out what I'm doing wrong!
This is my current implementation of the failing function:
/// <summary>
/// Wraps a value within the specified range, overflowing or underflowing as necessary.
/// </summary>
/// <param name="value">The number to wrap.</param>
/// <param name="minimumValue">The minimum value in the range.</param>
/// <param name="length">The number of values in the range to wrap across.</param>
/// <returns>The <paramref name="value"/> wrapped to the specified range.</returns>
/// <exception cref="ArgumentException">Thrown if <paramref name="length"/> is <c>0</c>.</exception>
public static int Wrap(this int value, int minimumValue, int length)
{
if (length == 0)
throw new ArgumentException($"{nameof(length)} must not be 0 in order to produce a range to wrap across.");
else
{
var absoluteModulus = System.Math.Abs((value - minimumValue) % length);
return (value < 0 ? length - absoluteModulus : absoluteModulus) + minimumValue;
}
}
Here's some test data and results for the current implementation:
value | minimumValue | length | expected | actual | Comment
128   | 256          | 128    | 256      | 256    | Pass
255   | 256          | 256    | 511      | 257    | Modulo is underflowing backwards!
-3    | 1            | 2      | 1        | 3      | Somehow underflowing out of range!
-4    | 0            | 2      | 0        | 2      | Again, underflowing out of range!
63    | 128          | 384    | 447      | 193    | 128 - 63 == 65, 384 - 65 == 319, 319 + 128 == 447, not 193‼
300   | 100          | 200    | 100      | 100    | This overflow works!
You seem to be aware that % is the remainder operation, not modulus (as in modular arithmetic), but simply taking the absolute value of the remainder is not correct. You should use one of the answers here. For example:
private static int Mod(int k, int n) {
int remainder = k % n;
return (remainder < 0) ? remainder + n : remainder;
}
// ... in the else branch, you can directly do
return Mod(value - minimumValue, length) + minimumValue;
You also handled value < 0 differently. There is nothing special about a negative value. What is special here is a negative remainder, which is a value that a true modulus operation will never produce. Your code would have worked too if you had replaced the value < 0 check with a check of whether (value - minimumValue) % length is negative.
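Putting that together, the asker's Wrap method would look something like this (a sketch using the Mod helper above, placed in the same static class; not exhaustively tested):

    public static int Wrap(this int value, int minimumValue, int length)
    {
        if (length == 0)
            throw new ArgumentException($"{nameof(length)} must not be 0 in order to produce a range to wrap across.");

        // True modulus of the offset into the range, then shift back up by the minimum.
        int remainder = (value - minimumValue) % length;
        int modulus = remainder < 0 ? remainder + length : remainder;
        return modulus + minimumValue;
    }

With this version the failing rows in the table above come out as expected, e.g. Wrap(63, 128, 384) gives 447 and Wrap(255, 256, 256) gives 511.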
I have been preparing a big list of words, using crc32 to hash each word. I do this in C#, but there is also a process in PHP; my surprise came when I saw that the crc32 function used in C# produces a different hash than the standard crc32 function in PHP.
The function in C# gives an unsigned int and PHP gives a signed int, but by doing printf("%u\n", $crc_value) you can obtain the unsigned int; however, this doesn't match the C# value either.
(I put the C# code at the bottom.)
Is there a way to adjust the PHP function to give me the same results?
I put here the hashes produced in each language:
php:
$value = crc32("emisiones")
//signed int => 1277409361
//unsigned int => 1277409361
c#:
Crc32.CRC32String("emisiones");
// unsigned int => 3523227667
Crc32.CRC32Bytes(System.Text.Encoding.ASCII.GetBytes("emisiones"));
// bytes => 101,109,105,115,105,111,110,101,115
// unsigned int => 3525485962
The crc32 implementation in C#:
using System;
using System.IO;
namespace Foo
{
/// <summary>
/// A utility class to compute CRC32.
/// </summary>
public class Crc32
{
private uint _crc32 = 0;
static private uint[] crc_32_tab = // CRC polynomial 0xedb88320
{
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f,
0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2,
0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,
0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c,
0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423,
0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x01db7106,
0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d,
0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,
0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7,
0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa,
0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81,
0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84,
0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,
0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e,
0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55,
0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28,
0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f,
0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242,
0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69,
0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc,
0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693,
0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
};
static private uint UPDC32(byte octet, uint crc)
{
return (crc_32_tab[((crc) ^ ((byte)octet)) & 0xff] ^ ((crc) >> 8));
}
internal uint CheckSum
{
get
{
return _crc32;
}
set
{
_crc32 = value;
}
}
internal uint AddToCRC32(int c)
{
return AddToCRC32((ushort)c);
}
internal uint AddToCRC32(ushort c)
{
byte lowByte, hiByte;
lowByte = (byte)(c & 0x00ff);
hiByte = (byte)(c >> 8);
_crc32 = UPDC32(hiByte, _crc32);
_crc32 = UPDC32(lowByte, _crc32);
return ~_crc32;
}
/// <summary>
/// Compute a checksum for a given string.
/// </summary>
/// <param name="text">The string to compute the checksum for.</param>
/// <returns>The computed checksum.</returns>
static public uint CRC32String(string text)
{
uint oldcrc32;
oldcrc32 = 0xFFFFFFFF;
int len = text.Length;
ushort uCharVal;
byte lowByte, hiByte;
for (int i = 0; len > 0; i++)
{
--len;
uCharVal = text[len];
unchecked
{
lowByte = (byte)(uCharVal & 0x00ff);
hiByte = (byte)(uCharVal >> 8);
}
oldcrc32 = UPDC32(hiByte, oldcrc32);
oldcrc32 = UPDC32(lowByte, oldcrc32);
}
return ~oldcrc32;
}
/// <summary>
/// Compute a checksum for a given array of bytes.
/// </summary>
/// <param name="bytes">The array of bytes to compute the checksum for.</param>
/// <returns>The computed checksum.</returns>
static public uint CRC32Bytes(byte[] bytes)
{
uint oldcrc32;
oldcrc32 = 0xFFFFFFFF;
int len = bytes.Length;
for (int i = 0; len > 0; i++)
{
--len;
oldcrc32 = UPDC32(bytes[len], oldcrc32);
}
return ~oldcrc32;
}
}
}
CRC32String() is running two bytes per character through. If you are expecting one byte per character, as the PHP code would, then you should use CRC32Bytes().
Having done that, you need to fix both of those routines. They are computing the CRC on the reverse of the string. (Who wrote those routines?) If you run CRC32Bytes() on "senoisime", then you will get 1277409361. And if you run php's crc32() on "senoisime", you will get 3525485962.
CRC32String() and CRC32Bytes() use text[len] and bytes[len] when they should be using text[i] and bytes[i]. i counts forward, but len counts backwards. This is clearly just someone's brain fart, since you wouldn't even need the variable i if you actually intended to compute the CRC on the reverse of the string.
Also it's a rather odd way to write the code in the first place. They should have just used the most common C-ish idiom instead: for (i = 0; i < len; i++).
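For reference, here is what the fixed loop might look like (a sketch of the correction described above, applied to CRC32Bytes inside the Crc32 class; CRC32String would change the same way):

    static public uint CRC32Bytes(byte[] bytes)
    {
        uint oldcrc32 = 0xFFFFFFFF;
        // Walk the bytes in their natural order instead of indexing from the end.
        for (int i = 0; i < bytes.Length; i++)
        {
            oldcrc32 = UPDC32(bytes[i], oldcrc32);
        }
        return ~oldcrc32;
    }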
I'm working with raw phone sounds and recordings, and I want to normalize them to a certain volume level in a .NET C# project.
The sound is a collection of raw audio bytes (mono, headerless, 16-bit signed PCM audio at 16000 Hz).
The audio is split into blocks of 3200 bytes == 100 ms.
Any suggestions on how to increase the volume/amplitude so the sound is louder?
I haven't got a clue whether I need to add a constant or multiply values, or whether I need to do it to every 1st, 2nd, 3rd... byte. And maybe there is already an open source solution for this?
To answer my own question (for others):
The solution is to multiply every sample (with 16-bit PCM, that's 2 bytes per sample) by a constant value.
To avoid overflow / too much increase, you can calculate the highest constant you can use by looking for the highest sample value and calculating the multiplication factor that brings it to the highest possible sample value, which for 16-bit PCM is 32767.
Here is a little example:
public byte[] IncreaseDecibel(byte[] audioBuffer, float multiplier)
{
// Max range -32768 and 32767
var highestValue = GetHighestAbsoluteSample(audioBuffer);
var highestPosibleMultiplier = (float)Int16.MaxValue/highestValue; // Int16.MaxValue = 32767
if (multiplier > highestPosibleMultiplier)
{
multiplier = highestPosibleMultiplier;
}
for (var i = 0; i < audioBuffer.Length; i = i + 2)
{
Int16 sample = BitConverter.ToInt16(audioBuffer, i);
sample = (Int16)(sample * multiplier); // assignment, not *=, otherwise the sample gets squared
byte[] sampleBytes = GetLittleEndianBytesFromShort(sample);
audioBuffer[i] = sampleBytes[sampleBytes.Length-2];
audioBuffer[i+1] = sampleBytes[sampleBytes.Length-1];
}
return audioBuffer;
}
// ADDED GetHighestAbsoluteSample, hopefully its still correct because code has changed over time
/// <summary>
/// Peak sample value
/// </summary>
/// <param name="audioBuffer">audio</param>
/// <returns>0 - 32767</returns>
public static short GetHighestAbsoluteSample(byte[] audioBuffer)
{
Int16 highestAbsoluteValue = 0;
for (var i = 0; i < (audioBuffer.Length-1); i = i + 2)
{
Int16 sample = ByteConverter.GetShortFromLittleEndianBytes(audioBuffer, i);
// prevent Math.Abs overflow exception
if (sample == Int16.MinValue)
{
sample += 1;
}
var absoluteValue = Math.Abs(sample);
if (absoluteValue > highestAbsoluteValue)
{
highestAbsoluteValue = absoluteValue;
}
}
return (highestAbsoluteValue > LowestPossibleAmplitude) ?
highestAbsoluteValue : LowestPossibleAmplitude;
}
I'm using the DriveInfo class in my C# project to retrieve the available bytes on given drives. How do I correctly convert this number into mega- or gigabytes? Dividing by 1024 will not do the job, I guess. The results always differ from those shown in Windows Explorer.
1024 is correct for usage in programs.
The reason you may be seeing differences is likely due to differences between what DriveInfo reports as "available space" and what Windows considers available space.
Note that only drive manufacturers use 1,000. Within Windows and most programs the correct scaling is 1024.
Also, while your compiler should optimize this anyway, this calculation can be done by merely shifting the bits right by 10 for each step up in magnitude:
KB = B >> 10
MB = KB >> 10 = B >> 20
GB = MB >> 10 = KB >> 20 = B >> 30
Although for readability I expect successive division by 1024 is clearer.
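For example, with the DriveInfo class from the question, either form looks like this (a quick sketch; the drive letter is assumed):

    using System.IO;

    DriveInfo drive = new DriveInfo("C");
    long freeBytes = drive.AvailableFreeSpace;
    long freeMB = freeBytes >> 20;   // same as freeBytes / (1024 * 1024)
    long freeGB = freeBytes >> 30;   // same as freeBytes / (1024 * 1024 * 1024)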
XKCD has the definitive answer.
/// <summary>
/// Function to convert the given bytes to either Kilobyte, Megabyte, or Gigabyte
/// </summary>
/// <param name="bytes">Double -> Total bytes to be converted</param>
/// <param name="type">String -> Type of conversion to perform</param>
/// <returns>Double -> Converted bytes</returns>
/// <remarks></remarks>
public static double ConvertSize(double bytes, string type)
{
try
{
const int CONVERSION_VALUE = 1024;
//determine what conversion they want
switch (type)
{
case "BY":
//convert to bytes (default)
return bytes;
break;
case "KB":
//convert to kilobytes
return (bytes / CONVERSION_VALUE);
break;
case "MB":
//convert to megabytes
return (bytes / CalculateSquare(CONVERSION_VALUE));
break;
case "GB":
//convert to gigabytes
return (bytes / CalculateCube(CONVERSION_VALUE));
break;
default:
//default
return bytes;
break;
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return 0;
}
}
/// <summary>
/// Function to calculate the square of the provided number
/// </summary>
/// <param name="number">Int32 -> Number to be squared</param>
/// <returns>Double -> The provided number squared</returns>
/// <remarks></remarks>
public static double CalculateSquare(Int32 number)
{
return Math.Pow(number, 2);
}
/// <summary>
/// Function to calculate the cube of the provided number
/// </summary>
/// <param name="number">Int32 -> Number to be cubed</param>
/// <returns>Double -> The provided number cubed</returns>
/// <remarks></remarks>
public static double CalculateCube(Int32 number)
{
return Math.Pow(number, 3);
}
//Sample usage
String Size = "File is " + ConvertSize(250222, "MB") + " Megabytes in size";
1024 is actually wrong. The International Electrotechnical Commission (IEC) developed a standard in 2000, which is sadly being ignored by the computer industry. This standard basically says that
1000 bytes is a kilobyte, 1000 kB are one MB and so on. The abbreviations are kB, MB, GB and so on.
The widely used 1024 bytes = 1 kilobyte should instead be called 1024 bytes = 1 kibibyte (KiB), 1024 KiB = 1 mebibyte (MiB), 1024 MiB = 1 gibibyte (GiB) and so on.
You can read it all up on the IEC SI zone.
So in order for your conversions to be correct according to international standardization, you should use this scientific notation.
It depends on whether you want the actual file size or the size on disk. The actual file size is the number of bytes of content the file holds. The size on disk is a function of the file size and the block size for your disk/file system.
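As a rough illustration of that difference (the 4096-byte cluster size here is just an assumed example; the real value depends on how the volume was formatted):

    long fileSize = 2500;       // actual bytes of content
    long clusterSize = 4096;    // assumed allocation-unit size
    // Size on disk is the file size rounded up to a whole number of clusters.
    long sizeOnDisk = (fileSize + clusterSize - 1) / clusterSize * clusterSize;   // 4096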
I have a faint recollection that the answer on whether to use 1000 or 1024 lies in the casing of the prefix.
Example:
If the "scientific" 1000 scaling is used, then the "scientific" unit will be kB (just as in kg, kN etc). If the computer centric 1024 scaling is used, then the unit will be KB. So, uppercasing the scientific prefix makes it computer centric.
Divide by 1024.
Here is a simple C++ code sample I have prepared which might be helpful. You provide the input size in bytes and the function will return a human-readable size:
#include <string>

std::string get_human_readable_size(long bytes)
{
long gb = 1024 * 1024 * 1024;
long mb = 1024 * 1024;
long kb = 1024;
if( bytes >= gb) return std::to_string( (float)bytes/gb ) + " GB ";
if( bytes >= mb) return std::to_string( (float)bytes/mb ) + " MB ";
if( bytes >= kb) return std::to_string( (float)bytes/gb ) + " KB ";
return std::to_string(bytes) + " B ";
}
I have a series of ASCII flat files coming in from a mainframe to be processed by a C# application. A new feed has been introduced with a Packed Decimal (COMP-3) field, which needs to be converted to a numerical value.
The files are being transferred via FTP, using ASCII transfer mode. I am concerned that the binary field may contain what will be interpreted as very-low ASCII codes or control characters instead of a value - Or worse, may be lost in the FTP process.
What's more, the fields are being read as strings. I may have the flexibility to work around this part (i.e. a stream of some sort), but the business will give me pushback.
The requirement read "Convert from HEX to ASCII", but clearly that didn't yield the correct values. Any help would be appreciated; it need not be language-specific as long as you can explain the logic of the conversion process.
I have been watching the posts on numerous boards concerning converting Comp-3 BCD data from "legacy" mainframe files to something useable in C#. First, I would like to say that I am less than enamoured by the responses that some of these posts have received - especially those that have said essentially "why are you bothering us with these non-C#/C++ related posts" and also "If you need an answer about some sort of COBOL convention, why don't you go visit a COBOL oriented site". This, to me, is complete BS as there is going to be a need for probably many years to come, (unfortunately), for software developers to understand how to deal with some of these legacy issues that exist in THE REAL WORLD. So, even if I get slammed on this post for the following code, I am going to share with you a REAL WORLD experience that I had to deal with regarding COMP-3/EBCDIC conversion (and yes, I am he who talks of "floppy disks, paper-tape, Disc Packs etc... - I have been a software engineer since 1979").
First, understand that any file that you read from a legacy mainframe system like an IBM is going to present the data to you in EBCDIC format, and in order to convert any of that data to a C#/C++ string you can deal with, you are going to have to use the proper code page translation to get the data into ASCII format. A good example of how to handle this would be:
StreamReader readFile = new StreamReader(path, Encoding.GetEncoding(37)); // code page 37 = EBCDIC to ASCII translation.
This will ensure that anything that you read from this stream will then be converted to ASCII and can be used in a string format. This includes "Zoned Decimal" (PIC 9) and "Text" (PIC X) fields as declared by COBOL. However, this does not necessarily convert COMP-3 fields to the correct "binary" equivalent when read into a char[] or byte[] array. To get those translated properly (no matter which code page you use - UTF-8, UTF-16, Default or whatever), you are going to want to open the file like this:
FileStream fileStream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
Of course, the "FileShare.Read" option is "optional".
When you have isolated the field that you want to convert to a decimal value (and then subsequently to an ASCII string if need be), you can use the following code - and this has been basically stolen from the Microsoft "UnpackDecimal" posting that you can get at:
http://www.microsoft.com/downloads/details.aspx?familyid=0e4bba52-cc52-4d89-8590-cda297ff7fbd&displaylang=en
I have isolated (I think) the most important parts of this logic and consolidated them into a method that you can do with what you want. For my purposes, I chose to leave this as returning a Decimal value which I could then do with what I wanted. Basically, the method is called "Unpack" and you pass it a byte[] array (no longer than 12 bytes) and the scale as an int, which is the number of decimal places you want to have returned in the Decimal value. I hope this works for you as well as it did for me.
private Decimal Unpack(byte[] inp, int scale)
{
long lo = 0;
long mid = 0;
long hi = 0;
bool isNegative;
// this nybble stores only the sign, not a digit.
// "C" hex is positive, "D" hex is negative, and "F" hex is unsigned.
switch (nibble(inp, 0))
{
case 0x0D:
isNegative = true;
break;
case 0x0F:
case 0x0C:
isNegative = false;
break;
default:
throw new Exception("Bad sign nibble");
}
long intermediate;
long carry;
long digit;
for (int j = inp.Length * 2 - 1; j > 0; j--)
{
// multiply by 10
intermediate = lo * 10;
lo = intermediate & 0xffffffff;
carry = intermediate >> 32;
intermediate = mid * 10 + carry;
mid = intermediate & 0xffffffff;
carry = intermediate >> 32;
intermediate = hi * 10 + carry;
hi = intermediate & 0xffffffff;
carry = intermediate >> 32;
// By limiting input length to 14, we ensure overflow will never occur
digit = nibble(inp, j);
if (digit > 9)
{
throw new Exception("Bad digit");
}
intermediate = lo + digit;
lo = intermediate & 0xffffffff;
carry = intermediate >> 32;
if (carry > 0)
{
intermediate = mid + carry;
mid = intermediate & 0xffffffff;
carry = intermediate >> 32;
if (carry > 0)
{
intermediate = hi + carry;
hi = intermediate & 0xffffffff;
carry = intermediate >> 32;
// carry should never be non-zero. Back up with validation
}
}
}
return new Decimal((int)lo, (int)mid, (int)hi, isNegative, (byte)scale);
}
private int nibble(byte[] inp, int nibbleNo)
{
int b = inp[inp.Length - 1 - nibbleNo / 2];
return (nibbleNo % 2 == 0) ? (b & 0x0000000F) : (b >> 4);
}
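For instance, feeding it a hand-made packed value (my own example bytes, not from the original Microsoft sample, and assuming you call it from within the same class) looks like this:

    // X'12340F' holds the digits 12340 with a positive/unsigned sign nibble (F);
    // with a scale of 2 it comes back as 123.40.
    byte[] packed = { 0x12, 0x34, 0x0F };
    decimal result = Unpack(packed, 2);   // 123.40m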
If you have any questions, post them on here - because I suspect that I am going to get "flamed" like everyone else who has chosen to post questions that are pertinent to today's issues...
Thanks,
John - The Elder.
First of all you must eliminate the end of line (EOL) translation problems that will be caused by ASCII transfer mode. You are absolutely right to be concerned about data corruption when the BCD values happen to correspond to EOL characters. The worst aspect of this problem is that it will occur rarely and unexpectedly.
The best solution is to change the transfer mode to BIN. This is appropriate since the data you are transferring is binary. If it is not possible to use the correct FTP transfer mode, you can undo the ASCII mode damage in code. All you have to do is convert \r\n pairs back to \n. If I were you I would make sure this is well tested.
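If you are forced to repair the damage in code, a minimal sketch of that \r\n-to-\n conversion over the raw bytes might look like this (the helper name is my own; as noted above, test it thoroughly, because a legitimate 0x0D data byte that happens to precede 0x0A would be stripped too):

    static byte[] UndoAsciiModeDamage(byte[] data)
    {
        var result = new System.Collections.Generic.List<byte>(data.Length);
        for (int i = 0; i < data.Length; i++)
        {
            // Drop the CR that ASCII mode inserted in front of each LF.
            if (data[i] == 0x0D && i + 1 < data.Length && data[i + 1] == 0x0A)
                continue;
            result.Add(data[i]);
        }
        return result.ToArray();
    }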
Once you've dealt with the EOL problem, the COMP-3 conversion is pretty straightforward. I was able to find this article in the MS knowledge base with sample code in BASIC. See below for a VB.NET port of this code.
Since you're dealing with COMP-3 values, the file format you're reading almost surely has fixed record sizes with fixed field lengths. If I were you, I would get my hands on a file format specification before you go any further with this. You should be using a BinaryReader to work with this data. If someone is pushing back on this point, I would walk away. Let them find someone else to indulge their folly.
Here's a VB.NET port of the BASIC sample code. I haven't tested this because I don't have access to a COMP-3 file. If this doesn't work, I would refer back to the original MS sample code for guidance, or to references in the other answers to this question.
Imports Microsoft.VisualBasic
Module Module1
'Sample COMP-3 conversion code
'Adapted from http://support.microsoft.com/kb/65323
'This code has not been tested
Sub Main()
Dim Digits%(15) 'Holds the digits for each number (max = 16).
Dim Basiceqv#(1000) 'Holds the Basic equivalent of each COMP-3 number.
'Added to make code compile
Dim MyByte As Char, HighPower%, HighNibble%
Dim LowNibble%, Digit%, E%, Decimal%, FileName$
'Clear the screen, get the filename and the amount of decimal places
'desired for each number, and open the file for sequential input:
FileName$ = InputBox("Enter the COBOL data file name: ")
Decimal% = InputBox("Enter the number of decimal places desired: ")
FileOpen(1, FileName$, OpenMode.Binary)
Do Until EOF(1) 'Loop until the end of the file is reached.
Input(1, MyByte)
If MyByte = Chr(0) Then 'Check if byte is 0 (ASC won't work on 0).
Digits%(HighPower%) = 0 'Make next two digits 0. Increment
Digits%(HighPower% + 1) = 0 'the high power to reflect the
HighPower% = HighPower% + 2 'number of digits in the number
'plus 1.
Else
HighNibble% = Asc(MyByte) \ 16 'Extract the high and low
LowNibble% = Asc(MyByte) And &HF 'nibbles from the byte. The
Digits%(HighPower%) = HighNibble% 'high nibble will always be a
'digit.
If LowNibble% <= 9 Then 'If low nibble is a
'digit, assign it and
Digits%(HighPower% + 1) = LowNibble% 'increment the high
HighPower% = HighPower% + 2 'power accordingly.
Else
HighPower% = HighPower% + 1 'Low nibble was not a digit but a
Digit% = 0 '+ or - signals end of number.
'Start at the highest power of 10 for the number and multiply
'each digit by the power of 10 place it occupies.
For Power% = (HighPower% - 1) To 0 Step -1
Basiceqv#(E%) = Basiceqv#(E%) + (Digits%(Digit%) * (10 ^ Power%))
Digit% = Digit% + 1
Next
'If the sign read was negative, make the number negative.
If LowNibble% = 13 Then
Basiceqv#(E%) = Basiceqv#(E%) - (2 * Basiceqv#(E%))
End If
'Give the number the desired amount of decimal places, print
'the number, increment E% to point to the next number to be
'converted, and reinitialize the highest power.
Basiceqv#(E%) = Basiceqv#(E%) / (10 ^ Decimal%)
Print(Basiceqv#(E%))
E% = E% + 1
HighPower% = 0
End If
End If
Loop
FileClose() 'Close the COBOL data file, and end.
End Sub
End Module
If the original data was in EBCDIC your COMP-3 field has been garbled. The FTP process has done an EBCDIC to ASCII translation of the byte values in the COMP-3 field which isn't what you want. To correct this you can:
1) Use BINARY mode for the transfer so you get the raw EBCDIC data. Then you convert the COMP-3 field to a number and translate any other EBCDIC text on the record to ASCII. A packed field stores each digit in a half byte, with the low half byte of the last byte holding the sign (C or F is positive; other values, usually B or D, are negative). Storing 123.4 in a PIC 999.99 USAGE COMP-3 would be X'01234F' (three bytes) and -123 in the same field is X'01230D'.
2) Have the sender convert the field into a USAGE IS DISPLAY SIGN IS LEADING(or TRAILING) numeric field. This stores the number as a string of EBCDIC numeric digits with the sign as a separate negative(-) or blank character. All digits and the sign translate correctly to their ASCII equivalent on the FTP transfer.
I apologize if I am way off base here, but perhaps this code sample I'll paste here could help you. This came from VBRocks...
Imports System
Imports System.IO
Imports System.Text
Imports System.Text.Encoding
'4/20/07 submission includes a line spacing addition when a control character is used:
' The line spacing is calculated off of the 3rd control character.
'
' Also includes the 4/18 modification of determining end of file.
'4/26/07 submission includes an addition of 6 to the record length when the 4th control
' character is an 8. This is because these records were being truncated.
'Authored by Gary A. Lima, aka. VBRocks
''' <summary>
''' Translates an EBCDIC file to an ASCII file.
''' </summary>
''' <remarks></remarks>
Public Class EBCDIC_to_ASCII_Translator
#Region " Example"
Private Sub Example()
'Set your source file and destination file paths
Dim sSourcePath As String = "c:\Temp\MyEBCDICFile"
Dim sDestinationPath As String = "c:\Temp\TranslatedFile.txt"
Dim trans As New EBCDIC_to_ASCII_Translator()
'If your EBCDIC file uses Control records to determine the length of a record, then this to True
trans.UseControlRecord = True
'If the first record of your EBCDIC file is filler (junk), then set this to True
trans.IgnoreFirstRecord = True
'EBCDIC files are written in block lengths, set your block length (Example: 134, 900, Etc.)
trans.BlockLength = 900
'This method will actually translate your source file and output it to the specified destination file path
trans.TranslateFile(sSourcePath, sDestinationPath)
'Here is a alternate example:
'No Control record is used
'trans.UseControlRecord = False
'Translate the whole file, including the first record
'trans.IgnoreFirstRecord = False
'Set the block length
'trans.BlockLength = 134
'Translate...
'trans.TranslateFile(sSourcePath, sDestinationPath)
'*** Some additional methods that you can use are:
'Trim off leading characters from left side of string (position 0 to...)
'trans.LTrim = 15
'Translate 1 EBCDIC character to an ASCII character
'Dim strASCIIChar as String = trans.TranslateCharacter("S")
'Translate an EBCDIC character array to an ASCII string
'trans.TranslateCharacters(chrEBCDICArray)
'Translates an EBCDIC string to an ASCII string
'Dim strASCII As String = trans.TranslateString("EBCDIC String")
End Sub
#End Region 'Example
'Translate characters from EBCDIC to ASCII
Private ASCIIEncoding As Encoding = Encoding.ASCII
Private EBCDICEncoding As Encoding = Encoding.GetEncoding(37) 'EBCDIC
'Block Length: Can be fixed (Ex: 134).
Private miBlockLength As Integer = 0
Private mbUseControlRec As Boolean = True 'If set to False, will return exact block length
Private mbIgnoreFirstRecord As Boolean = True 'Will Ignore first record if set to true (First record may be filler)
Private miLTrim As Integer = 0
''' <summary>
''' Translates SourceFile from EBCDIC to ASCII. Writes output to file path specified by DestinationFile parameter.
''' Set the BlockLength Property to designate block size to read.
''' </summary>
''' <param name="SourceFile">Enter the path of the Source File.</param>
''' <param name="DestinationFile">Enter the path of the Destination File.</param>
''' <remarks></remarks>
Public Sub TranslateFile(ByVal SourceFile As String, ByVal DestinationFile As String)
Dim iRecordLength As Integer 'Stores length of a record, not including the length of the Control Record (if used)
Dim sRecord As String = "" 'Stores the actual record
Dim iLineSpace As Integer = 1 'LineSpace: 1 for Single Space, 2 for Double Space, 3 for Triple Space...
Dim iControlPosSix As Byte() 'Stores the 6th character of a Control Record (used to calculate record length)
Dim iControlRec As Byte() 'Stores the EBCDIC Control Record (First 6 characters of record)
Dim bEOR As Boolean 'End of Record Flag
Dim bBOF As Boolean = True 'Beginning of file
Dim iConsumedChars As Integer = 0 'Stores the number of consumed characters in the current block
Dim bIgnoreRecord As Boolean = mbIgnoreFirstRecord 'Ignores the first record if set.
Dim ControlArray(5) As Char 'Stores Control Record (first 6 bytes)
Dim chrArray As Char() 'Stores characters just after read from file
Dim sr As New StreamReader(SourceFile, EBCDICEncoding)
Dim sw As New StreamWriter(DestinationFile)
'Set the RecordLength to the RecordLength Property (below)
iRecordLength = miBlockLength
'Loop through entire file
Do Until sr.EndOfStream = True
'If using a Control Record, then check record for valid data.
If mbUseControlRec = True Then
'Read the Control Record (first 6 characters of the record)
sr.ReadBlock(ControlArray, 0, 6)
'Update the value of consumed (read) characters
iConsumedChars += ControlArray.Length
'Get the bytes of the Control Record Array
iControlRec = EBCDICEncoding.GetBytes(ControlArray)
'Set the line spacing (position 3 divided by 64)
' (64 decimal = Single Spacing; 128 decimal = Double Spacing)
iLineSpace = iControlRec(2) / 64
'Check the Control record for End of File
'If the Control record has a 8 or 10 in position 1, and a 1 in postion 2, then it is the end of the file
If (iControlRec(0) = 8 OrElse iControlRec(0) = 10) AndAlso _
iControlRec(1) = 1 Then
If bBOF = False Then
Exit Do
Else
'The Beginning of file flag is set to true by default, so when the first
' record is encountered, it is bypassed and the bBOF flag is set to False
bBOF = False
End If 'If bBOF = Fals
End If 'If (iControlRec(0) = 8 OrElse
'Set the default value for the End of Record flag to True
' If the Control Record has all zeros, then it's True, else False
bEOR = True
'If the Control record contains all zeros, bEOR will stay True, else it will be set to False
For i As Integer = 0 To 5
If iControlRec(i) > 0 Then
bEOR = False
Exit For
End If 'If iControlRec(i) > 0
Next 'For i As Integer = 0 To 5
If bEOR = False Then
'Convert EBCDIC character to ASCII
'Multiply the 6th byte by 6 to get record length
' Why multiply by 6? Because it works.
iControlPosSix = EBCDICEncoding.GetBytes(ControlArray(5))
'If the 4th position of the control record is an 8, then add 6
' to the record length to pick up remaining characters.
If iControlRec(3) = 8 Then
iRecordLength = CInt(iControlPosSix(0)) * 6 + 6
Else
iRecordLength = CInt(iControlPosSix(0)) * 6
End If
'Add the length of the record to the Consumed Characters counter
iConsumedChars += iRecordLength
Else
'If the Control Record had all zeros in it, then it is the end of the Block.
'Consume the remainder of the block so we can continue at the beginning of the next block.
ReDim chrArray(miBlockLength - iConsumedChars - 1)
'ReDim chrArray(iRecordLength - iConsumedChars - 1)
'Consume (read) the remaining characters in the block.
' We are not doing anything with them because they are not actual records.
'sr.ReadBlock(chrArray, 0, iRecordLength - iConsumedChars)
sr.ReadBlock(chrArray, 0, miBlockLength - iConsumedChars)
'Reset the Consumed Characters counter
iConsumedChars = 0
'Set the Record Length to 0 so it will not be processed below.
iRecordLength = 0
End If ' If bEOR = False
End If 'If mbUseControlRec = True
If iRecordLength > 0 Then
'Resize our array, dumping previous data. Because Arrays are Zero (0) based, subtract 1 from the Record length.
ReDim chrArray(iRecordLength - 1)
'Read the specified record length, without the Control Record, because we already consumed (read) it.
sr.ReadBlock(chrArray, 0, iRecordLength)
'Copy Character Array to String Array, Converting in the process, then Join the Array to a string
sRecord = Join(Array.ConvertAll(chrArray, New Converter(Of Char, String)(AddressOf ChrToStr)), "")
'If the record length was 0, then the Join method may return Nothing
If IsNothing(sRecord) = False Then
If bIgnoreRecord = True Then
'Do nothing - bypass record
'Reset flag
bIgnoreRecord = False
Else
'Write the line out, LTrimming the specified number of characters.
If sRecord.Length >= miLTrim Then
sw.WriteLine(sRecord.Remove(0, miLTrim))
Else
sw.WriteLine(sRecord.Remove(0, sRecord.Length))
End If ' If sRecord.Length >= miLTrim
'Write out the number of blank lines specified by the 3rd control character.
For i As Integer = 1 To iLineSpace - 1
sw.WriteLine("")
Next 'For i As Integer = 1 To iLineSpace
End If 'If bIgnoreRecord = True
'Obviously, if we have read more characters from the file than the designated size of the block,
' then subtract the number of characters we have read into the next block from the block size.
If iConsumedChars > miBlockLength Then
'If iConsumedChars > iRecordLength Then
iConsumedChars = iConsumedChars - miBlockLength
'iConsumedChars = iConsumedChars - iRecordLength
End If
End If 'If IsNothing(sRecord) = False
End If 'If iRecordLength > 0
'Allow computer to process (works in a class module, not in a dll)
'Application.DoEvents()
Loop
'Destroy StreamReader (sr)
sr.Close()
sr.Dispose()
'Destroy StreamWriter (sw)
sw.Close()
sw.Dispose()
End Sub
''' <summary>
''' Translates 1 EBCDIC Character (Char) to an ASCII String
''' </summary>
''' <param name="chr"></param>
''' <returns></returns>
''' <remarks></remarks>
Private Function ChrToStr(ByVal chr As Char) As String
Dim sReturn As String = ""
'Convert character into byte
Dim EBCDICbyte As Byte() = EBCDICEncoding.GetBytes(chr)
'Convert EBCDIC byte to ASCII byte
Dim ASCIIByte As Byte() = Encoding.Convert(EBCDICEncoding, ASCIIEncoding, EBCDICbyte)
sReturn = Encoding.ASCII.GetString(ASCIIByte)
Return sReturn
End Function
''' <summary>
''' Translates an EBCDIC String to an ASCII String
''' </summary>
''' <param name="sStringToTranslate"></param>
''' <returns>String</returns>
''' <remarks></remarks>
Public Function TranslateString(ByVal sStringToTranslate As String) As String
Dim i As Integer = 0
Dim sReturn As New System.Text.StringBuilder()
'Loop through the string and translate each character
For i = 0 To sStringToTranslate.Length - 1
sReturn.Append(ChrToStr(sStringToTranslate.Substring(i, 1)))
Next
Return sReturn.ToString()
End Function
''' <summary>
''' Translates 1 EBCDIC Character (Char) to an ASCII String
''' </summary>
''' <param name="sCharacterToTranslate"></param>
''' <returns>String</returns>
''' <remarks></remarks>
Public Function TranslateCharacter(ByVal sCharacterToTranslate As Char) As String
Return ChrToStr(sCharacterToTranslate)
End Function
''' <summary>
''' Translates an EBCDIC Character (Char) Array to an ASCII String
''' </summary>
''' <param name="sCharacterArrayToTranslate"></param>
''' <returns>String</returns>
''' <remarks>Remarks</remarks>
Public Function TranslateCharacters(ByVal sCharacterArrayToTranslate As Char()) As String
Dim sReturn As String = ""
'Copy Character Array to String Array, Converting in the process, then Join the Array to a string
sReturn = Join(Array.ConvertAll(sCharacterArrayToTranslate, _
New Converter(Of Char, String)(AddressOf ChrToStr)), "")
Return sReturn
End Function
''' <summary>
''' Block Length must be set. You can set the BlockLength for specific block sizes (Ex: 134).
''' Set UseControlRecord = False for files with specific block sizes (Default is True)
''' </summary>
''' <value>0</value>
''' <returns>Integer</returns>
''' <remarks></remarks>
Public Property BlockLength() As Integer
Get
Return miBlockLength
End Get
Set(ByVal value As Integer)
miBlockLength = value
End Set
End Property
''' <summary>
''' Determines whether a ControlKey is used to calculate RecordLength of valid data
''' </summary>
''' <value>Default value is True</value>
''' <returns>Boolean</returns>
''' <remarks></remarks>
Public Property UseControlRecord() As Boolean
Get
Return mbUseControlRec
End Get
Set(ByVal value As Boolean)
mbUseControlRec = value
End Set
End Property
''' <summary>
''' Ignores first record if set (Default is True)
''' </summary>
''' <value>Default is True</value>
''' <returns>Boolean</returns>
''' <remarks></remarks>
Public Property IgnoreFirstRecord() As Boolean
Get
Return mbIgnoreFirstRecord
End Get
Set(ByVal value As Boolean)
mbIgnoreFirstRecord = value
End Set
End Property
''' <summary>
''' Trims the left side of every string the specified number of characters. Default is 0.
''' </summary>
''' <value>Default is 0.</value>
''' <returns>Integer</returns>
''' <remarks></remarks>
Public Property LTrim() As Integer
Get
Return miLTrim
End Get
Set(ByVal value As Integer)
miLTrim = value
End Set
End Property
End Class
Some useful links for EBCDIC translation:
Translation table - useful to do check some of the values in the packed decimal fields:
http://www.simotime.com/asc2ebc1.htm
List of code pages in msdn:
http://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx
And a piece of code to convert the byte array fields in C#:
// 500 is the code page for IBM EBCDIC International
System.Text.Encoding enc = System.Text.Encoding.GetEncoding(500);
string value = enc.GetString(byteArrayField);
The packed fields are the same in EBCDIC or ASCII. Do not run the EBCDIC to ASCII conversion on them. In .Net dump them into a byte[].
You use bitwise masks and shifts to pack/unpack.
-- But bitwise ops only apply to integer types in .Net so you need to jump through some hoops!
A good COBOL or C artist can point you in the right direction.
Find one of the old guys and pay your dues (about three beers should do it).
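A minimal sketch of that mask-and-shift unpacking in C# (my own method name; scale/decimal-point handling is left to the caller, the sign nibbles follow the convention described in the other answers, and the value is assumed to fit in a long):

    static long UnpackComp3(byte[] packed)
    {
        long result = 0;
        for (int i = 0; i < packed.Length; i++)
        {
            int high = (packed[i] >> 4) & 0x0F;   // high nibble is always a digit
            int low  = packed[i] & 0x0F;          // low nibble: digit, or the sign on the last byte
            result = result * 10 + high;
            if (i < packed.Length - 1)
                result = result * 10 + low;
            else if (low == 0x0D || low == 0x0B)
                result = -result;                 // D and B sign nibbles mean negative
            // C, F (and other values) are treated as positive/unsigned here
        }
        return result;
    }

For example, X'12300D' comes back as -12300; where the decimal point goes is up to the field definition.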
The "ASCII" transfer type will transfer the files as regular text files, so files become corrupt when we transfer packed decimal or binary data files with the ASCII transfer type. The "Binary" transfer type transfers the data in binary mode, which handles the files as binary data instead of text data. So we have to use the Binary transfer type here.
Reference : https://www.codeproject.com/Tips/673240/EBCDIC-to-ASCII-Converter
Once your file is ready, here is the code to convert packed decimal to human readable decimal.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApp2
{
class Program
{
static void Main(string[] args)
{
var path = @"C:\FileName.BIN.dat";
var templates = new List<Template>
{
new Template{StartPos=1,CharLength=4,Type="AlphaNum"},
new Template{StartPos=5,CharLength=1,Type="AlphaNum"},
new Template{StartPos=6,CharLength=8,Type="AlphaNum"},
new Template{StartPos=14,CharLength=1,Type="AlphaNum"},
new Template{StartPos=46,CharLength=4,Type="Packed",DecimalPlace=2},
new Template{StartPos=54,CharLength=5,Type="Packed",DecimalPlace=0},
new Template{StartPos=60,CharLength=4,Type="Packed",DecimalPlace=2},
new Template{StartPos=64,CharLength=1,Type="AlphaNum"}
};
var allBytes = File.ReadAllBytes(path);
for (int i = 0; i < allBytes.Length; i += 66)
{
var IsLastline = (allBytes.Length - i) < 66;
var lineLength = IsLastline ? 64 : 66;
byte[] lineBytes = new byte[lineLength];
Array.Copy(allBytes, i, lineBytes, 0, lineLength);
var outArray = new string[templates.Count];
int index = 0;
foreach (var temp in templates)
{
byte[] amoutBytes = new byte[temp.CharLength];
Array.Copy(lineBytes, temp.StartPos - 1, amoutBytes, 0,
temp.CharLength);
var final = "";
if (temp.Type == "Packed")
{
final = Unpack(amoutBytes, temp.DecimalPlace).ToString();
}
else
{
final = ConvertEbcdicString(amoutBytes);
}
outArray[index] = final;
index++;
}
Console.WriteLine(string.Join(" ", outArray));
}
Console.ReadLine();
}
private static string ConvertEbcdicString(byte[] ebcdicBytes)
{
if (ebcdicBytes.All(p => p == 0x00 || p == 0xFF))
{
//Every byte is either 0x00 or 0xFF (fillers)
return string.Empty;
}
Encoding ebcdicEnc = Encoding.GetEncoding("IBM037");
string result = ebcdicEnc.GetString(ebcdicBytes); // convert EBCDIC bytes -> Unicode string
return result;
}
private static Decimal Unpack(byte[] inp, int scale)
{
long lo = 0;
long mid = 0;
long hi = 0;
bool isNegative;
// this nybble stores only the sign, not a digit.
// "C" hex is positive, "D" hex is negative, AlphaNumd "F" hex is unsigned.
var ff = nibble(inp, 0);
switch (ff)
{
case 0x0D:
isNegative = true;
break;
case 0x0F:
case 0x0C:
isNegative = false;
break;
default:
throw new Exception("Bad sign nibble");
}
long intermediate;
long carry;
long digit;
for (int j = inp.Length * 2 - 1; j > 0; j--)
{
// multiply by 10
intermediate = lo * 10;
lo = intermediate & 0xffffffff;
carry = intermediate >> 32;
intermediate = mid * 10 + carry;
mid = intermediate & 0xffffffff;
carry = intermediate >> 32;
intermediate = hi * 10 + carry;
hi = intermediate & 0xffffffff;
carry = intermediate >> 32;
// By limiting input length to 14, we ensure overflow will never occur
digit = nibble(inp, j);
if (digit > 9)
{
throw new Exception("Bad digit");
}
intermediate = lo + digit;
lo = intermediate & 0xffffffff;
carry = intermediate >> 32;
if (carry > 0)
{
intermediate = mid + carry;
mid = intermediate & 0xffffffff;
carry = intermediate >> 32;
if (carry > 0)
{
intermediate = hi + carry;
hi = intermediate & 0xffffffff;
carry = intermediate >> 32;
// carry should never be non-zero. Back up with validation
}
}
}
return new Decimal((int)lo, (int)mid, (int)hi, isNegative, (byte)scale);
}
private static int nibble(byte[] inp, int nibbleNo)
{
int b = inp[inp.Length - 1 - nibbleNo / 2];
return (nibbleNo % 2 == 0) ? (b & 0x0000000F) : (b >> 4);
}
class Template
{
public string Name { get; set; }
public string Type { get; set; }
public int StartPos { get; set; }
public int CharLength { get; set; }
public int DecimalPlace { get; set; }
}
}
}
Files must be transferred as binary. Here's a much shorter way to do it:
using System.Linq;
namespace SomeNamespace
{
public static class SomeExtensionClass
{
/// <summary>
/// computes the actual decimal value from an IBM "Packed Decimal" 9(x)v9 (COBOL) format
/// </summary>
/// <param name="value">byte[]</param>
/// <param name="precision">byte; decimal places, default 2</param>
/// <returns>decimal</returns>
public static decimal FromPackedDecimal(this byte[] value, byte precision = 2)
{
if (value.Length < 1)
{
throw new System.InvalidOperationException("Cannot unpack empty bytes.");
}
double power = System.Math.Pow(10, precision);
if (power > long.MaxValue)
{
throw new System.InvalidOperationException(
$"Precision too large for valid calculation: {precision}");
}
string hex = System.BitConverter.ToString(value).Replace("-", "");
var bytes = Enumerable.Range(0, hex.Length)
.Select(x => System.Convert.ToByte($"0{hex.Substring(x, 1)}", 16))
.ToList();
long place = 1;
decimal ret = 0;
for (int i = bytes.Count - 2; i > -1; i--)
{
ret += (bytes[i] * place);
place *= 10;
}
ret /= (long)power;
return (bytes.Last() == 0x0D || bytes.Last() == 0x0B) ? ret * -1 : ret; // D/B sign nibbles mean negative
}
}
}
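Example usage (my own sample bytes): X'01234F' holds the digits 01234 with an unsigned sign nibble, so with the default precision of 2 it unpacks to 12.34.

    byte[] packed = { 0x01, 0x23, 0x4F };
    decimal amount = packed.FromPackedDecimal();    // 12.34m
    decimal whole  = packed.FromPackedDecimal(0);   // 1234m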