ushort array to byte array - c#

I have an array of ushorts, with each ushort representing a 12-bit word. This needs to be tightly packed into an array of bytes. It should look like this in the end:
| word1 | word2 | word3 | word4 |
| byte1 | byte2 | byte3 | byte4 | byte5 | byte6 |
Since each word only uses 12 bits, 2 words will be packed into 3 bytes.
Could someone help? I'm a bit stuck on how to do this in C#.

You're probably going to have to brute-force it.
I'm not a C# guy, but you are looking at something along the lines of (in C):
unsigned incursor, outcursor;
unsigned inlen = length(inputarray); // not literally
for (incursor = 0, outcursor = 0; incursor < inlen; incursor += 2, outcursor += 3) {
    outputarray[outcursor+0] = (inputarray[incursor+0] >> 4) & 0xFF;
    outputarray[outcursor+1] = ((inputarray[incursor+0] & 0x0F) << 4) | ((inputarray[incursor+1] >> 8) & 0x0F);
    outputarray[outcursor+2] = inputarray[incursor+1] & 0xFF;
}
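A direct C# translation of that loop might look like this (a sketch along the same lines; it assumes an even number of input words, and input/output are placeholder names):
// Packs pairs of 12-bit words (stored in ushorts) into 3 bytes each.
// output must be at least input.Length * 3 / 2 bytes long.
static void Pack12(ushort[] input, byte[] output)
{
    for (int i = 0, o = 0; i < input.Length; i += 2, o += 3)
    {
        output[o + 0] = (byte)(input[i] >> 4);                                           // top 8 bits of word 1
        output[o + 1] = (byte)(((input[i] & 0x0F) << 4) | ((input[i + 1] >> 8) & 0x0F)); // low nibble of word 1, high nibble of word 2
        output[o + 2] = (byte)(input[i + 1] & 0xFF);                                     // low 8 bits of word 2
    }
}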

If you want to use the array as an array of UInt16 while in memory, and then convert it to a packed byte array for storage, you'll want a function to do a one-shot conversion between the two array types.
public byte[] PackUInt12(ushort[] input)
{
    // The +1 leaves space if we have an odd number of UInt12s; the unused half byte sits at the end of the array.
    byte[] result = new byte[(input.Length * 3 + 1) / 2];
    for (int i = 0; i < input.Length / 2; i++)
    {
        result[i * 3 + 0] = (byte)input[i * 2 + 0];
        result[i * 3 + 1] = (byte)(input[i * 2 + 0] >> 8 | input[i * 2 + 1] << 4);
        result[i * 3 + 2] = (byte)(input[i * 2 + 1] >> 4);
    }
    if (input.Length % 2 == 1)
    {
        int i = input.Length / 2;
        result[i * 3 + 0] = (byte)input[i * 2 + 0];
        result[i * 3 + 1] = (byte)(input[i * 2 + 0] >> 8);
    }
    return result;
}
public ushort[] UnpackUInt12(byte[] input)
{
    ushort[] result = new ushort[input.Length * 2 / 3];
    for (int i = 0; i < input.Length / 3; i++)
    {
        result[i * 2 + 0] = (ushort)((input[i * 3 + 1] << 8) & 0x0F00 | input[i * 3 + 0]);
        result[i * 2 + 1] = (ushort)(input[i * 3 + 2] << 4 | input[i * 3 + 1] >> 4);
    }
    if (result.Length % 2 == 1)
    {
        int i = input.Length / 3;
        result[i * 2 + 0] = (ushort)((input[i * 3 + 1] << 8) & 0x0F00 | input[i * 3 + 0]);
    }
    return result;
}
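A quick round-trip check of the pair above might look like this (a sketch; it assumes the two methods are in scope):
ushort[] words = { 0x123, 0x456, 0x789 };  // three 12-bit values
byte[] packed = PackUInt12(words);         // 5 bytes: 3 * 1.5 rounded up
ushort[] unpacked = UnpackUInt12(packed);
foreach (ushort w in unpacked)
    Console.Write(w.ToString("X3") + " "); // prints: 123 456 789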
If, however, you want to be efficient about memory usage while the application is running, and access this packed array as an array, then you'll want to have a class that returns ushorts, but stores them in byte[].
public class UInt12Array
{
    // TODO: Constructors, etc.
    private byte[] storage;
    public ushort this[int index]
    {
        get
        {
            // TODO: throw exceptions if the index is off the array.
            int i = index / 2; // each pair of UInt12s occupies 3 bytes starting at i * 3
            if (index % 2 == 0)
                return (ushort)((storage[i * 3 + 1] << 8) & 0x0F00 | storage[i * 3 + 0]);
            else
                return (ushort)(storage[i * 3 + 2] << 4 | storage[i * 3 + 1] >> 4);
        }
        set
        {
            // TODO: throw exceptions if the index is off the array.
            int i = index / 2;
            if (index % 2 == 0)
            {
                storage[i * 3 + 0] = (byte)value;
                storage[i * 3 + 1] = (byte)(value >> 8 | storage[i * 3 + 1] & 0xF0);
            }
            else
            {
                storage[i * 3 + 1] = (byte)(storage[i * 3 + 1] & 0x0F | value << 4);
                storage[i * 3 + 2] = (byte)(value >> 4);
            }
        }
    }
}
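Usage would then look something like this (a sketch; it assumes a constructor such as UInt12Array(int count) has been added that allocates storage = new byte[(count * 3 + 1) / 2], which is still a TODO above):
var values = new UInt12Array(4); // hypothetical constructor, see the TODO above
values[0] = 0xABC;
values[1] = 0x123;
Console.WriteLine(values[0].ToString("X3")); // ABC
Console.WriteLine(values[1].ToString("X3")); // 123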

Why not store the 12-bit words in a byte array and provide getter and setter methods that read and write the ushort's bytes at the correct index in the array?
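A minimal sketch of that idea, using the same byte layout as the indexer class above (names are placeholders):
static ushort Get12(byte[] storage, int index)
{
    int i = index / 2 * 3; // each pair of 12-bit words occupies 3 bytes
    return index % 2 == 0
        ? (ushort)((storage[i + 1] << 8) & 0x0F00 | storage[i])
        : (ushort)(storage[i + 2] << 4 | storage[i + 1] >> 4);
}
static void Set12(byte[] storage, int index, ushort value)
{
    int i = index / 2 * 3;
    if (index % 2 == 0)
    {
        storage[i] = (byte)value;
        storage[i + 1] = (byte)((value >> 8) | (storage[i + 1] & 0xF0));
    }
    else
    {
        storage[i + 1] = (byte)((storage[i + 1] & 0x0F) | (value << 4));
        storage[i + 2] = (byte)(value >> 4);
    }
}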

Trying to solve this with LINQ was fun!
Warning: For entertainment purposes only - do not use the below performance abominations in real code!
First try: group pairs of ushorts, create three bytes out of each pair, and flatten the list:
byte[] packedNumbers = (from i in Enumerable.Range(0, unpackedNumbers.Length)
group unpackedNumbers[i] by i - (i % 2) into pairs
let n1 = pairs.First()
let n2 = pairs.Skip(1).First()
let b1 = (byte)(n1 >> 4)
let b2 = (byte)(((n1 & 0xF) << 4) | (n2 & 0xF00) >> 8)
let b3 = (byte)(n2 & 0xFFFF)
select new[] { b1, b2, b3 })
.SelectMany(b => b).ToArray();
Or slightly more compact, but less readable:
byte[] packedNumbers = unpackedNumbers
.Select((Value, Index) => new { Value, Index })
.GroupBy(number => number.Index - (number.Index % 2))
.SelectMany(pair => new byte[] {
(byte)(pair.First().Value >> 4),
(byte)(((pair.First().Value & 0xF) << 4) | (pair.Skip(1).First().Value & 0xF00) >> 8),
(byte)(pair.Skip(1).First().Value & 0xFFFF) }).ToArray();
Strings anyone?
char[] hexChars = unpackedNumbers.SelectMany(n => n.ToString("X4").Substring(1, 3)).ToArray();
byte[] packedNumbers = (from i in Enumerable.Range(0, hexChars.Length / 2)
select byte.Parse(hexChars[i * 2].ToString() + hexChars[i * 2 + 1], NumberStyles.HexNumber))
.ToArray();

Given the comments, I suppose the existing answers are preferable, but something like this should also do it:
public byte[] ushort2byteArr(ushort[] arr) {
System.IO.MemoryStream ms = new System.IO.MemoryStream();
System.IO.BinaryWriter bw = new System.IO.BinaryWriter(ms);
for (int i = 0; i < arr.Length-1;) { // check upper limit!
// following is wrong! must extend this to pack 8 12 bit words into 3 uint32!
UInt32 tmp = arr[i++] | (arr[i++] << 12) ... ;
bw.Write(tmp);
}
return ms.ToArray();
}
It's not tested; take it as pseudocode to get the idea, especially the word-to-UInt32 conversion. It may need some padding at the end.
Edit: made a function out of it for better clarity.
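For completeness, here is a working sketch of that idea that sidesteps the UInt32 packing and simply writes each pair of 12-bit words as three bytes through the BinaryWriter (still untested; a trailing odd word would need padding):
public byte[] Ushort2ByteArrPacked(ushort[] arr)
{
    var ms = new System.IO.MemoryStream();
    var bw = new System.IO.BinaryWriter(ms);
    for (int i = 0; i + 1 < arr.Length; i += 2)
    {
        bw.Write((byte)(arr[i] >> 4));                                          // top 8 bits of word 1
        bw.Write((byte)(((arr[i] & 0x0F) << 4) | ((arr[i + 1] >> 8) & 0x0F)));  // low nibble of word 1, high nibble of word 2
        bw.Write((byte)(arr[i + 1] & 0xFF));                                    // low 8 bits of word 2
    }
    return ms.ToArray();
}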

Related

Hex Dump EXE File

How do I properly display the contents of an EXE file "C:/Path/To/File.exe" in hexadecimal form? So far, I have:
byte[] BytArr = File.ReadAllBytes("C:/Path/To/File.exe");
I tried using a switch statement (not shown here) that reads every few bytes and should output the appropriate hexadecimal code, but it failed. What should I do? I would really appreciate it if anyone can help me.
Beware that the answer code isn't well formatted and is rather inefficient (source: https://www.codeproject.com/articles/36747/quick-and-dirty-hexdump-of-a-byte-array), but I did make an effort to format it properly.
Answer Code:
using System.Text;
namespace HexDump
{
class Utils
{
public static string HexDump(byte[] bytes, int bytesPerLine = 16)
{
if (bytes == null) return "<null>";
int bytesLength = bytes.Length;
char[] HexChars = "0123456789ABCDEF".ToCharArray();
int firstHexColumn =
8 // 8 characters for the address
+ 3; // 3 spaces
int firstCharColumn = firstHexColumn
+ bytesPerLine * 3 // - 2 digit for the hexadecimal value and 1 space
+ (bytesPerLine - 1) / 8 // - 1 extra space every 8 characters from the 9th
+ 2; // 2 spaces
int lineLength = firstCharColumn
+ bytesPerLine // - characters to show the ascii value
+ Environment.NewLine.Length; // Carriage return and line feed (should normally be 2)
char[] line = (new String(' ', lineLength - Environment.NewLine.Length) + Environment.NewLine).ToCharArray();
int expectedLines = (bytesLength + bytesPerLine - 1) / bytesPerLine;
StringBuilder result = new StringBuilder(expectedLines * lineLength);
for (int i = 0; i < bytesLength; i += bytesPerLine)
{
line[0] = HexChars[(i >> 28) & 0xF];
line[1] = HexChars[(i >> 24) & 0xF];
line[2] = HexChars[(i >> 20) & 0xF];
line[3] = HexChars[(i >> 16) & 0xF];
line[4] = HexChars[(i >> 12) & 0xF];
line[5] = HexChars[(i >> 8) & 0xF];
line[6] = HexChars[(i >> 4) & 0xF];
line[7] = HexChars[(i >> 0) & 0xF];
int hexColumn = firstHexColumn;
int charColumn = firstCharColumn;
for (int j = 0; j < bytesPerLine; j++)
{
if (j > 0 && (j & 7) == 0) hexColumn++;
if (i + j >= bytesLength)
{
line[hexColumn] = ' ';
line[hexColumn + 1] = ' ';
line[charColumn] = ' ';
}
else
{
byte b = bytes[i + j];
line[hexColumn] = HexChars[(b >> 4) & 0xF];
line[hexColumn + 1] = HexChars[b & 0xF];
line[charColumn] = (b < 32 ? '·' : (char)b);
}
hexColumn += 3;
charColumn++;
}
result.Append(line);
}
return result.ToString();
}
}
}
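For the question as asked, the file bytes can then be fed straight into that helper (assuming the HexDump namespace above is referenced):
byte[] BytArr = File.ReadAllBytes("C:/Path/To/File.exe");
Console.WriteLine(HexDump.Utils.HexDump(BytArr));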
Here's some simple code that will lump the bytes 4 at a time (step) with a space delimiter (delimiter):
int step = 4;
string delimiter = " ";
for (int i = 0; i < BytArr.Length; i += step)
{
    // Guard against running past the end when the length isn't a multiple of step.
    for (int j = 0; j < step && i + j < BytArr.Length; j++)
    {
        Console.Write(BytArr[i + j].ToString("X2"));
    }
    Console.Write(delimiter);
}
This URL shows how to do the dump in C; search for the C sample given towards the end of the page.
This URL shows an example in C#.

How to read IMediaSample 24 bit PCM data

I have the following method which collects PCM data from the IMediaSample into floats for the FFT:
public int PCMDataCB(IntPtr Buffer, int Length, ref TDSStream Stream, out float[] singleChannel)
{
int numSamples = Length / (Stream.Bits / 8);
int samplesPerChannel = numSamples / Stream.Channels;
float[] samples = new float[numSamples];
if (Stream.Bits == 32 && Stream.Float) {
// this seems to work for 32 bit floating point
byte[] buffer32f = new byte[numSamples * 4];
Marshal.Copy(Buffer, buffer32f, 0, numSamples * 4); // copy all of the bytes (Length == numSamples * 4)
for (int j = 0; j < buffer32f.Length; j+=4)
{
samples[j / 4] = System.BitConverter.ToSingle(new byte[] { buffer32f[j + 0], buffer32f[j + 1], buffer32f[j + 2], buffer32f[j + 3]}, 0);
}
}
else if (Stream.Bits == 24)
{
// I need this code
}
// compress result into one mono channel
float[] result = new float[samplesPerChannel];
for (int i = 0; i < numSamples; i += Stream.Channels)
{
float tmp = 0;
for (int j = 0; j < Stream.Channels; j++)
tmp += samples[i + j] / Stream.Channels;
result[i / Stream.Channels] = tmp;
}
// mono output to be used for visualizations
singleChannel = result;
return 0;
}
This seems to work for 32-bit float, because I get sensible data in the spectrum analyzer (although it seems too shifted (or compressed?) towards the lower frequencies).
I also seem to be able to make it work for 8, 16 and 32-bit non-float, but I read only garbage when the bits are 24.
How can I adapt this to work with 24 bit PCM coming into Buffer?
Buffer comes from an IMediaSample.
Another thing I am wondering is whether the method I use to mix all channels down to one, by summing and dividing by the number of channels, is OK...
I figured it out:
byte[] buffer24 = new byte[numSamples * 3];
Marshal.Copy(Buffer, buffer24, 0, numSamples * 3);
var window = (float)(255 << 16 | 255 << 8 | 255);
for (int j = 0; j < buffer24.Length; j+=3)
{
samples[j / 3] = (buffer24[j] << 16 | buffer24[j + 1] << 8 | buffer24[j + 2]) / window;
}
This creates an integer from the three bytes and then scales it into the -1/1 range by dividing by the max value of three bytes.
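If the stream delivers the usual signed little-endian 24-bit PCM, a sign-extending variant of the same loop may be needed (a sketch only, not verified against this particular IMediaSample source):
byte[] buffer24 = new byte[numSamples * 3];
Marshal.Copy(Buffer, buffer24, 0, numSamples * 3);
for (int j = 0; j < buffer24.Length; j += 3)
{
    // Assemble the little-endian 24-bit sample, then sign-extend it to 32 bits.
    int sample = buffer24[j] | (buffer24[j + 1] << 8) | (buffer24[j + 2] << 16);
    if ((sample & 0x800000) != 0)
        sample |= unchecked((int)0xFF000000);
    samples[j / 3] = sample / 8388608f; // divide by 2^23 to scale into roughly -1..1
}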
Have you tried
byte[] buffer24f = new byte[numSamples * 3];
Marshal.Copy(Buffer, buffer24f, 0, numSamples * 3); // copy all three bytes of every sample
for (int j = 0; j < buffer24f.Length; j+=3)
{
samples[j / 3] = System.BitConverter.ToSingle(
new byte[] {
0,
buffer24f[j + 0],
buffer24f[j + 1],
buffer24f[j + 2]
}, 0);
}

Pack and unpack multiple integers into and from an Uint64

I need to pack and unpack the following into a UInt64:
UInt25
UInt5
UInt7
UInt27
I have the following for packing and unpacking a UInt27 and a UInt5 to/from a UInt32,
but I cannot get past two fields.
My background is math (not computer science).
UInt32 highlow;
UInt32 high;
byte low;
int two27 = (Int32)Math.Pow(2, 27);
for (UInt32 i = 0; i < two27; i++)
{
highlow = ((UInt32)i) << 5;
high = highlow >> 5;
if (high != i)
{
Debug.WriteLine("high wrong A " + high.ToString() + " " + i.ToString());
}
for (byte j = 0; j < 32; j++)
{
highlow = (((UInt32)i) << 5) | j;
high = highlow >> 5;
if (high != i)
{
Debug.WriteLine("high wrong B " + high.ToString() + " " + i.ToString());
}
low = (byte)(highlow & 0x1f);
if (low != j)
{
Debug.WriteLine("low wrong " + low.ToString() + " " + j.ToString());
}
}
}
Code based on the accepted answer (I did not test the full loop; the i27 loop got to 2):
UInt32 bits27;
UInt32 bits25;
UInt32 bits7;
UInt32 bits5;
UInt32 int27 = (UInt32)Math.Pow(2,27);
UInt32 int25 = (UInt32)Math.Pow(2,25);
UInt32 int7 = (UInt32)Math.Pow(2,7);
UInt32 int5 = (UInt32)Math.Pow(2,5);
UInt64 packed;
//ulong packed = (bits27) | ((ulong)bits25 << 27) | ((ulong)bits7 << 52) | ((ulong)bits5 << 59);
for (UInt32 i27 = 0; i27 < int27; i27++)
{
for (UInt32 i25 = 0; i25 < int25; i25++)
{
for (UInt32 i7 = 0; i7 < int7; i7++)
{
for (UInt32 i5 = 0; i5 < int5; i5++)
{
packed = (UInt64)(i27) | ((UInt64)i25 << 27) | ((UInt64)i7 << 52) | ((UInt64)i5 << 59);
bits27 = (UInt32)(packed & ((1 << 27) - 1));
bits25 = (UInt32)((packed >> 27) & ((1 << 25) - 1));
bits7 = (UInt32)((packed >> 52) & ((1 << 7) - 1));
bits5 = (UInt32)((packed >> 59) & ((1 << 5) - 1));
if (bits27 != i27) Debug.WriteLine("bits27 != i27");
if (bits25 != i25) Debug.WriteLine("bits25 != i25");
if (bits7 != i7) Debug.WriteLine("bits7 != i7");
if (bits5 != i5) Debug.WriteLine("bits5 != i5");
}
}
}
}
The shift operators are the right solution, but note that they won't automatically make the result wider than the inputs -- you need to cast the input.
Pack:
ulong packed = (bits27) | ((ulong)bits25 << 27) | ((ulong)bits7 << 52) | ((ulong)bits5 << 59);
Unpack:
bits27 = (uint) (packed & ((1 << 27) - 1));
bits25 = (uint)((packed >> 27) & ((1 << 25) - 1));
bits7 = (uint)((packed >> 52) & ((1 << 7) - 1));
bits5 = (uint)((packed >> 59) & ((1 << 5) - 1));
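For example, with a 5-bit value destined for the top field, skipping the cast silently masks the shift count to the 32-bit width (a small illustration, not from the original answer):
uint bits5 = 0x1F;
ulong wrong = bits5 << 59;        // 32-bit shift: the count is masked to 59 & 31 = 27
ulong right = (ulong)bits5 << 59; // widen first, then shift within 64 bits
Console.WriteLine(wrong.ToString("X")); // F8000000
Console.WriteLine(right.ToString("X")); // F800000000000000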
It seems like it would be far easier to convert the numbers to binary, pad or truncate to the correct length, concatenate them and then construct your 64-bit type from binary.
var packedInt64 = Convert.ToInt64(Convert.ToString(ui25, 2).PadLeft(25, '0') +
Convert.ToString(ui5, 2).PadLeft(5, '0') +
Convert.ToString(ui7, 2).PadLeft(7, '0') +
Convert.ToString(ui27, 2).PadLeft(27, '0'), 2);
To unpack:
var binary = Convert.ToString(packedInt64, 2).PadLeft(64, '0'); // pad so each field sits at a fixed position
ui25 = Convert.ToUInt32(binary.Substring(0, 25), 2);
ui5 = Convert.ToUInt32(binary.Substring(25, 5), 2);
etc.

Faster way to swap endianness in C# with 32 bit words

In this question, the following code:
public static void Swap(byte[] data)
{
for (int i = 0; i < data.Length; i += 2)
{
byte b = data[i];
data[i] = data[i + 1];
data[i + 1] = b;
}
}
was rewritten in unsafe code to improve its performance:
public static unsafe void SwapX2(Byte[] Source)
{
fixed (Byte* pSource = &Source[0])
{
Byte* bp = pSource;
Byte* bp_stop = bp + Source.Length;
while (bp < bp_stop)
{
*(UInt16*)bp = (UInt16)(*bp << 8 | *(bp + 1));
bp += 2;
}
}
}
Assuming that one wanted to do the same thing with 32 bit words:
public static void SwapX4(byte[] data)
{
byte temp;
for (int i = 0; i < data.Length; i += 4)
{
temp = data[i];
data[i] = data[i + 3];
data[i + 3] = temp;
temp = data[i + 1];
data[i + 1] = data[i + 2];
data[i + 2] = temp;
}
}
how would this be rewritten in a similar fashion?
public static unsafe void SwapX4(Byte[] Source)
{
fixed (Byte* pSource = &Source[0])
{
Byte* bp = pSource;
Byte* bp_stop = bp + Source.Length;
while (bp < bp_stop)
{
*(UInt32*)bp = (UInt32)(
(*bp << 24) |
(*(bp + 1) << 16) |
(*(bp + 2) << 8) |
(*(bp + 3) ));
bp += 4;
}
}
}
Note that both of these functions (my SwapX4 and your SwapX2) will only swap anything on a little-endian host; when run on a big-endian host, they are an expensive no-op.
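On newer runtimes (.NET Core 2.1+ / .NET 5+, an assumption about your target), the same 32-bit swap can also be done without unsafe code by reinterpreting the buffer as 32-bit words; a sketch:
using System;
using System.Buffers.Binary;
using System.Runtime.InteropServices;

public static void SwapX4Safe(byte[] data)
{
    // View the byte buffer as uints and reverse each one in place.
    // Any trailing bytes beyond a multiple of 4 are left untouched.
    Span<uint> words = MemoryMarshal.Cast<byte, uint>(data);
    for (int i = 0; i < words.Length; i++)
        words[i] = BinaryPrimitives.ReverseEndianness(words[i]);
}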
This version will not exceed the bounds of the buffer, works on both little- and big-endian architectures, and is faster on larger data. (Update: add build configurations for x86 and x64, predefine X86 for 32-bit (x86) and X64 for 64-bit (x64), and it will be slightly faster.)
public static unsafe void Swap4(byte[] source)
{
fixed (byte* psource = source)
{
#if X86
var length = *((uint*)(psource - 4)) & 0xFFFFFFFEU;
#elif X64
var length = *((uint*)(psource - 8)) & 0xFFFFFFFEU;
#else
var length = (source.Length & 0xFFFFFFFE);
#endif
while (length > 7)
{
length -= 8;
ulong* pulong = (ulong*)(psource + length);
*pulong = ( ((*pulong >> 24) & 0x000000FF000000FFUL)
| ((*pulong >> 8) & 0x0000FF000000FF00UL)
| ((*pulong << 8) & 0x00FF000000FF0000UL)
| ((*pulong << 24) & 0xFF000000FF000000UL));
}
if(length != 0)
{
uint* puint = (uint*)psource;
*puint = ( ((*puint >> 24))
| ((*puint >> 8) & 0x0000FF00U)
| ((*puint << 8) & 0x00FF0000U)
| ((*puint << 24)));
}
}
}

how to decompose integer array to a byte array (pixel codings)

Hi, sorry for being annoying by rephrasing my question, but I am just on the point of discovering my answer.
I have an array of ints composed of RGB values; I need to decompose that int array into a byte array, but it should be in BGR order.
The array of ints composed of RGB values is created like so:
pix[index++] = (255 << 24) | (red << 16) | blue;
C# code
// convert integer array representing [argb] values to byte array representing [bgr] values
private byte[] convertArray(int[] array)
{
byte[] newarray = new byte[array.Length * 3];
for (int i = 0; i < array.Length; i++)
{
newarray[i * 3] = (byte)array[i];
newarray[i * 3 + 1] = (byte)(array[i] >> 8);
newarray[i * 3 + 2] = (byte)(array[i] >> 16);
}
return newarray;
}
#define N something
unsigned char bytes[N*3];
unsigned int ints[N];
for(int i=0; i<N; i++) {
bytes[i*3] = ints[i]; // Blue
bytes[i*3+1] = ints[i] >> 8; // Green
bytes[i*3+2] = ints[i] >> 16; // Red
}
Using Linq:
pix.SelectMany(i => new byte[] {
(byte)(i >> 0),
(byte)(i >> 8),
(byte)(i >> 16),
}).ToArray();
Or
return (from i in pix
from x in new[] { 0, 8, 16 }
select (byte)(i >> x)
).ToArray();
Try using the Buffer class:
byte[] bytes = new byte[ints.Length*4];
Buffer.BlockCopy(ints, 0, bytes, 0, ints.Length * 4);
r = (pix[index] >> 16) & 0xFF
the rest is similar, just change 16 to 8 or 24.
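Spelled out for all four channels of an ARGB pixel:
int argb = pix[index];
byte a = (byte)((argb >> 24) & 0xFF); // alpha
byte r = (byte)((argb >> 16) & 0xFF); // red
byte g = (byte)((argb >> 8) & 0xFF);  // green
byte b = (byte)(argb & 0xFF);         // blue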
