how to decompose integer array to a byte array (pixel codings) - c#

Hi, sorry for being annoying by rephrasing my question, but I am just on the point of discovering my answer.
I have an array of ints composed of RGB values; I need to decompose that int array into a byte array, but it should be in BGR order.
The array of ints is being created like so (note that the green component appears to be missing from this line):
pix[index++] = (255 << 24) | (red << 16) | blue;

C# code
// convert integer array representing [argb] values to byte array representing [bgr] values
private byte[] convertArray(int[] array)
{
    byte[] newarray = new byte[array.Length * 3];
    for (int i = 0; i < array.Length; i++)
    {
        newarray[i * 3]     = (byte)array[i];         // blue
        newarray[i * 3 + 1] = (byte)(array[i] >> 8);  // green
        newarray[i * 3 + 2] = (byte)(array[i] >> 16); // red
    }
    return newarray;
}

#define N something
unsigned char bytes[N*3];
unsigned int ints[N];

for (int i = 0; i < N; i++) {
    bytes[i*3]     = ints[i];       // Blue
    bytes[i*3+1]   = ints[i] >> 8;  // Green
    bytes[i*3+2]   = ints[i] >> 16; // Red
}

Using LINQ:
pix.SelectMany(i => new byte[] {
    (byte)(i >> 0),
    (byte)(i >> 8),
    (byte)(i >> 16),
}).ToArray();
Or:
return (from i in pix
        from x in new[] { 0, 8, 16 }
        select (byte)(i >> x)).ToArray();

You can also try the Buffer class. Note that this copies all four bytes of each int (including the alpha channel), so on a little-endian machine you get B, G, R, A for each pixel rather than the 3-byte BGR layout:
byte[] bytes = new byte[ints.Length*4];
Buffer.BlockCopy(ints, 0, bytes, 0, ints.Length * 4);
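To see what BlockCopy actually produces, here is a minimal self-contained sketch (the sample pixel value is illustrative): on a little-endian machine the low-order byte of each int comes first, which is why the result reads B, G, R, A.

```csharp
using System;

class BlockCopyDemo
{
    static void Main()
    {
        int[] ints = { unchecked((int)0xFF112233) }; // A=FF, R=11, G=22, B=33
        byte[] bytes = new byte[ints.Length * 4];
        Buffer.BlockCopy(ints, 0, bytes, 0, ints.Length * 4);
        // Little-endian: low byte first, so the order is B, G, R, A.
        Console.WriteLine(BitConverter.ToString(bytes)); // 33-22-11-FF
    }
}
```

If you need tightly packed 3-byte BGR, you still have to strip every fourth (alpha) byte afterwards, so the explicit shift loop above is usually simpler.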

r = (pix[index] >> 16) & 0xFF;
The rest is similar: change 16 to 8 for green, 0 for blue, or 24 for alpha.
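Putting those shifts together, a minimal sketch extracting all four channels from a packed ARGB int (the sample value is illustrative):

```csharp
using System;

class ChannelDemo
{
    static void Main()
    {
        int pixel = unchecked((int)0xFF336699); // A=FF, R=33, G=66, B=99

        // Shift the wanted channel into the low byte, then mask off the rest.
        byte a = (byte)((pixel >> 24) & 0xFF);
        byte r = (byte)((pixel >> 16) & 0xFF);
        byte g = (byte)((pixel >> 8) & 0xFF);
        byte b = (byte)(pixel & 0xFF);

        Console.WriteLine($"{a:X2} {r:X2} {g:X2} {b:X2}"); // FF 33 66 99
    }
}
```

The `& 0xFF` matters for the alpha channel: `>> 24` on a negative int is an arithmetic shift, so without the mask the sign bits would remain set.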

Related

Writing a double array to a wave file in c#

I'm trying to write a wave file from scratch using C#. I managed to write 16-bit samples with no issue, but when it comes to 24-bit, apparently all bets are off.
I tried various ways of converting an int to a 3-byte array, which I would then write to the data chunk as L-L-L-R-R-R (since it's a 24-bit stereo PCM WAV).
For the 16bit part, I used this to generate the samples:
// numberOfBytes = 2 for 16-bit. slice = something like 2 * Math.PI * frequency / sampleRate
private static byte[,] BuildByteWave(double slice, int numberOfBytes = 2)
{
    double dataPt = 0;
    byte[,] output = new byte[Convert.ToInt32(Samples), numberOfBytes];
    for (int i = 0; i < Samples; i++)
    {
        dataPt = Math.Sin(i * slice) * Settings.Amplitude;
        int data = Convert.ToInt32(dataPt * Settings.Volume * 32767);
        for (int j = 0; j < numberOfBytes; j++)
        {
            output[i, j] = ExtractByte(data, j);
        }
    }
    return output;
}
This returns an array I later use to write to the data chunk like so
writer.WriteByte(samples[1][0]); //write to the left channel
writer.WriteByte(samples[1][1]); //write to the left channel
writer.WriteByte(samples[2][0]); //now to the second channel
writer.WriteByte(samples[2][1]); //and yet again.
Where 1 and 2 represent a certain sine wave.
However, if I try the above with numberOfBytes = 3, it fails hard: the wave is a bunch of nonsense (the header is formatted correctly).
I understood that I need to convert int32 to int24 and that I need to "pad" the samples, but I found no concrete 24-bit tutorial anywhere.
Could you please point me in the right direction?
Edited for clarity.
There is no int24 - you will need to do it yourself. for/switch is a bit of an anti-pattern, too.
int[] samples = /* samples scaled to +/- 8388607 (0x7f_ffff) */;
byte[] data = new byte[samples.Length * 3];
for (int i = 0, j = 0; i < samples.Length; i++, j += 3)
{
    // WAV is little-endian
    data[j + 0] = (byte)((samples[i] >> 0) & 0xff);
    data[j + 1] = (byte)((samples[i] >> 8) & 0xff);
    data[j + 2] = (byte)((samples[i] >> 16) & 0xff);
}
// data now holds the 24-bit samples.
As an example, here's a program (Github) which generates a 15 second 44.1kHz 24-bit stereo wav file with 440 Hz in the left channel and 1 kHz in the right channel:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace WavGeneratorDemo
{
    class Program
    {
        const int INT24_MAX = 0x7f_ffff;

        static void Main(string[] args)
        {
            const int sampleRate = 44100;
            const int lengthInSeconds = 15 /* sec */;
            const int channels = 2;
            const double channelSamplesPerSecond = sampleRate * channels;

            var samples = new double[lengthInSeconds * sampleRate * channels];

            // Left is a 440 Hz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 0 /* Left */, 440 /* Hz */);
            // Right is a 1 kHz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 1 /* Right */, 1000 /* Hz */);

            WriteWavFile(samples, sampleRate, channels, "out.wav");
        }

        private static void WriteWavFile(double[] samples, uint sampleRate, ushort channels, string fileName)
        {
            using (var wavFile = File.OpenWrite(fileName))
            {
                const int chunkHeaderSize = 8,
                          waveHeaderSize = 4,
                          fmtChunkSize = 16;
                uint samplesByteLength = (uint)samples.Length * 3u;

                // RIFF header
                wavFile.WriteAscii("RIFF");
                wavFile.WriteLittleEndianUInt32(
                    waveHeaderSize
                    + chunkHeaderSize + fmtChunkSize
                    + chunkHeaderSize + samplesByteLength);
                wavFile.WriteAscii("WAVE");

                // fmt chunk
                wavFile.WriteAscii("fmt ");
                wavFile.WriteLittleEndianUInt32(fmtChunkSize);
                wavFile.WriteLittleEndianUInt16(1); // AudioFormat = PCM
                wavFile.WriteLittleEndianUInt16(channels);
                wavFile.WriteLittleEndianUInt32(sampleRate);
                wavFile.WriteLittleEndianUInt32(sampleRate * channels * 3); // Byte rate = SampleRate * NumChannels * BytesPerSample
                wavFile.WriteLittleEndianUInt16((ushort)(3 * channels)); // Block align (stride)
                wavFile.WriteLittleEndianUInt16(24); // Bits per sample

                // sample data
                wavFile.WriteAscii("data");
                wavFile.WriteLittleEndianUInt32(samplesByteLength);
                for (int i = 0; i < samples.Length; i++)
                {
                    var scaledValue = DoubleToInt24(samples[i]);
                    wavFile.WriteLittleEndianInt24(scaledValue);
                }
            }
        }

        private static void FillWithSineWave(double[] samples, int channels, double channelSamplesPerSecond, int channelNo, double freq)
        {
            for (int i = channelNo; i < samples.Length; i += channels)
            {
                var t = (i - channelNo) / channelSamplesPerSecond;
                samples[i] = Math.Sin(t * (freq * Math.PI * 2));
            }
        }

        private static int DoubleToInt24(double value)
        {
            if (value < -1 || value > 1)
            {
                throw new ArgumentOutOfRangeException(nameof(value));
            }
            return (int)(value * INT24_MAX);
        }
    }

    static class StreamExtensions
    {
        public static void WriteAscii(this Stream s, string str) => s.Write(Encoding.ASCII.GetBytes(str));

        public static void WriteLittleEndianUInt32(this Stream s, UInt32 i)
        {
            var b = new byte[4];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            b[3] = (byte)((i >> 24) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianInt24(this Stream s, Int32 i)
        {
            var b = new byte[3];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianUInt16(this Stream s, UInt16 i)
        {
            var b = new byte[2];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            s.Write(b);
        }
    }
}
Which generates the expected two-tone file (waveform screenshot omitted).

Converting 32bit wav array to x-bit

I have a 32-bit WAV array and I wanted to convert it to 8-bit, so I tried to adapt this function, which converts 32-bit to 16-bit:
void _waveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    byte[] newArray16Bit = new byte[e.BytesRecorded / 2];
    short two;
    float value;
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
    {
        value = BitConverter.ToSingle(e.Buffer, i);
        two = (short)(value * short.MaxValue);
        newArray16Bit[j] = (byte)(two & 0xFF);
        newArray16Bit[j + 1] = (byte)((two >> 8) & 0xFF);
    }
}
And modified it to take x bits as the destination:
private byte[] Convert32BitRateToNewBitRate(byte[] bytes, int newBitRate)
{
    var sourceBitRate = 32;
    byte[] newArray = new byte[bytes.Length / (sourceBitRate / newBitRate)];
    for (int i = 0, j = 0; i < bytes.Length; i += (sourceBitRate / 8), j += (newBitRate / 8))
    {
        var value = BitConverter.ToSingle(bytes, i);
        var two = (short)(value * short.MaxValue);
        newArray[j] = (byte)(two & 0xFF);
        newArray[j + 1] = (byte)((two >> 8) & 0xFF);
    }
    return newArray;
}
My problem is that I wasn't sure how to convert the code within the for loop; I tried to debug it but couldn't quite figure out how it works.
I saw here: simple wav 16-bit / 8-bit converter source code?, that they divided the value by 256 to get from 16-bit to 8-bit. I tried dividing by 256 to get from 32-bit to 16-bit, but it didn't work:
for (int i = 0, j = 0; i < bytes.Length; i += sourceBitRateBytes, j += newBitRateBytes)
{
    var value = BitConverter.ToInt32(bytes, i);
    value /= (int)Math.Pow(256, sourceBitRate / newBitRate / 2.0);
    var valueBytes = BitConverter.GetBytes(value);
    for (int k = 0; k < newBitRateBytes; k++)
    {
        newArray[k + j] = valueBytes[k];
    }
}
The for loop is still using 16 bit in the following places:
- short.MaxValue — use byte.MaxValue instead.
- Assigning two bytes at [j] and [j + 1] — assign one byte only.
I don't have the rest of the program or any sample data, so it's hard for me to test, but I'd say the following sounds about right:
for (int i = 0, j = 0; i < bytes.Length; i += (sourceBitRate / 8), j += (newBitRate / 8))
{
    var value = BitConverter.ToSingle(bytes, i);
    var two = (byte)(value * byte.MaxValue);
    newArray[j] = two;
}
Be aware that this works for 8 bit only, so newBitRate must be 8, otherwise it does not work. It should probably not be a parameter to the method.
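One further caveat worth knowing: 8-bit PCM in a WAV file is unsigned (0–255, with silence at 128), so multiplying a float sample in [-1, 1] by byte.MaxValue and casting will wrap for negative values. A minimal sketch of the offset mapping (the helper name is illustrative):

```csharp
using System;

class EightBitDemo
{
    // Map a float sample in [-1, 1] to 8-bit unsigned PCM (silence = 128).
    static byte FloatTo8BitPcm(float value)
    {
        // Clamp, then shift into the unsigned range before scaling.
        if (value < -1f) value = -1f;
        if (value > 1f) value = 1f;
        return (byte)Math.Round((value + 1f) * 127.5f);
    }

    static void Main()
    {
        Console.WriteLine(FloatTo8BitPcm(0f));  // 128 (silence)
        Console.WriteLine(FloatTo8BitPcm(1f));  // 255
        Console.WriteLine(FloatTo8BitPcm(-1f)); // 0
    }
}
```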

include two CRC16 bytes in a program ...?

I want to send some bytes via RS232 to a dsPIC33F that controls robot motors. The dsPIC must receive 9 bytes in order, the last 2 bytes being a CRC16. I am working in C#, so how can I calculate the CRC bytes to be sent?
This is the program that calculates the CRC16; I found it on the internet:
using System;
using System.Collections.Generic;
using System.Text;

namespace SerialPortTerminal
{
    public enum InitialCrcValue { Zeros, NonZero1 = 0xffff, NonZero2 = 0x1D0F }

    public class Crc16Ccitt
    {
        const ushort poly = 4129;
        ushort[] table = new ushort[256];
        ushort initialValue = 0;

        public ushort ComputeChecksum(byte[] bytes)
        {
            ushort crc = this.initialValue;
            for (int i = 0; i < bytes.Length; i++)
            {
                crc = (ushort)((crc << 8) ^ table[((crc >> 8) ^ (0xff & bytes[i]))]);
            }
            return crc;
        }

        public byte[] ComputeChecksumBytes(byte[] bytes)
        {
            ushort crc = ComputeChecksum(bytes);
            return new byte[] { (byte)(crc >> 8), (byte)(crc & 0x00ff) };
        }

        public Crc16Ccitt(InitialCrcValue initialValue)
        {
            this.initialValue = (ushort)initialValue;
            ushort temp, a;
            for (int i = 0; i < table.Length; i++)
            {
                temp = 0;
                a = (ushort)(i << 8);
                for (int j = 0; j < 8; j++)
                {
                    if (((temp ^ a) & 0x8000) != 0)
                    {
                        temp = (ushort)((temp << 1) ^ poly);
                    }
                    else
                    {
                        temp <<= 1;
                    }
                    a <<= 1;
                }
                table[i] = temp;
            }
        }
    }
}
With the class you provided you would create the buffer of data that you want:
byte[] data = new byte[7];
data[0] = 1; // This example is just random numbers
data[1] = 12;
data[2] = 17;
data[3] = 9;
data[4] = 106;
data[5] = 12;
data[6] = 0;
Then calculate the checksum bytes (note the class has no parameterless constructor, so you must pass an initial value):
Crc16Ccitt calculator = new Crc16Ccitt(InitialCrcValue.Zeros);
byte[] checksum = calculator.ComputeChecksumBytes(data);
Then either write the two parts of the data separately (SerialPort.Write takes a buffer, offset, and count):
port.Write(data, 0, data.Length);
port.Write(checksum, 0, checksum.Length);
Or build a single packet to be written from the two parts:
byte[] finalData = new byte[9];
Buffer.BlockCopy(data, 0, finalData, 0, 7);
Buffer.BlockCopy(checksum, 0, finalData, 7, 2);
port.Write(finalData, 0, finalData.Length);
The class you've posted could be rewritten slightly to make it a bit more efficient and easier/cleaner to use but it should suffice as long as it calculates the CRC 16 in the same way as the device you're communicating with. If this doesn't work then you need to consult the documentation for the device, or ask the manufacturer for the details you need.
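Before pointing the class at the robot, it is worth checking it against a standard test vector. With InitialCrcValue.Zeros, the posted class implements what CRC catalogs usually call CRC-16/XMODEM (poly 0x1021, initial value 0, no bit reflection), whose checksum of the ASCII string "123456789" is conventionally 0x31C3. A self-contained sketch of the same algorithm, computed bitwise rather than table-driven, so it can be verified in isolation (check against your dsPIC's documentation, since CRC-16 variants differ in initial value and bit order):

```csharp
using System;
using System.Text;

class CrcCheck
{
    // Same MSB-first CRC-16 as the Crc16Ccitt class above, without the lookup table.
    static ushort Crc16Xmodem(byte[] bytes)
    {
        ushort crc = 0; // InitialCrcValue.Zeros
        foreach (byte b in bytes)
        {
            crc ^= (ushort)(b << 8);
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0
                    ? (ushort)((crc << 1) ^ 0x1021)
                    : (ushort)(crc << 1);
        }
        return crc;
    }

    static void Main()
    {
        ushort crc = Crc16Xmodem(Encoding.ASCII.GetBytes("123456789"));
        Console.WriteLine(crc.ToString("X4")); // 31C3, the standard check value
    }
}
```

If the dsPIC computes a different value for the same input, the variants differ (initial value, reflection, or final XOR) and you will need the device documentation to match them.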

How to read IMediaSample 24 bit PCM data

I have the following method which collects PCM data from the IMediaSample into floats for the FFT:
public int PCMDataCB(IntPtr Buffer, int Length, ref TDSStream Stream, out float[] singleChannel)
{
    int numSamples = Length / (Stream.Bits / 8);
    int samplesPerChannel = numSamples / Stream.Channels;
    float[] samples = new float[numSamples];

    if (Stream.Bits == 32 && Stream.Float)
    {
        // this seems to work for 32-bit floating point
        byte[] buffer32f = new byte[numSamples * 4];
        Marshal.Copy(Buffer, buffer32f, 0, numSamples * 4);
        for (int j = 0; j < buffer32f.Length; j += 4)
        {
            samples[j / 4] = System.BitConverter.ToSingle(buffer32f, j);
        }
    }
    else if (Stream.Bits == 24)
    {
        // I need this code
    }

    // compress result into one mono channel
    float[] result = new float[samplesPerChannel];
    for (int i = 0; i < numSamples; i += Stream.Channels)
    {
        float tmp = 0;
        for (int j = 0; j < Stream.Channels; j++)
            tmp += samples[i + j] / Stream.Channels;
        result[i / Stream.Channels] = tmp;
    }

    // mono output to be used for visualizations
    singleChannel = result;
    return 0;
}
It seems to work for 32-bit float, because I get sensible data in the spectrum analyzer (although it seems shifted (or compressed?) toward the lower frequencies).
I also seem to have it working for 8, 16 and 32-bit non-float, but I can only read garbage when the bits are 24.
How can I adapt this to work with 24 bit PCM coming into Buffer?
Buffer comes from an IMediaSample.
Another thing I am wondering is if the method I use to add all channels to one by summing and dividing by the number of channels is ok...
I figured it out:
byte[] buffer24 = new byte[numSamples * 3];
Marshal.Copy(Buffer, buffer24, 0, numSamples * 3);
var window = (float)(255 << 16 | 255 << 8 | 255);
for (int j = 0; j < buffer24.Length; j += 3)
{
    samples[j / 3] = (buffer24[j] << 16 | buffer24[j + 1] << 8 | buffer24[j + 2]) / window;
}
This creates an integer from the three bytes and then scales it by dividing with the maximum value of three bytes.
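Note that the snippet above treats each 3-byte group as an unsigned big-endian value, while 24-bit PCM in WAV files and DirectShow media samples is normally signed little-endian. A sketch of that interpretation, in case the unsigned reading turns out to be the source of the garbage (the helper name is illustrative):

```csharp
using System;

class Pcm24Demo
{
    // Convert one little-endian signed 24-bit PCM sample to a float in [-1, 1).
    static float Sample24ToFloat(byte b0, byte b1, byte b2)
    {
        int s = b0 | (b1 << 8) | (b2 << 16);
        if (s >= 0x800000) s -= 0x1000000; // sign-extend from 24 bits
        return s / 8388608f;               // divide by 2^23
    }

    static void Main()
    {
        Console.WriteLine(Sample24ToFloat(0xFF, 0xFF, 0x7F)); // largest positive sample
        Console.WriteLine(Sample24ToFloat(0x00, 0x00, 0x80)); // most negative sample (-1)
        Console.WriteLine(Sample24ToFloat(0x00, 0x00, 0x00)); // silence (0)
    }
}
```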
Have you tried:
byte[] buffer24f = new byte[numSamples * 3];
Marshal.Copy(Buffer, buffer24f, 0, numSamples * 3);
for (int j = 0; j < buffer24f.Length; j += 3)
{
    samples[j / 3] = System.BitConverter.ToSingle(
        new byte[] {
            0,
            buffer24f[j + 0],
            buffer24f[j + 1],
            buffer24f[j + 2]
        }, 0);
}

ushort array to byte array

I have an array of ushorts, with each ushort representing a 12-bit word. This needs to be tightly packed into an array of bytes. It should look like this in the end:
| word1 | word2 | word3 | word4 |
| byte1 | byte2 | byte3 | byte4 | byte5 | byte6|
Since each word only uses 12 bits, 2 words will be packed into 3 bytes.
Could someone help? I'm a bit stuck on how to do this in C#.
You're probably going to have to brute-force it.
I'm not a C# guy, but you are looking at something along the lines of (in C):
unsigned incursor, outcursor;
unsigned inlen = length(inputarray); // not literally
for (incursor = 0, outcursor = 0; incursor < inlen; incursor += 2, outcursor += 3) {
    outputarray[outcursor + 0] = (inputarray[incursor + 0] >> 4) & 0xFF;
    outputarray[outcursor + 1] = ((inputarray[incursor + 0] & 0x0F) << 4) | ((inputarray[incursor + 1] >> 8) & 0x0F);
    outputarray[outcursor + 2] = inputarray[incursor + 1] & 0xFF;
}
If you want to use the array as an array of UInt16 while in-memory, and then convert it to a packed byte array for storage, then you'll want a function to do one-shot conversion of the two array types.
public byte[] PackUInt12(ushort[] input)
{
    // The +1 leaves room for the unused half byte at the end when there is an odd number of UInt12s.
    byte[] result = new byte[(input.Length * 3 + 1) / 2];
    int i;
    for (i = 0; i < input.Length / 2; i++)
    {
        result[i * 3 + 0] = (byte)input[i * 2 + 0];
        result[i * 3 + 1] = (byte)(input[i * 2 + 0] >> 8 | input[i * 2 + 1] << 4);
        result[i * 3 + 2] = (byte)(input[i * 2 + 1] >> 4);
    }
    if (input.Length % 2 == 1)
    {
        result[i * 3 + 0] = (byte)input[i * 2 + 0];
        result[i * 3 + 1] = (byte)(input[i * 2 + 0] >> 8);
    }
    return result;
}
public ushort[] UnpackUInt12(byte[] input)
{
    ushort[] result = new ushort[input.Length * 2 / 3];
    int i;
    for (i = 0; i < input.Length / 3; i++)
    {
        result[i * 2 + 0] = (ushort)((input[i * 3 + 1] << 8) & 0x0F00 | input[i * 3 + 0]);
        result[i * 2 + 1] = (ushort)(input[i * 3 + 2] << 4 | input[i * 3 + 1] >> 4);
    }
    if (result.Length % 2 == 1)
    {
        result[i * 2 + 0] = (ushort)((input[i * 3 + 1] << 8) & 0x0F00 | input[i * 3 + 0]);
    }
    return result;
}
If, however, you want to be efficient about memory usage while the application is running, and access this packed array as an array, then you'll want to have a class that returns ushorts, but stores them in byte[].
public class UInt12Array
{
    // TODO: Constructors, etc.
    private byte[] storage;

    public ushort this[int index]
    {
        get
        {
            // TODO: throw exceptions if the index is off the array.
            int i = index / 2 * 3; // byte offset of the pair this value belongs to
            if (index % 2 == 0)
                return (ushort)((storage[i + 1] << 8) & 0x0F00 | storage[i]);
            else
                return (ushort)(storage[i + 2] << 4 | storage[i + 1] >> 4);
        }
        set
        {
            // TODO: throw exceptions if the index is off the array.
            int i = index / 2 * 3;
            if (index % 2 == 0)
            {
                storage[i] = (byte)value;
                storage[i + 1] = (byte)((value >> 8) & 0x0F | storage[i + 1] & 0xF0);
            }
            else
            {
                storage[i + 1] = (byte)(storage[i + 1] & 0x0F | (value & 0x0F) << 4);
                storage[i + 2] = (byte)(value >> 4);
            }
        }
    }
}
Why not store the 12-bit words in a byte array and provide a getter and a setter that read and write the ushort's bytes at the correct indexes in the array?
Trying to solve this with LINQ was fun!
Warning: For entertainment purposes only - do not use the below performance abominations in real code!
First try - group pairs of ushorts, create three bytes out of each pair, flatten the list:
byte[] packedNumbers = (from i in Enumerable.Range(0, unpackedNumbers.Length)
                        group unpackedNumbers[i] by i - (i % 2) into pairs
                        let n1 = pairs.First()
                        let n2 = pairs.Skip(1).First()
                        let b1 = (byte)(n1 >> 4)
                        let b2 = (byte)(((n1 & 0xF) << 4) | (n2 & 0xF00) >> 8)
                        let b3 = (byte)(n2 & 0xFF)
                        select new[] { b1, b2, b3 })
                       .SelectMany(b => b).ToArray();
Or slightly more compact, but less readable:
byte[] packedNumbers = unpackedNumbers
    .Select((Value, Index) => new { Value, Index })
    .GroupBy(number => number.Index - (number.Index % 2))
    .SelectMany(pair => new byte[] {
        (byte)(pair.First().Value >> 4),
        (byte)(((pair.First().Value & 0xF) << 4) | (pair.Skip(1).First().Value & 0xF00) >> 8),
        (byte)(pair.Skip(1).First().Value & 0xFF) }).ToArray();
Strings anyone?
char[] hexChars = unpackedNumbers.SelectMany(n => n.ToString("X4").Substring(1, 3)).ToArray();
byte[] packedNumbers = (from i in Enumerable.Range(0, hexChars.Length / 2)
                        select byte.Parse(hexChars[i * 2].ToString() + hexChars[i * 2 + 1], NumberStyles.HexNumber))
                       .ToArray();
Judging by the comments, I suppose the current answers are preferable, but something like this should also do it:
public byte[] ushort2byteArr(ushort[] arr)
{
    System.IO.MemoryStream ms = new System.IO.MemoryStream();
    System.IO.BinaryWriter bw = new System.IO.BinaryWriter(ms);
    for (int i = 0; i < arr.Length - 1;) // check the upper limit!
    {
        // The following is wrong! It must be extended to pack 8 12-bit words into 3 uint32s!
        UInt32 tmp = arr[i++] | (arr[i++] << 12) ... ;
        bw.Write(tmp);
    }
    return ms.ToArray();
}
It's not tested; take it as pseudocode to get the idea, especially the word -> uint32 conversion. It may need some padding at the end.
Edit: made a function out of it for clarity.
