Faster way to swap endianness in C# with 32-bit words

In this question, the following code:
public static void Swap(byte[] data)
{
    for (int i = 0; i < data.Length; i += 2)
    {
        byte b = data[i];
        data[i] = data[i + 1];
        data[i + 1] = b;
    }
}
was rewritten in unsafe code to improve its performance:
public static unsafe void SwapX2(Byte[] Source)
{
    fixed (Byte* pSource = &Source[0])
    {
        Byte* bp = pSource;
        Byte* bp_stop = bp + Source.Length;
        while (bp < bp_stop)
        {
            *(UInt16*)bp = (UInt16)(*bp << 8 | *(bp + 1));
            bp += 2;
        }
    }
}
Assuming that one wanted to do the same thing with 32-bit words:
public static void SwapX4(byte[] data)
{
    byte temp;
    for (int i = 0; i < data.Length; i += 4)
    {
        temp = data[i];
        data[i] = data[i + 3];
        data[i + 3] = temp;
        temp = data[i + 1];
        data[i + 1] = data[i + 2];
        data[i + 2] = temp;
    }
}
how would this be rewritten in a similar fashion?

public static unsafe void SwapX4(Byte[] Source)
{
    fixed (Byte* pSource = &Source[0])
    {
        Byte* bp = pSource;
        Byte* bp_stop = bp + Source.Length;
        while (bp < bp_stop)
        {
            *(UInt32*)bp = (UInt32)(
                (*bp << 24) |
                (*(bp + 1) << 16) |
                (*(bp + 2) << 8) |
                (*(bp + 3)));
            bp += 4;
        }
    }
}
Note that both of these functions (my SwapX4 and your SwapX2) only actually swap anything on a little-endian host; on a big-endian host the wide write stores each byte back where it came from, so they are an expensive no-op.
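If an unconditional byte reversal is wanted regardless of host, one option is to branch on BitConverter.IsLittleEndian and fall back to the per-byte version, which swaps on any architecture. A minimal sketch (the two method names here are my own placeholders for the versions above):
public static void SwapX4Portable(byte[] data)
{
    if (BitConverter.IsLittleEndian)
        SwapX4Unsafe(data); // the pointer version above: swaps on little-endian hosts
    else
        SwapX4Safe(data);   // the per-byte version above: swaps on any host
}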

This version will not exceed the bounds of the buffer, works on both little- and big-endian architectures, and is faster on larger data. (Update: add build configurations for x86 and x64, define X86 for the 32-bit (x86) build and X64 for the 64-bit (x64) build, and it will be slightly faster.)
public static unsafe void Swap4(byte[] source)
{
    fixed (byte* psource = source)
    {
#if X86
        // Read the array length straight out of the CLR object header (32-bit layout),
        // rounded down to an even count.
        var length = *((uint*)(psource - 4)) & 0xFFFFFFFEU;
#elif X64
        // Same trick for the 64-bit object layout.
        var length = *((uint*)(psource - 8)) & 0xFFFFFFFEU;
#else
        // Portable fallback: round the length down to an even count.
        var length = (source.Length & 0xFFFFFFFE);
#endif
        // Reverse the bytes of two 32-bit words at a time, working from the end.
        while (length > 7)
        {
            length -= 8;
            ulong* pulong = (ulong*)(psource + length);
            *pulong = (((*pulong >> 24) & 0x000000FF000000FFUL)
                     | ((*pulong >> 8)  & 0x0000FF000000FF00UL)
                     | ((*pulong << 8)  & 0x00FF000000FF0000UL)
                     | ((*pulong << 24) & 0xFF000000FF000000UL));
        }
        // Swap one remaining 32-bit word at the start of the buffer.
        if (length != 0)
        {
            uint* puint = (uint*)psource;
            *puint = (((*puint >> 24))
                    | ((*puint >> 8) & 0x0000FF00U)
                    | ((*puint << 8) & 0x00FF0000U)
                    | ((*puint << 24)));
        }
    }
}
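A quick sanity check (a sketch, assuming the Swap4 method above is in scope):
byte[] buf = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 };
Swap4(buf);
Console.WriteLine(BitConverter.ToString(buf)); // 04-03-02-01-08-07-06-05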

Related

Writing a double array to a wave file in c#

I'm trying to write a wave file from scratch using C#. I managed to write 16-bit samples with no issue, but when it comes to 24-bit, apparently all bets are off.
I tried various ways of converting an int to a 3-byte array, which I would proceed to write to the data chunk as L-L-L-R-R-R (as it's a 24-bit stereo PCM wav).
For the 16-bit part, I used this to generate the samples:
// numberOfBytes = 2 - for 16-bit. slice = something like 2*Math.PI*frequency/samplerate
private static byte[,] BuildByteWave(double slice, int numberOfBytes = 2)
{
    double dataPt = 0;
    byte[,] output = new byte[Convert.ToInt32(Samples), numberOfBytes];
    for (int i = 0; i < Samples; i++)
    {
        dataPt = Math.Sin(i * slice) * Settings.Amplitude;
        int data = Convert.ToInt32(dataPt * Settings.Volume * 32767);
        for (int j = 0; j < numberOfBytes; j++)
        {
            output[i, j] = ExtractByte(data, j);
        }
    }
    return output;
}
This returns an array I later use to write to the data chunk like so
writer.WriteByte(samples[1][0]); //write to the left channel
writer.WriteByte(samples[1][1]); //write to the left channel
writer.WriteByte(samples[2][0]); //now to the second channel
writer.WriteByte(samples[2][1]); //and yet again.
Where 1 and 2 represent a certain sine wave.
However, if I try the above with numberOfBytes = 3, it fails hard: the wave is a bunch of nonsense (the header is formatted correctly).
I understood that I need to convert int32 to int24 and that I need to "pad" the samples, but I found no concrete 24-bit tutorial anywhere.
Could you please point me in the right direction?
Edited for clarity.
There is no int24 - you will need to do it yourself. for/switch is a bit of an anti-pattern, too.
int[] samples = /* samples scaled to +/- 8388607 (0x7f_ffff) */;
byte[] data = new byte[samples.Length * 3];
for (int i = 0, j = 0; i < samples.Length; i++, j += 3)
{
    // WAV is little endian
    data[j + 0] = (byte)((samples[i] >> 0) & 0xff);
    data[j + 1] = (byte)((samples[i] >> 8) & 0xff);
    data[j + 2] = (byte)((samples[i] >> 16) & 0xff);
}
// data now has the 24-bit samples.
As an example, here's a program (Github) which generates a 15 second 44.1kHz 24-bit stereo wav file with 440 Hz in the left channel and 1 kHz in the right channel:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace WavGeneratorDemo
{
    class Program
    {
        const int INT24_MAX = 0x7f_ffff;

        static void Main(string[] args)
        {
            const int sampleRate = 44100;
            const int lengthInSeconds = 15 /* sec */;
            const int channels = 2;
            const double channelSamplesPerSecond = sampleRate * channels;

            var samples = new double[lengthInSeconds * sampleRate * channels];

            // Left is 440 Hz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 0 /* Left */, 440 /* Hz */);
            // Right is 1 kHz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 1 /* Right */, 1000 /* Hz */);

            WriteWavFile(samples, sampleRate, channels, "out.wav");
        }

        private static void WriteWavFile(double[] samples, uint sampleRate, ushort channels, string fileName)
        {
            using (var wavFile = File.OpenWrite(fileName))
            {
                const int chunkHeaderSize = 8,
                          waveHeaderSize = 4,
                          fmtChunkSize = 16;
                uint samplesByteLength = (uint)samples.Length * 3u;

                // RIFF header
                wavFile.WriteAscii("RIFF");
                wavFile.WriteLittleEndianUInt32(
                    waveHeaderSize
                    + chunkHeaderSize + fmtChunkSize
                    + chunkHeaderSize + samplesByteLength);
                wavFile.WriteAscii("WAVE");

                // fmt header
                wavFile.WriteAscii("fmt ");
                wavFile.WriteLittleEndianUInt32(fmtChunkSize);
                wavFile.WriteLittleEndianUInt16(1); // AudioFormat = PCM
                wavFile.WriteLittleEndianUInt16(channels);
                wavFile.WriteLittleEndianUInt32(sampleRate);
                wavFile.WriteLittleEndianUInt32(sampleRate * channels);
                wavFile.WriteLittleEndianUInt16((ushort)(3 * channels)); // Block Align (stride)
                wavFile.WriteLittleEndianUInt16(24); // Bits per sample

                // samples data
                wavFile.WriteAscii("data");
                wavFile.WriteLittleEndianUInt32(samplesByteLength);
                for (int i = 0; i < samples.Length; i++)
                {
                    var scaledValue = DoubleToInt24(samples[i]);
                    wavFile.WriteLittleEndianInt24(scaledValue);
                }
            }
        }

        private static void FillWithSineWave(double[] samples, int channels, double channelSamplesPerSecond, int channelNo, double freq)
        {
            for (int i = channelNo; i < samples.Length; i += channels)
            {
                var t = (i - channelNo) / channelSamplesPerSecond;
                samples[i] = Math.Sin(t * (freq * Math.PI * 2));
            }
        }

        private static int DoubleToInt24(double value)
        {
            if (value < -1 || value > 1)
            {
                throw new ArgumentOutOfRangeException(nameof(value));
            }
            return (int)(value * INT24_MAX);
        }
    }

    static class StreamExtensions
    {
        public static void WriteAscii(this Stream s, string str) => s.Write(Encoding.ASCII.GetBytes(str));

        public static void WriteLittleEndianUInt32(this Stream s, UInt32 i)
        {
            var b = new byte[4];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            b[3] = (byte)((i >> 24) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianInt24(this Stream s, Int32 i)
        {
            var b = new byte[3];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianUInt16(this Stream s, UInt16 i)
        {
            var b = new byte[2];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            s.Write(b);
        }
    }
}
Which generates a 15-second stereo file with the 440 Hz tone in the left channel and the 1 kHz tone in the right.

Trouble with bitshift operations

I'm trying to implement hashing from a local standard, but it returns wrong results from the simple shift functions. I tried shifting this message:
byte[] test = Hash.StringToByteArrayFastest("EFCDAB8967452301");
Console.WriteLine(ToHex(Hash.ShLo(test)));
Console.WriteLine(ToHex(Hash.ShHi(test)));
And I expect to get:
ShLo : 77E6D5C4B3A29180 (hex)
ShHi : DF9B5712CE8A4602 (hex)
but get this:
ShLo : f7e6d5c4b3a29100
ShHi : de9b5713cf8a4602
Here's my code:
public static byte[] ShHi(byte[] B)
{
    return BitConverter.GetBytes(BitConverter.ToUInt64(B, 0) << 1);
}

public static byte[] ShLo(byte[] B)
{
    return BitConverter.GetBytes(BitConverter.ToUInt64(B, 0) >> 1);
}

public static byte[] StringToByteArrayFastest(string hex)
{
    if (hex.Length % 2 == 1)
        throw new Exception("The binary key cannot have an odd number of digits");
    byte[] arr = new byte[hex.Length >> 1];
    for (int i = 0; i < hex.Length >> 1; ++i)
    {
        arr[i] = (byte)((GetHexVal(hex[i << 1]) << 4) + (GetHexVal(hex[(i << 1) + 1])));
    }
    return arr;
}

public static int GetHexVal(char hex)
{
    int val = (int)hex;
    return val - (val < 58 ? 48 : 55);
}

public static string ToHex(byte[] bytes)
{
    char[] c = new char[bytes.Length * 2];
    byte b;
    for (int bx = 0, cx = 0; bx < bytes.Length; ++bx, ++cx)
    {
        b = ((byte)(bytes[bx] >> 4));
        c[cx] = (char)(b > 9 ? b + 0x37 + 0x20 : b + 0x30);
        b = ((byte)(bytes[bx] & 0x0F));
        c[++cx] = (char)(b > 9 ? b + 0x37 + 0x20 : b + 0x30);
    }
    return new string(c);
}
BitConverter works in the machine's native byte order, which is little-endian on x86, so the first byte of the array is treated as the least significant byte of the UInt64: the string "EFCDAB8967452301" is parsed into the value 0x0123456789ABCDEF rather than 0xEFCDAB8967452301. Reversing the byte order in ToHex and StringToByteArrayFastest fixes the round trip:
public static string ToHex(byte[] bytes)
{
    char[] c = new char[bytes.Length * 2];
    byte b;
    for (int bx = 0, cx = c.Length - 1; bx < bytes.Length; ++bx)
    {
        b = ((byte)(bytes[bx] & 0x0F));
        c[cx--] = (char)(b > 9 ? b + 0x37 + 0x20 : b + 0x30);
        b = ((byte)(bytes[bx] >> 4));
        c[cx--] = (char)(b > 9 ? b + 0x37 + 0x20 : b + 0x30);
    }
    return new string(c);
}

public static byte[] StringToByteArrayFastest(string hex)
{
    if (hex.Length % 2 == 1) throw new Exception("The binary key cannot have an odd number of digits");
    byte[] arr = new byte[hex.Length >> 1];
    for (int i = 0, j = arr.Length - 1; i < arr.Length; ++i)
    {
        arr[j--] = (byte)((GetHexVal(hex[i << 1]) << 4) + (GetHexVal(hex[(i << 1) + 1])));
    }
    return arr;
}
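With the reversed helpers in place of the originals in Hash, the round trip produces the expected values (a sketch):
byte[] test = Hash.StringToByteArrayFastest("EFCDAB8967452301");
Console.WriteLine(Hash.ToHex(Hash.ShLo(test))); // 77e6d5c4b3a29180
Console.WriteLine(Hash.ToHex(Hash.ShHi(test))); // df9b5712ce8a4602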

Pack and unpack multiple integers into and from an Uint64

I need to pack and unpack the following into a UInt64:
UInt25
UInt5
UInt7
UInt27
I have the following for packing and unpacking a UInt27 and a UInt5 to/from a UInt32, but I cannot get past two fields. My background is math (not computer science).
UInt32 highlow;
UInt32 high;
byte low;
int two27 = (Int32)Math.Pow(2, 27);
for (UInt32 i = 0; i < two27; i++)
{
    highlow = ((UInt32)i) << 5;
    high = highlow >> 5;
    if (high != i)
    {
        Debug.WriteLine("high wrong A " + high.ToString() + " " + i.ToString());
    }
    for (byte j = 0; j < 32; j++)
    {
        highlow = (((UInt32)i) << 5) | j;
        high = highlow >> 5;
        if (high != i)
        {
            Debug.WriteLine("high wrong B " + high.ToString() + " " + i.ToString());
        }
        low = (byte)(highlow & 0x1f);
        if (low != j)
        {
            Debug.WriteLine("low wrong " + low.ToString() + " " + j.ToString());
        }
    }
}
Code based on the accepted answer (I did not test the full loop; the i27 loop got to 2):
UInt32 bits27;
UInt32 bits25;
UInt32 bits7;
UInt32 bits5;
UInt32 int27 = (UInt32)Math.Pow(2, 27);
UInt32 int25 = (UInt32)Math.Pow(2, 25);
UInt32 int7 = (UInt32)Math.Pow(2, 7);
UInt32 int5 = (UInt32)Math.Pow(2, 5);
UInt64 packed;
//ulong packed = (bits27) | ((ulong)bits25 << 27) | ((ulong)bits7 << 52) | ((ulong)bits5 << 59);
for (UInt32 i27 = 0; i27 < int27; i27++)
{
    for (UInt32 i25 = 0; i25 < int25; i25++)
    {
        for (UInt32 i7 = 0; i7 < int7; i7++)
        {
            for (UInt32 i5 = 0; i5 < int5; i5++)
            {
                packed = (UInt64)(i27) | ((UInt64)i25 << 27) | ((UInt64)i7 << 52) | ((UInt64)i5 << 59);
                bits27 = (UInt32)(packed & ((1 << 27) - 1));
                bits25 = (UInt32)((packed >> 27) & ((1 << 25) - 1));
                bits7 = (UInt32)((packed >> 52) & ((1 << 7) - 1));
                bits5 = (UInt32)((packed >> 59) & ((1 << 5) - 1));
                if (bits27 != i27) Debug.WriteLine("bits27 != i27");
                if (bits25 != i25) Debug.WriteLine("bits25 != i25");
                if (bits7 != i7) Debug.WriteLine("bits7 != i7");
                if (bits5 != i5) Debug.WriteLine("bits5 != i5");
            }
        }
    }
}
The shift operators are the right solution, but note that they won't automatically make the result wider than the inputs -- you need to cast the input.
Pack:
ulong packed = (bits27) | ((ulong)bits25 << 27) | ((ulong)bits7 << 52) | ((ulong)bits5 << 59);
Unpack:
bits27 = (uint) (packed & ((1 << 27) - 1));
bits25 = (uint)((packed >> 27) & ((1 << 25) - 1));
bits7 = (uint)((packed >> 52) & ((1 << 7) - 1));
bits5 = (uint)((packed >> 59) & ((1 << 5) - 1));
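For instance, a quick round trip with arbitrary in-range values can be checked like this (a sketch; Debug is the same System.Diagnostics class used above):
uint bits27 = 123456, bits25 = 54321, bits7 = 100, bits5 = 17;
ulong packed = bits27 | ((ulong)bits25 << 27) | ((ulong)bits7 << 52) | ((ulong)bits5 << 59);
Debug.Assert((uint)(packed & ((1 << 27) - 1)) == bits27);
Debug.Assert((uint)((packed >> 27) & ((1 << 25) - 1)) == bits25);
Debug.Assert((uint)((packed >> 52) & ((1 << 7) - 1)) == bits7);
Debug.Assert((uint)((packed >> 59) & ((1 << 5) - 1)) == bits5);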
It seems like it would be far easier to convert the numbers to binary, pad or truncate to the correct length, concatenate them and then construct your 64-bit type from binary.
var packedInt64 = Convert.ToInt64(Convert.ToString(ui25, 2).PadLeft(25, '0') +
                                  Convert.ToString(ui5, 2).PadLeft(5, '0') +
                                  Convert.ToString(ui7, 2).PadLeft(7, '0') +
                                  Convert.ToString(ui27, 2).PadLeft(27, '0'), 2);
To unpack:
var binary = Convert.ToString(packedInt64, 2).PadLeft(64, '0');
ui25 = Convert.ToUInt32(binary.Substring(0, 25), 2);
ui5 = Convert.ToUInt32(binary.Substring(25, 5), 2);
etc.

Send binary data as parameter to a method in Web Service?

On the server side (C#/.NET, Windows 2003) I have a web service with a method, and on the client side (Visual C++ 6, WinINet, POST) I want to call that method and pass binary data as a parameter to it.
When I send binary data an error is raised, but when I send ASCII data the call succeeds.
How can I send binary data as a parameter of a method?
To send binary data to a web method you can base64 encode it and then decode it in the web service. If your variable is a byte array called data then you would do the following.
In C++ you need to create a base64 header and cpp file. The following example is from http://www.adp-gmbh.ch/cpp/common/base64.html
base64.h
#include <string>

std::string base64_encode(unsigned char const* , unsigned int len);
std::string base64_decode(std::string const& s);
base64.cpp
#include "base64.h"
#include <iostream>

static const std::string base64_chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789+/";

static inline bool is_base64(unsigned char c) {
    return (isalnum(c) || (c == '+') || (c == '/'));
}

std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {
    std::string ret;
    int i = 0;
    int j = 0;
    unsigned char char_array_3[3];
    unsigned char char_array_4[4];

    while (in_len--) {
        char_array_3[i++] = *(bytes_to_encode++);
        if (i == 3) {
            char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
            char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
            char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
            char_array_4[3] = char_array_3[2] & 0x3f;

            for (i = 0; (i < 4); i++)
                ret += base64_chars[char_array_4[i]];
            i = 0;
        }
    }

    if (i)
    {
        for (j = i; j < 3; j++)
            char_array_3[j] = '\0';

        char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
        char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
        char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
        char_array_4[3] = char_array_3[2] & 0x3f;

        for (j = 0; (j < i + 1); j++)
            ret += base64_chars[char_array_4[j]];

        while ((i++ < 3))
            ret += '=';
    }

    return ret;
}

std::string base64_decode(std::string const& encoded_string) {
    int in_len = encoded_string.size();
    int i = 0;
    int j = 0;
    int in_ = 0;
    unsigned char char_array_4[4], char_array_3[3];
    std::string ret;

    while (in_len-- && (encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
        char_array_4[i++] = encoded_string[in_]; in_++;
        if (i == 4) {
            for (i = 0; i < 4; i++)
                char_array_4[i] = base64_chars.find(char_array_4[i]);

            char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
            char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
            char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];

            for (i = 0; (i < 3); i++)
                ret += char_array_3[i];
            i = 0;
        }
    }

    if (i) {
        for (j = i; j < 4; j++)
            char_array_4[j] = 0;

        for (j = 0; j < 4; j++)
            char_array_4[j] = base64_chars.find(char_array_4[j]);

        char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
        char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
        char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];

        for (j = 0; (j < i - 1); j++) ret += char_array_3[j];
    }

    return ret;
}
After you have your base64 implementation you can create the string to pass.
std::string encoded = base64_encode(data, sizeof(data));
Be mindful, however, that Base64 encoding expands the data, so uploads will take longer as the payload will be larger; it will be approximately 37% larger (see the MIME section of http://en.wikipedia.org/wiki/Base64).
To decode the data on the other end you would simply do the following.
byte[] data = System.Convert.FromBase64String(yourParameterName);
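For context, a minimal sketch of the receiving web method (the method and parameter names here are hypothetical):
[WebMethod]
public void Upload(string encodedData)
{
    // Decode the Base64 string back into the original binary payload.
    byte[] data = Convert.FromBase64String(encodedData);
    // ... process data ...
}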

How to convert a byte array to uint64 and back in C#?

I have been trying this for a long time. I have a byte array which I want to convert to a ulong and return to another function, and that function should get the byte values back.
I tried bitshifting, but it was unsuccessful in a few cases. Is there any alternative to bitshifting, or do you have a short example? Thanks for the help.
Here is the bitshift code that I used; I don't understand why the second entry is not 00000001:
using System;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] responseBuffer = { 0, 1, 2, 3, 4, 5 };
            // Note: responseBuffer[n] is an int, so a shift count of 32 or more
            // wraps around (the count is masked to 5 bits) and the high bytes are lost.
            UInt64 my = (UInt64)(((UInt64)(((responseBuffer[0]) << 40) & 0xFF0000000000)) |
                        (UInt64)(((responseBuffer[1]) << 32) & 0x00FF00000000) |
                        (UInt64)(((responseBuffer[2]) << 24) & 0x0000FF000000) |
                        (UInt64)(((responseBuffer[3]) << 16) & 0x000000FF0000) |
                        (UInt64)(((responseBuffer[4]) << 8) & 0x00000000FF00) |
                        (UInt64)(responseBuffer[5] & 0xFF));
            UInt64[] m_buffer = {(UInt64)((my >> 40) & 0xff),
                (UInt64)((my >> 33) & 0xff), // this shift is 33 rather than 32, another reason the second entry is wrong
                (UInt64)((my >> 24) & 0xff),
                (UInt64)((my >> 16) & 0xff),
                (UInt64)((my >> 8) & 0xff),
                (UInt64)(my & 0xff)};
            Console.WriteLine("" + m_buffer[1]);
            //string m_s = "";
            StringBuilder sb = new StringBuilder();
            for (int k = 0; k < 6; k++)
            {
                int value = (int)m_buffer[k];
                for (int i = 7; i >= 0; i--)
                {
                    if ((value >> i & 0x1) > 0)
                    {
                        sb.Append("1");
                        value &= (Byte)~(0x1 << i);
                    }
                    else
                        sb.Append("0");
                }
                sb.Append(" ");
            }
            Console.WriteLine(sb.ToString());
            Console.Read();
        }
    }
}
Firstly I'd work out what went wrong with bitshifting, in case you ever needed it again. It should work fine.
Secondly, there's an alternative with BitConverter.ToUInt64 and BitConverter.GetBytes(ulong) if you're happy using the system endianness.
If you want to be able to specify the endianness, I have an EndianBitConverter class in my MiscUtil library which you could use.
(If you just need it to be reversible on the same sort of machine, I'd stick with the built in one though.)
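For the common case, a minimal sketch of that round trip:
byte[] bytes = { 0, 1, 2, 3, 4, 5, 6, 7 };
ulong value = BitConverter.ToUInt64(bytes, 0);    // interprets the 8 bytes in the system's byte order
byte[] roundTrip = BitConverter.GetBytes(value);  // produces the same 8 bytes back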
I'm not sure what the point of the initial left and right bitshifting is (I'm assuming you're trying to generate UInt64 values to test your function with).
To fix your function, just cast the numbers to UInt64 and then test them. Alternatively you can create long literals by using an L (or UL) suffix, such as UInt64[] responseBuffer = { 0L, 1L };
static void Main(string[] args)
{
    int[] responseBuffer = { 0, 1, 2, 3, 4, 5 };
    List<UInt64> bufferList = new List<ulong>();
    foreach (var r in responseBuffer)
        bufferList.Add((UInt64)r);
    UInt64[] m_buffer = bufferList.ToArray();
    foreach (var item in m_buffer)
        Console.WriteLine(item);
    //string m_s = "";
    StringBuilder sb = new StringBuilder();
    for (int k = 0; k < m_buffer.Length; k++)
    {
        int value = (int)m_buffer[k];
        for (int i = 7; i >= 0; i--)
        {
            if ((value >> i & 0x1) > 0)
            {
                sb.Append("1");
                value &= (Byte)~(0x1 << i);
            }
            else
                sb.Append("0");
        }
        sb.Append(" ");
    }
    Console.WriteLine(sb.ToString());
}
