How to convert a byte array to uint64 and back in C#?

I have been trying this for a long time. I have a byte array which I want to convert to a ulong, return the value to another function, and have that function get the byte values back.
I tried bitshifting, but it was unsuccessful in a few cases. Is there an alternative to bitshifting, or do you have a short example? Thanks for the help.
Here is the bitshift code that I used. I don't understand why the second entry is not 00000001:
using System;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] responseBuffer = { 0, 1, 2, 3, 4, 5 };
            UInt64 my = (UInt64)(((UInt64)(((responseBuffer[0]) << 40) & 0xFF0000000000)) |
                        (UInt64)(((responseBuffer[1]) << 32) & 0x00FF00000000) |
                        (UInt64)(((responseBuffer[2]) << 24) & 0x0000FF000000) |
                        (UInt64)(((responseBuffer[3]) << 16) & 0x000000FF0000) |
                        (UInt64)(((responseBuffer[4]) << 8) & 0x00000000FF00) |
                        (UInt64)(responseBuffer[5] & 0xFF));
            UInt64[] m_buffer = {(UInt64)((my >> 40) & 0xff),
                                 (UInt64)((my >> 33) & 0xff),
                                 (UInt64)((my >> 24) & 0xff),
                                 (UInt64)((my >> 16) & 0xff),
                                 (UInt64)((my >> 8) & 0xff),
                                 (UInt64)(my & 0xff)};
            Console.WriteLine("" + m_buffer[1]);
            StringBuilder sb = new StringBuilder();
            for (int k = 0; k < 6; k++)
            {
                int value = (int)m_buffer[k];
                for (int i = 7; i >= 0; i--)
                {
                    if ((value >> i & 0x1) > 0)
                    {
                        sb.Append("1");
                        value &= (Byte)~(0x1 << i);
                    }
                    else
                        sb.Append("0");
                }
                sb.Append(" ");
            }
            Console.WriteLine(sb.ToString());
            Console.Read();
        }
    }
}

Firstly, I'd work out what went wrong with the bitshifting, in case you ever need it again. It should work fine. There are two problems in your code: responseBuffer holds int values, and in C# a shift of an int uses only the low 5 bits of the shift count, so (responseBuffer[1]) << 32 shifts by zero and the & 0x00FF00000000 mask then throws the byte away entirely (cast each element to UInt64 before shifting). There is also a typo: (my >> 33) should be (my >> 32).
Secondly, there's an alternative with BitConverter.ToUInt64 and BitConverter.GetBytes(ulong) if you're happy using the system endianness.
If you want to be able to specify the endianness, I have an EndianBitConverter class in my MiscUtil library which you could use.
(If you just need it to be reversible on the same sort of machine, I'd stick with the built-in one though.)
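For illustration, a minimal sketch of the BitConverter route (my example, not from the original answer; BitConverter.ToUInt64 needs 8 bytes, so the 6-byte buffer is padded first, and the byte order is whatever the current machine uses):
using System;

class BitConverterDemo
{
    static void Main()
    {
        byte[] response = { 0, 1, 2, 3, 4, 5 };

        // ToUInt64 requires 8 bytes, so copy the 6 bytes into an 8-byte buffer.
        byte[] padded = new byte[8];
        Array.Copy(response, 0, padded, 0, response.Length);

        // Interpret the bytes using the system endianness.
        ulong value = BitConverter.ToUInt64(padded, 0);

        // Round trip: GetBytes returns the same 8 bytes on the same machine.
        byte[] roundTripped = BitConverter.GetBytes(value);
        Console.WriteLine(BitConverter.ToString(roundTripped)); // 00-01-02-03-04-05-00-00 on little-endian
    }
}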

I'm not sure what the point of the initial left and right bitshifting is (I'm assuming you're trying to generate UInt64 values to test your function with).
To fix your function, just cast the numbers to UInt64 and then test them. Alternatively, you can create ulong literals by using a suffix of UL, such as UInt64[] responseBuffer = { 0UL, 1UL };
// Requires: using System; using System.Collections.Generic; using System.Text;
static void Main(string[] args)
{
    int[] responseBuffer = { 0, 1, 2, 3, 4, 5 };
    List<UInt64> bufferList = new List<ulong>();
    foreach (var r in responseBuffer)
        bufferList.Add((UInt64)r);
    UInt64[] m_buffer = bufferList.ToArray();
    foreach (var item in m_buffer)
        Console.WriteLine(item);
    StringBuilder sb = new StringBuilder();
    for (int k = 0; k < m_buffer.Length; k++)
    {
        int value = (int)m_buffer[k];
        for (int i = 7; i >= 0; i--)
        {
            if ((value >> i & 0x1) > 0)
            {
                sb.Append("1");
                value &= (Byte)~(0x1 << i);
            }
            else
                sb.Append("0");
        }
        sb.Append(" ");
    }
    Console.WriteLine(sb.ToString());
}

Related

How to generate Crc-64 table having all negative integer Constants and checksum?

I have some example code for a CRC-64 table generator. I checked the sign of the values it produces and discovered that it generates mixed table constants, both negative and positive integers (when read as signed). The same goes for the CRC-64 checksum: it may be negative or positive. Is it possible to implement a modified CRC-64 table generator that produces all negative signed constants and checksum, or otherwise all positive ones? Kindly help me with some information and an example implementation.
Here is the example code for the CRC-64 table generator:
public class Crc64
{
    public const UInt64 POLYNOMIAL = 0xD800000000000000;
    public UInt64[] CRCTable;

    public Crc64()
    {
        this.CRCTable = new ulong[256];
        for (int i = 0; i <= 255; i++)
        {
            int j;
            UInt64 part = (UInt64)i;
            for (j = 0; j < 8; j++)
            {
                if ((part & 1) != 0)
                    part = (part >> 1) ^ POLYNOMIAL;
                else
                    part >>= 1;
            }
            CRCTable[i] = part;
        }
    }
}
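As an aside on the sign question: whether a table entry looks negative is just a matter of its top bit, since the 64-bit pattern prints as negative once it is reinterpreted as a signed Int64. A small sketch of that reinterpretation (my illustration, not part of the original code):
// Count how many entries of the generated table would appear
// negative when viewed as signed 64-bit integers.
var crc = new Crc64();
int negative = 0;
foreach (ulong entry in crc.CRCTable)
{
    // The unchecked cast reinterprets the bit pattern; MSB set means negative.
    if (unchecked((long)entry) < 0)
        negative++;
}
Console.WriteLine(negative + " of 256 entries have the top bit set");
Note that forcing the top bit on in every entry, as the TOPBIT_MASK in the update below does, will indeed make every value print as negative, but the table then no longer matches the CRC defined by the polynomial.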
UPDATE: Kindly inform me whether this implementation is correct as per my question:
public static List<UInt64> generateCrcTable(UInt64 POLY)
{
    const ulong TOPBIT_MASK = 0x8000000000000000;
    const ulong LOWBIT_MASK = 0x0000000000000001;
    List<UInt64> list = new List<UInt64>();
    for (int i = 0; i <= 255; i++)
    {
        UInt64 part = (UInt64)(i);
        for (byte bit = 0; bit < 63; bit++)
        {
            if ((part & LOWBIT_MASK) != 0)
            {
                part >>= 1;
                part ^= POLY;
            }
            else
            {
                part >>= 1;
            }
        }
        part |= TOPBIT_MASK;
        list.Add(part);
    }
    return list;
}
I got the code to work, without a table. crc2 is always zero: I run the CRC twice, once to create the CRC, which I then append to the original data, and once more over the combined data to validate (the CRC of message-plus-checksum comes out to zero when nothing is corrupted).
If you only want 63 bits, then change the shift from 56 to 55, use a mask starting with 0x4 instead of 0x8, and on the return AND with 0x7FFFFFFFFFFFFFFF.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Random rand = new Random();
            List<byte> data = Enumerable.Range(0, 1000).Select(x => (byte)rand.Next(0, 256)).ToList();
            ulong crc = Crc64.ComputeCrc64(data);

            // Append the CRC (most significant byte first) to the data,
            // then recompute: the CRC over data-plus-checksum should be zero.
            List<byte> results = new List<byte>();
            for (int i = 7; i >= 0; i--)
            {
                results.Add((byte)((crc >> (8 * i)) & 0xFF));
            }
            data.AddRange(results);
            ulong crc2 = Crc64.ComputeCrc64(data);
        }
    }

    public class Crc64
    {
        const ulong POLYNOMIAL = 0xD800000000000000;

        public static ulong ComputeCrc64(List<byte> data)
        {
            ulong crc = 0; /* CRC value is 64 bit */
            foreach (byte b in data)
            {
                crc ^= (ulong)b << 56; /* move byte into MSB of 64-bit CRC */
                for (int i = 0; i < 8; i++)
                {
                    if ((crc & 0x8000000000000000) != 0) /* test for MSB = bit 63 */
                    {
                        crc = (ulong)((crc << 1) ^ POLYNOMIAL);
                    }
                    else
                    {
                        crc <<= 1;
                    }
                }
            }
            return crc;
        }
    }
}
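Applying the 63-bit changes described above (shift by 55 instead of 56, test the top bit with a mask starting 0x4 instead of 0x8, and AND the result with 0x7FFFFFFFFFFFFFFF) would look roughly like this sketch; the polynomial here is an assumption of mine (the original one shifted right by one bit), not a value from the answer:
public static ulong ComputeCrc63(List<byte> data)
{
    const ulong POLY63 = 0x6C00000000000000; // assumed 63-bit polynomial (0xD8... >> 1)
    ulong crc = 0;
    foreach (byte b in data)
    {
        crc ^= (ulong)b << 55;                   /* shift 55 instead of 56 */
        for (int i = 0; i < 8; i++)
        {
            if ((crc & 0x4000000000000000) != 0) /* test bit 62 instead of bit 63 */
                crc = (crc << 1) ^ POLY63;
            else
                crc <<= 1;
        }
    }
    return crc & 0x7FFFFFFFFFFFFFFF;             /* keep only the low 63 bits */
}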

Writing a double array to a wave file in c#

I'm trying to write a wave file from scratch using C#. I managed to write 16-bit samples with no issue, but when it comes to 24 bit, apparently all bets are off.
I tried various ways of converting an int to a 3-byte array, which I would then write to the data chunk as L-L-L-R-R-R (since it's a 24-bit stereo PCM WAV).
For the 16-bit part, I used this to generate the samples:
// numberOfBytes = 2 for 16 bit; slice = something like 2 * Math.PI * frequency / sampleRate
private static byte[,] BuildByteWave(double slice, int numberOfBytes = 2)
{
    double dataPt = 0;
    byte[,] output = new byte[Convert.ToInt32(Samples), numberOfBytes];
    for (int i = 0; i < Samples; i++)
    {
        dataPt = Math.Sin(i * slice) * Settings.Amplitude;
        int data = Convert.ToInt32(dataPt * Settings.Volume * 32767);
        for (int j = 0; j < numberOfBytes; j++)
        {
            output[i, j] = ExtractByte(data, j);
        }
    }
    return output;
}
This returns an array I later use to write to the data chunk, like so:
writer.WriteByte(samples[1][0]); //write to the left channel
writer.WriteByte(samples[1][1]); //write to the left channel
writer.WriteByte(samples[2][0]); //now to the second channel
writer.WriteByte(samples[2][1]); //and yet again.
Where 1 and 2 represent a certain sine wave.
However, when I try the above with numberOfBytes = 3, it fails hard: the wave is a bunch of nonsense (the header is formatted correctly).
I understand that I need to convert Int32 to Int24 and that I need to "pad" the samples, but I found no concrete 24-bit tutorial anywhere.
Could you please point me in the right direction?
Edited for clarity.
There is no int24 - you will need to do it yourself. for/switch is a bit of an anti-pattern, too.
int[] samples = /* samples scaled to +/- 8388607 (0x7F_FFFF) */;
byte[] data = new byte[samples.Length * 3];
for (int i = 0, j = 0; i < samples.Length; i++, j += 3)
{
    // WAV is little endian
    data[j + 0] = (byte)((samples[i] >> 0) & 0xff);
    data[j + 1] = (byte)((samples[i] >> 8) & 0xff);
    data[j + 2] = (byte)((samples[i] >> 16) & 0xff);
}
// data now has the 24-bit samples.
As an example, here's a program (Github) which generates a 15 second 44.1kHz 24-bit stereo wav file with 440 Hz in the left channel and 1 kHz in the right channel:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace WavGeneratorDemo
{
    class Program
    {
        const int INT24_MAX = 0x7f_ffff;

        static void Main(string[] args)
        {
            const int sampleRate = 44100;
            const int lengthInSeconds = 15 /* sec */;
            const int channels = 2;
            const double channelSamplesPerSecond = sampleRate * channels;

            var samples = new double[lengthInSeconds * sampleRate * channels];

            // Left is 440 Hz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 0 /* Left */, 440 /* Hz */);
            // Right is 1 kHz sine wave
            FillWithSineWave(samples, channels, channelSamplesPerSecond, 1 /* Right */, 1000 /* Hz */);

            WriteWavFile(samples, sampleRate, channels, "out.wav");
        }

        private static void WriteWavFile(double[] samples, uint sampleRate, ushort channels, string fileName)
        {
            using (var wavFile = File.OpenWrite(fileName))
            {
                const int chunkHeaderSize = 8,
                          waveHeaderSize = 4,
                          fmtChunkSize = 16;
                uint samplesByteLength = (uint)samples.Length * 3u;

                // RIFF header
                wavFile.WriteAscii("RIFF");
                wavFile.WriteLittleEndianUInt32(
                    waveHeaderSize
                    + chunkHeaderSize + fmtChunkSize
                    + chunkHeaderSize + samplesByteLength);
                wavFile.WriteAscii("WAVE");

                // fmt header
                wavFile.WriteAscii("fmt ");
                wavFile.WriteLittleEndianUInt32(fmtChunkSize);
                wavFile.WriteLittleEndianUInt16(1); // AudioFormat = PCM
                wavFile.WriteLittleEndianUInt16(channels);
                wavFile.WriteLittleEndianUInt32(sampleRate);
                wavFile.WriteLittleEndianUInt32(sampleRate * channels * 3); // Byte rate = SampleRate * NumChannels * BytesPerSample
                wavFile.WriteLittleEndianUInt16((ushort)(3 * channels)); // Block Align (stride)
                wavFile.WriteLittleEndianUInt16(24); // Bits per sample

                // samples data
                wavFile.WriteAscii("data");
                wavFile.WriteLittleEndianUInt32(samplesByteLength);
                for (int i = 0; i < samples.Length; i++)
                {
                    var scaledValue = DoubleToInt24(samples[i]);
                    wavFile.WriteLittleEndianInt24(scaledValue);
                }
            }
        }

        private static void FillWithSineWave(double[] samples, int channels, double channelSamplesPerSecond, int channelNo, double freq)
        {
            for (int i = channelNo; i < samples.Length; i += channels)
            {
                var t = (i - channelNo) / channelSamplesPerSecond;
                samples[i] = Math.Sin(t * (freq * Math.PI * 2));
            }
        }

        private static int DoubleToInt24(double value)
        {
            if (value < -1 || value > 1)
            {
                throw new ArgumentOutOfRangeException(nameof(value));
            }
            return (int)(value * INT24_MAX);
        }
    }

    static class StreamExtensions
    {
        public static void WriteAscii(this Stream s, string str) => s.Write(Encoding.ASCII.GetBytes(str));

        public static void WriteLittleEndianUInt32(this Stream s, UInt32 i)
        {
            var b = new byte[4];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            b[3] = (byte)((i >> 24) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianInt24(this Stream s, Int32 i)
        {
            var b = new byte[3];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            b[2] = (byte)((i >> 16) & 0xff);
            s.Write(b);
        }

        public static void WriteLittleEndianUInt16(this Stream s, UInt16 i)
        {
            var b = new byte[2];
            b[0] = (byte)((i >> 0) & 0xff);
            b[1] = (byte)((i >> 8) & 0xff);
            s.Write(b);
        }
    }
}
Which generates the expected two-tone stereo WAV (the original answer included a screenshot of the output here).

Hex Dump EXE File

How do I properly display the contents of an EXE file "C:/Path/To/File.exe" in hexadecimal form? So far, I have:
byte[] BytArr = File.ReadAllBytes("C:/Path/To/File.exe");
I tried using a switch statement (not shown here) that reads every few bytes and should output the appropriate hexadecimal code, but it failed. What should I do? I would really appreciate it if anyone can help me.
Beware that the answer code below isn't mine and is rather inefficient (source: https://www.codeproject.com/articles/36747/quick-and-dirty-hexdump-of-a-byte-array); the original wasn't well formatted, but I have made an effort to format it properly.
Answer Code:
using System;
using System.Text;

namespace HexDump
{
    class Utils
    {
        public static string HexDump(byte[] bytes, int bytesPerLine = 16)
        {
            if (bytes == null) return "<null>";
            int bytesLength = bytes.Length;
            char[] HexChars = "0123456789ABCDEF".ToCharArray();
            int firstHexColumn =
                8       // 8 characters for the address
                + 3;    // 3 spaces
            int firstCharColumn = firstHexColumn
                + bytesPerLine * 3          // - 2 digits for the hexadecimal value and 1 space
                + (bytesPerLine - 1) / 8    // - 1 extra space every 8 characters from the 9th
                + 2;                        // 2 spaces
            int lineLength = firstCharColumn
                + bytesPerLine                // - characters to show the ASCII value
                + Environment.NewLine.Length; // carriage return and line feed (should normally be 2)
            char[] line = (new String(' ', lineLength - Environment.NewLine.Length) + Environment.NewLine).ToCharArray();
            int expectedLines = (bytesLength + bytesPerLine - 1) / bytesPerLine;
            StringBuilder result = new StringBuilder(expectedLines * lineLength);
            for (int i = 0; i < bytesLength; i += bytesPerLine)
            {
                line[0] = HexChars[(i >> 28) & 0xF];
                line[1] = HexChars[(i >> 24) & 0xF];
                line[2] = HexChars[(i >> 20) & 0xF];
                line[3] = HexChars[(i >> 16) & 0xF];
                line[4] = HexChars[(i >> 12) & 0xF];
                line[5] = HexChars[(i >> 8) & 0xF];
                line[6] = HexChars[(i >> 4) & 0xF];
                line[7] = HexChars[(i >> 0) & 0xF];
                int hexColumn = firstHexColumn;
                int charColumn = firstCharColumn;
                for (int j = 0; j < bytesPerLine; j++)
                {
                    if (j > 0 && (j & 7) == 0) hexColumn++;
                    if (i + j >= bytesLength)
                    {
                        line[hexColumn] = ' ';
                        line[hexColumn + 1] = ' ';
                        line[charColumn] = ' ';
                    }
                    else
                    {
                        byte b = bytes[i + j];
                        line[hexColumn] = HexChars[(b >> 4) & 0xF];
                        line[hexColumn + 1] = HexChars[b & 0xF];
                        line[charColumn] = (b < 32 ? '·' : (char)b);
                    }
                    hexColumn += 3;
                    charColumn++;
                }
                result.Append(line);
            }
            return result.ToString();
        }
    }
}
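A quick usage sketch for the class above (the path is a placeholder):
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        byte[] bytes = File.ReadAllBytes("C:/Path/To/File.exe");
        Console.Write(HexDump.Utils.HexDump(bytes));
    }
}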
Here's some simple code that will lump the bytes four at a time (step) with a space delimiter (delimiter); the inner loop guards against reading past the end of the array when the length is not a multiple of step:
int step = 4;
string delimiter = " ";
for (int i = 0; i < BytArr.Length; i += step)
{
    for (int j = 0; j < step && i + j < BytArr.Length; j++)
    {
        Console.Write(BytArr[i + j].ToString("X2"));
    }
    Console.Write(delimiter);
}

Send binary data as parameter to a method in Web Service?

On the server side (C#/.NET, Windows 2003) I have a web service with a method, and on the client side (Visual C++ 6, WinINet, POST) I want to call that method and pass binary data as a parameter to it.
When I send binary data, an error is raised; when I send ASCII data, the call succeeds.
How can I send binary data as a parameter of a method?
To send binary data to a web method you can base64 encode it and then decode it in the web service. If your variable is a byte array called data then you would do the following.
In C++ you need to create a base64 header and cpp file. The following example is from http://www.adp-gmbh.ch/cpp/common/base64.html
base64.h
#include <string>
std::string base64_encode(unsigned char const* , unsigned int len);
std::string base64_decode(std::string const& s);
base64.cpp
#include "base64.h"
#include <iostream>
static const std::string base64_chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz"
"0123456789+/";
static inline bool is_base64(unsigned char c) {
return (isalnum(c) || (c == '+') || (c == '/'));
}
std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) {
std::string ret;
int i = 0;
int j = 0;
unsigned char char_array_3[3];
unsigned char char_array_4[4];
while (in_len--) {
char_array_3[i++] = *(bytes_to_encode++);
if (i == 3) {
char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
char_array_4[3] = char_array_3[2] & 0x3f;
for(i = 0; (i <4) ; i++)
ret += base64_chars[char_array_4[i]];
i = 0;
}
}
if (i)
{
for(j = i; j < 3; j++)
char_array_3[j] = '\0';
char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
char_array_4[3] = char_array_3[2] & 0x3f;
for (j = 0; (j < i + 1); j++)
ret += base64_chars[char_array_4[j]];
while((i++ < 3))
ret += '=';
}
return ret;
}
std::string base64_decode(std::string const& encoded_string) {
int in_len = encoded_string.size();
int i = 0;
int j = 0;
int in_ = 0;
unsigned char char_array_4[4], char_array_3[3];
std::string ret;
while (in_len-- && ( encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
char_array_4[i++] = encoded_string[in_]; in_++;
if (i ==4) {
for (i = 0; i <4; i++)
char_array_4[i] = base64_chars.find(char_array_4[i]);
char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
for (i = 0; (i < 3); i++)
ret += char_array_3[i];
i = 0;
}
}
if (i) {
for (j = i; j <4; j++)
char_array_4[j] = 0;
for (j = 0; j <4; j++)
char_array_4[j] = base64_chars.find(char_array_4[j]);
char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
for (j = 0; (j < i - 1); j++) ret += char_array_3[j];
}
return ret;
}
After you have your base64 implementation you can create the string to pass.
std::string encoded = base64_encode(data, sizeof(data));
You have to be mindful, however, that base64 encoding expands your data (every 3 input bytes become 4 output characters, and MIME adds line breaks), so your uploads will take longer as the data will be larger; approximately 37% larger (see the MIME section of http://en.wikipedia.org/wiki/Base64).
To decode the data on the other end you would simply do the following.
byte[] data = System.Convert.FromBase64String(yourParameterName);
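On the C# side, the receiving web method takes the encoded payload as an ordinary string parameter; a minimal ASMX-style sketch (the service, method, and parameter names are placeholders of mine):
using System;
using System.Web.Services;

public class UploadService : WebService
{
    [WebMethod]
    public void ReceiveData(string base64Payload)
    {
        // Decode the base64 string back into the original bytes.
        byte[] data = Convert.FromBase64String(base64Payload);
        // ... process the binary data ...
    }
}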

Faster way to swap endianness in C# with 32 bit words

In this question, the following code:
public static void Swap(byte[] data)
{
    for (int i = 0; i < data.Length; i += 2)
    {
        byte b = data[i];
        data[i] = data[i + 1];
        data[i + 1] = b;
    }
}
was rewritten in unsafe code to improve its performance:
public static unsafe void SwapX2(Byte[] Source)
{
    fixed (Byte* pSource = &Source[0])
    {
        Byte* bp = pSource;
        Byte* bp_stop = bp + Source.Length;
        while (bp < bp_stop)
        {
            *(UInt16*)bp = (UInt16)(*bp << 8 | *(bp + 1));
            bp += 2;
        }
    }
}
Assuming that one wanted to do the same thing with 32 bit words:
public static void SwapX4(byte[] data)
{
    byte temp;
    for (int i = 0; i < data.Length; i += 4)
    {
        temp = data[i];
        data[i] = data[i + 3];
        data[i + 3] = temp;
        temp = data[i + 1];
        data[i + 1] = data[i + 2];
        data[i + 2] = temp;
    }
}
how would this be rewritten in a similar fashion?
public static unsafe void SwapX4(Byte[] Source)
{
    fixed (Byte* pSource = &Source[0])
    {
        Byte* bp = pSource;
        Byte* bp_stop = bp + Source.Length;
        while (bp < bp_stop)
        {
            *(UInt32*)bp = (UInt32)(
                (*bp << 24) |
                (*(bp + 1) << 16) |
                (*(bp + 2) << 8) |
                (*(bp + 3)));
            bp += 4;
        }
    }
}
Note that both of these functions (my SwapX4 and your SwapX2) will only swap anything on a little-endian host; when run on a big-endian host, they are an expensive no-op.
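If the intent is "convert big-endian data to host order", one way to make that behavior explicit (my sketch, using the safe managed swap rather than the unsafe one) is to gate the swap on BitConverter.IsLittleEndian:
public static void BigEndianToHost(byte[] data)
{
    // On a big-endian host the data is already in host order.
    if (!BitConverter.IsLittleEndian)
        return;
    for (int i = 0; i + 3 < data.Length; i += 4)
    {
        byte t = data[i]; data[i] = data[i + 3]; data[i + 3] = t;
        t = data[i + 1]; data[i + 1] = data[i + 2]; data[i + 2] = t;
    }
}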
This version will not exceed the bounds of the buffer, works on both little- and big-endian architectures, and is faster on larger data. (Update: add build configurations for x86 and x64, predefine the symbol X86 for 32-bit (x86) builds and X64 for 64-bit (x64) builds, and it'll be slightly faster; note that those branches read the array length straight out of the CLR object header, which is an implementation detail.)
public static unsafe void Swap4(byte[] source)
{
    fixed (byte* psource = source)
    {
#if X86
        var length = *((uint*)(psource - 4)) & 0xFFFFFFFEU;
#elif X64
        var length = *((uint*)(psource - 8)) & 0xFFFFFFFEU;
#else
        var length = (source.Length & 0xFFFFFFFE);
#endif
        while (length > 7)
        {
            length -= 8;
            ulong* pulong = (ulong*)(psource + length);
            *pulong = ( ((*pulong >> 24) & 0x000000FF000000FFUL)
                      | ((*pulong >> 8)  & 0x0000FF000000FF00UL)
                      | ((*pulong << 8)  & 0x00FF000000FF0000UL)
                      | ((*pulong << 24) & 0xFF000000FF000000UL));
        }
        if (length != 0)
        {
            uint* puint = (uint*)psource;
            *puint = ( ((*puint >> 24))
                     | ((*puint >> 8)  & 0x0000FF00U)
                     | ((*puint << 8)  & 0x00FF0000U)
                     | ((*puint << 24)));
        }
    }
}
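A quick round-trip check of Swap4 (my sketch; it assumes the buffer length is a multiple of 4):
// Two 32-bit words in big-endian byte order.
byte[] buffer = { 0x00, 0x00, 0x00, 0x01, 0x12, 0x34, 0x56, 0x78 };

Swap4(buffer);
Console.WriteLine(BitConverter.ToString(buffer)); // 01-00-00-00-78-56-34-12

Swap4(buffer); // swapping twice restores the original order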
