I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();

for (int i = 1; i <= 255; i++)
{
    byte[] nByte = BitConverter.GetBytes((byte)i);
    foreach (byte b in nByte)
    {
        byteList.Add(b);
    }
}

for (int g = 256; g <= 1000; g++)
{
    UInt16 st = Convert.ToUInt16(g);
    byte[] xByte = BitConverter.GetBytes(st);
    foreach (byte c in xByte)
    {
        byteList.Add(c);
    }
}

byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x01, 0x04, 0x01
1, 4, 1 -> 0x01, 0x04, 0x01
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
    uint num = (uint)value;
    while (num >= 0x80)
    {
        this.Write((byte)(num | 0x80));
        num = num >> 7;
    }
    this.Write((byte)num);
}
(taken from System.IO.BinaryWriter; it is what Write(String) uses to length-prefix strings).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
    byte num3;
    int num = 0;
    int num2 = 0;
    do
    {
        if (num2 == 0x23)
        {
            throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
        }
        num3 = this.ReadByte();
        num |= (num3 & 0x7f) << num2;
        num2 += 7;
    }
    while ((num3 & 0x80) != 0);
    return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;

namespace EncodedNumbers
{
    class Program
    {
        protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
        {
            uint num = (uint)value;
            while (num >= 0x80)
            {
                bin.Write((byte)(num | 0x80));
                num = num >> 7;
            }
            bin.Write((byte)num);
        }

        static void Main(string[] args)
        {
            MemoryStream ms = new MemoryStream();
            BinaryWriter bin = new BinaryWriter(ms);
            for (int i = 1; i <= 1000; i++)
            {
                Write7BitEncodedInt(bin, i);
            }
            byte[] data = ms.ToArray();
            int size = data.Length;
            Console.WriteLine("Total # of Bytes = " + size);
            Console.ReadLine();
        }
    }
}
The total size comes to 1873 bytes for the numbers 1-1000 (127 one-byte values plus 873 two-byte values).
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you just want to pack them in, ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
    // note: assumes value != 0 (an all-zero array would run past the start)
    byte[] bytes = BitConverter.GetBytes(value);
    int skip = bytes.Length - 1;
    while (bytes[skip] == 0)
    {
        skip--;
    }
    for (int i = 0; i <= skip; i++)
    {
        bin.Write(bytes[i]);
    }
}
This skips any leading zero bytes (scanning from the MSB down). So for 1-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1745 bytes for 1-1000 (as opposed to 1873 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store the numbers above 255 in one byte. The easiest way would be to use a short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values).
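A sketch of that 10-bit packing (untested, and Pack10Bit is a made-up name; each value occupies exactly 10 bits, MSB first, so decoding stays unambiguous):

static byte[] Pack10Bit(IEnumerable<int> values)
{
    var bits = new List<bool>();
    foreach (int v in values)
    {
        // 10 bits per value, most significant bit first
        for (int bit = 9; bit >= 0; bit--)
        {
            bits.Add(((v >> bit) & 1) == 1);
        }
    }
    // pack the bits; the last byte is zero-padded
    byte[] result = new byte[(bits.Count + 7) / 8];
    for (int i = 0; i < bits.Count; i++)
    {
        if (bits[i])
        {
            result[i / 8] |= (byte)(1 << (7 - (i % 8)));
        }
    }
    return result;
}

For 1-1000 that is 1000 * 10 bits = 1250 bytes.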
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
    byte[] nByte = BitConverter.GetBytes(i);
    foreach (byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
This will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class, or by ginning up a Huffman encoding implementation using an agreed-upon set of frequencies.
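For the built-in route, a minimal sketch with GZipStream (System.IO.Compression) could look like this; whether it actually shrinks this particular stream depends on the data:

static byte[] Compress(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        using (var gz = new GZipStream(ms, CompressionMode.Compress))
        {
            gz.Write(data, 0, data.Length);
        }
        // the GZipStream must be closed so it flushes before we read the result
        return ms.ToArray();
    }
}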
Related
I have a simple task: determine how many bytes are necessary to encode a number (a byte array's length) into a byte array, and encode the final value (implementing this article: Encoded Length and Value Bytes).
Originally I wrote a quick method that accomplishes the task:
public static Byte[] Encode(Byte[] rawData, Byte enclosingtag) {
    if (rawData == null) {
        return new Byte[] { enclosingtag, 0 };
    }
    List<Byte> computedRawData = new List<Byte> { enclosingtag };
    // if array size is less than 128, encode length directly. No questions here
    if (rawData.Length < 128) {
        computedRawData.Add((Byte)rawData.Length);
    } else {
        // convert array size to a hex string
        String hexLength = rawData.Length.ToString("x2");
        // if hex string has odd length, align it to even by prepending hex string
        // with '0' character
        if (hexLength.Length % 2 == 1) { hexLength = "0" + hexLength; }
        // take a pair of hex characters and convert each octet to a byte
        Byte[] lengthBytes = Enumerable.Range(0, hexLength.Length)
            .Where(x => x % 2 == 0)
            .Select(x => Convert.ToByte(hexLength.Substring(x, 2), 16))
            .ToArray();
        // insert padding byte, set bit 7 to 1 and add byte count required
        // to encode length bytes
        Byte paddingByte = (Byte)(128 + lengthBytes.Length);
        computedRawData.Add(paddingByte);
        computedRawData.AddRange(lengthBytes);
    }
    computedRawData.AddRange(rawData);
    return computedRawData.ToArray();
}
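For instance, a 200-byte payload with tag 0x04 (an OCTET STRING, used here purely as an example) gets the long-form length:

// hypothetical call: 200 content bytes under tag 0x04
Byte[] encoded = Encode(new Byte[200], 0x04);
// encoded[0] == 0x04 (tag), encoded[1] == 0x81 (long form, one length byte),
// encoded[2] == 0xC8 (200), followed by the 200 content bytes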
This is old code and was written in an awful way.
Now I'm trying to optimize the code by using either bitwise operators or the BitConverter class. Here is an example of the bitwise edition:
public static Byte[] Encode2(Byte[] rawData, Byte enclosingtag) {
    if (rawData == null) {
        return new Byte[] { enclosingtag, 0 };
    }
    List<Byte> computedRawData = new List<Byte>(rawData);
    if (rawData.Length < 128) {
        computedRawData.Insert(0, (Byte)rawData.Length);
    } else {
        // temp number
        Int32 num = rawData.Length;
        // track byte count, this will be necessary further
        Int32 counter = 1;
        // simply make bitwise AND to extract byte value
        // and shift right while remaining value is still more than 255
        // (there are more than 8 bits)
        while (num >= 256) {
            counter++;
            computedRawData.Insert(0, (Byte)(num & 255));
            num = num >> 8;
        }
        // compose final array
        computedRawData.InsertRange(0, new[] { (Byte)(128 + counter), (Byte)num });
    }
    computedRawData.Insert(0, enclosingtag);
    return computedRawData.ToArray();
}
and the final implementation with the BitConverter class:
public static Byte[] Encode3(Byte[] rawData, Byte enclosingtag) {
    if (rawData == null) {
        return new Byte[] { enclosingtag, 0 };
    }
    List<Byte> computedRawData = new List<Byte>(rawData);
    if (rawData.Length < 128) {
        computedRawData.Insert(0, (Byte)rawData.Length);
        computedRawData.Insert(0, enclosingtag);
        return computedRawData.ToArray();
    }
    // convert integer to a byte array
    Byte[] bytes = BitConverter.GetBytes(rawData.Length);
    // start from the end of a byte array to skip unnecessary zero bytes
    for (int i = bytes.Length - 1; i >= 0; i--) {
        // once the byte value is non-zero, take everything starting
        // from the current position up to array start.
        if (bytes[i] > 0) {
            // we need to reverse the array to get the proper byte order
            computedRawData.InsertRange(0, bytes.Take(i + 1).Reverse());
            // compose final array
            computedRawData.Insert(0, (Byte)(128 + i + 1));
            computedRawData.Insert(0, enclosingtag);
            return computedRawData.ToArray();
        }
    }
    // unreachable for lengths >= 128, but required by the compiler
    return null;
}
All methods do their work as expected. I used an example from the Stopwatch class page to measure performance, and the performance tests surprised me. My test method performed 1000 runs of the method to encode a byte array (actually, only the array size matters) with 100,000 elements, and the average times are (a sketch of the harness follows the timings):
Encode -- around 200ms
Encode2 -- around 270ms
Encode3 -- around 320ms
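For reference, the measurement loop was along these lines (a sketch, not the exact code; 0x30 is just an example tag):

Byte[] rawData = new Byte[100000];
Stopwatch sw = Stopwatch.StartNew();
for (Int32 i = 0; i < 1000; i++) {
    Encode(rawData, 0x30);
}
sw.Stop();
Console.WriteLine("Avg: {0}ms", sw.ElapsedMilliseconds / 1000.0);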
I personally like method Encode2, because the code looks more readable, but its performance isn't that good.
The question: what would you suggest to improve Encode2's performance, or to improve Encode's readability?
Any help will be appreciated.
===========================
Update: Thanks to all who participated in this thread. I took into consideration all suggestions and ended up with this solution:
public static Byte[] Encode6(Byte[] rawData, Byte enclosingtag) {
    if (rawData == null) {
        return new Byte[] { enclosingtag, 0 };
    }
    Byte[] retValue;
    if (rawData.Length < 128) {
        retValue = new Byte[rawData.Length + 2];
        retValue[0] = enclosingtag;
        retValue[1] = (Byte)rawData.Length;
        // copy the payload after the tag and length
        rawData.CopyTo(retValue, 2);
    } else {
        Byte[] lenBytes = new Byte[3];
        Int32 num = rawData.Length;
        Int32 counter = 0;
        while (num >= 256) {
            lenBytes[counter] = (Byte)(num & 255);
            num >>= 8;
            counter++;
        }
        // 3 is: enclosing tag, the length-prefix byte and the first length byte
        retValue = new byte[rawData.Length + 3 + counter];
        rawData.CopyTo(retValue, 3 + counter);
        retValue[0] = enclosingtag;
        retValue[1] = (Byte)(129 + counter);
        retValue[2] = (Byte)num;
        Int32 n = 3;
        for (Int32 i = counter - 1; i >= 0; i--) {
            retValue[n] = lenBytes[i];
            n++;
        }
    }
    return retValue;
}
Eventually I moved away from lists to fixed-size byte arrays. The average time against the same data set is now about 65ms. It appears that lists (not bitwise operations) were giving me the significant performance penalty.
The main problem here is almost certainly the allocation of the List, the allocations needed as you insert new elements, and the final conversion of the list to an array. This code probably spends most of its time in the garbage collector and memory allocator. The use vs. non-use of bitwise operators probably means very little in comparison, so I would look into ways to reduce the amount of memory you allocate first.
One way is to send in a reference to a byte array allocated in advance, and an index to where you are in the array, instead of allocating and returning the data; then return an integer telling how many bytes you have written. Working on large arrays is usually more efficient than working on many small objects. As others have mentioned, use a profiler and see where your code spends its time.
Of course the optimization I mentioned will make your code more low-level in nature, and closer to what you would typically do in C, but there is often a trade-off between readability and performance.
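As a sketch of that shape (EncodeInto is a hypothetical name, untested; the caller owns the buffer and gets back the number of bytes written):

static int EncodeInto(byte[] buffer, int offset, byte[] rawData, byte enclosingtag)
{
    int n = offset;
    buffer[n++] = enclosingtag;
    int length = rawData == null ? 0 : rawData.Length;
    if (length < 128)
    {
        buffer[n++] = (byte)length;
    }
    else
    {
        // count how many bytes the length itself needs
        int lenBytes = 0;
        for (int tmp = length; tmp > 0; tmp >>= 8) { lenBytes++; }
        buffer[n++] = (byte)(0x80 | lenBytes);
        // write the length big-endian
        for (int i = lenBytes - 1; i >= 0; i--)
        {
            buffer[n++] = (byte)(length >> (8 * i));
        }
    }
    if (rawData != null)
    {
        Array.Copy(rawData, 0, buffer, n, rawData.Length);
        n += rawData.Length;
    }
    return n - offset;
}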
Using "reverse, append, reverse" instead of "insert at front", and preallocating everything, it might be something like this: (not tested)
public static byte[] Encode4(byte[] rawData, byte enclosingtag) {
    if (rawData == null) {
        return new byte[] { enclosingtag, 0 };
    }
    List<byte> computedRawData = new List<byte>(rawData.Length + 6);
    computedRawData.AddRange(rawData);
    if (rawData.Length < 128) {
        computedRawData.InsertRange(0, new byte[] { enclosingtag, (byte)rawData.Length });
    } else {
        computedRawData.Reverse();
        // temp number
        int num = rawData.Length;
        // track byte count, this will be necessary further
        int counter = 1;
        // simply cast to byte to extract byte value
        // and shift right while remaining value is still more than 255
        // (there are more than 8 bits)
        while (num >= 256) {
            counter++;
            computedRawData.Add((byte)num);
            num >>= 8;
        }
        // compose final array
        computedRawData.Add((byte)num);
        computedRawData.Add((byte)(counter + 128));
        computedRawData.Add(enclosingtag);
        computedRawData.Reverse();
    }
    return computedRawData.ToArray();
}
I don't know for sure whether it's going to be faster, but it would make sense - now the expensive "insert at front" operation is mostly avoided, except in the case where there would be only one of them (probably not enough to balance with the two reverses).
Another idea is to limit the insert-at-front to a single occurrence in another way: collect all the things that have to be inserted there and then do it once. Could look something like this: (not tested)
public static byte[] Encode5(byte[] rawData, byte enclosingtag) {
    if (rawData == null) {
        return new byte[] { enclosingtag, 0 };
    }
    List<byte> computedRawData = new List<byte>(rawData);
    if (rawData.Length < 128) {
        computedRawData.InsertRange(0, new byte[] { enclosingtag, (byte)rawData.Length });
    } else {
        // list of all things that will be inserted
        List<byte> front = new List<byte>(8);
        // temp number
        int num = rawData.Length;
        // track byte count, this will be necessary further
        int counter = 1;
        // simply cast to byte to extract byte value
        // and shift right while remaining value is still more than 255
        // (there are more than 8 bits)
        while (num >= 256) {
            counter++;
            front.Insert(0, (byte)num); // inserting in tiny list, not so bad
            num >>= 8;
        }
        // compose final array
        front.InsertRange(0, new[] { (byte)(128 + counter), (byte)num });
        front.Insert(0, enclosingtag);
        computedRawData.InsertRange(0, front);
    }
    return computedRawData.ToArray();
}
If it's not good enough or didn't help (or if this is worse - hey, could be), I'll try to come up with more ideas.
I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good, but it will give me a hexadecimal string.
Now I would like to use something different than hexadecimal characters; I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about it?
Edit: the reason I want to do this is that I would like to have a smaller string than a hexadecimal one.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var length = number.Length;
    var result = string.Empty;
    var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
    int newlen;
    do {
        var value = 0;
        newlen = 0;
        for (var i = 0; i < length; ++i) {
            value = value * fromBase + nibbles[i];
            if (value >= toBase) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = value / toBase;
                value %= toBase;
            }
            else if (newlen > 0) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = 0;
            }
        }
        length = newlen;
        result = digits[value] + result;
    }
    while (newlen != 0);
    return result;
}
As it's coming from PHP it might not be too idiomatic C#, and there are no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with:
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
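(The hex step could be, for instance, with hashBytes being your array:)

string hexEncoded = BitConverter.ToString(hashBytes).Replace("-", "").ToLowerInvariant();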
Earlier tonight I came across a codereview question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from a string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount = 8; // number of bits in a byte
// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits = "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor = Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36 = new BigInteger(36);

// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
    var bi = new BigInteger();
    for (int x = 0; x < chars.Length; x++)
    {
        int i = kBase36Digits.IndexOf(chars[x]);
        if (i < 0) return null; // invalid character
        bi *= kBigInt36;
        bi += i;
    }
    return bi.ToByteArray();
}

// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
    // Estimate the result's length so we don't waste time realloc'ing
    int result_length = (int)
        Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);
    // We use a List so we don't have to CopyTo a StringBuilder's characters
    // to a char[], only to then Array.Reverse it later
    var result = new System.Collections.Generic.List<char>(result_length);
    var dividend = new BigInteger(bytes);
    // IsZero's computation is less complex than evaluating "dividend > 0"
    // which invokes BigInteger.CompareTo(BigInteger)
    while (!dividend.IsZero)
    {
        BigInteger remainder;
        dividend = BigInteger.DivRem(dividend, kBigInt36, out remainder);
        int digit_index = Math.Abs((int)remainder);
        result.Add(kBase36Digits[digit_index]);
    }
    // orientate the characters in big-endian ordering
    result.Reverse();
    // ToArray will also trim the excess chars used in length prediction
    return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base36 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on codereview)
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provides the least overhead for you.
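Hypothetical usage (the input string is just an example):

byte[] hash = MD5.Create().ComputeHash(Encoding.UTF8.GetBytes("some input"));
string encoded = ToBase36String(hash);      // roughly 25 chars of [0-9a-z] for 16 bytes
byte[] decoded = FromBase36String(encoded); // note: the byte[] is treated as a
                                            // little-endian BigInteger, so a hash whose
                                            // last byte is >= 0x80 reads as negative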
If you want a shorter string and can accept [a-zA-Z0-9] and + and / then look at Convert.ToBase64String
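For example, with hashBytes being your array:

string base64 = Convert.ToBase64String(hashBytes); // a 16-byte MD5 becomes 24 chars, '='-padded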
Using BigInteger (needs the System.Numerics reference)
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";

// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
    if (bytes.Length == 0 || len == 0)
    {
        return String.Empty;
    }
    // BigInteger saves in the last byte the sign. > 7F negative,
    // <= 7F positive.
    // If we have a "negative" number, we will prepend a 0 byte.
    byte[] bytes2;
    if (littleEndian)
    {
        if (bytes[bytes.Length - 1] <= 0x7F)
        {
            bytes2 = bytes;
        }
        else
        {
            // Note that Array.Resize doesn't modify the original array,
            // but creates a copy and sets the passed reference to the
            // new array
            bytes2 = bytes;
            Array.Resize(ref bytes2, bytes.Length + 1);
        }
    }
    else
    {
        bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];
        // We copy and reverse the array
        for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
        {
            bytes2[j] = bytes[i];
        }
    }
    BigInteger bi = new BigInteger(bytes2);
    // A little optimization. We will do many divisions based on
    // chars.Length .
    BigInteger length = chars.Length;
    // We pre-calc the length of the string. We know the bits of
    // "information" of a byte are 8. Using Log2 we calc the bits of
    // information of our new base.
    if (len == -1)
    {
        len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
    }
    // We will build our string on a char[]
    var chs = new char[len];
    int chsIndex = 0;
    while (bi > 0)
    {
        BigInteger remainder;
        bi = BigInteger.DivRem(bi, length, out remainder);
        chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
        chsIndex++;
        if (chsIndex >= len)
        {
            // all len chars used but value remains: the output is too small
            if (bi > 0)
            {
                throw new OverflowException();
            }
        }
    }
    // We append the zeros that we skipped at the beginning
    if (littleEndian)
    {
        while (chsIndex < len)
        {
            chs[chsIndex] = chars[0];
            chsIndex++;
        }
    }
    else
    {
        while (chsIndex < len)
        {
            chs[len - chsIndex - 1] = chars[0];
            chsIndex++;
        }
    }
    return new string(chs);
}
public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
    if (str.Length == 0 || len == 0)
    {
        return new byte[0];
    }
    // This should be the maximum length of the byte[] array. It's
    // the opposite of the one used in ToBaseN.
    // Note that it can be passed as a parameter
    if (len == -1)
    {
        len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
    }
    BigInteger bi = BigInteger.Zero;
    BigInteger length2 = chars.Length;
    BigInteger mult = BigInteger.One;
    for (int j = 0; j < str.Length; j++)
    {
        int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);
        // We didn't find the character
        if (ix == -1)
        {
            throw new ArgumentOutOfRangeException();
        }
        bi += ix * mult;
        mult *= length2;
    }
    var bytes = bi.ToByteArray();
    int len2 = bytes.Length;
    // BigInteger adds a 0 byte for positive numbers that have the
    // last byte > 0x7F
    if (len2 >= 2 && bytes[len2 - 1] == 0)
    {
        len2--;
    }
    int len3 = Math.Min(len, len2);
    byte[] bytes2;
    if (littleEndian)
    {
        if (len == bytes.Length)
        {
            bytes2 = bytes;
        }
        else
        {
            bytes2 = new byte[len];
            Array.Copy(bytes, bytes2, len3);
        }
    }
    else
    {
        bytes2 = new byte[len];
        for (int i = 0; i < len3; i++)
        {
            bytes2[len - i - 1] = bytes[i];
        }
    }
    for (int i = len3; i < len2; i++)
    {
        if (bytes[i] != 0)
        {
            throw new OverflowException();
        }
    }
    return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow! (2 minutes for 100k.) To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (because the whole array needs to be divided by 36). Unless you want to work with blocks of 5 bytes and lose some bits: each symbol of Base36 carries around 5.169925001 bits, so 8 of these symbols carry 41.35940001 bits, very near the 40 bits of 5 bytes.
Note that these methods can work both in little-endian mode and in big-endian mode. The endianness of the input and of the output is the same. Both methods accept a len parameter. You can use it to trim excess 0 (zeroes). Note that if you try to make an output too small to contain the input, an OverflowException will be thrown.
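Assumed usage, with the chars constant above (the input bytes are arbitrary):

byte[] hash = { 0x3C, 0x0E, 0xA2, 0xF3 };
string s = ToBaseN(hash, chars);                                  // little-endian by default
byte[] back = FromBaseN(s, chars, littleEndian: true, len: hash.Length);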
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a char separator. To convert back you will need to split the string and convert the string[] back to byte[] using the same .Select() approach.
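The reverse step might look like this (a sketch; needs System.Linq):

byte[] back = result.Split('a').Select(s => Byte.Parse(s)).ToArray();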
Usually a power of 2 is used - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits per character. The only challenge in that case is how to deserialize variable-length strings.
For base 36 you could treat the data as one large number, and then:
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps.
You can use modulo.
This example encodes your byte array to a string of [0-9][a-z].
Change it if you want.
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length];
    for (i = 0; i < byteArr.Length; i++)
    {
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[i] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[i] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
If you don't want to lose data for decoding, you can use this example:
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length * 2];
    for (i = 0; i < byteArr.Length; i++)
    {
        charArr[2 * i] = (char)(byteArr[i] / 36 + 48); // quotient, always a digit (max 255/36 = 7)
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[2 * i + 1] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[2 * i + 1] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
And now you have a string of double length, where the first char of each pair is the quotient (the multiple of 36) and the second is the residue. For example: 200 = 36*5 + 20 => "5k".
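A matching decoder might look like this (a sketch; stringToByte is a made-up name):

public byte[] stringToByte(string s)
{
    byte[] result = new byte[s.Length / 2];
    for (int i = 0; i < result.Length; i++)
    {
        int quotient = s[2 * i] - 48;               // always a digit, since 255/36 = 7
        char c = s[2 * i + 1];
        int remainder = c <= '9' ? c - 48 : c - 87; // digit or letter, as in the encoder
        result[i] = (byte)(quotient * 36 + remainder);
    }
    return result;
}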
I have a problem, much like this post: How to read a .NET Guid into a Java UUID.
Except that from a remote svc I get a hex string formatted like this: ABCDEFGH-IJKL-MNOP-QRST-123456.
I need to match the GUID.ToByteArray()-generated .NET byte array GH-EF-CD-AB-KL-IJ-OP-MN-QR-ST-12-34-56 in Java for hashing purposes.
I'm kinda at a loss as to how to parse this. Do I cut off the QRST-123456 part and perhaps use something like the Commons IO EndianUtils on the other part, then stitch the 2 arrays back together as well? Seems way too complicated.
I can rearrange the string, but I shouldn't have to do any of this. Mr. Google doesn't wanna help me either..
BTW, what is the logic in Little Endian land that keeps those last 6 char unchanged?
Yes, for reference, here's what I've done {sorry for 'answer', but had trouble formatting it properly in comment}:
String s = "3C0EA2F3-B3A0-8FB0-23F0-9F36DEAA3F7E";
String[] splitz = s.split("-");
String rebuilt = "";
for (int i = 0; i < 3; i++) {
    // Split into 2-char chunks. '..' = nbr of chars in chunks
    String[] parts = splitz[i].split("(?<=\\G..)");
    for (int k = parts.length - 1; k >= 0; k--) {
        rebuilt += parts[k];
    }
}
rebuilt += splitz[3] + splitz[4];
I know, it's hacky, but it'll do for testing.
Make it into a byte[] and skip the first 3 bytes:
package guid;

import java.util.Arrays;

public class GuidConvert {

    static byte[] convertUuidToBytes(String guid) {
        String hexdigits = guid.replaceAll("-", "");
        byte[] bytes = new byte[hexdigits.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            int x = Integer.parseInt(hexdigits.substring(i*2, (i+1)*2), 16);
            bytes[i] = (byte) x;
        }
        return bytes;
    }

    static String bytesToHexString(byte[] bytes) {
        StringBuilder buf = new StringBuilder();
        for (byte b : bytes) {
            int i = b >= 0 ? b : (int) b + 256;
            buf.append(Integer.toHexString(i / 16));
            buf.append(Integer.toHexString(i % 16));
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        String guid = "3C0EA2F3-B3A0-8FB0-23F0-9F36DEAA3F7E";
        byte[] bytes = convertUuidToBytes(guid);
        System.err.println("GUID  = " + guid);
        System.err.println("bytes = " + bytesToHexString(bytes));
        byte[] tail = Arrays.copyOfRange(bytes, 3, bytes.length);
        System.err.println("tail  = " + bytesToHexString(tail));
    }
}
The last group of 6 bytes is not reversed because it is stored as individual bytes (as is the fourth two-byte group). The first three groups are reversed because they are a four-byte integer followed by two two-byte integers, each stored little-endian.
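On the .NET side the reordering is easy to see; this should print the byte order the question describes:

var g = new Guid("3C0EA2F3-B3A0-8FB0-23F0-9F36DEAA3F7E");
Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// F3-A2-0E-3C-A0-B3-B0-8F-23-F0-9F-36-DE-AA-3F-7E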
I have a string representing bits, such as:
"0000101000010000"
I want to convert it to get an array of bytes such as:
{0x0A, 0x10}
The number of bytes is variable but there will always be padding to form 8 bits per byte (so 1010 becomes 00001010).
Use the built-in Convert.ToByte() and read in chunks of 8 chars, without reinventing the thing... unless this is something that should teach you about bitwise operations.
Update:
Stealing from Adam (and overusing LINQ, probably. This might be too concise and a normal loop might be better, depending on your own (and your coworker's!) preferences):
public static byte[] GetBytes(string bitString) {
    return Enumerable.Range(0, bitString.Length / 8)
        .Select(pos => Convert.ToByte(
            bitString.Substring(pos * 8, 8),
            2)
        ).ToArray();
}
public static byte[] GetBytes(string bitString)
{
    byte[] output = new byte[bitString.Length / 8];
    for (int i = 0; i < output.Length; i++)
    {
        for (int b = 0; b <= 7; b++)
        {
            output[i] |= (byte)((bitString[i * 8 + b] == '1' ? 1 : 0) << (7 - b));
        }
    }
    return output;
}
Here's a quick and straightforward solution (and I think it will meet all your requirements): http://vbktech.wordpress.com/2011/07/08/c-net-converting-a-string-of-bits-to-a-byte-array/
This should get you to your answer: How can I convert bits to bytes?
You could just convert your string into an array like that article has, and from there use the same logic to perform the conversion.
Get the characters in groups of eight, and parse each group to a byte:
string bits = "0000101000010000";
byte[] data =
    Regex.Matches(bits, ".{8}").Cast<Match>()
         .Select(m => Convert.ToByte(m.Groups[0].Value, 2))
         .ToArray();
private static byte[] GetBytes(string bitString)
{
    byte[] result = Enumerable.Range(0, bitString.Length / 8)
        .Select(pos => Convert.ToByte(
            bitString.Substring(pos * 8, 8),
            2)
        ).ToArray();

    List<byte> mahByteArray = new List<byte>();
    for (int i = result.Length - 1; i >= 0; i--)
    {
        mahByteArray.Add(result[i]);
    }
    return mahByteArray.ToArray();
}
private static String ToBitString(BitArray bits)
{
    var sb = new StringBuilder();
    for (int i = bits.Count - 1; i >= 0; i--)
    {
        char c = bits[i] ? '1' : '0';
        sb.Append(c);
    }
    return sb.ToString();
}
You can use any of the below:
byte[] bytes = System.Text.Encoding.UTF8.GetBytes("Hi");
string str = System.Text.Encoding.UTF8.GetString(bytes);

// the input must be valid Base64; "SGVsbG8h" decodes to "Hello!"
byte[] bytesNew = System.Convert.FromBase64String("SGVsbG8h");
string strNew = System.Convert.ToBase64String(bytesNew);
How do I write bits (not bytes) to a file with C#/.NET? I'm pretty stuck on it.
Edit: I'm looking for a different way than just writing every 8 bits as a byte.
The smallest amount of data you can write at one time is a byte.
If you need to write individual bit values (like, for instance, a binary format that requires a 1-bit flag, a 3-bit integer and a 4-bit integer), you would need to buffer the individual values in memory and write to the file when you have a whole byte to write. (For performance, it makes sense to buffer more and write larger chunks to the file.)
Accumulate the bits in a buffer (a single byte can qualify as a "buffer")
When adding a bit, left-shift the buffer and put the new bit in the lowest position using OR
Once the buffer is full, append it to the file (see the sketch below)
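A minimal sketch of that scheme (BitWriter is a hypothetical name; bits are flushed MSB-first and Flush pads the final byte with zeros):

class BitWriter
{
    private readonly Stream stream;
    private int buffer; // accumulates up to 8 bits
    private int count;  // number of bits currently buffered

    public BitWriter(Stream stream) { this.stream = stream; }

    public void WriteBit(bool bit)
    {
        buffer = (buffer << 1) | (bit ? 1 : 0); // left-shift, OR the new bit in
        count++;
        if (count == 8)
        {
            stream.WriteByte((byte)buffer);     // a whole byte: append it
            buffer = 0;
            count = 0;
        }
    }

    public void Flush()
    {
        if (count > 0)
        {
            stream.WriteByte((byte)(buffer << (8 - count))); // zero-pad the tail
            buffer = 0;
            count = 0;
        }
    }
}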
I've made something like this to emulate a BitsWriter.
private BitArray bitBuffer = new BitArray(new byte[65536]);
private int bitCount = 0;

// Write one int. In my code, this is a byte
public void write(int b)
{
    // wrap the byte in an array so we get its 8 bits
    // (BitArray(int) would create an n-bit array instead)
    BitArray bA = new BitArray(new byte[] { (byte)b });
    writeBitArray(bA);
}

// Write one bit. In my code, this is a binary value, and the amount of times
public void write(int b, int len)
{
    BitArray bA = new BitArray(len);
    for (int i = 0; i < len; i++)
    {
        bA.Set(i, (b == 1));
    }
    writeBitArray(bA);
}

private void writeBitArray(BitArray bA)
{
    for (int i = 0; i < bA.Length; i++)
    {
        bitBuffer.Set(bitCount, bA[i]);
        bitCount++;
    }
    if (bitCount % 8 == 0)
    {
        // flush only the bytes accumulated so far, not the whole buffer
        byte[] res = new byte[bitCount / 8];
        for (int i = 0; i < bitCount; i++)
        {
            if (bitBuffer[i])
            {
                res[i / 8] |= (byte)(1 << (i % 8));
            }
        }
        bitCount = 0;
        base.BaseStream.Write(res, 0, res.Length);
    }
}
You will have to use bitshifts or binary arithmetic, as you can only write one byte at a time, not individual bits.