Best way to shorten UTF8 string based on byte length - c#

A recent project called for importing data into an Oracle database. The program that will do this is a C# .Net 3.5 app and I'm using the Oracle.DataAccess connection library to handle the actual inserting.
I ran into a problem where I'd receive this error message when inserting a particular field:
ORA-12899 Value too large for column X
I used Field.Substring(0, MaxLength); but still got the error (though not for every record).
Finally I saw what should have been obvious: my string was in ANSI and the field was UTF8, and its length is defined in bytes, not characters.
This gets me to my question. What is the best way to trim my string to fit within MaxLength?
My substring code works by character length. Is there a simple C# function that can trim a UTF8 string intelligently by byte length (i.e., not hack off half a character)?

I think we can do better than naively counting the total length of a string with each addition. LINQ is cool, but it can accidentally encourage inefficient code. What if I wanted the first 80,000 bytes of a giant UTF string? That's a lot of unnecessary counting. "I've got 1 byte. Now I've got 2. Now I've got 13... Now I have 52,384..."
That's silly. Most of the time, at least in l'anglais, we can cut exactly on that nth byte. Even in another language, we're less than 6 bytes away from a good cutting point.
So I'm going to start from #Oren's suggestion, which is to key off of the leading bit of a UTF8 char value. Let's start by cutting right at the n+1th byte, and use Oren's trick to figure out if we need to cut a few bytes earlier.
Three possibilities
If the first byte after the cut has a 0 in the leading bit, I know I'm cutting precisely before a single byte (conventional ASCII) character, and can cut cleanly.
If I have a 11 following the cut, the next byte after the cut is the start of a multi-byte character, so that's a good place to cut too!
If I have a 10, however, I know I'm in the middle of a multi-byte character, and need to go back to check to see where it really starts.
That is, though I want to cut the string after the nth byte, if that n+1th byte comes in the middle of a multi-byte character, cutting would create an invalid UTF8 value. I need to back up until I get to one that starts with 11 and cut just before it.
Code
Notes: I'm using stuff like Convert.ToByte("11000000", 2) so that it's easy to tell what bits I'm masking (a little more about bit masking here). In a nutshell, I'm &ing to return what's in the byte's first two bits and bringing back 0s for the rest. Then I check the XX from XX000000 to see if it's 10 or 11, where appropriate.
I found out today that C# 7.0 supports binary literals (the 0b... syntax), which is cool, but we'll keep using this kludge for now to illustrate what's going on.
The PadLeft is just because I'm overly OCD about output to the Console.
So here's a function that'll cut you down to a string that's n bytes long, or the greatest length less than n that ends with a "complete" UTF8 character.
public static string CutToUTF8Length(string str, int byteLength)
{
    byte[] byteArray = Encoding.UTF8.GetBytes(str);
    string returnValue = string.Empty;

    if (byteArray.Length > byteLength)
    {
        int bytePointer = byteLength;

        // Check high bit to see if we're [potentially] in the middle of a multi-byte char
        if (bytePointer >= 0
            && (byteArray[bytePointer] & Convert.ToByte("10000000", 2)) > 0)
        {
            // If so, keep walking back until we have a byte starting with `11`,
            // which means the first byte of a multi-byte UTF8 character.
            while (bytePointer >= 0
                && Convert.ToByte("11000000", 2) != (byteArray[bytePointer] & Convert.ToByte("11000000", 2)))
            {
                bytePointer--;
            }
        }

        // See if we had 1s in the high bit all the way back. If so, we're toast. Return empty string.
        if (0 != bytePointer)
        {
            returnValue = Encoding.UTF8.GetString(byteArray, 0, bytePointer); // hat tip to #NealEhardt! Well played. ;^)
        }
    }
    else
    {
        returnValue = str;
    }

    return returnValue;
}
I initially wrote this as a string extension. Just add the this back before string str to put it back into extension-method form, of course. I removed the this so that we could just slap the method into Program.cs in a simple console app to demonstrate.
Test and expected output
Here's a good test case, with the output it creates below; it's written to be the Main method in a simple console app's Program.cs.
static void Main(string[] args)
{
    string testValue = "12345“”67890”";

    for (int i = 0; i < 15; i++)
    {
        string cutValue = Program.CutToUTF8Length(testValue, i);
        Console.WriteLine(i.ToString().PadLeft(2) +
            ": " + Encoding.UTF8.GetByteCount(cutValue).ToString().PadLeft(2) +
            ":: " + cutValue);
    }

    Console.WriteLine();
    Console.WriteLine();

    foreach (byte b in Encoding.UTF8.GetBytes(testValue))
    {
        Console.WriteLine(b.ToString().PadLeft(3) + " " + (char)b);
    }

    Console.WriteLine("Return to end.");
    Console.ReadLine();
}
Output follows. Notice that the "smart quotes" in testValue are three bytes long in UTF8 (though when we write the chars to the console in ASCII, it outputs dumb quotes). Also note the ?s output for the second and third bytes of each smart quote in the output.
The first five characters of our testValue are single bytes in UTF8, so 0-5 byte values should be 0-5 characters. Then we have a three-byte smart quote, which can't be included in its entirety until 5 + 3 bytes. Sure enough, we see that pop out at the call for 8. Our next smart quote pops out at 8 + 3 = 11, and then we're back to single-byte characters through 14.
0: 0::
1: 1:: 1
2: 2:: 12
3: 3:: 123
4: 4:: 1234
5: 5:: 12345
6: 5:: 12345
7: 5:: 12345
8: 8:: 12345“
9: 8:: 12345“
10: 8:: 12345“
11: 11:: 12345“”
12: 12:: 12345“”6
13: 13:: 12345“”67
14: 14:: 12345“”678
49 1
50 2
51 3
52 4
53 5
226 â
128 ?
156 ?
226 â
128 ?
157 ?
54 6
55 7
56 8
57 9
48 0
226 â
128 ?
157 ?
Return to end.
So that's kind of fun, and I'm in just before the question's five year anniversary. Though Oren's description of the bits had a small error, that's exactly the trick you want to use. Thanks for the question; neat.

Here are two possible solutions - a LINQ one-liner processing the input left to right and a traditional for-loop processing the input from right to left. Which processing direction is faster depends on the string length, the allowed byte length, and the number and distribution of multibyte characters, so it's hard to give a general suggestion. The decision between LINQ and traditional code is probably a matter of taste (or maybe speed).
If speed matters, one could think about just accumulating the byte length of each character until reaching the maximum length instead of calculating the byte length of the whole string in each iteration (a sketch of that idea follows the two functions below). But I am not sure if this will work because I don't know UTF-8 encoding well enough. I could theoretically imagine that the byte length of a string does not equal the sum of the byte lengths of all characters.
public static String LimitByteLength(String input, Int32 maxLength)
{
    return new String(input
        .TakeWhile((c, i) =>
            Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        .ToArray());
}

public static String LimitByteLength2(String input, Int32 maxLength)
{
    for (Int32 i = input.Length - 1; i >= 0; i--)
    {
        if (Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        {
            return input.Substring(0, i + 1);
        }
    }

    return String.Empty;
}
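For what it's worth, here is a rough sketch (mine, not part of the original answer) of that accumulate-as-you-go idea, using a hypothetical LimitByteLengthAccumulating name. It walks the string once, adds each character's UTF-8 byte count, and measures surrogate pairs together so the two halves are never split or miscounted.
using System.Text;

// Sketch only: accumulate per-character byte counts instead of re-measuring the whole prefix each iteration.
public static string LimitByteLengthAccumulating(string input, int maxLength)
{
    int byteCount = 0;
    int charCount = 0;

    while (charCount < input.Length)
    {
        // Take two UTF-16 code units at once when we're at the start of a surrogate pair.
        int charsInElement = char.IsHighSurrogate(input[charCount])
            && charCount + 1 < input.Length
            && char.IsLowSurrogate(input[charCount + 1]) ? 2 : 1;
        int bytesInElement = Encoding.UTF8.GetByteCount(input.ToCharArray(charCount, charsInElement));

        if (byteCount + bytesInElement > maxLength)
        {
            break;
        }

        byteCount += bytesInElement;
        charCount += charsInElement;
    }

    return input.Substring(0, charCount);
}
For well-formed strings (no lone surrogates) these per-element counts do add up to the whole string's UTF-8 byte count, which addresses the concern above.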

Shorter version of ruffin's answer. Takes advantage of the design of UTF8:
public static string LimitUtf8ByteCount(this string s, int n)
{
    // quick test (we probably won't be trimming most of the time)
    if (Encoding.UTF8.GetByteCount(s) <= n)
        return s;

    // get the bytes
    var a = Encoding.UTF8.GetBytes(s);

    // if we are in the middle of a character (highest two bits are 10)
    if (n > 0 && (a[n] & 0xC0) == 0x80)
    {
        // remove all bytes whose two highest bits are 10
        // and one more (start of multi-byte sequence - highest bits should be 11)
        while (--n > 0 && (a[n] & 0xC0) == 0x80)
            ;
    }

    // convert back to string (with the limit adjusted)
    return Encoding.UTF8.GetString(a, 0, n);
}

All of the other answers appear to miss the fact that this functionality is already built into .NET, in the Encoder class. For bonus points, this approach will also work for other encodings.
public static string LimitByteLength(string message, int maxLength)
{
    if (string.IsNullOrEmpty(message) || Encoding.UTF8.GetByteCount(message) <= maxLength)
    {
        return message;
    }

    var encoder = Encoding.UTF8.GetEncoder();
    byte[] buffer = new byte[maxLength];
    char[] messageChars = message.ToCharArray();
    encoder.Convert(
        chars: messageChars,
        charIndex: 0,
        charCount: messageChars.Length,
        bytes: buffer,
        byteIndex: 0,
        byteCount: buffer.Length,
        flush: false,
        charsUsed: out int charsUsed,
        bytesUsed: out int bytesUsed,
        completed: out bool completed);

    // I don't think we can return message.Substring(0, charsUsed)
    // as that's the number of UTF-16 chars, not the number of codepoints
    // (think about surrogate pairs). Therefore I think we need to
    // actually convert bytes back into a new string
    return Encoding.UTF8.GetString(buffer, 0, bytesUsed);
}
If you're using .NET Standard 2.1+, you can simplify it a bit:
public static string LimitByteLength(string message, int maxLength)
{
    if (string.IsNullOrEmpty(message) || Encoding.UTF8.GetByteCount(message) <= maxLength)
    {
        return message;
    }

    var encoder = Encoding.UTF8.GetEncoder();
    byte[] buffer = new byte[maxLength];
    encoder.Convert(message.AsSpan(), buffer.AsSpan(), false, out _, out int bytesUsed, out _);
    return Encoding.UTF8.GetString(buffer, 0, bytesUsed);
}
None of the other answers account for extended grapheme clusters, such as 👩🏽‍🚒. This is composed of 4 Unicode scalars (👩, 🏽, a zero-width joiner, and 🚒), so you need knowledge of the Unicode standard to avoid splitting it in the middle and producing 👩 or 👩🏽.
In .NET 5 onwards, you can write this as:
public static string LimitByteLength(string message, int maxLength)
{
    if (string.IsNullOrEmpty(message) || Encoding.UTF8.GetByteCount(message) <= maxLength)
    {
        return message;
    }

    var enumerator = StringInfo.GetTextElementEnumerator(message);
    var result = new StringBuilder();
    int lengthBytes = 0;

    while (enumerator.MoveNext())
    {
        lengthBytes += Encoding.UTF8.GetByteCount(enumerator.GetTextElement());
        if (lengthBytes <= maxLength)
        {
            result.Append(enumerator.GetTextElement());
        }
    }

    return result.ToString();
}
(This same code runs on earlier versions of .NET, but due to a bug it won't produce the correct result before .NET 5).

If a UTF-8 byte has a zero-valued high order bit, it's a single-byte (ASCII) character. If its high order bit is 1, it's either the first byte of a multi-byte character (leading bits 11) or a continuation byte in the 'middle' of one (leading bits 10). The ability to detect the beginning of a character was an explicit design goal of UTF-8.
Check out the Description section of the wikipedia article for more detail.
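A small illustration of that property (my own snippet, not part of the answer): classify each byte of a UTF-8 encoded string by its leading bits.
using System;
using System.Text;

class Utf8LeadBits
{
    static void Main()
    {
        // "a€" encodes to 4 bytes: 0x61, then 0xE2 0x82 0xAC for the 3-byte euro sign.
        foreach (byte b in Encoding.UTF8.GetBytes("a€"))
        {
            string kind = (b & 0x80) == 0x00 ? "single-byte (ASCII) character"
                        : (b & 0xC0) == 0xC0 ? "first byte of a multi-byte character"
                        : "continuation byte";
            Console.WriteLine("0x" + b.ToString("X2") + ": " + kind);
        }
    }
}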

Is there a reason that you need the database column to be declared in terms of bytes? That's the default, but it's not a particularly useful default if the database character set is variable width. I'd strongly prefer declaring the column in terms of characters.
CREATE TABLE length_example (
col1 VARCHAR2( 10 BYTE ),
col2 VARCHAR2( 10 CHAR )
);
This will create a table where col1 will store 10 bytes of data and col2 will store 10 characters' worth of data. Character length semantics make far more sense in a UTF8 database.
Assuming you want all the tables you create to use character length semantics by default, you can set the initialization parameter NLS_LENGTH_SEMANTICS to CHAR. At that point, any tables you create will default to using character length semantics rather than byte length semantics if you don't specify CHAR or BYTE in the field length.

Following Oren Trutner's comment, here are two more solutions to the problem:
Here we count the number of bytes to remove from the end of the string, character by character from the end, so we don't re-evaluate the entire string in every iteration.
string str = "朣楢琴执执 瑩浻牡楧硰执执獧浻牡楧敬瑦 瀰 絸朣杢执獧扻捡杫潲湵 潣";
int maxBytesLength = 30;
var bytesArr = Encoding.UTF8.GetBytes(str);
int bytesToRemove = 0;
int lastIndexInString = str.Length - 1;

while (bytesArr.Length - bytesToRemove > maxBytesLength)
{
    bytesToRemove += Encoding.UTF8.GetByteCount(new char[] { str[lastIndexInString] });
    --lastIndexInString;
}

string trimmedString = Encoding.UTF8.GetString(bytesArr, 0, bytesArr.Length - bytesToRemove);
// Encoding.UTF8.GetByteCount(trimmedString); // the actual byte length, will be <= maxBytesLength
And an even more efficient (and maintainable) solution:
Get the string from the byte array according to the desired length and cut the last character, because it might be corrupted:
string str = "朣楢琴执执 瑩浻牡楧硰执执獧浻牡楧敬瑦 瀰 絸朣杢执獧扻捡杫潲湵 潣";
int maxBytesLength = 30;
string trimmedWithDirtyLastChar = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(str),0,maxBytesLength);
string trimmedString = trimmedWithDirtyLastChar.Substring(0,trimmedWithDirtyLastChar.Length - 1);
The only downside with the second solution is that we might cut a perfectly fine last character, but we are already cutting the string, so it might fit with the requirements.
Thanks to Shhade, who suggested the second solution.
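To see why the last character has to be discarded, here is a quick demo (mine, not from the answer): decoding a byte array that ends in the middle of a multi-byte character leaves a U+FFFD replacement character at the cut.
using System;
using System.Text;

class MidCharacterCutDemo
{
    static void Main()
    {
        byte[] euro = Encoding.UTF8.GetBytes("€");        // 3 bytes: E2 82 AC
        string cut = Encoding.UTF8.GetString(euro, 0, 2); // cut inside the character
        Console.WriteLine((int)cut[0]);                   // 65533, i.e. U+FFFD
    }
}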

This is another solution based on binary search:
public string LimitToUTF8ByteLength(string text, int size)
{
    if (size <= 0)
    {
        return string.Empty;
    }

    int maxLength = text.Length;
    int minLength = 0;
    int length = maxLength;

    while (maxLength >= minLength)
    {
        length = (maxLength + minLength) / 2;
        int byteLength = Encoding.UTF8.GetByteCount(text.Substring(0, length));

        if (byteLength > size)
        {
            maxLength = length - 1;
        }
        else if (byteLength < size)
        {
            minLength = length + 1;
        }
        else
        {
            return text.Substring(0, length);
        }
    }

    // Round down the result
    string result = text.Substring(0, length);
    if (size >= Encoding.UTF8.GetByteCount(result))
    {
        return result;
    }
    else
    {
        return text.Substring(0, length - 1);
    }
}

public static string LimitByteLength3(string input, Int32 maxLength)
{
    string result = input;
    int byteCount = Encoding.UTF8.GetByteCount(input);

    if (byteCount > maxLength)
    {
        var byteArray = Encoding.UTF8.GetBytes(input);
        result = Encoding.UTF8.GetString(byteArray, 0, maxLength);
    }

    return result;
}

Related

Convert string of integers to ASCII chars

What is the best way to convert a string of digits into their equivalent ASCII characters?
I think that I am over-complicating this.
Console.WriteLine($"Enter the word to decrypt: ");
//store the values to convert into a string
string vWord = Console.ReadLine();
string finalValue = "";

for (int i = 0; i < vWord.Length; i++)
{
    int convertedIndex = vWord[i];
    char character = (char)convertedIndex;
    finalValue += character.ToString();
    Console.WriteLine($"Input: {vWord[i]} Index: {convertedIndex} Char {character}");
}
If the expected input values are something like this: 65 66 67 97 98 99, you could just split the input and cast the converted int values to char:
string vWord = "65 66 67 97 98 99";
string result = string.Join("", vWord.Split().Select(n => (char)(int.Parse(n))));
Console.WriteLine($"Result string: {result}");
This method, however, doesn't perform any error checking on the input string. When dealing with user input, this is not a great idea. We'd better use int.TryParse() to validate the input parts:
var result = new StringBuilder();
var ASCIIValues = vWord.Split();
foreach (string CharValue in ASCIIValues) {
    if (int.TryParse(CharValue, out int n) && n < 127) {
        result.Append((char)n);
    }
    else {
        Console.WriteLine($"{CharValue} is not a valid input");
        break;
    }
}
Console.WriteLine($"Result string: {result.ToString()}");
You could also use the Encoding.ASCII.GetString method to convert the byte array generated by byte.Parse to a string. For example, using LINQ's Select:
string vWord = "65 66 67 97 98 267";
try
{
    var CharArray = vWord.Split().Select(n => byte.Parse(n)).ToArray();
    string result = Encoding.ASCII.GetString(CharArray);
    Console.WriteLine($"String result: {result}");
}
catch (Exception)
{
    Console.WriteLine("Not a valid input");
}
This will print "Not a valid input", because one of the values is > 255.
Should you decide to allow an input string composed of contiguous values:
651016667979899112101 => "AeBCabcpe"
You could adopt this variation:
string vWord2 = "11065666797989911210110177";
int step = 2;
var result2 = new StringBuilder();
for (int i = 0; i < vWord2.Length; i += step)
{
    if (int.TryParse(vWord2.Substring(i, step), out int n) && n < 127)
    {
        if (n <= 12 && i == 0) {
            i = -3; step = 3;
        }
        else if (n <= 12 && i >= 2) {
            step = 3; i -= step;
        }
        else {
            result2.Append((char)n);
            if (step == 3) ++i;
            step = 2;
        }
    }
    else {
        Console.WriteLine($"{vWord2.Substring(i, step)} is not a valid input");
        break;
    }
}
Console.WriteLine($"Result string: {result2.ToString()}");
Result string: nABCabcpeeM
As Tom Blodget requested, a note about the automatic conversion
between the ASCII character set and Unicode code points.
This code produces some ASCII characters using an integer value corresponding to the character in the ASCII table, casting the value to a char type and converting the result to a Windows standard Unicode (UTF-16LE) string.
Why is there no need to explicitly convert the ASCII chars to their Unicode representation?
Because, for historical reasons, the lower Unicode code points directly map to the standard ASCII table (the US-ASCII table).
Hence, no conversion is required, or it can be considered implicit.
But, since the .Net string type uses UTF-16LE Unicode internally (which uses one 16-bit code unit for each character in the lower plane and two 16-bit code units for code points greater than or equal to 2^16), the memory allocation in bytes for the string is double the number of characters.
In the .Net Reference Source, StringBuilder.ToString() will call the internal wstrcpy method:
wstrcpy(char *dmem, char *smem, int charCount)
which will then call Buffer.Memcpy:
Buffer.Memcpy((byte*)dmem, (byte*)smem, charCount * 2);
where the size in bytes is set to charCount * 2.
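A quick check of both points (my own snippet, not part of the answer): the cast needs no lookup table because the ASCII codes are also the lowest Unicode code points, and each BMP character takes two bytes in the UTF-16 encoding that .NET strings use internally.
using System;
using System.Text;

class AsciiUnicodeDemo
{
    static void Main()
    {
        Console.WriteLine((char)65);                             // A
        Console.WriteLine((int)'A');                             // 65
        Console.WriteLine(Encoding.Unicode.GetByteCount("ABC")); // 6: two bytes per BMP character
    }
}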
Since the first draft, in the '80s (when the first Universal Character Set (UCS) was developed), one of the primary objectives of ISO/IEC and the Unicode Consortium (the two main entities that were developing the standard) was to preserve compatibility with the pre-existing 256-character sets widely used at the time.
Preserving the code point definitions, and thus preserving compatibility over time, is a strict rule in the Unicode world. This concept and these rules apply to all modern variable-length Unicode encodings (UTF-8, UTF-16, UTF-16LE, UTF-32, etc.) and to all code points in the Basic Multilingual Plane (code points in the ranges U+0000 to U+D7FF and U+E000 to U+FFFF).
On the other hand, there's no explicit guarantee that the same local code page encoding (often referred to as ANSI encoding) will produce the same result on two machines, even when the same system (and system version) is in use.
Some other notes about Localization and the Unicode Common Locale Data Repository (CLDR)
You can break the problem down into two parts:
P1. You want to take a string input of space-separated numbers, and convert them to int values:
private static int[] NumbersFromString(string input)
{
    var parts = input.Split(new string[] { " " }, StringSplitOptions.RemoveEmptyEntries);
    var values = new List<int>(parts.Length);

    foreach (var part in parts)
    {
        int value;
        if (!int.TryParse(part, out value))
        {
            throw new ArgumentException("One or more values in the input string are invalid.", "input");
        }
        values.Add(value);
    }

    return values.ToArray();
}
P2. You want to convert those numbers into character representations:
private static string AsciiCodesToString(int[] inputValues)
{
    var builder = new StringBuilder();

    foreach (var value in inputValues)
    {
        builder.Append((char)value);
    }

    return builder.ToString();
}
You can then call it something like this:
Console.WriteLine(AsciiCodesToString(NumbersFromString(input)));
Try it online

Conversion from Base 64 error

I'm trying to convert from a Base64 string. First I tried this:
string a = "BTQmJiI6JzFkZ2ZhY";
byte[] b = Convert.FromBase64String(a);
string c = System.Text.Encoding.ASCII.GetString(b);
Then got the exception - System.FormatException was caught Message=Invalid length for a Base-64 char array.
So after googling, I tried this:
string a1 = "BTQmJiI6JzFkZ2ZhY";
int mod4 = a1.Length % 4;
if (mod4 > 0)
{
    a1 += new string('=', 4 - mod4);
}
byte[] b1 = Convert.FromBase64String(a1);
string c1 = System.Text.Encoding.ASCII.GetString(b1);
Here I got the exception - System.FormatException was caught Message=Invalid character in a Base-64 string.
Is there any invalid character in "BTQmJiI6JzFkZ2ZhY"? Or is it the length issue?
EDIT: I first decrypt the input string using the below code:
string sourstr, deststr, strchar;
int strlen;
decimal ascvalue, ConvValue;

deststr = "";
sourstr = "InputString";
strlen = sourstr.Length;

for (int intI = 0; intI <= strlen - 1; intI++)
{
    strchar = sourstr.Substring(intI, 1);
    ascvalue = (decimal)strchar[0];
    ConvValue = (decimal)((int)ascvalue ^ 85);

    if ((char)ConvValue.ToString().Length == 0)
    {
        deststr = deststr + strchar;
    }
    else
    {
        deststr = deststr + (char)ConvValue;
    }
}
This output deststr is passed to the code below:
Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(deststr));
This is where I got "BTQmJiI6JzFkZ2ZhY"
You cannot get such a base64 string by encoding a whole number of bytes. While encoding, every 3 bytes are represented as 4 characters, because 3 bytes is 24 bits, and each base64 character is 6 bits (2^6 = 64), so 4 of them is also 24 bits. If the number of bytes to encode is not divisible by 3 - you have some bytes left. You can have 2 or 1 bytes left.
If you have 2 bytes left - that's 16 bits and you need at least 3 characters to encode that (2 characters is just 12 bits - not enough). So in case you have 2 bytes left - you encode them with 3 characters and apply "=" padding.
If you have 1 byte left - that's 8 bits. You need at least 2 characters for that. You encode to 2 characters and apply "==" padding.
Note that there is no way to encode something to just one character (and for that reason - there is no "===" padding).
Your string can be divided into 4-character blocks: "BTQm", "JiI6", "JzFk", "Z2Zh", "Y". The first 4 blocks each represent 3 bytes, but what does "Y" represent? Who knows. You could say that it represents 1 byte in the range 0-63, but from the above you can see that's not how it works, so to interpret it like that you have to do it yourself.
From the above you can see that you cannot get a base64 string with length 17 (without padding). You can get 16, 18, 19, 20, but never 17.
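A small demonstration of that length rule (my own snippet, not from the answer): the unpadded length of a Base64 string is always 0, 2, or 3 modulo 4, never 1, so 17 characters can't come from encoding a whole number of bytes.
using System;

class Base64LengthDemo
{
    static void Main()
    {
        for (int byteCount = 10; byteCount <= 15; byteCount++)
        {
            string encoded = Convert.ToBase64String(new byte[byteCount]);
            Console.WriteLine("{0} bytes -> {1} chars padded, {2} unpadded",
                byteCount, encoded.Length, encoded.TrimEnd('=').Length);
        }
    }
}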
Are you sure you took all the chars from the base64 output?
Appending "==" at the end of the string will make your first approach work without any problems, although there is a strange character at the beginning of the output. So the next question is: are you sure it is ASCII encoding?

Convert int32 to string in base 16

I'm currently trying to convert a .NET JSON Encoder to NETMF but have hit a problem with Convert.ToString(), as there's no such thing in NETMF.
The original line of the encoder looks like this:
Convert.ToString(codepoint, 16);
And after looking at the documentation for Convert.ToString(Int32, Int32), it says it's for converting an int32 into its string representation in base 2, 8, 10 or 16, by providing the int as the first parameter and the base as the second.
What's some low-level code to do this, or how would I go about doing it?
As you can see from the code, I only need conversion from an Int32 to base 16.
EDIT
Ah, the encoder also then wants to do:
PadLeft(4, '0');
on the string. Is this just adding four '0' characters to the start of the string?
If you mean you want to change a 32-bit integer value into a string which shows the value in hexadecimal:
string hex = intValue.ToString("x");
For variations, please see Stack Overflow question Convert a number into the hex value in .NET.
Disclaimer: I'm not sure if this function exists in NETMF, but it is so fundamental that I think it should.
Here’s some sample code for converting an integer to hexadecimal (base 16):
int num = 48764; // assign your number

// Generate hexadecimal number in reverse.
var sb = new StringBuilder();
do
{
    sb.Append(hexChars[num & 15]);
    num >>= 4;
}
while (num > 0);

// Pad with leading 0s for a minimum length of 4 characters.
while (sb.Length < 4)
    sb.Append('0');

// Reverse string and get result.
char[] chars = new char[sb.Length];
sb.CopyTo(0, chars, 0, sb.Length);
Array.Reverse(chars);
string result = new string(chars);
PadLeft(4, '0') prepends leading 0s to the string to ensure a minimum length of 4 characters.
The hexChars value lookup may be trivially defined as a string:
internal static readonly string hexChars = "0123456789ABCDEF";
Edit: Replacing StringBuilder with List<char>:
// Generate hexadecimal number in reverse.
List<char> builder = new List<char>();
do
{
    builder.Add(hexChars[num & 15]);
    num >>= 4;
}
while (num > 0);

// Pad with leading 0s for a minimum length of 4 characters.
while (builder.Count < 4)
    builder.Add('0');

// Reverse string and get result.
char[] chars = new char[builder.Count];
for (int i = 0; i < builder.Count; ++i)
    chars[i] = builder[builder.Count - i - 1];
string result = new string(chars);
Note: Refer to the “Hexadecimal Number Output” section of Expert .NET Micro Framework for a discussion of this conversion.

Error "Hex string have a odd number of digits" while converting int->hex->binary in C#

Aim:
To convert an integer value first to a hex string and then to a byte[].
Example:
Need to convert int: 1024 to hex string: "400" to byte[]: 00000100 00000000
Method:
For converting from integer to hex string, I tried the code below:
int i = 1024;
string hexString = i.ToString("X");
I got the hex string value "400". Then I tried converting the hex string to byte[] using the code below:
byte[] value = HexStringToByteArray(hexString);
/* function for converting hexstring to byte array */
public byte[] HexStringToByteArray(string hex)
{
    int NumberChars = hex.Length;
    if (NumberChars % 2 == 1)
        throw new Exception("Hex string cannot have an odd number of digits.");

    byte[] bytes = new byte[NumberChars / 2];
    for (int i = 0; i < NumberChars; i += 2)
        bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
    return bytes;
}
Error:
Here I got the exception "Hex string cannot have an odd number of digits".
Solution: ??
You can force the ToString to return a specific number of digits:
string hexString = i.ToString("X08");
The exception is thrown by your own code. You can make your code more flexible to accept hex strings that have an odd number of digits:
if (hex.Length % 2 == 1) hex = "0"+hex;
Now you can remove the odd/even check, and your code will be alright.
Your code throws the exception you're seeing:
throw new Exception("Hex string cannot have an odd number of digits.");
You can improve the conversion method to also accept odd hex string lengths like this:
using System.Collections.Generic;
using System.Linq;
// ...
public byte[] HexStringToByteArray(string hex)
{
    var result = new List<byte>();

    for (int i = hex.Length - 1; i >= 0; i -= 2)
    {
        if (i > 0)
        {
            result.Insert(0, Convert.ToByte(hex.Substring(i - 1, 2), 16));
        }
        else
        {
            result.Insert(0, Convert.ToByte(hex.Substring(i, 1), 16));
        }
    }

    return result.ToArray();
}
This code should iterate through the hex string from its end, adding new bytes to the beginning of the resulting list (that will be transformed into an array before returning the value). If a single digit remains, it will be treated separately.
Your hex string has an odd number of digits and you are explicitly checking for that and throwing the exception. You need to decide why you put this line of code in there and whether you need to remove that in favour of other logic.
Other options are:
add a "0" to the beginning of the string to make it even length
force whoever is calling that code to always provide an even length string
change the later code to deal with odd numbers of characters properly...
In comments you have suggested that the first is what you need, in which case:
if(hex.Length%2==1)
hex = "0"+hex;
Put this at the beginning of your method and if you get an odd number in then you will add the zero to it automatically. You can of course then take out your later check and exception throw.
Of note is that you may want to validate the input string as hex, or possibly just put a try/catch around the conversion to make sure that it is a valid hex string.
Also since it isn't clear whether the string is a necessary intermediate step or just one that you think is necessary, you might be interested in C# int to byte[] which deals with converting to bytes without the intermediate string.
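If the hex string really is just an intermediate step, a possible shortcut (a sketch of mine, not from the answer, assuming big-endian output is what's wanted) is to go straight from the integer to bytes:
using System;

class IntToBytesDemo
{
    static void Main()
    {
        int value = 1024;
        byte[] bytes = BitConverter.GetBytes((ushort)value); // two bytes are enough for 1024
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes); // BitConverter uses the machine's byte order; flip for big-endian
        Console.WriteLine(BitConverter.ToString(bytes));      // 04-00, i.e. 00000100 00000000
    }
}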

Convert List<boolean> to String

I got a boolean list with 92 booleans, and I want the list to be converted to a string. I thought I'd take 8 booleans (bits) and put them in a byte (8 bits), then use ASCII to convert the byte value to a char, then add the chars to a string. However, after googling for more than 2 hours, no luck so far. I tried converting the List to a Byte list but it didn't work either ^^.
String strbyte = null;
for (int x = 0; x != tmpboolist.Count; x++) //tmpboolist is the 90+- boolean list
{
    //this loop checks for true then puts a 1 or a 0 in the string(strbyte)
    if (tmpboolist[x])
    {
        strbyte = strbyte + '1';
    }
    else
    {
        strbyte = strbyte + '0';
    }
}
//here I try to convert the string to a byte list but no success
//no success because the testbytearray has the SAME size as the
//tmpboolist(but it should have less since 8 booleans should be 1 Byte)
//however all the 'Bytes' are 48 & 49 (which is 1 and 0 according to
//http://www.asciitable.com/)
Byte[] testbytearray = Encoding.Default.GetBytes(strbyte);
PS: Does anyone have a better suggestion on how to encode & decode a Boolean list to a String?
(Because I want people to share their boolean list as a string rather than a list of 90-odd 1s and 0s.)
EDIT: got it working now! ty all for helping
string text = new string(tmpboolist.Select(x => x ? '1' : '0').ToArray());
byte[] bytes = getBitwiseByteArray(text); //http://stackoverflow.com/a/6756231/1184013
String Arraycode = Convert.ToBase64String(bytes);
System.Windows.MessageBox.Show(Arraycode);
//First it makes a string out of the boolean list, then it uses the converter to make it a byte[] (array), then we use Base64 encoding to make the byte[] a string (that can be decoded later).
I'll look into the base-32 encoding later, ty for all the help again :)
You should store your boolean values in a BitArray.
var values = new BitArray(92);
values[0] = false;
values[1] = true;
values[2] = true;
...
Then you can convert the BitArray to a byte array
var bytes = new byte[(values.Length + 7) / 8];
values.CopyTo(bytes, 0);
and the byte array to a Base64 string
var result = Convert.ToBase64String(bytes);
Reversely, you can convert a Base64 string to a byte array
var bytes2 = Convert.FromBase64String(result);
and the byte array to a BitArray
var values2 = new BitArray(bytes2);
The Base64 string looks like this: "Liwd7bRv6TMY2cNE". This is probably a bit unhandy for sharing between people; have a look at human-oriented base-32 encoding:
Anticipated uses of these [base-32 strings] include cut-and-paste, text editing (e.g. in HTML files), manual transcription via a keyboard, manual transcription via pen-and-paper, vocal transcription over phone or radio, etc.
The desiderata for such an encoding are:
minimizing transcription errors -- e.g. the well-known problem of confusing '0' with 'O'
embedding into other structures -- e.g. search engines, structured or marked-up text, file systems, command shells
brevity -- Shorter [strings] are better than longer ones.
ergonomics -- Human users (especially non-technical ones) should find the [strings] as easy and pleasant as possible. The uglier the [strings] looks, the worse.
To start with, it's a bad idea to concatenate strings in a loop like that - at least use StringBuilder, or use something like this with LINQ:
string text = new string(tmpboolist.Select(x => x ? '1' : '0').ToArray());
But converting your string to a List<bool> is easy with LINQ, using the fact that string implements IEnumerable<char>:
List<bool> values = text.Select(c => c == '1').ToList();
It's not clear where the byte array comes in... but you should not try to represent arbitrary binary data in a string just using Encoding.GetString. That's not what it's for.
If you don't care what format your string uses, then using Base64 will work well - but be aware that if you're grouping your Boolean values into bytes, you'll need extra information if you need to distinguish between "7 values" and "8 values, the first of which is False" for example.
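One way to keep that extra information (a sketch of mine, not from the answer) is to prefix the shared string with the original bit count, so trailing padding bits aren't mistaken for values when decoding:
using System;
using System.Collections;

class BoolListRoundTrip
{
    static void Main()
    {
        var values = new BitArray(new[] { true, false, true, true, false, true, false });

        // Encode: pack the bits into bytes and prepend the count.
        var bytes = new byte[(values.Length + 7) / 8];
        values.CopyTo(bytes, 0);
        string shared = values.Length + ":" + Convert.ToBase64String(bytes);

        // Decode: read the count back and truncate the BitArray to it.
        string[] parts = shared.Split(':');
        var decoded = new BitArray(Convert.FromBase64String(parts[1]));
        decoded.Length = int.Parse(parts[0]);

        Console.WriteLine(shared);
        foreach (bool v in decoded) Console.Write(v ? '1' : '0');
        Console.WriteLine();
    }
}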
Since I am inferring from your code that you want a string with n digits of either 1 or 0, depending on the internal list's bool values, then how about...
public override string ToString()
{
    StringBuilder output = new StringBuilder(91);

    foreach (bool item in this.tempboolist)
    {
        output.Append(item ? "1" : "0");
    }

    return output.ToString();
}
Warning: this was off-the-cuff typing; I have not validated this with a compiler yet!
This function does what you want:
public String convertBArrayToStr(bool[] input)
{
if (input == null)
return "";
int length = input.Count();
int byteArrayCount = (input.Count() - 1) / 8 + 1;
var bytes = new char[byteArrayCount];
for (int i = 0; i < length; i++ )
{
var mappedIndex = (i - 1) / 8;
bytes[mappedIndex] = (char)(2 * bytes[mappedIndex] +(input[i] == true ? 1 : 0));
}
return new string(bytes);
}
