Is there something similar to sprintf() in C#?
I would for instance like to convert an integer to a 2-byte byte-array.
Something like:
int number = 17;
byte[] s = sprintf("%2c", number);
string s = string.Format("{0:00}", number)
The first 0 means "the first argument" (i.e. number); the 00 after the colon is the format specifier (2 numeric digits).
However, note that .NET strings are UTF-16, so a 2-character string is 4 bytes, not 2. (Edit: the question changed from string to byte[].)
To get the bytes, use Encoding:
byte[] raw = Encoding.UTF8.GetBytes(s);
(obviously different encodings may give different results; UTF8 will give 2 bytes for this data)
Actually, a shorter version of the first bit is:
string s = number.ToString("00");
But the string.Format version is more flexible.
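For example, a minimal sketch tying these pieces together (runnable as a top-level program on modern .NET):
using System;
using System.Text;

int number = 17;

// Both produce the 2-character string "17"
string viaFormat = string.Format("{0:00}", number);
string viaToString = number.ToString("00");

// UTF-8 encodes the two ASCII digits as exactly 2 bytes
byte[] raw = Encoding.UTF8.GetBytes(viaFormat);

Console.WriteLine(viaFormat);   // 17
Console.WriteLine(viaToString); // 17
Console.WriteLine(raw.Length);  // 2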
EDIT: I'm assuming that you want to convert the value of the integer to a byte array directly, not convert it to a string first and then to a byte array (check Marc's answer for the latter).
To convert an int to a byte array you can use:
byte[] array = BitConverter.GetBytes(17);
but that will give you an array of 4 bytes and not 2 (since an int is 32 bits.)
To get an array of 2 bytes you should use:
byte[] array = BitConverter.GetBytes((short)17);
If you just want to convert the value 17 to two characters then use:
string result = string.Format("{0:00}", 17);
But as Marc pointed out, that result will consume 4 bytes, since each character in a .NET string is 2 bytes (UTF-16), and the string object itself adds further overhead on top of the character data.
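A quick sketch contrasting the two calls (the array lengths are the point here):
using System;

byte[] fromInt = BitConverter.GetBytes(17);          // int -> 4 bytes
byte[] fromShort = BitConverter.GetBytes((short)17); // short -> 2 bytes

Console.WriteLine(fromInt.Length);   // 4
Console.WriteLine(fromShort.Length); // 2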
It turned out, that what I really wanted was this:
short number = 17;
System.IO.BinaryWriter writer = new System.IO.BinaryWriter(stream);
writer.Write(number);
writer.Flush();
The key here is the Write method of the BinaryWriter class. It has 18 overloads, each converting a different type to bytes that it writes to the stream. In my case I have to make sure the number I want to write is stored in a short; that makes Write emit exactly 2 bytes.
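For reference, a self-contained sketch of the same idea using a MemoryStream as the target (the original snippet assumes an existing stream):
using System;
using System.IO;

short number = 17;

using (var stream = new MemoryStream())
using (var writer = new BinaryWriter(stream))
{
    writer.Write(number); // short overload -> exactly 2 bytes
    writer.Flush();

    byte[] written = stream.ToArray();
    Console.WriteLine(written.Length);             // 2
    Console.WriteLine(string.Join(", ", written)); // 17, 0 (BinaryWriter always writes little-endian)
}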
Related
I am working on an embedded system and I have only 2 bytes of storage. I need to store a JSON response in those 2 bytes. The JSON response is a string containing 2 digits. How can I convert the string to an unsigned integer, split it, and save it into those 2 bytes? I am using C#:
var results = "16";
I need to convert this and store it into 2 bytes.
As your value is only 2 digits long, you need just 1 byte to store it.
You can simply call Byte.Parse("16") and you will get 16 as a byte.
You can then store that byte wherever you want.
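A minimal sketch, assuming the response really is one or two decimal digits:
using System;

var results = "16";
byte value = Byte.Parse(results); // FormatException if not numeric, OverflowException if > 255
Console.WriteLine(value);         // 16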
What #TheBlueOne said: a two-digit number, even in hexadecimal, requires just 1 byte. But for larger numbers you can use BitConverter.GetBytes:
var s2 = "FF01";
var n = Convert.ToUInt16(s2, 16);
var bytes = BitConverter.GetBytes(n);
//bytes[0] = 1
//bytes[1] = 255
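Note that the byte order in those comments assumes a little-endian machine; BitConverter follows the CPU's endianness. A small sketch that makes the assumption explicit:
using System;

var n = Convert.ToUInt16("FF01", 16);
var bytes = BitConverter.GetBytes(n);

// On little-endian hardware (typical x86/x64): bytes[0] == 0x01, bytes[1] == 0xFF
Console.WriteLine(BitConverter.IsLittleEndian);
Console.WriteLine(bytes[0]); // 1
Console.WriteLine(bytes[1]); // 255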
I have a simple bit of code that converts a C# string to a byte array by encoding it as UTF-8. But I am wondering how I can encode to UTF-8 into a byte array I have already made, starting at a given index?
So this is how I am currently encoding and getting the resulting byte array:
byte[] result = Encoding.UTF8.GetBytes(myString);
But I have a pre-made byte array that I would prefer to write to at a specific index, if that makes sense. Is there any built-in method to do this, and if not, how would I go about it?
GetBytes has another overload that writes to an existing array:
byte[] bytes = new byte[1000]; // sample, make sure it has enough space
var specificIndex = 0;
var actualByteCount = Encoding.UTF8.GetBytes(
myString, 0, myString.Length, bytes, specificIndex);
Don't forget to use the return value (actualByteCount) to know how many bytes in the array actually represent the string.
Note that you may need to use GetByteCount to determine the correct array size, or adjust the number of characters to convert so that they fit into your buffer.
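Putting those together, a sketch that sizes the buffer with GetByteCount before writing (names are illustrative):
using System;
using System.Text;

string myString = "héllo"; // the accented char needs 2 bytes in UTF-8
var specificIndex = 4;

// Make sure there is enough room from the write position onward
int needed = Encoding.UTF8.GetByteCount(myString);
byte[] bytes = new byte[specificIndex + needed];

int actualByteCount = Encoding.UTF8.GetBytes(
    myString, 0, myString.Length, bytes, specificIndex);

Console.WriteLine(needed);          // 6
Console.WriteLine(actualByteCount); // 6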
First you will need to convert your bytes into a Base64 string, then convert that into bytes. Like this:
byte[] random = new byte[] { 0xC9, 0xC9, 0xC9 };
byte[] encodedBytes = Encoding.UTF8.GetBytes(Convert.ToBase64String(random));
Is there any way to convert a Byte[] into an int8? I have been given a binary file that contains a list of input parameters for a test. The parameters vary in size from uint32 down to uint8. I am having no problem reading in the file; what is tricky is getting the values to display in the GUI.
Here's is the basis of what I'm doing:
private Byte[] blockSize;
private Byte[] binSize;
FileStream filen = File.OpenRead(file);
BinaryReader br = new BinaryReader(filen);
blockSize = br.ReadBytes(4);
binSize = br.ReadBytes(1);
No problems with that, considering that the first 32 bits (4 bytes) of the parameter file are the blockSize and the next 8 bits (1 byte) are the value for my binSize variable. The problem comes in displaying it.
textBox1.Text = BitConverter.ToInt32(blockSize, 0).ToString();
textBox2.Text = BitConverter.ToString(binSize, 0).ToString();
Let's say that my binary input file contains the following 5 bytes of data: "0A 00 00 00 0A". My first textbox displays '10', my second textbox displays '0A'. I want the hex value converted into the more human-understandable decimal value. It works fine as long as the parameter in the input file is larger than 1 byte, since I can convert it using ToInt16 or ToInt32, but I have nothing for the 8-bit variety.
Your problem is that br.ReadBytes(1) returns a byte[], and you then call BitConverter.ToString(byteArray) with that byte array as a parameter, which does the following:
Converts the numeric value of each element of a specified array of bytes to its equivalent hexadecimal string representation.
BinaryReader has the methods
int ReadInt32()
byte ReadByte()
Change the types of blockSize to int and binSize to byte and use those methods:
int blockSize = br.ReadInt32();
byte binSize = br.ReadByte();
textBox1.Text = blockSize.ToString();
textBox2.Text = binSize.ToString();
From MSDN:
ReadInt32()
Reads a 4-byte signed integer from the current stream and advances the
current position of the stream by four bytes.
ReadByte()
Reads the next byte from the current stream and advances the current
position of the stream by one byte.
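A complete sketch of the fixed code, run against the question's 5-byte sample "0A 00 00 00 0A":
using System;
using System.IO;

byte[] data = { 0x0A, 0x00, 0x00, 0x00, 0x0A }; // the question's sample input

using (var stream = new MemoryStream(data))
using (var br = new BinaryReader(stream))
{
    int blockSize = br.ReadInt32(); // first 4 bytes, little-endian
    byte binSize = br.ReadByte();   // next single byte

    Console.WriteLine(blockSize); // 10
    Console.WriteLine(binSize);   // 10
}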
I have a requirement to create a byte[] with length 16 (a byte array of 128 bits, to be used as a key in AES encryption).
The following is a valid string:
"AAECAwQFBgcICQoLDA0ODw=="
What is the algorithm that determines whether a string will decode to 128 bits? Or is trial and error the only way to create such 128-bit strings?
CODE
static void Main(string[] args)
{
    string firstString = "AAECAwQFBgcICQoLDA0ODw=="; // string length = 24
    string secondString = "ABCDEFGHIJKLMNOPQRSTUVWX"; // string length = 24
    int test = secondString.Length;

    byte[] firstByteArray = Convert.FromBase64String(firstString);
    byte[] secondByteArray = Convert.FromBase64String(secondString);

    int firstLength = firstByteArray.Length;
    int secondLength = secondByteArray.Length;

    Console.WriteLine("First Length: " + firstLength.ToString());
    Console.WriteLine("Second Length: " + secondLength.ToString());
    Console.ReadLine();
}
Findings:
For 256 bits, we need 256/6 = 42.66 chars, rounded up to 43 chars. [To make the length divisible by 4, add =]
For 512 bits, we need 512/6 = 85.33 chars, rounded up to 86 chars. [To make the length divisible by 4, add ==]
For 128 bits, we need 128/6 = 21.33 chars, rounded up to 22 chars. [To make the length divisible by 4, add ==]
A Base64 string for 16 bytes will always be 24 characters and have == at the end, as padding.
(At least when it's decodable using the .NET method. The padding is not always included in all uses of Base64 strings, but the .NET implementation requires it.)
In Base64 encoding, '=' is a special symbol added to the end of the Base64 string to indicate that there is no data for those characters in the original value.
Each character encodes 6 bits of the original data, so to represent whole 8-bit bytes the string length has to be divisible by 4 without remainder (4 chars * 6 bits = 3 bytes * 8 bits = 24 bits). When the resulting Base64 string would otherwise be shorter than a multiple of 4, '=' characters are added at the end to make it valid.
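The padded length can also be computed directly: Base64 emits 4 characters per 3-byte group, rounding up. A sketch of that arithmetic:
using System;

// Padded Base64 length for n input bytes: 4 * ceil(n / 3)
foreach (int n in new[] { 16, 32, 64 })
{
    int chars = 4 * ((n + 2) / 3);
    Console.WriteLine(n + " bytes -> " + chars + " Base64 chars"); // 24, 44, 88
}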
Update
The last char before '==' encodes only 2 bits of information, so replacing it with every possible Base64 char gives you only 4 different keys out of 64 combinations. In other words, by generating strings in the format "bbbbbbbbbbbbbbbbbbbbbb==" (where 'b' is any valid Base64 character) you'll get 15 duplicate keys for each unique key.
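To see this in practice, the two strings below differ only in the character before '==' ('w' vs '3', which share the same top 2 bits) and should decode to identical bytes. This sketch assumes .NET's Convert.FromBase64String tolerates the unused trailing bits, which the duplicate-key observation above relies on:
using System;
using System.Linq;

byte[] a = Convert.FromBase64String("AAECAwQFBgcICQoLDA0ODw==");
byte[] b = Convert.FromBase64String("AAECAwQFBgcICQoLDA0OD3==");

// 'w' (110000) and '3' (110111) agree in their top 2 bits,
// and only those 2 bits land in the decoded output
Console.WriteLine(a.SequenceEqual(b)); // True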
You can use PadRight() to pad the end of the string with a character that you will later remove once decrypted.
I am not familiar with hashing algorithms and the risks associated with using them, and therefore have a question about the answer below, which I received on a previous question...
Based on the comment that the hash value must, when encoded to ASCII, fit within 16 ASCII characters, the solution is first, to choose some cryptographic hash function (the SHA-2 family includes SHA-256, SHA-384, and SHA-512)
then, to truncate the output of the chosen hash function to 96 bits (12 bytes) - that is, keep the first 12 bytes of the hash function output and discard the remaining bytes
then, to base-64-encode the truncated output to 16 ASCII characters (128 bits)
yielding effectively a 96-bit-strong cryptographic hash.
If I substring the base-64-encoded string to 16 characters, is that fundamentally different than keeping the first 12 bytes of the hash function output and then base-64-encoding them? If so, could someone please explain (and provide example code) how to truncate the byte array?
I tested the substring of the full hash value against 36,000+ distinct values and had no collisions. The code below is my current implementation.
Thanks for any help (and clarity) you can provide.
public static byte[] CreateSha256Hash(string data)
{
    byte[] dataToHash = (new UnicodeEncoding()).GetBytes(data);
    SHA256 shaM = new SHA256Managed();
    byte[] hashedData = shaM.ComputeHash(dataToHash);
    return hashedData;
}

public override void InputBuffer_ProcessInputRow(InputBufferBuffer Row)
{
    byte[] hashedData = CreateSha256Hash(Row.HashString);
    string s = Convert.ToBase64String(hashedData, Base64FormattingOptions.None);
    Row.HashValue = s.Substring(0, 16);
}
[Original post](http://stackoverflow.com/questions/4340471/is-there-a-hash-algorithm-that-produces-a-hash-size-of-64-bits-in-c)
No, there is no difference. However, it's easier to just get the base64 string of the first 12 bytes of the array, instead of truncating the array:
public override void InputBuffer_ProcessInputRow(InputBufferBuffer Row)
{
    byte[] hashedData = CreateSha256Hash(Row.HashString);
    Row.HashValue = Convert.ToBase64String(hashedData, 0, 12);
}
The Base64 encoding simply puts 6 bits in each character, so 3 bytes (24 bits) go into 4 characters. As long as you split the data at a 3-byte boundary, it's the same as splitting the string at the corresponding 4-character boundary.
If you try to split the data between these boundaries, the Base64 string will be padded with filler data up to the next boundary, so the result would not be the same.
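A quick check of that equivalence with the question's sizes (12 bytes map to exactly 16 Base64 characters):
using System;
using System.Security.Cryptography;
using System.Text;

byte[] hash;
using (var sha = SHA256.Create())
{
    hash = sha.ComputeHash(Encoding.Unicode.GetBytes("some input"));
}

// 12 bytes -> exactly 16 Base64 chars, so both routes agree
string viaBytes = Convert.ToBase64String(hash, 0, 12);
string viaSubstring = Convert.ToBase64String(hash).Substring(0, 16);

Console.WriteLine(viaBytes == viaSubstring); // True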
Truncating is as easy as adding Take(12) here (note that the Take extension method requires a using System.Linq; directive):
Change
byte[] hashedData = CreateSha256Hash(Row.HashString);
To:
byte[] hashedData = CreateSha256Hash(Row.HashString).Take(12).ToArray();