itemVal = "0";
res = int.TryParse(itemVal, out num);
if ((res == true) && (num.GetType() == typeof(byte)))
return true;
else
return false; // goes here when I'm debugging.
Why does num.GetType() == typeof(byte) not return true?
Because num is an int, not a byte.
GetType() gets the System.Type of the object at runtime. In this case, it's the same as typeof(int), since num is an int.
typeof() gets the System.Type object of a type at compile-time.
Your comment indicates you're trying to determine if the number fits into a byte or not; the contents of the variable do not affect its type (actually, it's the type of the variable that restricts what its contents can be).
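A quick illustration of what GetType() reports, regardless of the value stored:
int num = 0;
Console.WriteLine(num.GetType() == typeof(int));  // True: the runtime type is Int32
Console.WriteLine(num.GetType() == typeof(byte)); // False, whatever value num holds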
You can check if the number would fit into a byte this way:
if ((num >= 0) && (num < 256)) {
// ...
}
Or this way, using a cast:
if (unchecked((byte)num) == num) {
// ...
}
It seems your entire code sample could be replaced by the following, however:
byte num;
return byte.TryParse(itemVal, out num);
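To illustrate the difference between the two checks (a quick sketch; "300" is just an arbitrary out-of-range value):
int num;
bool isInt = int.TryParse("300", out num); // true: 300 fits in an int
byte b;
bool isByte = byte.TryParse("300", out b); // false: 300 does not fit in a byte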
Simply because you are comparing a byte with an int
If you want to know the number of bytes, try this simple snippet:
int i = 123456;
Int64 j = 123456;
byte[] bytesi = BitConverter.GetBytes(i);
byte[] bytesj = BitConverter.GetBytes(j);
Console.WriteLine(bytesi.Length);
Console.WriteLine(bytesj.Length);
Output:
4
8
Because an int and a byte are different data types.
An int (as it is commonly known) is 4 bytes (32 bits); an Int64 or an Int16 is 64 or 16 bits respectively.
A byte is only 8 bits.
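If you want to double-check the sizes, sizeof works on the primitive types (a quick sketch):
Console.WriteLine(sizeof(byte));  // 1 (8 bits)
Console.WriteLine(sizeof(short)); // 2 (Int16, 16 bits)
Console.WriteLine(sizeof(int));   // 4 (Int32, 32 bits)
Console.WriteLine(sizeof(long));  // 8 (Int64, 64 bits)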
If num is an int, it will never return true.
If you want to check whether this int value would fit in a byte, you might test the following:
int num = 0;
byte b = 0;
if (int.TryParse(itemVal, out num) && byte.TryParse(itemVal, out b))
{
return true; //Could be converted to Int32 and also to Byte
}
Can we randomly access part of a byte? I mean, can I access three bits of a byte randomly, without using BitArray and without accessing the whole byte?
Is there any possibility of accessing a bit from a byte, and if not, why is it not possible? Does it depend on any criteria?
You can use the Bitwise And (&) operator in order to read a specific bit from a byte. I'll give some examples by using the 0b prefix, which in C# allows you to write binary literals in your code.
So suppose you have the following byte value:
byte val = 0b10010100; // val = 148 (in decimal)
byte testBits = 0b00000100; // set ONLY the BITS you want to test here...
if ((val & testBits) != 0) // bitwise AND returns 0 if the bit is NOT SET.
Console.WriteLine("The bit is set!");
else
Console.WriteLine("The bit is not set....");
Here's a method for you to test any bit in a given byte. It applies the left-shift operator to the number 1 to build a mask that can be tested against any arbitrary bit in the byte:
public static int readBit(byte val, int bitPos)
{
if ((val & (1 << bitPos)) != 0)
return 1;
return 0;
}
You can use this method to print which bits are set in a given byte:
byte val = 0b10010100;
for (int i = 0; i < 8; i++)
{
int bitValue = readBit(val, i);
Console.WriteLine($"Bit {i} = {bitValue}");
}
The output from the code above should be:
Bit 0 = 0
Bit 1 = 0
Bit 2 = 1
Bit 3 = 0
Bit 4 = 1
Bit 5 = 0
Bit 6 = 0
Bit 7 = 1
You can use bit shifting:
var bitNumber = 0;
var firstBit = (b & (1 << bitNumber)) != 0;
We can convert this to an extension method:
public static class ByteExtensions
{
public static bool GetBit(this byte b, int bitNumber) =>
(b & (1 << bitNumber)) != 0;
}
Then:
byte b = 7;
var bit3 = b.GetBit(3); // false: bit 3 of 7 (00000111) is not set
If I have a byte array representing a number read from a file, how can the byte array be converted to an Int16/short?
byte[] bytes = new byte[]{ 45, 49, 54, 50 }; // Byte array representing "-162" from a text file
short value = 0; //How to convert to -162 as a short here?
Tried using BitConverter.ToInt16(bytes, 0), but the value is not correct.
Edit: Looking for a solution that does NOT use string conversion.
This function performs some validations that you may be able to exclude. You could simplify it if you know your input array will always contain at least one element and that the value will be a valid Int16.
const byte Negative = (byte)'-';
const byte Zero = (byte)'0';
static Int16 BytesToInt16(byte[] bytes)
{
if (null == bytes || bytes.Length == 0)
return 0;
int result = 0;
bool isNegative = bytes[0] == Negative;
int index = isNegative ? 1 : 0;
for (; index < bytes.Length; index++)
{
result = 10 * result + (bytes[index] - Zero);
}
if (isNegative)
result *= -1;
if (result < Int16.MinValue)
return Int16.MinValue;
if (result > Int16.MaxValue)
return Int16.MaxValue;
return (Int16)result;
}
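For example, with the sample input from the question:
byte[] bytes = new byte[] { 45, 49, 54, 50 }; // ASCII for "-162"
short value = BytesToInt16(bytes);            // -162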
Like willaien said, you want to first convert your bytes to a string.
byte[] bytes = new byte[]{ 45,49,54,50 };
string numberString = Encoding.UTF8.GetString(bytes);
short value = Int16.Parse(numberString);
If you're not sure that your string can be parsed, I recommend using Int16.TryParse:
byte[] bytes = new byte[]{ 45,49,54,50 };
string numberString = Encoding.UTF8.GetString(bytes);
short value;
if (!Int16.TryParse(numberString, out value))
{
// Parsing failed
}
else
{
// Parsing worked, `value` now contains your value.
}
I have a simple task: determine how many bytes are necessary to encode a number (a byte array length) into a byte array, and encode the final value (implementing this article: Encoded Length and Value Bytes).
Originally I wrote a quick method that accomplish the task:
public static Byte[] Encode(Byte[] rawData, Byte enclosingtag) {
if (rawData == null) {
return new Byte[] { enclosingtag, 0 };
}
List<Byte> computedRawData = new List<Byte> { enclosingtag };
// if array size is less than 128, encode length directly. No questions here
if (rawData.Length < 128) {
computedRawData.Add((Byte)rawData.Length);
} else {
// convert array size to a hex string
String hexLength = rawData.Length.ToString("x2");
// if hex string has odd length, align it to even by prepending hex string
// with '0' character
if (hexLength.Length % 2 == 1) { hexLength = "0" + hexLength; }
// take a pair of hex characters and convert each octet to a byte
Byte[] lengthBytes = Enumerable.Range(0, hexLength.Length)
.Where(x => x % 2 == 0)
.Select(x => Convert.ToByte(hexLength.Substring(x, 2), 16))
.ToArray();
// insert padding byte, set bit 7 to 1 and add byte count required
// to encode length bytes
Byte paddingByte = (Byte)(128 + lengthBytes.Length);
computedRawData.Add(paddingByte);
computedRawData.AddRange(lengthBytes);
}
computedRawData.AddRange(rawData);
return computedRawData.ToArray();
}
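For example (a quick sketch; the tag value 0x30 is just for illustration):
// Short form: lengths below 128 are encoded in a single byte.
Byte[] shortForm = Encode(new Byte[5], 0x30);
// shortForm: 0x30, 0x05, then the 5 data bytes
// Long form: a 200-byte payload needs an extra length byte.
Byte[] longForm = Encode(new Byte[200], 0x30);
// longForm: 0x30, 0x81 (one length byte follows), 0xC8 (200), then 200 data bytes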
This is old code and was written in an awful way.
Now I'm trying to optimize the code by using either bitwise operators or the BitConverter class. Here is an example of the bitwise edition:
public static Byte[] Encode2(Byte[] rawData, Byte enclosingtag) {
if (rawData == null) {
return new Byte[] { enclosingtag, 0 };
}
List<Byte> computedRawData = new List<Byte>(rawData);
if (rawData.Length < 128) {
computedRawData.Insert(0, (Byte)rawData.Length);
} else {
// temp number
Int32 num = rawData.Length;
// track byte count, this will be necessary further
Int32 counter = 1;
// simply make bitwise AND to extract byte value
// and shift right while remaining value is still more than 255
// (there are more than 8 bits)
while (num >= 256) {
counter++;
computedRawData.Insert(0, (Byte)(num & 255));
num = num >> 8;
}
// compose final array
computedRawData.InsertRange(0, new[] { (Byte)(128 + counter), (Byte)num });
}
computedRawData.Insert(0, enclosingtag);
return computedRawData.ToArray();
}
and the final implementation with the BitConverter class:
public static Byte[] Encode3(Byte[] rawData, Byte enclosingtag) {
if (rawData == null) {
return new Byte[] { enclosingtag, 0 };
}
List<Byte> computedRawData = new List<Byte>(rawData);
if (rawData.Length < 128) {
computedRawData.Insert(0, (Byte)rawData.Length);
} else {
// convert integer to a byte array
Byte[] bytes = BitConverter.GetBytes(rawData.Length);
// start from the end of a byte array to skip unnecessary zero bytes
for (int i = bytes.Length - 1; i >= 0; i--) {
// once the byte value is non-zero, take everything starting
// from the current position up to array start.
if (bytes[i] > 0) {
// we need to reverse the array to get the proper byte order
computedRawData.InsertRange(0, bytes.Take(i + 1).Reverse());
// compose final array
computedRawData.Insert(0, (Byte)(128 + i + 1));
computedRawData.Insert(0, enclosingtag);
return computedRawData.ToArray();
}
}
}
return null;
}
All methods do their work as expected. I used an example from the Stopwatch class page to measure performance, and the performance tests surprised me. My test method performed 1000 runs of each method to encode a byte array (actually, only the array size) with 100,000 elements, and the average times are:
Encode -- around 200ms
Encode2 -- around 270ms
Encode3 -- around 320ms
I personally like method Encode2, because the code looks more readable, but its performance isn't that good.
The question: what would you suggest to improve Encode2's performance, or to improve Encode's readability?
Any help will be appreciated.
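For reference, the timing loop was essentially the Stopwatch documentation example adapted like this (a sketch; the exact harness is assumed):
// requires using System.Diagnostics;
Byte[] data = new Byte[100000];
Stopwatch sw = Stopwatch.StartNew();
for (Int32 i = 0; i < 1000; i++) {
    Encode(data, 0x30); // or Encode2 / Encode3
}
sw.Stop();
Console.WriteLine("Average: {0} ms", sw.ElapsedMilliseconds / 1000.0);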
===========================
Update: Thanks to all who participated in this thread. I took all suggestions into consideration and ended up with this solution:
public static Byte[] Encode6(Byte[] rawData, Byte enclosingtag) {
if (rawData == null) {
return new Byte[] { enclosingtag, 0 };
}
Byte[] retValue;
if (rawData.Length < 128) {
retValue = new Byte[rawData.Length + 2];
retValue[0] = enclosingtag;
retValue[1] = (Byte)rawData.Length;
} else {
Byte[] lenBytes = new Byte[3];
Int32 num = rawData.Length;
Int32 counter = 0;
while (num >= 256) {
lenBytes[counter] = (Byte)(num & 255);
num >>= 8;
counter++;
}
// 3 is: enclosing tag, length header byte and the high length byte
retValue = new byte[rawData.Length + 3 + counter];
rawData.CopyTo(retValue, 3 + counter);
retValue[0] = enclosingtag;
retValue[1] = (Byte)(129 + counter);
retValue[2] = (Byte)num;
Int32 n = 3;
for (Int32 i = counter - 1; i >= 0; i--) {
retValue[n] = lenBytes[i];
n++;
}
}
return retValue;
}
Eventually I moved away from lists to fixed-size byte arrays. The average time against the same data set is now about 65ms. It appears that lists (not bitwise operations) were giving me a significant performance penalty.
The main problem here is almost certainly the allocation of the List, the allocations needed when you insert new elements, and the allocation when the list is converted to an array at the end. This code probably spends most of its time in the garbage collector and memory allocator. The use vs non-use of bitwise operators probably means very little in comparison, and I would have looked into ways to reduce the amount of memory you allocate first.
One way is to send in a reference to a byte array allocated in advance, and an index to where you are in the array, instead of allocating and returning the data, and then return an integer telling how many bytes you have written. Working on large arrays is usually more efficient than working on many small objects. As others have mentioned, use a profiler and see where your code spends its time.
Of course, the optimization I mentioned makes your code more low-level in nature, and closer to what you would typically do in C, but there is often a trade-off between readability and performance.
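A sketch of that idea (a hypothetical EncodeInto; assumes rawData is non-null and the caller-supplied buffer is large enough):
// Writes the encoded value into 'buffer' starting at 'offset' and
// returns the number of bytes written.
public static int EncodeInto(byte[] rawData, byte enclosingtag, byte[] buffer, int offset) {
    int start = offset;
    buffer[offset++] = enclosingtag;
    if (rawData.Length < 128) {
        buffer[offset++] = (byte)rawData.Length;
    } else {
        // count how many bytes the length value needs
        int counter = 0;
        for (int n = rawData.Length; n > 0; n >>= 8) { counter++; }
        buffer[offset++] = (byte)(128 + counter);
        // write the length bytes in big-endian order
        for (int i = counter - 1; i >= 0; i--) {
            buffer[offset++] = (byte)(rawData.Length >> (8 * i));
        }
    }
    Array.Copy(rawData, 0, buffer, offset, rawData.Length);
    offset += rawData.Length;
    return offset - start;
}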
Using "reverse, append, reverse" instead of "insert at front", and preallocating everything, it might look something like this (not tested):
public static byte[] Encode4(byte[] rawData, byte enclosingtag) {
if (rawData == null) {
return new byte[] { enclosingtag, 0 };
}
List<byte> computedRawData = new List<byte>(rawData.Length + 6);
computedRawData.AddRange(rawData);
if (rawData.Length < 128) {
computedRawData.InsertRange(0, new byte[] { enclosingtag, (byte)rawData.Length });
} else {
computedRawData.Reverse();
// temp number
int num = rawData.Length;
// track byte count, this will be necessary further
int counter = 1;
// simply cast to byte to extract byte value
// and shift right while remaining value is still more than 255
// (there are more than 8 bits)
while (num >= 256) {
counter++;
computedRawData.Add((byte)num);
num >>= 8;
}
// compose final array
computedRawData.Add((byte)num);
computedRawData.Add((byte)(counter + 128));
computedRawData.Add(enclosingtag);
computedRawData.Reverse();
}
return computedRawData.ToArray();
}
I don't know for sure whether it's going to be faster, but it would make sense - now the expensive "insert at front" operation is mostly avoided, except in the case where there would be only one of them (probably not enough to balance with the two reverses).
Another idea is to limit the insert-at-front to only one operation, in another way: collect all the things that have to be inserted there, and then do the insert once. It could look something like this (not tested):
public static byte[] Encode5(byte[] rawData, byte enclosingtag) {
if (rawData == null) {
return new byte[] { enclosingtag, 0 };
}
List<byte> computedRawData = new List<byte>(rawData);
if (rawData.Length < 128) {
computedRawData.InsertRange(0, new byte[] { enclosingtag, (byte)rawData.Length });
} else {
// list of all things that will be inserted
List<byte> front = new List<byte>(8);
// temp number
int num = rawData.Length;
// track byte count, this will be necessary further
int counter = 1;
// simply cast to byte to extract byte value
// and shift right while remaining value is still more than 255
// (there are more than 8 bits)
while (num >= 256) {
counter++;
front.Insert(0, (byte)num); // inserting in tiny list, not so bad
num >>= 8;
}
// compose final array
front.InsertRange(0, new[] { (byte)(128 + counter), (byte)num });
front.Insert(0, enclosingtag);
computedRawData.InsertRange(0, front);
}
return computedRawData.ToArray();
}
If it's not good enough or didn't help (or if this is worse - hey, could be), I'll try to come up with more ideas.
I have to convert values (double/float in C#) to bytes and need some help.
// Datatype long 4byte -99999999,99 to 99999999,99
// Datatype long 4byte -99999999,9 to 99999999,9
// Datatype short 2byte -999,99 to 999,99
// Datatype short 2byte -999,9 to 999,9
In my "world at home" I would just turn the value into a string and use ASCII.GetBytes().
But now, in this world, we have to use as little space as possible.
And indeed, '-99999999,99' takes 12 bytes as a string, instead of 4 if it's a 'long' datatype.
[EDIT]
Due to some help and answer I attach some results here,
long lng = -9999999999L;
byte[] test = Encoding.ASCII.GetBytes(lng.ToString()); // 11 byte
byte[] test2 = BitConverter.GetBytes(lng); // 8 byte
byte[] mybyt = BitConverter.GetBytes(lng); // 8 byte
byte[] bA = BitConverter.GetBytes(lng); // 8 byte
There is still one detail left to find out. The lng variable takes 8 bytes even if it holds a lower value, e.g. 99951 (I won't include the ToString() sample).
If the value is even "shorter", meaning in the range -999,99 to 999,99, it should only take 2 bytes of space.
[END EDIT]
Have you checked BitConverter?
long lng =-9999999999L;
byte[] mybyt = BitConverter.GetBytes(lng);
Hope this is what you are looking for.
Try to do it in this way:
long l = 4554334;
byte[] bA = BitConverter.GetBytes(l);
Be aware that in 2 bytes you can only have 4 full digits + sign, and in 4 bytes you can only have 9 digits + sign, so I had to scale your prereqs accordingly.
public static byte[] SerializeLong2Dec(double value)
{
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0)
{
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong2Dec(byte[] value)
{
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeLong1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -999999999.0 || value > 999999999.0) {
throw new ArgumentOutOfRangeException();
}
int value2 = (int)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeLong1Dec(byte[] value) {
int value2 = BitConverter.ToInt32(value, 0);
return (double)value2 / 10.0;
}
public static byte[] SerializeShort2Dec(double value) {
value *= 100;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort2Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 100.0;
}
public static byte[] SerializeShort1Dec(double value) {
value *= 10;
value = Math.Round(value, MidpointRounding.AwayFromZero);
if (value < -9999.0 || value > 9999.0) {
throw new ArgumentOutOfRangeException();
}
short value2 = (short)value;
return BitConverter.GetBytes(value2);
}
public static double DeserializeShort1Dec(byte[] value) {
short value2 = BitConverter.ToInt16(value, 0);
return (double)value2 / 10.0;
}
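A quick roundtrip check (sketch):
byte[] packed = SerializeShort2Dec(12.34);       // 2 bytes holding 1234
double restored = DeserializeShort2Dec(packed);  // 12.34
byte[] packed2 = SerializeLong2Dec(-1234567.89); // 4 bytes holding -123456789
double restored2 = DeserializeLong2Dec(packed2); // -1234567.89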
So that it's clear, the range of a (signed) short (16 bits) is -32,768 to 32,767 so it's quite clear that you only have 4 full digits plus a little piece (the 0-3), the range of a (signed) int (32 bits) is −2,147,483,648 to 2,147,483,647 so it's quite clear that you only have 9 full digits plus a little piece (the 0-2). Going to a (signed) long (64 bits) you have -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 so 18 digits plus a (big) piece. Using floating points you lose in accuracy. A float (32 bits) has an accuracy of around 7 digits, while a double (64 bits) has an accuracy of around 15-16 digits.
To everyone reading this question and the answers, please note that BitConverter uses the endianness of the machine it runs on. A round trip on the same machine works, but a byte array produced on a little-endian system decodes to a different number on a big-endian system:
//convert to bytes
long number = 123;
byte[] bytes = BitConverter.GetBytes(number);
//convert back to Int64 on a machine with the other endianness
long newNumber = BitConverter.ToInt64(bytes, 0);
//numbers are different across systems with different byte order!
number != newNumber;
The workaround is to check the endianness of the system you run on:
newNumber = BitConverter.IsLittleEndian
? BitConverter.ToInt64(bytes, 0)
: BitConverter.ToInt64(bytes.Reverse().ToArray(), 0);
long longValue = 9999999999L;
Console.WriteLine("Long value: " + longValue.ToString());
byte[] bytes = BitConverter.GetBytes(longValue);
Console.WriteLine("Byte array value:");
Console.WriteLine(BitConverter.ToString(bytes));
As the other answers have pointed out, you can use BitConverter to get the byte representation of primitive types.
You said that in the current world you inhabit, there is an onus on representing these values as concisely as possible, in which case you should consider variable length encoding (though that document may be a bit abstract).
If you decide this approach is applicable to your case I would suggest looking at how the Protocol Buffers project represents scalar types as some of these types are encoded using variable length encoding which produces shorter output if the data set favours smaller absolute values. (The project is open source under a New BSD license, so you will be able to learn the technique employed from the source repository or even use the source in your own project.)
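As a sketch of the general technique (unsigned base-128 varints, the same idea Protocol Buffers uses; this is not the library's actual API), small values take fewer bytes: 0..127 fit in 1 byte, 128..16383 in 2 bytes, and so on:
static byte[] EncodeVarint(ulong value)
{
    var output = new List<byte>(); // requires using System.Collections.Generic;
    do
    {
        byte b = (byte)(value & 0x7F); // take the low 7 bits
        value >>= 7;
        if (value != 0)
        {
            b |= 0x80; // set the continuation bit: more bytes follow
        }
        output.Add(b);
    } while (value != 0);
    return output.ToArray();
}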
I need to convert bytes in two's complement format to positive integer bytes.
The range -128 to 127 mapped to 0 to 255.
Examples: -128 (10000000) -> 0 , 127 (01111111) -> 255, etc.
EDIT To clear up the confusion, the input byte is (of course) an unsigned integer in the range 0 to 255. BUT it represents a signed integer in the range -128 to 127 using two's complement format. For example, the input byte value of 128 (binary 10000000) actually represents -128.
EXTRA EDIT Alright, let's say we have the following byte stream: 0, 255, 254, 1, 127. In two's complement format this represents 0, -1, -2, 1, 127. This I need mapped to the 0 to 255 range. For more info check out this hard-to-find article: Two's complement
From your sample input you simply want:
sbyte something = -128;
byte foo = (byte)(something + 128);
new = old + 128;
bingo :-)
Try
sbyte signed = (sbyte)input;
or
int signed = unchecked((int)(input | 0xFFFFFF00)); // sign-extends; only valid when bit 7 of input is set
public static int MakeHexSigned(byte value)
{
    int result = value;
    if (result > 255 / 2)
    {
        result = -1 * (255 + 1) + result;
    }
    return result;
}
I believe that two's complement bytes would be best done with the following. Maybe not elegant or short, but clear and obvious. I would put it as a static method in one of my util classes.
public static sbyte ConvertTo2Complement(byte b)
{
if(b < 128)
{
return Convert.ToSByte(b);
}
else
{
int x = Convert.ToInt32(b);
return Convert.ToSByte(x - 256);
}
}
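For example:
sbyte a = ConvertTo2Complement(255); // -1
sbyte b = ConvertTo2Complement(128); // -128
sbyte c = ConvertTo2Complement(127); // 127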
If I understood correctly, your problem is how to convert the input, which is really a signed byte (sbyte) but is stored in an unsigned integer, and then also avoid negative values by converting them to zero.
To be clear, when you use a signed type (like sbyte) the framework is using two's complement behind the scenes, so just by casting to the right type you will be using two's complement.
Then, once you have that conversion done, you can clamp the negative values with a simple if or a conditional ternary operator (?:).
The functions presented below will return 0 for values from 128 to 255 (or from -128 to -1), and the same value for values from 0 to 127.
So, if you must use unsigned integers as input and output you could use something like this:
private static uint ConvertSByteToByte(uint input)
{
sbyte properDataType = (sbyte)input; //128..255 will be taken as -128..-1
if (properDataType < 0) { return 0; } //when negative just return 0
if (input > 255) { return 0; } //just in case as uint can be greater than 255
return input;
}
Or, IMHO, you could change your input and outputs to the data types best suited to your input and output (sbyte and byte):
private static byte ConvertSByteToByte(sbyte input)
{
return input < 0 ? (byte)0 : (byte)input;
}
int8_t indata; /* -128,-127,...-1,0,1,...127 */
uint8_t byte = indata ^ 0x80;
XOR the MSB, that's all.
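The same XOR trick in C# would look something like this (sketch):
sbyte indata = -128;
byte result = (byte)(indata ^ 0x80); // flips the sign bit: -128 -> 0, 0 -> 128, 127 -> 255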
Here is my solution for this problem, for numbers bigger than 8 bits. My example is for a 16-bit value. Note: you will have to check the first bit to see whether the value is negative or not.
Steps:
1. Convert the number to its complement by placing '~' before the variable (i.e. y = ~y).
2. Convert the numbers to binary strings.
3. Break the binary strings into character arrays.
4. Starting with the rightmost value, add 1, keeping track of carries. Store the result in a character array.
5. Convert the character array back to a string.
private string TwosComplimentMath(string value1, string value2)
{
char[] binary1 = value1.ToCharArray();
char[] binary2 = value2.ToCharArray();
bool carry = false;
char[] calcResult = new char[16];
for (int i = 15; i >= 0; i--)
{
if (binary1[i] == binary2[i])
{
if (binary1[i] == '1')
{
if (carry)
{
calcResult[i] = '1';
carry = true;
}
else
{
calcResult[i] = '0';
carry = true;
}
}
else
{
if (carry)
{
calcResult[i] = '1';
carry = false;
}
else
{
calcResult[i] = '0';
carry = false;
}
}
}
else
{
if (carry)
{
calcResult[i] = '0';
carry = true;
}
else
{
calcResult[i] = '1';
carry = false;
}
}
}
string result = new string(calcResult);
return result;
}
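For example, to build the 16-bit two's complement representation of -162 using the steps above (a sketch):
ushort magnitude = 162;
ushort inverted = (ushort)~magnitude; // step 1: one's complement
string a = Convert.ToString(inverted, 2).PadLeft(16, '0');
string one = "0000000000000001";      // the +1 from step 4
string result = TwosComplimentMath(a, one);
// result == "1111111101011110", i.e. -162 as a 16-bit value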
So the OP's problem isn't really two's complement conversion. He's adding a bias to a set of values, to shift the range from -128..127 to 0..255.
To actually do a two's complement conversion, you just typecast the signed value to the unsigned value, like this:
sbyte test1 = -1;
byte test2 = (byte)test1;
-1 becomes 255. -128 becomes 128. This doesn't sound like what the OP wants, though. He just wants to slide an array up so that the lowest signed value (-128) becomes the lowest unsigned value (0).
To add a bias, you just do integer addition:
newValue = signedValue + 128;
You could be describing something as simple as adding a bias to your number ( in this case, adding 128 to the signed number ).
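Applying that bias to the whole sample stream from the question (a sketch):
byte[] input = { 0, 255, 254, 1, 127 };        // as two's complement: 0, -1, -2, 1, 127
byte[] biased = new byte[input.Length];
for (int i = 0; i < input.Length; i++)
{
    biased[i] = (byte)((sbyte)input[i] + 128); // -128..127 -> 0..255
}
// biased: 128, 127, 126, 129, 255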