Get lower bits of a double in C#

I got a number in a double variable.
How can I take only the 4 lower bits of the number and save them somewhere else?

Use BitConverter.GetBytes to get all 8 bytes in a byte array. Do what you wish with them from there.
If you really meant the lowest four bits, then you want:
byte[] bytes = BitConverter.GetBytes(someDouble);
int low4bits = bytes[0] & 0xf;
If it is actually four bytes that you are looking for, then you follow the call to BitConverter.GetBytes() by a call to Array.Copy().

By converting it (bit-by-bit) to a long first, so you can apply bitwise operators:
long bits = BitConverter.DoubleToInt64Bits(your_double);
int lowest_4_bits = (int)bits & 0xF;

Related

Get the 2 most significant bits

How can I get the 2 most significant bits of a byte in C#?
I have something like this: (value >> 6) & 7, but I'm unsure if this is correct.
For example, with 01011100 I just want to return the leading two bits, 01.
If you want two bits, then you need to and by 3 (binary 11), not by 7 (binary 111).
So if value is a byte, something like:
byte twobits = (byte)((value >> 6) & 3);
However, as the comments stated, the mask is redundant: right-shifting by 6 suffices, since the higher bits are already 0 after the shift.
Just for fun, if you want to have the two most significant bits of any data type, you could have:
byte twobits = (byte)(value >> (System.Runtime.InteropServices.Marshal.SizeOf(value)*8-2));
Just as a warning, Marshal.SizeOf gives the byte size of the variable type after marshalling, but it "usually" works.
If I read your question correctly, you want to have the two most significant bits of a byte in another byte, as the least significant bits, with the other bits set to zero.
In that case, you can just return myByte >> 6 as it will fill the rest of the bits with zeroes (in C# at least). The & 7 operation seems redundant, the layout of 7 is 00000111. This means you ensure that the 5 left most bits in the resulting byte are set to zero... You might have intended to ensure the 6 left most bits are zero, in that case it should be 3.
If you want to return only the left most two bits and keep them in place, then you should return myByte & 0b11000000.

Bit shifting and data explanation

Ok, I'd like to have an explanation of how bit shifting works and how to build data from an array of bytes. The language is not important (if an example is needed: I know C, C++, Java and C#, and they all follow the same shifting syntax, no?).
The question is, how do I go from byte[] to something which is a bunch of bytes together (be it 16-bit, 32-bit, or 64-bit ints, or n-bit ints in general), and more importantly, why? I'd like to understand and learn how to make this myself rather than copy from the internet.
I know of endianess, I mainly mess with little endian stuff, but explaining a general rule for both systems would be nice.
Thank you very very much!!
Federico
For bit shifting I would say: byte1 << n shifts the bits of byte1 left by n positions, which is the same as multiplying byte1 by 2^n; likewise, byte1 >> n is the same as dividing byte1 by 2^n.
Hm... it depends what you're looking for. There's really no way to convert to an n-bit integer type (it's always an array of bytes anyways) but you can shift bytes into other types.
my32BitInt = byte1 << 24 | byte2 << 16 | byte3 << 8 | byte4
Usually the compiler is going to take care of endianess for you. So you can OR bytes together with shifting and not worry too much about that.
I've used a bit of a cheat myself in the past to save cycles. When I know the data to be in arrays least to most significant order, then you can cast the bytes into larger types and walk the array by size of walker. For example this sample walks 4 bytes at a time and writes 4 byte ints in C.
#include <stdio.h>

int main(void)
{
    char bytes[] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    int *temp;

    temp = (int *)&bytes[0];
    printf("%d\n", *temp);
    temp = (int *)&bytes[4];
    printf("%d\n", *temp);
    temp = (int *)&bytes[8];
    printf("%d\n", *temp);
    temp = (int *)&bytes[12];
    printf("%d\n", *temp);
    return 0;
}
Output:
1
256
65536
16777216
This is probably no good for C# and Java
You will want to take into consideration the endianness of the data. From there, it depends on the data type, but here is an example. Let's say you would like to go from an array of 4 bytes to an unsigned 32-bit integer. Let's also assume the bytes are in the same order as they started (so we don't need to worry about endianness).
//In C
#include <stdint.h>
uint32_t offset = 0;        //the offset into the array, set as needed
uint32_t unpacked_int = 0;  //target, must start at zero
uint8_t* packed_array;      //source
for ( uint32_t i = 0; i < 4; ++i ) {
    unpacked_int += packed_array[ offset + i ] << 8*i;
}
You could equivalently use |= and OR the bytes too. Also, if you attempt this trick on structures be aware that your compiler probably won't pack values in back-to-back, although usually you can set the behavior with a pragma. For example, gcc/g++ packs data aligned to 32 bits on my architecture.
In C#, BitConverter should have you covered. I know Python (via its struct module), PHP, and Perl use a function called pack() for this; perhaps there is an equivalent in Java.

Packing record length in 2 bytes

I want to create a ASCII string which will have a number of fields. For e.g.
string s = f1 + "|" + f2 + "|" + f3;
f1, f2, f3 are fields and "|"(pipe) is the delimiter. I want to avoid this delimiter and keep the field count at the beginning like:
string s = f1.Length + f2.Length + f3.Length + f1 + f2 + f3;
All lengths are going to be packed in 2 chars, Max length = 00-99 in this case. I was wondering if I can pack the length of each field in 2 bytes by extracting bytes out of a short. This would allow me to have a range of 0-65535 using only 2 bytes. E.g.
short length = 20005;
byte b1 = (byte)length;
byte b2 = (byte)(length >> 8);
// Save bytes b1 and b2
// Read bytes b1 and b2
short length = 0;
length = b2;
length = (short)(length << 8);
length = (short)(length | b1);
// Now length is 20005
What do you think about the above code, Is this a good way to keep the record lengths?
I cannot see what you are trying to achieve. short aka Int16 is 2 bytes - yes, so you can happily use it (or ushort for the full 0-65535 range). But creating a string does not make sense.
ushort sh = 56100; // 2 bytes (note: 56100 overflows short, whose max is 32767)
I believe you mean, being able to output the short to a stream. For this there are ways:
BinaryWriter.Write(sh) which writes 2 bytes straight to the stream
BitConverter.GetBytes(sh) which gives you bytes of a short
Reading back you can use the same classes.
If you want ascii, i.e. "00" as characters, then just:
byte[] bytes = Encoding.ASCII.GetBytes(length.ToString("00"));
or you could optimise it if you want.
But IMO, if you are storing 0-99, 1 byte is plenty:
byte b = (byte)length;
If you want the range 0-65535, then just:
bytes[0] = (byte)length;
bytes[1] = (byte)(length >> 8);
or swap index 0 and 1 for endianness.
But if you are using the full range (of either single or double byte), then it isn't ascii nor a string. Anything that tries to read it as a string might fail.
Whether it's a good idea depends on the details of what it's for, but it's not likely to be good.
If you do this then you're no longer creating an "ASCII string". Those were your words, but maybe you don't really care whether it's ASCII.
You will sometimes get bytes with a value of 0 in your "string". If you're handling the strings with anything written in C, this is likely to cause trouble. You'll also get all sorts of other characters -- newlines, tabs, commas, etc. -- that may confuse software that's trying to work with your data.
The original plan of separating with (say) | characters will be more compact and easier for humans and software to read. The only obvious downsides are (1) you can't allow field values with a | in (or else you need some sort of escaping) and (2) parsing will be marginally slower.
If you want to get clever you could pack your 2 bytes into 1 when the value is <= 127; if the value is >= 128 you use 2 bytes instead. This technique loses you 1 bit per byte that you use, but if you normally have small values and only occasionally larger ones, it dynamically grows to accommodate the value.
All you need to do is mark bit 8 with a value indicating that the 2nd byte is required to be read.
If bit 8 of the active byte is not set, it means you have completed your value.
EG
If you have a value of 4 then you use this
|8|7|6|5|4|3|2|1|
|0|0|0|0|0|1|0|0|
If you have a value of 128, you read the 1st byte and check whether bit 8 is high; if so, take the remaining 7 bits, then do the same with the 2nd byte, shifting its 7 bits left by 7.
|BYTE 0 |BYTE 1 |
|8|7|6|5|4|3|2|1|8|7|6|5|4|3|2|1|
|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|

Converting Int32 to 24-bit signed integer

I have a need to convert an Int32 value to a 3-byte (24-bit) integer. Endianness remains the same (little), but I cannot figure out how to move the sign appropriately. The values are already constrained to the proper range, I just can't figure out how to convert 4 bytes to 3. Using C# 4.0. This is for hardware integration, so I have to have 24-bit values, cannot use 32 bit.
If you want to do that conversion, just remove the top byte of the four-byte number. Two's complement representation will take care of the sign correctly. If you want to keep the 24-bit number in an Int32 variable, you can use v & 0xFFFFFF to get just the lower 24 bits. I saw your comment about the byte array: if you have space in the array, write all four bytes of the number and just send the first three; that is specific to little-endian systems, though.
Found this: http://bytes.com/topic/c-sharp/answers/238589-int-byte
int myInt = 800;
byte[] myByteArray = System.BitConverter.GetBytes(myInt);
Sounds like you just need to get the last 3 elements of the array.
EDIT:
as Jeremiah pointed out, you'd need to do something like
int myInt = 800;
byte[] myByteArray = System.BitConverter.GetBytes(myInt);
if (BitConverter.IsLittleEndian) {
// get the first 3 elements
} else {
// get the last 3 elements
}

How do I convert a int to an array of byte's and then back?

I need to send an integer through a NetworkStream. The problem is that I only can send bytes.
That's why I need to split the integer into four bytes, send those, and at the other end convert them back to an int.
For now I need this only in C#. But for the final project I will need to convert the four bytes to an int in Lua.
[EDIT]
How about in Lua?
BitConverter is the easiest way, but if you want to control the order of the bytes you can do bit shifting yourself.
int foo = int.MaxValue;
byte lolo = (byte)(foo & 0xff);
byte hilo = (byte)((foo >> 8) & 0xff);
byte lohi = (byte)((foo >> 16) & 0xff);
byte hihi = (byte)(foo >> 24);
Also.. the implementation of BitConverter uses unsafe and pointers, but it's short and simple.
public static unsafe byte[] GetBytes(int value)
{
byte[] buffer = new byte[4];
fixed (byte* numRef = buffer)
{
*((int*) numRef) = value;
}
return buffer;
}
Try
BitConverter.GetBytes()
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
Just keep in mind that the order of the bytes in returned array depends on the endianness of your system.
EDIT:
As for the Lua part, I don't know how to convert back. You could always multiply and divide by powers of two to emulate bitwise shifts (multiplying by 16 is the same as shifting left by 4). It's not pretty, and I would imagine there is some library or something that implements it. Again, the order in which to add the bytes depends on the endianness, so you might want to read up on that.
Maybe you can convert back in C#?
For Lua, check out Roberto's struct library. (Roberto is one of the authors of Lua.) It is more general than needed for the specific case in question, but it isn't unlikely that the need to interchange an int is shortly followed by the need to interchange other simple types or larger structures.
Assuming native byte order is acceptable at both ends (which is likely a bad assumption, incidentally) then you can convert a number to a 4-byte integer with:
buffer = struct.pack("l", value)
and back again with:
value = struct.unpack("l", buffer)
In both cases, buffer is a Lua string containing the bytes. If you need to access the individual byte values from Lua, string.byte is your friend.
To specify the byte order of the packed data, change the format from "l" to "<l" for little-endian or ">l" for big-endian.
The struct module is implemented in C, and must be compiled to a DLL or equivalent for your platform before it can be used by Lua. That said, it is included in the Lua for Windows batteries-included installation package that is a popular way to install Lua on Windows systems.
Here are some functions in Lua for converting a 32-bit two's complement number into bytes and converting four bytes into a 32-bit two's complement number. A lot more checking could/should be done to verify that the incoming parameters are valid.
-- convert a 32-bit two's complement integer into a four bytes (network order)
function int_to_bytes(n)
if n > 2147483647 then error(n.." is too large",2) end
if n < -2147483648 then error(n.." is too small",2) end
-- adjust for 2's complement
n = (n < 0) and (4294967296 + n) or n
return (math.modf(n/16777216))%256, (math.modf(n/65536))%256, (math.modf(n/256))%256, n%256
end
-- convert bytes (network order) to a 32-bit two's complement integer
function bytes_to_int(b1, b2, b3, b4)
if not b4 then error("need four bytes to convert to int",2) end
local n = b1*16777216 + b2*65536 + b3*256 + b4
n = (n > 2147483647) and (n - 4294967296) or n
return n
end
print(int_to_bytes(256)) --> 0 0 1 0
print(int_to_bytes(-10)) --> 255 255 255 246
print(bytes_to_int(255,255,255,246)) --> -10
investigate the BinaryWriter/BinaryReader classes
Convert an int to a byte array and display : BitConverter ...
www.java2s.com/Tutorial/CSharp/0280__Development/Convertaninttoabytearrayanddisplay.htm
Integer to Byte - Visual Basic .NET answers
http://bytes.com/topic/visual-basic-net/answers/349731-integer-byte
How to: Convert a byte Array to an int (C# Programming Guide)
http://msdn.microsoft.com/en-us/library/bb384066.aspx
As Nubsis says, BitConverter is appropriate but has no guaranteed endianness.
I have an EndianBitConverter class in MiscUtil which allows you to specify the endianness. Of course, if you only want to do this for a single data type (int) you could just write the code by hand.
BinaryWriter is another option, and this does guarantee little endianness. (Again, MiscUtil has an EndianBinaryWriter if you want other options.)
To convert to a byte[]:
BitConverter.GetBytes(int)
http://msdn.microsoft.com/en-us/library/system.bitconverter.aspx
To convert back to an int:
BitConverter.ToInt32(byteArray, offset)
http://msdn.microsoft.com/en-us/library/system.bitconverter.toint32.aspx
I'm not sure about Lua though.
If you are concerned about endianness use Jon Skeet's EndianBitConverter. I've used it and it works seamlessly.
.NET also provides its own implementation of htons and ntohs:
System.Net.IPAddress.HostToNetworkOrder()
System.Net.IPAddress.NetworkToHostOrder()
But they only work on signed Int16, Int32, Int64, which means you'll probably end up doing a lot of unnecessary casting to make them work, and if you're using the highest-order bit for anything other than the sign, you're screwed. Been there, done that. ::tsk:: ::tsk:: Microsoft for not providing better endianness conversion support in .NET.
