I need to mask certain string values read from a database by setting a specific bit in an int value for each possible database value. For example, if the database returns the string "value1" then the bit in position 0 will need to be set to 1, but if the database returns "value2" then the bit in position 1 will need to be set to 1 instead.
How can I ensure each bit of an int is set to 0 originally and then turn on just the specified bit?
If you have an int value "intValue" and you want to set a specific bit at position "bitPosition", do something like:
intValue = intValue | (1 << bitPosition);
or shorter:
intValue |= 1 << bitPosition;
If you want to reset a bit (i.e., set it to zero), you can do this:
intValue &= ~(1 << bitPosition);
(The ~ operator inverts each bit in a value, so ~(1 << bitPosition) results in an int where every bit is 1 except the bit at the given bitPosition.)
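Putting both together for the scenario in the question (a rough sketch; the string-to-position mapping and the ReadFromDatabase helper are assumptions for illustration):
int flags = 0; // every bit starts out as 0

string dbValue = ReadFromDatabase(); // hypothetical read of "value1", "value2", ...
int bitPosition = dbValue == "value1" ? 0
                : dbValue == "value2" ? 1
                : throw new ArgumentException("unknown value");

flags |= 1 << bitPosition; // turn on just the bit for that value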
To set everything to 0, AND the value with 0x00000000:
int startValue = initialValue & 0x00000000;
//Or much easier :)
int startValue = 0;
To then set the bit, you have to determine which number has just that bit set and OR it in. For example, to set the last bit:
int finalValue = startValue | 0x00000001;
As @Magus points out, to unset a bit you do the exact opposite:
int finalValue = startValue & 0xFFFFFFFE;
//Or
int finalValue = startValue & ~(0x00000001);
The ~ operator is a bitwise NOT, which flips every bit.
So, something like this?
enum ConditionEnum
{
    Aaa = 0,
    Bbb = 1,
    Ccc = 2,
}

static void SetBit(ref int bitFlag, ConditionEnum condition, bool bitValue)
{
    int mask = 1 << (int)condition;
    if (bitValue)
        bitFlag |= mask;
    else
        bitFlag &= ~mask;
}
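Usage might then look like this (a small sketch):
int bitFlag = 0;
SetBit(ref bitFlag, ConditionEnum.Ccc, true);  // bitFlag == 4 (bit 2 set)
SetBit(ref bitFlag, ConditionEnum.Ccc, false); // bitFlag == 0 again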
Just provide the value, the bit value, and the bit position. Note that you might be able to adapt this to work for other integer types.
public static void SetBit(ref int value, bool bitval, int bitpos)
{
    if (!bitval)
        value &= ~(1 << bitpos);
    else
        value |= 1 << bitpos;
}
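For example (assuming the method above):
int value = 0;
SetBit(ref value, true, 3);  // value == 8 (bit 3 set)
SetBit(ref value, false, 3); // value == 0 again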
I know there are questions similar to mine; however, they aren't the same. All of them are about plain shifts, circular shifts, etc., not about a rotate through the carry flag.
I am trying to implement a method that does a right rotate through the carry flag, with the carry flag initially 1. Current code:
public static int RotateRight(int value, int count = 2)
{
    uint val = (uint)value;
    return (int)((val >> count) | (val << (32 - count)));
}
However, this works only as a normal rotate: an input of 16 returns 4. How would one bring a carry flag into it?
Before anyone points to the question C# bitwise rotate left and rotate right: that one does a normal rotate, without a carry flag.
Rotate through carry
Another example of this, rotating 0x10 = 00010000 right by 2 with an initial carry flag of 1 (the carry bit is shown to the left of the value):
1 00010000
Rotate with carry >> 2:
0 10001000
0 01000100
If I'm understanding you correctly, here's my really straightforward, off-the-top-of-my-head, totally unoptimized way of doing it:
static uint RotateRightThroughCarry(uint value, int count, ref uint carry)
{
    // Width assumed to be 32 bits; carry assumed to be either 0 or 1.
    for (int i = 0; i < count; i++)
        value = RotateRightOneBitThroughCarry(value, ref carry);
    return value;
}

static uint RotateRightOneBitThroughCarry(uint value, ref uint carry)
{
    uint newCarry = value & 1;                    // the bit shifted out becomes the new carry
    uint newValue = (value >> 1) | (carry << 31); // the old carry enters at the top bit
    carry = newCarry;
    return newValue;
}
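Applied to the question's example, but in 32 bits rather than 8, so the old carry enters at bit 31 instead of bit 7 (a quick sketch):
uint carry = 1;
uint value = RotateRightThroughCarry(0x10, 2, ref carry);
// value == 0x40000004, carry == 0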
I have a C#/Mono app that reads data from an ADC. My read function returns the data as a ulong. The data is in 24-bit two's complement and I need to translate it into an int. E.g.:
0x7FFFFF = + 8,388,607
0x7FFFFE = + 8,388,606
0x000000 = 0
0xFFFFFF = -1
0x800001 = - 8,388,607
0x800000 = - 8,388,608
How can I do this?
const int MODULO = 1 << 24;
const int MAX_VALUE = (1 << 23) - 1;

int transform(int value)
{
    if (value > MAX_VALUE)
    {
        value -= MODULO;
    }
    return value;
}
Explanation: MAX_VALUE is the largest positive value that can be stored in a 24-bit signed int, so if the value is less than or equal to that, it should be returned as is. Otherwise the value is the unsigned representation of a two's-complement negative number. The way two's complement works is that a negative number is written modulo MODULO, which effectively means that MODULO is added. So to convert it back to signed, we subtract MODULO.
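A quick check against the sample values from the question (a sketch; Convert24 is an assumed helper that masks the ulong reading down to its low 24 bits before applying transform above):
int Convert24(ulong raw) => transform((int)(raw & 0xFFFFFF));

Console.WriteLine(Convert24(0x7FFFFF)); //  8388607
Console.WriteLine(Convert24(0xFFFFFF)); // -1
Console.WriteLine(Convert24(0x800000)); // -8388608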
Quite often when using hardware interfaces you'll have to read or set groups of bits without changing the rest of the bits. The interface description says something like:
you get a System.UINT32, bit 0 is set if available; bits 1..7 mean the minimum value; bits 8..14 is the maximum value; bits 15..17 is the threshold, etc.
I have to do this for a lot of values, each with their own start and stop bits.
That's why I'd like to create a class that can convert the values (start bit; stop bit; raw UINT32 value) into the value it represents, and back.
So something like:
class RawParameterInterpreter
{
    public int StartBit { get; set; } // counting from 0..31
    public int StopBit { get; set; }  // counting from 0..31

    Uint32 ExtractParameterValue(Uint32 rawValue);
    Uint32 InsertParameterValueToRawValue(Uint32 parameterValue,
        Uint32 rawValue);
}
I understand the part with handling the bits:
// example bits 4..7:
extract parameter from raw value: (rawvalue & 0x000000F0) >> startbit;
insert parameter into raw: (parameter << startbit) | (rawValue & 0xFFFFFF0F)
The problem is, how to initialize the 0x000000F0 and 0xFFFFFF0F from values startBit and endBit? Is there a general method to calculate these values?
I would use something like this
Uint32 bitPattern = 0;
for (int bitNr = startBit; bitNr <= stopBit; ++bitNr)
{
    bitPattern = (bitPattern << 1) | 1;
}
bitPattern = bitPattern << startBit;
I know the class System.Collections.BitArray. This would make it even easier to set the bits, but how do I convert the BitArray back to a Uint32?
So question: what is the best method for this?
Well, your question is very general, but you could use an enum with a Flags attribute:
[Flags]
public enum BitPattern
{
    Start = 1,
    Stop = 1 << 31
}
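To compute the mask itself from startBit and stopBit, there is also a closed form that avoids the loop. A sketch (not tested against your interface; note that C# masks 32-bit shift counts to 5 bits, so a full 32-bit-wide field needs its own case):
static uint MakeMask(int startBit, int stopBit)
{
    int width = stopBit - startBit + 1;
    // 1u << 32 would wrap around to 1u << 0 in C#, so handle width == 32 separately.
    uint run = width >= 32 ? 0xFFFFFFFFu : (1u << width) - 1;
    return run << startBit;
}

static uint ExtractParameterValue(uint rawValue, int startBit, int stopBit)
{
    return (rawValue & MakeMask(startBit, stopBit)) >> startBit;
}

static uint InsertParameterValueToRawValue(uint parameterValue, uint rawValue,
    int startBit, int stopBit)
{
    uint mask = MakeMask(startBit, stopBit);
    return (rawValue & ~mask) | ((parameterValue << startBit) & mask);
}
For example, MakeMask(4, 7) yields 0x000000F0 and ~MakeMask(4, 7) yields 0xFFFFFF0F, the two constants from the question.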
I have an int number and I have to shift it to the right a couple of times.
x      = 27 = 11011
x >> 1 = 13 = 1101
x >> 2 =  6 = 110
x >> 3 =  3 = 11
I would like to get the bit value that has been removed each time. In this case I would get: 1, 1, 0.
How can I get the removed value?
(x & 1) gives you the value of the least significant bit. You should calculate it before you shift.
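For example, for the question's x = 27 (a sketch):
int x = 27; // 11011 in binary
for (int i = 0; i < 3; i++)
{
    int removed = x & 1; // read the least significant bit first...
    x >>= 1;             // ...then shift it out
    Console.Write(removed + " "); // prints: 1 1 0
}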
For the least significant bit you can use x & 1 before each shift. If you want to get an arbitrary bit at any time without changing the value, you can use the method below.
private static int GetNthBit(int x, int n)
{
    return (x >> n) & 1;
}
Just test the first bit before removing it:
int b = x & 1;
See the MSDN reference on the & operator.
What's an efficient or syntactically simple way to get and set the high order part of an integer?
There are multiple ways of achieving that, here are some of them.
Using the Bitwise and/or Shift operators
Applying a right shift to an integer moves the bits to the right, filling in zeros on the left.
In the case below, it shifts by the size of a short (Int16, i.e. 16 bits).
Applying a bitwise AND (&) with a mask like 0x0000FFFF basically keeps the value where the mask is F and ignores it where the mask is 0.
Remember that in the end it's just a 0b_1 AND 0b_1 = 0b_1 operation, so any 0b_0 AND 0b_1 will result in 0b_0.
Applying a bitwise OR (|) basically merges the two numbers, like 0b_10 | 0b_01 = 0b_11.
Code:
uint number = 0xDEADBEEF;
//Get the higher order value.
var high = number >> 16;
Console.WriteLine($"High: {high:X}");
//Get the lower order value.
var low = number & 0xFFFF; //Or use 0x0000FFFF
Console.WriteLine($"Low: {low:X}");
//Set a high order value (you can also use 0xFFFF instead of 0x0000FFFF).
uint newHigh = 0xFADE;
number = number & 0x0000FFFF | newHigh << 16;
Console.WriteLine($"New high: {number:X}");
//Set a low order value.
uint newLow = 0xC0DE;
number = number & 0xFFFF0000 | newLow & 0x0000FFFF;
Console.WriteLine($"New low: {number:X}");
Output:
High: DEAD
Low: BEEF
New high: FADEBEEF
New low: FADEC0DE
Using FieldOffsetAttribute in a struct
C# has excellent support for variables sharing the same memory location and for bit structuring.
Since C# has no macro functions like C, you can use the union approach to speed things up. It's more performant than passing the variable to methods or extension methods.
You can do that by simply creating a struct with explicit layout and setting the offset of the fields:
Code:
using System;
using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Explicit)]
struct WordUnion
{
    [FieldOffset(0)] public uint Number;

    [FieldOffset(0)] public ushort Low;
    [FieldOffset(2)] public ushort High;
}

public class MainClass
{
    public static void Main(string[] args)
    {
        var x = new WordUnion { Number = 0xABADF00D };
        Console.WriteLine("{0:X} {1:X} {2:X}", x.Number, x.High, x.Low);

        x.Low = 0xFACE;
        Console.WriteLine("{0:X} {1:X} {2:X}", x.Number, x.High, x.Low);

        x.High = 0xDEAD;
        Console.WriteLine("{0:X} {1:X} {2:X}", x.Number, x.High, x.Low);
    }
}
Output:
ABADF00D ABAD F00D
ABADFACE ABAD FACE
DEADFACE DEAD FACE
Mind that with Visual Studio 2019 (16.7), you may still see zeros for x.High and x.Low when adding the variable x to the Watch window or when hovering over x.High and x.Low directly.
Using unsafe and pointer element access operator []
To a more akin to C programming, but in C#, use unsafe:
Code:
unsafe
{
    uint value = 0xCAFEFEED;

    // x86 is little-endian, so index 0 of the ushort view holds the
    // low-order word of the value and index 1 holds the high-order word.
    Console.WriteLine("Get low order of {0:X}: {1:X}",
        value, ((ushort*)&value)[0]);
    Console.WriteLine("Get high order of {0:X}: {1:X}",
        value, ((ushort*)&value)[1]);

    ((ushort*)&value)[1] = 0xABAD;
    Console.WriteLine("Set high order to ABAD: {0:X}", value);

    ((ushort*)&value)[0] = 0xFACE;
    Console.WriteLine("Set low order to FACE: {0:X}", value);
}
Output:
Get low order of CAFEFEED: FEED
Get high order of CAFEFEED: CAFE
Set high order to ABAD: ABADFEED
Set low order to FACE: ABADFACE
Using unsafe and pointer member access operator ->
Another unsafe approach, but this time accessing a member from the WordUnion struct declared in a previous example:
Code:
unsafe
{
    uint value = 0xCAFEFEED;

    Console.WriteLine("Get low order of {0:X}: {1:X}",
        value, ((WordUnion*)&value)->Low);
    Console.WriteLine("Get high order of {0:X}: {1:X}",
        value, ((WordUnion*)&value)->High);

    ((WordUnion*)&value)->High = 0xABAD;
    Console.WriteLine($"Set high order to ABAD: {value:X}");

    ((WordUnion*)&value)->Low = 0xFACE;
    Console.WriteLine($"Set low order to FACE: {value:X}");
}
Output:
Get low order of CAFEFEED: FEED
Get high order of CAFEFEED: CAFE
Set high order to ABAD: ABADFEED
Set low order to FACE: ABADFACE
Using the BitConverter class
It simply gets 16 bits (2 bytes, a short/Int16) from the specified number. The offset can be controlled by the second parameter.
Code:
uint value = 0xCAFEFEED;
var low = BitConverter.ToInt16(BitConverter.GetBytes(value), 0);
var high = BitConverter.ToInt16(BitConverter.GetBytes(value), 2);
Console.WriteLine($"Low: {low:X}");
Console.WriteLine($"High: {high:X}");
Output (on a little-endian machine):
Low: FEED
High: CAFE
It's the same as in C/C++:
// get the high order 16 bits
int high = 0x12345678 >> 16; // high = 0x1234
// set the high order 16 bits
high = (high & 0x0000FFFF) + (0x5678 << 16); // high = 0x56781234
EDIT: Because I'm in a good mood, here you go. Just remember, immutable types are immutable! The 'set' functions need to be assigned to something.
public static class ExtensionMethods
{
    public static int LowWord(this int number)
    { return number & 0x0000FFFF; }

    public static int LowWord(this int number, int newValue)
    { return (number & unchecked((int)0xFFFF0000)) + (newValue & 0x0000FFFF); }

    public static int HighWord(this int number)
    { return number & unchecked((int)0xFFFF0000); }

    public static int HighWord(this int number, int newValue)
    { return (number & 0x0000FFFF) + (newValue << 16); }
}
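For example, since int is immutable, the result has to be assigned back (a sketch using the methods above):
int n = 0x12345678;
n = n.HighWord(0xABCD);      // replaces the high word, keeps the low word
Console.WriteLine($"{n:X}"); // ABCD5678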
EDIT 2: On second thoughts, if you really need to do this and don't want the syntax everywhere, use Michael's solution. +1 to him for showing me something new.
I guess you don't want calculations when you want the HiWord/HiByte or the LoWord/LoByte: if a System.Int32 starts at address 100 (so it occupies addresses 100 to 103), you want the two bytes starting at addresses 100 and 101 as the LoWord, and the HiWord is at addresses 102 and 103.
This can be achieved using the class BitConverter. This class doesn't do anything with the bits, it only uses the addresses to return the requested value.
As the sizes of types like int / long differ per platform, and WORD and DWORD are a bit confusing, I use the System types System.Int16/Int32/Int64. No one will ever have any trouble guessing the number of bits in a System.Int32.
With BitConverter you can convert any integer to the array of bytes starting at that location, and convert an array of bytes of the proper length back to the corresponding integer. No calculations needed, and no bits change.
Suppose you have a System.Int32 x (which is a DWORD in old terms):
LOWORD: System.Int16 y = BitConverter.ToInt16(BitConverter.GetBytes(x), 0);
HIWORD: System.Int16 y = BitConverter.ToInt16(BitConverter.GetBytes(x), 2);
The nice thing is that this works with all lengths; you don't have to combine functions like LOBYTE and HIWORD to get, say, the highest byte: HIBYTE(HIWORD(x)) simply becomes BitConverter.GetBytes(x)[3].
Another Alternative
public class Macro
{
    public static short MAKEWORD(byte a, byte b)
    {
        return (short)(((byte)(a & 0xff)) | ((short)((byte)(b & 0xff))) << 8);
    }

    public static byte LOBYTE(short a)
    {
        return (byte)(a & 0xff);
    }

    public static byte HIBYTE(short a)
    {
        return (byte)(a >> 8);
    }

    public static int MAKELONG(short a, short b)
    {
        return ((int)(a & 0xffff)) | (((int)(b & 0xffff)) << 16);
    }

    public static short HIWORD(int a)
    {
        return (short)(a >> 16);
    }

    public static short LOWORD(int a)
    {
        return (short)(a & 0xffff);
    }
}
I use these two functions...
public static int GetHighint(long intValue)
{
    return Convert.ToInt32(intValue >> 32);
}

public static int GetLowint(long intValue)
{
    long tmp = intValue << 32;
    return Convert.ToInt32(tmp >> 32);
}
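For example (a sketch):
long value = 0x123456789ABCDEF0;
Console.WriteLine($"{GetHighint(value):X}"); // 12345678
Console.WriteLine($"{GetLowint(value):X}");  // 9ABCDEF0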