I have an enum declaration like this:
public enum Filter
{
    a = 0x0001,
    b = 0x0002
}
What does that mean? They are using this to filter an array.
It means they're the integer values assigned to those names. Enums are basically just named numbers. You can cast between the underlying type of an enum and the enum value.
For example:
public enum Colour
{
    Red = 1,
    Blue = 2,
    Green = 3
}
Colour green = (Colour) 3;
int three = (int) Colour.Green;
By default an enum's underlying type is int, but you can use any of byte, sbyte, short, ushort, int, uint, long or ulong:
public enum BigEnum : long
{
    BigValue = 0x5000000000 // Couldn't fit this in an int
}
It just means that if you use Filter.a, you get 1, and Filter.b is 2.
The weird hex notation is just that, notation.
EDIT:
Since this is a 'filter' the hex notation makes a little more sense.
By writing 0x1, you specify the following bit pattern:
0000 0001
And 0x2 is:
0000 0010
This makes it clearer how to use a filter.
So for example, if you wanted to filter data that has the lower 2 bits set, you could do:
Filter.a | Filter.b
which would correspond to:
0000 0011
The hex notation makes the concept of a filter clearer (for some people). For example, it's relatively easy to figure out the binary of 0x83F0 by looking at it, but much more difficult for 33776 (the same number in base 10).
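To make that concrete, here is a minimal sketch (my own addition, not from the question) of combining and testing these masks; the [Flags] attribute and the Demo class are purely illustrative:

using System;

[Flags]
public enum Filter
{
    a = 0x0001, // 0000 0001
    b = 0x0002  // 0000 0010
}

class Demo
{
    static void Main()
    {
        Filter mask = Filter.a | Filter.b;   // 0000 0011
        Filter value = Filter.b;             // some incoming value to test

        // Keep only the bits the mask cares about, then test the result.
        if ((value & mask) != 0)
        {
            Console.WriteLine("At least one of the masked bits is set.");
        }
    }
}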
It's not clear what it is that you find unclear, so let's discuss it all:
The enum values have been given explicit numerical values. Each enum value is always represented as a numerical value for the underlying storage, but if you want to be sure what that numerical value is you have to specify it.
The numbers are written in hexadecimal notation, this is often used when you want the numerical values to contain a single set bit for masking. It's easier to see that the value has only one bit set when it's written as 0x8000 than when it's written as 32768.
In your example it's not as obvious as you have only two values, but for bit filtering each value represents a single bit so that each value is twice as large as the previous:
public enum Filter {
    First = 0x0001,
    Second = 0x0002,
    Third = 0x0004,
    Fourth = 0x0008
}
You can use such an enum to filter out single bits in a value:
// Assuming num is an int holding the combined flags:
if ((num & (int)Filter.First) != 0 && (num & (int)Filter.Third) != 0) {
    Console.WriteLine("First and third bits are set.");
}
It could mean anything. We need to see more code than that to be able to understand what it's doing.
0x001 is the number 1. Anytime you see the 0x it means the programmer has entered the number in hexadecimal.
Those are literal hexadecimal numbers.
The main reason is readability: hex notation is easier to read when you are writing numbers of the form "2 to the power of x".
To use an enum type as a bit flag, the enum values need to go up by powers of 2:
1, 2, 4, 8, 16, 32, 64, etc. To keep that readable, hex notation is used.
For example, 2^16 is 0x10000 in hex (neat and clean), but it is written 65536 in ordinary decimal notation. The same goes for 0x200 (hex notation) versus 512 (2^9).
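As a small aside (my own illustration, not part of the original answer), bit shifts are yet another readable way to spell the same powers of two:

using System;

class PowersOfTwo
{
    static void Main()
    {
        // Three spellings of the same values: decimal, hex, and a shift.
        Console.WriteLine(512 == 0x200 && 0x200 == 1 << 9);        // True
        Console.WriteLine(65536 == 0x10000 && 0x10000 == 1 << 16); // True
    }
}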
Those look like they are bit masks of some sort. But their actual values are 1 and 2...
You can assign values to enums such as:
enum Example {
    a = 10,
    b = 23,
    c = 0x00FF
}
etc...
Using hexadecimal notation like that usually indicates that there may be some bit manipulation. I've often used this notation when dealing with exactly that, for the very reason you asked this question: the notation sort of pops out at you and says "Pay attention to me, I'm important!"
Well, we could use plain integers, and in fact we could omit the values entirely, since by default an enum assigns 0 to its first member and an incremented value to each following member. Many developers use hex values here to hit two targets with one arrow:
1. It complicates the code, making it more difficult to understand for the uninitiated.
2. It supposedly performs faster, because hex codes look closer to the binary representation (the compiled value is the same either way).
My view is that if we are still going to work at that level, why stay in a fourth-generation language at all; we might as well move back to binary. Still, it is quite a good technique when playing with bits and with encryption/decryption work.
Related
Whilst reading over some existing code I came across:
public static readonly int MaxSize = 0x1000;
Which made me wonder why use a hex literal. In this context MaxSize is used for pagination.
The closest I came to was:
Hex numbers are a convenient way of expressing integral values,
denoting exactly the bits stored in memory for that integer.
https://csharp.2000things.com/2010/08/28/72-hexadecimal-numbers/
Which makes sense to a degree; I'd be interested in hearing a more detailed explanation for this use case, in particular "denoting exactly the bits stored in memory".
In some cases a hex value is "rounder" and easier to understand than its decimal equivalent, like 0x0FFF, 0xA0A0, 0x10000, etc.
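For the pagination case specifically, here is a hedged sketch (the names and numbers are hypothetical, not from the quoted code) of why a power-of-two size like 0x1000 is convenient: the page index and the offset within a page fall out of simple division and masking.

using System;

class Paging
{
    // 0x1000 = 4096 = 2^12, so one page covers exactly the low 12 bits.
    const int MaxSize = 0x1000;

    static void Main()
    {
        int position = 0x2A7F;                // some absolute position

        int pageIndex  = position / MaxSize;  // same as position >> 12
        int pageOffset = position % MaxSize;  // same as position & 0xFFF

        Console.WriteLine($"page {pageIndex}, offset 0x{pageOffset:X}");
        // page 2, offset 0xA7F
    }
}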
Now I know that converting an int to hex is simple, but I have an issue here.
I have an int that I want to convert to hex and then add another hex value to it.
The simple solution is int.ToString("X"), but after my int is turned to hex it is also turned into a string, so I can't add anything to it until it is turned back into an int again.
So my question is: is there a way to turn an int to hex without it also being turned into a string? I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
I mean a quick way such as int.ToString("X"), but without the int being turned into a string.
No.
Look at it this way. What is the difference between these two?
var i = 10;
var i = 0xA;
As values, they are exactly the same. As representations, the first is decimal notation and the second is hexadecimal notation. The X you use is the hexadecimal format specifier, which generates the hexadecimal representation of that numeric value.
Be aware that you can parse this hexadecimal string back to an integer any time you want.
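A minimal round-trip sketch (my own illustration, not from the answer):

using System;
using System.Globalization;

class HexRoundTrip
{
    static void Main()
    {
        int value = 255;

        string hex = value.ToString("X");                    // "FF"
        int back1  = Convert.ToInt32(hex, 16);               // 255
        int back2  = int.Parse(hex, NumberStyles.HexNumber); // 255

        Console.WriteLine($"{hex} -> {back1}, {back2}");
    }
}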
C# convert integer to hex and back again
There is no need to convert. The number ten is ten whether you write it in binary or hex; the representations differ depending on the base you write them in, but the value is the same. So just add another integer to your integer, and convert the final result to a hex string when you need it.
Take an example. Assume you have
int x = 10 + 10; // answer is 20 or 0x14 in Hex.
Now, if you instead wrote
int x = 0x0A + 0x0A; // x == 0x14
The result would still be 0x14. See?
Decimal 10 and hex 0x0A have the same value; they are just written in different bases.
A hexadecimal string, though, is a different beast.
In the above case that would be "0x14".
For the computer this would be stored as '0', 'x', '1', '4': four separate characters (or the bytes representing those characters in some encoding), whereas an integer is stored as a single binary-encoded value.
I guess you are missing the point of what hex and int are. Both represent numbers: 1, 2, 3, 4, and so on. Decimal and hexadecimal are just two ways of looking at the same numbers. For example, 5 + 5 is 10 in decimal and A in hex, but it is the same number; only the way you view it differs.
Hex is just a way to represent a number; the same is true of the decimal and binary systems. With the exception of some custom-made number types (BigNums etc.), everything integral (by integer I mean not a floating-point number) is stored as binary anyway. What you really want is probably to perform calculations on integers and then print them as hex, which has already been described in this topic: C# convert integer to hex and back again
The short answer: no, and there is no need.
The integer one hundred and seventy-nine (179) is B3 in hex, 179 in base-10, 10110011 in base-2 and 20122 in base-3. The base of the number doesn't change its value: B3, 179, 10110011 and 20122 are all the same number, just represented differently. So it doesn't matter what base they are written in; as long as you do your mathematical operations on the numbers themselves, the base is irrelevant.
So in your case with hex numbers: they can contain characters such as 'A', 'B', 'C' and so on, so when a value's hex representation contains a letter, that representation has to be a string, because letters are not ints. To do what you want, it is best to convert both numbers to regular ints, do the math, and convert to hex afterwards. The reason is that if you want to add (or do whatever operation on) values that still look like hex, you would need to change the behaviour of the desired operator on strings, which is a hassle.
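As a short sketch of that suggestion (my own illustration, with made-up values):

using System;
using System.Globalization;

class AddHexStrings
{
    static void Main()
    {
        string hexA = "1A"; // 26
        string hexB = "0A"; // 10

        // Parse both hex strings to ints, do the math, format back at the end.
        int a = int.Parse(hexA, NumberStyles.HexNumber);
        int b = int.Parse(hexB, NumberStyles.HexNumber);

        int sum = a + b;                      // 36
        Console.WriteLine(sum.ToString("X")); // "24"
    }
}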
I have a 32 bit int and I want to address only the lower half of this variable. I know I can convert to bit array and to int16, but is there any more straight forward way to do that?
If you want only the lower half, you can just cast it: (Int16)my32BitInt
In general, if you're extending/truncating bit patterns like this, then you do need to be careful about signed types - unsigned types may cause fewer surprises.
As mentioned in the comments - if you've enclosed your code in a 'checked' context, or changed your compiler options so that the default is 'checked', then you can't truncate a number like this without an exception being thrown if there are any non-zero bits being discarded - in that situation you'd need to do:
(UInt16)(my32BitInt & 0xffff)
(The option of using signed types is gone in this case, because you'd have to use & 0x7fff which then preserves only 15 bits)
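A small self-contained sketch (my own, with made-up values) contrasting the plain truncating cast with the masked unsigned cast described above:

using System;

class LowerHalf
{
    static void Main()
    {
        int my32BitInt = 0x1234F000;

        // Truncating cast keeps the low 16 bits; as a signed short, 0xF000 reads as -4096.
        short lowSigned = unchecked((short)my32BitInt);     // -4096

        // Mask first, then cast to an unsigned 16-bit value: 0xF000 reads as 61440.
        // This also works in a checked context, because the masked value always fits.
        ushort lowUnsigned = (ushort)(my32BitInt & 0xFFFF); // 61440

        Console.WriteLine($"{lowSigned}, {lowUnsigned}");
    }
}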
Just use this function:
Convert.ToInt16()
or just:
(Int16)valueAsInt
You can use an explicit conversion (a cast) to Int16, like:
(Int16)2;
but be careful when you do that, because Int16 can't hold all possible Int32 values.
For example, this won't work:
(Int16)2147483647;
because Int16 can hold 32767 as its maximum value. You can use the unchecked (C# Reference) keyword in such cases.
If you force an unchecked operation, a cast should work:
int r = 0xF000001;
short trimmed = unchecked((short) r);
This will truncate the value of r to fit in a short.
If the value of r should always fit in a short, you can do the cast in a checked context (or use Convert.ToInt16) and let an exception be thrown if it ever doesn't.
If you need a 16-bit value and you happen to know something specific, such as that the number will never be less than zero, you could use a UInt16 value. That conversion looks like:
int x = 0;
UInt16 value = (UInt16)x;
This gives you the full positive 16-bit range (0 to 65535), rather than topping out at 32767 as Int16 does.
Well, first, make sure you actually want to have the value signed. uint and ushort are there for a reason. Then:
ushort ret = (ushort)(val & ((1 << 16) - 1));
Because I needed to look at some methods in BigInteger, I DotPeeked into the assembly. And then I found something rather odd:
internal int _sign;
Why would you use an int for the sign of a number? Is there no reason, or is there something I'm missing? I mean, they could have used a BitArray, or a bool, or a byte. Why an int?
If you look at some of the usages of _sign field in the decompiled code, you may find things like this:
if ((this._sign ^ other._sign) < 0)
return this._sign >= 0 ? 1 : -1;
Basically the int type lets you compare the signs of two values in a single arithmetic operation (here an XOR, though multiplying the signs would work too: the result is negative exactly when the signs differ). Obviously neither byte nor bool would allow this.
Still there is a question: why not Int16 then, as it would consume less memory? This is perhaps connected with alignment.
Storing the sign as an int allows you to simply multiply by the sign to apply it to the result of a calculation. This could come in handy when converting to simpler types.
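For instance (a hypothetical sketch, not the actual BigInteger source), applying or combining signs becomes a plain multiplication when the sign is stored as -1, 0 or +1:

using System;

class SignDemo
{
    // Hypothetical helpers; sign values are -1, 0 or +1, as in the decompiled field.
    static long ApplySign(long magnitude, int sign) => magnitude * sign;
    static int CombineSigns(int leftSign, int rightSign) => leftSign * rightSign;

    static void Main()
    {
        Console.WriteLine(ApplySign(42, -1));    // -42
        Console.WriteLine(CombineSigns(-1, -1)); // 1: negative times negative is positive
    }
}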
A bool can have only 2 states. The advantage of an int is that it is now also simple to keep track of the special value 0:
public bool get_IsZero()
{
    return (this._sign == 0);
}
And several more shortcuts like that when you read the rest of the code.
The size of any class object is going to be rounded up to a multiple of 32 bits (four bytes), so "saving" three bytes won't buy anything. One might be able to shave four bytes off the size of a typical BigInteger by stealing a bit from one of the words that holds the numeric value, but the extra processing required for such usage would outweigh the cost of wasting a 32-bit integer.
A more interesting possibility might be to have BigInteger be an abstract class, with derived classes PositiveBigInteger and NegativeBigInteger. Since every class object is going to have a word that says what class it is, such an approach would save 32 bits for each BigInteger that's created. Use of an abstract class in such fashion would add an extra virtual member dispatch to each function call, but would likely save an "if" test on most of them (since the methods of e.g. NegativeBigInteger would know by virtue of the fact that they are invoked that this is negative, they wouldn't have to test it). Such a design could also improve efficiency if there were classes for TinyBigInteger (a BigInteger whose value could fit in a single Integer) and SmallBigInteger (a BigInteger whose value could fit in a Long). I have no idea if Microsoft considered such a design, or what the trade-offs would have been.
Gets a number that indicates the sign (negative, positive, or zero) of the current System.Numerics.BigInteger object.
-1: The value of this object is negative.
0: The value of this object is 0 (zero).
1: The value of this object is positive.
That means
using System;
using System.Numerics;

class Program
{
    static void Main(string[] args)
    {
        BigInteger bInt1 = BigInteger.Parse("0");
        BigInteger bInt2 = BigInteger.Parse("-5");
        BigInteger bInt3 = BigInteger.Parse("5");

        division10(bInt1); // it is Impossible
        division10(bInt2); // it is Possible : -2
        division10(bInt3); // it is Possible : 2
    }

    static void division10(BigInteger bInt)
    {
        double d = 10;
        if (bInt.IsZero)
        {
            Console.WriteLine("it is Impossible");
        }
        else
        {
            Console.WriteLine("it is Possible : {0}", d / (int)bInt);
        }
    }
}
Don't use sbyte, ushort or another unsigned type for something like this: the CLS (Common Language Specification) doesn't support them, so they are not CLS-compliant.
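A minimal sketch of that point (my own example, not from the answer): with assembly-level CLS compliance turned on, the compiler warns when a publicly exposed member uses a non-CLS-compliant type such as uint.

using System;

[assembly: CLSCompliant(true)]

public class Number
{
    // Warning CS3003: the type of this public field is not CLS-compliant.
    public uint UnsignedSign;

    // Fine: int is CLS-compliant.
    public int Sign;
}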
Part of my application data contains a set of 9 ternary (base-3) "bits". To keep the data compact for the database, I would like to store that data as a single short. Since 3^9 < 2^15 I can represent any possible 9 digit base-3 number as a short.
My current method is to work with it as a string of length 9. I can read or set any digit by index, and it is nice and easy. To convert it to a short though, I am currently converting to base 10 by hand (using a shift-add loop) and then using Int16.Parse to convert it back to a binary short. To convert a stored value back to the base 3 string, I run the process in reverse. All of this takes time, and I would like to optimize it if at all possible.
What I would like to do is always store the value as a short, and read and set ternary bits in place. Ideally, I would have functions to get and set individual digits from the binary in place.
I have tried playing with some bit shifts and mod functions, but haven't quite come up with the right way to do this. I'm not even sure it is possible without going through the full conversion.
Can anyone give me any bitwise arithmetic magic that can help out with this?
public class Base3Handler
{
    // Powers of three: idx[i] == 3^i, covering digits 0..8 of a 9-digit base-3 number.
    private static int[] idx = {1, 3, 9, 27, 81, 243, 729, 729*3, 729*9, 729*27};

    public static byte ReadBase3Bit(short n, byte position)
    {
        if (position > 8) // position is a byte, so it can never be negative
            throw new Exception("Out of range...");
        // Strip the higher digits with %, then divide away the lower ones.
        return (byte)((n % idx[position + 1]) / idx[position]);
    }

    public static short WriteBase3Bit(short n, byte position, byte newBit)
    {
        // Replace one digit by adding the (signed) difference scaled by that digit's weight.
        byte oldBit = ReadBase3Bit(n, position);
        return (short)(n + (newBit - oldBit) * idx[position]);
    }
}
These are small numbers. Store them as you wish, efficiently in memory, but then use a table lookup to convert from one form to another as needed.
You can't do bit operations on ternary values. You need to use multiply, divide and modulo to extract and combine values.
To use bit operations you need to limit the packing to 8 ternaries per short (i.e. 2 bits each)
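A small sketch (my own, not from the answer) of that last suggestion: spending 2 bits per ternary digit wastes one code point per digit, but it lets you use plain shifts and masks, and 8 digits still fit in 16 bits.

using System;

class PackedTernary
{
    // 8 digits * 2 bits = 16 bits, so they fit in a ushort.
    static int ReadDigit(ushort packed, int position)
    {
        return (packed >> (2 * position)) & 0b11;
    }

    static ushort WriteDigit(ushort packed, int position, int digit) // digit is 0, 1 or 2
    {
        int cleared = packed & ~(0b11 << (2 * position));
        return (ushort)(cleared | (digit << (2 * position)));
    }

    static void Main()
    {
        ushort packed = 0;
        packed = WriteDigit(packed, 3, 2);
        packed = WriteDigit(packed, 0, 1);
        Console.WriteLine(ReadDigit(packed, 3)); // 2
        Console.WriteLine(ReadDigit(packed, 0)); // 1
    }
}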