Is there any way to set an int to a negative value using a hexadecimal literal in C#? I checked the specification on Integer literals but it didn't mention anything.
For example:
int a = -1; // Allowed
int b = 0xFFFFFFFF; // Not allowed?
Hexadecimal notation is clearer for my application, and I'd prefer not to use uints because I would need to do some extra casting.
Use the unchecked keyword.
unchecked
{
    int b = (int)0xFFFFFFFF;
}
Or even shorter:
int b = unchecked((int)0xFFFFFFFF);
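For reference, a minimal sketch (my own illustration, not part of the answer) showing that the unchecked cast yields the same value as -1:
int a = -1;
int b = unchecked((int)0xFFFFFFFF);
Console.WriteLine(a == b); // True: both hold the bit pattern 0xFFFFFFFF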
I think you can use -0x1 in C#.
You have to cast it and put it in an unchecked context.
unchecked {
    int i = (int)0xFFFFFFFF;
}
This will get the job done without unchecked operations or casting.
int i = 0xFF << 24 | 0xFF << 16 | 0xFF << 8 | 0xFF;
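A variation on the same idea, in case you want to compose the value from individual byte constants (the helper name FromBytes is hypothetical); shift operations are never overflow-checked in C#, which is why no unchecked context is needed:
// Hypothetical helper: builds an int from four bytes, most significant first.
static int FromBytes(byte b3, byte b2, byte b1, byte b0) =>
    b3 << 24 | b2 << 16 | b1 << 8 | b0;

int i = FromBytes(0xFF, 0xFF, 0xFF, 0xFF); // -1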
You can use the "Convert" class:
string hex = "FF7F";
int myInt = Convert.ToInt32(hex, 16);
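If fromBase is 16, Convert.ToInt32 interprets an eight-digit value with the high-order bit set as a two's-complement negative number, so (assuming I recall the documented behavior correctly) it also answers the original question directly:
int b = Convert.ToInt32("FFFFFFFF", 16);
Console.WriteLine(b); // -1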
Say you have a byte variable, and you assign it the decimal value of 102.
(byte myByteVariable = 102;)
Why is that possible? Shouldn't you have to provide an actual 8 bit value instead to avoid confusion? And how do I set a byte value by supplying bits instead?
If you are setting the byte value explicitly, you can do so using the 0x prefix to indicate that the value is hexadecimal, where each hex digit encodes a group of 4 bits. E.g. the hex equivalent of decimal 102 is 0x66, which corresponds to 0110 0110.
But for supplying bits directly, Marc Gravell provides an option here for converting from binary to int using the Convert class, and similarly you can convert the string representation of a binary value into a byte using the Convert.ToByte method as:
byte b = Convert.ToByte("01100110", 2);
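If you are on C# 7.0 or later (an assumption about your compiler version), binary literals let you write the bits directly without going through a string:
byte b = 0b0110_0110; // 102, written bit by bit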
byte is an integral type that represents a number; it doesn't represent a bit field. So 102 is a normal value for it, because it's in the range of byte values (which is [0; 255]). If you want to manipulate the bits, consider using BitArray or BitVector32.
A byte can store 8 bits (that is values from 0 to 255). A short stores 16 bits, an int 32, etc. Each integer type in C# simply allows the storage of a wider range of numbers. C# allows you to assign a whole number to any of these types.
In order to set each bit individually, you will need to use bitwise operators.
I actually wrote a library a while ago to handle this, allowing you to set each bit of the byte using data[x]. It is very similar to the BitArray class (which I'm not sure I knew about when I made it).
The main idea of it is:
private byte data;

public void SetBit(int index, bool value)
{
    if (value)
        data = (byte)(data | (1 << index));  // turn the bit at 'index' on
    else
        data = (byte)(data & ~(1 << index)); // turn the bit at 'index' off
}

public bool GetBit(int index)
{
    return ((data & (1 << index)) != 0);     // true if the bit at 'index' is set
}
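A quick usage sketch, assuming the field and methods above live in a small wrapper class (the class name BitByte is hypothetical):
var bits = new BitByte();
bits.SetBit(1, true);              // data is now 0000 0010
bits.SetBit(2, true);              // data is now 0000 0110
Console.WriteLine(bits.GetBit(1)); // True
Console.WriteLine(bits.GetBit(0)); // False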
I have the following C++ code which needs to be converted to C#:
char tempMask;
tempMask = 0xff;
I am not too sure how to convert this. Could someone please let me know what the following line does in C++?
tempMask = 0xff;
Thanks.
It initializes tempMask with a byte containing the value 0xFF.
In C#, you can write
tempMask = (char)0xff;
or
tempMask = Convert.ToChar(0xff);
It's just a simple initialisation of the variable tempMask using hexadecimal notation. http://en.wikipedia.org/wiki/Hexadecimal
0xFF = 15*16^1 + 15*16^0 = 255.
In C# you cannot initialize a char with an int value without an explicit conversion, but you can write char tempMask = (char)0xFF;. Or, if you really want an 8-bit integer, try byte tempMask = 0xFF;.
A signed char in C/C++ is not a 'char' in C#; it's an 'sbyte'. Note, however, that the constant 0xFF (255) is outside the sbyte range, so the assignment still needs an unchecked cast:
sbyte tempMask = unchecked((sbyte)0xff); // the bit pattern 0xFF is -1 as an sbyte
static long[] Mask1=
{
0xFF, 0xFF00, 0xFF0000, 0xFF000000,
0xFF00000000, 0xFF0000000000, 0xFF000000000000, 0xFF00000000000000
};
static long[] Mask2 =
{
0x101010101010101, 0x202020202020202, 0x404040404040404, 0x808080808080808,
0x1010101010101010, 0x2020202020202020, 0x4040404040404040, 0x8080808080808080
};
In Java, I can put these values into long[] just fine.
But in C#, I get that they cannot be 'converted to a long'.
I can only use a ulong in C#, but why, when a Java long and a C# long have the same capacity? -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, no?
Is there an extra specifier I need for the masks?
Specifically my errors are:
"Constant value `18374686479671623680' cannot be converted to a `long'"
Constant value `9259542123273814144' cannot be converted to a `long'
long.MaxValue = 9223372036854775807
your value 1 = 18374686479671623680
your value 2 = 9259542123273814144
Your values are bigger than the signed long can hold. This is why you have to use unsigned.
You have to use an 'unchecked' cast in C# for the out-of-range values:
static long[] Mask1 = {0xFF, 0xFF00, 0xFF0000, 0xFF000000, 0xFF00000000, 0xFF0000000000, 0xFF000000000000, unchecked((long)0xFF00000000000000)};
static long[] Mask2 = {0x101010101010101, 0x202020202020202, 0x404040404040404, 0x808080808080808, 0x1010101010101010, 0x2020202020202020, 0x4040404040404040, unchecked((long)0x8080808080808080)};
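If writing out the out-of-range literals feels error-prone, another option (my own suggestion, not part of the answer) is to generate the same byte-lane masks with shifts, which are never overflow-checked:
using System.Linq;

// Mask1[i] selects byte lane i; Mask2[i] selects bit i of every byte.
static readonly long[] Mask1 =
    Enumerable.Range(0, 8).Select(i => 0xFFL << (8 * i)).ToArray();
static readonly long[] Mask2 =
    Enumerable.Range(0, 8).Select(i => 0x0101010101010101L << i).ToArray();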
I have an integer value. I want to convert it to a base-64 value. I tried the following code:
byte[] b = BitConverter.GetBytes(123);
string str = Convert.ToBase64String(b);
Console.WriteLine(str);
It gives the output "ewAAAA==", which has 8 characters.
I convert the same value to base 16 as follows:
int decvalue = 123;
string hex = decvalue.ToString("X");
Console.WriteLine(hex);
The output of the previous code is 7B.
If we do this in maths, the outcomes are the same. How does it differ here? How can I get the same value in base 64 as well? (I found the above base-64 conversion on the internet.)
The question is rather unclear... "How does it differ?" Well, in many different ways:
one is base-16, the other is base-64 (hence they are fundamentally different anyway)
one is doing an arithmetic representation; one is a byte serialization format - very different
one is using little-endian arithmetic (assuming a standard CPU), the other is using big-endian arithmetic
To get a comparable base-64 result, you probably need to code it manually (since Convert only supports bases 2, 8, 10 and 16 for arithmetic conversions). Perhaps (note: not optimized):
using System;
using System.Text;

class Program
{
    static void Main()
    {
        string b64 = ConvertToBase64Arithmetic(123);
        Console.WriteLine(b64); // prints "B7"
    }

    // uint because I don't care to worry about sign
    static string ConvertToBase64Arithmetic(uint i)
    {
        const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        StringBuilder sb = new StringBuilder();
        do
        {
            sb.Insert(0, alphabet[(int)(i % 64)]);
            i = i / 64;
        } while (i != 0);
        return sb.ToString();
    }
}
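To make the endianness point above concrete, here is a small comparison of the two representations of 123 (my own illustration, reusing the method above):
byte[] bytes = BitConverter.GetBytes(123);         // { 0x7B, 0x00, 0x00, 0x00 } on a little-endian CPU
Console.WriteLine(Convert.ToBase64String(bytes));  // "ewAAAA==" (serializes all four bytes)
Console.WriteLine(ConvertToBase64Arithmetic(123)); // "B7" (arithmetic digits, most significant first)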
What can be the reason for the problem? My method returns incorrect int values. When I give it a hex value of AB or DC or something similar, it returns int = 0, but when I give it hex = 22 it returns int = 22 (though int should be 34 in this case).
public int StatusBit(int Xx, int Rr)
{
    int Number;
    int.TryParse(GetX(Xx, Rr), out Number);
    return Number;
}
I tried to use Number = Convert.ToInt32(GetX(Xx, Rr)); but it gives the same result, except null instead of 0 for anything that includes letters.
Use Convert.ToInt32(string, int) instead. That way you can specify the base the number should be interpreted in. E.g.
return Convert.ToInt32(GetX(Xx, Rr), 16);
(You also don't check the return value of TryParse which would give a hint that the parse failed.)
If you expect both decimal and hexadecimal numbers you need to branch according to how the number looks and use either base 10 or base 16. E.g. if your hexadecimal numbers always start with 0x you could use something along the following lines:
string temp = GetX(Xx, Rr);
return Convert.ToInt32(temp, temp.StartsWith("0x") ? 16 : 10);
But that would depend on how (if at all) you would distinguish the two. If everything is hexadecimal then there is no such need, of course.
Use NumberStyles.HexNumber:
using System;
using System.Globalization;

class Test
{
    static void Main()
    {
        string text = "22";
        int value;
        int.TryParse(text, NumberStyles.HexNumber,
                     CultureInfo.InvariantCulture, out value);
        Console.WriteLine(value); // Prints 34
    }
}
Do you really want to silently return 0 if the value can't be parsed, by the way? If not, use the return value of int.TryParse to determine whether the parsing succeeded or not. (That's the reason it's returning 0 for "AB" in your original code.)
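A sketch of what acting on the return value might look like in the original method (GetX, Xx and Rr come from the question; the usings are the same as in the example above, and the choice of exception is mine):
public int StatusBit(int Xx, int Rr)
{
    string text = GetX(Xx, Rr);
    int number;
    if (!int.TryParse(text, NumberStyles.HexNumber,
                      CultureInfo.InvariantCulture, out number))
    {
        throw new FormatException("'" + text + "' is not a valid hex value.");
    }
    return number;
}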
int.TryParse parses a base-10 integer.
Use Convert.ToUInt32(hex, 16) instead.
Here is my solution:
kTemp = int.Parse(xcc, System.Globalization.NumberStyles.HexNumber);
Above, kTemp is an integer and xcc is a string.
xcc can be anything like FE, 10BA, FE0912... That is to say, xcc is a string of hex characters of any length.
Beware: I don't get the 0x prefix with my hex strings.
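If a 0x prefix could ever appear in the input, NumberStyles.HexNumber will not accept it, so a small guard like this (my own addition) strips it first:
string s = xcc.StartsWith("0x", StringComparison.OrdinalIgnoreCase) ? xcc.Substring(2) : xcc;
int kTemp = int.Parse(s, System.Globalization.NumberStyles.HexNumber);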