I have a hexadecimal number written in a text file, and I need to check a condition on it in an if-else. For example, the Start number written in the text file is 1100 and the End number is 10FF. The Start number, 1100, is the End number plus 1; this increment is done by another system.
In my case, my system should proceed to the next process after reading the numbers from the text file.
This is my code:
var data = File
.ReadAllLines(Path)
.Select(x => x.Split('='))
.Where(x => x.Length > 1)
.ToDictionary(x => x[0].Trim(), x => x[1]);
Console.WriteLine("Start: {0}", data["Start"]);
Console.WriteLine("End: {0}", data["End"]);
if (data["Start"] == data["End"]+1)
{
//it will proceed to next process
}
else
{
//prompt not meet end number
}
The problem is that the check if (data["Start"] == data["End"]+1) does not work. How can I resolve this? Do I need to convert the hexadecimal numbers to int first?
In C#, if you concatenate a string with a number, the number is converted to a string and appended to the end of the original string.
If you want to perform math on your numbers, you need to convert them to the correct data type first (in your case, integer).
To do this, you can use one of these commands:
if (Convert.ToInt32(data["Start"], 16) == Convert.ToInt32(data["End"], 16) + 1)
or
if (int.Parse(data["Start"], NumberStyles.HexNumber) == int.Parse(data["End"], NumberStyles.HexNumber) + 1)
They will convert your string containing a hexadecimal number into an integer, and from then on it will behave as a number (addition will work as expected, for example).
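Putting it together, a minimal sketch using the values from the question (Start = 1100, End = 10FF; the variable names here are stand-ins for data["Start"] and data["End"]):

```csharp
using System;
using System.Globalization;

// Hypothetical values standing in for data["Start"] and data["End"].
string start = "1100";
string end = "10FF";

int startValue = Convert.ToInt32(start, 16);            // 4352
int endValue = int.Parse(end, NumberStyles.HexNumber);  // 4351

if (startValue == endValue + 1)
    Console.WriteLine("Proceed to next process");
else
    Console.WriteLine("Start is not End + 1");
```

This prints "Proceed to next process", since 0x1100 is exactly 0x10FF + 1.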
"10FF" is not 0x10FF.
In C#, the fact that a string happens to contain text that can be parsed as a hexadecimal number (or any number, for that matter) doesn't mean it will be implicitly converted to that number.
In fact, it's the other way around: C# will implicitly convert the number to a string when using the + operator between a string and a number, so "10FF" + 1 results in "10FF1".
Note that I started this paragraph with "In C#", because other languages might not follow the same rules. In T-SQL, for instance, implicit conversions from varchar to int happen all the time, and they are a very common "gotcha" for inexperienced devs.
So you need to convert your strings to ints, as Lasse V. Karlsen wrote in the comments.
You can either do that by using Convert.ToInt32(string, 16) or by using int.Parse(str, NumberStyles.HexNumber) - if you're sure that the text will always contain the string representation of a hexadecimal number.
For text that you're not sure about, it's better to use int.TryParse to avoid the risk of a FormatException - but note that TryParse returns bool and the int value is returned via an out parameter: int.TryParse(str, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out var val)
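A short sketch of that TryParse pattern (the input strings here are made up for illustration):

```csharp
using System;
using System.Globalization;

string good = "10FF";  // valid hex
string bad = "XYZ";    // not hex

if (int.TryParse(good, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out int val))
    Console.WriteLine(val);               // 4351

if (!int.TryParse(bad, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out _))
    Console.WriteLine("not a hex number");
```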
Related
I have a number
int number = 509; // integer
string bool_number = Convert.ToString(number, 2); // same integer converted to a binary string
I want to bitwise or this number with hex values 0x01, 0x02, 0x04 and 0x08.
(e.g. something like this)
result = number | 0x01
How can I do it? Should I convert the number to hex form, or what's the right way?
You can use hexadecimal values as numeric literals...
int number = 509;
int hexNumber = 0x02;
int newNumber = number | hexNumber;
// whatever
string newNumberAsBinaryString = Convert.ToString(newNumber, 2);
Console.WriteLine(newNumber);
// etc.
If you need to input a hex string and convert it to a numeric type:
int num = Int32.Parse(hexString, System.Globalization.NumberStyles.HexNumber);
If you need to output a numeric type as hex:
Console.WriteLine(num.ToString("x"));
// or
Console.WriteLine("{0:x}", num);
See also MSDN's page on dealing with hex strings.
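For completeness, the common format specifiers for hex output, using 509 = 0x1FD as the sample value:

```csharp
using System;

int num = 509;
Console.WriteLine(num.ToString("x"));   // 1fd
Console.WriteLine(num.ToString("X4"));  // 01FD (uppercase, padded to 4 digits)
Console.WriteLine($"{num:x}");          // 1fd
```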
An int value isn't in any particular base. You can use bitwise operators on an int at any time - there's no need to convert it first. For example:
int a = 509;
int b = 0x1fd;
The variables a and b have exactly the same value here. I happen to have used a decimal literal to initialize a, and a hex literal to initialize b, but the effect is precisely the same.
So you can bitwise OR your ints at any time. Your example (adding a suitable declaration and semicolon to make it compile):
int result = number | 0x01;
will work just fine - you don't need to do anything to prepare number for this sort of usage. (Incidentally, this will do nothing, because the result of a bitwise OR of the numbers 509 and 1 is 509. If you write 509 in binary you get 111111101 - the bottom bit is already 1, so ORing in 1 won't change anything.)
You should avoid thinking in terms of things like "hex values", because there isn't really any such thing in C#. Numeric bases are only relevant for numbers represented as strings, which typically means either literals in source code, or conversions done at runtime. For example, if your program accepts a number as a command line argument, then that will arrive as a string, so you'll need to know its base to convert it correctly to an int. But once it's an int it's just an int - there's no such thing as a hex value or a decimal value for an int.
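The point above can be checked directly: 509 and 0x1fd are the same int, and the base of the literal has no effect on bitwise operations:

```csharp
using System;

int a = 509;     // decimal literal
int b = 0x1fd;   // hex literal, same value
Console.WriteLine(a == b);    // True
Console.WriteLine(a | 0x01);  // 509 (low bit already set)
Console.WriteLine(a | 0x02);  // 511
```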
C++ and Java both include the [-]0xh.hhhhp+/-d format in the syntax of the language; other languages like Python and C99 have library support for parsing these strings (float.fromhex, scanf).
I have not yet found a way to parse this exact hex-encoded exponential format in C# or using the .NET libraries.
Is there a good way to handle this, or a decent alternative encoding? (decimal encoding is not exact).
Example strings:
0x1p-8
-0xfe8p-12
Thank you
Unfortunately, I don't know of any method built-in to .NET that compares to Python's float.fromhex(). So I suppose the only thing you can do is roll your own .fromhex() in C#. This task can range in difficulty from "Somewhat Easy" to "Very Difficult" depending on how complete and how optimized you'd like your solution to be.
Officially, the IEEE 754 spec allows for decimals within the hexadecimal coefficient (i.e. 0xf.e8p-12), which adds a layer of complexity for us since (much to my frustration) .NET also does not support Double.Parse() for hexadecimal strings.
If you can constrain the problem to examples like you've provided where you only have integers as the coefficient, you can use the following solution using string operations:
public static double Parsed(string hexVal)
{
    int index = 0;
    int sign = 1;
    double exponent = 0d;

    // Check sign
    if (hexVal[index] == '-')
    {
        sign = -1;
        index++;
    }
    else if (hexVal[index] == '+')
        index++;

    // Consume "0x"
    if (hexVal[index] == '0')
    {
        if (hexVal[index + 1] == 'x' || hexVal[index + 1] == 'X')
            index += 2;
    }

    int coeff_start = index;
    int coeff_end = hexVal.Length - coeff_start;

    // Check for exponent
    int p_index = hexVal.IndexOfAny(new char[] { 'p', 'P' });
    if (p_index == 0)
        throw new FormatException("No Coefficient");
    else if (p_index > -1)
    {
        coeff_end = p_index - index;
        int exp_start = p_index + 1;
        int exp_end = hexVal.Length;
        exponent = Convert.ToDouble(hexVal.Substring(exp_start, exp_end - exp_start));
    }

    var coeff = (double)Int32.Parse(hexVal.Substring(coeff_start, coeff_end), NumberStyles.AllowHexSpecifier);
    var result = sign * (coeff * Math.Pow(2, exponent));
    return result;
}
If you're seeking an identical function to Python's fromhex(), you can try your hand at converting the CPython implementation into C# if you'd like. I tried, but got in over my head as I'm not very familiar with the standard and had trouble following all the overflow checks they were looking out for. They also allow other things like unlimited leading and trailing whitespace, which my solution does not allow for.
My solution is the "Somewhat Easy" solution. I'm guessing if you really knew your stuff, you could build the sign, exponent and mantissa at the bit level instead of multiplying everything out. You could definitely do it in one pass as well, rather than cheating with the .Substring() methods.
Hopefully this at least gets you on the right track.
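As a sanity check on the example strings from the question, the expected values can be computed by hand, since the format is just a hex coefficient times a power of two:

```csharp
using System;

// 0x1p-8: coefficient 0x1 = 1, binary exponent -8
double v1 = 1 * Math.Pow(2, -8);
Console.WriteLine(v1);   // 0.00390625

// -0xfe8p-12: coefficient 0xfe8 = 4072, binary exponent -12
double v2 = -(Convert.ToInt32("fe8", 16) * Math.Pow(2, -12));
Console.WriteLine(v2);   // -0.994140625
```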
I have written C# code for formatting and parsing numbers in the hexadecimal floating-point format described in IEEE 754r and supported by C99, C++11 and Java. The code is part of the BSD-licenced FParsec library for F# and is contained in a single file:
https://bitbucket.org/fparsec/main/src/tip/FParsecCS/HexFloat.cs
The supported format is described a bit at http://www.quanttec.com/fparsec/reference/charparsers.html#members.floatToHexString
The test code (written in F#) can be found at https://bitbucket.org/fparsec/main/src/tip/Test/HexFloatTests.fs
I have an int[]:
RXBuffer[0], RXBuffer[1],..., RXBuffer[9]
where each value represents an ASCII code, so 0x31 represents '1' and 0x41 represents 'A'.
How do I convert this to a 10-character string?
So far I've tried Data = RXBuffer.ToString();, but Data ends up as "System.Int32[]", which is not what my data is.
How can I do this?
Assuming the "int array" is values in the 0-9 range (which is the only way that makes sense to convert an "int array" length 10 to a 10-character string) - a bit of an exotic way:
string s = new string(Array.ConvertAll(RXBuffer, x => (char)('0' + x)));
But pretty efficient (the char[] is right-sized automatically, and the string conversion is done just with math, instead of ToString()).
Edit: with the revision that makes it clear that these are actually ASCII codes, it becomes simpler:
string s = new string(Array.ConvertAll(RXBuffer, x => (char)x));
Although frankly, if the values are ASCII (or even Unicode) it would be better to store them as a char[]; this covers the same range, takes half the space, and is just:
string s = new string(RXBuffer);
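A minimal end-to-end sketch with a made-up buffer of ASCII codes:

```csharp
using System;

// Hypothetical RX buffer holding ASCII codes.
int[] rxBuffer = { 0x31, 0x41, 0x32, 0x42, 0x33, 0x43, 0x34, 0x44, 0x35, 0x45 };
string s = new string(Array.ConvertAll(rxBuffer, x => (char)x));
Console.WriteLine(s);  // 1A2B3C4D5E
```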
All you need is :
string.Join("",RXBuffer);
Or:
int[] RXBuffer = {0,1,2,3,4,5,6,7,8,9};
string result = string.Join(",",RXBuffer);
We have an interesting problem where we need to determine the decimal precision of a user's input (textbox). Essentially, we need to know the number of decimal places entered and then return a precision number. This is best illustrated with examples:
4500 entered will yield a result 1
4500.1 entered will yield a result 0.1
4500.00 entered will yield a result 0.01
4500.450 entered will yield a result 0.001
We are thinking to work with the string, finding the decimal separator and then calculating the result. Just wondering if there is an easier solution to this.
I think you should just do what you suggested - use the position of the decimal point. Obvious drawback might be that you have to think about internationalization yourself.
var decimalSeparator = NumberFormatInfo.CurrentInfo.NumberDecimalSeparator;
var position = input.IndexOf(decimalSeparator);
var precision = (position == -1) ? 0 : input.Length - position - 1;
// This may be quite imprecise.
var result = Math.Pow(0.1, precision);
There is another thing you could try - the Decimal type stores an internal precision value. Therefore you could use Decimal.TryParse() and inspect the returned value. Maybe the parsing algorithm maintains the precision of the input.
Finally, I would suggest not trying anything based on floating-point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks. You might run into precision issues, and finding the precision from a floating-point number is not simple either: I see some ugly math, or a loop multiplying by ten every iteration until there is no longer any fractional part, and the loop comes with new precision issues of its own.
UPDATE
Parsing into a decimal works. See Decimal.GetBits() for details.
var input = "123.4560";
var number = Decimal.Parse(input);
// Will be 4.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;
From here using Math.Pow(0.1, precision) is straight forward.
UPDATE 2
Using decimal.GetBits() will allocate an int[] array. If you want to avoid the allocation you can use the following helper method which uses an explicit layout struct to get the scale directly out of the decimal value:
static int GetScale(decimal d)
{
    return new DecimalScale(d).Scale;
}

[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
    public DecimalScale(decimal value)
    {
        this = default;
        this.d = value;
    }

    [FieldOffset(0)]
    decimal d;

    [FieldOffset(0)]
    int flags;

    public int Scale => (flags >> 16) & 0xff;
}
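A quick usage sketch of the scale idea (shown here with decimal.GetBits rather than the struct so the snippet is self-contained; parsing uses the invariant culture so "." is the decimal separator):

```csharp
using System;
using System.Globalization;

static int GetScale(decimal d) => (decimal.GetBits(d)[3] >> 16) & 0xff;

Console.WriteLine(GetScale(decimal.Parse("123.4560", CultureInfo.InvariantCulture))); // 4
Console.WriteLine(GetScale(decimal.Parse("4500", CultureInfo.InvariantCulture)));     // 0
Console.WriteLine(GetScale(decimal.Parse("4500.00", CultureInfo.InvariantCulture)));  // 2
```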
"Just wondering if there is an easier solution to this."
No.
Use string:
string[] res = inputstring.Split('.');
int precision = res.Length > 1 ? res[1].Length : 0;
Since your last examples indicate that trailing zeroes are significant, I would rule out any numerical solution and go for the string operations.
No, there is no easier solution, you have to examine the string. If you convert "4500" and "4500.00" to numbers, they both become the value 4500 so you can't tell how many non-value digits there were behind the decimal separator.
As an interesting aside, the Decimal type tries to maintain the precision entered by the user. For example,
Console.WriteLine(5.0m);
Console.WriteLine(5.00m);
Console.WriteLine(Decimal.Parse("5.0"));
Console.WriteLine(Decimal.Parse("5.00"));
Has output of:
5.0
5.00
5.0
5.00
If your motivation in tracking the precision of the input is purely for input and output reasons, this may be sufficient to address your problem.
Working with the string is easy enough.
If there is no "." in the string, return 1.
Else return "0.", followed by n-1 "0", followed by one "1", where n is the length of the string after the decimal point.
Here's a possible solution using strings;
static double GetPrecision(string s)
{
    string[] splitNumber = s.Split('.');
    if (splitNumber.Length > 1)
    {
        return 1 / Math.Pow(10, splitNumber[1].Length);
    }
    else
    {
        return 1;
    }
}
There is a question here; Calculate System.Decimal Precision and Scale which looks like it might be of interest if you wish to delve into this some more.
If I have a credit card number that is an int and I just want to display the last 4 digits with a * on the left, how would I do this in C#?
For example, 4838382023831234 would be shown as *1234.
If it's an integer type, where i is the int:
string maskedNumber = string.Format("*{0}", i % 10000);
This takes the number modulo 10,000, which returns the last four digits of the int.
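One caveat, picked up again further down: the modulus drops leading zeros, so a card number ending in 0001 would print as *1. Padding with the "D4" format restores all four digits. A sketch (note a 16-digit card number also overflows Int32, so a long is used here):

```csharp
using System;

long cc = 1234123412340001;
string masked = "*" + (cc % 10000).ToString("D4");
Console.WriteLine(masked);  // *0001
```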
// assumes that ccNumber is actually a string
string hidden = "*" + ccNumber.Substring(ccNumber.Length - 4);
string maskedCc = ccNumber.ToString().Substring(12).PadLeft(5, '*');
A credit card number will overflow an Int32, and, just like with phone numbers, it doesn't make any sense to add, subtract, or multiply credit card numbers. Also, string inputs can handle formatting, because some users will type in the hyphens. For those reasons, it's a lot better to store these values as strings and reserve numeric types for data you actually intend to perform arithmetic on.
I'm not satisfied.
binaryworrier: Mind that if you use modulo, you will get fewer digits for numbers such as
1234123412340001
sshow: Mind that if you use Substring(12), you will get fewer digits for numbers such as
0000123412341234
A solution would be:
UInt64 ccNumber = 1234123412340001;
string s = ccNumber.ToString().PadLeft(16, '0');
string last = "*" + s.Substring(s.Length - 4);
But on a more abstract note, is a credit card number actually a number?
I think not; it is much more likely that you will want to manipulate it digit by digit than perform arithmetic on it. The advantage of converting char[16] to UInt64 is that it cuts storage space by 50%. No wait, 75% - stupid two-byte chars!
If the number is stored as a string, then this will do it:
string ccNumber = "4242424242424242";
string modifiedCCNumber = "*" + ccNumber.Substring(ccNumber.Length - 4);
string cardNo = "1234567890123456";
string maskedNo = "*" + cardNo.Substring(12,4);