Hex Conversion Error from Web Service Form call - c#

I'm receiving an error that looks like it is caused by passing hexadecimal input to a uint field. It occurs on both versions of the web service I'm working on.
System.ArgumentException: Cannot convert 0x2 to System.UInt32.
Parameter name: type ---> System.FormatException: Input string was not in a correct format.
However, my coworker says it works for him on a previous version of the web service when he calls it from C++, but it doesn't work on the current version I'm working on.
Has anyone experienced this?

Are you using something like this in your code? If not, try to implement this (replace "CF01" with your input value):
int i = Convert.ToInt32("CF01", 16);
Edit:
For the particular case with the 0x prefix:
public int GetInt32FromHex(string h) {
    // Strip the leading "0x", then parse the remaining digits as base 16.
    h = h.Substring(2, h.Length - 2);
    return Convert.ToInt32(h, 16);
}
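For example (hypothetical usage of the helper above), a value such as "0xCF01" would come back as 52993:
int value = GetInt32FromHex("0xCF01"); // 52993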

This is quite an interesting error because the basic Parse method of UInt32 is only able to parse numbers of the form [ws][sign]digits[ws] per the documentation. There is an overload that takes NumberStyles flags (see the documentation). One of those values is AllowHexSpecifier, which you'd think would allow the 0x. However, if you read the documentation for both that Parse overload and for NumberStyles, neither is able to handle the 0x format at all. The documentation says:
If s is the string representation of a hexadecimal number, it cannot be preceded by any decoration (such as 0x or &h) that differentiates it as a hexadecimal number. This causes the conversion to fail.
AllowHexSpecifier means that only numbers of the form [ws]hexdigits[ws] are accepted.
It seems like you are going to have to get rid of the leading 0x before parsing or use another method of parsing.
One way to do that, when there is a leading 0x, is the following:
var value = UInt32.Parse("0x2".TrimStart('0').TrimStart('x'));
You will have to be careful to check that you have the proper base, though, and you may need to pass NumberStyles.AllowHexSpecifier to parse the hex digits correctly.
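For example, a small sketch (my own helper, not from the original answer) that strips an optional 0x prefix and then parses the remaining digits as hex:

using System;
using System.Globalization;

static class HexParser
{
    public static uint ParseHex(string s)
    {
        // Drop a leading "0x"/"0X" if present, then parse the rest as hexadecimal.
        if (s.StartsWith("0x", StringComparison.OrdinalIgnoreCase))
            s = s.Substring(2);
        return UInt32.Parse(s, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
    }
}

// HexParser.ParseHex("0x2")    -> 2
// HexParser.ParseHex("0xCF01") -> 52993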

Related

C#: Scientific notation String to Int64 conversion failing

I'm getting an exception when trying to parse a number that is in scientific notation. I've looked at other posts on how to do it, and I can't tell what I'm doing differently from those.
I've tried the following:
System.Convert.ToInt64("1.0206e+06");
System.Convert.ToInt64("1.0206E+06"); // Uppercase 'E'
These result in a FormatException: Input string was not in the correct format.
I tried these:
Int64.Parse("1.0206e+06", System.Globalization.NumberStyles.Any);
Int64.Parse("1.0206e+06", System.Globalization.NumberStyles.Any, System.Globalization.CultureInfo.InvariantCulture);
Int64.Parse("1.0206e+06", System.Globalization.NumberStyles.Float, System.Globalization.CultureInfo.InvariantCulture);
These all result in an OverflowException: Value was too large or too small.
Also tried with Int32.Parse and got the same exception and message:
(long)Int32.Parse(str, System.Globalization.NumberStyles.Any, System.Globalization.CultureInfo.InvariantCulture);
Using Decimal.Parse works with the same string and parameters passed to it:
(long)Decimal.Parse(str, System.Globalization.NumberStyles.Any, System.Globalization.CultureInfo.InvariantCulture);
This answer suggests using this:
Double.Parse("1.234567E-06", System.Globalization.NumberStyles.Float);
That is similar to my last example; I just accept all number styles, and that answer used a negative exponent. In fact, I fed that exact string into my examples and I still got the same exceptions.
Not sure if it matters, but I'm using Mono C#, the version that comes with Unity.
Here's the C# source file: https://github.com/Unity-Technologies/mono/blob/unity-staging/mcs/class/corlib/System/Int64.cs. The exception is thrown on line 469 and doesn't provide me a call stack before that point. But I'm guessing the exception is created on line 355 or 372 since those match the exception type and message I'm being shown.
I'm going to assume that this is a bug with the version of Mono C# I'm using, which comes with Unity 5.5.x or earlier. Their repository can be found here.
Their implementation of Int64.Parse does not even check for the NumberStyles.AllowExponent flag, or handle exponents in any way, so it fails when it finds the + symbol in the string. Basically, Int64.Parse under Unity does not support exponents.
Mono's Int32.Parse does seem to look for exponents, but still causes an OverflowException with all exponents that I give it.
Decimal.Parse actually does work with the same parameters as the other two, which suggests there was nothing wrong with the string or parameters, but it's just a bug in their other Parse methods. Decimal's parsing is completely different from how the Int parsing is being done, so that may explain why it works and the others don't.
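If you just need these values parsed under that Mono version, one possible workaround (my own sketch, not from the original post) is to go through Decimal, which does handle the exponent, and then convert to long:

using System;
using System.Globalization;

static class ScientificParser
{
    public static long ParseScientificToInt64(string s)
    {
        // Decimal.Parse accepts the exponent; Convert.ToInt64 rounds and range-checks.
        decimal d = Decimal.Parse(s, NumberStyles.Float, CultureInfo.InvariantCulture);
        return Convert.ToInt64(d);
    }
}

// ParseScientificToInt64("1.0206e+06") -> 1020600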

Strange behaviour of String.Format when (mis-)using placeholders

When I learned about the String.Format function, I made the mistake of thinking that it's acceptable to name the placeholders after the colon, so I wrote code like this:
String.Format("A message: '{0:message}'", "My message");
//output: "A message: 'My message'"
I just realized that the string behind the colon is used to define the format of the placeholder and may not be used to add a comment as I did.
But apparently, the string behind the colon is used for the placeholder if:
I want to fill the placeholder with an integer and
I use an unrecognized formatting string behind the colon
But this doesn't explain to me why the string behind the colon ends up in the output when I provide an integer.
Some examples:
//Works for strings
String.Format("My number is {0:number}!", "10")
//output: "My number is 10!"
//Works without formating-string
String.Format("My number is {0}!", 10)
//output: "My number is 10!"
//Works with recognized formating string
String.Format("My number is {0:d}!", 10)
//output: "My number is 10!"
//Does not work with unrecognized formating string
String.Format("My number is {0:number}!", 10)
//output: "My number is number!"
Why is there a difference between the handling of strings and integers? And why is the fallback to output the formating string instead of the given value?
Just review the MSDN page about composite formatting for clarity.
A basic synopsis, the format item syntax is:
{ index[,alignment][:formatString]}
So what appears after the : colon is the formatString. Look at the "Format String Component" section of the MSDN page for what kind of format strings are predefined. You will not see System.String mentioned in that list. Which is no great surprise, a string is already "formatted" and will only ever appear in the output as-is.
Composite formatting is pretty lenient to mistakes, it won't throw an exception when you specify an illegal format string. That the one you used isn't legal is already pretty evident from the output you get. And most of all, the scheme is extensible. You can actually make a :message format string legal, a class can implement the ICustomFormatter interface to implement its own custom formatting. Which of course isn't going to happen on System.String, you cannot modify that class.
So this works as expected. If you don't get the output you expected then this is pretty easy to debug, you've just got two mistakes to consider. The debugger eliminates one (wrong argument); your eyes eliminate the other.
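If you did want to make something like :message work, here is a minimal sketch (my own illustration, not part of the original answer) of a format provider that implements ICustomFormatter and simply ignores whatever follows the colon:

using System;

class CommentFormatter : IFormatProvider, ICustomFormatter
{
    public object GetFormat(Type formatType)
    {
        // string.Format asks the provider for an ICustomFormatter; hand back ourselves.
        return formatType == typeof(ICustomFormatter) ? this : null;
    }

    public string Format(string format, object arg, IFormatProvider formatProvider)
    {
        // Ignore the "comment" after the colon and fall back to the argument's ToString().
        return arg == null ? String.Empty : arg.ToString();
    }
}

// Both calls print the argument, no matter what the "format" says:
// String.Format(new CommentFormatter(), "A message: '{0:message}'", "My message");
// String.Format(new CommentFormatter(), "My number is {0:number}!", 10);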
The String.Format article on MSDN has the following description:
A format item has this syntax: { index[,alignment][ :formatString] }
...
formatString Optional.
A string that specifies the format of the corresponding argument's result string. If you omit formatString, the corresponding argument's parameterless ToString method is called to produce its string representation. If you specify formatString, the argument referenced by the format item must implement the IFormattable interface.
If we directly format the value using the IFormattable we will have the same result:
String garbageFormatted = (10 as IFormattable).ToString("garbage in place of int",
CultureInfo.CurrentCulture.NumberFormat);
Console.WriteLine(garbageFormatted); // Writes the "garbage in place of int"
So it seems that it is something close to the "garbage in, garbage out" problem in the implementation of the IFormattable interface on Int32 type(and possibly on other types as well). The String class does not implement IFormattable, so any format specifier is left unused and .ToString(IFormatProvider) is called instead.
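A quick way to confirm that (my own snippet, not from the original answer): Int32 implements IFormattable, while String does not.

Console.WriteLine(typeof(IFormattable).IsAssignableFrom(typeof(int)));    // True
Console.WriteLine(typeof(IFormattable).IsAssignableFrom(typeof(string))); // False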
Also:
Ildasm shows that Int32.ToString(String, IFormatProvider) internally calls
string System.Number::FormatInt32(int32,
string,
class System.Globalization.NumberFormatInfo)
But it is an InternalCall method (extern, implemented somewhere in native code), so Ildasm is of no use if we want to determine the source of the problem.
EDIT - CULPRIT:
After reading How to see code of method which marked as MethodImplOptions.InternalCall? I've used the source code from the Shared Source Common Language Infrastructure 2.0 Release (it is .NET 2.0, but nonetheless) in an attempt to find the culprit.
Code for the Number.FormatInt32 is located in the ...\sscli20\clr\src\vm\comnumber.cpp file.
The culprit could be deduced from the default section of the format switch statement of the FCIMPL3(Object*, COMNumber::FormatInt32, INT32 value, StringObject* formatUNSAFE, NumberFormatInfo* numfmtUNSAFE):
default:
    NUMBER number;
    Int32ToNumber(value, &number);
    if (fmt != 0) {
        gc.refRetString = NumberToString(&number, fmt, digits, gc.refNumFmt);
        break;
    }
    gc.refRetString = NumberToStringFormat(&number, gc.refFormat, gc.refNumFmt);
    break;
The fmt var is 0, so the NumberToStringFormat(&number, gc.refFormat, gc.refNumFmt); is being called.
That leads us to the default section of the second switch statement in the NumberToStringFormat method, located in the loop that enumerates every format string character. It is very simple:
default:
    *dst++ = ch;
It just copies every character from the format string into the output array; that's how the format string ends up repeated in the output.
From one point of view this lets garbage format strings produce nothing useful, but from another point of view it allows you to use something like:
String garbageFormatted = (1234 as IFormattable).ToString("0 thousands and ### in thousand",
CultureInfo.CurrentCulture.NumberFormat);
Console.WriteLine(garbageFormatted);
// Writes the "1 thousands and 234 in thousand"
that can be handy in some situations.
Interesting behavior indeed BUT NOT unaccounted for.
Your last example works when using
String.Format("My number is {0:n}!", 10)
but reverts to the observed behaviour when using
String.Format("My number is {0:nu}!", 10).
This prompts a search for the Standard Numeric Format Strings article on MSDN, where you can read:
Standard numeric format strings are used to format common numeric types. A standard numeric format string takes the form Axx, where:
A is a single alphabetic character called the format specifier. Any numeric format string that contains more than one alphabetic character, including white space, is interpreted as a custom numeric format string. For more information, see Custom Numeric Format Strings.
The same article explains: if you have a SINGLE letter that is not recognized you get an exception.
Indeed,
String.Format("My number is {0:K}!", 10)
throws a FormatException as explained.
Now, looking in the Custom Numeric Format Strings chapter, you will find a table of eligible letters and their possible combinations, but at the end of the table you can read:
Other (all other characters): The character is copied to the result string unchanged.
So I think that you have created a format string that cannot in any way print that number because there is no valid format specifier where the number 10 should be 'formatted'.
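To make that "copied unchanged" rule concrete, a small demonstration (my own snippet, not from the original answer): with digit placeholders the number shows up, without them only the literal characters do.

Console.WriteLine(String.Format("{0:### units}", 10)); // "10 units" - the '#' placeholders receive the digits
Console.WriteLine(String.Format("{0:number}", 10));    // "number"   - no placeholders, only literals are copied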
No, it's not acceptable to place anything you like after the colon. Putting anything other than a recognized format specifier is likely to result in either an exception or unpredictable behaviour, as you've demonstrated. I don't think you can expect string.Format to behave consistently when you're passing it arguments that are completely inconsistent with the documented formatting types.

custom string format puzzler

We have a requirement to display bank routing/account data that is masked with asterisks, except for the last 4 numbers. It seemed simple enough until I found this in unit testing:
string.Format("{0:****1234}",61101234)
is properly displayed as: "****1234"
but
string.Format("{0:****0052}",16000052)
is incorrectly displayed (due to the zeros??) as: "****1600005252"
If you use the following in C# it works correctly, but I am unable to use this because DevExpress automatically wraps it with "{0: ... }" when you set the displayformat without the curly brackets:
string.Format("****0052",16000052)
Can anyone think of a way to get this format to work properly inside curly brackets (with the full 8 digit number passed in)?
UPDATE: The string.Format above is only a way of testing the problem I am trying to solve; it is not the finished code. I have to pass DevExpress a string format inside braces in order for the routing number to be formatted correctly.
It's a shame that you haven't included the code which is building the format string. It's very odd to have the format string depend on the data in the way that it looks like you have.
I would not try to do this in a format string; instead, I'd write a method to convert the credit card number into an "obscured" string form, quite possibly just using Substring and string concatenation. For example:
public static string ObscureFirstFourCharacters(string input)
{
    // TODO: Argument validation
    return "****" + input.Substring(4);
}
(It's not clear what the data type of your credit card number is. If it's a numeric type and you need to convert it to a string first, you need to be careful to end up with a fixed-size string, left-padded with zeroes.)
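If the number actually arrives as a numeric type, one way to do that padding (my own sketch, assuming an 8-digit account number) is:

int accountNumber = 16000052;
string digits = accountNumber.ToString("D8");   // "16000052"; 52 would become "00000052"
string masked = "****" + digits.Substring(4);   // "****0052"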
I think you are looking for something like this:
string.Format("{0:****0000}", 16000052);
But I have not seen that with the * inline like that. Without knowing better I probably would have done:
string.Format("{0}{1}", "****", str.Substring(str.Length-4, 4);
Or even dropping the format call if I knew the length.
These approaches are worthwhile to look through: Mask out part first 12 characters of string with *?
As you are alluding to in the comments, this should also work:
string.Format("{0:****####}", 16000052);
The difference is that the 0's will display a zero if no digit is present, while # will not. It should be moot in your situation.
If for some reason you want to print the literal zeros, use this (a verbatim string, so the backslash escapes reach the format parser instead of being read as C# escapes):
string.Format(@"{0:****\0\052}", 16000052);
But note that this is not doing anything with your input at all.

How does WChar relate to Unicode and ASCII

I am about to show my total ignorance of how encoding works and different string formats.
I am passing a string to a compiler (Microsoft, as it happens, and for their Flight Simulator). The string is passed as part of an XML document which is used as the source for the compiler. This is created using standard .NET strings. I have not needed to specify any encoding or type setting, since the XML is just text.
The string is just a collection of characters. This is an example of one that gives the error:
ARG, AFL, AMX, ACA, DAH, CCA, AEL, AGN, MAU, SEY, TSC, AZA, AAL, ANA, BBC, CPA, CAL, COA, CUB, DAL, UGX, ELY, UAE, ERT, ETH, EEZ, GHA, IRA, JAL, NWA, KAL, KAC, LAN, LDI, MAS, MEA, PIA, QTR, RAM, RJA, SVA, SIA, SWR, ROT, THA, THY, AUI, UAL, USA, ACA, TAR, UZB, IYE, QFA
If I create the string using my C# managed program then there is no issue. However, this string is coming from a C++ program that can create the compiled file using its own compiler, which is not compliant with the MS one.
The MS compiler does not like the string. It throws two errors:
INTERNAL COMPILER ERROR: #C2621: Couldn't convert WChar string!
INTERNAL COMPILER ERROR: #C2029: Failed to convert attribute value from UNICODE!
Unfortunately there isn't any useful documentation with the compiler on its errors. We just make the best of what we see!
I have seen other errors of this type but these contain hidden characters and control characters that I can trap and remove.
In this case I looked at the string as a Char[] and could not see anything unusual, only what I expected: no values above the ASCII limit of 127 and no control characters.
I understand that WChar is something that C++ understands (but I don't), Unicode is a two byte representation of characters and ASCII is a one byte representation.
I would like to do two things - first identify a string that will fail if passed to the compiler and second fix the string. I assume the compiler is expecting ASCII.
EDIT
I told an untruth - in fact I do use encoding. I checked the code I used to convert a byte array into a string.
public static string Bytes2String(byte[] bytes, int start, int length) {
    string temp = Encoding.Default.GetString(bytes, start, length);
    return temp;
}
I realized that Default might be an issue but changing it to ASCII makes no difference. I am beginning to believe that the error message is not what it seems.
It looks like you are taking a byte array, and converting it as a string using the encoding returned by Encoding.Default.
It is recommended that you do not do this (in the Microsoft documentation).
You need to work out what encoding is being used in the C++ program to generate the byte array, and use the same one (or a compatible one) to convert the byte array back to a string again in the C# code.
E.g. if the byte array is using ASCII encoding, you could use:
System.Text.Encoding.ASCII.GetString(bytes, start, length);
or
System.Text.Encoding.UTF8.GetString(bytes, start, length);
P.S. I hope Joel doesn't catch you ;)
I have to come clean that the compiler error has nothing to do with the encoding format of the string. It turns out that it is the length of the string that is at fault. As per the sample, there are a number of entries separated by commas. The compiler throws the rather unhelpful messages if the entry count exceeds 50.
However, thanks everyone for your help - it has raised the issue of encoding in my mind and I will now look at it much more carefully.

C# convert any format string to double

I tried searching google and stackoverflow without success.
I'm having a problem with "Input string was not in a correct format." exception with an application I'm working at.
The thing is that I convert some double values to strings with doubleNumber.ToString("N2"); in order to store them in an XML file. When I switch testing machines, the XML file stored on one can't be converted back to double values on the other.
I've tried all of the solutions I could think of: setting the number culture didn't work, using CultureInfo.InvariantCulture didn't work, and replacing characters also doesn't work. Sometimes the values are stored like "3,001,435.57" and sometimes (on the other PC) like "3.001.435,57".
Is there some function or a way to parse a double from string, whatever the input format is?
Thanks.
You have to specify a culture because (e.g.) "3,001" is ambiguous - is it 3.001 or 3001?
Depending on what your numbers look like, perhaps you could attempt to detect the culture by counting the number of , and . characters and/or checking their positions.
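One best-effort way to do that (my own sketch, subject to the ambiguity caveat above) is to try a couple of candidate cultures in order:

using System;
using System.Globalization;

static class FlexibleDouble
{
    public static double ParseEither(string s)
    {
        double result;
        // Try "3,001,435.57"-style first, then "3.001.435,57"-style (German conventions).
        if (Double.TryParse(s, NumberStyles.Number, CultureInfo.InvariantCulture, out result))
            return result;
        if (Double.TryParse(s, NumberStyles.Number, CultureInfo.GetCultureInfo("de-DE"), out result))
            return result;
        throw new FormatException("Could not parse '" + s + "' as a double.");
    }
}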
Here is what you are looking for...
http://msdn.microsoft.com/en-us/library/9s9ak971.aspx
This overload accepts a string and a format provider. You need to create a format provider that supplies the culture information for the format you are converting from.
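For example (my own illustration of that overload), parsing with a culture that matches how each value was written:

using System.Globalization;

double fromInvariant = Double.Parse("3,001,435.57", CultureInfo.InvariantCulture);
double fromGerman    = Double.Parse("3.001.435,57", CultureInfo.GetCultureInfo("de-DE"));
// Both yield 3001435.57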
