I'm in the process of converting a Delphi app to C#, and I came across this:
alength:=1; //alength is a byte
aa:=astring; //astring is a string parameter passed into the function containing all this
alength:=strtoint(copy(aa,2,length(aa)-1));
So copy creates a string from part of an existing string, with the first character of the string at index 1, not 0 as in most other languages. It uses this signature:
function copy ( Source : string; StartChar, Count : Integer ) : string;
And then strtoint which converts a string to an int.
For my c# conversion of that bit of code, I have:
alength = Convert.ToInt32(aa.Substring(1, aa.Length - 1));
which gives me the error: Error 131: Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)
Since alength is already type byte, I didn't think I had to cast it?
You're using Convert.ToInt32() when you're assigning a byte. Use Convert.ToByte() instead.
Even better would be to use TryParse instead to avoid exceptions when the string isn't valid:
byte alength;
bool success = Byte.TryParse(aa.Substring(1, aa.Length - 1), out alength);
If the parsing succeeded, success will be true; otherwise, false.
You can define the flow of your program depending on whether the conversion succeeds or not:
byte alength;
if (Byte.TryParse(aa.Substring(1, aa.Length - 1), out alength))
{
    // Great success! Continue the program.
}
else
{
    // Oops, something went wrong!
}
Simply change:
alength = Convert.ToInt32(aa.Substring(1, aa.Length - 1));
into
alength = Convert.ToByte(aa.Substring(1, aa.Length - 1));
But the more important question here is: what is the range of values for the aa string in the original use? Is it 0-255? If it is, then you can simply use ToByte, but if it is not, you should consider using another data type.
Something like this:
int alength = Convert.ToInt32(aa.Substring(1, aa.Length - 1)); //define as int
The pure cast can also work. A byte can hold at most 256 values (0-255), so:
byte i = (byte)(int_variable & 0x000000FF);
is a fully controlled cast: the mask keeps only the low 8 bits. You can just as well write 0xFF, since 0x000000FF == 0xFF.
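For instance, a quick sketch (int_variable here is just a hypothetical value) of what the mask does to a value that doesn't fit in a byte:
int int_variable = 300;               // 0x12C, too big for a byte
byte b = (byte)(int_variable & 0xFF); // keep only the low 8 bits
Console.WriteLine(b);                 // prints 44 (0x2C)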
I want to know whether we can give the index of the string as a long data type.
var i=long.Parse(Console.ReadLine());
var result = testString[i-1];
The second line gives me the error: "The best overloaded method match for 'string.this[int]' has some invalid arguments."
No, you can't use long for most collection types (you haven't specified what testString is).
One way to get around this would be to split the string into a multi-part / multi-dimensional array, then use a multiplier to work out which part of the array to check.
For example:
Your index is 100,000 and each array dimension is kept within the range of a short (32,767 entries)...
string[,] testString = new string[100, 32767]; //Replace this with your initialisation / existing string
var arrayRank = (int)(100000L / 32767);  //integer division, not rounding
var arrayIndex = (int)(100000L % 32767);
//Test this works.
//testString[arrayRank, arrayIndex] = "test"; - Test to see that the array range is assignable.
var result = testString[arrayRank, arrayIndex]; //Test value is what we expect
This may not be the most efficient way to go about things, but it is a workaround.
No, it cannot accept a long. The only overload accepts an int indexer. You would need to change your code to int.Parse() instead of long.Parse().
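For example, sticking with the code from the question:
var i = int.Parse(Console.ReadLine());
var result = testString[i - 1];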
There is no way to pass a long as an index of an array; the compiler doesn't allow it.
A workaround is to convert the long to an int; this is called a narrowing conversion.
var result = testString[(int)i];
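If the value might not fit in an int, a checked conversion (a minimal sketch) throws instead of silently wrapping around:
long i = long.Parse(Console.ReadLine());
int index = checked((int)i); // throws OverflowException if i doesn't fit in an int
var result = testString[index - 1];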
Consider this code:
var x = "tesx".Remove('x');
If I run this code, I get this exception:
startIndex must be less than length of string.
Why can I pass a char instead of an int to this method?
Why don't I get a compilation error?
Why does the compiler have this behavior?
You try to remove 'x', which is declared as a char; 'x' is equal to 120.
Remove only takes parameters of type int: the start index and an optional count of characters to remove from the string.
If you pass a char, it will be converted to its integer representation. So passing 'x' means passing 120, which is greater than the string's Length, and that's why it throws this error!
Implicit conversion lets you compile this code, since char can be converted to int with no explicit conversion required. This will also compile, and the answer will be 600 (120 * 5):
char c = 'x';
int i = c;
int j = 5;
int answer = i * j;
As others have stated, you could use Replace, or Remove with valid inputs.
There is no overload of Remove that takes a char, so the character is implicitly converted to an int and the Remove method tries to use it as an index into the string, which is way outside the string. That's why you get that runtime error instead of a compile time error saying that the parameter type is wrong.
To use Remove to remove part of a string, you first need to find where in the string that part is. Example:
var x = "tesx";
var x = x.Remove(x.IndexOf('x'), 1);
This will remove the first occurrence of 'x' in the string. If there could be more than one occurrence, and you want to remove all of them, using Replace is more efficient:
var x = "tesx".Replace("x", String.Empty);
Remove takes an int parameter for the index within the string at which to start removing characters, see msdn. The Remove call implicitly converts the char to its integer character code and tries to remove the character at that index from the string; it is not trying to remove the character itself.
If you just want to remove any cases where x occurs in the string do:
"testx".Replace("x",string.Empty);
If you want to remove the first index of x in the string do:
var value = "testx1x2";
var newValue = value.Remove(value.IndexOf('x'), 1);
Since you are passing a char to the function, and this value gets converted to an int at runtime, you get the runtime error: the char's numeric value at runtime is greater than the length of the string. You may try it like this:
var x = "tesx";
var s = x.Remove(x.IndexOf('x'), 1);
or
var s = x.Replace("x",string.Empty);
Remove takes int parameters. It takes two: the first is the position in your string at which you want to start removing (the count starts at zero); the second is how many characters you want to delete, starting from the position you specified.
On a side note:
From MSDN:
This method(.Remove) does not modify the value of the current instance.
Instead, it returns a new string in which the number of characters
specified by the count parameter have been removed. The characters are
removed at the position specified by startIndex.
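A quick sketch of that behaviour:
var original = "testx";
var trimmed = original.Remove(4, 1); // remove one character starting at index 4
Console.WriteLine(original);         // "testx" - the original string is unchanged
Console.WriteLine(trimmed);          // "test"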
You can use extension methods to create your own methods for already existing classes. Consider the following example:
using System;
using MyExtensions;

namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            const string str1 = "tesx";
            var x = str1.RemoveByChar('x');
            Console.WriteLine(x);
            Console.ReadKey();
        }
    }
}

namespace MyExtensions
{
    public static class StringExtensions
    {
        public static string RemoveByChar(this String str, char c)
        {
            return str.Remove(str.IndexOf(c), 1);
        }
    }
}
What can be the reason for the problem? My method returns incorrect int values. When I give it a hex value of AB or DC or something similar, it returns int = 0, but when I give it hex = 22 it returns int = 22 (though the int should be 34 in this case).
public int StatusBit(int Xx, int Rr) {
    int Number;
    int.TryParse(GetX(Xx, Rr), out Number);
    return Number;
}
I tried to use Number = Convert.ToInt32(GetX(Xx, Rr)); but it gives the same result, just with null instead of 0 for anything that includes letters.
Use Convert.ToInt32(string, int) instead. That way you can give the base the number should be interpreted in. E.g.:
return Convert.ToInt32(GetX(Xx, Rr), 16);
(You also don't check the return value of TryParse which would give a hint that the parse failed.)
If you expect both decimal and hexadecimal numbers, you need to branch according to how the number looks and use either base 10 or base 16. E.g., if your hexadecimal numbers always start with 0x, you could use something along the following lines:
string temp = GetX(Xx, Rr);
return Convert.ToInt32(temp, temp.StartsWith("0x") ? 16 : 10);
But that would depend on how (if at all) you would distinguish the two. If everything is hexadecimal then there is no such need, of course.
Use NumberStyles.HexNumber:
using System;
using System.Globalization;

class Test
{
    static void Main()
    {
        string text = "22";
        int value;
        int.TryParse(text, NumberStyles.HexNumber,
                     CultureInfo.InvariantCulture, out value);
        Console.WriteLine(value); // Prints 34
    }
}
Do you really want to silently return 0 if the value can't be parsed, by the way? If not, use the return value of int.TryParse to determine whether the parsing succeeded or not. (That's the reason it's returning 0 for "AB" in your original code.)
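For example, a minimal sketch (same usings as above) that branches on the return value instead of silently keeping 0:
string text = "AB";
int value;
if (int.TryParse(text, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out value))
{
    Console.WriteLine(value); // prints 171
}
else
{
    Console.WriteLine("'" + text + "' is not a valid hex number.");
}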
int.TryParse parses a base 10 integer.
Use Convert.ToUInt32(hex, 16) instead.
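For example:
int ignored;
bool ok = int.TryParse("AB", out ignored); // false - "AB" is not valid base 10
uint value = Convert.ToUInt32("AB", 16);   // 171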
Here is my solution:
kTemp = int.Parse(xcc, System.Globalization.NumberStyles.HexNumber);
Above, kTemp is an integer and xcc is a string.
xcc can be anything like FE, 10BA, FE0912... that is to say, xcc is a string of hex characters of any length.
Beware: I don't get the 0x prefix with my hex strings.
I am trying to convert some vb6 code to c# and I am struggling a bit.
I have looked at this page below and others similar, but am still stumped.
Why use hex?
vb6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
    Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code; it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (int i = 1; i <= 8; i++)
{
    Cal = Cal + Convert.Hex(Xor1 ^ Xor2); // this call does not compile
}
The error is on this line:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex to convert would be appreciated. I suspect it's my lack of understanding of the .Hex call above and the "&H" notation.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary, using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there's no need to prefix the value with "&H". The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: it converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex, simply does not exist.
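Putting it together, here is a sketch of the loop from the question with both conversions fixed (Xor1 and Xor2 are placeholder values here; the real ones come from elsewhere in the original code):
string Cal = "";
int Xor1 = 0x12; // placeholder
int Xor2 = 0x34; // placeholder
for (int i = 1; i <= 8; i++)
{
    Cal += (Xor1 ^ Xor2).ToString("X"); // ToString("X") plays the role of VB6's Hex()
}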
Say I have the following declarations:
public enum Complexity { Low = 0, Normal = 1, Medium = 2, High = 3 }
public enum Priority { Normal = 1, Medium = 2, High = 3, Urgent = 4 }
and I want to code it so that I can get the enum value (not the index, as I mentioned earlier):
//should store the value of the Complexity enum member Normal, which is 1
int complexityValueToStore = EnumHelper.GetEnumMemberValue(Complexity.Normal);
//should store the value 4
int priorityValueToStore = EnumHelper.GetEnumMemberValue(Priority.Urgent);
What should this reusable function look like?
Revised answer (after question clarification)
No, there's nothing cleaner than a cast. It's more informative than a method call, cheaper, shorter etc. It's about as low impact as you could possibly hope for.
Note that if you wanted to write a generic method to do the conversion, you'd have to specify what to convert it to as well: the enum could be based on byte or long for example. By putting in the cast, you explicitly say what you want to convert it to, and it just does it.
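If you do want such a helper anyway, here is one possible sketch (assuming widening everything to long is acceptable; the Enum constraint requires C# 7.3 or later):
public static long GetEnumMemberValue<T>(T element) where T : struct, Enum
{
    // Convert.ToInt64 copes with enums backed by byte, short, int or long
    return Convert.ToInt64(element);
}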
Original answer
What do you mean by "index" exactly? Do you mean the numeric value? Just cast to int. If you mean "position within enum" you'd have to make sure the values are in numeric order (as that's what Enum.GetValues gives - not the declaration order), and then do:
public static int GetEnumMemberIndex<T>(T element)
    where T : struct
{
    T[] values = (T[]) Enum.GetValues(typeof(T));
    return Array.IndexOf(values, element);
}
You can find the integer value of an enum by casting:
int complexityValueToStore = (int)Complexity.Normal;
The most generic way I know of is to read the value__ field using reflection.
This approach makes no assumptions about the enum's underlying type so it will work on enums that aren't based on Int32.
public static object GetValue(Enum e)
{
    return e.GetType().GetField("value__").GetValue(e);
}

Debug.Assert(Equals(GetValue(DayOfWeek.Wednesday), 3)); //Int32
Debug.Assert(Equals(GetValue(AceFlags.InheritOnly), (byte) 8)); //Byte
Debug.Assert(Equals(GetValue(IOControlCode.ReceiveAll), 2550136833L)); //Int64
Note: I have only tested this with the Microsoft C# compiler. It's a shame there doesn't appear to be a built-in way of doing this.
I realize this isn't what you asked, but it's something you might appreciate.
I discovered that you can find the integer value of an enum without a cast, if you know what the enum's minimum value is:
public enum Complexity { Low = 0, Normal = 1, Medium = 2, High = 3 }
int valueOfHigh = Complexity.High - Complexity.Low;
This wouldn't work with Priority, unless it had some minimal value of 0, or unless you added 1 back:
public enum Priority { Normal = 1, Medium = 2, High = 3, Urgent = 4 }
int valueOfUrgent = Priority.Urgent - Priority.Normal + 1;
I find this technique much more aesthetically appealing than casting to int.
I'm not sure off the top of my head what happens if you have an enum based on byte or long -- I suspect that you'd get byte or long difference values.
If you want the value, you can just cast the enum to int. That would set complexityValueToStore == 1 and priorityValueToStore == 4.
If you want to get the index (i.e., Priority.Urgent == 3), you could use Enum.GetValues, then just find the index of your current enum value in that list. However, the ordering of the enums in the returned list may not be the same as in your code.
However, the second option kind of defeats the purpose of Enum in the first place - you're trying to have discrete values instead of lists and indices. I'd rethink your needs if that is what you want.
This is the simplest way to solve your problem:
public static int GetEnumMemberValue<T>(T enumItem) where T : struct
{
    return (int) Enum.Parse(typeof(T), enumItem.ToString());
}
It works for me.
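Usage, with the question's enums:
int complexityValueToStore = GetEnumMemberValue(Complexity.Normal); // 1
int priorityValueToStore = GetEnumMemberValue(Priority.Urgent);     // 4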