I'm working on an app. I'm using JavaScript to save values to a database. My database table has a column that holds a Decimal value. It works fine with some old C# code. In fact, in C#, I'd do this:
decimal? myValue = null;
decimal temp = 0;
if (Decimal.TryParse(myString, out temp)) {
    myValue = temp;
}
I understand that JavaScript only has a single Number type. However, because I'm saving my value to the database, how do I ensure that it's a decimal? In C#, I know that a float is basically a 32-bit value, a double is basically a 64-bit value, and a decimal is basically a 128-bit value. However, I'm not sure how to translate this to JavaScript.
Can anyone provide some insights?
Thanks!
You would check for decimals in JavaScript like this:
var dec = 3.14;
if (typeof dec == "number" && (dec + '').indexOf('.') != -1) {
    var myvalue = dec;
}
Note that the above will fail on numbers such as 5.00, as noted by FelixKling, because the trailing decimals are lost when the number is converted to a String.
I'm currently trying to convert a few numbers from a DB2 server into double values in C#.
Getting the data from the DB2 server is not a problem, and I get it into a DataTable quite easily. The problem comes when I try to convert the objects into double values, as the notation is different (a comma instead of a period, for example).
Thus the code:
foreach (DataRow row in myResultTable.Rows)
{
    double myValue = String.IsNullOrEmpty(row["myValue"].ToString()) ? 0 : (double)row["myValue"]; // myValue contains 1234,56
}
Fails with an exception that the value can't be converted.
The data type of the field myValue in DB2 is Decimal with a length of 16.
As I didn't find anything, I thought about converting it to a string, formatting it there, and then transforming that into a double, but that seems quite... complicated for something that should be easy (and complicated usually means prone to errors because of something unexpected).
So my question is: is there an easy way to do this transformation?
Edit:
As asked: calling GetType() on row["myValue"] returns {Name = "Decimal" FullName = "System.Decimal"}.
The real solution is to cast to decimal, not to double:
var value = row["myValue"] is DBNull ? 0m : (decimal)row["myValue"];
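For context, a minimal sketch of why the original (double) cast throws and how to get a double afterwards if one is really needed (the boxed value below just simulates what row["myValue"] returns):
object boxed = 1234.56m;                              // simulates row["myValue"]: a boxed System.Decimal
// double bad = (double)boxed;                        // InvalidCastException: unboxing must use the exact boxed type
decimal dec = boxed is DBNull ? 0m : (decimal)boxed;  // unbox as decimal first
double viaCast = (double)dec;                         // then convert explicitly if a double is needed
double viaConvert = Convert.ToDouble(boxed);          // or let Convert handle it via IConvertible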
So I was dinking around in LINQPad and noticed something odd; I get the same result in Visual Studio when testing the code with unit tests.
I was playing with all the different TryParse methods for the numeric data types. While doing this I noticed that double.TryParse acts a bit differently than the rest.
For example:
var doubleStr = double.MinValue.ToString();
//doubleStr = -1.79769313486232E+308
double result;
var success = double.TryParse(doubleStr, out result);
// success is returning false
None of the other data types have this trouble with their MinValue:
var floatStr = float.MinValue.ToString();
//floatStr = -3.402823E+38
float result2;
var success2 = float.TryParse(floatStr, out result2);
// success2 = True
Does anybody know why double is the only one that fails to parse the string version of its own MinValue property back into an actual double?
I'm not seeing why this is different offhand. Maybe I'm missing something.
To get a string that can be surely re-parsed as a double, use the "R" (round-trip) format string:
double.Parse(double.MinValue.ToString("R"))
With other formats, the string you get back may re-parse to a slightly different value due to rounding. With double.MinValue this is especially bad, since the value it would re-parse to lies outside the range of double; hence the parse fails.
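To illustrate, a small sketch (the False result is what older .NET Framework runtimes produce; since .NET Core 3.0 the default ToString() already round-trips, so the failure may not reproduce there):
string defaultStr = double.MinValue.ToString();     // e.g. "-1.79769313486232E+308" (rounded)
string roundTrip  = double.MinValue.ToString("R");  // e.g. "-1.7976931348623157E+308"
double a, b;
Console.WriteLine(double.TryParse(defaultStr, out a)); // False: the rounded value overflows double
Console.WriteLine(double.TryParse(roundTrip, out b));  // True
Console.WriteLine(b == double.MinValue);               // True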
When I pass this object back as JSON, it looks like this:
0.000000000000000e+000
My code in C# is:
// get adjustments for user
IEnumerable<Severity> existingSeverities =
    from s in db.AdjusterPricingGrossLossSeverities
    where s.type == type
        && s.adjusterID == adj.id
    select new Severity
    {
        id = s.severity,
        adjustment = Math.Round((double)s.adjustment, 2, MidpointRounding.AwayFromZero).ToString(),
        isT_E = (bool)s.isTimeAndExpense
    };
How can I make it just round to two decimal places (0.00)?
Use:
dec.ToString("#.##");
See this answer for more information
If it's a nullable double in a console app, do:
double? d = 2.22222;
Console.WriteLine(d.Value.ToString("#.##"));
I think that you are confusing two things. The "real" number is not what you see. The real number is stored internally in a binary format. The decimal digits that you see do not exist in this internal format. What you see is the conversion of this value to a decimal representation as a string.
The conversion of any internal binary representation to a human-visible string is called formatting. The Round function does not format. See this example:
double x = 0.123456000000000e+000;
double y = Math.Round(x, 2, MidpointRounding.AwayFromZero);
// y ==> 0.120000000000000e+000
The rounding function changes the internal value. What you need is probably not to change the value but to display the unchanged value with only two digits:
string formattedValue = x.ToString("N2");
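A small sketch of the difference, for illustration: rounding changes the stored value, formatting only changes what is displayed.
double x = 0.123456;
double rounded = Math.Round(x, 2, MidpointRounding.AwayFromZero); // the value itself becomes 0.12
string formatted = x.ToString("N2");                              // "0.12", but x is unchanged
Console.WriteLine(x);          // 0.123456
Console.WriteLine(rounded);    // 0.12
Console.WriteLine(formatted);  // 0.12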
If you are dealing with currencies, use decimal rather than double. decimal uses a base-10 (decimal) floating-point format internally. Values like 1/10 cannot be represented precisely as a binary number in a computer, just like 1/7 cannot be represented precisely in decimal notation (0.142857142857...). But 1/10 has an exact internal representation when stored as a decimal.
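A minimal sketch of that difference (summing 0.1 ten times):
double dSum = 0;
decimal mSum = 0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;    // 0.1 has no exact binary representation
    mSum += 0.1m;   // 0.1m is stored exactly
}
Console.WriteLine(dSum == 1.0);   // False (dSum is approximately 0.9999999999999999)
Console.WriteLine(mSum == 1.0m);  // True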
Turns out, this was a LINQ to SQL issue. I did this, and it works...
// get adjustments for user
IEnumerable<Severity> existingSeverities =
    from s in db.AdjusterPricingGrossLossSeverities
    where s.type == type
        && s.adjusterID == adj.id
    select new Severity
    {
        id = s.severity,
        adjustment = roundtest(s.adjustment.GetValueOrDefault()),
        isT_E = (bool)s.isTimeAndExpense
    };
// test method...
public string roundtest(double num)
{
    return num.ToString("#.##");
}
Try this:
// get adjustments for user
IEnumerable<Severity> existingSeverities =
    from s in db.AdjusterPricingGrossLossSeverities
    where s.type == type
        && s.adjusterID == adj.id
    select new Severity
    {
        id = s.severity,
        adjustment = s.adjustment.GetValueOrDefault().ToString("0.##"),
        isT_E = (bool)s.isTimeAndExpense
    };
-Edit-
I think you may need to give the Severity class a property that takes a double and saves a string to Severity.adjustment, like so:
class Severity
{
    // rest of class as normal
    public double SetAdjustment
    {
        set { adjustment = value.ToString("0.00"); }
    }
}
-Edit, part 2-
// get adjustments for user
IEnumerable<Severity> existingSeverities =
    from s in db.AdjusterPricingGrossLossSeverities
    where s.type == type
        && s.adjusterID == adj.id
    select new Severity
    {
        id = s.severity,
        SetAdjustment = s.adjustment.GetValueOrDefault(),
        isT_E = (bool)s.isTimeAndExpense
    };
The rest of your code should not need to change; it should still use (Severity variable).adjustment as normal. This just works around the fact that there is no guaranteed way to translate .NET's standard numeric format strings into SQL's CONVERT, much less any custom formatting.
I have the following section of code in my program:
object val;
val = arr[1].Trim(); // in my case the value I get here is 1.00
Now, when I assign the value to a DataRow, I get the error:
Expected int64 value.
datarow[Convert.ToString(drow["col"]).Trim().ToUpper()] = val;
I am not facing any issue when the value is anything other than 1.00.
What could be the exact problem? How can I solve it?
Suggestions and solutions are welcome.
If that column in your DataTable expects an Int64, you need to convert val (which is a string) to an Int64:
var val = arr[1].Trim(); // still a string at this point
long longVal = 0;
if (!long.TryParse(val, out longVal)) {
    throw new InvalidOperationException("value wasn't an Int64!");
}
datarow[Convert.ToString(drow["col"]).Trim().ToUpper()] = longVal;
arr[1] seems to be a string, and applying .Trim() keeps it a string, even if it's "1.00". If you need an integer, you need to parse it. However, it can't be parsed to an integer, because it's actually a double.
As a proof of whether I'm right or not, you can try (Int64)double.Parse(val), and that should work. However, it's up to you to decide whether that approach is acceptable for your program. There are two possible issues (one way to guard against both is sketched below):
val might not be parseable to double, in which case you will get an exception
val might be a double, but not one that can be represented as an int (too large, or it loses precision, e.g. "1.8" would become 1)
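If both issues matter, one way to guard against them (a sketch, assuming using System.Globalization; and that fractional values should be rejected rather than silently truncated) is to parse as decimal first and only then narrow to Int64:
string val = "1.00";   // what arr[1].Trim() produced in the question
decimal parsed;
if (!decimal.TryParse(val, NumberStyles.Number, CultureInfo.InvariantCulture, out parsed))
    throw new InvalidOperationException("value is not numeric: " + val);
if (parsed != decimal.Truncate(parsed))
    throw new InvalidOperationException("value has a fractional part: " + val);
long longVal = (long)parsed;   // throws OverflowException if it does not fit in an Int64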
Hope this helps
I am trying to convert some VB6 code to C# and I am struggling a bit.
I have looked at this page below and others like it, but am still stumped.
Why use hex?
VB6 code below:
Dim Cal As String
Cal = vbNull
For i = 1 To 8
    Cal = Cal + Hex(Xor1 Xor Xor2)
Next i
This is my C# code - it still has some errors.
string Cal = null;
int Xor1 = 0;
int Xor2 = 0;
for (i = 1; i <= 8; i++)
{
    Cal = Cal + Convert.Hex(Xor1 ^ Xor2);
}
The errors are:
Cal = Cal + Convert.Hex(Xor1 ^ Xor2 ^ 6);
Any advice as to why I can't get the hex to convert would be appreciated.
I suspect it's my lack of understanding of the .Hex on line 3 above and the "&H" on lines 1/2 above.
Note: This answer was written at a point where the lines Xor1 = CDec("&H" + Mid(SN1, i, 1))
and Xor1 = Convert.ToDecimal("&H" + SN1.Substring(i, 1)); were still present in the question.
What's the &H?
In Visual Basic (old VB6 and also VB.NET), hexadecimal constants can be used by prefixing them with &H. E.g., myValue = &H20 would assign the value 32 to the variable myValue. Due to this convention, the conversion functions of VB6 also accepted this notation. For example, CInt("20") returned the integer 20, and CInt("&H20") returned the integer 32.
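As an aside (not part of the original answer): the C# counterpart of the &H prefix for literals is 0x.
int myValue = 0x20;   // 32, the C# equivalent of &H20 in VB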
Your code example uses CDec to convert the value to the data type Decimal (actually, to the Decimal subtype of Variant) and then assigns the result to an integer, causing an implicit conversion. This is actually not necessary; using CInt would be correct. Apparently, the VB6 code was written by someone who did not understand that (a) the Decimal data type and (b) representing a number in decimal notation are two completely different things.
So, how do I convert between strings in hexadecimal notation and number data types in C#?
To convert a hexadecimal string into a number use
int number = Convert.ToInt32(hex, 16); // use this instead of Convert.ToDecimal
In C#, there's no need to pad the value with "&H" at the beginning. The second parameter, 16, tells the conversion function that the value is in base 16 (i.e., hexadecimal).
On the other hand, to convert a number into its hex representation, use
string hex = number.ToString("X"); // use this instead of Convert.ToHex
What you are using, Convert.ToDecimal, does something completely different: it converts a value into the decimal data type, which is a special data type used for floating-point numbers with decimal precision. That's not what you need. Your other method, Convert.Hex, simply does not exist.
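Putting that together, here is a rough sketch of how the loop from the question might look with those two conversions applied (the hex input string and second operand are assumptions for illustration, not taken from the original code):
string hexIn = "A3";                    // hypothetical hex digits, stands in for the "&H..." strings
int xor1 = Convert.ToInt32(hexIn, 16);  // replaces CDec("&H" + ...)
int xor2 = 0x06;                        // hypothetical second operand
string cal = "";
for (int i = 1; i <= 8; i++)
{
    cal += (xor1 ^ xor2).ToString("X"); // replaces Hex(Xor1 Xor Xor2); Convert.Hex does not exist
}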