I am trying to retrieve the exact value that I see in the database.
The database column type is float. The database is not mine and I cannot change its type or anything else about it. In the database the value is 0.1, but when I read it with SqlDataReader.GetDouble it comes back as something like 0.09999...98.
I also tried using GetValue and then converting to double manually, but it makes no difference.
I also can't use fixed rounding: while this value is 0.1 in the database and is read as 0.09...98, other values have many more decimals. For example, there is a value 0.0151515151515152 which GetDouble reads as 0.015151515151515148.
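A minimal sketch of what I am doing (the connection string and table/column names are made up):

using System;
using System.Data.SqlClient;

using var conn = new SqlConnection(connectionString); // connectionString is assumed
conn.Open();
using var cmd = new SqlCommand("SELECT FloatColumn FROM dbo.MyTable", conn);
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    double value = reader.GetDouble(0);
    Console.WriteLine(value); // prints 0.09999...98 rather than the 0.1 I see in the database
}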
Related
I have run into an issue where our decimal columns which have a column type of decimal(18,7) are causing the Sum operation to be evaluated locally.
From what I've been able to find, this is because of SQLite's numeric datatypes. I thought the precision and scale were just an example, but after changing our decimals to decimal(10,5) instead of decimal(18,7), the Sum operation is no longer run locally. This confuses me because, from what I've understood, the datatypes/affinities don't actually affect how the data is stored.
The fact that decimal(10,5) works also seems to contradict the following statement from the SQLite documentation:
Note that numeric arguments in parentheses that follow the type name (ex: "VARCHAR(255)") are ignored by SQLite
We are using SQLite to store a local copy of the data from a SQL Server database, and the SQL Server schema uses decimal(18,7), so we can't change that for SQLite.
Is there any way to change this so we can use the LINQ Sum operation on our decimal columns?
EDIT:
I found that if I cast the decimal to a double inside the Sum, it works fine. It seems the provider doesn't want to translate the decimal Sum to SQL because it would lose precision. At the moment I am calling ToList to pull the records in locally before doing the Sum, but this seems extremely wasteful.
Does this mean that our decimal columns are actually stored as floating points in the database?
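For illustration, a sketch of both variants, assuming a hypothetical Orders entity whose decimal Amount property maps to one of these columns:

// Current workaround: pulls every row into memory, then sums locally.
decimal localSum = context.Orders.ToList().Sum(o => o.Amount);

// Casting to double inside Sum lets the provider translate it to SQL,
// but the sum is then computed in floating point.
double translatedSum = context.Orders.Sum(o => (double)o.Amount);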
I am fetching data from SQL Server and exporting it to Excel programmatically using C#. Based on the column data type, I format the cell. Here I am facing one issue.
In SQL Server, the column I am retrieving is defined as numeric(20,0). In my application the column type comes through as Decimal. I want to get another numeric type like Double or Int64 instead. Is there any way to do that?
Since the datatype in the DB is numeric, you have two options to get either a double or an int:
Change the datatype in the database from numeric to int.
Receive the data as a decimal and round it. For this, use Math.Round, as Convert.ToInt32 can have some unexpected behavior.
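A rough sketch of the second option (the column index is illustrative; note that Math.Round and Convert.ToInt64 both default to banker's rounding, so pass MidpointRounding.AwayFromZero if you want schoolbook rounding):

decimal value = reader.GetDecimal(0); // numeric(20,0) arrives as System.Decimal

long asLong = (long)Math.Round(value, MidpointRounding.AwayFromZero);
double asDouble = (double)value;      // can lose precision past ~15-17 significant digits

// Caveat: numeric(20,0) can hold values above long.MaxValue (about 9.2e18),
// in which case the cast to long throws OverflowException.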
I am using SqlBulkCopy to upload lots of data to an SQL table. It works very well apart from one thing (always the way).
So in my C# app I have a function. It receives a variable myObj of type object (from Matlab); the object is actually an array.
I create a DataTable where I specify the column types and read in the data from myObj. One of the columns in the table (let's call it salary) is of type double.
The problem
If one of the rows has a NaN value for salary the upload won't work, it returns the message below.
OLE DB provider 'STREAM' for linked server '(null)' returned invalid data for column
What I need is for the row to be uploaded, but with the salary column set to NULL in the database. Is there any way of doing this?
The only crude way I have come up with is testing for when the value is null (or NaN) in my C# app, assigning it a value of -999, and then after the upload updating any -999 values to NULL. However, this seems like a poor workaround.
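For reference, a sketch of the usual alternative: swap NaN for DBNull.Value in the DataTable before the bulk copy (names are made up; the destination column must be nullable):

foreach (DataRow row in table.Rows)
{
    // SqlBulkCopy writes DBNull.Value as SQL NULL; NaN has no SQL equivalent.
    if (row["salary"] is double d && double.IsNaN(d))
        row["salary"] = DBNull.Value;
}

using var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.MyTable" };
bulk.WriteToServer(table);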
I have an Oracle table with a column of type NUMBER(38,0).
I need to fetch this column to my C# application.
I use the library System.Data.Odbc.OdbcDataReader to read the data from my Oracle table.
I've tried fetching the data by using the normal functions like:
var data = oracleReader["COLUMN"].ToString();
And
var data = oracleReader.GetString(0);
And even the oracleReader.GetBytes() function.
But I always get a System.OverflowException, because the OdbcDataReader always tries to get the column as a decimal in the last step:
System.OverflowException: Value was either too large or too small for a Decimal.
at System.Data.Odbc.OdbcDataReader.internalGetDecimal(Int32 i)
at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i, TypeMap typemap)
at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i)
at System.Data.Odbc.DbCache.AccessIndex(Int32 i)
at System.Data.Odbc.OdbcDataReader.internalGetString(Int32 i)
at System.Data.Odbc.OdbcDataReader.GetString(Int32 i)
I'd be happy just to get this data into my application as a string.
FYI, I cannot change the datatype of the column; I have to work with it as it is.
This data type is an alias for the NUMBER(38) data type, and is designed so that the OracleDataReader returns a System.Decimal or OracleNumber instead of an integer value. Using the .NET Framework data type can cause an overflow.
Come to think of it, you would actually need BigInteger to represent the same number of significant digits as NUMBER defaults to. I've never seen anyone do that, and I suppose it's a very rare need. And even BigInteger wouldn't fully cut it, since NUMBER can also represent positive and negative infinity.
You can use this list for future research.
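If a string is good enough, one sketch (table and column names are made up) is to convert on the Oracle side, so the ODBC layer never attempts the System.Decimal mapping:

using System.Data.Odbc;
using System.Numerics;

using var cmd = new OdbcCommand("SELECT TO_CHAR(BIG_COLUMN) FROM MY_TABLE", connection);
using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    string text = reader.GetString(0);        // the full 38-digit value, no overflow
    BigInteger big = BigInteger.Parse(text);  // optional, if arithmetic is needed
}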
EDIT: I now strongly suspect this behavior is due to a bug in the OleDB.Oracle provider. In further testing, I was able to run Select statements against other CAST column values with negative scale that did not cause the 'Decimal byte constructor...' exception. I also note that the provider returns the absolute value of the scale when viewing the schema, e.g. a scale of -2 is returned as 2. Additionally, the same test query does not cause an exception when run through the ODP.NET driver (rather than the Oracle OLEDB provider). Changing the numeric delimiter as suggested by Lalit (in comments) did not affect the results (but I thank him for his time nonetheless). I continue to research this problem and will advise if more information comes to light.
I have a 64-bit C# application that fetches data from an Oracle database via the Oracle 11g OLEDB provider. When Oracle returns a numeric type defined or cast with a negative scale (such as Select Cast(123.1 as Number(3,-1))), the mapped OleDB schema (from GetSchemaTable) reports that column as a Decimal with a scale of 255. The documentation indicates 255 is intended to represent an N/A or irrelevant value.
When OleDbDataReader.GetValues() is later called on the row containing such a column, an ArgumentException is thrown, advising that a 'Decimal byte array constructor...requires four valid decimal bytes,' telling me that even though the OleDB provider thinks it's Decimal data, there is no valid Decimal data to read. I'm assuming the data is present, but I am not sure in exactly what form.
I have tried:
Explicitly getting the bytes from the column via calls to OleDbDataReader.GetBytes, but doing so throws "Specified cast is not valid" ArgumentExceptions (even the call to size a buffer throws).
Writing a chunk of test code to try every possible supported return data type, e.g. GetInt16, GetInt32, etc.; each throws the same invalid-cast exception.
Does the Oracle OleDB provider not even return data to the caller when fetching a column defined with a negative scale? Is there some other mechanism to at least get the bytes "across the pond" and manipulate them on the receiving end?
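For completeness, a sketch of a server-side workaround I am experimenting with: normalize or stringify the value inside the query so the provider never sees the negative scale (the query is illustrative):

using System.Data.OleDb;

using var cmd = new OleDbCommand(
    "SELECT TO_CHAR(CAST(123.1 AS NUMBER(3,-1))) AS VAL FROM DUAL", connection);
object result = cmd.ExecuteScalar(); // arrives as the string "120"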