I have an Oracle table with a column of type NUMBER(38,0).
I need to fetch this column into my C# application.
I use the library System.Data.Odbc.OdbcDataReader to read the data from my Oracle table.
I've tried fetching the data by using the normal functions like:
var data = oracleReader["COLUMN"].ToString();
And
var data = oracleReader.GetString(0);
And even the oracleReader.GetBytes() function.
But I always get a System.OverflowException, because OdbcDataReader always tries to read the column as a decimal in the last step:
System.OverflowException: Value was either too large or too small for a Decimal.
at System.Data.Odbc.OdbcDataReader.internalGetDecimal(Int32 i)
at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i, TypeMap typemap)
at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i)
at System.Data.Odbc.DbCache.AccessIndex(Int32 i)
at System.Data.Odbc.OdbcDataReader.internalGetString(Int32 i)
at System.Data.Odbc.OdbcDataReader.GetString(Int32 i)
I'd be happy just to get this data as a string in my application.
FYI, I cannot change the data type of the column; I need to work with it as is.
This data type is an alias for the NUMBER(38) data type, and is designed so that the OracleDataReader returns a System.Decimal or OracleNumber instead of an integer value. Using the .NET Framework data type can cause an overflow.
Come to think of it, you would actually need a BigInteger to represent the same number of significant digits that NUMBER defaults to. I've never seen anyone do that, and I suppose it's a very rare need. Even BigInteger wouldn't fully cover it, since NUMBER can also represent positive and negative infinity.
You can use this list for future research.
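Building on that: since the value can exceed System.Decimal's range, one workaround (a sketch, not tested against a live Oracle instance) is to have the query itself return the column as text, e.g. SELECT TO_CHAR(COLUMN) FROM ..., so the ODBC layer never tries to construct a Decimal. BigInteger can then hold all 38 digits on the client side:

```csharp
using System;
using System.Numerics;

// The literal below stands in for what TO_CHAR(COLUMN) might return via
// oracleReader.GetString(0) once the conversion happens server-side.
string raw = "99999999999999999999999999999999999999"; // 38 digits

BigInteger big = BigInteger.Parse(raw);
Console.WriteLine(big > (BigInteger)decimal.MaxValue); // True: too large for System.Decimal
```

If you only need to display or pass the value through, keeping it as the string returned by GetString is enough; BigInteger is only needed if you must do arithmetic on it.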
Related
I am trying to retrieve the exact value that I see in the database.
The database column type is float. The database is not mine and I cannot change its type or anything. In the database, I have the value 0.1, but when I read it with SqlDataReader.GetDouble it returns something like 0.09999.....98.
I also tried using GetValue and then manually converting to double, but still no difference.
I also can't use static rounding, because while this value is 0.1 in the database and is read as 0.09...98, there are values with many more decimals. For example, there is a value 0.0151515151515152 which GetDouble reads as 0.015151515151515148.
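For context: a SQL Server float column is an IEEE-754 double, and 0.1 has no exact binary representation, so the 0.0999...98 you see is the value actually stored; GetDouble is returning it faithfully. If you need the value as it displays in the database tools, round or format at presentation time (or CAST the column to decimal in the query). A small sketch of the formatting side:

```csharp
using System;
using System.Globalization;

double d = 0.1; // the nearest double to 0.1 is slightly off, just like the stored value

// Full 17-digit precision exposes the representation error:
Console.WriteLine(d.ToString("G17", CultureInfo.InvariantCulture)); // 0.10000000000000001
// 15 significant digits round it back to the familiar display form:
Console.WriteLine(d.ToString("G15", CultureInfo.InvariantCulture)); // 0.1
```

This avoids a fixed number of decimals: "G15" trims only the representation noise, so 0.0151515151515152 keeps all of its real digits.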
I have run into an issue where our decimal columns which have a column type of decimal(18,7) are causing the Sum operation to be evaluated locally.
From what I've been able to find, this is because of SQLite's numeric data types. I thought that was just an example, but after changing our decimals from decimal(18,7) to decimal(10,5), the Sum operation is no longer run locally. This confuses me because, from what I've understood, the data types/affinities don't actually affect how the data is stored.
The fact that decimal(10,5) works seems to also contradict the following statement they make.
Note that numeric arguments in parentheses that following the type name (ex: "VARCHAR(255)") are ignored by SQLite
We are using SQlite to store a copy of the data locally from a SQL Server Db, and the SQL Server uses decimal(18,7) so we can't change that for SQlite.
Is there any way to change this so we can use the LINQ Sum operation on our decimal columns?
EDIT:
I found that if I cast the decimal to a double inside the Sum, it works fine. It seems it doesn't want to translate it to SQL because it would lose precision. At the moment I am calling ToList to pull the records in locally before doing the Sum, but this seems extremely wasteful.
Does this mean that our decimal columns are actually stored as floating points in the database?
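To illustrate the workaround from the EDIT without a database: casting to double inside Sum is what allows the provider to translate the aggregate to SQL rather than evaluating it client-side. The same lambda against plain LINQ-to-Objects (the array below stands in for the hypothetical entity set) shows the arithmetic, including the small double rounding you trade for server-side evaluation:

```csharp
using System;
using System.Linq;

decimal[] amounts = { 1.2345678m, 2.7654322m }; // stand-ins for decimal(18,7) values

// Equivalent of db.Records.Sum(r => (double)r.Amount):
double total = amounts.Sum(a => (double)a);

// The exact decimal sum is 4; the double sum agrees to well within 1e-9.
Console.WriteLine(Math.Abs(total - 4.0) < 1e-9); // True
```

Whether that rounding is acceptable depends on your data; for money-like values it may be safer to pull the rows and sum decimals locally, as wasteful as that feels.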
I have a field in a SQLite database, we'll call it field1, on which I'm trying to iterate over each record (there are over a thousand records). The field type is string. The values of field1 in the first four rows are as follows:
DEPARTMENT
09:40:24
PARAM
350297
Here is some simple code I use to iterate over each row and display the value:
while (sqlite_datareader.Read())
{
    strVal = sqlite_datareader.GetString(0);
    Console.WriteLine(strVal);
}
The first 3 values display correctly. However, when it gets to the numerical entry 350297, it errors out with the following exception on the GetString() method:
An unhandled exception of type 'System.InvalidCastException' occurred in System.Data.SQLite.dll
I've tried casting to a string and a bunch of other stuff, but I can't get to the bottom of why this is happening. For now, I'm forced to use GetValue, which returns an object, and then convert back to a string. But I'd like to figure out why GetString() isn't working here.
Any ideas?
EDIT: Here's how I currently deal with the problem:
object objVal; // This is declared before the loop starts...
objVal = sqlite_datareader.IsDBNull(i) ? "" : sqlite_datareader.GetValue(i);
// ToString() handles any storage class; a (string) cast would throw an
// InvalidCastException for integer values, and comparing an object to ""
// with != is only a reference comparison.
strVal = objVal.ToString();
What the question should have included is
The table schema, preferably the CREATE TABLE statement used to define the table.
The SQL statement used in opening the sqlite_datareader.
Any time you're dealing with data type issues from a database, it is prudent to include such information. Otherwise there is much unnecessary guessing and floundering (as is apparent in the comments), when crucial, highly useful information is explicitly defined in the schema DDL. The underlying query used to fetch the data is perhaps less critical, but it could very well be part of the issue if there are CASTs and/or other expressions that might affect the returned types. If I were debugging the issue on my own system, these are the first things I would have checked!
The comments contain good discussion, but the best solution will come from understanding how SQLite handles data types, straight from the official docs. The key takeaway is that SQLite defines type affinities on columns and then stores actual values according to a limited set of storage classes. A type affinity is the type to which data will attempt to be converted before being stored. But (from the docs)...
The important idea here is that the type is recommended, not required. Any column can still store any type of data.
But now consider...
A column with TEXT affinity stores all data using storage classes NULL, TEXT or BLOB. If numerical data is inserted into a column with TEXT affinity it is converted into text form before being stored.
So even though values of any storage class can be stored in any column, the default behavior should have been to convert a numeric value like 350297 to text before storing it... if the column had been properly declared as a TEXT type.
But if you read carefully enough, you'll eventually come to the following at the end of section 3.1.1. Affinity Name Examples:
And the declared type of "STRING" has an affinity of NUMERIC, not TEXT.
So if the question details are taken literally and field1 was defined as field1 STRING, then technically it has NUMERIC affinity, and so a value like 350297 would have been stored as an integer, not a string. The behavior described in the question is precisely what one would expect when retrieving that data into a strictly-typed data model like System.Data.SQLite.
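This is easy to verify directly in SQLite (table and column names here are illustrative): a column declared STRING gets NUMERIC affinity and stores a numeric-looking value as an integer, while a TEXT column converts it to text.

```sql
CREATE TABLE demo (field1 STRING, field2 TEXT);
INSERT INTO demo VALUES (350297, 350297);
SELECT typeof(field1), typeof(field2) FROM demo;
-- returns: integer | text
```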
It is very easy to cuss at such an unintuitive design decision, and I won't defend the behavior, but
at least the results of "STRING" type are clearly stated so that the column can be redefined to TEXT in order to fix the problem, and
"STRING" is actually not a standard SQL data type. SQL strings are instead defined with TEXT, NTEXT, CHAR, NCHAR, VARCHAR, NVARCHAR, etc.
The solution is either to keep the code as currently implemented: get all values as objects and then convert them to strings, which should be universally possible since every .NET object defines a ToString() method.
Or, redefine the column to have TEXT affinity like
CREATE TABLE myTable (
...
field1 TEXT,
...
)
Exactly how to redefine an existing column filled with data is another question altogether. However, when copying the data from the original column to the new one, remember to use CAST(field1 AS TEXT) to ensure the storage class is changed for the existing data. (I'm not certain whether type affinity is enforced when simply copying/inserting data from an existing table into another, or whether the original storage class is preserved by default. That's why I suggest the cast to force a text value.)
I am working on a rather large SQL database project that requires some history tracking. I am aware that SQL Server has features like Change Data Capture, but I need more control than just storing backup copies of the data, along with other requirements.
Here is what I am trying to do, I found some similar questions on here like
get-the-smallest-datetime-value-for-each-day-in-sql-database
and
how-can-i-truncate-a-datetime-in-sql-server
What I am trying to avoid is exactly the type of thing mentioned in those answers; by that I mean that they require some sort of conversion during the query, i.e. truncating the DateTime value. What I would like to do instead is take DateTime.Ticks, or some other DateTime property, and do a one-way conversion to a smaller type to create a "Version" of each record that can be queried quickly, without any conversion after the database update/insert.
The concern I have about simply storing the long from DateTime.Ticks as a version, or using something like a Base36 string, is the size of the field and the time required to compare such fields during queries.
Can anyone point me in the right direction on how to SAFELY convert a DateTime.Ticks or a long into something that can be directly compared during queries? By this I mean I would like to be able to locate the history record using something like:
int versionToFind = GetVersion(DateTime.Now);
var result = from rec in db.Records
where rec.version <= versionToFind
select rec;
or
int versionToFind = record.version;
var result = from rec in db.Records
where rec.version >= versionToFind
select rec;
One thing to mention here is that I am not opposed to using some other method of quickly tracking the History of the data. I just need to end up with the quickest and smallest solution to be able to generate and compare Versions for each record.
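One possibility, sketched below (the epoch and the one-second granularity are my assumptions; pick whatever granularity your history needs): compute an integer version at write time as whole time units since a fixed epoch. The conversion is one-way, happens before the insert/update, and the stored int compares directly in queries with no conversion:

```csharp
using System;

static int GetVersion(DateTime utc)
{
    // Fixed epoch; a signed 32-bit int then covers roughly 68 years of 1-second versions.
    DateTime epoch = new DateTime(2000, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    return (int)(utc - epoch).TotalSeconds;
}

int versionToFind = GetVersion(DateTime.UtcNow);
Console.WriteLine(versionToFind > 0); // True for any current date
```

Stored as a plain int column, rec.version <= versionToFind remains a simple integer comparison, which indexes well; using minutes instead of seconds shrinks the range further if one-second resolution is more than you need.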
EDIT: I now strongly suspect this behavior is due to a bug in the Oracle OLEDB provider. In further testing, I was able to run Select statements against other CAST column values with negative scale that did not cause the 'Decimal byte constructor...' exception. I also note that the provider returns the absolute value of the scale when viewing the schema, e.g. a scale of -2 is returned as 2. Additionally, this same test query does not cause an exception when run through the ODP.NET driver (rather than the Oracle OLEDB provider). Changing the numeric delimiter as suggested by Lalit (in comments) did not affect the results (but I thank him for his time nonetheless). I am continuing to research this problem and will post an update if I learn more.
I have a 64-bit C# application that fetches data from an Oracle database via the Oracle 11g OLEDB provider. When Oracle returns a numeric type defined or cast with negative scale (such as Select Cast(123.1 as Number(3,-1))), the mapped OLEDB schema (from GetSchemaTable) reports that column as a Decimal with a scale of 255. The documentation indicates 255 is intended to represent an N/A or irrelevant value.
When OleDbDataReader.GetValues() is later called on a row containing such a column, an ArgumentException is thrown, advising that a "Decimal byte array constructor... requires four valid decimal bytes", telling me that even though the OLEDB provider thinks it is Decimal data, there is no valid Decimal data to read. I assume data is present, but I am not sure in what form.
I have tried:
Explicitly getting the bytes from the column via calls to OleDbDataReader.GetBytes (even the call to size a buffer throws), but doing so throws "Specified cast is not valid" exceptions.
Writing a chunk of test code that tries every supported return data type (GetInt16, GetInt32, etc.), each of which throws the same invalid-cast exception.
Does the Oracle OleDB provider not even return data to the caller when fetching a column defined with a negative scale? Is there some other mechanism to at least get the bytes "across the pond" and manipulate them on the receiving end?
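Not an answer to the provider bug, but useful for sanity-checking whatever bytes do come across: Oracle applies a negative scale by rounding that many digits to the left of the decimal point, so the server-side value of Cast(123.1 as Number(3,-1)) is 120. A small simulation of that rounding (my own helper, not an Oracle API):

```csharp
using System;

// round(x / 10^(-scale)) * 10^(-scale), for scale < 0
// (assuming Oracle's round-half-away-from-zero behavior)
static decimal ApplyNegativeScale(decimal x, int scale)
{
    decimal factor = 1m;
    for (int i = 0; i < -scale; i++) factor *= 10m;
    return Math.Round(x / factor, MidpointRounding.AwayFromZero) * factor;
}

Console.WriteLine(ApplyNegativeScale(123.1m, -1)); // 120
```

Knowing the expected value makes it easier to tell whether the provider is mangling real data or returning nothing at all; casting to a non-negative scale in the query (e.g. Cast(... as Number(10,0))) may also sidestep the schema problem entirely.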