Specified cast is not valid - bigint to long - C#

I am trying to map bigint data from a SQL Server table to a C# long variable, which I believe is the correct mapping:
long id = (long)ds.Tables[2].Rows[0].ItemArray[0];
I also tried the following, as suggested on SO:
long id = (long)(double)ds.Tables[2].Rows[0].ItemArray[0];
With both of the above I get the error below:
System.InvalidCastException : Specified cast is not valid.
In case you are wondering what data it contains, it is "1".

Convert.ToInt64 got it working. Thanks all.
You are just hiding a bigger problem. The data type of that column in the C# DataTable held within the DataSet ds is likely set incorrectly. You need to set the table up so that the column's DataType property is Int64.
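A minimal sketch of declaring the column type up front (assuming the DataTable is built by hand rather than inferred by a DataAdapter):

```csharp
using System;
using System.Data;

// Declare the column as Int64 so a SQL bigint lands in the table
// as a boxed long rather than a string or int.
var table = new DataTable("Results");
table.Columns.Add("Id", typeof(long)); // DataType = System.Int64

table.Rows.Add(1L);

// Unboxing requires the exact type, so this cast now succeeds
// because the boxed value really is a long.
long id = (long)table.Rows[0]["Id"];
Console.WriteLine(id); // 1
```

This is why the unboxing cast `(long)` failed in the question: the boxed value was not actually a long, and unboxing (unlike Convert.ToInt64) does not convert between numeric types.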

Related

InvalidCastException when using datareader.getString() on a string field that contains numerical value

I have a field in a SQLite database, we'll call it field1, over which I'm trying to iterate for each record (there are over a thousand records). The field type is string. The values of field1 in the first four rows are as follows:
DEPARTMENT
09:40:24
PARAM
350297
Here is some simple code I use to iterate over each row and display the value:
while (sqlite_datareader.Read())
{
strVal = sqlite_datareader.GetString(0);
Console.WriteLine(strVal);
}
The first 3 values display correctly. However, when it gets to the numerical entry 350297, it errors out with the following exception on the .GetString() method:
An unhandled exception of type 'System.InvalidCastException' occurred in System.Data.SQLite.dll
I've tried casting to a string, and a bunch of other stuff, but I can't get to the bottom of why this is happening. For now, I'm forced to use GetValue, which returns an object, and then convert back to a string. But I'd like to figure out why GetString() isn't working here.
Any ideas?
EDIT: Here's how I currently deal with the problem:
object objVal; // This is declared before the loop starts...
objVal = sqlite_datareader.IsDBNull(i) ? "" : sqlite_datareader.GetValue(i);
if (!"".Equals(objVal))
{
    strVal = objVal.ToString(); // ToString() is safe even when the value was stored as an integer
}
What the question should have included is
The table schema, preferably the CREATE TABLE statement used to define the table.
The SQL statement used in opening the sqlite_datareader.
Any time you're dealing with data type issues from a database, it is prudent to include such information. Otherwise there is much unnecessary guessing and floundering (as apparent in the comments), when crucial, very useful information is explicitly defined in the schema DDL. The underlying query for getting the data is perhaps less critical, but it could very well be part of the issue if there are CAST expressions and/or other expressions that might be affecting the returned types. If I were debugging the issue on my own system, these are the first things I would have checked!
The comments contain good discussion, but the best solution comes from understanding how SQLite handles data types, straight from the official docs. The key takeaway is that SQLite defines type affinities on a column and then stores actual values according to a limited set of storage classes. A type affinity is a type to which data will attempt to be converted before being stored. But (from the docs) ...
The important idea here is that the type is recommended, not required. Any column can still store any type of data.
But now consider...
A column with TEXT affinity stores all data using storage classes NULL, TEXT or BLOB. If numerical data is inserted into a column with TEXT affinity it is converted into text form before being stored.
So even though values of any storage class can be stored in any column, the default behavior should have been to convert any numeric value, like 350297, to a string before storing it... if the column was properly declared as a TEXT type.
But if you read carefully enough, you'll eventually come to the following at the end of section 3.1.1. Affinity Name Examples:
And the declared type of "STRING" has an affinity of NUMERIC, not TEXT.
So if the question details are taken literally and field1 was defined like field1 STRING, then technically it has NUMERIC affinity, and so a value like 350297 would have been stored as an integer, not a string. The behavior described in the question is precisely what one would expect when retrieving that data into a strictly-typed data model like System.Data.SQLite.
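This is easy to reproduce (a sketch, assuming the System.Data.SQLite provider and an in-memory database; SQLite's typeof() function reports the storage class actually used for each value):

```csharp
using System;
using System.Data.SQLite; // assumed NuGet package: System.Data.SQLite.Core

using var conn = new SQLiteConnection("Data Source=:memory:");
conn.Open();

using var cmd = conn.CreateCommand();
cmd.CommandText = @"
    CREATE TABLE t (a STRING, b TEXT);      -- STRING => NUMERIC affinity!
    INSERT INTO t VALUES ('350297', '350297');";
cmd.ExecuteNonQuery();

cmd.CommandText = "SELECT typeof(a), typeof(b) FROM t";
using var reader = cmd.ExecuteReader();
reader.Read();

// Expect 'integer' for the STRING column and 'text' for the TEXT column,
// which is why GetString(0) throws on the one but works on the other.
Console.WriteLine($"{reader.GetString(0)} / {reader.GetString(1)}");
```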
It is very easy to cuss at such an unintuitive design decision, and I won't defend the behavior, but
at least the results of "STRING" type are clearly stated so that the column can be redefined to TEXT in order to fix the problem, and
"STRING" is actually not a standard SQL data type. SQL strings are instead defined with TEXT, NTEXT, CHAR, NCHAR, VARCHAR, NVARCHAR, etc.
The solution is either to use the code as currently implemented: get all values as objects and then convert them to strings... which should be universally possible with .NET objects since they all have a ToString() method.
Or, redefine the column to have TEXT affinity like
CREATE TABLE myTable (
...
field1 TEXT,
...
)
Exactly how to redefine an existing column filled with data is another question altogether. However, at least when doing the conversion from the original to the new column, remember to use a CAST(field1 AS TEXT) to ensure the storage class is changed for the existing data. (I'm not certain whether type affinity is "enforced" when simply copying/inserting data from an existing table into another or if the original storage class is preserved by default. That's why I suggest the cast to force it to a text value.)
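A sketch of the usual rebuild recipe (the table and column names here are placeholders for illustration; adjust them to your actual schema):

```csharp
using System.Data.SQLite; // assumed NuGet package: System.Data.SQLite.Core

// Sketch only: SQLite has no ALTER COLUMN, so the usual recipe is to
// rebuild the table, casting values as they are copied across.
using var conn = new SQLiteConnection("Data Source=mydb.sqlite");
conn.Open();

using var cmd = conn.CreateCommand();
cmd.CommandText = @"
    BEGIN;
    CREATE TABLE myTable_new (id INTEGER PRIMARY KEY, field1 TEXT);
    INSERT INTO myTable_new (id, field1)
        SELECT id, CAST(field1 AS TEXT) FROM myTable;
    DROP TABLE myTable;
    ALTER TABLE myTable_new RENAME TO myTable;
    COMMIT;";
cmd.ExecuteNonQuery();
```

The explicit CAST in the copy is the important part: it forces the existing integer-stored values into the text storage class rather than leaving that to affinity rules.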

SqlBulkCopy dealing with nulls

I am using SqlBulkCopy to upload lots of data to an SQL table. It works very well apart from one thing (always the way).
So in my C# app I have a function. It receives a variable myObj of type object (from Matlab); the object is actually an array.
I create a DataTable where I specify the column types and read in the data from myObj. One of the columns in the table (let's call it salary) is of type double.
The problem
If one of the rows has a NaN value for salary the upload won't work, it returns the message below.
OLE DB provider 'STREAM' for linked server '(null)' returned invalid data for column
What I need is for the row to be uploaded, but with the salary column set to null in the database. Is there any way of doing this?
The only crude way I have come up with is testing for when the value is null (or NaN) in my C# app, assigning it a value of -999, and then after the upload updating any -999 values to null. However, this seems like a poor workaround.
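A cleaner alternative to the -999 placeholder (a sketch, assuming the salary column is nullable in both the DataTable and the SQL table) is to map NaN to DBNull.Value while filling the table; SqlBulkCopy writes DBNull.Value as a SQL NULL:

```csharp
using System;
using System.Data;

var table = new DataTable();
table.Columns.Add("Salary", typeof(double)); // AllowDBNull is true by default

double incoming = double.NaN; // hypothetical value read from the Matlab array

// Map NaN to DBNull.Value before the row goes into the table.
table.Rows.Add(double.IsNaN(incoming) ? (object)DBNull.Value : incoming);

// A configured SqlBulkCopy instance (not shown) will then insert NULL
// for that row: bulkCopy.WriteToServer(table);
Console.WriteLine(table.Rows[0].IsNull("Salary")); // True
```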

Can't call DataTable.Load() when IDataReader source has a VARCHAR(256) Identity column

I'm running into some trouble trying to load data into a DataTable using an IDataReader. To keep it really simple, I just call ExecuteReader() on the command, create a DataTable, call Load() on it and feed it the object implementing IDataReader:
...
if(dataReader.HasRows)
{
DataTable tempDT = new DataTable();
tempDT.Load(dataReader);
....
}
...
This works in the vast majority of cases. However, in (rare) circumstances, I get the following exception (column name is obviously variable - in this case, it's ID):
Error - MaxLength applies to string data type only. You cannot set Column `ID` property MaxLength to be a non-negative number
I investigated the source table I was trying to load, and I suspect that the problem stems from it having a VARCHAR(256) ID column, that is a Required, Unique, Key (the issue doesn't seem to occur when the PK is a regular old int). This type of situation is really uncommon in the source data, and while it definitely isn't ideal, I can't modify the schema of the source data.
I took a look at the SchemaTable in more detail, and I am at a loss:
ColumnName - ID
ColumnSize - 256
ProviderType - NVarChar
DataType - {Name = "String" FullName = "System.String"}
IsIdentity - True
IsKey - True
IsAutoIncrement - True
IsUnique - True
It just doesn't make sense to me. The source table uses unique codes as the ID, and while it isn't the way I would've designed it, it's.. fine. But I don't understand how a String/Varchar can ever be an identity, auto-increment, etc.
Unfortunately, I'm at the mercy of this source data and can't mess with it, so I'm hoping someone here might have more insight into what exactly is going on. Can anyone conceive of a way for me to Load() my DataTable without applying all the constraints from the IDataReader source data? Is there an entirely alternative approach that would avoid this problem?
Thanks for reading, thanks in advance for any assistance. It's my first question so be gentle. If there's any more information that would help, please let me know!
EDIT: Some people asked for the full code for loading the DataTable. Appended here. I should add that CacheCommand etc. come from the 'InterSystems.Data.CacheClient' assembly. Kinda hoping the problem can be approached more generically. In this case, the Query string is just a 'SELECT TOP 10 *' test.
using (CacheConnection cacheConnection = new CacheConnection())
{
cacheConnection.ConnectionString = connectionString;
cacheConnection.Open();
using (CacheCommand cacheCommand = new CacheCommand(Query, cacheConnection))
{
using (CacheDataReader cacheDataReader = cacheCommand.ExecuteReader())
{
if (cacheDataReader.HasRows)
{
DataTable tempDT = new DataTable();
tempDT.Load(cacheDataReader); // Exception thrown here.
cacheConnection.Close();
return tempDT;
}
else
{
cacheConnection.Close();
return null;
}
}
}
}
EDIT 2: In case it's not clear, I'm trying to extract the entirety of a (small) table from the Cache DB into a DataTable. I normally do this by calling dataTable.Load(cacheDataReader), which works fine 99% of the time, but breaks when the source table in the Cache DB has an identity column of type VARCHAR.
Calling Load() on my DataTable object (which is empty) causes it to infer the schema based on the result set from the imported IDataReader (in this case, CacheDataReader). The problem is that the schema in the CacheDataReader specifies the data in the list above^, and DataTable doesn't seem to allow the MaxLength property, even though the type is VARCHAR/String.
SELECT TOP 10 * FROM table
WHERE IsNumeric(ColumnName) = 0
This will return only the rows where the column's value is not numeric.
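A more generic way to sidestep the inferred constraints (a sketch; LoadWithoutConstraints is a hypothetical helper, and it works for any IDataReader, including CacheDataReader) is to build the DataTable by hand from GetFieldType, so that no MaxLength, key, or identity metadata from the source schema is applied:

```csharp
using System;
using System.Data;

static class ReaderExtensions
{
    // Copies an IDataReader into a DataTable without letting Load()
    // impose the source schema's constraints (MaxLength, keys, identity).
    public static DataTable LoadWithoutConstraints(IDataReader reader)
    {
        var table = new DataTable();
        for (int i = 0; i < reader.FieldCount; i++)
            table.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

        var values = new object[reader.FieldCount];
        while (reader.Read())
        {
            reader.GetValues(values);
            table.Rows.Add(values);
        }
        return table;
    }
}
```

Only the column names and CLR types survive, which is usually all that is needed when the goal is just to extract the rows.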

How can I safely cast all data from a dataset populated by a SQL query as a string?

Environment: ASP.net, vb.net, SQL Server
I'm retrieving a dataset from a stored procedure. I need to iterate over every column, grab the column name, and the value of the associated row... I'm storing the column names and values in a custom class.
The problem is, this is an end user situation, therefore I have absolutely no idea what SQL query they will use, or stored procedure for that matter.
When testing, I first ran into an issue that I've seen before, and it was easy to deal with...
Consider the following:
For Each column As DataColumn In DynamicQuery.Tables(0).Columns
Dim columnName = column.ColumnName.ToString
Dim value = CType(row(column.ColumnName), String)
' Other code...
Next column
My first problem was running into a NULL value. I solved that with:
CType(If(IsDBNull(row(column.ColumnName)), String.Empty, row(column.ColumnName)), String)
I should also add that I want every single value to be converted to a string...
Then I ran into another problem, and that was testing a table which had a uniqueidentifier datatype.
I received the error:
Error converting 'GUID' to type 'String'
After a bit of reading, I see it's suggested to cast it into a GUID struct and then convert it to a string.
This leads to my main question.
Is there a simpler way to do this? Or should I be using:
Select Case column.DataType
On every single column iteration? If GUID is the only problem, I can deal with that one... I'm just waiting for a bunch of gotchas down the road where yet another datatype causes an issue.
I suppose I'm looking for the appropriate method of defensive programming here. Again, no matter what rows are returned into the dataset, every single column is read along with its value, and everything needs to be either a string with a length or an empty string. Apologies if I'm missing something plain and simple. C# or vb.net examples welcomed with a smile.
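Convert.ToString covers both gotchas in one call (a sketch, shown in C# as invited): it returns an empty string for DBNull.Value and falls back to ToString() for types like Guid that don't implement IConvertible, so no Select Case over column types should be needed:

```csharp
using System;
using System.Data;

var table = new DataTable();
table.Columns.Add("Id", typeof(Guid));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(Guid.NewGuid(), DBNull.Value);

DataRow row = table.Rows[0];
foreach (DataColumn column in table.Columns)
{
    // "" for DBNull, Guid.ToString() for the uniqueidentifier, etc.
    string value = Convert.ToString(row[column]);
    Console.WriteLine($"{column.ColumnName} = {value}");
}
```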

Cannot convert from 'System.DateTime' to 'System.Data.Linq.Binary' error

In the program I'm currently working on, my table has a Create_Timestamp column, which I defined as timestamp.
When I'm working with my data context and my form values in my controller on the HttpPost, I'm trying the following:
NewsArticle article = new NewsArticle();
article.Create_Timestamp = System.DateTime.Now;
The error I get is Cannot implicitly convert from 'System.DateTime' to 'System.Data.Linq.Binary'
I've tried to force the conversion, but I'm unsure exactly what I'm doing at this point.
Is it possible in C# to do this conversion and still have Linq be happy with me?
Thanks
I am guessing you are using the SQL timestamp type in your table and you are expecting it to be a DateTime. Timestamp isn't really meant for holding Date/Time information. From MSDN (http://msdn.microsoft.com/en-us/library/aa260631(SQL.80).aspx):
timestamp is a data type that exposes
automatically generated binary
numbers, which are guaranteed to be
unique within a database. timestamp is
used typically as a mechanism for
version-stamping table rows. The
storage size is 8 bytes.
Change your "Create_Timestamp" column to datetime and you should be fine.