Gathering Data from Oracle view in C#

In my project, I want to get some data from an Oracle database.
In the Oracle database there are tables and views kept separately.
So I have connected to the database and tried gathering data from the views.
I wrote the code below to get the data, but I'm getting an error on the dt.Load(dr) line; the exception says "Specified cast is not valid".
Can anyone explain to me what this error means and how to avoid it?
This is the first time I'm working with an Oracle db.
OracleConnection con = new OracleConnection("Data Source=TEST;Persist Security Info=True;User ID=app;Password=test;");
con.Open();
OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "SELECT * FROM P.INVENTORY_PART_IN_STOCK_UIV WHERE PART_NO = '90202-KPL-900D' and upper(P.Sales_Part_API.Get_Catalog_Group(CONTRACT, PART_NO) ) = upper('SPMB')";
cmd.CommandType = CommandType.Text;
OracleDataReader dr = cmd.ExecuteReader();
DataTable dt = new DataTable();
dt.Load(dr);
dataGridView1.DataSource = dt.DefaultView;

Oracle's NUMBER column type can represent more precision than the native .NET Decimal type. I assume you are using an IFS database and have hit the same issue I did. I ended up with a solution using Entity Framework, mapping any NUMBER fields to a Double and converting locally to Decimal. So far I haven't had any conversion issues in several years.
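A minimal sketch of that mapping idea (the entity, property, and column names here are hypothetical, not from the original answer):
using System.ComponentModel.DataAnnotations.Schema;

// Hypothetical entity: the Oracle NUMBER column is mapped to a double,
// and application code reads a locally converted decimal.
public class InventoryPartInStock
{
    [Column("QTY_ONHAND")]   // assumed column name
    public double QtyOnhandRaw { get; set; }

    [NotMapped]              // computed locally, never sent to Oracle
    public decimal QtyOnhand
    {
        get { return (decimal)QtyOnhandRaw; }
    }
}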
As pointed out in the comments, you can restrict the field list in your select statement, but you will struggle whenever you need a NUMBER-typed column. A quick and dirty solution is to wrap a TO_CHAR around any NUMBER columns and convert in your local code.
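A hedged sketch of that TO_CHAR approach against the question's view (QTY_ONHAND is an assumed column name, and the :partNo bind-variable style is ODP.NET's):
// Sketch: the NUMBER column comes back as a string and is parsed locally.
cmd.CommandText =
    "SELECT PART_NO, TO_CHAR(QTY_ONHAND) AS QTY_ONHAND_STR " +
    "FROM P.INVENTORY_PART_IN_STOCK_UIV WHERE PART_NO = :partNo";
cmd.Parameters.Add(new OracleParameter("partNo", "90202-KPL-900D"));
using (OracleDataReader rdr = cmd.ExecuteReader())
{
    while (rdr.Read())
    {
        // Parse with the invariant culture; a TO_CHAR format model may be
        // needed if the session's NLS settings produce a different separator.
        decimal qty = decimal.Parse(
            rdr.GetString(rdr.GetOrdinal("QTY_ONHAND_STR")),
            System.Globalization.CultureInfo.InvariantCulture);
    }
}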


Adding a parameter to a SQL command in C#

I'm trying to get the following code to work:
String connStr = sqlRoutines.connectionString;
SqlConnection sqlConn = new SqlConnection(connStr);
sqlConn.Open();
SqlCommand cmd = new SqlCommand();
cmd.Connection = sqlConn;
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT * FROM TABEL=#tabel";
cmd.Parameters.Add("#tabel", SqlDbType.Text).Value = DataContainer.sqlTabel;
SqlDataReader reader = cmd.ExecuteReader();
Console.WriteLine(reader.FieldCount.ToString());
reader.Close();
sqlConn.Close();
Somehow the value of DataContainer.sqlTabel is not added to the command. Am I missing something here?
Whenever I use cmd.CommandText = "SELECT * FROM " + DataContainer.sqlTabel; everything works fine. However, I want to avoid this method because of the risk of SQL injection.
Thanks in advance!
EDIT:
I want to achieve a command that uses a variable (which is changed by the user). So I want to have something like this: SELECT * FROM *a variable defined by the user*;. When I use:
cmd.CommandText = "SELECT * FROM #tablename";
cmd.Parameters.Add("#tablename", SqlDbType.Text).Value = DataContainer.sqlTabel;
It doesn't work either.
I think you are trying to parameterize the table name, which you can't do. I don't understand what TABEL is either; FROM (Transact-SQL) doesn't have a syntax like that.
You can only parameterize values, not table names or column names. Specify the table name as part of the SQL, but when you do that, you need very strong validation on the table name before putting it into the SQL, or a whitelisted set of valid table names, in order to avoid SQL injection attacks.
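A minimal sketch of the whitelist approach (the table names here are placeholders):
using System;
using System.Collections.Generic;

// Whitelist of known table names; anything else is rejected, so the
// concatenated SQL can never contain attacker-controlled text.
private static readonly HashSet<string> AllowedTables =
    new HashSet<string>(StringComparer.OrdinalIgnoreCase)
    {
        "Customers", "Orders", "Products"
    };

public static string BuildSelect(string tableName)
{
    if (!AllowedTables.Contains(tableName))
        throw new ArgumentException("Unknown table name.", "tableName");
    // Safe to concatenate: tableName is now one of our own literals.
    return "SELECT * FROM [" + tableName + "]";
}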
If you really want to parameterize it, you can use (but not recommended) Dynamic SQL.
SELECT * FROM @tablename
As we have seen, we can make this procedure work with help of dynamic
SQL, but it should also be clear that we gain none of the advantages
with generating that dynamic SQL in a stored procedure. You could just
as well send the dynamic SQL from the client. So, OK: 1) if the SQL
statement is very complex, you save some network traffic and you do
encapsulation. 2) As we have seen, starting with SQL 2005 there are
methods to deal with permissions. Nevertheless, this is a bad idea.
Also, use a using statement to dispose of your database connections.
using (SqlConnection sqlConn = new SqlConnection(connStr))
using (SqlCommand cmd = sqlConn.CreateCommand())
{
    // ... set CommandText, add parameters, open the connection ...
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // ... consume the reader ...
    }
}
You can't do this; you can only use dynamic SQL. You need to decide for yourself whether SQL injection is a risk, and if so you need to code for it, perhaps checking that the value of the table variable doesn't contain anything other than a valid table name. You can check the table name against the system tables to ensure it is valid.
We can't use parameters for either table names or column names; parameters are limited to values only. If you want to use a parameter for a table name, you have to create a stored procedure and then pass the table name as a parameter to the stored procedure. More information can be found here:
Table name as parameter

SqlDataAdapter.Fill() - Conversion overflow

All,
I am encountering "Conversion overflow" exceptions in one of the SqlDataAdapter.Fill() usages for a decimal field. The error occurs for values of 10 billion and above, but not for values up to 1 billion. Here is the code:
DataSet ds = new DataSet();
SqlDataAdapter adapter = new SqlDataAdapter();
adapter.SelectCommand = <my SQL Command instance>
adapter.Fill(ds);
I have read about using SqlDataReader as an alternative, but then we need to set the data type and precision explicitly. There are at least 70 columns being fetched, and I don't want to set all of them just for the one decimal field in error.
Can anyone suggest alternate approaches?
Thank you.
Although a DataSet can be used when filling from a data adapter, I've typically done this with a DataTable instead, since when querying I'm only expecting one result set. Having said that, I would pre-query the table just to get its structure... something like
select whatever from yourTable(s) where 1=2
This will get the expected result columns when you do a
DataTable myTable = new DataTable();
YourAdapter.Fill( myTable );
Now that you have a local table that will not fail on content size (because no records have been returned), you can explicitly go to the one column in question and set its data type / size information as you need...
myTable.Columns["NameOfProblemColumn"].DataType = typeof(string); // or whatever type/precision you need
NOW, your local schema is legit and the problem column has been identified with its precision. Now put in your proper query with its proper where clause (not the 1=2) to actually return the data... Since no actual rows came back in the first pass, you don't even need a myTable.Clear() to clear the rows... just re-run the query and dataAdapter.Fill().
I haven't actually tried this, as I don't have your data to simulate the same problem, but the theoretical process should get you by without having to explicitly go through all the columns... just the few that may pose the problem.
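Pulling the two passes together, an untested sketch of that idea (the table and column names are placeholders):
// Pass 1: WHERE 1=2 returns no rows, only the column structure.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlDataAdapter adapter = new SqlDataAdapter(
           "SELECT * FROM yourTable WHERE 1=2", conn))
{
    DataTable myTable = new DataTable();
    adapter.Fill(myTable);

    // DataType can only be changed while the table is empty; reading the
    // problem column as a string sidesteps the decimal conversion.
    myTable.Columns["NameOfProblemColumn"].DataType = typeof(string);

    // Pass 2: the real query, loaded into the adjusted schema.
    adapter.SelectCommand.CommandText = "SELECT * FROM yourTable";
    adapter.Fill(myTable);
}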
I had the same problem, and the reason is that my stored procedure returned a decimal(38,20) field. I changed it to decimal(20,10) and all works fine. It seems to be a limitation of ADO.NET: .NET's System.Decimal only holds 28-29 significant digits, so a DECIMAL(38,20) value can overflow it.
CREATE PROCEDURE FOOPROCEDURE AS
BEGIN
DECLARE @A DECIMAL(38,20) = 999999999999999999.99999999999999999999;
SELECT @A;
END
GO
string connectionString ="";
SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
SqlCommand cmd = new SqlCommand("EXEC FOOPROCEDURE", conn);
SqlDataAdapter adt = new SqlDataAdapter(cmd);
DataSet ds = new DataSet();
adt.Fill(ds); //exception thrown here
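One more escape hatch worth knowing about: System.Data.SqlTypes.SqlDecimal holds the full 38 digits, so a plain SqlDataReader can fetch the value even when System.Decimal (28-29 significant digits) cannot. A small sketch against the procedure above:
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("EXEC FOOPROCEDURE", conn))
{
    conn.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            // GetSqlDecimal avoids the conversion to System.Decimal entirely.
            System.Data.SqlTypes.SqlDecimal wide = rdr.GetSqlDecimal(0);
            string asText = wide.ToString(); // lossless; convert further as needed
        }
    }
}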

Data Adapter Vs Sql Command

Which one would be better for executing an insert statement against an MS SQL database: a SqlDataAdapter or a SqlCommand object?
Which of them would be better when inserting only one row, and when inserting multiple rows?
A simple example of code usage:
SQL Command
string query = "insert into Table1(col1,col2,col3) values (@value1,@value2,@value3)";
int i;
SqlCommand cmd = new SqlCommand(query, connection);
// add parameters...
cmd.Parameters.Add("#value1",SqlDbType.VarChar).Value=txtBox1.Text;
cmd.Parameters.Add("#value2",SqlDbType.VarChar).Value=txtBox2.Text;
cmd.Parameters.Add("#value3",SqlDbType.VarChar).Value=txtBox3.Text;
cmd.con.open();
i = cmd.ExecuteNonQuery();
cmd.con.close();
SQL Data Adapter
DataSet dsTab = new DataSet("Table1");
SqlDataAdapter adp = new SqlDataAdapter("Select * from Table1", connection);
adp.Fill(dsTab, "Table1");
DataRow dr = dsTab.Tables["Table1"].NewRow();
dr["col1"] = txtBox1.Text;
dr["col2"] = txtBox5.Text;
dr["col3"] = "text";
dsTab.Tables["Table1"].Rows.Add(dr);
SqlCommandBuilder projectBuilder = new SqlCommandBuilder(adp);
DataSet newSet = dsTab.GetChanges(DataRowState.Added);
adp.Update(newSet, "Table1");
Updating a data source is much easier using DataAdapters. It's easier to make changes since you just have to modify the DataSet and call Update.
There is probably no (or very little) difference in performance between using DataAdapters and Commands. DataAdapters internally use Connection and Command objects, and they execute those Commands to perform the actions you tell them to do (such as Fill and Update), so it's pretty much the same as using only Command objects.
I would use LINQ to SQL with a DataSet for single inserts and most database CRUD requests. It is type safe and relatively fast for uncomplicated queries such as the one above.
If you have many rows to insert (1000+) and you are using SQL Server 2008, I would use SqlBulkCopy. You can use your DataSet as input to a stored procedure and merge into your destination.
For complicated queries, I recommend using Dapper in conjunction with stored procedures.
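For illustration, a minimal Dapper call against a hypothetical stored procedure (GetOrdersByCustomer and the Order class are made up for the example):
using Dapper;

public class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

// Dapper maps the result set onto Order by column name and opens the
// connection itself if it is closed.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    IEnumerable<Order> orders = conn.Query<Order>(
        "GetOrdersByCustomer",
        new { CustomerId = 42 },
        commandType: CommandType.StoredProcedure);
}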
I suggest you keep some kind of control over your communication with the database. That means abstracting some code, and for that the CommandBuilder automatically generates the CUD statements for you.
What would be even better is to use that technique together with a typed DataSet; then you have IntelliSense and compile-time checking on all your columns.

SqlBulkCopy calculated field

I am working on moving a database from MS Access to SQL Server. To move the data into the new tables, I have decided to write a sync routine, since the schema has changed quite significantly; this lets me run testing on the programs that run off the database and re-sync whenever I need fresh test data. Eventually I will do one last sync and go live on the new SQL Server version.
Unfortunately I have hit a snag. My method for copying from Access to SQL Server is below:
public static void BulkCopyAccessToSQLServer(
    string sql, CommandType commandType, DBConnection sqlServerConnection,
    string destinationTable, DBConnection accessConnection, int timeout)
{
    using (DataTable dt = new DataTable())
    using (OleDbConnection conn = new OleDbConnection(GetConnection(accessConnection)))
    using (OleDbCommand cmd = new OleDbCommand(sql, conn))
    using (OleDbDataAdapter adapter = new OleDbDataAdapter(cmd))
    {
        cmd.CommandType = commandType;
        cmd.Connection.Open();
        adapter.SelectCommand.CommandTimeout = timeout;
        adapter.Fill(dt);
        using (SqlConnection conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
        using (SqlBulkCopy copy = new SqlBulkCopy(conn2))
        {
            conn2.Open();
            copy.DestinationTableName = destinationTable;
            copy.BatchSize = 1000;
            copy.BulkCopyTimeout = timeout;
            copy.NotifyAfter = 1000; // set before WriteToServer so it can take effect
            copy.WriteToServer(dt);
        }
    }
}
Basically this queries Access for the data using the input SQL string; the result has all the correct field names, so I don't need to set ColumnMappings.
This was working until I reached a table with a calculated field. SqlBulkCopy doesn't seem to know to skip the field; it tries to update the column, which fails with the error "The column 'columnName' cannot be modified because it is either a computed column or is the result of a union operator."
Is there an easy way to make it skip the calculated field?
I am hoping not to have to specify a full column mapping.
There are two ways to dodge this:
use the ColumnMappings to formally define the column relationship (you note you don't want this)
push the data into a staging table - a basic table, not part of your core transactional tables, whose entire purpose is to look exactly like this data import; then use a TSQL command to transfer the data from the staging table to the real table (a sketch follows this list)
I always favor the second option, for various reasons:
I never have to mess with mappings - this is actually important to me ;p
the insert to the real table will be fully logged (SqlBulkCopy is not necessarily logged)
I have the fastest possible insert - no constraint checking, no indexing, etc
I don't tie up a transactional table during the import, and there is no risk of non-repeatable queries running against a partially imported table
I have a safe abort option if the import fails half way through, without having to use transactions (nothing has touched the transactional system at this point)
it allows some level of data-processing when pushing it into the real tables, without the need to either buffer everything in a DataTable at the app tier, or implement a custom IDataReader
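A hedged sketch of the staging route (Staging_MyTable, MyTable, and the column list are hypothetical; dt is the DataTable filled by the code above):
using (SqlConnection conn = new SqlConnection(GetConnection(sqlServerConnection)))
{
    conn.Open();

    // Bulk copy into the staging table, which mirrors the DataTable exactly.
    using (SqlBulkCopy copy = new SqlBulkCopy(conn))
    {
        copy.DestinationTableName = "Staging_MyTable";
        copy.WriteToServer(dt);
    }

    // Transfer listing only the real columns; the computed column is simply
    // absent from the INSERT, so SQL Server computes it itself.
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO MyTable (Col1, Col2) SELECT Col1, Col2 FROM Staging_MyTable; " +
        "TRUNCATE TABLE Staging_MyTable;", conn))
    {
        cmd.ExecuteNonQuery();
    }
}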

Problem using SQLDataReader with Sybase ASE

We're developing a reporting application that uses ASP.NET MVC (.NET 4). We connect through the DDTek.Sybase middleware to a Sybase ASE 12.5 database.
We're having a problem pulling data into a DataReader from a stored procedure. The stored procedure computes values (approximately 50 columns) by doing sums and counts and by calling other stored procedures.
The problem we're experiencing is that certain columns (maybe 5%) come back with NULL or 0. If we debug, copy the SQL statement being used for the DataReader, and run it in another SQL tool, we get valid values for all columns.
conn = new SybaseConnection
{
    ConnectionString = ConfigurationManager.ConnectionStrings[ConnectStringName].ToString()
};
conn.Open();
cmd = new SybaseCommand
{
    CommandTimeout = cmdTimeout,
    Connection = conn,
    CommandText = mainSql
};
reader = cmd.ExecuteReader();
// AT THIS POINT IMMEDIATELY AFTER THE EXECUTEREADER COMMAND
// THE READER CONTAINS THE BAD (NULL OR 0) DATA FOR THESE COLUMNS.
DataTable schemaTable = reader.GetSchemaTable();
// AT THIS POINT WE CAN VIEW THE DATATABLE FOR THE SCHEMA AND IT APPEARS CORRECT
// THE COLUMNS THAT DON'T WORK HAVE SPECIFICATIONS IDENTICAL TO THE COLUMNS THAT DO WORK
Has anyone had problems like this using Sybase and ADO?
Thanks,
John K.
Problem solved! The problem turned out to be a difference in the way NULLs were handled in the SQL. We had several instances in the stored procedure that used non-ANSI null tests (x = null rather than x is null). The SQL tools I had used to test the problem were defaulting SET ANSINULL to OFF, while our ADO code was not, so ANSINULL was ON there. With that setting, SQL code that tested for null with "=" would never evaluate TRUE, allowing the null value to be returned.
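For anyone hitting the same thing, a hedged sketch: the durable fix is rewriting the non-ANSI tests (x = null) as ANSI ones (x is null) in the stored procedure, but as a quick diagnostic you can force the ADO session to match the SQL tool's behavior (assuming the DDTek provider exposes the usual commandText/connection constructor):
// Quick diagnostic only; the real fix is changing "= NULL" to "IS NULL"
// in the stored procedure itself.
using (SybaseCommand setCmd = new SybaseCommand("SET ANSINULL OFF", conn))
{
    setCmd.ExecuteNonQuery();
}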
