What is the right way to avoid errors on INSERTs and UPDATEs with a SqlDataAdapter/SqlCommandBuilder when the SQL statement used to SELECT has a computed column as one of the return fields?
The error I get now is "The column Amount cannot be modified because it is either a computed column or is the result of a UNION operator".
UPDATE:
I fixed the issue by using a query like this:
SELECT *, PercentRating * 500 AS CoreValue FROM ValueListings
And ditching the computed column.
Now it works. How does SqlCommandBuilder realize that it should NOT build the CoreValue field into the UPDATE and INSERT statements? Does anybody know how this works internally?
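A plausible peek at the mechanism (not the documented internals): the builder executes the SelectCommand in schema-only mode and inspects the reader's schema table, where an aliased expression like CoreValue comes back flagged read-only, so it is left out of the generated commands. A minimal C# probe reusing the query above; the connection string is an assumption:
using System;
using System.Data;
using System.Data.SqlClient;

class SchemaProbe {
    static void Main() {
        using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True"))
        using (var cmd = new SqlCommand("SELECT *, PercentRating * 500 AS CoreValue FROM ValueListings", conn)) {
            conn.Open();
            using (var dr = cmd.ExecuteReader(CommandBehavior.SchemaOnly | CommandBehavior.KeyInfo)) {
                DataTable schema = dr.GetSchemaTable();
                foreach (DataRow col in schema.Rows) {
                    // Expect CoreValue to report IsReadOnly = True, which is
                    // why the generated INSERT/UPDATE statements skip it.
                    Console.WriteLine("{0}: IsReadOnly={1}", col["ColumnName"], col["IsReadOnly"]);
                }
            }
        }
    }
}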
Use this statement:
SELECT * FROM ValueListings
Then, after filling the DataTable, add a computed column to it:
Dim Dt As New DataTable
Da.Fill(Dt)
Dt.Columns.Add("CoreValue", GetType(Double), "PercentRating * 500")
If you can avoid using *, it WILL save you trouble later. With *, your code may break if you change the schema; with named fields, you're good.
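Combining both points, a C# sketch of the same approach; the Id key column and the connection string are assumptions (SqlCommandBuilder needs a primary key in the SELECT to generate commands):
using System.Data;
using System.Data.SqlClient;

class ComputedColumnExample {
    static void Main() {
        using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True"))
        using (var da = new SqlDataAdapter("SELECT Id, PercentRating FROM ValueListings", conn)) {
            var builder = new SqlCommandBuilder(da);  // generates INSERT/UPDATE/DELETE
            var dt = new DataTable();
            da.Fill(dt);
            // The expression column lives only in the DataTable, never in the
            // SQL, so the builder has nothing to write back for it.
            dt.Columns.Add("CoreValue", typeof(double), "PercentRating * 500");
            // ... edit rows here ...
            da.Update(dt);
        }
    }
}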
I have been having this problem for years and I wonder if anyone has figured it out. When you add a table adapter to the dataset designer in Visual Studio, it generates the default query. Many times the default query will not be efficient, so you can right-click and add another query. But when there is a change in the underlying table structure and you reconfigure the table adapter, the second query loses the * from its SQL statement. The dataset designer changes it to a list of the columns previously present in the table.
For example, if you create a table adapter for Select * From Customers and then create another query for Select * From Customers Where Id=@Id, any time you make a change to the Customers table and reconfigure the table adapter, the second query will change to Select Id, Name From Customers.
Does anyone know how to prevent the VS designer from breaking it up? I noticed that when using the DISTINCT keyword in any subsequent queries, the designer will not replace the * with column names, even after adding columns to the table. Is there another keyword that can be used to stop the designer from changing any queries?
Today I encountered a problem that I am having difficulty solving.
In my application I want to display records in alphabetical order, so in my SQL statement I'm using ORDER BY. But it looks like CAPITAL letters sort before lowercase letters, so a record starting with Z comes before one starting with a.
This is an example of my SQL statement:
SELECT * FROM myTable WHERE id = 5 ORDER BY name
Do you have any ideas? Can I sort the data in the DataTable object after retrieving it from the database, or can it be accomplished by a more complex SQL statement?
Any ideas will be appreciated
You can modify your SQL query in such a way that all capitals are transformed to lowercase before ordering:
SELECT * FROM myTable WHERE id = 5 ORDER BY LOWER(name)
The rules for comparing text values are determined by the collation; there are many, many collations available in SQL Server, and most have both case-sensitive and case-insensitive options.
If you don't want to change the collation (in particular, if this applies only to specific cases), you can also use functions like LOWER / UPPER, but these cannot make efficient use of indexes. A hybrid approach is to store redundant information: store the original data in one column, and the standardized data (perhaps all lower-case) in a second column. Then you can index the two separately (as you need), and operate on either the original or standardized data. You would normally only display the original data, though. Persisted, computed, indexed columns might work well here, since then it is impossible to get inconsistent data (the server is in charge of the computed column).
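A minimal sketch of that hybrid approach against the myTable/name schema from this question; the name_lower column and index name are invented:
-- Server-maintained lower-cased copy; PERSISTED stores it on disk
ALTER TABLE myTable ADD name_lower AS LOWER(name) PERSISTED
GO
-- Index the standardized column so the ORDER BY below can use it
CREATE INDEX IX_myTable_name_lower ON myTable (name_lower)
GO
-- Display the original data, but order by the standardized copy
SELECT * FROM myTable WHERE id = 5 ORDER BY name_lower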
Try
SELECT * FROM myTable WHERE id = 5 ORDER BY LOWER(name)
OR
SELECT * FROM myTable WHERE id = 5 ORDER BY LCASE(name)
depending on which database you are using
You can perform the ordering by controlling the case in SQL. Just do this:
SELECT * FROM myTable WHERE id = 5 ORDER BY UPPER(name)
OR
SELECT * FROM myTable WHERE id = 5 ORDER BY UCASE(name)
Ordering will be done on the upper-cased name, while the results will be the same as present in the table.
Try this...
SELECT * FROM myTable WHERE id = 5 ORDER BY name COLLATE Latin1_General_100_CI_AS
I'm trying to insert records using a high-performance table parameter method ( http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/ ), and I'm curious whether it's possible to retrieve back the identity values for each record I insert.
At the moment, the answer appears to be no - I insert the data, then retrieve back the identity values, and they don't match. Specifically, they don't match about 75% of the time, and they don't match in unpredictable ways. Here's some code that replicates this issue:
// Create a datatable with 100k rows
DataTable dt = new DataTable();
dt.Columns.Add(new DataColumn("item_id", typeof(int)));
dt.Columns.Add(new DataColumn("comment", typeof(string)));
for (int i = 0; i < 100000; i++) {
dt.Rows.Add(new object[] { 0, i.ToString() });
}
// Insert these records and retrieve back the identity
using (SqlConnection conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True")) {
conn.Open();
using (SqlCommand cmd = new SqlCommand("proc_bulk_insert_test", conn)) {
cmd.CommandType = CommandType.StoredProcedure;
// Adding a "structured" parameter allows you to insert tons of data with low overhead
SqlParameter param = new SqlParameter("@mytable", SqlDbType.Structured);
param.Value = dt;
cmd.Parameters.Add(param);
SqlDataReader dr = cmd.ExecuteReader();
// Set all the records' identity values
int i = 0;
while (dr.Read()) {
dt.Rows[i].ItemArray = new object[] { dr.GetInt32(0), dt.Rows[i].ItemArray[1] };
i++;
}
dr.Close();
}
// Do all the records' ID numbers match what I received back from the database?
using (SqlCommand cmd = new SqlCommand("SELECT * FROM bulk_insert_test WHERE item_id >= @base_identity ORDER BY item_id ASC", conn)) {
cmd.Parameters.AddWithValue("@base_identity", (int)dt.Rows[0].ItemArray[0]);
SqlDataReader dr = cmd.ExecuteReader();
DataTable dtresult = new DataTable();
dtresult.Load(dr);
}
}
The database is defined using this SQL Server script:
CREATE TABLE bulk_insert_test (
item_id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
comment varchar(20)
)
GO
CREATE TYPE bulk_insert_table_type AS TABLE ( item_id int, comment varchar(20) )
GO
CREATE PROCEDURE proc_bulk_insert_test
@mytable bulk_insert_table_type READONLY
AS
DECLARE @TableOfIdentities TABLE (IdentValue INT)
INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id INTO @TableOfIdentities(IdentValue)
SELECT comment FROM @mytable
SELECT * FROM @TableOfIdentities
Here's the problem: the values returned from proc_bulk_insert_test are not in the same order as the original records were inserted. Therefore, I can't programmatically assign each record the item_id value I received back from the OUTPUT statement.
It seems like the only valid solution is to SELECT back the entire list of records I just inserted, but frankly I'd prefer any solution that would reduce the amount of data piped across my SQL Server's network card. Does anyone have better solutions for large inserts while still retrieving identity values?
EDIT: Let me try clarifying the question a bit more. The problem is that I would like my C# program to learn what identity values SQL Server assigned to the data that I just inserted. The order isn't essential, but I would like to be able to take an arbitrary set of records within C#, insert them using the fast table parameter method, and then assign their auto-generated ID numbers in C# without having to requery the entire table back into memory.
Given that this is an artificial test set, I attempted to condense it into as small and readable a bit of code as possible. Let me describe the methods I have used to try to resolve this issue:
In my original code, in the application this example came from, I would insert about 15 million rows using 15 million individual insert statements, retrieving back the identity value after each insert. This worked but was slow.
I revised the code to use high-performance table parameters for insertion. I would then dispose of all of the objects in C# and read the entire set of records back from the database. However, the original records had dozens of columns with lots of varchar and decimal values, so this method was very network-traffic intensive, although it was fast and it worked.
I then began researching whether it was possible to use the table parameter insert while asking SQL Server to just report back the identity values. I tried SCOPE_IDENTITY() and OUTPUT but haven't been successful so far with either.
Basically, this problem would be solved if SQL Server would always insert the records in exactly the order I provided them. Is it possible to make SQL Server insert records in exactly the order they are provided in a table-valued parameter insert?
EDIT2: This approach seems very similar to what Cade Roux cites below:
http://www.sqlteam.com/article/using-the-output-clause-to-capture-identity-values-on-multi-row-inserts
However, in the article, the author uses a magic unique value, "ProductNumber", to connect the inserted information from the OUTPUT clause back to the original table-valued parameter. I'm trying to figure out how to do this if my table doesn't have a magic unique value.
Your TVP is an unordered set, just like a regular table. It only has order when you specify one. Not only do you not have any way to indicate actual order here, you're also just doing a SELECT * at the end with no ORDER BY. What order do you expect here? You've told SQL Server, effectively, that you don't care. That said, I implemented your code and had no problems getting the rows back in the right order. I modified the procedure slightly so that you can actually tell which identity value belongs to which comment:
DECLARE @TableOfIdentities TABLE (IdentValue INT, comment varchar(20))
INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id, Inserted.comment
INTO @TableOfIdentities(IdentValue, comment)
SELECT comment FROM @mytable
SELECT * FROM @TableOfIdentities
Then I called it using this code (we don't need all the C# for this):
DECLARE @t bulk_insert_table_type;
INSERT @t VALUES(5,'foo'),(2,'bar'),(3,'zzz');
SELECT * FROM @t;
EXEC dbo.proc_bulk_insert_test @t;
Results:
1 foo
2 bar
3 zzz
If you want to make sure the output is in the order of identity assignment (which isn't necessarily the same "order" that your unordered TVP has), you can add ORDER BY IdentValue to the last select in your procedure.
If you want to insert into the destination table so that your identity values are in an order that is important to you, then you have a couple of options:
add a column to your TVP and insert the order into that column, then use a cursor to iterate over the rows in that order, and insert them one at a time. Still more efficient than calling the entire procedure for each row, IMHO.
add a column to your TVP that indicates order, and use an ORDER BY on the insert (a sketch follows this list). This isn't guaranteed, but it is relatively reliable, particularly if you eliminate parallelism issues using MAXDOP 1.
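A sketch of that second option; the ordered_insert_table_type and proc_bulk_insert_ordered names are made up, and the ORDER BY's effect on identity assignment is reliable in practice rather than a hard guarantee:
CREATE TYPE ordered_insert_table_type AS TABLE ( sort_order int, comment varchar(20) )
GO
CREATE PROCEDURE proc_bulk_insert_ordered
@mytable ordered_insert_table_type READONLY
AS
DECLARE @TableOfIdentities TABLE (IdentValue INT)
-- The ORDER BY makes identity assignment follow sort_order;
-- MAXDOP 1 removes parallelism as a source of reordering
INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id INTO @TableOfIdentities(IdentValue)
SELECT comment FROM @mytable ORDER BY sort_order
OPTION (MAXDOP 1)
SELECT IdentValue FROM @TableOfIdentities ORDER BY IdentValue
GO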
In any case, you seem to be placing a lot of relevance on ORDER. What does your order actually mean? If you want to place some meaning on order, you shouldn't be doing so using an IDENTITY column.
You specify no ORDER BY on this: SELECT * FROM @TableOfIdentities, so there's no guarantee of order. If you want them in the same order they were sent, do an INNER JOIN from that to the data that was inserted, with an ORDER BY that matches the order the rows were sent in.
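If the data has no natural key to join on (the situation in the question), one workaround is to number the rows in the TVP and use MERGE, whose OUTPUT clause (unlike INSERT's) can reference source columns. A sketch, assuming a revised type with a row_num column (all names here are made up):
-- Assumes: CREATE TYPE numbered_table_type AS TABLE ( row_num int, comment varchar(20) )
DECLARE @map TABLE (row_num int, IdentValue int)

MERGE bulk_insert_test AS tgt
USING @mytable AS src
ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
INSERT (comment) VALUES (src.comment)
OUTPUT src.row_num, Inserted.item_id INTO @map (row_num, IdentValue);

-- row_num ties each identity back to the caller's original row
SELECT row_num, IdentValue FROM @map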
Scenario: we have a table where one of the columns is varchar(20). The column mostly contains integer values, but we haven't restricted the user: 90% of users enter 50, but there are 5% of users who enter 50 Units.
Defined an in-code query as follows:
qry = select COALESCE(CONVERT(Varchar(20), column1), '') from table1
Have got C# code to populate the dataset as follows:
DataSet ds = loader.LoadDataSet(qry);
Now what happens is that the .NET runtime gets the first row and, because it's an integer (in most cases), assigns the column an int data type. In scenarios like '50 Units', it returns blank, as column1 is int (in the .NET runtime's view) and the query fails at CONVERT(VARCHAR(20), column1), returning an empty ('') column.
One alternative is to use a strongly typed dataset and get it done that way, but I would love to know of any other alternative before going down that path.
My bad. Actually, it was the SQL query that was failing in the .NET code. When a column is varchar, doing something like COALESCE(CONVERT(VARCHAR(20), column1), 0) fails. It should be COALESCE(CONVERT(VARCHAR(20), column1), '0').
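For reference, the corrected in-code query in context (loader and the table/column names as in the question):
// '0' is a string literal, so it matches the VARCHAR produced by CONVERT;
// with a plain 0, SQL Server picks int as the result type and then fails
// trying to convert values like '50 Units' to int
string qry = "SELECT COALESCE(CONVERT(VARCHAR(20), column1), '0') FROM table1";
DataSet ds = loader.LoadDataSet(qry);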
I am a PHP/MySQL developer slowly venturing into the realm of C#/SQL Server, and I am having a problem in C# when it comes to reading the results of an SQL Server query that joins two tables.
Given the two tables:
TableA:
int:id
VARCHAR(50):name
int:b_id
TableB:
int:id
VARCHAR(50):name
And given the query
SELECT * FROM TableA,TableB WHERE TableA.b_id = TableB.id;
Now in C# I normally read query data in the following fashion:
SqlDataReader data_reader = sql_command.ExecuteReader();
data_reader["Field"];
Except in this case I need to differentiate between TableA's name column and TableB's name column.
In PHP I would simply ask for the field "TableA.name" or "TableB.name" accordingly, but when I try something like
data_reader["TableB.name"];
in C#, my code errors out.
How can I fix this? And how can I read a query on multiple tables in C#?
The result set only sees the returned data/column names, not the underlying table. Change your query to something like
SELECT TableA.Name as Name_TA, TableB.Name as Name_TB from ...
Then you can refer to the fields like this:
data_reader["Name_TA"];
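Putting it together, a minimal sketch; the join condition comes from the question, while the connection string is an assumption:
using System;
using System.Data.SqlClient;

class JoinReadExample {
    static void Main() {
        const string sql =
            "SELECT TableA.name AS Name_TA, TableB.name AS Name_TB " +
            "FROM TableA INNER JOIN TableB ON TableA.b_id = TableB.id";
        using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True"))
        using (var cmd = new SqlCommand(sql, conn)) {
            conn.Open();
            using (SqlDataReader data_reader = cmd.ExecuteReader()) {
                while (data_reader.Read()) {
                    // Each alias is now a distinct field name in the reader
                    Console.WriteLine("{0} / {1}", data_reader["Name_TA"], data_reader["Name_TB"]);
                }
            }
        }
    }
}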
To those posting that it is wrong to use SELECT *: I strongly disagree with you. There are many real-world cases where a SELECT * is necessary. Your absolute statements about its "wrong" use may be leading someone astray from what is a legitimate solution.
The problem here does not lie with the use of SELECT *, but with a constraint in ADO.NET.
As the OP points out, in PHP you can index a data row via the "TABLE.COLUMN" syntax, which is also how raw SQL handles column name conflicts:
SELECT table1.ID, table2.ID FROM table1, table2;
Why the DataReader is not implemented this way, I do not know...
That said, a solution would be to build your SQL statement dynamically by:
querying the schema of the tables you're selecting from
building your SELECT clause by iterating through the column names in the schema
In this way you could build a query like the following without having to know which columns currently exist in the schema for the tables you're selecting from:
SELECT TableA.Name as Name_TA, TableB.Name as Name_TB from ...
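A sketch of that dynamic approach, discovering the columns via INFORMATION_SCHEMA.COLUMNS; the table names come from the question, while the connection string and alias suffixes are assumptions:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class SelectBuilder {
    // Builds "Table.col AS col_SUFFIX" clauses for every column of a table
    static List<string> AliasedColumns(SqlConnection conn, string table, string suffix) {
        var cols = new List<string>();
        using (var cmd = new SqlCommand(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @t", conn)) {
            cmd.Parameters.AddWithValue("@t", table);
            using (var dr = cmd.ExecuteReader()) {
                while (dr.Read())
                    cols.Add(string.Format("{0}.{1} AS {1}_{2}", table, dr.GetString(0), suffix));
            }
        }
        return cols;
    }

    static void Main() {
        using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True")) {
            conn.Open();
            var clauses = AliasedColumns(conn, "TableA", "TA");
            clauses.AddRange(AliasedColumns(conn, "TableB", "TB"));
            string sql = "SELECT " + string.Join(", ", clauses) +
                         " FROM TableA INNER JOIN TableB ON TableA.b_id = TableB.id";
            Console.WriteLine(sql); // every column now carries a unique alias
        }
    }
}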
You could try reading the values by index (a number) rather than by key.
name = data_reader[4];
You will have to experiment to see how the numbers correspond.
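Rather than experimenting, you can ask the reader how the ordinals map to names; a small sketch using the data_reader variable from the question:
// Dump ordinal -> column name; duplicate names ("name" from both
// tables) show up twice, at different ordinals
for (int i = 0; i < data_reader.FieldCount; i++) {
    Console.WriteLine("{0}: {1}", i, data_reader.GetName(i));
}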
Welcome to the real world. In the real world, we don't use "SELECT *". Specify which columns you want, from which tables, and with which alias, if required.
Although it is better to use a column list to remove duplicate columns, if for any reason you want *, then just use
rdr.Item("duplicate_column_name")
This will return the first column's value. Since the inner join will have the same values in both identical columns, this will accomplish the task.
Ideally, you should never have duplicate column names across a database schema. So, if you can, rename the columns in your schema so they don't conflict.
That rule is for this very situation. Once you've done your join, it is just a new recordset, and generally the table names do not go with it.