I am inserting data through a user-defined table type, and I am getting back the inserted identity values in incorrect order.
Here is the stored procedure:
CREATE PROCEDURE [dbo].[InsertBulkAgency](
@AgencyHeaders AS AgencyType READONLY)
AS
BEGIN
insert into [Agency]
output Inserted.Id
select *,'cm',GETDATE(),'comp',GETDATE() from @AgencyHeaders
END
And here is the C# code to save the identity values:
using (SqlCommand cmd = new SqlCommand("InsertBulkAgency", myConnection))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#AgencyHeaders", SqlDbType.Structured).Value = dt;
myConnection.Open();
rdr = cmd.ExecuteReader();
while (rdr.Read())
{
sample.Add(rdr["Id"].ToString());
}
myConnection.Close();
}
The returned values in the list should be sequential, but they come back in what looks like random order. How can I get the inserted identity values back in the correct order?
Did you try adding an ORDER BY?
insert into dbo.[Agency]
output inserted.Id
select *,'cm',GETDATE(),'comp',GETDATE() from @AgencyHeaders
ORDER BY inserted.Id;
Or use .Sort() once you have the data back in your application.
If you don't have an ORDER BY, you shouldn't expect any specific order from SQL Server. Why should the values in the list be in any sequential order, if you have just said "give me this set"? Why should SQL Server predict that you want them sorted in any specific way? And if it did assume you wanted the data sorted, why wouldn't it pick name or any other column to order by? Truth is, SQL Server will pick whatever sort order it deems most efficient, if you've effectively told it you don't care, by not bothering to specify.
Also, why are you converting the Id to a string (which will also cause problems with sorting, since '55' < '9')? I suggest you make sure your list uses a numeric type rather than a string, otherwise it will not always sort the way you expect.
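If you go the client-side route, here is a minimal sketch based on your posted code (assuming Id is an int identity column) that keeps the values numeric and sorts them:
var ids = new List<int>();
using (SqlCommand cmd = new SqlCommand("InsertBulkAgency", myConnection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@AgencyHeaders", SqlDbType.Structured).Value = dt;
    myConnection.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            ids.Add(rdr.GetInt32(rdr.GetOrdinal("Id")));   // keep Id as int, not string
        }
    }
    myConnection.Close();
}
ids.Sort();   // numeric sort, so 9 comes before 55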
Language: C# DB: Access (So, OleDbDataAdapter)
Context:
In my current project, I'm building a sort of SQL query builder. Depending on what the user selects from a drop-down list, a SQL command can look like this (not sure if it's relevant to the question, though). [] are columns/the table, () are drop-down optionals; the parentheses aren't actually there:
Select * From [db] where [Date] (between) @value1 and @value2 AND [ID] (=) @ID AND [Usd] (=) @Usd
Well, you get the idea.
I run this through the following code:
sbuilder = new StringBuilder();
sbuilder.Append("Select * FROM ").Append(Current_Table).Append(" WHERE ");
using (OleDbConnection connection = new OleDbConnection(Con))
{
connection.Open();
string query = sbuilder.ToString();
OleDbDataAdapter da = new OleDbDataAdapter(query, connection);
da.SelectCommand.Parameters.Clear();
//some code to build the string, AND build the select parameters, here's the important one though
da.SelectCommand.Parameters.AddWithValue("@USD", Convert.ToDouble(FilterUSD.Text));
DataTable dt = new DataTable();
da.Fill(dt);
DGVMain.DataSource = dt;
connection.Close();
}
My problem:
When I retrieve the values, as you'd expect, they must match the value of FilterUSD.Text, so when a user searches for 12 and the DB contains 12.31251, he will get 0 rows back. How do I make it so that when the parameter is 12, it returns all values that have a base value of 12 and any following decimal values? The examples I've looked at online seem to suggest using a SQL data reader and retrieving the values into a Double variable.
How do I proceed with the data adapter I am currently using to fill the DataTable (and later, a DataGridView)?
My hunch is I will have to make use of the parameters of my select command.
Found this link, but I don't know how to adapt it for my use with a DataTable: Read decimal from SQL Server database
I'm assuming that you're only ever searching for positive whole numbers, so here are two ways I can think of. I'd probably go with the first one, in case anything unexpected happens with the CAST function.
Add two comparisons in the WHERE clause to pick up records greater than or equal to your search parameter and records less than your search parameter + 1.
[Usd] >= 12 AND [Usd] < 13
Cast the db field to an int in the WHERE clause, so that the decimal places are removed.
cast([Usd] as int) = 12
EDIT: Didn't realise you were using Access (above is for SQL Server). This should be used instead: Int([Usd]) = 12.
If you want to work with negative numbers as well, then you'll get different results from these 2 options.
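In your data-adapter context, the first option might look something like the sketch below. The table and column names come from your example; the parameter names are placeholders, and OLE DB matches parameters by position rather than by name, so add them in the order they appear in the SQL.
double usd = Convert.ToDouble(FilterUSD.Text);    // e.g. 12

string query = "SELECT * FROM [db] WHERE [Usd] >= ? AND [Usd] < ?";
using (OleDbConnection connection = new OleDbConnection(Con))
using (OleDbDataAdapter da = new OleDbDataAdapter(query, connection))
{
    da.SelectCommand.Parameters.AddWithValue("@UsdLow", usd);       // lower bound: 12
    da.SelectCommand.Parameters.AddWithValue("@UsdHigh", usd + 1);  // upper bound (exclusive): 13
    DataTable dt = new DataTable();
    da.Fill(dt);                 // Fill opens and closes the connection itself
    DGVMain.DataSource = dt;
}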
Select * From [db] where [Date] (between) @value1 and @value2 AND [ID] (=) @ID AND left([Usd],len(Usd)) (=) @Usd
The code I added is the Left function. I'm not sure whether this works with Access, but I've used it in SQL Server, so give it a shot. If the DB contains 12.33321 and your "@Usd" parameter is 12, the DB will only give you the whole number you wanted; and if the parameter is -12 but the DB contains -12.3123, it will still give you -12.
I'm trying to insert records using a high performance table parameter method ( http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/ ), and I'm curious if it's possible to retrieve back the identity values for each record I insert.
At the moment, the answer appears to be no - I insert the data, then retrieve back the identity values, and they don't match. Specifically, they don't match about 75% of the time, and they don't match in unpredictable ways. Here's some code that replicates this issue:
// Create a datatable with 100k rows
DataTable dt = new DataTable();
dt.Columns.Add(new DataColumn("item_id", typeof(int)));
dt.Columns.Add(new DataColumn("comment", typeof(string)));
for (int i = 0; i < 100000; i++) {
dt.Rows.Add(new object[] { 0, i.ToString() });
}
// Insert these records and retrieve back the identity
using (SqlConnection conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True")) {
conn.Open();
using (SqlCommand cmd = new SqlCommand("proc_bulk_insert_test", conn)) {
cmd.CommandType = CommandType.StoredProcedure;
// Adding a "structured" parameter allows you to insert tons of data with low overhead
SqlParameter param = new SqlParameter("@mytable", SqlDbType.Structured);
param.Value = dt;
cmd.Parameters.Add(param);
SqlDataReader dr = cmd.ExecuteReader();
// Set all the records' identity values
int i = 0;
while (dr.Read()) {
dt.Rows[i].ItemArray = new object[] { dr.GetInt32(0), dt.Rows[i].ItemArray[1] };
i++;
}
dr.Close();
}
// Do all the records' ID numbers match what I received back from the database?
using (SqlCommand cmd = new SqlCommand("SELECT * FROM bulk_insert_test WHERE item_id >= #base_identity ORDER BY item_id ASC", conn)) {
cmd.Parameters.AddWithValue("#base_identity", (int)dt.Rows[0].ItemArray[0]);
SqlDataReader dr = cmd.ExecuteReader();
DataTable dtresult = new DataTable();
dtresult.Load(dr);
}
}
The database is defined using this SQL server script:
CREATE TABLE bulk_insert_test (
item_id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
comment varchar(20)
)
GO
CREATE TYPE bulk_insert_table_type AS TABLE ( item_id int, comment varchar(20) )
GO
CREATE PROCEDURE proc_bulk_insert_test
@mytable bulk_insert_table_type READONLY
AS
DECLARE @TableOfIdentities TABLE (IdentValue INT)
INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id INTO @TableOfIdentities(IdentValue)
SELECT comment FROM @mytable
SELECT * FROM @TableOfIdentities
Here's the problem: the values returned from proc_bulk_insert_test are not in the same order as the original records were inserted. Therefore, I can't programmatically assign each record the item_id value I received back from the OUTPUT statement.
It seems like the only valid solution is to SELECT back the entire list of records I just inserted, but frankly I'd prefer any solution that would reduce the amount of data piped across my SQL Server's network card. Does anyone have better solutions for large inserts while still retrieving identity values?
EDIT: Let me try clarifying the question a bit more. The problem is that I would like my C# program to learn what identity values SQL Server assigned to the data that I just inserted. The order isn't essential, but I would like to be able to take an arbitrary set of records within C#, insert them using the fast table parameter method, and then assign their auto-generated ID numbers in C# without having to requery the entire table back into memory.
Given that this is an artificial test set, I attempted to condense it into as small a readable bit of code as possible. Let me describe what methods I have used to resolve this issue:
In my original code, in the application this example came from, I would insert about 15 million rows using 15 million individual insert statements, retrieving back the identity value after each insert. This worked but was slow.
I revised the code to use high-performance table parameters for insertion. I would then dispose of all of the objects in C# and read the entire objects back from the database. However, the original records had dozens of columns with lots of varchar and decimal values, so this method was very network-traffic intensive, although it was fast and it worked.
I now began research to figure out whether it was possible to use the table parameter insert, while asking SQL Server to just report back the identity values. I tried scope_identity() and OUTPUT but haven't been successful so far on either.
Basically, this problem would be solved if SQL Server would always insert the records in exactly the order I provided them. Is it possible to make SQL server insert records in exactly the order they are provided in a table value parameter insert?
EDIT2: This approach seems very similar to what Cade Roux cites below:
http://www.sqlteam.com/article/using-the-output-clause-to-capture-identity-values-on-multi-row-inserts
However, in the article, the author uses a magic unique value, "ProductNumber", to connect the inserted information from the "output" value to the original table value parameter. I'm trying to figure out how to do this if my table doesn't have a magic unique value.
Your TVP is an unordered set, just like a regular table. It only has order when you specify one. Not only do you not have any way to indicate actual order here, you're also just doing a SELECT * at the end with no ORDER BY. What order do you expect here? You've told SQL Server, effectively, that you don't care. That said, I implemented your code and had no problems getting the rows back in the right order. I modified the procedure slightly so that you can actually tell which identity value belongs to which comment:
DECLARE @TableOfIdentities TABLE (IdentValue INT, comment varchar(20))
INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id, Inserted.comment
INTO @TableOfIdentities(IdentValue, comment)
SELECT comment FROM @mytable
SELECT * FROM @TableOfIdentities
Then I called it using this code (we don't need all the C# for this):
DECLARE @t bulk_insert_table_type;
INSERT @t VALUES(5,'foo'),(2,'bar'),(3,'zzz');
SELECT * FROM @t;
EXEC dbo.proc_bulk_insert_test @t;
Results:
1 foo
2 bar
3 zzz
If you want to make sure the output is in the order of identity assignment (which isn't necessarily the same "order" that your unordered TVP has), you can add ORDER BY item_id to the last select in your procedure.
If you want to insert into the destination table so that your identity values are in an order that is important to you, then you have a couple of options:
add a column to your TVP and insert the order into that column, then use a cursor to iterate over the rows in that order, and insert one at a time. Still more efficient than calling the entire procedure for each row, IMHO.
add a column to your TVP that indicates order, and use an ORDER BY on the insert. This isn't guaranteed, but is relatively reliable, particularly if you eliminate parallelism issues using MAXDOP 1.
In any case, you seem to be placing a lot of relevance on ORDER. What does your order actually mean? If you want to place some meaning on order, you shouldn't be doing so using an IDENTITY column.
You specify no ORDER BY on this: SELECT * FROM @TableOfIdentities, so there's no guarantee of order. If you want them in the same order they were sent, do an INNER JOIN from that to the data that was inserted, with an ORDER BY that matches the order the rows were sent in.
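One way to line them up on the C# side, using the modified procedure shown earlier (which echoes comment back alongside IdentValue), is to match each identity to its source row by the comment value. This sketch assumes comment is unique within the batch, as it is in the test data:
var idByComment = new Dictionary<string, int>();
using (SqlCommand cmd = new SqlCommand("proc_bulk_insert_test", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@mytable", SqlDbType.Structured) { Value = dt });
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            idByComment[dr.GetString(1)] = dr.GetInt32(0);   // comment -> IdentValue
        }
    }
}
// Assign the identities back to the original rows, regardless of the order
// the OUTPUT rows came back in.
foreach (DataRow row in dt.Rows)
{
    row["item_id"] = idByComment[(string)row["comment"]];
}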
I have an sqlite3 database with several tables. One of them has two fields: s_id and user_id, the first is integer, the second is integer primary key.
I can view the table contents with SQLite Data Browser, and there are two rows in the table.
user_id values are 1 and 2.
s_id values are, however, strings (like "user1" and "user2"), and SQLite Data Browser shows these strings.
I am trying to retrieve the information using System.Data.SQLite and the following code in C#:
using (SQLiteConnection connection = new SQLiteConnection(string.Format(#"Data Source={0};Legacy Format=True;", path)))
{
connection.Open();
using (SQLiteCommand command = new SQLiteCommand("SELECT * FROM users", connection))
{
using (SQLiteDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
string user = reader["s_id"].ToString();
}
}
}
}
I searched the internet and found that SQLite can store string data in int fields, but I cannot read these strings using C#. The result is always "0". Even if I expand the reader in the Watch window until I can see the objects, the value there is 0 too. (The second object, corresponding to user_id, has the value it should have, 1 or 2.)
Do you know how this string value in integer field can be retrieved in C#?
Thank you.
First things first: if you're going to store TypeX in a database column, don't declare the column as TypeY just because you can. Storing only values of the appropriate type is (as far as SQLite is concerned) essential for staying technology independent (as you can see).
If, for some weird reason, you aren't able to change the database itself, you should cast the value in your query, like this:
SELECT CAST(s_id AS VARCHAR(255)) AS s_id FROM users;
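With that query, the reader loop from the question should come back with the text values; here's a sketch based on the code in the question:
using (SQLiteConnection connection = new SQLiteConnection(string.Format(@"Data Source={0};Legacy Format=True;", path)))
{
    connection.Open();
    using (SQLiteCommand command = new SQLiteCommand(
        "SELECT CAST(s_id AS VARCHAR(255)) AS s_id, user_id FROM users", connection))
    using (SQLiteDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            string user = reader["s_id"].ToString();   // now "user1", "user2", ...
        }
    }
}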
How do you check whether a SELECT SQL statement returned no rows when using ExecuteNonQuery?
The ExecuteNonQuery Method returns the number of row(s) affected by either an INSERT, an UPDATE or a DELETE. This method is to be used to perform DML (data manipulation language) statements as stated previously.
The ExecuteReader Method will return the result set of a SELECT. This method is to be used when you're querying for a bunch of results, such as rows from a table, view, whatever.
The ExecuteScalar Method will return a single value in the first row, first column from a SELECT statement. This method is to be used when you expect only one value from the query to be returned.
In short, it is normal that you get no results from a SELECT statement while using the ExecuteNonQuery method. Use ExecuteReader instead. Using the ExecuteReader method, you will get to know how many rows were returned through the instance of the SqlDataReader object returned.
int rows = 0;
if (reader.HasRows)
while (reader.Read())
rows++;
return rows; // Returns the number of rows read from the reader.
I don't see any way to do this. Use ExecuteScalar with select count(*) where... to count the rows that match the criteria for your original SELECT query. Example below, paraphrased from here:
using (SqlCommand thisCommand =
new SqlCommand("SELECT COUNT(*) FROM Employee", thisConnection))
{
Console.WriteLine("Number of Employees is: {0}",
thisCommand.ExecuteScalar());
}
If you need the rows as well, you would already be using ExecuteReader, I imagine.
Use the ExecuteReader method instead. This returns a SqlDataReader, which has a HasRows property.
ExecuteNonQuery shouldn't be used for SELECT statements.
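A minimal sketch of that, reusing the connection from the example above (the SELECT text is just a placeholder for your own query):
using (SqlCommand cmd = new SqlCommand("SELECT * FROM Employee", thisConnection))
using (SqlDataReader reader = cmd.ExecuteReader())
{
    if (!reader.HasRows)
    {
        Console.WriteLine("No rows returned.");
    }
    else
    {
        while (reader.Read())
        {
            // process each row here
        }
    }
}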
This is late, but I ran into this problem recently and thought it would be helpful for others coming in later (like me) seeking help with the same problem. Anyway, I believe you actually could use ExecuteNonQuery the way you are trying to. BUT... you have to change your underlying SELECT query into a stored procedure that has the SELECT query plus an output parameter which is set to the row count.
As stated in the MSDN documentation:
Although the ExecuteNonQuery returns no rows, any output parameters or return values mapped to parameters are populated with data.
Given that, here's how I did it. By the way, I would love feedback from the experts out there if there are any flaws in this, but it seems to work for me.
First, your stored procedure should have two SELECT statements: one to return your dataset and another tied to an output parameter to return the record count:
CREATE PROCEDURE spMyStoredProcedure
(
@TotalRows int output
)
AS
BEGIN
SELECT * FROM MyTable; -- see extra note about this line below
SELECT @TotalRows = COUNT(*) FROM MyTable;
END
Second, add this code (in VB.NET, using SqlCommand, etc.).
Dim cn As SqlConnection, cm As SqlCommand, dr As SqlDataReader
Dim myCount As Int32
cn = New SqlConnection("MyConnectionString")
cn.Open() 'I open my connection beforehand, but a lot of people open it right before executing the queries. Not sure if it matters.
cm = New SqlCommand("spMyStoredProcedure", cn)
cm.CommandType = CommandType.StoredProcedure
cm.Parameters.Add("#TotalRows", SqlDbType.Int).Direction = ParameterDirection.Output
cm.ExecuteNonQuery()
myCount = CType(cm.Parameters("@TotalRows").Value, Integer)
If myCount > 0 Then
'Do something.
End If
dr = cm.ExecuteReader()
If dr.HasRows Then
'Return the actual query results using the stored procedure's 1st SELECT statement
End If
dr.Close()
cn.Close()
dr = Nothing
cm = Nothing
cn = Nothing
That's it.
Extra note: I assumed you may have wanted to get the "MyCount" amount to do something other than deciding whether to continue returning your query. The reason is that with this method, you don't really need to do that. Since I'm utilizing the ExecuteReader method after getting the count, I can determine whether to continue returning the intended data set using the data reader's HasRows property. To return a data set, however, you need a SELECT statement which returns a data set, hence the reason for the 1st SELECT statement in my stored procedure.
By the way, the cool thing about this way of using ExecuteNonQuery is that you can use it to get the total row count before closing the DataReader (you cannot read output parameters before closing the DataReader, which is what I was trying to do; this method gets around that). I'm not sure if there is a performance hit or a flaw in doing it this way, but like I said... it works for me. =D
I am newbie to db programming and need help with optimizing this query:
Given tables A, B and C and I am interested in one column from each of them, how to write a query such that I can get one column from each table into 3 different arrays/lists in my C# code?
I am currently running three different queries to the DB but want to accomplish the same in one query (to save 2 trips to the DB).
@patmortech Use UNION ALL instead of UNION if you don't care about duplicate values or if you can only get unique values (because you are querying via primary or unique keys). Much faster performance with UNION ALL.
There is no sense of "arrays" in SQL. There are tables, rows, and columns. Resultsets return a SET of rows and columns. Can you provide an example of what you are looking for? (DDL of source tables and sample data would be helpful.)
As others have said, you can send up multiple queries to the server within a single execute statement and return multiple resultsets via ADO.NET. You use the DataReader .NextResult() command to return the next resultset.
See here for more information: MSDN
Section: Retrieving Multiple Result Sets using NextResult
Here is some sample code:
static void RetrieveMultipleResults(SqlConnection connection)
{
using (connection)
{
SqlCommand command = new SqlCommand(
"SELECT CategoryID, CategoryName FROM dbo.Categories;" +
"SELECT EmployeeID, LastName FROM dbo.Employees",
connection);
connection.Open();
SqlDataReader reader = command.ExecuteReader();
while (reader.HasRows)
{
Console.WriteLine("\t{0}\t{1}", reader.GetName(0),
reader.GetName(1));
while (reader.Read())
{
Console.WriteLine("\t{0}\t{1}", reader.GetInt32(0),
reader.GetString(1));
}
reader.NextResult();
}
}
}
With a stored procedure you can return more than one result set from the database and have a DataSet filled with more than one table; you can then access these tables and fill your arrays/lists.
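A rough sketch of that approach (the procedure name "GetThreeColumns", the connection string, and the assumption that each column is an int are all placeholders):
// Requires System.Linq and the System.Data.DataSetExtensions assembly for AsEnumerable().
DataSet ds = new DataSet();
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlDataAdapter da = new SqlDataAdapter("GetThreeColumns", conn))
{
    da.SelectCommand.CommandType = CommandType.StoredProcedure;
    da.Fill(ds);    // each SELECT in the procedure becomes ds.Tables[0], [1], [2]
}

List<int> colA = ds.Tables[0].AsEnumerable().Select(r => r.Field<int>(0)).ToList();
List<int> colB = ds.Tables[1].AsEnumerable().Select(r => r.Field<int>(0)).ToList();
List<int> colC = ds.Tables[2].AsEnumerable().Select(r => r.Field<int>(0)).ToList();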
You can do 3 different SELECT statements and execute in 1 call. You will get 3 results sets back. How you leverage those results depends on what data technology you are using. LINQ? Datasets? Data Adapter? Data Reader? If you can provide that information (perhaps even sample code) I can tell you exactly how to get what you need.
Not sure if this is exactly what you had in mind, but you could do something like this (as long as all three columns are the same data type):
select field1, 'TableA' as TableName from tableA
UNION
select field2, 'TableB' from tableB
UNION
select field3, 'TableC' from tableC
This would give you one big resultset with all the records. Then you could use a data reader to read the results, keep track of what the previous record's TableName value was, and whenever it changes you could start putting the column values into another array.
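Here's a sketch of the reading side. Rather than tracking when TableName changes, it just dispatches each row by the label column, which doesn't depend on the rows coming back grouped (and it uses UNION ALL per the earlier comment). Column types are assumed to be strings, and connection is assumed to be an open SqlConnection:
var listA = new List<string>();
var listB = new List<string>();
var listC = new List<string>();

string sql = "select field1 as Value, 'TableA' as TableName from tableA " +
             "UNION ALL select field2, 'TableB' from tableB " +
             "UNION ALL select field3, 'TableC' from tableC";

using (SqlCommand cmd = new SqlCommand(sql, connection))
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        string value = reader["Value"].ToString();
        switch (reader["TableName"].ToString())
        {
            case "TableA": listA.Add(value); break;
            case "TableB": listB.Add(value); break;
            case "TableC": listC.Add(value); break;
        }
    }
}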
Take the three trips. The answers so far suggest how far you would need to advance from "new to db programming" to do what you want. Master the simplest ways first.
If they are three huge results, then I suspect you're trying to do something in C# that would better be done in SQL on the database without bringing back the data. Without more detail, this sounds suspiciously like an antipattern.