I'd like to find a way to handle multiple updates to a SQL database with one single db round trip. I read about table-valued parameters in SQL Server 2008 (http://www.codeproject.com/KB/database/TableValueParameters.aspx), which seem really useful. But it seems I need to create both a stored procedure and a table type to use them. Is that true? Perhaps due to security? I would like to run a plain text query, simply like this:
var sql = "INSERT INTO Note (UserId, note) SELECT * FROM #myDataTable";
var myDataTable = ... some System.Data.DataTable ...
var cmd = new System.Data.SqlClient.SqlCommand(sql, conn);
var param = cmd.Parameters.Add("#myDataTable", System.Data.SqlDbType.Structured);
param.Value=myDataTable;
cmd.ExecuteNonQuery();
So
A) do I have to create both a stored procedure and a table type to use TVP's? and
B) what alternative method is recommended to send multiple updates (and inserts) to SQL Server?
Yes, you need to create the types.
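You do need the table type, but not necessarily a stored procedure: a plain text command can reference the type if you set SqlParameter.TypeName. A minimal sketch, assuming a table type named dbo.NoteTable (the type name and columns here are illustrative):

// Server-side, once:
//   CREATE TYPE dbo.NoteTable AS TABLE (UserId INT, Note NVARCHAR(MAX));
var sql = "INSERT INTO Note (UserId, note) SELECT UserId, Note FROM @notes";
using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
{
    var param = cmd.Parameters.Add("@notes", System.Data.SqlDbType.Structured);
    param.TypeName = "dbo.NoteTable"; // ties the parameter to the table type above
    param.Value = myDataTable;        // DataTable whose columns match the type
    cmd.ExecuteNonQuery();
}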
Alternatives are sending one big SQL string batch, or passing XML to sprocs.
The downside to big SQL string batches is that they can bloat the plan cache and force SQL Server to recompile - especially when the input data is embedded in the string, which by definition makes each batch unique.
XML was the main alternative before TVPs. The one downside to XML is that, for a while at least, SQL Azure didn't support it (that might have changed?), so it limits your options.
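For reference, a rough sketch of the XML route (element and attribute names are made up for illustration; shown with a text command, but the same parameter works with a sproc). The rows travel as a single XML parameter and are shredded server-side with nodes()/value():

var xml = "<notes><n userId=\"1\" note=\"first\"/><n userId=\"2\" note=\"second\"/></notes>";
var sql = "INSERT INTO Note (UserId, note) " +
          "SELECT n.value('@userId', 'int'), n.value('@note', 'nvarchar(max)') " +
          "FROM @xml.nodes('/notes/n') AS t(n)";
using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
{
    cmd.Parameters.Add("@xml", System.Data.SqlDbType.Xml).Value = xml;
    cmd.ExecuteNonQuery();
}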
TVPs seem to be the way to do this. Our project just converted to using TVPs.
Hope that helps.
Related
I have 4-5 tables in one database (not SQL Server).
In my UI, users can enter SQL conditions together with column names in a textbox. I need to verify whether the SQL is correct and whether those columns exist, and show any errors accordingly. I am using C# on the server side.
I have a SQL Server database where our UI stores all the UI-related information.
One approach is to create all these tables (just the table structure) in my SQL Server as well, then run a simple SELECT against each table and show the error or success message(s) accordingly.
So basically I would have the where clause as below or more conditions:
where a = b and c in(1,2)
As mentioned above, I would execute the above where clause against each table I created in SQL Server, which would return an error if a column does not exist.
Is there a better way to approach this? I was wondering whether there is some way to do it without creating so many tables on my SQL Server.
I don't want to hard-code these, as the structure might change in the near future, so I'm looking for a maintainable solution. Maybe create a single table and store all this information in it.
Any suggestions are appreciated.
In SQL Server you can query the system view INFORMATION_SCHEMA.COLUMNS, which contains a list of all columns for all tables and views.
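A minimal sketch of that check from C# (the table name and open connection are assumed):

// Collect the known column names for one table, then validate the
// user-entered column names against that set.
var known = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
using (var cmd = new SqlCommand(
    "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @t", conn))
{
    cmd.Parameters.AddWithValue("@t", "MyUiTable"); // table being validated (assumed)
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            known.Add(reader.GetString(0));
}
// any user-entered column name not in 'known' does not exist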
However, I agree with previous comments - the design you describe is bad bad bad.
Ignoring the SQL injection troubles for a second: if the users control only the WHERE clause of the query, then you could try running something like
select top 0 * from <tables> where <user-entered-where-clause>
and then gracefully handle any errors that are returned.
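A sketch of that approach (table name assumed; remember the WHERE text is raw user input, so run it only against an empty validation schema with a locked-down account):

string userWhere = "a = b and c in (1,2)"; // user-entered condition (example)
string probe = "SELECT TOP 0 * FROM MyUiTable WHERE " + userWhere;
try
{
    using (var cmd = new SqlCommand(probe, conn))
    using (var reader = cmd.ExecuteReader())
    {
        // no rows come back; reaching this point means columns and syntax are valid
    }
}
catch (SqlException ex)
{
    ShowError(ex.Message); // e.g. "Invalid column name 'c'" (ShowError is hypothetical)
}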
Use the DMV function sys.dm_exec_describe_first_result_set in MS SQL Server to validate the query string.
Assign the user's string to the variable @Str_query:
declare @Str_query as nvarchar(max);
set @Str_query = 'SELECT [role_code],[role_description] FROM [dbname].[dbo].[Roles]';
SELECT error_message FROM sys.dm_exec_describe_first_result_set(@Str_query, NULL, 0) WHERE column_ordinal = 0
If an error message comes back, the query string is not valid for execution.
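The same check driven from C#, as a sketch (assumes SQL Server 2012 or later, where this DMV is available):

string userSql = "SELECT [role_code],[role_description] FROM [dbname].[dbo].[Roles]";
const string check =
    "SELECT error_message FROM sys.dm_exec_describe_first_result_set(@q, NULL, 0) " +
    "WHERE column_ordinal = 0";
using (var cmd = new SqlCommand(check, conn))
{
    cmd.Parameters.AddWithValue("@q", userSql);
    object error = cmd.ExecuteScalar(); // null when the statement is valid
    bool isValid = error == null || error == DBNull.Value;
}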
I am building an application in which I will be producing reports based on the results of SQL queries executed against a number of different databases and servers. Since I am unable to create stored procedures on each server, I have my SQL scripts saved locally; I load them into my C# application and execute them against each server using ADO.NET. All of the scripts are SELECTs that return tables; however, some are more complicated than others and involve multiple SELECTs into temp tables that get joined on, like the super basic example below.
My question is: using ADO.NET, is it possible to assign a string of multiple SQL queries that ultimately returns a single data table to a SqlCommand object - e.g. the two SELECT statements below comprising my complete script? Or would I have to create a transaction and execute each individual query separately as its own command?
-- First Select
SELECT *
INTO #temp
FROM Table1;
-- Second Select
SELECT *
FROM Table1
JOIN #temp
ON Table1.Id = #temp.Id;
Additionally, some of my scripts have comments embedded in them like the rudimentary example above - would these need to be removed or are they effectively ignored within the string? This seems to be working with single queries, in other words the "--This is a comment" is effectively ignored.
private void button1_Click(object sender, EventArgs e)
{
    string connectionString = "Server=server1;Database=test1;Trusted_Connection=True";
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand("--This is a comment \n SELECT TOP 10 * FROM dbo.Table1;", conn))
    using (SqlDataAdapter sqlAdapt = new SqlDataAdapter(cmd))
    {
        DataTable dt = new DataTable();
        sqlAdapt.Fill(dt); // Fill opens and closes the connection itself
        MessageBox.Show(dt.Rows.Count.ToString());
    }
}
Yes, that is absolutely fine; comments are ignored, so it should work fine. The only thing to watch is the scoping of temporary tables: if you are used to working with stored procedures, their scope there is temporary (they are removed when the stored procedure ends); with direct commands it isn't - temp tables are connection-specific, but they survive between multiple operations on that connection. If that is a problem, take a look at "table variables".
Note: technically this is up to the backend provider; assuming you are using a standard database engine, you'll be OK. If you are using something exotic, then it might be a genuine question. For example, it might not work on "Bob's homemade OneNote ADO.NET provider".
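To make it concrete, here is a sketch of the exact script from the question sent as one batch (connection string assumed). Only the final SELECT produces a result set, so Fill ends up with a single table:

string script =
    "SELECT * INTO #temp FROM Table1; " +  // first select: no result set
    "SELECT * FROM Table1 " +
    "JOIN #temp ON Table1.Id = #temp.Id;"; // second select: the one table returned
using (var conn = new SqlConnection(ConnectionString))
using (var sqlAdapt = new SqlDataAdapter(script, conn))
{
    DataTable dt = new DataTable();
    sqlAdapt.Fill(dt); // opens the connection, runs the whole batch, fills from the result
}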
Yes, you can certainly do it.
You can build the query using different types of collections or a StringBuilder, or simply assign the whole script to a string variable.
Whether you stage intermediate results in a temp table or a CTE is entirely up to you; either way, you then add the data to a DataTable.
And if you want the entire set of inserts, updates, or deletes to succeed or fail together, you can go for a transaction; that won't be any issue.
I don't use ADO.NET, I use Entity Framework, but I think this is more a SQL question than an ADO.NET question; forgive me if I'm wrong. Provided you are selecting from Table1 in both queries, I think you should use this query instead:
select *
from Table1 tbl1
join Table1 tbl2
on tbl1.id = tbl2.id
Actually, I really don't ever see a reason you would have to move things into temp tables with options like Common Table Expressions available to you.
Look up CTEs if you don't already know about them:
https://www.simple-talk.com/sql/t-sql-programming/sql-server-cte-basics/
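For instance, the temp-table script from the question collapses into a single statement with a CTE (same table names as above):

-- the CTE replaces the SELECT ... INTO #temp step
WITH temp AS (
    SELECT * FROM Table1
)
SELECT *
FROM Table1
JOIN temp
ON Table1.Id = temp.Id;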
Does anyone know how to store a single backslash into PostgreSQL Database?
I am using C# and Npgsql to access PostgreSQL. My case is that I want to store "1\\0\\0\\0\\1\\0" into the database, and the expected string in the DB field is "1\0\0\0\1\0" - that is, I only need one backslash in the DB field - so that when I get the data back from the DB it will still be "1\\0\\0\\0\\1\\0" in memory. But my problem is that when the in-memory string is "1\\0\\0\\0\\1\\0", the string stored in the DB field is also "1\\0\\0\\0\\1\\0", and when I then get the data from the DB, the in-memory string becomes "1\\\\0\\\\0\\\\0\\\\1\\\\0".
The variables I used in c# code is set as the following format:
var a = "1\\0\\0\\0\\1\\0";
var b = #"1\0\0\0\1\0";
When stored into the db, it seems that the backslashes in both variables have been doubled. How do I deal with this issue?
You should avoid this issue entirely by using parametrized queries; see the "Using parameters in a query" section of the Npgsql User's Manual for an example.
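A minimal sketch of the parametrized version (table and column names assumed); Npgsql escapes the value for you, so the backslashes are not doubled regardless of server settings:

var value = "1\\0\\0\\0\\1\\0"; // in-memory content: 1\0\0\0\1\0
using (var cmd = new NpgsqlCommand(
    "insert into table_name (column_name) values (@v)", conn))
{
    cmd.Parameters.AddWithValue("@v", value);
    cmd.ExecuteNonQuery();
}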
But if you really want to construct a literal query, then you can use the E'' syntax, like this:
var sql = @"insert into table_name (column_name) values (E'1\\0\\0\\0\\1\\0')";
This syntax is independent of server or connection configuration such as standard_conforming_strings, but it is Postgres-specific.
If you want your code to be portable between different database engines, then you can issue set standard_conforming_strings=on just after connecting. Then this works:
var sql = @"insert into table_name (column_name) values ('1\0\0\0\1\0')";
This option is on by default since PostgreSQL 9.1 and has been available since 8.2.
I had this problem as well. I was able to solve it by going into the database's config and changing standard_conforming_strings to OFF.
I'm trying to send a DataTable to a stored procedure using C#, .NET 2.0 and SQL Server 2012 Express.
This is roughly what I'm doing:
//define the DataTable
var accountIdTable = new DataTable("[dbo].[TypeAccountIdTable]");
//define the column
var dataColumn = new DataColumn {ColumnName = "[ID]", DataType = typeof (Guid)};
//add column to dataTable
accountIdTable.Columns.Add(dataColumn);
//feed it with the unique contact ids
foreach (var uniqueId in uniqueIds)
{
accountIdTable.Rows.Add(uniqueId);
}
using (var sqlCmd = new SqlCommand())
{
//define command details
sqlCmd.CommandType = CommandType.StoredProcedure;
sqlCmd.CommandText = "[dbo].[msp_Get_Many_Profiles]";
sqlCmd.Connection = dbConn; //an open database connection
//define parameter
var sqlParam = new SqlParameter();
sqlParam.ParameterName = "#tvp_account_id_list";
sqlParam.SqlDbType = SqlDbType.Structured;
sqlParam.Value = accountIdTable;
//add parameter to command
sqlCmd.Parameters.Add(sqlParam);
//execute procedure
rResult = sqlCmd.ExecuteReader();
//print results
while (rResult.Read())
{
PrintRowData(rResult);
}
}
But then I get the following error:
ArgumentOutOfRangeException: No mapping exists from SqlDbType Structured to a known DbType.
Parameter name: SqlDbType
Upon investigating further (on MSDN, SO and elsewhere), it appears that .NET 2.0 does not support sending a DataTable to the database (it is missing things such as SqlParameter.TypeName), but I'm still not sure, since I haven't seen anyone explicitly state that this feature is not available in .NET 2.0.
Is this true?
If so, is there another way to send a collection of data to the database?
Thanks in advance!
Out of the box, ADO.NET does not support this, with good reason: a DataTable could have just about any number of columns, which may or may not map to a real table in your database.
If I'm understanding what you want to do - quickly upload the contents of a DataTable to a pre-defined, real table with the same structure - I'd suggest you investigate SqlBulkCopy.
From the documentation:
Microsoft SQL Server includes a popular command-prompt utility named bcp for moving data from one table to another, whether on a single server or between servers. The SqlBulkCopy class lets you write managed code solutions that provide similar functionality. There are other ways to load data into a SQL Server table (INSERT statements, for example), but SqlBulkCopy offers a significant performance advantage over them.
The SqlBulkCopy class can be used to write data only to SQL Server tables. However, the data source is not limited to SQL Server; any data source can be used, as long as the data can be loaded to a DataTable instance or read with an IDataReader instance.
SqlBulkCopy will fail when bulk loading a DataTable column of type SqlDateTime into a SQL Server column whose type is one of the date/time types added in SQL Server 2008.
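A minimal sketch of that approach, reusing the DataTable from the question (the destination table name is assumed):

using (var bulk = new SqlBulkCopy(dbConn))
{
    bulk.DestinationTableName = "[dbo].[AccountIds]"; // real table with a matching column (assumed)
    bulk.ColumnMappings.Add("[ID]", "ID");            // DataColumn name above -> table column
    bulk.WriteToServer(accountIdTable);
}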
However, in later versions you can define Table-Valued Parameters in SQL Server and use them to send a table (DataTable) in the way you're asking. There's an example at http://sqlwithmanoj.wordpress.com/2012/09/10/passing-multipledynamic-values-to-stored-procedures-functions-part4-by-using-tvp/
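For what it's worth, on .NET 3.5 and later the missing piece in the question's code is SqlParameter.TypeName, which ties the Structured parameter to the server-side table type (a sketch reusing the names from the question):

var sqlParam = new SqlParameter();
sqlParam.ParameterName = "@tvp_account_id_list";
sqlParam.SqlDbType = SqlDbType.Structured;
sqlParam.TypeName = "[dbo].[TypeAccountIdTable]"; // the name used in CREATE TYPE
sqlParam.Value = accountIdTable;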
Per my experience, if the code compiles in C#, that means ADO.NET supports the type; if it then fails when you execute it, the target database might not support it. In your case you mention SQL Server 2012 Express, so it might not support it. The table type has been supported since SQL Server 2005 per my understanding, but you had to keep the database compatibility level above 99 or so. I am 100% positive it works in SQL Server 2008, because I have used it - and still use it extensively - to do bulk updates through stored procedures using User-Defined Table Types (a.k.a. UDTTs) as the in-parameter for the stored procedure. Again, you must keep the database compatibility level above 99 to use the MERGE command for bulk updates.
And of course you can use SqlBulkCopy, but I'm not sure how reliable it is; that depends on the
I have written a single stored procedure that returns 2 tables:
select *
from workers
select *
from orders
I call this stored procedure from my C# application and get a DataSet with two tables, and everything is working fine.
My question is: how can I change the table names on the SQL Server side, so that on the C# side I can access them by name (instead of Tables[0]):
myDataSet.Tables["workers"]...
I tried to look for the answer on Google but couldn't find it; maybe my search keywords were not sufficient.
You cannot really do anything from the server side to influence those table names - they exist only on the client side, in your ADO.NET code.
What you can do on the client side is add table mappings - something like:
SqlDataAdapter dap = new SqlDataAdapter(YourSqlCommandHere);
dap.TableMappings.Add("Table", "workers");
dap.TableMappings.Add("Table1", "orders");
This would "rename" the Table (first result set) to workers and Table1 (second result set) to orders before you actually fill the data. So after the call to
dap.Fill(myDataSet);
you would then have myDataSet.Tables["workers"] and myDataSet.Tables["orders"] available for you to use.
The TDS protocol documentation (TDS being the protocol used to return results from SQL Server) does not mention a "result set name", so the only way you will ever be able to access the result sets in ADO.NET is by index, as in your example.