Pass xml string as input to MySQL stored procedure - c#

We are using the following code sample to send data from our application to an external database via a stored procedure (SQL Server). I now need to support MySQL as well, so based on the end user's DB selection we need to send the data to either MySQL or SQL Server.
The C# code will be running on one machine, and the DB server will be a different server.
C# Code
using (SqlConnection sqlConnection = new SqlConnection(<<MyConnectionString>>))
{
    using (SqlCommand sqlCommand = new SqlCommand(<<StoredProcedureName>>, sqlConnection))
    {
        sqlCommand.CommandType = CommandType.StoredProcedure;
        sqlCommand.Parameters.Add("@tblStudent", SqlDbType.Xml).Value = students.ToList().ToXML();
        sqlConnection.Open();
        sqlCommand.ExecuteNonQuery();
    }
}
Stored Procedure
CREATE PROCEDURE usp_UpdateStudent (@tblStudent XML)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Student (StudentId, StudentName)
    SELECT Student.value('(StudentId)[1]', 'nvarchar(100)') AS StudentId,
           Student.value('(StudentName)[1]', 'nvarchar(100)') AS StudentName
    FROM @tblStudent.nodes('/ArrayOfStudent/Student') AS TEMPTABLE(Student)
END
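For reference, the XML produced by students.ToList().ToXML() would need the shape implied by the XPath above; presumably something like (values illustrative; ToXML() is my own helper):

    <ArrayOfStudent>
      <Student>
        <StudentId>1</StudentId>
        <StudentName>Alice</StudentName>
      </Student>
      <Student>
        <StudentId>2</StudentId>
        <StudentName>Bob</StudentName>
      </Student>
    </ArrayOfStudent>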
I searched the web for how to pass an XML string as an input parameter from C# to a stored procedure, but I didn't find any concrete answer.
Please advise on how to create a stored procedure with XML as an input parameter, and how to pass the XML string from C# to it.
Note: The above code works as expected in SQL Server. When I tried to implement the same with MySQL, I found that MySQL does not support XML as an input parameter type in stored procedures. It looks like I need to pass the XML as plain text and parse the text inside the stored procedure.
Please let me know if there is a more efficient way to do this.

LOAD_FILE() imports the XML data into a local variable, and ExtractValue() then queries the XML data using XPath. For instance, the code below retrieves a count of students from the xml_content variable:
DECLARE xml_content TEXT;
DECLARE v_row_count INT UNSIGNED;

SET xml_content = LOAD_FILE(path);
SET v_row_count = ExtractValue(xml_content, CONCAT('count(', node, ')'));
where, for example, path = 'C:\students1.xml' and node = '/student_list/student'.

I realise this is an old question, but I too failed to find a good answer on SO, so I was forced to do a bit of work myself! The end result seems to work. Adapting it for your purposes gives you something like:
DELIMITER $$
CREATE PROCEDURE `usp_InsertStudent` (ptblStudent TEXT)
BEGIN
    DECLARE cnt INT;
    DECLARE ptr INT;
    DECLARE rowPtr VARCHAR(100);
    SET cnt = (ExtractValue(ptblStudent, 'count(/ArrayOfStudent/Student)'));
    SET ptr = 0;
    WHILE ptr < cnt DO
        SET ptr = ptr + 1;
        SET rowPtr = CONCAT('/ArrayOfStudent/Student[', ptr, ']');
        INSERT INTO Student (StudentId, StudentName)
        VALUES (ExtractValue(ptblStudent, CONCAT(rowPtr, '/StudentId')),
                ExtractValue(ptblStudent, CONCAT(rowPtr, '/StudentName')));
    END WHILE;
    SELECT ptr;
END;
$$
DELIMITER ;
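On the C# side, a minimal sketch of calling this procedure, assuming MySQL Connector/NET (MySqlConnection/MySqlCommand) and the same ToXML() helper from the question, might look like:

using (MySqlConnection connection = new MySqlConnection(<<MyConnectionString>>))
{
    using (MySqlCommand command = new MySqlCommand("usp_InsertStudent", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // MySQL has no XML parameter type, so the document travels as TEXT.
        command.Parameters.AddWithValue("ptblStudent", students.ToList().ToXML());
        connection.Open();
        command.ExecuteNonQuery();
    }
}

The <<MyConnectionString>> placeholder is kept from the question; substitute your own.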
As an aside - I changed the routine name (your example was doing an insert not an update).
By way of explanation: if you do ExtractValue(ptblStudent, '/ArrayOfStudent/Student/StudentId') you get a single result string, with all the values separated by spaces. Given that it is not so easy (in my limited experience) to split a string in MySQL, it seemed better to extract the individual values row by row. This was particularly true when there are many fields: I did first try extracting all fields at once into space-separated strings, splitting those into separate temporary tables each with an auto_increment id, and then joining the temporary tables on the ids, but this quickly became messy once more than three fields were required. Hence the requirement to check the row count, and the use of the WHILE loop. This does mean that the inserts become single inserts, which is why I return ptr to indicate the number of rows added, rather than row_count() (which in this case would be 1!).
I was pleasantly surprised by the flexibility of MySQL when it came to implicit casting: I have so far tested ints, doubles and DateTimes successfully, in addition to strings; no explicit casting has been necessary to date.
On a more general level, your question also raises the issue of coding to multiple data providers. Some years ago, a colleague of mine persuaded me to go the DbConnection route as advocated in some of the comments. With the benefit of hindsight, this was a mistake: you lose the chance to take advantage of particular features of one or other db provider as exposed through their .NET libraries.

So what I do now is very much what you were proposing: I define my Data Access Layer by means of an interface; this interface is then implemented by one class per db provider, each using the native .NET libraries. Thus the SQL Server implementation uses SqlConnection, the MySQL implementation MySqlConnection, the Oracle implementation OracleConnection, and so on. In this way my application does not care about the implementation details, but I am free to take advantage of features unique to one db or another.

To give a simple example: MS SQL Server allows stored procedures to return multiple recordsets (with differing fields), allowing you to populate a complex DataSet in one call to the db; all other dbs that I use require one procedure per recordset, making it necessary to build the DataSet within the DAL. To the application there is no difference, as the interface expects a DataSet to be returned.
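A hedged sketch of that arrangement (interface and class names hypothetical):

// The application codes against this contract only.
public interface IStudentDal
{
    void InsertStudents(IEnumerable<Student> students);
}

// One class per provider, each using its native .NET library.
public class SqlServerStudentDal : IStudentDal
{
    public void InsertStudents(IEnumerable<Student> students)
    {
        // SqlConnection/SqlCommand with an XML parameter, as in the question.
    }
}

public class MySqlStudentDal : IStudentDal
{
    public void InsertStudents(IEnumerable<Student> students)
    {
        // MySqlConnection/MySqlCommand with a TEXT parameter, as in this answer.
    }
}

A factory (or DI container) picks the implementation based on the end user's DB selection, so the rest of the application never sees a provider-specific type.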

Related

Invalid column name 'password01' [duplicate]

I am very new to working with databases. Now I can write SELECT, UPDATE, DELETE, and INSERT commands. But I have seen many forums where we prefer to write:
SELECT empSalary from employee where salary = @salary
...instead of:
SELECT empSalary from employee where salary = txtSalary.Text
Why do we always prefer to use parameters and how would I use them?
I wanted to know the use and benefits of the first method. I have even heard of SQL injection but I don't fully understand it. I don't even know if SQL injection is related to my question.
Using parameters helps prevent SQL Injection attacks when the database is used in conjunction with a program interface such as a desktop program or web site.
In your example, a user can directly run SQL code on your database by crafting statements in txtSalary.
For example, if they were to write 0 OR 1=1, the executed SQL would be
SELECT empSalary from employee where salary = 0 or 1=1
whereby all empSalaries would be returned.
Further, a user could perform far worse commands against your database, including deleting it. If they wrote 0; Drop Table employee:
SELECT empSalary from employee where salary = 0; Drop Table employee
The table employee would then be deleted.
In your case, it looks like you're using .NET. Using parameters is as easy as:
string sql = "SELECT empSalary from employee where salary = @salary";

using (SqlConnection connection = new SqlConnection(/* connection info */))
using (SqlCommand command = new SqlCommand(sql, connection))
{
    connection.Open();

    var salaryParam = new SqlParameter("@salary", SqlDbType.Money);
    salaryParam.Value = txtMoney.Text;
    command.Parameters.Add(salaryParam);

    var results = command.ExecuteReader();
}

Dim sql As String = "SELECT empSalary from employee where salary = @salary"

Using connection As New SqlConnection("connectionString")
    Using command As New SqlCommand(sql, connection)
        connection.Open()

        Dim salaryParam = New SqlParameter("@salary", SqlDbType.Money)
        salaryParam.Value = txtMoney.Text
        command.Parameters.Add(salaryParam)

        Dim results = command.ExecuteReader()
    End Using
End Using
Edit 2016-4-25:
As per George Stocker's comment, I changed the sample code to not use AddWithValue. Also, it is generally recommended that you wrap IDisposables in using statements.
You are right, this is related to SQL injection, which is a vulnerability that allows a malicious user to execute arbitrary statements against your database. This old-time favorite XKCD comic illustrates the concept:
In your example, if you just use:
var query = "SELECT empSalary from employee where salary = " + txtSalary.Text;
// and proceed to execute this query
You are open to SQL injection. For example, say someone enters the following into txtSalary:
1; UPDATE employee SET salary = 9999999 WHERE empID = 10; --
1; DROP TABLE employee; --
// etc.
When you execute this query, it will perform a SELECT and an UPDATE or DROP, or whatever they wanted. The -- at the end simply comments out the rest of your query, which would be useful in the attack if you were concatenating anything after txtSalary.Text.
The correct way is to use parameterized queries, eg (C#):
SqlCommand query = new SqlCommand(
    "SELECT empSalary FROM employee WHERE salary = @sal;");
query.Parameters.AddWithValue("@sal", txtSalary.Text);
With that, you can safely execute the query.
For reference on how to avoid SQL injection in several other languages, check bobby-tables.com, a website maintained by a SO user.
In addition to the other answers, it's worth adding that parameters not only help prevent SQL injection but can also improve query performance. SQL Server caches parameterized query plans and reuses them on repeated executions. If you do not parameterize your query, SQL Server compiles a new plan on each execution (with some exceptions) whenever the query text differs.
More information about query plan caching
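As a hedged illustration, you can watch that reuse happen in SQL Server's plan cache; each repeated execution of the parameterized text bumps usecounts instead of creating a new plan:

-- Inspect cached plans touching the example query (SQL Server DMVs).
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%empSalary%';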
Two years after my first go, I'm recidivating...
Why do we prefer parameters? SQL injection is obviously a big reason, but could it be that we're secretly longing to get back to SQL as a language? SQL in string literals is already a weird cultural practice, but at least you can copy and paste your request into Management Studio. SQL dynamically constructed with host-language conditionals and control structures, when SQL has conditionals and control structures of its own, is just level 0 barbarism. You have to run your app in debug, or with a trace, to see what SQL it generates.
Don't stop with just parameters. Go all the way and use QueryFirst (disclaimer: which I wrote). Your SQL lives in a .sql file. You edit it in the fabulous TSQL editor window, with syntax validation and Intellisense for your tables and columns. You can assign test data in the special comments section and click "play" to run your query right there in the window. Creating a parameter is as easy as putting "@myParam" in your SQL. Then, each time you save, QueryFirst generates the C# wrapper for your query. Your parameters pop up, strongly typed, as arguments to the Execute() methods. Your results are returned in an IEnumerable or List of strongly typed POCOs, the types generated from the actual schema returned by your query. If your query doesn't run, your app won't compile. If your db schema changes and your query runs but some columns disappear, the compile error points to the line in your code that tries to access the missing data. And there are numerous other advantages. Why would you want to access data any other way?
In SQL, when a word is prefixed with the @ sign, it is a variable. We set a value in a variable and then use it anywhere in the same script; it is restricted to that single script, so you can declare variables of the same type and name in many different scripts. We use these variables a lot in stored procedures, because stored procedures are pre-compiled queries and we can pass values into these variables from scripts, desktop applications and websites. For further information, read Declare Local Variable, SQL Stored Procedure and SQL injection.
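A hedged T-SQL illustration, reusing the employee example from above:

DECLARE @salary money;       -- declare the local variable
SET @salary = 50000;         -- set a value in it
SELECT empSalary FROM employee WHERE salary = @salary;  -- use it within the same script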
Also read Protect from SQL injection; it will guide you in how to protect your database.
Hope this helps you understand; if you have any questions, comment below.
Old post, but I wanted to ensure newcomers are aware of stored procedures.
My 10¢ worth here is that if you are able to write your SQL statement as a stored procedure, that in my view is the optimum approach. I ALWAYS use stored procs and never loop through records in my main code. For example: SQL Table > SQL Stored Procedures > IIS/Dot.NET > Class.
When you use stored procedures, you can restrict the user to EXECUTE permission only, thus reducing security risks.
Your stored procedure is inherently parameterised, and you can specify input and output parameters.
The stored procedure (if it returns data via SELECT statement) can be accessed and read in the exact same way as you would a regular SELECT statement in your code.
It also runs faster as it is compiled on the SQL Server.
Did I also mention you can do multiple steps, e.g. update a table, check values on another DB server, and then, once finally finished, return data to the client, all on the same server with no interaction with the client? So this is MUCH faster than coding this logic in your code.
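A hedged sketch of the permissions point (role and object names hypothetical):

-- The app login may run the proc but cannot touch the table directly.
GRANT EXECUTE ON OBJECT::dbo.spUpdateEmployeeSalary TO AppRole;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.employee TO AppRole;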
Other answers cover why parameters are important, but there is a downside! In .NET, there are several methods for creating parameters (Add, AddWithValue), but they all require you to worry, needlessly, about the parameter name, and they all reduce the readability of the SQL in the code. Right when you're trying to meditate on the SQL, you need to hunt around above or below to see what value has been used in the parameter.
I humbly claim my little SqlBuilder class is the most elegant way to write parameterized queries. Your code will look like this...
C#
var bldr = new SqlBuilder( myCommand );
bldr.Append("SELECT * FROM CUSTOMERS WHERE ID = ").Value(myId);
//or
bldr.Append("SELECT * FROM CUSTOMERS WHERE NAME LIKE ").FuzzyValue(myName);
myCommand.CommandText = bldr.ToString();
Your code will be shorter and much more readable. You don't even need extra lines, and, when you're reading back, you don't need to hunt around for the value of parameters. The class you need is here...
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.SqlClient;

public class SqlBuilder
{
    private StringBuilder _rq;
    private SqlCommand _cmd;
    private int _seq;

    public SqlBuilder(SqlCommand cmd)
    {
        _rq = new StringBuilder();
        _cmd = cmd;
        _seq = 0;
    }

    // Appends literal SQL text.
    public SqlBuilder Append(String str)
    {
        _rq.Append(str);
        return this;
    }

    // Appends an auto-named parameter and registers its value on the command.
    public SqlBuilder Value(Object value)
    {
        string paramName = "@SqlBuilderParam" + _seq++;
        _rq.Append(paramName);
        _cmd.Parameters.AddWithValue(paramName, value);
        return this;
    }

    // As Value(), but wraps the parameter in wildcards for LIKE searches.
    public SqlBuilder FuzzyValue(Object value)
    {
        string paramName = "@SqlBuilderParam" + _seq++;
        _rq.Append("'%' + " + paramName + " + '%'");
        _cmd.Parameters.AddWithValue(paramName, value);
        return this;
    }

    public override string ToString()
    {
        return _rq.ToString();
    }
}

Oracle Custom Web Interface (ASP.NET, C#) - Getting DBMS_OUTPUT contents

I'm writing a customized, simple Web interface for Oracle DB, using ASP.NET, a Web API project in C#, and Oracle.DataAccess (ODP.NET). This is an educational project which I am designing as an extra project for a college course. There are several reasons for me designing this project, but the upshot is that the Oracle-provided tools (SQL Developer, Enterprise Manager Express, etc.) are not suitable for the task at hand.
I have an API call that can accept a query string, execute it against the DBMS and return the DBMS's output as JSON data, along with some additional return data. This has been sufficient for simple SELECT queries and other basic DDL/DML queries. However, now we're branching into PL/SQL.
For example, the most basic PL/SQL HELLO WORLD program that we'd execute looks like:
BEGIN
    DBMS_OUTPUT.PUT_LINE('Hello World');
END;
When I feed this query into my C# API, it does execute successfully. However, I want to be able to retrieve the output of the DBMS_OUTPUT.PUT_LINE call(s).
This question has been addressed before and I have looked into a few of the solutions, and came down on one involving a piece of code which calls the following PL/SQL on the database:
BEGIN
    DBMS_OUTPUT.GET_LINE(:line, :status);
END;
The C# code obviously creates and adds the correct parameter objects to the request before sending it. I plan to call this function repeatedly until a NULL value comes back, indicating the end of output. This data would then be added to the JSON object returned by the API so that the Web interface can display the output. However, this function never returns any lines of output.
My hunch (I'm still learning Oracle myself, so I'm not sure) is that either the server isn't actually outputting the data, or that the buffer is flushed after the PL/SQL anonymous block (the Hello World program) finishes.
It was also suggested to add set serveroutput on; to the PL/SQL query but this did not work: it produced the error ORA-00922: missing or invalid option.
Here is the actual C# code being used to retrieve a line of output from the DBMS_OUTPUT buffer:
private string GetDbmsOutputLine(OracleConnection conn)
{
    OracleCommand command = new OracleCommand
    {
        CommandText = "begin dbms_output.get_line(:line, :status); end;",
        CommandType = CommandType.Text,
        Connection = conn,
    };

    OracleParameter lineParameter = new OracleParameter("line",
        OracleDbType.Varchar2);
    lineParameter.Size = 32000;
    lineParameter.Direction = ParameterDirection.Output;
    command.Parameters.Add(lineParameter);

    OracleParameter statusParameter = new OracleParameter("status",
        OracleDbType.Int32);
    statusParameter.Direction = ParameterDirection.Output;
    command.Parameters.Add(statusParameter);

    command.ExecuteNonQuery();

    if (command.Parameters["line"].Value is DBNull)
        return null;

    string line = command.Parameters["line"].Value as string;
    return line;
}
Edit: I tried manually calling the following procedure prior to executing the user's code: BEGIN DBMS_OUTPUT.ENABLE(32768); END;. This executes without error but after doing so the later calls to DBMS_OUTPUT.GET_LINE still return null.
It looks like what may be happening is that each time I execute a new query to the database, even though it's on the same connection, that the DBMS_OUTPUT buffer is being cleared. I am not sure if this is the case, but it seems to be - nothing else would readily explain the lack of data in the buffer.
Still searching for a way to handle this...
Points to keep in mind:
This is an academic project for student training and development; hence, it is not expected that this mini-application be "production-ready" in any way. Allowing users to execute raw queries posted via the Web obviously leads to all sorts of security risks - which is why this would never be put into an actual production scenario.
I currently open a connection and maintain it throughout a single API call by passing it into each OracleCommand object I create. This, in theory, should mean that the buffer is maintained, but it doesn't appear to be the case. Either the data I write is not making it to the buffer in the first place, or the buffer is flushed each time an OracleCommand object is actually executed against the database connection.
With the caveat that in reality you'd never write code that expects that anyone will ever see data that you attempt to write to the dbms_output...
Within a session, you'd need to call dbms_output.enable that allocates the buffer that is written to by dbms_output. Depending on the Oracle version, you may be able to pass in a null to indicate that you want an unlimited buffer size. In older versions, you'd need to allocate a fixed buffer size (and you'd get an error if you try to write too much data to the buffer). Then you'd call the procedure that calls dbms_output.put[_line]. Finally, you'd be able to call dbms_output.get[_line]. Note that all three things have to happen in the context of a single session. Each session has a separate dbms_output buffer (or no dbms_output buffer).
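Putting those three steps together on one connection, a hedged sketch (reusing the question's GetDbmsOutputLine helper; passing NULL to ENABLE assumes a version that supports an unlimited buffer):

private void RunWithDbmsOutput(OracleConnection conn, string userPlsql)
{
    // 1. Allocate this session's dbms_output buffer.
    using (var enable = new OracleCommand("begin dbms_output.enable(null); end;", conn))
    {
        enable.ExecuteNonQuery();
    }

    // 2. Run the user's block; its put_line calls land in the buffer.
    using (var run = new OracleCommand(userPlsql, conn))
    {
        run.ExecuteNonQuery();
    }

    // 3. Drain the buffer, still in the same session.
    string line;
    while ((line = GetDbmsOutputLine(conn)) != null)
    {
        Console.WriteLine(line); // or append to the JSON payload
    }
}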

Reduce number of database calls

I have a stored procedure which accepts five parameters and performs an update on a table:
UPDATE Table
SET field = @Field
WHERE col1 = @Para1 AND Col2 = @Para2 AND Col3 = @Para3 AND col4 = @Para4
From the user's perspective, you can select multiple values for each of the condition parameters.
For example, you can select 2 options which need to match Col1 in the database table (and which need to be passed as @Para1).
So I am storing all the selected values in separate lists.
At the moment I am using foreach loops to do the update:
foreach (var g in _list1)
{
    foreach (var o in _list2)
    {
        foreach (var l in _list3)
        {
            foreach (var a in _list4)
            {
                UpdateData(g, o, l, a);
            }
        }
    }
}
I am sure this is not a good way of doing it, since it makes a large number of database calls. Is there any way I can avoid the loops and do a minimum number of db calls to achieve the same result?
Update
I am looking for some other approach than Table-Valued Parameters
You can bring the query into this form:
Update Table Set field = @Field Where col1 IN {} and Col2 IN {} and Col3 IN {} and col4 IN {}
and pass parameters this way: https://stackoverflow.com/a/337792/580053
One possible way would be to use Table-Valued Parameters to pass the multiple values per condition to the stored procedure. This would reduce the loops in your code and should still provide the functionality that you are looking for.
If I am not mistaken they were introduced in SQL Server 2008, so as long as you don't have to support 2005 or earlier they should be fine to use.
Consider using the MS Data Access Application Block from the Enterprise Library for the UpdateDataSet command.
Essentially, you would build a datatable where each row is a parameter set, then you execute the "batch" of parameter sets against the open connection.
You can do the same without that of course, by building a string that has several update commands in it and executing it against the DB.
Since table-valued parameters are off limits to you, you may consider an XML-based approach:
Build an XML document containing the four columns that you would like to pass.
Change the signature of your stored procedure to accept a single XML-valued parameter instead of four scalar parameters
Change the code of your stored procedure to perform the updates based on the XML that you get
Call your new stored procedure once with the XML that you constructed in memory using the four nested loops.
This should reduce the number of round-trips, and speed up the overall execution time. Here is a link to an article explaining how inserting many rows can be done at once using XML; your situation is somewhat similar, so you should be able to use the approach outlined in that article.
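A hedged sketch of what the reworked procedure might look like (the XML shape, names and data types are hypothetical):

CREATE PROCEDURE usp_UpdateFieldFromXml (@Field int, @criteria XML)
AS
BEGIN
    SET NOCOUNT ON;
    -- One <Row> element per combination produced by the four nested loops.
    UPDATE t
    SET t.field = @Field
    FROM [Table] t
    JOIN @criteria.nodes('/Rows/Row') AS x(c)
        ON  t.col1 = x.c.value('(Para1)[1]', 'int')
        AND t.Col2 = x.c.value('(Para2)[1]', 'int')
        AND t.Col3 = x.c.value('(Para3)[1]', 'int')
        AND t.col4 = x.c.value('(Para4)[1]', 'int')
END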
So long as you have the freedom to update the structure of the stored procedure, the method I would suggest for this would be to use a table-valued parameter instead of the multiple parameters.
A good example which goes into both server and database code for this can be found at: http://www.codeproject.com/Articles/39161/C-and-Table-Value-Parameters
Why are you using a stored procedure for this? In my opinion you shouldn't use SP to do simple CRUD operations. The real power of stored procedures is for heavy calculations and things like that.
Table-valued parameters would be my choice, but since you are looking for another approach, why don't you go the simpler way and just dynamically construct a bulk/mass update query in your server-side code and run it against the DB?

Prepared statement vs. Stored Procedure with RefCursor

I've been calling Oracle stored procedures that return RefCursors in my C# application. A sample stored procedure is given below.
CREATE OR REPLACE PROCEDURE "DOSOMETHING" (
    P_RECORDS OUT SYS_REFCURSOR)
AS
BEGIN
    OPEN P_RECORDS FOR
        SELECT SOMETHING FROM SOMETABLE;
END;
When using the OracleDataReader to read the results of this, every time the procedure is called the database parses the procedure. After quite a bit of searching I found out that eliminating this parse call is impossible with .NET when using a RefCursor.
However if I just call the procedure using a prepared statement as below, this parse call can be avoided.
public void DoSomething()
{
    var command = ServerDataConnection.CreateCommand();
    command.CommandType = CommandType.Text;
    command.CommandText = "SELECT SOMETHING FROM SOMETABLE";
    command.Prepare();

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            DoSomethingToResult();
        }
    }
}
My question is: which of these methods will have the minimum performance impact? Will changing the procedures to prepared statements to avoid parse calls have an even more negative impact on application performance?
Please note that these select statements can return a large set of results. Possibly thousands of rows.
Using a ref cursor in PL/SQL will provoke a parse call each time the cursor is opened. The same parse call will be issued each time you call command.Prepare(). As it is now, your .NET code would parse the query as much as the PL/SQL code.
You could reuse your command object without additional parse calls if you need to issue the exact same query (with just a change in parameters). However, those parses would be soft parses so the performance may not be noticeable (most of the work is done in the hard parse, the first time the database encounters a query). Since your query returns lots of rows, the amount of work involved in a soft parse is certainly negligible compared to the amount of work to actually fetch those rows.
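A hedged sketch of that reuse (query and names hypothetical): prepare once, then only swap parameter values between executions, so repeated runs incur at most soft parses:

var command = ServerDataConnection.CreateCommand();
command.CommandType = CommandType.Text;
command.CommandText = "SELECT SOMETHING FROM SOMETABLE WHERE ID = :id";
command.Parameters.Add(new OracleParameter("id", OracleDbType.Int32));
command.Prepare(); // parsed once

foreach (var id in idsToFetch)
{
    command.Parameters[0].Value = id; // same statement, new value
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            DoSomethingToResult();
        }
    }
}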

What's a good alternative to firing a stored procedure 368 times to update the database?

I'm working on a .NET component that gets a set of data from the database, performs some business logic on that set of data, and then updates single records in the database via a stored procedure that looks something like spUpdateOrderDetailDiscountedItem.
For small sets of data, this isn't a problem, but when I had a very large set of data that required an iteration of 368 stored proc calls to update the records in the database, I realized I had a problem. A senior dev looked at my stored proc code and said it looked fine, but now I'd like to explore a better method for sending "batch" data to the database.
What options do I have for updating the database in batch? Is this possible with stored procs? What other options do I have?
I won't have the option of installing a full-fledged ORM, but any advice is appreciated.
Additional Background Info:
Our current data access model was built 5 years ago and all calls to the db currently get executed via modular/static functions with names like ExecQuery and GetDataTable. I'm not certain that I'm required to stay within that model, but I'd have to provide a very good justification for going outside of our current DAL to get to the DB.
Also worth noting, I'm fairly new when it comes to CRUD operations and the database. I much prefer to play/work in the .NET side of code, but the data has to be stored somewhere, right?
Stored Proc contents:
ALTER PROCEDURE [dbo].[spUpdateOrderDetailDiscountedItem]
    -- Add the parameters for the stored procedure here
    @OrderDetailID decimal = 0,
    @Discount money = 0,
    @ExtPrice money = 0,
    @LineDiscountTypeID int = 0,
    @OrdersID decimal = 0,
    @QuantityDiscounted money = 0,
    @UpdateOrderHeader int = 0,
    @PromoCode varchar(6) = '',
    @TotalDiscount money = 0
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    UPDATE OrderDetail
    SET Discount = @Discount, ExtPrice = @ExtPrice,
        LineDiscountTypeID = @LineDiscountTypeID, LineDiscountPercent = @QuantityDiscounted
    FROM OrderDetail WITH (NOLOCK)
    WHERE OrderDetailID = @OrderDetailID

    IF @UpdateOrderHeader = -1
    BEGIN
        -- This code should get called the last time this query is executed, but only then.
        EXEC spUpdateOrdersHeaderForSkuGroupSourceCode @OrdersID, 7, 0, @PromoCode, @TotalDiscount
    END
END
If you are using SQL 2008, then you can use a table-valued parameter to push all of the updates in one s'proc call.
Update:
Incidentally, we are using this in combination with the merge statement. That way sql server takes care of figuring out if we are inserting new records or updating existing ones. This mechanism is used at several major locations in our web app and handles hundreds of changes at a time. During regular load we will see this proc get called around 50 times a second and it is MUCH faster than any other way we've found... and certainly a LOT cheaper than buying bigger DB servers.
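A hedged T-SQL sketch of that combination (type and procedure names hypothetical, trimmed to a few columns):

-- A table type whose rows mirror one parameter set per update.
CREATE TYPE dbo.OrderDetailDiscountList AS TABLE
(
    OrderDetailID decimal PRIMARY KEY,
    Discount money,
    ExtPrice money,
    LineDiscountTypeID int
);
GO
CREATE PROCEDURE dbo.spUpdateOrderDetailDiscountedItems
    @Items dbo.OrderDetailDiscountList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- MERGE lets SQL Server decide between insert and update per row.
    MERGE dbo.OrderDetail AS target
    USING @Items AS source
        ON target.OrderDetailID = source.OrderDetailID
    WHEN MATCHED THEN
        UPDATE SET Discount = source.Discount,
                   ExtPrice = source.ExtPrice,
                   LineDiscountTypeID = source.LineDiscountTypeID
    WHEN NOT MATCHED THEN
        INSERT (OrderDetailID, Discount, ExtPrice, LineDiscountTypeID)
        VALUES (source.OrderDetailID, source.Discount, source.ExtPrice, source.LineDiscountTypeID);
END

On the C# side, the rows travel as a DataTable parameter with SqlDbType.Structured.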
An easy alternative way I've seen in use is to build a SQL statement consisting of EXEC calls invoking the sproc with the parameters embedded in the string. Not sure if this is advised or not, but from the .NET perspective you are only populating one SqlCommand and calling ExecuteNonQuery once...
Note if you choose this then please, please use the StringBuilder! :-)
Update: I much prefer Chris Lively's answer, didn't know about table-valued parameters until now... unfortunately the OP is using 2005.
You can send the full set of data as XML input to the stored procedure. Then you can perform set operations to modify the database. Set-based will beat RBAR ("row by agonizing row") on performance almost every single time.
If you are using a version of SQL Server prior to 2008, you can move your code entirely into the stored procedure itself.
There are good and "bad" things about this.
Good
No need to pull the data across a network wire.
Faster if your logic is set based
Scales up
Bad
If you have rules against any logic in the database, this would break your design.
If the logic cannot be set based then you might end up with a different set of performance problems
If you have outside dependencies, this might increase difficulty.
Without details on exactly what operations you are performing on the data it's hard to give a solid recommendation.
UPDATE
Ben asked what I meant in one of my comments about the CLR and SQL Server. Read Using CLR Integration in SQL Server 2005. The basic idea is that you can write .Net code to do your data manipulation and have that code live inside the SQL server itself. This saves you from having to read all of the data across the network and send updates back that way.
The code is callable by your existing proc's and gives you the entire power of .net so that you don't have to do things like cursors. The sql will stay set based while the .net code can perform operations on individual records.
Incidentally, this is how things like hierarchyid were implemented in SQL 2008.
The only real downside is that some DBA's don't like to introduce developer code like this into the database server. So depending on your environment, this may not be an option. However, if it is, then it is a very powerful way to take care of your problem while leaving the data and processing within your database server.
Can you create a batched statement with 368 calls to your proc? Then at least you will not have 368 round trips. I.e., pseudo-code:
var lotsOfCommands = "spUpdateOrderDetailDiscountedItem 1; spUpdateOrderDetailDiscountedItem 2; spUpdateOrderDetailDiscountedItem ... 368";
var command = new SqlCommand(lotsOfCommands);
command.CommandType = CommandType.Text;
// execute command
I had issues when trying to do the same thing (via inserts, updates, whatever). While using an OleDbCommand with parameters, it took a bunch of time to constantly re-create the object and parameters each time I called it. So, I made a property on my object to hold such a call, and also added the appropriate "parameters" to the function. Then, when I needed to actually call/execute it, I would loop through each parameter in the object, set it to whatever I needed it to be, then execute it. This created a SIGNIFICANT performance improvement. Pseudo-code of my operation:
protected OleDbCommand oSQLInsert = new OleDbCommand();

// The "?" are place-holders for parameters... can be named parameters,
// just for visual purposes
oSQLInsert.CommandText = "insert into MyTable ( fld1, fld2, fld3 ) values ( ?, ?, ? )";

// Now, add the parameters
OleDbParameter NewParm = new OleDbParameter("parmFld1", 0);
oSQLInsert.Parameters.Add(NewParm);

NewParm = new OleDbParameter("parmFld2", "something");
oSQLInsert.Parameters.Add(NewParm);

NewParm = new OleDbParameter("parmFld3", 0);
oSQLInsert.Parameters.Add(NewParm);
Now, the SQL command and place-holders for the call are all ready to go. Then, when I'm ready to actually call it, I would do something like:
oSQLInsert.Parameters[0].Value = 123;
oSQLInsert.Parameters[1].Value = "New Value";
oSQLInsert.Parameters[2].Value = 3;
Then just execute it. Across hundreds of repeated calls, creating your commands over and over would kill your time...
Good luck.
Is this a one-time action (like "just import those 368 new customers once") or do you regularly have to do 368 sproc calls?
If it's a one-time action, just go with the 368 calls.
(if the sproc does much more than just updates and is likely to drag down the performance, run it in the evening or at night or whenever no one's working).
IMO, premature optimization of database calls for one-time actions is not worth the time you spend with it.
Bulk CSV Import
(1) Build the data output as CSV via a StringBuilder, then do a bulk CSV import:
http://msdn.microsoft.com/en-us/library/ms188365.aspx
Table-valued parameters would be best, but since you're on SQL 05, you can use the SqlBulkCopy class to insert batches of records. In my experience, this is very fast.
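A hedged sketch of that approach (staging table and DataTable names hypothetical): bulk-copy the changed rows into a staging table, then apply them with one set-based statement:

// "updates" is a DataTable whose columns match dbo.OrderDetailStaging.
using (var bulk = new SqlBulkCopy(connection))
{
    bulk.DestinationTableName = "dbo.OrderDetailStaging";
    bulk.WriteToServer(updates);
}

// One set-based command then applies all 368 changes at once.
using (var apply = new SqlCommand(
    "UPDATE od SET od.Discount = s.Discount " +
    "FROM OrderDetail od JOIN dbo.OrderDetailStaging s " +
    "ON od.OrderDetailID = s.OrderDetailID", connection))
{
    apply.ExecuteNonQuery();
}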
