I have a program that gets data from a SQL Server table. The code is the following:
SqlConnection conn = new SqlConnection(...); // connection string is correct
conn.Open();
DataTable dt = new DataTable();
SqlCommand selectCMD = new SqlCommand("SELECT * FROM TABLE WHERE Condition", conn);
SqlDataAdapter custDA = new SqlDataAdapter();
custDA.SelectCommand = selectCMD;
custDA.Fill(dt);
Datagridview1.DataSource = dt;
Datagridview1.DataBind(); // DataBind() applies to the ASP.NET GridView; a WinForms DataGridView binds as soon as DataSource is set
But the problem is that when I execute the same query in SQL Server Management Studio it takes less than a second, while the program takes half a minute to get the result. Using the debugger I can see that the line where the program "thinks" the longest is the one where the data adapter fills the DataTable. Any suggestions on how to reduce the time? What's wrong in my code?
Management Studio just displays text results, while the SqlDataAdapter must map each result column value to a DataGridView column value; one takes much more time than the other. Management Studio also virtualizes the results: it doesn't show everything at once, and for large result sets more data is retrieved as you scroll down.
Check whether you need proper indexing on the relevant columns. When you run the query from SSMS it may be using an execution plan that is more optimized than the one used when .Fill executes.
Can you try cleaning out the procedure cache and memory buffers using SSMS:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Doing so before you test your query prevents the use of cached execution plans and previously cached results.
You can create an index and set options to accelerate query execution, and you can load the data asynchronously: SqlDataAdapter.Fill - Asynchronous approach
create index condition_idx on table (condition)
new SqlCommand("set nocount on;SELECT * FROM TABLE WHERE Condition", conn);
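As a rough illustration of loading off the UI thread: this is a Task.Run-based sketch, not necessarily the exact pattern from the linked article; connectionString is a placeholder, and the calling event handler is assumed to be marked async.

DataTable dt = new DataTable();
await Task.Run(() =>
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlDataAdapter da = new SqlDataAdapter("set nocount on;SELECT * FROM TABLE WHERE Condition", conn))
    {
        da.Fill(dt); // Fill opens and closes the connection itself if it is closed
    }
});
Datagridview1.DataSource = dt;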
Thanks everyone for the help. I used parameters in the SqlCommand object; unfortunately I hadn't mentioned that, so you couldn't spot it. But through the link James posted I found that when a SqlCommand is used with parameters, the execution goes through sp_executesql, and because the server has to compile it, it takes that long. After the parameters were removed, everything works well. Alternatively, you can turn off the automatic recompilation that happens each time the stored procedure executes. Thanks everyone again. +1 for everyone.
Is there any way to intercept the SQL that's generated by SqlCommand?
I currently have a method that will execute a stored procedure:
public int ExecSP(string spName, params object[] args) // "params" is a reserved word, so the array needs another name
{
    using (SqlConnection con = new SqlConnection("AdventureWorks"))
    using (SqlCommand cmd = new SqlCommand(spName, con))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        // ..calls a method to add the args and their values to the cmd object
        con.Open();
        return cmd.ExecuteNonQuery();
    }
}
When I use this to call the following:
ExecSP("HumanResources.uspUpdateEmployeePersonalInfo", 1, "295847284", new DateTime(1963, 3, 2), "S", "M");
I get the following in SQLProfiler:
exec HumanResources.uspUpdateEmployeePersonalInfo @BusinessEntityID=1,@NationalIDNumber=N'295847284',@BirthDate='1963-03-02 00:00:00',@MaritalStatus=N'S',@Gender=N'M'
What I would like to do is intercept that SQL command so that I may add a comment to the end of it that will contain some pertinent information so that it looks like:
exec HumanResources.uspUpdateEmployeePersonalInfo @BusinessEntityID=1,@NationalIDNumber=N'295847284',@BirthDate='1963-03-02 00:00:00',@MaritalStatus=N'S',@Gender=N'M' -- IMPORTANT INFORMATION HERE
I can't change the CommandType to Text and I can't add an extra parameter to the stored procedure. I tried looking at these other questions but had no luck:
Can I override SqlCommand functions?
SqlCommand to T-SQL
In the case of CommandType = CommandType.StoredProcedure, this isn't possible.
Why? The SQL you see in SQL Profiler isn't generated by the client. :-/
When SqlCommand executes a CommandType.StoredProcedure command, it sends SQL Server an execute remote procedure call message with the name of the stored procedure and a data structure containing the parameters.
The TextData that SQL Profiler displays for the RPC:Starting/RPC:Completed events is generated server-side. Since this text isn't generated client-side, it can't be modified client-side.
What you're seeing is a debug/profile message, but that doesn't perfectly represent what was actually executed (the database will use sp_executesql() behind the scenes).
What you are looking for doesn't exist. The whole point of using parameter objects is that parameter values are NEVER, at any time, included as part of the SQL command string, even on the server.
This accomplishes three main things:
It prevents any possibility of SQL injection attacks via that query. You'll never have a problem with some weird Unicode character-set issue allowing an attacker to insert a command into the data for your query, as happened a couple of years back with PHP/MySQL, or with some new language feature creating a situation where your old escape code is no longer good enough. The user-data portion of a query is separated from the command-logic portion at every stage.
It improves performance, by allowing the server to cache the execution plan. This saves compile steps as your application executes what may be the same query over and over, just with different parameter data.
It avoids problems with formatting for things like dates, text data with apostrophes, and the like.
If you want debugging data, write a method that outputs the query and the parameter data that goes with it. If you want to add info to your profile trace, you can switch to CommandType.Text and call the stored procedure via your own EXEC procedurename @param1, @param2, ... @paramN -- INFO HERE string.
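For the first route, a minimal sketch of such a debug helper (the method name and output format are illustrative; the string it builds is for logging only and is not what the server executes):

static string DescribeCommand(SqlCommand cmd)
{
    // Render each parameter as "@name=value" and append the extra info as a comment.
    var args = cmd.Parameters
        .Cast<SqlParameter>()
        .Select(p => $"{p.ParameterName}={p.Value}");
    return $"exec {cmd.CommandText} {string.Join(", ", args)} -- IMPORTANT INFORMATION HERE";
}

Cast<SqlParameter>() requires using System.Linq, since SqlParameterCollection is a non-generic collection.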
I need to update over 1 million records from an ASP.NET Web Forms project, in the code-behind, with data given in a TextBox. I tried to do that with LINQ but it takes too long...
1st Question: What is the best solution?
I noticed that if I run the update in SQL (MSSQL) it only takes 20-30 seconds and that is an acceptable time.
2nd Question: Should I create procedures in SQL, import them into my project, and call them? Will that give me a much better time? Basically, will using imported procedures bring the time down close to the time the query needs when run directly in SQL?
If running it via a normal query is faster, create a stored procedure that accepts a parameter:
using (SqlConnection con = new SqlConnection("connection string here"))
using (SqlCommand cmd = new SqlCommand("sp_Stored_Proc_Name_Here", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@VariableNameHere", SqlDbType.VarChar).Value = textBoxNameHere.Text;
    con.Open();
    cmd.ExecuteNonQuery();
}
Stored procedures should be the best solution.
However, you can call these stored procedures through ADO.NET without needing to import them into your project.
Parameterized stored procedure calls are also safe from SQL injection.
Use this to get high performance in your app:
1) Create a stored proc that accepts a TVP (Table-Valued Parameter).
That way you call the proc only ONE TIME and flush ALL the data at once.
You MUST use ADO.NET to call it (LINQ does not support TVPs); see the sketch after this list.
Here are the resources:
Define the proc http://technet.microsoft.com/en-us/library/bb510489.aspx
Call from C# http://msdn.microsoft.com/en-us/library/bb675163(v=vs.110).aspx
2) Make sure your TVP's table type uses column data types that match those of the target table, so SQL Server can make effective use of indexes where required.
3) Use a correct transaction isolation level inside your proc. For example, READ COMMITTED gives a good response time.
4) Add SET NOCOUNT ON to your stored proc, so the proc won't signal ADO.NET each time a T-SQL statement completes.
5) On your SqlConnection (in C#), set the packet size to something between 3000 and 8000 via the Packet Size connection-string keyword (the SqlConnection.PacketSize property itself is read-only). Experiment with the values until you get the best response time. http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.packetsize.aspx
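A minimal sketch of the TVP call from point 1, assuming a user-defined table type dbo.IdList and a proc dbo.usp_BulkUpdate with an @Ids parameter (all three names, plus idsToUpdate and connectionString, are hypothetical):

// Shape the in-memory table to match the server-side table type.
DataTable table = new DataTable();
table.Columns.Add("Id", typeof(int));
foreach (int id in idsToUpdate)
    table.Rows.Add(id);

using (SqlConnection con = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.usp_BulkUpdate", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@Ids", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.IdList"; // must match the table type declared on the server
    con.Open();
    cmd.ExecuteNonQuery();     // one round trip, all rows at once
}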
I am trying to copy a large DataTable (many columns, more than 1000 rows) created dynamically in the application to a MySQL table using C# and WPF.
I have searched for various ways to do this but was unable to get any of them working. I think MySqlDataAdapter is the class I should use, but I can't make it work. This is what I tried to do:
MySqlConnection con = new MySqlConnection(MyConString);
MySqlCommand comm = new MySqlCommand("Select * From kinectdata", con);
MySqlDataAdapter test1 = new MySqlDataAdapter(comm);
test1.Update(skelData); // fails: the adapter has no InsertCommand configured, so new rows can't be written
Speed of this transfer is also important, so I'd prefer not to call an INSERT or UPDATE statement 1000 times.
Many thanks for your feedback!
M
You can build a single INSERT statement which inserts all 1000 rows.
INSERT INTO table VALUES (1,2,3), (4,5,6), (7,8,9);
1000 rows is not that much; in database terms it's nothing, and an INSERT should be very fast, taking no more than 2 seconds.
In your example, you do have to declare the command type and set your query as the command text.
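A rough sketch of building that single multi-row INSERT from the DataTable (column names c1, c2, c3 are placeholders for the real kinectdata columns, and three columns per row are assumed):

var values = new List<string>();
using (MySqlConnection con = new MySqlConnection(MyConString))
using (MySqlCommand cmd = new MySqlCommand())
{
    int i = 0;
    foreach (DataRow row in skelData.Rows)
    {
        // One parameterized value tuple per row.
        values.Add($"(@p{i}, @p{i + 1}, @p{i + 2})");
        cmd.Parameters.AddWithValue($"@p{i}", row[0]);
        cmd.Parameters.AddWithValue($"@p{i + 1}", row[1]);
        cmd.Parameters.AddWithValue($"@p{i + 2}", row[2]);
        i += 3;
    }
    cmd.Connection = con;
    cmd.CommandText = "INSERT INTO kinectdata (c1, c2, c3) VALUES " + string.Join(", ", values);
    con.Open();
    cmd.ExecuteNonQuery(); // a single round trip for all rows
}

For very large batches you may need to split the statement into chunks to stay under MySQL's max_allowed_packet limit.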
I have a fairly simple query that keeps timing out when it runs in code (it takes over three minutes to complete; I stopped it early so I could post this question). Yet when I run the same query from the same computer in SQL Server Management Studio, it takes only 2532 ms for the first run, when the data is not cached on the server, and 524 ms for repeated runs.
Here is my C# code:
using (var conn = new SqlConnection("Data Source=backend.example.com;Connect Timeout=5;Initial Catalog=Logs;Persist Security Info=True;User ID=backendAPI;Password=Redacted"))
using (var ada = new SqlDataAdapter(String.Format(@"
    SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
    FROM [ES_HISTORY]
    inner join [es_history_dt] on [PK_JOB] = [es_historyid]
    Where client_id = @clientID and dt > @dt and (job_type > 4 {0}) {1}
    Order by dt desc"
    , where.ToString(), (cbShowOnlyFailed.Checked ? "and Status = 1" : "")), conn))
{
    ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);
    ada.SelectCommand.Parameters.AddWithValue("@dt", dtpFilter.Value);
    //ada.SelectCommand.CommandTimeout = 60;
    conn.Open();
    Logs.Clear();
    ada.Fill(Logs); // timeout exception at the 30-second limit
}
Here is the code I am running in SSMS; I pulled it right from ada.SelectCommand.CommandText:
declare @clientID varchar(200)
set @clientID = '138'
declare @dt datetime
set @dt = '9/19/2011 12:00:00 AM'

SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
FROM [ES_HISTORY]
inner join [es_history_dt] on [PK_JOB] = [es_historyid]
Where client_id = @clientID and dt > @dt and (job_type > 4 or job_type = 0 or job_type = 1 or job_type = 4)
Order by dt desc
What is causing the major discrepancy for the difference in time?
To keep the comment section clean, I will answer some FAQ's here.
The same computer and login are used for both the application and SSMS.
Only 15 rows are returned by my example query. However, ES_HISTORY contains 11,351,699 rows and es_history_dt contains 8,588,493 rows. Both tables are well indexed, and the execution plan in SSMS shows index seeks for the lookups, so they are fast. The program behaves as if it is not using the indexes for the C# version of the query.
Your code in SSMS is not the same code you run in your application. This line in your application adds an NVARCHAR parameter:
ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);
while in the SSMS script you declare it as VARCHAR:
declare #clientID varchar(200)
Due to the rules of Data Type Precedence, the Where client_id = @clientID expression in your query is not SARG-able when @clientID is of type NVARCHAR (I'm making a leap of faith and assuming the client_id column is of type VARCHAR). The application thus forces a table scan where the SSMS query can do a quick key seek. This is a well-known and well-understood issue with using Parameters.AddWithValue and has been discussed in many articles before; e.g., see How Data Access Code Affects Database Performance. Once the problem is understood, the solutions are trivial:
add parameters with the overload that accepts a type: Parameters.Add("@clientID", SqlDbType.VarChar, 200) (and do pass in the explicit length to prevent cache pollution; see Query performance and plan cache issues when parameter length not specified correctly);
or cast the parameter in the SQL text: where client_id = cast(@clientID as varchar(200)).
The first solution is superior because it solves the cache pollution problem in addition to the SARG-ability problem.
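Applied to the code in the question, the first solution would look something like this (a sketch; the lengths assume client_id is varchar(200)):

ada.SelectCommand.Parameters.Add("@clientID", SqlDbType.VarChar, 200).Value = ClientID;
ada.SelectCommand.Parameters.Add("@dt", SqlDbType.DateTime).Value = dtpFilter.Value;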
I would also recommend you read Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
Had the same issue:
Calling the stored procedure from code: 30+ seconds.
Calling the same stored procedure from SSMS: milliseconds.
Running the SQL from inside the stored procedure directly from code: milliseconds.
Solution:
Drop the stored procedure, then recreate exactly the same stored procedure. Both now return in milliseconds. No code change.
Run the profiler on your C# connection - there may be other activity going on that you are not aware of.
Capture the execution plan from both SSMS when you manually run your query and then from Profiler when you are running your application. Compare and contrast.
Run DBCC FREEPROCCACHE, as suggested here, just to make sure the problem isn't due to a stale query execution plan.
Which would be better for executing an INSERT statement against a MS SQL database: a SqlDataAdapter or a SqlCommand object?
Which of them would be better when inserting only one row, and when inserting multiple rows?
A simple example of code usage:
SQL Command
string query = "insert into Table1(col1,col2,col3) values (@value1,@value2,@value3)";
int i;
SqlCommand cmd = new SqlCommand(query, connection);
// add parameters...
cmd.Parameters.Add("@value1", SqlDbType.VarChar).Value = txtBox1.Text;
cmd.Parameters.Add("@value2", SqlDbType.VarChar).Value = txtBox2.Text;
cmd.Parameters.Add("@value3", SqlDbType.VarChar).Value = txtBox3.Text;
connection.Open();  // SqlCommand has no "con" member; open the connection object itself
i = cmd.ExecuteNonQuery();
connection.Close();
SQL Data Adapter
DataSet dsTab = new DataSet("Table1");
SqlDataAdapter adp = new SqlDataAdapter("Select * from Table1", connection);
adp.Fill(dsTab, "Table1");

// NewRow can only be called once the table exists and has been filled.
DataRow dr = dsTab.Tables["Table1"].NewRow();
dr["col1"] = txtBox1.Text;
dr["col2"] = txtBox5.Text;
dr["col3"] = "text";
dsTab.Tables["Table1"].Rows.Add(dr);

SqlCommandBuilder projectBuilder = new SqlCommandBuilder(adp);
DataSet newSet = dsTab.GetChanges(DataRowState.Added);
adp.Update(newSet, "Table1");
Updating a data source is much easier using DataAdapters. It's easier to make changes since you just have to modify the DataSet and call Update.
There is probably no (or very little) difference in the performance between using DataAdapters vs Commands. DataAdapters internally use Connection and Command objects and execute the Commands to perform the actions (such as Fill and Update) that you tell them to do, so it's pretty much the same as using only Command objects.
I would use LINQ to SQL with a DataSet for a single insert and most database CRUD requests. It is type safe and relatively fast for uncomplicated queries such as the one above.
If you have many rows to insert (1000+) and you are using SQL Server 2008, I would use SqlBulkCopy. You can use your DataSet as the input, feed it into a stored procedure, and merge into your destination.
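A minimal sketch of the SqlBulkCopy route (the destination table name, batch size, and connectionString are illustrative):

using (SqlConnection con = new SqlConnection(connectionString))
{
    con.Open();
    using (SqlBulkCopy bulk = new SqlBulkCopy(con))
    {
        bulk.DestinationTableName = "dbo.Table1"; // often a staging table that a proc then merges
        bulk.BatchSize = 5000;                    // tune for your workload
        bulk.WriteToServer(dsTab.Tables["Table1"]);
    }
}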
For complicated queries I recommend using Dapper in conjunction with stored procedures.
I suggest keeping some kind of control over your communication with the database. That means abstracting some code, and the CommandBuilder helps there by automatically generating the CUD statements for you.
What would be even better is to use that technique together with a typed DataSet; then you get IntelliSense and compile-time checking on all your columns.