CLR trigger when only a particular column gets updated - C#

I wrote a CLR trigger that fires whenever a new row gets inserted into my table and then passes the values to my WCF service. Now I have to change the process so that, on UPDATE, the work happens only when a particular column gets updated, at which point I have to pull values from two other tables.
I'm just wondering: is there any way I can start the CLR trigger only when that particular column gets updated?
The scenario is like this:
Table 1: Customer Details (Cust.No, Cust.Name, Desc)
Table 2: Address (DoorNo, Street, City, State)
Here is what I am trying to do: if the "Desc" column in Table 1 gets updated, the CLR trigger should fire and pass all the values from Table 1 and Table 2 based on that "Desc".
Here is my code for the insert case:
[Microsoft.SqlServer.Server.SqlTrigger(Name = "WCFTrigger", Target = "tbCR", Event = "FOR UPDATE, INSERT")]
public static void Trigger1()
{
    SqlTriggerContext myContext = SqlContext.TriggerContext;
    if (myContext.TriggerAction == TriggerAction.Insert)
    {
        using (SqlConnection conn = new SqlConnection(@"context connection=true"))
        {
            conn.Open();
            //cmd = new SqlCommand(@"SELECT * FROM tbCR", conn);
            SqlCommand cmd = new SqlCommand(@"SELECT * FROM INSERTED", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                // get the inserted values here
                string custNo = reader[0].ToString();
                string custName = reader[1].ToString();
                string desc = reader[2].ToString();
                // myclient is the WCF service client
                myclient.InsertOccured(custNo, custName, desc);
            }
        }
    }
}

You cannot prevent the trigger from running selectively; it will always run no matter which columns were updated. However, once it has fired you can consult the COLUMNS_UPDATED() function:
Returns a varbinary bit pattern that indicates the columns in a table
or view that were inserted or updated. COLUMNS_UPDATED is used
anywhere inside the body of a Transact-SQL INSERT or UPDATE trigger to
test whether the trigger should execute certain actions.
So you would adjust your trigger logic to take the appropriate action according to which columns were updated.
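On the SQLCLR side, the same information is exposed through SqlTriggerContext.IsUpdatedColumn(). A minimal sketch, assuming Desc is the third column (ordinal 2) of tbCR; note that, like COLUMNS_UPDATED(), this reports whether the column appeared in the UPDATE statement's SET list, not whether its value actually changed:
[Microsoft.SqlServer.Server.SqlTrigger(Name = "WCFTrigger", Target = "tbCR", Event = "FOR UPDATE, INSERT")]
public static void Trigger1()
{
    SqlTriggerContext ctx = SqlContext.TriggerContext;
    if (ctx.TriggerAction == TriggerAction.Update)
    {
        const int descOrdinal = 2; // assumption: Desc is the third column of tbCR
        if (!ctx.IsUpdatedColumn(descOrdinal))
        {
            return; // Desc was not part of the UPDATE; nothing to do
        }
        // ...query INSERTED and the Address table here, then act on the values...
    }
}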
That being said, calling WCF from SQLCLR is a very, very bad idea. Calling WCF from a trigger is even worse. Your server will die in production as transactions block or abort waiting on some HTTP response to crawl back across the wire. Not to mention that your calls are inherently incorrect in the presence of rollbacks, as you cannot undo an HTTP call. The proper way to do such actions is to decouple the operation and the WCF call by means of a queue. You can do this with tables used as queues, you could use true queues, or you could use Change Tracking. Any of these would allow you to decouple the change from the WCF call and to make the call from a separate process, not from SQLCLR.
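For illustration, a rough sketch of the table-as-queue variant; the dbo.WcfCallQueue table and its CustNo column are assumptions. Inside the trigger, only a cheap local INSERT happens, and it commits or rolls back together with the triggering transaction:
// inside Trigger1, replacing the WCF call:
using (SqlConnection conn = new SqlConnection("context connection=true"))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO dbo.WcfCallQueue (CustNo) SELECT CustNo FROM INSERTED", conn))
    {
        cmd.ExecuteNonQuery();
    }
}
A separate worker process then drains the queue and makes the WCF call outside any SQL transaction:
using System;
using System.Data.SqlClient;
using System.Threading;

static void DrainQueue(string connectionString)
{
    while (true)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // DELETE ... OUTPUT atomically claims one queued row.
            var claim = new SqlCommand(
                "DELETE TOP (1) FROM dbo.WcfCallQueue OUTPUT DELETED.CustNo", conn);
            object custNo = claim.ExecuteScalar();
            if (custNo == null)
            {
                Thread.Sleep(1000); // queue is empty; poll again shortly
                continue;
            }
            // Make the WCF call here (e.g. myclient.InsertOccured(...)); it can
            // now be slow or fail without blocking the database transaction.
        }
    }
}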


MySqlConnection Threading Optimization

How do I optimize the functions that connect to the database so that, if many users access the database at the same time, the server does not crash or run into other problems?
Is it worth using threading? Is it possible that, if the database is late with the response, the main thread freezes or blocks other code?
public static void UpdatePassword(string email, string password)
{
    using (MySqlConnection connection = new MySqlConnection("")) // connection string omitted
    {
        connection.Open();
        MySqlCommand command = connection.CreateCommand();
        string saltedPassword = PasswordDerivation.Derive(password);
        command.CommandText = "UPDATE users SET password=@password WHERE email=@email LIMIT 1";
        command.Parameters.AddWithValue("@email", email);
        command.Parameters.AddWithValue("@password", saltedPassword);
        command.ExecuteNonQuery();
        connection.Close(); // redundant: the using block disposes the connection
    }
}
In most situations, a single "program" should use a single connection to the database. Having lots of connections incurs overhead, at least for creating the connections.
Async actions are rarely beneficial in database work, since SQL is very good at working efficiently with millions of rows in a single query.
MySQL is very good at letting separate clients talk to the database at the same time. However, this needs "transactions" to keep the data from getting messed up, as sketched below.
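For illustration, a minimal sketch of that transactional pattern; the accounts table and the transfer scenario are made up:
using MySql.Data.MySqlClient;

public static void TransferFunds(string connectionString, int fromId, int toId, decimal amount)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        using (MySqlTransaction tx = conn.BeginTransaction())
        {
            MySqlCommand cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = "UPDATE accounts SET balance = balance - @amt WHERE id = @id";
            cmd.Parameters.AddWithValue("@amt", amount);
            cmd.Parameters.AddWithValue("@id", fromId);
            cmd.ExecuteNonQuery();

            cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = "UPDATE accounts SET balance = balance + @amt WHERE id = @id";
            cmd.Parameters.AddWithValue("@amt", amount);
            cmd.Parameters.AddWithValue("@id", toId);
            cmd.ExecuteNonQuery();

            tx.Commit(); // either both updates land, or neither does
        }
    }
}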
If your goal in C# is to get some parallelism, please describe it further. We will either convince you that it won't be as beneficial as you think or help you rewrite the SQL to be more efficient and avoid the need for parallelism.

Rollback SQL without transaction

I have a Windows service that uploads data to a database and an MVC app that utilises said service. The way it works today is something like this:
Upload(someStuff);
WriteLog("Uploaded someStuff");
ReadData(someTable);
WriteLog("Reading someTable-data");
Drop(oldValues);
WriteLog("Dropping old values");
private void Upload(object someStuff)
{
    using (var conn = new SqlConnection(connectionString))
    {
        // perform the upload query
    }
}
private void WriteLog(string message)
{
    using (var conn = new SqlConnection(connectionString))
    {
        // insert the message into the log table
    }
}
private string ReadData(string table)
{
    using (var conn = new SqlConnection(connectionString))
    {
        // query the table and return the result
        return string.Empty; // placeholder
    }
}
// You get the gist.
The client can then see the current status of the upload through a query to the log-table.
I want to be able to perform a rollback if something fails. My first thought was to use a BeginTransaction() and then lastly a transaction.Commit(), but that would make my status messages behave badly. They would just go from "starting upload" and then fast-forward to the last step, where they would wait a long time before "Done".
I want the user to be able to see if the process is stuck on some specific step, but I still want to be able to perform a full rollback if something unexpected happens.
How do I achieve this?
Edit:
I don't seem to have been clear in my question. If I use a separate connection for the logging, that would indeed work-ish. The problem is that the actual code executes super-fast, so the status messages would fly by so quickly that the user wouldn't even be able to see them before the final "committing" message, which would take 99% of the upload time.
Design your table so that it has a (P)ending / (A)ctive / (D)eleted status flag. To perform an upload, new records are created as Pending (status P); your very final step is to change the current Active rows to Deleted and the Pending rows to Active (you could do that in a transaction). At your leisure, you can then purge the status-D (deleted) records at some later time.
In the event of an error, the Pending records simply become Deleted.
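A sketch of that final swap step; the dbo.Uploads table, its Status column, and the BatchId filter are assumptions. Only this small statement runs in a transaction, so the per-step log messages stay visible throughout the long-running upload:
static void PromotePendingBatch(string connectionString, Guid batchId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            SqlCommand cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            // Retire the currently visible rows, then promote the pending batch.
            cmd.CommandText = @"
                UPDATE dbo.Uploads SET Status = 'D' WHERE Status = 'A';
                UPDATE dbo.Uploads SET Status = 'A'
                 WHERE Status = 'P' AND BatchId = @batch;";
            cmd.Parameters.AddWithValue("@batch", batchId);
            cmd.ExecuteNonQuery();
            tx.Commit(); // the visible data set flips atomically
        }
    }
}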

How do I rollback a SQL CLR stored procedure when a user cancels the query?

I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name on to a service which will bulk insert some data into it, display the results from the table, then clean the table up.
What I've tried so far:
1. Use SqlTransaction. Cancelling the transaction works, but it puts my query window into a state where I can't continue working in it:
The transaction active in this session has been committed or aborted by another session
2. Use TransactionScope. Same issue as 1.
3. Manually clean up the table in a finally clause by issuing a DROP TABLE SqlCommand. This doesn't seem to get run, though my SqlContext.Pipe.Send() just prior to issuing the command does. It doesn't seem to be related to any time constraints: if I issue a Thread.Sleep(2000) before printing another line, it still prints the second line, whereas with command.ExecuteNonQuery() it stops before printing the second line.
4. Place the manual cleanup code into a CER or SafeHandle. This doesn't work, as having a CER requires certain guarantees, including not allocating additional memory and not calling methods that lack a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
            // emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));
            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            // drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately, SQLCLR does not handle query cancellation very well. However, given the error message, it seems that the cancellation does its own ROLLBACK. Have you tried not using a transaction within the SQLCLR code and instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps and granting EXECUTE permission only on the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
1. Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
2. Create a User from that Asymmetric Key
3. GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
4. Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And, in the event that the cancellation aborts the execution prior to executing ROLLBACK, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and that setting it to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName) which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual clean up.
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
This approach is probably the best overall since you probably cannot guarantee either that ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled-back, but if someone executes this via SSMS and it aborts without the ROLLBACK, then they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shutdown / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time that the SQL Server starts (due to tempdb being created new, as a copy of model, upon each start of the SQL Server service).
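For illustration, a minimal tweak to the question's code for the global temporary table variant (reusing the code, guid, and connection variables from the question; a sketch, not the full procedure):
// Create the work table as a global temporary table; tempdb then cleans it
// up on its own once the session fully goes away.
string tableName = $"##{code}_{guid:N}";
using (var create = new SqlCommand($"CREATE TABLE {tableName} (Id INT)", connection))
{
    create.ExecuteNonQuery();
}
// ...pass tableName to the bulk-insert service instead of the qb.* table...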
Stepping back, there are probably much better ways to pass the data out of your CLR procedure:
1) You can simply use the SqlContext Pipe to return a result set without creating a table (sketched below).
2) You can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a T-SQL wrapper procedure to make this convenient.
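A minimal sketch of option 1; GetDataViaPipe and its single Id column are illustrative stand-ins for whatever the service call returns:
using System.Data;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void GetDataViaPipe()
    {
        // Describe the shape of the result set, then stream rows down the pipe.
        var record = new SqlDataRecord(new SqlMetaData("Id", SqlDbType.Int));
        SqlContext.Pipe.SendResultsStart(record);
        for (int i = 0; i < 3; i++) // stand-in for rows from the service call
        {
            record.SetInt32(0, i);
            SqlContext.Pipe.SendResultsRow(record);
        }
        SqlContext.Pipe.SendResultsEnd();
    }
}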
Anyway using BEGIN TRAN/COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
                // emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));
                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for use, this would be an ideal match for sp_bindsession. With sp_bindsession, you would call sp_getbindtoken from the context session, pass the token to the service, and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporary objects and transactions being transparently propagated.

SQL Timeout Expired When It Shouldn't

I am using the SqlConnection class and running into problems with command timeouts expiring.
First off, I am setting a command timeout via the SqlCommand.CommandTimeout property, like so:
command.CommandTimeout = 300;
Also, I have ensured that the Execution Timeout setting is set to 0, so there should be no timeouts on the SQL Server Management Studio side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;
    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();
        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }
        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify that exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), found inside the foreach of the using statement above, performs another query against the same table in the same database. Here is the query it performs:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) who helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a blocking situation that is causing the query in the InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord(), so I assume you are running it on a separate connection. So what happens is:
1. You start a transaction.
2. You truncate the table.
3. InsertCourseLookupRecord() starts another connection and tries to insert data into the table, but the table is locked because your transaction isn't committed yet.
4. The connection in InsertCourseLookupRecord() times out after its own default command timeout of 30 seconds.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this, change your function definition to:
public static int InsertCourseLookupRecord(string course, SqlCommand cmd)
Take all the connection code out of the function, because you're going to use the cmd object instead. Then, when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
You could also just take the code in InsertCourseLookupRecord and put it inside the foreach loop instead, reusing the command object in your using block without needing to pass it to a function at all.
Because you are using two separate SqlConnection objects, you are blocking yourself due to the SqlTransaction you started in your outer code. The queries in InsertCourseLookupRecord, and possibly in GetAllCourses, get blocked by the TRUNCATE TABLE courses call, which has not yet been committed. They wait for the truncate to be committed and then time out (after the default 30 seconds, since the 300-second CommandTimeout was set only on the outer command).
You have a few options.
1. Pass the SqlConnection and SqlTransaction into GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction (sketched below).
2. Use an "ambient transaction" by getting rid of the SqlTransaction and using a System.Transactions.TransactionScope instead. This causes all connections opened to the server to share a transaction. It can create maintenance issues, because depending on what the queries are doing it may need to invoke the Distributed Transaction Coordinator, which may be disabled on some computers (from what you have shown, you would need the DTC, as you have two open connections at the same time).
The best option is to try to change your code to do option 1; if you can't, fall back on option 2.
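A sketch of option 1; the Course properties and the trimmed column list are assumptions for illustration:
public static void InsertCourseLookupRecord(Course course, SqlConnection conn, SqlTransaction transaction)
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.Transaction = transaction; // enlist in the caller's transaction
        cmd.CommandText = "INSERT INTO courses(courseid, name) VALUES(@courseid, @name)";
        cmd.Parameters.AddWithValue("@courseid", course.CourseId);
        cmd.Parameters.AddWithValue("@name", course.Name);
        cmd.ExecuteNonQuery();
    }
}
The caller passes the same conn and transaction it used for the TRUNCATE, so everything commits or rolls back together.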
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string).
Please review your connection string; that's the only possibility I can think of.

Sqldependency - Records added while processing

I have a question on SqlDependency. Let's assume my application receives a notification when the underlying query data changes; I then select the data from the table, process it, and resubscribe/restart the dependency. If the processing takes 1-2 minutes, some data may be added during that processing window. I am not sure how that data will get notified, or whether I have to wait for the next change to occur, which could be minutes or even hours later.
Below is my sample code; let me know if I am missing something.
Code:
private void LoadNotifications()
{
    DataTable dt = new DataTable();
    using (SqlCommand command = new SqlCommand("SELECT ID FROM dbo.NOTIFICATIONS", m_sqlConn))
    {
        command.Notification = null;
        SqlDependency dependency = new SqlDependency(command);
        dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);
        if (m_sqlConn.State == ConnectionState.Closed)
        {
            m_sqlConn.Open();
        }
        //using (SqlDataReader reader = command.ExecuteReader())
        //{
        //    if (reader.HasRows)
        //    {
        //        // let's assume this takes 2-3 minutes
        //    }
        //}
    }
}

private void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
    SqlDependency dependency = sender as SqlDependency;
    dependency.OnChange -= OnDependencyChange;
    LoadNotifications();
}
What you're describing is the typical asynchronous nature of data changes versus app notification. In short, changes may be happening constantly, and your app will not see them happen in real time. Furthermore, the changes may be going on whether or not your front-end app is open. Is there a requirement to see the data as it is being changed in order to make decisions in the front-end app, or do you need to review data changes made by other users?
One way in which you might achieve either is a queue of changes, in the form of a table populated by the underlying trigger. Then code your front-end app to periodically read from that table and mark the rows as read. This would allow you to de-couple the data changes from the app views, and you might see some processing performance increases.
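For illustration, a rough sketch of that poll-and-mark pattern; the dbo.ChangeQueue table and its columns are assumptions:
private void PollChanges(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // Claim all unread change rows and mark them read in one atomic statement.
        var cmd = new SqlCommand(
            @"UPDATE dbo.ChangeQueue SET IsRead = 1
              OUTPUT INSERTED.NotificationId
              WHERE IsRead = 0", conn);
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                int notificationId = reader.GetInt32(0);
                // process the change / refresh the relevant part of the UI
            }
        }
    }
}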
