How to cancel a SQL Server command from an ASP.NET web page? - c#

I have a standard ASP.NET web page. It issues a query, using Ajax, to a SQL Server, and returns a table of results. The problem is, sometimes this table of results is very large and the query takes too long. (I don't have control over the SQL, this happens via a stored procedure.)
Is there a way to have a "Cancel Request" button on the page, so that when the user clicks the button the query on the SQL server is killed? If so, how would I do that? (I am new to ASP.NET/C#, but understand the architecture of web requests.) Thanks.

One approach:
Create the connection and place it in a dictionary, with a Guid.ToString() as the key.
Run the query, return the key to your web page, and save it somewhere.
If the query finishes execution OK:
Find the connection, close it, and remove it from the dictionary.
If the user clicks "cancel query":
Send an Ajax request to the web server with the key you saved.
Find the connection, close it, and remove it from the dictionary.
Make sure to lock the dictionary.
Make sure to catch exceptions.
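A minimal sketch of this approach, assuming classic System.Data.SqlClient. One variation: storing the SqlCommand (rather than only the connection) lets you call SqlCommand.Cancel(), which sends an attention request to SQL Server and aborts the running batch directly. The class and key names are illustrative, and a ConcurrentDictionary stands in for manual locking:

```csharp
using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;

// Illustrative registry of in-flight commands, keyed by a GUID string.
// Names (QueryRegistry, etc.) are not from the question.
public static class QueryRegistry
{
    private static readonly ConcurrentDictionary<string, SqlCommand> Running =
        new ConcurrentDictionary<string, SqlCommand>();

    // Called by the page that starts the query; returns the key for the browser.
    public static string Register(SqlCommand cmd)
    {
        string key = Guid.NewGuid().ToString();
        Running[key] = cmd;
        return key;
    }

    // Called when the query completes normally.
    public static void Complete(string key)
    {
        SqlCommand cmd;
        Running.TryRemove(key, out cmd);
    }

    // Called by the "Cancel Request" Ajax endpoint.
    public static void Cancel(string key)
    {
        SqlCommand cmd;
        if (Running.TryRemove(key, out cmd))
        {
            cmd.Cancel();   // sends an attention signal; SQL Server aborts the batch
        }
    }
}
```

The page that starts the query calls Register() before ExecuteReader() and Complete() afterwards; the "Cancel Request" Ajax endpoint calls Cancel() with the key it was given.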

I would think that would be extremely difficult to do, because you would need to know the specific SPID for the exact request that issued the long-running query, and that request comes from the same user as many other valid requests (if your site is set up like most).

I'm working on the same thing, but I'd like to offer some corrections to the above statements. Finding the SPID is not as difficult as suggested. I've just implemented a system where, when certain long-running stored procedures run, they insert (or update) a record in a table that stores the SPID, the user running the report, some report information, and the start date. Using @@SPID within the stored procedure gets me the SPID that I store in the table.
Also, it's correct that closing the connection does not end the query; you need a KILL statement.
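For the KILL route, a hedged sketch of what a cancel endpoint might do, using the SPID recorded by the stored procedure via @@SPID (the tracking-table lookup is omitted; the connection string and the server rights needed to run KILL are assumptions):

```csharp
using System.Data.SqlClient;

// Kills the session whose SPID was recorded in the tracking table.
// KILL does not accept a parameter, so we concatenate, but only after the
// value has come through a strongly typed short, which prevents injection.
public static void KillRunningReport(string connectionString, short spid)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var cmd = new SqlCommand("KILL " + spid.ToString(), conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}
```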

Related

How to write a query so that it runs on the server without getting cancelled on a page refresh?

Say I have a web page with a button that has a click event listener. When I click the button, it makes an Ajax call (XMLHttpRequest) to update some values in the server-side database. The database query takes a long time to complete, so the response also takes a long time to come back to the client (because the code is synchronous). In the meantime, if I refresh the page, the Ajax request gets cancelled and the update query is stopped as well.
I want the Ajax call to initiate the database query but not wait for it to complete, so that even if I refresh the page, the database query is not cancelled.
On the backend I have the C# ASP.NET framework and a SQL Server database. The Ajax call goes to my server-side C# code, and from there I call a stored procedure that contains the long query.
I want the Ajax call to initiate this long query and not wait for its completion. Even if I refresh the page, the query should keep running on the server (the Ajax call may get cancelled, but the query should not stop).
What ways are available in C#, Dapper, ASP.NET, and SQL Server to implement this scenario?
If the query takes a long time, use a separate temp table for storing the results of the query.
Create a stored procedure that runs the query and stores its result in the temp table, with the username as a separate column.
You can initiate the stored procedure from the UI, and you should have an Ajax method for retrieving the latest result from the temp table for the particular user.
I hope you understood my explanation.
Updated:
Have a separate table for initiating the stored procedure. You can then use SqlDependency to run the stored procedure in the background for every OnChange notification. Please view this answer for a SqlDependency implementation.
I hope this helps
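A minimal fire-and-forget sketch of the idea above, assuming plain ADO.NET (the procedure and parameter names are placeholders). Note that work started with Task.Run inside ASP.NET can be lost if the application pool recycles, which is why a SQL Agent job or a queue is the more robust variant:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

// The request handler starts the stored procedure on a background task and
// returns immediately, so refreshing the page does not cancel the query.
public static void StartLongQuery(string connectionString, string userName)
{
    Task.Run(() =>
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("usp_LongUpdate", conn))   // placeholder name
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandTimeout = 0;                 // no client-side timeout
            cmd.Parameters.AddWithValue("@UserName", userName);
            conn.Open();
            cmd.ExecuteNonQuery();                  // results land in the results table
        }
    });
}
```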

LINQ Query not hitting the database

I'm having an issue with a really simple password-hash retrieval LINQ query. The problem is that if the user logs out and then tries to log back in, it just uses the cached values of the query without querying the database again. The query in question is the following:
using (var db = new DataModel.DatabaseContext())
{
    return (from emp in db.Employees
            where emp.Username == username
            select emp.Password).SingleOrDefault();
}
But when I break, it seems that EF IS executing a reader on a separate thread! Then why do I think it isn't really querying the database? Well, the execution time is just too short. It messes up my async methods; it basically doesn't leave enough time for a MessageBox to be shown (it works properly when I call the method for the first time). Maybe the database itself has some transient options set up?
EDIT: I thought I found out what the problem is, but this is just unreal. It executes the query on a remote server faster than a ping request (<0.001 s). I'm stumped.
It is because the first time you create a DbContext in your AppDomain (perhaps the first call to new YourDbContext() in your application), a lot of initialization and configuration happens under the hood, so it takes some time. After that (while the application is running) the process is fast enough that you don't notice it.
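If that first-call cost matters, a common (optional) pattern is to trigger the one-time initialization at application start; a sketch assuming EF6 and the DatabaseContext from the question:

```csharp
using System.Web;

// Optional warm-up in Global.asax: force EF's one-time model building at
// application start so the first real query doesn't pay that cost.
public class Global : HttpApplication
{
    protected void Application_Start()
    {
        using (var db = new DataModel.DatabaseContext())
        {
            db.Database.Initialize(force: false);   // EF6 API; no-op if already initialized
        }
    }
}
```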

Terminate query execute in oracle database

I have an ASP.NET page which allows users to run SELECT queries against an Oracle database; the result is then shown on the web page as a data table.
Sometimes users may enter queries that impact the DB badly due to long and deep query execution.
Is there a way to specify a timeout for each query run on the DB, so that if a query runs longer than the specified time, its execution is stopped on the database?
You can solve this problem on the application side or the DB side:
Application side: SQLQueryTimeout
You can set a SQLQueryTimeout generally for all statements on the driver, or individually with CommandTimeout for a certain command. If a query takes longer than that, it is cancelled and you get an exception which you can handle.
The second possibility is to set a timeout for an Oracle user. If your web application connects to the database via a shared user (as it should), you can use ALTER USER to assign a user profile which enforces a max CPU limit, which is in effect a time limit on all SQL statements issued by that user.
The beauty of the second solution is that the database enforces the rule, so you don't have to worry about your application code, where you might miss a line...
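A sketch of the application-side option, assuming the managed ODP.NET driver (Oracle.ManagedDataAccess). If the query exceeds CommandTimeout, the driver cancels it and an OracleException (typically ORA-01013, "user requested cancel of current operation") is thrown:

```csharp
using System;
using Oracle.ManagedDataAccess.Client;   // assumes the managed ODP.NET driver

public static void RunWithTimeout(string connectionString, string userSql)
{
    using (var conn = new OracleConnection(connectionString))
    using (var cmd = new OracleCommand(userSql, conn))
    {
        cmd.CommandTimeout = 30;   // seconds; 0 would mean no limit
        conn.Open();
        try
        {
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* build the data table */ }
            }
        }
        catch (OracleException ex)
        {
            // Handle the timeout, e.g. show "query cancelled" on the page
            Console.WriteLine("Query stopped: " + ex.Message);
        }
    }
}
```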

Why does a database query only go slow in the application?

I have a webpage that takes 10 minutes to run one query against a database, but the same query returns in less than a second when run from SQL Server Management Studio.
The webpage is just firing SQL at the database, executing a stored procedure which in turn performs a pretty simple SELECT over four tables. Again the code is basic ADO.NET: setting the CommandText on a SqlCommand and then calling ExecuteReader to get the data.
The webpage normally works quickly, but when it slows down the only way to speed it up again is to defragment the indexes on the tables being queried (different ones at different times), which doesn't seem to make sense when the same query executes so quickly manually.
I have had a look at this question but it doesn't apply as the webpage is literally just firing text at the database.
Does anyone have any good ideas why this is going slow one way and not the other?
Thanks
I would suspect parameter sniffing.
The cached execution plan used for your application's connection probably won't be usable by your SSMS connection due to different set options so it will generate a new different plan.
You can retrieve the cached plans for the stored procedure by using the query below. Then compare to see if they are different (e.g. is the slow one doing index seeks and bookmark lookups at a place where the other one does a scan?)
USE YourDatabase;
SELECT *
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
WHERE st.objectid = OBJECT_ID('YourProcName')
AND epa.attribute = 'set_options'
Is there any difference between the command text of the query in the app and the query you are executing manually? Since you said that reindexing helps performance (which also updates statistics), it sounds like it may be getting stuck on a bad execution plan.
You might want to run a SQL trace and capture the Showplan XML event to see what the execution plan looks like, and also capture the statement-completed event (though this can slow the server down if a lot of statements are going through the system, so be careful) to be sure the statement sent to SQL Server is the same one you are running manually.

Exec Console Application by using TSQL SP?

I have a console application (in C#) that reads data from SQL Server (Microsoft SQL Server 2005) and writes data back. What I need right now is to add a trigger to a table to execute the console application when any row of data is changed.
I'm not sure which stored procedure is available on SQL Server 2005 to launch a console application, and how I can pass the result from the app back. Will it run synchronously or asynchronously? Are there any permission issues I have to configure?
Don't launch external processes from a trigger; you'll bring the server to its knees. It doesn't matter if it's xp_cmdshell or a CLR procedure. Instead use Service Broker: issue a SEND in your trigger to queue a message to a local service, and rely on activation to do the external dependent processing asynchronously and in a separate context.
The xp_cmdshell stored procedure can be used to launch an external process but this is often disabled for security reasons.
As the console application is already written in C#, perhaps it could be re-written as an SQLCLR stored procedure?
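A hedged sketch of what such a SQLCLR rewrite might look like (the table name and the logic are placeholders; deploying it still requires CREATE ASSEMBLY / CREATE PROCEDURE on the server):

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class ClrProcs
{
    // Runs in-process inside SQL Server when invoked as a stored procedure.
    [SqlProcedure]
    public static void ProcessChangedRows()
    {
        // "context connection=true" reuses the caller's connection in-process
        using (var conn = new SqlConnection("context connection=true"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.MyTable", conn))
        {
            conn.Open();
            object count = cmd.ExecuteScalar();
            SqlContext.Pipe.Send("Rows: " + count);   // message back to the caller
        }
    }
}
```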
This seems a little shaky to me. Is there any chance of the trigger being called frequently, leading to many launches of the application? Also, I don't think the trigger will complete until the console application has finished or failed; meanwhile, the operation that caused the trigger to fire will still be waiting.
Does the application need to run right away? If not, then maybe you could run this as a SQL Agent job periodically.
Triggers slow things down, and in your case this will lead to mass chaos. Triggers are not the route you want to take here.
You might want to consider having a console app that polls the database and, every time it finds changes, displays the changed rows for the user's consumption.
You can track these changes using a field like [LastUpdatedDateTime] with a default of GETDATE(), and not sending this value in your query, so it always holds the latest timestamp of the change. Alternatively, you can have an audit table that gets filled by a trigger.
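A sketch of the polling approach described above (table and column names, and the polling interval, are illustrative):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

// Every few seconds, fetch rows changed since the last check, using the
// LastUpdatedDateTime column described above.
public static void PollForChanges(string connectionString)
{
    // SQL Server datetime cannot hold DateTime.MinValue, so start at its minimum
    DateTime lastSeen = System.Data.SqlTypes.SqlDateTime.MinValue.Value;
    while (true)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, LastUpdatedDateTime FROM dbo.MyTable " +
            "WHERE LastUpdatedDateTime > @since", conn))
        {
            cmd.Parameters.AddWithValue("@since", lastSeen);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    DateTime changed = reader.GetDateTime(1);
                    if (changed > lastSeen) lastSeen = changed;
                    Console.WriteLine("Changed row: " + reader.GetInt32(0));
                }
            }
        }
        Thread.Sleep(5000);   // poll interval
    }
}
```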
I agree with Ken, you might want to think about changing the architecture. Is the console application reading and writing data to the same SQL server that is invoking it? If so you are better off coding that logic into the trigger or stored procedure itself, and/or changing your database schema to make it so your logic doesn't have to be so complicated.
