I have an ASP.NET page which allows users to run SELECT queries on an Oracle database. The result is then shown on the web page as a data table.
Sometimes users may enter queries that impact the database badly because of long, deep query execution.
Is there any way to specify a timeout for each query run on the database, so that if a query runs longer than the specified time, its execution is stopped on the database side?
You can solve this problem on the application or DB side:
Application Side: SQLQueryTimeout
You can set a query timeout generally for all statements on the driver, or individually with CommandTimeout for a particular command. If a query takes longer than that, an exception is raised which you can handle.
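As an illustration, a minimal sketch of the per-command variant using ODP.NET (this assumes the Oracle.ManagedDataAccess provider; the timeout value and handling are examples, not a definitive implementation):

using System;
using Oracle.ManagedDataAccess.Client;

class QueryRunner
{
    public static void RunWithTimeout(string connectionString, string sql)
    {
        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(sql, conn))
        {
            cmd.CommandTimeout = 30;   // seconds; 0 means no limit
            conn.Open();
            try
            {
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* bind rows to the data table */ }
                }
            }
            catch (OracleException ex)
            {
                // ORA-01013 ("user requested cancel of current operation")
                // is what the timeout typically surfaces as.
                Console.WriteLine("Query cancelled or failed: " + ex.Message);
            }
        }
    }
}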
The second possibility is to set a timeout for an Oracle user. If your web application connects to the database via a shared user (as it should), you can use ALTER USER to assign a user profile which enforces a CPU limit, which is in effect a time limit on all SQL statements issued by that user.
The beauty of the second solution is that the database enforces the rule, so you don't have to worry about a line you might have missed in your application code...
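A hedged sketch of that setup (profile name, user name, and limit values are hypothetical; CPU_PER_CALL is measured in hundredths of a second, and a DBA would normally run this DDL directly):

using Oracle.ManagedDataAccess.Client;

class ProfileSetup
{
    public static void ApplyCpuLimit(string adminConnectionString)
    {
        // Hypothetical profile/user names; run once with DBA privileges.
        string[] ddl =
        {
            // Roughly 30 seconds of CPU per call (unit is 1/100 s).
            "CREATE PROFILE report_users LIMIT CPU_PER_CALL 3000",
            // Attach the profile to the shared application user.
            "ALTER USER webapp_user PROFILE report_users",
            // Profiles are only enforced while resource limits are enabled.
            "ALTER SYSTEM SET RESOURCE_LIMIT = TRUE"
        };

        using (var conn = new OracleConnection(adminConnectionString))
        {
            conn.Open();
            foreach (var statement in ddl)
                using (var cmd = new OracleCommand(statement, conn))
                    cmd.ExecuteNonQuery();
        }
    }
}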
Related
Our production setup is an application server with applications connecting to a SQL Server 2016 database. On the application server there are several IIS applications which run under a gMSA account. The gMSA account has db_datawriter and db_datareader privileges on the database.
Our team have db_datareader privileges on the same SQL Server database. We require this for production support purposes.
We recently had an incident where a team member invoked a query on SQL Server Management Studio on their local machine:
SELECT * FROM [DatabaseA].[dbo].[TableA] order by CreateDt desc;
TableA has about 1.4M records and there are multiple blob-type columns. CreateDt is a DATETIME2 column.
We have RedGate SQL Monitor configured for the SQL Server database server. It raised a long-running query alert for a query that ran for 1738 seconds.
At the same time, one of our web applications (.NET 4.6), which exclusively inserts new records into TableA, was experiencing constant query timeout errors:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
These errors occurred for almost the exact same 1738 second period. This leads me to believe these are connected.
My understanding is that a SELECT query only creates a Shared lock and would not block access to this table for another connection. Is my understanding correct here?
My question is: is db_datareader safe for team members? Is there a lesser privilege that would allow reading data but absolutely no way for blocking behaviour to be created?
The presence of SELECT * in a query generally prevents the use of an index and forces a SCAN of the table.
With many LOBs (BLOBs, CLOBs or NCLOBs) and many rows, the ORDER BY clause will take a long time to:
generate the entries
sort them on CreateDt
So a read lock (shared lock) is held while all the data in the table is read. This lock accepts other shared locks but prohibits taking an exclusive lock to modify data (INSERT, UPDATE, DELETE). That guarantees to other users that the data won't be modified underneath them.
This locking technique is well known as pessimistic locking. The locks are taken before the query begins executing and released at the end, so readers block writers and writers block everybody.
The other technique, which SQL Server also supports, is called optimistic locking. It works on a copy (version) of the data, without any locking, and verifies at the end of execution that the data involved in writes has not been modified since the beginning. So there is less blocking...
To switch to optimistic locking you have the choice of allowing it or forcing it:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
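Once ALLOW_SNAPSHOT_ISOLATION is on, each reading session still has to opt in. A minimal sketch of that opt-in from the application side (connection string, column and table names are hypothetical):

using System.Data;
using System.Data.SqlClient;

class SnapshotReader
{
    public static void ReadWithoutBlockingWriters(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Snapshot isolation reads row versions instead of taking shared
            // locks, so concurrent INSERT/UPDATE/DELETE are not blocked.
            using (var tx = conn.BeginTransaction(IsolationLevel.Snapshot))
            {
                using (var cmd = new SqlCommand(
                    "SELECT Id, CreateDt FROM dbo.TableA ORDER BY CreateDt DESC",
                    conn, tx))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* consume rows */ }
                }
                tx.Commit();
            }
        }
    }
}

With READ_COMMITTED_SNAPSHOT ON, by contrast, ordinary read-committed queries get the versioned behaviour without any application change.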
In SQL Server, writers block readers, and readers block writers.
This query doesn't have a WHERE clause and will touch the entire table, probably starting with an IS (Intent Shared) lock and eventually escalating to a shared lock that updates/inserts/deletes can't get past while it is held. That lock is likely held for the duration of the very long sort the ORDER BY causes.
It can be bypassed in several ways, but I don't assume you're actually after how, seeing as whoever ran the query was probably not really thinking straight anyway, and this is not a regular occurrence.
Nevertheless, here are some ways to bypass:
Read Committed Snapshot Isolation
WITH (NOLOCK), but only if you don't really care about the data that is retrieved, as it can return rows twice, return rows that were never committed, and skip rows altogether; a sketch follows after this list.
Reducing the columns you return and reading from a non-clustered index instead.
But to answer your question, yes selects can block inserts.
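For illustration only, a sketch of the NOLOCK variant from ADO.NET (the column list is made up, and the dirty-read caveats above still apply):

using System.Data.SqlClient;

class DirtyReader
{
    public static void ReadLatest(string connectionString)
    {
        // Hypothetical columns; TOP keeps the support query cheap.
        const string sql =
            @"SELECT TOP (100) Id, CreateDt
              FROM [DatabaseA].[dbo].[TableA] WITH (NOLOCK)
              ORDER BY CreateDt DESC;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* diagnostic use only */ }
        }
    }
}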
I have a standard ASP.NET web page. It issues a query, using Ajax, to a SQL Server and returns a table of results. The problem is that sometimes this table of results is very large and the query takes too long. (I don't have control over the SQL; this happens via a stored procedure.)
Is there a way to have a "Cancel Request" button on the page, so that when the user clicks the button the query on the SQL server is killed? If so, how would I do that? (I am new to ASP.NET/C#, but understand the architecture of web requests.) Thanks.
One approach:
Create the connection and place it in a dictionary, with a Guid.ToString() as the key.
Run the query and return the key to your web page, and save it somewhere.
If the query finishes execution OK:
Find the connection, close it, and remove it from the dictionary.
If the user clicks cancel query:
Send an Ajax request to the web server with the key you saved.
Find the connection, close it, and remove it from the dictionary.
Make sure you lock the dictionary.
Make sure you catch exceptions. A sketch of this approach follows below.
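A minimal sketch of that bookkeeping, assuming ADO.NET on a single web server (names are illustrative; the caller generates the key with Guid.NewGuid().ToString(), and SqlCommand.Cancel sends the attention event that asks the server to abort the statement):

using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;

static class RunningQueries
{
    // ConcurrentDictionary is thread-safe, which covers the locking concern.
    private static readonly ConcurrentDictionary<string, SqlCommand> Commands =
        new ConcurrentDictionary<string, SqlCommand>();

    // The page generates the key first, hands it to the browser,
    // then runs the query under that key on the Ajax request.
    public static void Run(string key, string connectionString, string sql)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            Commands[key] = cmd;
            try
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read()) { /* build the result table */ }
            }
            catch (SqlException)
            {
                // A cancelled command surfaces here; report it as "cancelled".
            }
            finally
            {
                SqlCommand removed;
                Commands.TryRemove(key, out removed);
            }
        }
    }

    // Called from the "Cancel Request" Ajax endpoint.
    public static void Cancel(string key)
    {
        SqlCommand cmd;
        if (Commands.TryGetValue(key, out cmd))
            cmd.Cancel();   // sends an attention event; the server aborts the query
    }
}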
I would think that would be extremely difficult to do, because you would need to know the specific SPID of the exact request that issued the long-running query, and that request will be coming from the same user as many other valid requests (if your site is set up like most).
I'm working on the same thing, but I'd like to offer some corrections to the above statements. The claim that finding a SPID is difficult is incorrect. I've just implemented a system where certain long-running stored procedures insert (or update) a record into a tracking table that stores the SPID, the user running the report, some report information, and the start date. Using @@SPID within the stored procedure gets me the SPID that I store in the table.
Also, it is correct that closing the connection does not end the query; you need a KILL statement.
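A hedged sketch of both halves (table, column, and report names are hypothetical; KILL does not accept a variable, so the validated SPID has to be inlined, and it requires elevated rights such as ALTER ANY CONNECTION):

using System;
using System.Data.SqlClient;

static class ReportKiller
{
    // Inside the long-running stored procedure you would record the session,
    // e.g.: INSERT INTO dbo.RunningReports (Spid, UserName, Report, StartedAt)
    //       VALUES (@@SPID, SUSER_SNAME(), 'MonthlyReport', SYSDATETIME());

    public static void Kill(string adminConnectionString, string reportName)
    {
        using (var conn = new SqlConnection(adminConnectionString))
        {
            conn.Open();

            int spid;
            using (var find = new SqlCommand(
                "SELECT Spid FROM dbo.RunningReports WHERE Report = @r", conn))
            {
                find.Parameters.AddWithValue("@r", reportName);
                object result = find.ExecuteScalar();
                if (result == null) return;          // nothing to kill
                spid = Convert.ToInt32(result);      // validated as an int
            }

            // KILL requires a literal session id, hence the inlined value.
            using (var kill = new SqlCommand("KILL " + spid, conn))
                kill.ExecuteNonQuery();
        }
    }
}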
I currently need to create a reporting program that runs reports on many different tables within a SQL database. Multiple clients require this functionality, but some clients have larger databases than others. What I would like to know is whether it is possible to halt a query after a period of time if it has been taking 'too' long.
To give some context, some clients have tables with in excess of 2 million rows, although a different client may have only 50k rows in the same table. I want to be able to run the query for say 20 seconds and if it has not finished by then return a message to the user to say that the result set will be too large and the report needs to be generated outside of hours as we do not want to run resource intensive operations during the day.
Set the command timeout on the DataContext via the CommandTimeout property (note that the Connect Timeout in a connection string only applies to opening the connection, not to queries). When the timeout expires you will get a timeout exception, and your query will be cancelled.
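For example, a minimal sketch with plain ADO.NET (the 20-second figure comes from the question; SqlException.Number == -2 is the client-side timeout code):

using System.Data.SqlClient;

class BoundedReport
{
    public static bool TryRunReport(string connectionString, string procName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(procName, conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.CommandTimeout = 20;                // seconds
            conn.Open();
            try
            {
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read()) { /* build the report */ }
                return true;
            }
            catch (SqlException ex) when (ex.Number == -2)
            {
                // Tell the user to schedule the report out of hours.
                return false;
            }
        }
    }
}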
You cannot be sure that the query is cancelled on the server the very instant the timeout occurs, but in most cases it will cancel rather quickly. For details read the excellent article "There's no such thing as a query timeout...". The important part from there is:
A client signals a query timeout to the server using an attention event. An attention event is simply a distinct type of TDS packet a SQL Server client can send to it. In addition to connect/disconnect, T-SQL batch, and RPC events, a client can signal an attention to the server. An attention tells the server to cancel the connection's currently executing query (if there is one) as soon as possible. An attention doesn't roll back open transactions, and it doesn't stop the currently executing query on a dime -- the server aborts whatever it was doing for the connection at the next available opportunity. Usually, this happens pretty quickly, but not always.
But remember, it will differ from provider to provider and it might even be subject to change between server versions.
You can do that easily if you run the query on a background thread. Make the main thread start a timer and spawn a background thread that runs the query. If the background thread hasn't returned a result after 20 seconds, the main thread can cancel it.
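A sketch of that pattern with a task plus SqlCommand.Cancel, which is safe to call from another thread (the 20-second deadline comes from the question above):

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

class TimedQuery
{
    public static bool RunWithDeadline(string connectionString, string sql)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();

            // Background thread runs the query.
            var work = Task.Run(() =>
            {
                try
                {
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read()) { /* consume rows */ }
                }
                catch (SqlException)
                {
                    // Expected when the command is cancelled below.
                }
            });

            // Main thread waits up to 20 seconds, then cancels.
            if (!work.Wait(TimeSpan.FromSeconds(20)))
            {
                cmd.Cancel();   // aborts the statement on the server
                return false;   // report "result set too large" to the user
            }
            return true;
        }
    }
}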
I've written a restaurant management application.
I have a SQL Server 2005 database with one table named OrdersItems. Every 5 minutes I want to read all rows of this table and, based on specific criteria, update some fields.
I don't want to do this in my main application; I'd prefer a separate engine to perform it.
What is the best method to perform such a task? Also note that this table (OrdersItems) is in use all the time, because the main application must always be running to receive new restaurant orders.
You can create a SQL Server Agent job that does the update every five minutes.
If you are using SQL Server Express Edition, you can't use SQL Server Agent, because it's only included in the "bigger" editions of SQL Server.
In this case, you can create your jobs manually using batch files and Windows Task Scheduler.
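For example, a tiny console app that Task Scheduler could run every five minutes (connection details and procedure name are made up):

using System.Data.SqlClient;

class OrdersItemsJob
{
    static void Main()
    {
        const string connectionString =
            @"Server=.\SQLEXPRESS;Database=Restaurant;Integrated Security=true";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.UpdateOrdersItems", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            conn.Open();
            // The procedure applies the criteria in one set-based update.
            cmd.ExecuteNonQuery();
        }
    }
}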
I definitely agree with Christian's and dougajmcdonald's points about using a SQL task / maintenance plan. However, since you included C# in your tags, an alternative is to create a Windows service.
The benefits of this approach:
Can run on a machine that doesn't have SQL Server Agent installed (including Express editions).
Can run outside the context of a user login.
Has the standard stop/start/pause/continue mechanism that's well understood.
If the service itself fails, it will likely leave an event log entry.
This answer contains a template for a Windows service that periodically gets data and executes. You may simply want to change the DoStuff method to execute a stored procedure, as in the sketch below.
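A hedged reduction of such a template (service name, connection string, and procedure are placeholders):

using System.Data.SqlClient;
using System.ServiceProcess;
using System.Timers;

public class OrdersItemsService : ServiceBase
{
    private readonly Timer _timer = new Timer(5 * 60 * 1000);  // every 5 minutes

    public OrdersItemsService()
    {
        ServiceName = "OrdersItemsUpdater";
        _timer.Elapsed += (s, e) => DoStuff();
    }

    protected override void OnStart(string[] args) { _timer.Start(); }
    protected override void OnStop() { _timer.Stop(); }

    private static void DoStuff()
    {
        using (var conn = new SqlConnection(
            "Server=.;Database=Restaurant;Integrated Security=true"))
        using (var cmd = new SqlCommand("dbo.UpdateOrdersItems", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public static void Main()
    {
        ServiceBase.Run(new OrdersItemsService());
    }
}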
Create a Service Broker dialog timer and let it activate a stored procedure. This has the advantage of being fully contained inside the database (no external process), it does not require SQL Agent (so it works on Express), and it is completely crash resilient, to the point that it survives database detach/attach and backup/restore operations (the scheduled job will run after recovery on the newly restored database).
I would expect a SQL task / maintenance plan to be the best fit for this.
You can set one up for whatever interval you want, specifying the SQL statement, maintenance task, etc. you want to run.
You can also set up alerts, for example if you want to know when it fails.
Deploy a cron job on a server with access to the database, started every 5 minutes, which processes your data using transactions. I see one problem there: if the amount of data to be processed is large, it could well take more than five minutes.
I have a webpage that takes 10 minutes to run one query against a database, but the same query returns in less than a second when run from SQL Server Management Studio.
The webpage is just firing SQL at the database that is executing a stored procedure, which in turn is performing a pretty simple select over four tables. Again the code is basic ADO, setting the CommandText on an SqlCommand and then performing an ExecuteReader to get the data.
The webpage normally works quickly, but when it slows down the only way to speed it up again is to defragment the indexes on the tables being queried (different ones at different times), which doesn't seem to make sense when the same query executes so quickly manually.
I have had a look at this question but it doesn't apply as the webpage is literally just firing text at the database.
Does anyone have any good ideas why this is going slow one way and not the other?
Thanks
I would suspect parameter sniffing.
The cached execution plan used for your application's connection probably won't be usable by your SSMS connection due to different SET options, so SSMS will generate a new, different plan.
You can retrieve the cached plans for the stored procedure by using the query below. Then compare to see if they are different (e.g. is the slow one doing index seeks and bookmark lookups at a place where the other one does a scan?)
USE YourDatabase;

SELECT cp.*, st.text, qp.query_plan, epa.value AS set_options
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS epa
WHERE st.objectid = OBJECT_ID('YourProcName')
  AND epa.attribute = 'set_options';
Is there any difference between the command text of the query in the app and the query you are executing manually? Since you said that reindexing helps performance (which also updates statistics), it sounds like it may be getting stuck on a bad execution plan.
You might want to run a SQL trace and capture the Showplan XML event to see what the execution plan looks like, and also capture the statement-completed event (though this can slow the server down if a lot of statements are coming through the system, so be careful) to be sure the statement sent to SQL Server is the same one you are running manually.