My scenario is common:
I have a stored procedure that needs to update multiple tables.
If one of the updates fails, all the updates should be rolled back.
The straightforward answer is to wrap all the updates in one transaction and just roll that back. However, in a system like ours, this causes concurrency issues.
When we break the updates into multiple short transactions, we get a throughput of ~30 concurrent executions per second before deadlocking issues start to emerge.
If we put everything into one transaction that spans all of them, we get ~2 concurrent executions per second before deadlocks show up.
In our case, we place a try-catch block after every short transaction and manually DELETE/UPDATE the changes made by the previous ones back to their original values. So essentially we mimic the transaction behaviour in a very expensive way...
It is working alright, since it's well written and we don't get many "rollbacks"...
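Roughly, each step looks something like this (a simplified sketch with placeholder table/column names, not our real code):
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE table2 SET Qty = Qty - @Amount WHERE Id = @Id;  -- the current short transaction
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- manually compensate the earlier short transactions that already committed
    UPDATE table1 SET Qty = Qty + @Amount WHERE Id = @Id;
    RAISERROR('step failed, compensated', 16, 1);
END CATCH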
One thing this approach cannot handle at all is a command timeout from the web server / client.
I have read extensively in many forums and blogs and scanned through MSDN, and cannot find a good solution. Many have presented the problem, but I have yet to see a good solution.
The question is this: is there ANY solution to this issue that will allow a stable rollback of updates to multiple tables, without requiring an exclusive lock on all of the rows for the entire duration of the long transaction?
Assume that it is not an optimization issue. The tables are probably close to maximum optimization and can give very high throughput as long as deadlocks don't hit them. There are no table locks/page locks etc., only row locks on updates - but when you have so many concurrent sessions, some of them need to update the same row...
It can be via SQL, client-side C#, or server-side C# (extending SQL Server?).
Is there such a solution in any book/blog that I have not found?
We are using SQL Server 2008 R2, with a .NET client/web server connecting to it.
Code example:
CREATE PROCEDURE sptest
AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE table1 ...;
    UPDATE table2 ...;
    COMMIT TRANSACTION;
END
In this case, if sptest is run twice, the second instance cannot update table1 until instance 1 has committed.
Compared to this:
CREATE PROCEDURE sptest2
AS
BEGIN
    UPDATE table1 ...;
    UPDATE table2 ...;
END
sptest2 has a much higher throughput, but it has a chance of corrupting the data.
This is what we are trying to solve. Is there even a theoretical solution to this?
Thanks,
JS
I would say that you should dig deeper to find out why the deadlocks occur. Possibly you should change the order of the updates to avoid them. Maybe some index is "guilty".
You cannot roll back changes if other transactions can change the data, so you do need to hold update locks on those rows. But you can use the snapshot isolation level to allow consistent reads before the update commits.
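For example, a minimal sketch (YourDb is a placeholder for your database; note this only stops readers from blocking behind the update transactions - two sessions updating the same row will still block each other):
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- in a reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM table1;  -- reads the last committed row versions, takes no shared locks
COMMIT TRANSACTION;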
For all inner-joined tables that are mostly static, or where dirty data is highly unlikely to affect the query, you can apply:
INNER JOIN LookupTable lut WITH (NOLOCK) ON lut.ID = SomeOtherTableID
This tells the query engine not to take shared locks on the hinted table (LookupTable), i.e. that I do not care about in-flight updates to it.
This can reduce your issue in most cases. For more difficult deadlocks I have implemented a deadlock graph that is generated and emailed when a deadlock occurs, and contains all the detailed info about the deadlock.
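If you want something similar, here is a rough sketch of the standard building blocks (my actual implementation isn't shown here):
-- Write deadlock details to the SQL Server error log (which you can then parse and e-mail):
DBCC TRACEON (1204, -1);  -- node-oriented deadlock report
DBCC TRACEON (1222, -1);  -- more detailed XML-style deadlock report
-- On 2008 R2 the built-in system_health Extended Events session already records
-- xml_deadlock_report events; you can pull them out of its ring buffer target:
SELECT CAST(t.target_data AS xml) AS ring_buffer_xml
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t ON t.event_session_address = s.address
WHERE s.name = N'system_health';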
Our production setup is that we have an application server with applications connecting to a SQL Server 2016 database. On the application server there are several IIS applications which run under a GMSA account. The GMSA account has db_datawriter and db_datareader privileges on the database.
Our team has db_datareader privileges on the same SQL Server database. We require this for production support purposes.
We recently had an incident where a team member invoked a query on SQL Server Management Studio on their local machine:
SELECT * FROM [DatabaseA].[dbo].[TableA] order by CreateDt desc;
TableA has about 1.4m records and there are multiple blob-type columns. CreateDt is a DATETIME2 column.
We have RedGate SQL Monitor configured for the SQL Server Database Server. This raised a long-running query alert that ran for 1738 seconds.
At the same time one of our web applications (.NET 4.6) which exclusively inserts new records to TableA was experiencing constant query timeout errors:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
These errors occurred for almost the exact same 1738 second period. This leads me to believe these are connected.
My understanding is that a SELECT query only creates a Shared lock and would not block access to this table for another connection. Is my understanding correct here?
My question is: is db_datareader safe for team members? Is there a lesser privilege that would allow reading data with absolutely no way to create blocking behaviour?
The presence of SELECT * in a query generally prevents the use of an index and results in a SCAN of the table.
With many LOBs (BLOB, CLOB or NCLOB columns) and many rows, the ORDER BY clause will take a long time to:
generate the entries
sort them on CreateDt
So a read lock (shared lock) is held while reading all the data of the table. This lock is compatible with other shared locks, but it prevents an exclusive lock from being taken to modify data (INSERT, UPDATE, DELETE). This guarantees to other users that the data won't be modified while it is being read.
This locking technique is well known as pessimistic locking. The locks are acquired as the query executes and released at the end, so readers block writers and writers block everybody.
The other technique, which SQL Server can also use, is called optimistic locking: it works from a versioned copy of the data, without any locking, and verifies at the end of execution that the data involved in writes has not been modified since the beginning. So there is much less blocking...
To switch to optimistic locking you have the choice of allowing it per session or forcing it as the default:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
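The difference in practice: with only ALLOW_SNAPSHOT_ISOLATION on, a session has to opt in explicitly, whereas READ_COMMITTED_SNAPSHOT changes the behaviour of ordinary read-committed queries without touching them. A sketch using the table from the question:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- explicit opt-in, needed with ALLOW_SNAPSHOT_ISOLATION alone
SELECT * FROM [DatabaseA].[dbo].[TableA] ORDER BY CreateDt DESC;  -- reads row versions, takes no shared locks, so it no longer blocks the inserts
With READ_COMMITTED_SNAPSHOT ON, the same SELECT gets versioned reads with no change to the query or the session.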
In SQL Server, under the default read committed isolation level, writers block readers and readers block writers.
This query doesn't have a WHERE clause and will touch the entire table, probably starting with IS (intent shared) locks and eventually escalating to a table-level shared lock that updates/inserts/deletes can't get past while it is held. That lock is likely held for the duration of the very long sort the ORDER BY is causing.
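You can watch this happen while such a query runs (a quick diagnostic sketch):
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID(N'DatabaseA');
You'll typically see the reading session holding an OBJECT-level S lock and the inserting sessions queued behind it with request_status = 'WAIT'.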
It can be bypassed in several ways, but I don't assume you're actually after how, seeing as whoever ran the query was probably not really thinking straight anyway, and this is not a regular occurrence.
Nevertheless, here are some ways to bypass:
Read Committed Snapshot Isolation
WITH (NOLOCK). But only if you don't really care about the data that is retrieved, as it can return rows twice, return rows that were never committed, and skip rows altogether.
Reducing the columns you return and reading from a non-clustered index instead (see the index sketch below).
But to answer your question, yes selects can block inserts.
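For the third option, something along these lines (a sketch; the index name and included columns are hypothetical, and whether it helps depends on which columns the support query actually needs):
CREATE NONCLUSTERED INDEX IX_TableA_CreateDt
    ON dbo.TableA (CreateDt DESC)
    INCLUDE (Id, Status);  -- hypothetical non-LOB columns the support query needs
A query such as SELECT Id, Status, CreateDt FROM dbo.TableA ORDER BY CreateDt DESC can then read only the narrow index, finishes far sooner, and locks far less.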
I'm looking for a solution to a thorny problem.
My colleagues and I have made an 'engine' (in C#) that performs different elaborations on a SQL Server database.
Initially, these elaborations were contained in many stored procedures called in series in a nightly batch. It was a system with many flaws.
Now we have extracted every single query from each stored procedure and - strange as it may sound - we have stored the queries in the DB.
(Note: the reasons are different and I'm not listing them all, but you just need to know that, for business reasons, we do not have the opportunity to make frequent software releases... but we have a lot of freedom with SQL scripts).
Mainly, the logic behind our engine is:
there are Phases, which are called sequentially
each Phase contains several Steps, which are grouped into Sets
a Set is a group of Steps that will be executed sequentially
the Sets, unless otherwise specified, run in parallel with each other
a Step that does not belong to any Set is wrapped in a Set of its own (created at runtime)
before starting, a Set may have to wait for the completion of one or more Steps
a Step corresponds to an atomic (or nearly atomic) SQL query or C# method to run
at startup the engine queries the database, then composes the Phases, Steps and Sets (and their configurations)... which are then executed
We have created the engine, we have all the configurations... and everything works.
However, we have a requirement: some phases must be transactional. If even a single step of such a phase fails, we need to roll back the entire phase.
What creates problems is the management of the transaction.
Initially we created a single transaction and connection for the entire phase, but we soon realized that - because of the multithreading - this is not thread-safe.
In addition, after several tests, we got exceptions regarding the transaction. Apparently, when a phase contains a LOT of steps (= many database queries), the same transaction can no longer execute any further statements.
So now we've changed it so that each step in a phase that requires a transaction opens its own connection and transaction; if everything goes well, they all commit (otherwise they roll back).
It works. However, we have noticed a limitation: the use of temporary tables.
In a transactional phase, when I create a local temp table (#TempTable1) in step x, I can't use #TempTable1 in the next step y (SELECT TOP 1 1 FROM #TempTable1).
This is logical: step y runs on a separate connection, and #TempTable1 is dropped when the session that created it ends.
Then we tried a global temp table, ##TempTable2, but in step y the execution of the SELECT is blocked until the timeout expires.
I also tried lowering the transaction isolation level, but it made no difference.
Now we are in the unfortunate situation of having to create real tables instead of using temporary tables.
I'm looking for a compromise between using transactions across a large number of steps and using temporary tables. I believe the crux of the matter is the management of the transactions. Suggestions?
I have code that carries out data retrieval - basically executes anything from 3 to 12 SQL (oracle) read statements to retrieve data about an object.
Unfortunately it's running slowly (no SQL statement in particular; it's just the fact that I have so many of them, and they take around 0.2 seconds per statement, which can mean over 2 seconds for the code to complete).
I am looking into ways of improving the performance. One way is to merge some of the queries into a single one (which can cut the combined time by about 0.5 seconds). However, it doesn't make sense to merge the rest, since there will only be data there under certain circumstances, and trying to determine when there is data to marshal could get tricky.
I am considering introducing threading into my program, so that after the initial query I would spawn a thread for each of the other queries and they would execute at the same time. However, I have never used threading and am wary of introducing deadlocks or other pitfalls.
Currently the other queries marshal their results into different sections of the SAME object. Would this cause any issues (i.e. since we are accessing/updating the same object from different threads, though through different sections/fields within the object)? Would it be better to return the results and marshal them into the object after all the threads have finished?
I know these types of questions are hard to answer since it's more about general advice, but I would appreciate hearing whether anyone thinks this is a good idea, or has other suggestions.
If you are only reading (SELECTs), don't worry about deadlocks: Oracle reads are (mostly) non-blocking. The biggest problem with threading queries against Oracle is how you deal with connections. Creating a connection, running a query and closing the connection is very, very bad. Connections are expensive. They are also limited, so you don't want to create a million connections to execute your logic.
As a result, you would use some sort of connection pool and put your queries in a queue.
Also, I hope you are using bind variables and not string concatenation to pass queries to Oracle.
In general, I would collect all the data (ideally in one query) and only then update the object. You could also consider breaking your object into sections.
Threading works perfectly. Two years ago I did a project that used a multi-stage / multi-threading approach to push data into an Oracle database (and pull some data out of it for updates).
I basically used a staged approach (a request would go through multiple stages, get consumed there, and new data would be pushed to the next stage), and every stage used a configurable thread pool, which would take a message, process it and post the new messages.
We used, I think, close to 200 threads at that time to process about a million SQL statements per minute (hitting an Oracle Exadata that was really getting some work out of that).
So, multithreading "just works" - obviously if you know how to do it; you have to get your architecture and the SQL statements nice and non-blocking. Databases in general are perfectly capable of handling multiple threads.
Now, for details: THAT DEPENDS.
Example:
Currently the other queries marshal the results into different
sections of the SAME object. Would this cause any issues (i.e. since
we are accessing/updating the same object in different threads through
different sections/fields within the object?)
Absolutely no problem as long as:
You make sure all updates are finished before moving the object to the next phase, and
The updates do not overlap or have an ordering dependency (1 must finish for 2 to have the required data).
These are implementation details and it is really hard to make a generic answer for those (totally impossible). Especially as this is multi threading 101 - and has nothing to do with any database access.
In general, you will also have to tune the number of threads. .NET cannot do that by itself - it will see the CPU is not busy and spawn more threads, even if the database server is the bottleneck. This is why we went with multiple stages: so we could tune the number of threads depending on what they do (and the last stage used bulk inserts to push the aggregated data into temporary staging tables with a small number of threads, moving a lot of data in every statement - this needs some tuning room so as not to totally overload the database side).
I have a project that uses Entity Framework (v1 with .NET 3.5). It's been in use for a few years, but it's now being used by more people. We started getting timeout errors and have tracked it down to a few things. For simplicity's sake, let's say my database has three tables: product, part, and product_part. There are ~1400 parts and a handful of products.
The user has the ability to add any number of parts to a product. My problem is that when there are many parts added to the product the inserts take a long time. I think it's mostly due to network traffic/delay, but to insert all 1400 takes around a minute. If someone goes in and tries to view the details of a part while those records are being inserted I get a timeout and can see a block in the Activity Monitor of SQL Server.
What can I do to avoid this? My apologies if this has been asked before and I missed it.
Thanks,
Nick
I think the root problem is that your write transaction is taking so long. EF is not good at executing mass DML: it executes each insert as a separate statement in a separate network round trip.
If you want to insert 1400 rows and performance matters, do the insert in one single statement using a TVP (INSERT ... SELECT * FROM @tvp). Or switch to bulk copy, but I don't think that will be advantageous at only 1400 rows.
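A minimal sketch of the TVP approach (the type, procedure and column names are hypothetical, not your actual schema):
CREATE TYPE dbo.ProductPartList AS TABLE
(
    ProductId INT NOT NULL,
    PartId    INT NOT NULL
);
GO
CREATE PROCEDURE dbo.AddProductParts
    @parts dbo.ProductPartList READONLY
AS
BEGIN
    INSERT INTO dbo.product_part (product_id, part_id)
    SELECT ProductId, PartId FROM @parts;  -- one statement, one round trip
END
On the client you fill a DataTable with the 1400 rows and pass it as a SqlParameter with SqlDbType.Structured, so the whole batch goes over the wire in a single call.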
If your read transactions are getting blocked, and this is a problem, switch on snapshot isolation. That takes care of the readers 100% as they never block under snapshot isolation.
I have one BIG table (90k rows, roughly 60 MB) which holds info about free room capacity for about 50 hotels. This table gets very few updates/inserts per hour.
My application sends async requests against this table (and joined tables) at most 30 times per second.
When I start 30 threads at once (with the default AppPool class in .NET 3.5 C#, each issuing a random valid SQL query string), only a few (around 4) are processed asynchronously and the other threads wait. Why?
Is it because of SQL Server 2008 table locking, or because of the .NET side? Or something else?
If it is a SQL problem, would it help if I split this big table into one table per hotel?
My goal is to have at least 10 threads served at a time.
This table is tiny. It doesn't even qualify as a "medium-sized" table. It's trivial.
You could be full-table-scanning it 30 times per second, or copying the whole thing into RAM, and no server is going to be the slightest bit bothered.
If your data fits in RAM, databases are fast. If that isn't what you're seeing, you're doing something REALLY WRONG. Therefore I also think the problems are all on the client side.
It is more than likely on the .NET side. If it were table locking, more threads would be processing, but they would be waiting on their queries to return. If I remember correctly, there's a property on thread pools that controls how many actual threads they create at once. If there are more pending work items than that number, they get in line and wait for running threads to finish. Check that.
Have you tried changing the transaction isolation level?
Even when just reading from a table, SQL Server takes shared locks.
Try setting the isolation level to READ UNCOMMITTED and see if that improves the situation,
but be advised that you may well read 'dirty' data; make sure you understand the ramifications if this turns out to be the solution.
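For example (a sketch; RoomCapacity is a placeholder for your actual table name):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- per session
SELECT * FROM dbo.RoomCapacity WHERE HotelId = @HotelId;
-- or per table, using a hint:
SELECT * FROM dbo.RoomCapacity WITH (NOLOCK) WHERE HotelId = @HotelId;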
Rather than ask, measure. Each SQL query actually submitted by your application creates a request on the server, and the sys.dm_exec_requests DMV shows the state of each request. When a request is blocked, its wait_type column shows a non-null value (and blocking_session_id shows who is blocking it). From this you can judge whether your requests are blocked or not, and if they are, you will also know why.
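For example (run this while your 30 threads are active):
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions
If wait_type and blocking_session_id stay empty while your client threads sit idle, the bottleneck is on the .NET side rather than in SQL Server.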