INSERT in one table but it locks another table [closed] - c#

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I'm having a problem with a SQL Server table lock. I'm developing in C#.
All my data-modification queries run inside one transaction, which I'll call "setTransaction" for easy reference.
setTransaction is used only for INSERT / UPDATE / DELETE. For SELECTs I use a SqlDataAdapter instead.
Here is the structure of each table:
[LOG](
[log_id] [int] IDENTITY(1,1) NOT NULL,
[subject] [text] NOT NULL,
[query] [text] NOT NULL,
[log_datetime] [datetime] NOT NULL,
[user_id] [int] NOT NULL,
[emp_id] [int] NULL,
[old_value] [text] NULL
)
[RESERVATION_DETAIL](
[reservation_detail_id] [int] IDENTITY(1,1) NOT NULL,
[reservation_id] [int] NOT NULL,
[spa_program_id] [int] NULL,
[price] [int] NULL,
[oil] [int] NULL
)
[RESERVATION_THERAPIST](
[reservation_therapist_id] [int] IDENTITY(1,1) NOT NULL,
[reservation_detail_id] [int] NOT NULL,
[therapist_id] [int] NOT NULL,
[hours] [int] NULL,
[mins] [int] NULL
)
[LOG] stands alone; [RESERVATION_DETAIL] is connected to [RESERVATION_THERAPIST] via reservation_detail_id.
The problem is this sequence of steps:
1. BEGIN TRANSACTION.
2. I want to delete the record from RESERVATION_DETAIL with reservation_detail_id = 25, so first I select it:
SELECT * FROM RESERVATION_DETAIL WHERE reservation_detail_id = 25
3. I insert the data from step 2 into LOG:
INSERT INTO LOG ( subject, query, log_datetime, user_id, emp_id, old_value ) VALUES (
'DELETE TEMP RESERVE FROM RES_DETAIL[RES_DETAIL_ID:25]',
'DELETE FROM RESERVATION_DETAIL WHERE RESERVATION_DETAIL_ID = 25',
CURRENT_TIMESTAMP,
1,
NULL, 'reservation_detail_id:25|reservation_id:25|spa_program_id:-1|price:|oil:'
)
4. I delete the record from RESERVATION_DETAIL:
DELETE FROM RESERVATION_DETAIL WHERE reservation_detail_id = 25
5. Then I want to delete the matching record from RESERVATION_THERAPIST, so I select it first: <----- I GOT THE LOCK HERE !!
SELECT * FROM RESERVATION_THERAPIST WHERE reservation_detail_id = 25
6. I insert the data from step 5 into LOG.
7. Finally I delete from RESERVATION_THERAPIST where reservation_detail_id = 25.
These steps run consecutively. Step 5 (which touches RESERVATION_THERAPIST) is now waiting on step 3 (the INSERT into LOG) to finish, but it never finishes.
I don't understand: I insert into table LOG, yet the lock ends up on table B (RESERVATION_THERAPIST)!? Or is this not a lock?
There were queries before these steps that inserted into LOG without any problem.
I have now solved my problem.
The queries and steps themselves were fine.
But I had forgotten that the table RESERVATION_DETAIL has a trigger that fires on DELETE.
The trigger automatically deletes the matching record in RESERVATION_THERAPIST, and that delete runs inside the same transaction.
So RESERVATION_THERAPIST was locked right after DELETE FROM RESERVATION_DETAIL, before I could run SELECT * FROM RESERVATION_THERAPIST.
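For reference, a trigger along these lines would reproduce the behavior. This is a hypothetical reconstruction; the actual trigger definition was not posted:

```sql
-- Hypothetical reconstruction of the trigger described above.
CREATE TRIGGER trg_ReservationDetail_Delete
ON RESERVATION_DETAIL
AFTER DELETE
AS
BEGIN
    -- This DELETE runs as part of the triggering statement, inside
    -- the same transaction, so the locks it takes on
    -- RESERVATION_THERAPIST are held until that transaction ends.
    DELETE RT
    FROM RESERVATION_THERAPIST AS RT
    INNER JOIN deleted AS d
        ON RT.reservation_detail_id = d.reservation_detail_id;
END
```

Note that an AFTER trigger fires as part of the DELETE statement itself, not after the transaction commits, which is why the locks were already in place by the time the next SELECT ran.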

Several things could cause the table to be locked until the deletion is through:
- You are directly deleting from the table you are trying to select from.
- You have cascade delete turned on from a parent table to the one that is locked.
- A trigger on another table you are acting on also deletes from the table.
- You are deleting the tables out of order (child tables are always deleted first, then parent ones). When deleting from a parent table, SQL Server checks whether child records exist; that takes longer than a straight delete, and if child data does exist it causes a rollback, which takes longer still.
- You have a deadlock between two processes running at the same time.
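While a statement is hanging, you can see which session is blocking it by querying the dynamic management views (available on SQL Server 2005 and later); a minimal sketch:

```sql
-- Shows requests that are currently blocked and who is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Run it from a second connection while the first one is stuck; the blocking_session_id column points at the session holding the lock.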

You got the lock on B because you deleted from it (via the trigger).

Related

Inserting 0.1 million records in Azure SQL Server is taking a long time

I am trying to process an Excel document with two sheets: the first sheet contains the metadata and the second the actual data.
The first sheet has 10 columns; the second has at least 50 columns and can vary. Before I push the data into the SQL Server database I have to validate every cell of each sheet. Some checks are simple data-type validation; some columns must be validated against values in the database, creating an error object for each failure.
In addition, the header sheet has to be processed first, followed by the actual data. The mappings for the header and data sheets are maintained in the database and are validated. Since there will never be more than 1000 records in the header, I process and push the header data in one go.
I convert each data-sheet row into a JSON object, and for every 5000 rows I call a stored procedure to insert those records into the database.
This whole process of validation, error-object creation, and pushing the data takes nearly 20 minutes for close to 0.1 million records.
Is there a way to make it faster? In the future we might get over 1 million records in a sheet.
Since a lot of the logic is in the stored procedure, SqlBulkCopy also seems inappropriate here.
CREATE PROCEDURE [dbo].[sp_AddData] (
    @Table DataTable READONLY,
    @DocumentType NVARCHAR(20)
)
AS
BEGIN
    IF (@DocumentType = 'abcd')
    BEGIN
        INSERT INTO dbo.Table1 (
            DMDId,
            SN,
            Name,
            Date,
            GP,
            D,
            NP,
            Data,
            IsLatest
        )
        SELECT
            DMD.Id,
            PDDT.SN,
            PDDT.Name,
            PDDT.Date,
            PDDT.GP,
            PDDT.D,
            PDDT.NP,
            PDDT.Data,
            1
        FROM @Table PDDT
        INNER JOIN dbo.Table2 DMD
            ON DMD.TY = PDDT.TY
           AND DMD.TP = PDDT.TP
           AND DMD.IsLatest = 1
        INNER JOIN dbo.Table3 P
            ON P.CPID = PDDT.CPID
           AND DMD.PID = P.Id
    END
END
GO
CREATE TYPE [dbo].[DataTable] AS TABLE
(
[Id] [INT] NULL,
[SN] [NVARCHAR](50) NOT NULL,
[Name] [NVARCHAR](50) NOT NULL,
[Date] [DATETIME] NOT NULL,
[GP] [NUMERIC](22,2) NOT NULL,
[D] [NUMERIC](22,2) NOT NULL,
[NP] [NUMERIC](22,2) NOT NULL,
[Data] [NVARCHAR](MAX) NOT NULL,
[TP] [INT] NOT NULL,
[TY] [NVARCHAR](20) NOT NULL,
[CPId] [NVARCHAR](30) NOT NULL
)
GO
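One way to cut round trips is to pass each batch to sp_AddData as a table-valued parameter in a single call from C#. A sketch under that assumption; connectionString is a placeholder, and table is assumed to be an already-populated DataTable whose columns match dbo.DataTable:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: send one batch to dbo.sp_AddData as a table-valued parameter.
// "table" is assumed to be a DataTable matching dbo.DataTable.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.sp_AddData", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter tvp = cmd.Parameters.AddWithValue("@Table", table);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.DataTable";   // must match the CREATE TYPE name

    cmd.Parameters.AddWithValue("@DocumentType", "abcd");

    conn.Open();
    cmd.ExecuteNonQuery();
}
```

Shipping one structured parameter per batch avoids serializing rows to JSON in the client and lets the stored procedure's set-based INSERT...SELECT do the work in one statement.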

SQL Server using SCOPE_IDENTITY [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 6 years ago.
I have 2 tables:
CREATE TABLE [dbo].[Accessoires1]
(
[ID] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL,
[ClientID] [int] NOT NULL,
[TotalPrice] [varchar](50) NOT NULL
)
CREATE TABLE [dbo].[Accessoires2]
(
[Accessoire1_ID] [int],
[Date] [datetime] NOT NULL,
[ClientID] [int] NOT NULL,
[TotalPrice] [varchar](50) NOT NULL
)
Then I have a stored procedure Proc1 that inserts values into table Accessoires1, and a Proc2 that adds data to Accessoires2; but Proc2 needs the specific Accessoire1_ID.
In my C# code I execute those procedures back to back, so I need the Accessoires1.ID that was just inserted to go into Accessoires2. How can I manage that?
In your first procedure do this:
CREATE PROCEDURE nameOfSp
    @col1 VARCHAR(20),
    @newId INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    -- your code
    -- assign the identity to the output variable
    SELECT @newId = SCOPE_IDENTITY()
    RETURN
END
Your C# code will need to call the stored procedure and pass it an output parameter, like this:
SqlCommand command = new SqlCommand("nameOfSp", connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@col1", "some value");
SqlParameter newId = new SqlParameter();
newId.ParameterName = "@newId";
newId.DbType = DbType.Int32;
newId.Direction = ParameterDirection.Output;
command.Parameters.Add(newId);
command.ExecuteNonQuery();
After the procedure has executed, your C# code can read the returned value like this:
int outValue = (int) newId.Value;
Now you can pass it to your 2nd procedure.
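Putting it together, both procedures can run on one connection, optionally inside a transaction so the pair succeeds or fails together. A sketch; the procedure and parameter names (Proc1, Proc2, @col1, @Accessoire1_ID) are illustrative, based on the question:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: call Proc1, capture the new identity, pass it to Proc2.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var cmd1 = new SqlCommand("Proc1", conn, tx)
            { CommandType = CommandType.StoredProcedure };
        cmd1.Parameters.AddWithValue("@col1", "some value");
        SqlParameter newId = cmd1.Parameters.Add("@newId", SqlDbType.Int);
        newId.Direction = ParameterDirection.Output;
        cmd1.ExecuteNonQuery();

        var cmd2 = new SqlCommand("Proc2", conn, tx)
            { CommandType = CommandType.StoredProcedure };
        cmd2.Parameters.AddWithValue("@Accessoire1_ID", (int)newId.Value);
        cmd2.ExecuteNonQuery();

        tx.Commit();   // both inserts commit together, or neither does
    }
}
```

Wrapping the pair in a transaction guarantees Accessoires2 never references an Accessoires1 row that failed to commit.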

Simple Dapper select query with 20 rows (most columns are nvarchar(max)) taking too long: 15 seconds and more

My dapper code is given below with a select query:
const string Sql = @"SELECT [id]
,[groupName]
,[reqCap]
,[impCap]
,[player]
,[resumeDate]
,[whitelist]
,[blacklist]
,[Macros]
FROM [VideoServer].[dbo].[TagGroup]";
return await dc.Connection.QueryAsync<TagGroup>(Sql);
My table design is given below:
[id] [int] IDENTITY(1,1) NOT NULL,
[groupName] [varchar](500) NOT NULL,
[reqCap] [int] NULL CONSTRAINT [DF_TagGroup_reqCap] DEFAULT ((0)),
[impCap] [int] NULL CONSTRAINT [DF_TagGroup_impCap] DEFAULT ((0)),
[player] [varchar](500) NULL,
[resumeDate] [date] NULL,
[whitelist] [nvarchar](max) NULL,
[blacklist] [nvarchar](max) NULL,
[Macros] [nvarchar](max) NULL
When I run this SELECT in SQL Server Management Studio it returns almost instantly, but the same query through Dapper (the code above) takes far too long.
Any ideas? Is it because of the nvarchar(max) columns?
If I clear the data in the nvarchar(max) fields, the data comes back very fast.
You are pulling 600+ KB out of the database for some records; across 20 rows that is many megabytes per query.
The reason it runs quickly in SQL Server Management Studio is that SSMS doesn't actually return the full column; it shows only the first X characters, so not all of that data is processed. When you run the query through code (Dapper in this case), the full payload is returned.
If you are storing files in the database, stop doing that: store them in the filesystem, and use the database to store the locations and metadata of the files.
I'm not against storing JSON/XML in the database, but it can produce very large lumps of data that take time to return, and more time than in SSMS, which typically doesn't hand you the full content.
BUT: when you're returning this much data, it's important to filter. I doubt your application really needs all the fields, or all the records, for what it's trying to do; if you filter down to what you actually need in your query, you should get a faster result.
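For example, if a list view only needs the small columns, selecting just those avoids shipping the nvarchar(max) payloads at all. A sketch; the TagGroupSummary type and the column choice are illustrative:

```csharp
// Illustrative: fetch only the small columns for a list view, and
// load the large nvarchar(max) columns later, per row, when needed.
const string ListSql = @"SELECT [id]
                              ,[groupName]
                              ,[reqCap]
                              ,[impCap]
                              ,[player]
                              ,[resumeDate]
                        FROM [VideoServer].[dbo].[TagGroup]";
return await dc.Connection.QueryAsync<TagGroupSummary>(ListSql);
```

A second query scoped to a single id can then fetch whitelist, blacklist, and Macros only for the row the user actually opens.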

SQL Server: having separate tables vs adding columns (db design) [duplicate]

This question already has answers here:
How do you effectively model inheritance in a database?
(9 answers)
Closed 7 years ago.
I have this table:
CREATE TABLE [Entree]
(
[RowId] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](250) NOT NULL,
[Description] [nvarchar](250) NULL,
[FK_InspirationSource] [int] NOT NULL,
[Type] [int] NOT NULL,
[PriceRangeType] [int] NOT NULL,
[SoldCount] [int] NOT NULL,
[HolidaySpecial] [bit] NULL,
[DiscountApplicable] [bit] NULL,
[DateCreated] [datetime] NULL,
[LastModified] [datetime] NULL,
[Enabled] [tinyint] NOT NULL,
[BirthDate] [smalldatetime] NULL,
[IG_Int1] [int] NULL,
[IG_Int2] [int] NULL,
[IG_Int3] [int] NULL,
[IG_Int4] [int] NULL,
[IG_Int5] [int] NULL,
[IG_Int6] [int] NULL,
[IG_Int7] [int] NULL,
[IG_Int8] [int] NULL
and in C# code it corresponds to an Entree object with the respective fields. The IG_Int columns specify a bunch of other properties of the entree used in the cooking process.
Now, we want to have Derived_Entree objects. In the code, DerivedEntree is an Entree too. So DerivedEntree : Entree.
DerivedEntree has more columns. ParentEntreeId (FK to Entree), ExtraProcessingStep.
So for example, an entree would be "Snail Ravioli" and Derived Entree would be "Broiled snail ravioli".
If there were a separate table, it would be
CREATE TABLE [DerivedEntree]
(
[RowId] [int] IDENTITY(1,1) NOT NULL,
[FK_Entree] [int] NOT NULL,
[ExtraProcessingStep] [int] NOT NULL
)
and an FK_DerivedEntree column would be added to the Entree table.
So whenever a new entree is entered, a row is inserted into the Entree table; when a new DerivedEntree is entered, rows are inserted into both tables. This satisfies the requirement that every Entree must have a unique ID (the RowId in the Entree table).
Instead of adding a separate table, another option is to add those two columns (FK_Entree and ExtraProcessingStep) to the Entree table and store them there.
Which is the more standard practice? I thought about the separate table because of FK_Entree, but perhaps a foreign key from a table to itself is common practice?
Melissa, this question is not answerable as asked; it depends heavily on how you intend to use the system.
Here are your two main design options:
http://en.wikipedia.org/wiki/Snowflake_schema
The snowflake schema is similar to the star schema. However, in the snowflake schema, dimensions are normalized into multiple related tables, whereas the star schema's dimensions are denormalized with each dimension represented by a single table. A complex snowflake shape emerges when the dimensions of a snowflake schema are elaborate, having multiple levels of relationships, and the child tables have multiple parent tables ("forks in the road").
http://en.wikipedia.org/wiki/Star_schema
The star schema separates business process data into facts, which hold the measurable, quantitative data about a business, and dimensions, which are descriptive attributes related to fact data. Examples of fact data include sales price, sale quantity, and time, distance, speed, and weight measurements. Related dimension attribute examples include product models, product colors, product sizes, geographic locations, and salesperson names.
A star schema that has many dimensions is sometimes called a centipede schema. Having dimensions of only a few attributes, while simpler to maintain, results in queries with many table joins and makes the star schema less easy to use.

Insert into view returns 2 rows affected

In Sql Server 2005, I have two databases. In the first one I have a table like this:
CREATE TABLE [dbo].[SG](
[id] [int] IDENTITY(1,1) NOT NULL,
[sgName] [nvarchar](50) NOT NULL,
[active] [bit] NOT NULL,
[hiddenf] [int] NOT NULL
)
In the second, I have a view like this:
CREATE VIEW [dbo].[SG] AS
SELECT id,sgName, active
FROM [FirstDatabase].dbo.SG WHERE hiddenf = 1
with a trigger like this:
CREATE TRIGGER [dbo].[InsteadTriggerSG] on [dbo].[SG]
INSTEAD OF INSERT AS BEGIN
INSERT INTO [FirstDatabase].dbo.SG(sgName,active,hiddenf)
SELECT sgName,COALESCE (active,0), 1 FROM inserted
END
When I insert into the view:
using (SqlConnection connection = new SqlConnection(
connectionString))
{
SqlCommand command = new SqlCommand("INSERT INTO SG(sgName, active) VALUES('Test', 1)", connection);
var affectedRows = command.ExecuteNonQuery();
Assert.AreEqual(1, affectedRows);
}
I get affectedRows equal to two, while my expected value is 1.
This sort of question usually makes me think "triggers".
I have just created an exact copy of your scenario (thanks for the detailed instructions) and I am kind of seeing similar results.
By "kind of" I mean that when I execute the insert, SSMS outputs
(1 row(s) affected)
(1 row(s) affected)
but when I checked the original database, only one row had been added.
To solve your problem, do this:
ALTER TRIGGER [dbo].[InsteadTriggerSG] ON [dbo].[SG]
INSTEAD OF INSERT AS BEGIN
    SET NOCOUNT ON -- adding this stops the trigger from reporting its own row count
    INSERT INTO [FirstDatabase].dbo.SG (sgName, active, hiddenf)
    SELECT sgName, COALESCE(active, 0), 1 FROM inserted
END
The issue is that both the trigger's INSERT and the statement against the view report that they affected a row. If you suppress the count inside the trigger but leave it on in the underlying database, you will always get a true answer of 1, whether you insert via the view or straight into the original table directly.
