TransactionScope fails check constraint - C#

I am having an issue using TransactionScope and a check constraint in SQL Server.
I want to insert into the table as such:
Col A | Col B
------|------
Dave  |   0
Fred  |   1
The table has a check constraint that there must always be an entry in Col B with '0'. The first row inserts fine, but the second row fails the constraint.
command.CommandText = @"INSERT INTO MyTable (ColA, ColB) VALUES (@ColA, @ColB)";
foreach (var row in model.Rows)
{
    command.Parameters["@ColA"].Value = row.ColA;
    command.Parameters["@ColB"].Value = row.ColB;
    command.ExecuteNonQuery();
}
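For context, the loop runs inside an ambient transaction, presumably something like this (a sketch only; the TransactionScope setup isn't shown above, and connectionString is a placeholder):
// Needs System.Transactions and System.Data.SqlClient.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // the connection enlists in the ambient transaction here

    // ... the INSERT loop above runs here ...

    scope.Complete(); // both rows commit (or roll back) together
}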
The check constraint calls the following function
IF EXISTS (SELECT * FROM mytable WHERE ColB = 0) RETURN 1
RETURN 0
Could this be because the constraint is only looking at committed data? If so, how can it be told to look at uncommitted data as well?

I don't think check constraints are suitable for a scenario like yours. You should use an INSTEAD OF INSERT/UPDATE trigger to check that there is at least one row with ColB = 0 (in the table and/or in the inserted values).
Inside a trigger you have access to the inserted pseudo-table, which contains all the rows that are about to be inserted, so you can write something like this:
IF NOT EXISTS (SELECT 1 FROM mytable WHERE ColB = 0 UNION ALL SELECT 1 FROM inserted WHERE ColB = 0)
    RAISERROR('At least one row with ColB = 0 should exist', 16, 1);

Related

JET/Esent: SeekEQ finds no matches with multi-column index and wildcards

I have a 5-column table with a 2-column primary index. Let's say the index is defined as (Col1, Col2).
In the following snippet, Api.TrySeek returns false and I'm not sure why:
Api.JetSetCurrentIndex(session, table, null);
// this should match on Col1 = colVal1 and Col2 = *
Api.MakeKey(session, table, colVal1, MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnStartLimit);
if (Api.TrySeek(session, table, SeekGrbit.SeekEQ)) // why is this false??
{
    Api.MakeKey(session, table, colVal1, MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnEndLimit);
    if (Api.TrySetIndexRange(session, table, SetIndexRangeGrbit.RangeUpperLimit | SetIndexRangeGrbit.RangeInclusive))
    {
        // <loop through entries in index range>
    }
}
However, it returns true if I use SeekGrbit.SeekGE. Could someone explain why? Does SeekEQ not work with wildcard columns but SeekGE does?
In the loop, I've double-checked that the entries all have Col1 == colVal1, to rule out the possibility that it's just finding entries where Col1 > colVal1.

Unable to add new user claim due to constraint conflict of primary key

I have a .NET Core 3.1 web app using an Oracle database. The tables are generated using Microsoft.EntityFrameworkCore.Tools (the 'add-migration' & 'update-database' commands).
The auto-generated tables, specifically AspNetUserClaims & AspNetRoleClaims, both use Oracle's identity generation.
The issue I have is that, with the given code, I am just not able to add a new user claim due to a constraint conflict:
OracleException: ORA-00001: unique constraint (SYSTEM.PK_AspNetUserClaims) violated.
SQL for the table:
CREATE TABLE "SYSTEM"."AspNetUserClaims"
(
"Id" NUMBER(10,0) GENERATED BY DEFAULT AS IDENTITY MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE,
"UserId" NVARCHAR2(450) NOT NULL ENABLE,
"ClaimType" NVARCHAR2(2000),
"ClaimValue" VARCHAR2(3000 CHAR),
CONSTRAINT "PK_AspNetUserClaims" PRIMARY KEY ("Id")
......
)
C# code:
IdentityUserClaim<string> userClaim = new IdentityUserClaim<string> { UserId = user.Id, ClaimType = "WT", ClaimValue = JsonConvert.SerializeObject(jsonWebToken) };
//// var last = await _context.UserClaims.LastOrDefaultAsync(); // dumb workaround
//// userClaim.Id = last == null ? 1 : last.Id + 1;
await _context.UserClaims.AddAsync(userClaim);
With GENERATED ALWAYS, attempting to insert a value results in an error, but Id is an integer whose default value is 0.
With GENERATED BY DEFAULT, a value is generated only if none is provided; same problem as above, since the integer defaults to 0.
With GENERATED BY DEFAULT ON NULL, it doesn't help either, because the integer is not nullable.
How can I proceed to create a user claim from here on?
Please ignore the mixed case, system schema, quotations or what not.
Generated by default, while providing its value manually:
SQL> create table test(id number generated by default as identity primary key, name varchar2(10));
Table created.
SQL> insert into test (id, name) values (1, 'Little');
1 row created.
SQL> insert into test (id, name) values (1, 'Foot');
insert into test (id, name) values (1, 'Foot')
*
ERROR at line 1:
ORA-00001: unique constraint (DP_4005.SYS_C001685565) violated
If you omit ID:
SQL> insert into test (name) values ('Foot');
insert into test (name) values ('Foot')
*
ERROR at line 1:
ORA-00001: unique constraint (DP_4005.SYS_C001685571) violated
Generated always; ID value provided manually:
SQL> create table test(id number generated always as identity primary key, name varchar2(10));
Table created.
SQL> insert into test (id, name) values (1, 'Little');
insert into test (id, name) values (1, 'Little')
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
So - don't provide the ID, let Oracle handle it:
SQL> insert into test (name) values ('Little');
1 row created.
SQL> insert into test (name) values ('Foot');
1 row created.
SQL> select * From test;
ID NAME
---------- ----------
1 Little
2 Foot
Conclusion? Let database handle columns which are to be generated; don't provide those values manually.
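On the EF Core side, that means not assigning Id yourself (so the commented-out workaround can go) and making sure the key is mapped as database-generated. A minimal sketch, assuming the standard Identity model and an OnModelCreating override in your DbContext (this is not taken from your post):
// In your DbContext (requires Microsoft.AspNetCore.Identity and Microsoft.EntityFrameworkCore).
protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    // Let the Oracle identity column supply the value; EF Core then omits Id
    // from the INSERT instead of sending the C# default of 0.
    builder.Entity<IdentityUserClaim<string>>()
           .Property(c => c.Id)
           .ValueGeneratedOnAdd();
}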
Apart from that, the first line you posted:
CREATE TABLE "SYSTEM"."AspNetUserClaims"
has 2 "errors" you'd want to avoid:
SYSTEM, as well as SYS, are special users in an Oracle database; they own it and should not be used except by DBAs while maintaining the database. Creating your own objects in either of those schemas is, or can be, a HUGE mistake. If you do something wrong, you might destroy the database.
Double quotes (as Caius Jardas already commented): don't use them in Oracle. You can use any letter case you want, but object names will default to uppercase.
See a few examples:
SQL> create table TesT (nAME varCHAr2(10));
Table created.
SQL> select table_name, column_name from user_tab_columns where table_name = 'TEST';
TABLE_NAME COLUMN_NAME
-------------------- --------------------
TEST NAME
SQL> select NamE from TEst;
no rows selected
But, with double quotes, you have to match letter case every time (and, of course, use double quotes every time):
SQL> create table "TesT" ("nAME" varchar2(10));
Table created.
SQL> select table_name, column_name from user_tab_columns where table_name = 'TesT';
TABLE_NAME COLUMN_NAME
-------------------- --------------------
TesT nAME
SQL> select * From test;
select * From test
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select name from "test";
select name from "test"
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select name from "TesT";
select name from "TesT"
*
ERROR at line 1:
ORA-00904: "NAME": invalid identifier
SQL> select "nAME" from "TesT";
no rows selected
SQL>
So - forget about double quotes while in Oracle.

Return the row when a unique key violation happens

Is it possible to return the record that causes the unique key violation in MSSQL when inserting data?
Try this pattern:
select * from
(
    -- query used for your insert
) f1
where exists
(
    select * from tablewhereyouwantinsert f2
    where f1.key1 = f2.key1 and f1.key2 = f2.key2 -- keys used in your unique key violation
)
You can use MERGE to conditionally insert or retrieve a row from the database using a single statement.
Unfortunately, to get the retrieval action we do have to touch the existing row. I'm assuming that's acceptable and that you'll be able to construct a low-impact "no-op" UPDATE as below:
create table T (ID int not null primary key, Col1 varchar(3) not null)
insert into T(ID,Col1) values (1,'abc')
;merge T t
using (values (1,'def')) s(ID,Col1)
on t.ID = s.ID
when matched then update set Col1 = t.Col1
when not matched then insert (ID,Col1) values (s.ID,s.Col1)
output inserted.*,$action;
This produces:
ID Col1 $action
----------- ---- ----------
1 abc UPDATE
Including the $action column helps you know that this was an existing row rather than the insert of (1,def) succeeding.
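If you are running this from C#, the $action column comes back like any other result column; a rough sketch (connectionString and the parameter values are placeholders, and table T is the one created above):
// Needs System and System.Data.SqlClient.
string mergeSql = @"
merge T t
using (values (@ID, @Col1)) s(ID, Col1)
on t.ID = s.ID
when matched then update set Col1 = t.Col1
when not matched then insert (ID, Col1) values (s.ID, s.Col1)
output inserted.ID, inserted.Col1, $action;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(mergeSql, connection))
{
    command.Parameters.AddWithValue("@ID", 1);
    command.Parameters.AddWithValue("@Col1", "def");
    connection.Open();

    using (var reader = command.ExecuteReader())
    {
        if (reader.Read())
        {
            string action = reader.GetString(2); // "INSERT" or "UPDATE"
            Console.WriteLine($"{reader.GetInt32(0)} {reader.GetString(1)} ({action})");
        }
    }
}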

Best way to avoid adding duplicates in database

I have a SQL Server table with three columns:
Table1
col1 int
col2 int
col3 string
I have a unique constraint defined for all three columns (col1, col2, col3)
Now, I have a .csv file from which I want to add records to this table, and the .csv file can have duplicate records.
I have searched for various options for avoiding duplicates in the above scenario. Below are three options that are working well for me. Please have a look and share some thoughts on the pros/cons of each method so I can choose the best one.
Option #1:
Avoiding duplicates in the first place, i.e. while adding objects to the collection from the .csv file. I have used a HashSet<T> for this and overridden the methods below for type T:
public override int GetHashCode()
{
    return col1.GetHashCode() + col2.GetHashCode() + col3.GetHashCode();
}

public override bool Equals(object obj)
{
    var other = obj as T;
    if (other == null)
    {
        return false;
    }
    return col1 == other.col1
        && col2 == other.col2
        && col3 == other.col3;
}
Option #2:
Using a List<T> instead of a HashSet<T> and removing duplicates after all the objects are added to the List<T>:
List<T> distinctObjects = allObjects
    .GroupBy(x => new { x.col1, x.col2, x.col3 })
    .Select(x => x.First()).ToList();
Option #3:
Removing duplicates after all the objects are added to a DataTable:
public static DataTable RemoveDuplicatesRows(DataTable dataTable)
{
    IEnumerable<DataRow> uniqueRows = dataTable.AsEnumerable().Distinct(DataRowComparer.Default);
    DataTable dataTable2 = uniqueRows.CopyToDataTable();
    return dataTable2;
}
Although I have not compared their running times, I prefer option #1 as I am removing duplicates as a first step, so I move ahead only with what is required.
Please share your views so I can choose the best one.
Thanks a lot!
I like option 1: the HashSet<T> provides a fast way of avoiding duplicates before ever sending them to the DB. You should implement a better GetHashCode, e.g. using Skeet's implementation from What is the best algorithm for an overridden System.Object.GetHashCode?
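For example, something along those lines, reusing the field names from the option #1 type (col3 is assumed to be the string column and possibly null):
public override int GetHashCode()
{
    // Multiply-by-prime combination as in the linked answer; unchecked lets
    // the arithmetic wrap around on overflow instead of throwing.
    unchecked
    {
        int hash = 17;
        hash = hash * 23 + col1.GetHashCode();
        hash = hash * 23 + col2.GetHashCode();
        hash = hash * 23 + (col3?.GetHashCode() ?? 0);
        return hash;
    }
}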
But there's a problem: what if the table already contains data that can be a duplicate of your CSV? You'd have to copy the whole table down first for a simple HashSet to really work. You could do just that, but to solve this I might pair option 1 with a temporary table and an insert statement like the one from Skip-over/ignore duplicate rows on insert:
INSERT dbo.Table1(col1, col2, col3)
SELECT col1, col2, col3
FROM dbo.tmp_holding_Table1 AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table1 AS d
WHERE col1 = t.col1
AND col2 = t.col2
AND col3 = t.col3);
With this combination, the volume of data transferred to/from your DB is minimized.
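For illustration, the C# side of that combination could look roughly like this (a sketch; the holding-table name comes from the statement above, while connectionString, the de-duplicated dataTable and insertIfNotExistsSql are assumed to exist):
// Needs System.Data and System.Data.SqlClient.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Bulk-load the already de-duplicated rows into the holding table...
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.tmp_holding_Table1";
        bulkCopy.WriteToServer(dataTable); // rows filtered through the HashSet first
    }

    // ...then move only the genuinely new rows with the INSERT shown above.
    using (var command = new SqlCommand(insertIfNotExistsSql, connection))
    {
        command.ExecuteNonQuery(); // insertIfNotExistsSql holds the statement above
    }
}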
Another solution could be the IGNORE_DUP_KEY = { ON | OFF } option when creating or rebuilding an index. This solution prevents errors when inserting duplicate rows; instead, SQL Server generates the warning: Duplicate key was ignored.
CREATE TABLE dbo.MyTable (Col1 INT, Col2 INT, Col3 INT);
GO
CREATE UNIQUE INDEX IUN_MyTable_Col1_Col2_Col3
ON dbo.MyTable (Col1,Col2,Col3)
WITH (IGNORE_DUP_KEY = ON);
GO
INSERT dbo.MyTable (Col1,Col2,Col3)
VALUES (1,11,111);
INSERT dbo.MyTable (Col1,Col2,Col3)
SELECT 1,11,111 UNION ALL
SELECT 2,22,222 UNION ALL
SELECT 3,33,333;
INSERT dbo.MyTable (Col1,Col2,Col3)
SELECT 2,22,222 UNION ALL
SELECT 3,33,333;
GO
/*
(1 row(s) affected)
(2 row(s) affected)
Duplicate key was ignored.
*/
SELECT * FROM dbo.MyTable;
/*
Col1 Col2 Col3
----------- ----------- -----------
1 11 111
2 22 222
3 33 333
*/
Note: Because you have a UNIQUE constraint, if you try to change the index options with ALTER INDEX
ALTER INDEX IUN_MyTable_Col1_Col2_Col3
ON dbo.MyTable
REBUILD WITH (IGNORE_DUP_KEY = ON)
you will get the following error:
Msg 1979, Level 16, State 1, Line 1
Cannot use index option ignore_dup_key to alter index 'IUN_MyTable_Col1_Col2_Col3' as it enforces a primary or unique constraint.
So, if you choose this solution, the options are:
1) Create another UNIQUE index and then drop the UNIQUE constraint (this option requires more storage space, but a UNIQUE index/constraint stays active the whole time), or
2) Drop the UNIQUE constraint and create a UNIQUE index with the WITH (IGNORE_DUP_KEY = ON) option (I wouldn't recommend this last option).

Auto-incrementing a number that is part of a string value in a SQL Server database

How can I auto-increment a number that is part of a string value in a SQL Server database?
For example, here is my table:
EMP_ID   EMPNAME   EMPSECTION
EMP_1    ROSE      S-11
EMP_2    JANE      R-11
When I add a new record, what I would like to do is automatically increment the number that follows EMP_. For example, EMP_3, EMP_4, etc.
One option is to have a table that has an auto-increment id field. Then you can write a trigger on this table that, on insert, fires an insert on the auto-increment table and fetches the current value. Then concatenate that value onto the end of EMP_.
In C# it's easy to do. Each time you want to insert a new row, you should generate the key before inserting it by following these steps:
1. Get a list of your ID field values.
2. Do a foreach loop to find the maximum key value, something like this:
int maxID = 1;
foreach (var l in list)
{
    if (int.Parse(l.ID.Replace("EMP_", "")) > maxID)
    {
        maxID = int.Parse(l.ID.Replace("EMP_", ""));
    }
}
maxID = maxID + 1;
string ID = "EMP_" + maxID.ToString();
And ID is your new ID!
But if your application is accessed by multiple programs (for example, if it's a website), I really don't suggest doing something like this because: 1. It's time-consuming. 2. In some conditions the same key value might be generated by multiple clients, and you will get an error while inserting.
You can have an identity column in your table and display 'EMP_' prefixed to its value in your user interface. If you want to do it in a custom way, you'll need a sequence table.
Create a sequence table
Sequence
-------------------
Seq_Name | Seq_Val
-------------------
EMPLOYEE | 0
Then you need a stored procedure to perform the insert:
BEGIN
    DECLARE @curVal INT
    SELECT @curVal = Seq_Val + 1 FROM Sequence WHERE Seq_Name = 'EMPLOYEE'
    UPDATE Sequence SET Seq_Val = Seq_Val + 1 WHERE Seq_Name = 'EMPLOYEE'
    INSERT INTO Employee VALUES ('EMP_' + CAST(@curVal AS VARCHAR), 'Rose', 'S-11')
END
You can do something like:
create table dbo.foo
(
id int not null identity(1,1) , -- actual primary key
.
.
.
formatted_id as 'emp_' + convert(varchar,id) , -- surrogate/alternate key
constraint foo_PK primary key ( id ) ,
constraint foo_AK01 unique ( formatted_id ) ,
)
But I can't for the life of me think of just why one might want to do that.
