I have a 5-column table with a 2-column primary index. Let's say the index is defined as (Col1, Col2).
In the following snippet, Api.TrySeek returns false and I'm not sure why:
Api.JetSetCurrentIndex(session, table, null);
// this should match on Col1 = colVal1 and Col2 = *
Api.MakeKey(session, table, colVal1, MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnStartLimit);
if (Api.TrySeek(session, table, SeekGrbit.SeekEQ)) // why is this false??
{
    Api.MakeKey(session, table, colVal1, MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnEndLimit);
    if (Api.TrySetIndexRange(session, table, SetIndexRangeGrbit.RangeUpperLimit | SetIndexRangeGrbit.RangeInclusive))
    {
        <loop through entries in index range>
However, TrySeek returns true if I use SeekGrbit.SeekGE. Could someone explain why? Does SeekEQ not work with wildcard columns, while SeekGE does?
In the loop, I've double-checked that the entries all have Col1 == colVal1, to rule out the possibility that it's just finding entries where Col1 > colVal1.
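For reference, here is a sketch of the variant that does succeed; the key construction is identical and only the seek grbit differs (the range setup and loop are the same as above):
// Sketch: same partial key, but seek greater-or-equal instead of exact-equal
Api.MakeKey(session, table, colVal1, MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnStartLimit);
if (Api.TrySeek(session, table, SeekGrbit.SeekGE)) // this returns true
{
    // same MakeKey/TrySetIndexRange/loop as in the snippet above
}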
Related
Is it possible to return the record that causes a unique key violation in MSSQL when inserting data?
Try this pattern:
select * from
(
    -- query used for your insert
) f1
where exists
(
    select * from tablewhereyouwantinsert f2
    where f1.key1 = f2.key1 and f1.key2 = f2.key2 -- keys involved in your unique key violation
)
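If you are driving the insert from ADO.NET, the same idea can be wired into client code: catch the unique-violation error, then run the EXISTS query above to fetch the offending rows. A sketch, where insertCommand, connection, and your_staging_query are hypothetical stand-ins for your own objects; SQL Server reports unique violations as error 2627 (unique constraint) or 2601 (unique index):
using System.Data.SqlClient;

try
{
    insertCommand.ExecuteNonQuery(); // your original insert
}
catch (SqlException ex)
{
    if (ex.Number != 2627 && ex.Number != 2601) throw; // not a unique key violation

    // Re-run the insert's source query, keeping only rows that collide
    // with what is already in the target table.
    var find = new SqlCommand(@"
        select * from ( select key1, key2 from your_staging_query ) f1
        where exists (select * from tablewhereyouwantinsert f2
                      where f1.key1 = f2.key1 and f1.key2 = f2.key2)", connection);
    using (var reader = find.ExecuteReader())
    {
        while (reader.Read())
        {
            // each row read here is one that violates the unique key
        }
    }
}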
You can use MERGE to conditionally insert or retrieve a row from the database using a single statement.
Unfortunately, to get the retrieval action we do have to touch the existing row. I'm assuming that's acceptable and that you'll be able to construct a low-impact "no-op" UPDATE as below:
create table T (ID int not null primary key, Col1 varchar(3) not null)
insert into T(ID,Col1) values (1,'abc')
;merge T t
using (values (1,'def')) s(ID,Col1)
on t.ID = s.ID
when matched then update set Col1 = t.Col1
when not matched then insert (ID,Col1) values (s.ID,s.Col1)
output inserted.*,$action;
This produces:
ID Col1 $action
----------- ---- ----------
1 abc UPDATE
Including the $action column helps you know that this was an existing row rather than the insert of (1,def) succeeding.
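From client code, the OUTPUT rows read back like any other result set. A sketch in ADO.NET, with a hypothetical open connection and the values parameterized:
using System.Data.SqlClient;

// Run the MERGE above and read back the affected row plus $action.
var cmd = new SqlCommand(@"
    merge T t
    using (values (@id, @col1)) s(ID, Col1)
    on t.ID = s.ID
    when matched then update set Col1 = t.Col1
    when not matched then insert (ID, Col1) values (s.ID, s.Col1)
    output inserted.ID, inserted.Col1, $action;", connection);
cmd.Parameters.AddWithValue("@id", 1);
cmd.Parameters.AddWithValue("@col1", "def");

using (var reader = cmd.ExecuteReader())
{
    if (reader.Read())
    {
        var id = reader.GetInt32(0);
        var col1 = reader.GetString(1);
        var action = reader.GetString(2); // "UPDATE" means the row already existed
    }
}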
I am having an issue using TransactionScope and a check constraint in SQL Server.
I want to insert into the table as such:
Col A | Col B
------------
Dave | 0
Fred | 1
The table has a check constraint requiring that there always be at least one row with '0' in Col B. The first row inserts fine, but the second row fails the constraint.
command.CommandText = @"INSERT INTO MyTable (ColA, ColB) VALUES(@ColA, @ColB)";
foreach (var row in model.Rows)
{
    command.Parameters["@ColA"].Value = row.ColA;
    command.Parameters["@ColB"].Value = row.ColB;
    command.ExecuteNonQuery();
}
The check constraint calls the following function:
IF EXISTS (SELECT * FROM mytable WHERE ColB = 0) RETURN 1
RETURN 0
Could this be because the constraint is only looking at committed data? If so, how can it be told to look at uncommitted data as well?
I don't think check constraints are suitable for a scenario like yours. You should use an INSTEAD OF INSERT/UPDATE trigger to check that there's at least one qualifying row (in the table and/or in the inserted values).
Inside a trigger you have an inserted table that contains all the rows about to be inserted, so you can write something like this:
IF NOT EXISTS (SELECT 1 FROM mytable WHERE ColB = 0
               UNION ALL
               SELECT 1 FROM inserted WHERE ColB = 0)
    RAISERROR('At least one row with ColB = 0 should exist', 16, 1)
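Fleshed out, an INSTEAD OF trigger built around that check might look like the sketch below, created here through the same SqlCommand pattern as your inserts. The trigger name is made up, and the explicit re-insert at the end is required because INSTEAD OF replaces the original statement:
command.CommandText = @"
    CREATE TRIGGER trg_MyTable_RequireColBZero ON MyTable
    INSTEAD OF INSERT
    AS
    BEGIN
        -- fail if neither existing rows nor incoming rows contain ColB = 0
        IF NOT EXISTS (SELECT 1 FROM MyTable WHERE ColB = 0
                       UNION ALL
                       SELECT 1 FROM inserted WHERE ColB = 0)
        BEGIN
            RAISERROR('At least one row with ColB = 0 should exist', 16, 1);
            RETURN; -- skip the insert entirely
        END
        -- otherwise perform the insert the statement asked for
        INSERT INTO MyTable (ColA, ColB)
        SELECT ColA, ColB FROM inserted;
    END";
command.ExecuteNonQuery();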
I have a SQL Server table with three columns:
Table1
col1 int
col2 int
col3 string
I have a unique constraint defined for all three columns (col1, col2, col3)
Now, I have a .csv file from which I want to add records to this table, and the .csv file can have duplicate records.
I have searched through various options for avoiding duplicates in the above scenario. Below are the three options which are working well for me. Please have a look and share some thoughts on the pros/cons of each method so I can choose the best one.
Option #1:
Avoiding duplicates in the first place, i.e. while adding objects to the collection from the .csv file. I have used HashSet<T> for this and overridden the methods below for type T:
public override int GetHashCode()
{
return col1.GetHashCode() + col2.GetHashCode() + col3.GetHashCode();
}
public override bool Equals(object obj)
{
var other = obj as T;
if (other == null)
{
return false;
}
return col1 == other.col1
&& col2 == other.col2
&& col3 == other.col3;
}
Option #2:
Having List<T> instead of HashSet<T>, and removing duplicates after all the objects are added to the List<T>:
List<T> distinctObjects = allObjects
.GroupBy(x => new {x.col1, x.col2, x.col3})
.Select(x => x.First()).ToList();
Option #3:
Removing duplicates after all the objects are added to a DataTable:
public static DataTable RemoveDuplicatesRows(DataTable dataTable)
{
IEnumerable<DataRow> uniqueRows = dataTable.AsEnumerable().Distinct(DataRowComparer.Default);
DataTable dataTable2 = uniqueRows.CopyToDataTable();
return dataTable2;
}
Although I have not compared their running times, I prefer Option #1, as it removes duplicates as the first step, so I move ahead with only what is required.
Please share your views so I can choose the best one.
Thanks a lot!
I like Option #1: the HashSet<T> provides a fast way of avoiding duplicates before ever sending them to the DB. You should implement a better GetHashCode, though, e.g. using Jon Skeet's implementation from What is the best algorithm for an overridden System.Object.GetHashCode?
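Along those lines, here is a sketch of the kind of hash that answer suggests (multiply-and-add with primes rather than plain addition, so that swapped values like (1, 2) and (2, 1) don't collide):
public override int GetHashCode()
{
    unchecked // overflow just wraps around, which is fine for hashing
    {
        int hash = 17;
        hash = hash * 23 + col1.GetHashCode();
        hash = hash * 23 + col2.GetHashCode();
        hash = hash * 23 + (col3 == null ? 0 : col3.GetHashCode()); // col3 is a string
        return hash;
    }
}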
But there's a problem: what if the table already contains data that duplicates rows in your CSV? You'd have to copy the whole table down first for a simple HashSet to really work. You could do just that, but to solve this I might instead pair Option #1 with a temporary holding table and an insert statement like the one from Skip-over/ignore duplicate rows on insert:
INSERT dbo.Table1(col1, col2, col3)
SELECT col1, col2, col3
FROM dbo.tmp_holding_Table1 AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table1 AS d
WHERE col1 = t.col1
AND col2 = t.col2
AND col3 = t.col3);
With this combination, the volume of data transferred to/from your DB is minimized.
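Concretely, the client side of that could look like the following sketch, assuming the holding table dbo.tmp_holding_Table1 from the statement above already exists with the same three columns, connection is an open SqlConnection, and dataTable holds the de-duplicated CSV rows:
using System.Data;
using System.Data.SqlClient;

// Bulk-load the de-duplicated rows into the holding table, then let the
// server copy across only the rows Table1 doesn't already have.
using (var bulk = new SqlBulkCopy(connection))
{
    bulk.DestinationTableName = "dbo.tmp_holding_Table1";
    bulk.WriteToServer(dataTable); // rows already de-duplicated via the HashSet
}

var copyNew = new SqlCommand(@"
    INSERT dbo.Table1(col1, col2, col3)
    SELECT col1, col2, col3
    FROM dbo.tmp_holding_Table1 AS t
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Table1 AS d
                      WHERE d.col1 = t.col1
                        AND d.col2 = t.col2
                        AND d.col3 = t.col3);", connection);
copyNew.ExecuteNonQuery();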
Another solution could be the IGNORE_DUP_KEY = { ON | OFF } option when creating/rebuilding an index. This option prevents errors when inserting duplicate rows; instead, SQL Server generates the warning Duplicate key was ignored.
CREATE TABLE dbo.MyTable (Col1 INT, Col2 INT, Col3 INT);
GO
CREATE UNIQUE INDEX IUN_MyTable_Col1_Col2_Col3
ON dbo.MyTable (Col1,Col2,Col3)
WITH (IGNORE_DUP_KEY = ON);
GO
INSERT dbo.MyTable (Col1,Col2,Col3)
VALUES (1,11,111);
INSERT dbo.MyTable (Col1,Col2,Col3)
SELECT 1,11,111 UNION ALL
SELECT 2,22,222 UNION ALL
SELECT 3,33,333;
INSERT dbo.MyTable (Col1,Col2,Col3)
SELECT 2,22,222 UNION ALL
SELECT 3,33,333;
GO
/*
(1 row(s) affected)
(2 row(s) affected)
Duplicate key was ignored.
*/
SELECT * FROM dbo.MyTable;
/*
Col1 Col2 Col3
----------- ----------- -----------
1 11 111
2 22 222
3 33 333
*/
Note: because you have a UNIQUE constraint, if you try to change the index options with ALTER INDEX
ALTER INDEX IUN_MyTable_Col1_Col2_Col3
ON dbo.MyTable
REBUILD WITH (IGNORE_DUP_KEY = ON)
you will get the following error:
Msg 1979, Level 16, State 1, Line 1
Cannot use index option ignore_dup_key to alter index 'IUN_MyTable_Col1_Col2_Col3' as it enforces a primary or unique constraint.
So, if you choose this solution, the options are:
1) Create another UNIQUE index and drop the UNIQUE constraint (this option requires more storage space, but a UNIQUE index/constraint stays active the whole time), or
2) Drop the UNIQUE constraint and then create a UNIQUE index WITH (IGNORE_DUP_KEY = ON) (I wouldn't recommend this last option).
Let's say I have a DataTable (myDataTable) whose first line is a row of headers and whose subsequent rows are simply numerical data. For example:
| WaterPercent | Ethylene | Toluene |
| 1.0312345    | 74.1323  | 234.000 |
| 56.054657    | 18.6540  | 234.000 |
| 37.57000     | 94.6540  | 425.000 |
At this point, all of the data contained in myDataTable.Columns and myDataTable.Rows is strings.
Using this query:
var results = from row in myDataTable.AsEnumerable()
select row.Field<string>("Ethylene");
I can get all of the values in the Ethylene column, but I want to filter my query with a "where" clause such that I can retrieve just one value at the intersection of a specific row index and a column like "Ethylene".
Consequently, it doesn't look like (unless I am missing something) I can get access to the index of the rows collection using 'row' in a LINQ query. Even if I could, I am not sure how to form the "where" clause of my query to get what I want.
What do I need for my query to be able to filter the result down to the intersection of a specific row and a column?
For example, I want the value 18.6540, which exists at a row index of 2 and the Ethylene column.
If you know the specific row index, then you can access the row directly through the .Rows collection, just as you would index an array or collection (since indexing is 0-based, your row 2 is index 1):
var result = myDataTable.Rows[1].Field<string>("Ethylene");
Is there a reason you do not just do it on the result?
var results = (from row in myDataTable.AsEnumerable()
select row.Field<string>("Ethylene")).ToArray();
then just index it (again 0-based, so 18.6540 is at index 1):
var myVal = results[1];
Otherwise, you will want to use Skip() and Take().
String result = (from row in myDataTable.AsEnumerable()
                 select row.Field<string>("Ethylene")).Skip(1).Take(1).Single();
I have this sequence generation query that gets the current sequence and increments it to the next value. But the increment is not being persisted: NextVal always returns 1, the default value from the database.
Entity | StartVal | Increment | CurVal | NextVal
----------------------------------------------------
INVOICE | 0 | 1 | 0 | 1
NextVal should become 3, 5, 7 and so on:
int nextVal = 0;
using (var db = new MedicStoreDataContext())
{
DAL.LINQ.Sequence seq = (from sq in db.Sequences
where sq.Entity == entity
select sq).SingleOrDefault();
if (seq != null)
{
nextVal = seq.NextVal.HasValue ? seq.NextVal.Value : 0;
seq.NextVal = nextVal + 2;
db.SubmitChanges();
}
}
Have I left something undone?
UPDATE:
Answer: I needed to set the primary key on the table and update the Sequence class mapping to include the primary key.
Usually this is because it hasn't found the unique identifier (or primary key) for the table.
In your data descriptions, are you sure the table mapping correctly picked up the unique item? When I first tried this, although I had a unique key, the table description in C# didn't mark it as unique, so LINQ quietly didn't update it as I had expected: no errors, no warnings. Once I corrected the data table mapping in C#, it all went well.
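In LINQ to SQL terms, that means the mapped column needs IsPrimaryKey = true. A sketch of what the corrected Sequence mapping might look like (property names guessed from the table in the question):
using System.Data.Linq.Mapping;

[Table(Name = "Sequences")]
public class Sequence
{
    // Without IsPrimaryKey = true, LINQ to SQL silently skips the UPDATE.
    [Column(IsPrimaryKey = true)]
    public string Entity { get; set; }

    [Column] public int? StartVal { get; set; }
    [Column] public int? Increment { get; set; }
    [Column] public int? CurVal { get; set; }
    [Column] public int? NextVal { get; set; }
}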
Isn't that the correct behaviour? Wouldn't you expect NextVal to be 1 if CurVal is 0? I may be missing something here, but it seems like you're overcomplicating it a bit. Isn't what you want to do basically:
using (var db = new MedicStoreDataContext())
{
DAL.LINQ.Sequence seq = (from sq in db.Sequences
where sq.Entity == entity
select sq).SingleOrDefault();
if (seq != null)
{
seq.CurVal += seq.Increment;
db.SubmitChanges();
}
}
I don't see why you need the whole nextVal bit at all. Please feel free to correct me if I'm wrong.