I have an Order table and a Coverage table.
CREATE TABLE `Order` (
  `id` INT NOT NULL PRIMARY KEY
);
CREATE TABLE `Coverage` (
  `label` VARCHAR(255) NOT NULL,
  `value` DECIMAL(9,2) NOT NULL,
  `orderId` INT NOT NULL, -- which is also a foreign key to `Order`(`id`)
  PRIMARY KEY (`label`, `orderId`),
  FOREIGN KEY (`orderId`) REFERENCES `Order`(`id`)
);
In my front-end I have a list of values representing coverage data, which is sent to the back-end to be saved in the Coverage table.
Now my question is: how should I handle that list? Since I send the whole list, if I delete a row in the front-end, how will my back-end know that a row was deleted? Should I delete all records and insert the new list? (I don't think that's the best choice.)
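One common pattern, sketched below against the schema above (MySQL flavor; the literal order id and labels are illustrative assumptions), is to let the back-end replace the order's coverage set inside a transaction: delete the rows whose labels are no longer in the submitted list, then upsert the rest, so nothing has to be diffed row by row.
START TRANSACTION;

-- 1) Remove coverages the user deleted in the front-end:
DELETE FROM `Coverage`
WHERE `orderId` = 42
  AND `label` NOT IN ('liability', 'collision');   -- labels still in the submitted list

-- 2) Insert new coverages and update existing ones:
INSERT INTO `Coverage` (`label`, `value`, `orderId`)
VALUES ('liability', 100000.00, 42),
       ('collision', 500.00, 42)
ON DUPLICATE KEY UPDATE `value` = VALUES(`value`);

COMMIT;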
Can I make a primary key like 'c0001, c0002' and for supplier 's0001, s0002' in one table?
The idea in database design is to keep each data element separate, each with its own datatype, constraints and rules. That c0002 is not one field but two. Same with XXXnnn or whatever. It is incorrect, and it will severely limit your ability to use the data and to use database features and facilities.
Break it up into two discrete data items:
column_1 CHAR(1)
column_2 INTEGER
Then set AUTOINCREMENT on column_2
And yes, your Primary Key can be (column_1, column_2), so you have not lost whatever meaning c0002 has for you.
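A minimal sketch of that structure (MySQL/MyISAM flavor, where AUTO_INCREMENT on the second column of a composite key is numbered per prefix; the table name is an illustrative assumption):
CREATE TABLE Entity (
    column_1 CHAR(1) NOT NULL,              -- the 'c' / 's' prefix
    column_2 INT NOT NULL AUTO_INCREMENT,   -- the numeric part
    PRIMARY KEY (column_1, column_2)
) ENGINE=MyISAM;  -- MyISAM restarts column_2's sequence for each distinct column_1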
Never place suppliers and customers (whatever "c" and "s" mean) in the same table. If you do that, you will not have a database table, you will have a flat file, with various problems and limitations consequent to that.
That means, Normalise the data. You will end up with:
one table for Person or Organisation containing the common data (Name, Address...)
one table for Customer containing customer-specific data (CreditLimit...)
one table for Supplier containing supplier-specific data (PaymentTerms...)
no ambiguous or optional columns, therefore no Nulls
no limitations on use or SQL functions
And when you need to add columns, you do it only where it is required, without affecting all the other users of the flat file. The scope of effect is limited to the scope of change.
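A minimal sketch of that shape (MySQL flavor; the table and column names are illustrative assumptions):
-- Common data for any Person or Organisation
CREATE TABLE Party (
    PartyID   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL,
    Address   VARCHAR(200) NOT NULL
);

-- Customer-specific data
CREATE TABLE Customer (
    PartyID     INT NOT NULL PRIMARY KEY,
    CreditLimit DECIMAL(12,2) NOT NULL,
    FOREIGN KEY (PartyID) REFERENCES Party(PartyID)
);

-- Supplier-specific data
CREATE TABLE Supplier (
    PartyID      INT NOT NULL PRIMARY KEY,
    PaymentTerms VARCHAR(50) NOT NULL,
    FOREIGN KEY (PartyID) REFERENCES Party(PartyID)
);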
My approach would be:
create an ID INT IDENTITY column and use that as your primary key (it's unique, narrow, static - perfect)
if you really need an ID with a letter or something, create a computed column based on that ID INT IDENTITY
Try something like this:
CREATE TABLE dbo.Demo
(
    -- the real primary key: unique, narrow, static
    ID INT IDENTITY PRIMARY KEY,
    -- computed, persisted character ID: 'C' plus the zero-padded ID
    IDwithChar AS 'C' + RIGHT('000000' + CAST(ID AS VARCHAR(10)), 6) PERSISTED
)
This table would contain ID values 1, 2, 3, 4, ... and IDwithChar values like C000001, C000002, ..., C000042 and so forth.
With this, you have the best of both worlds:
a proper, perfectly suited primary key (and clustering key) on your table, ideally suited to be referenced from other tables
your character-based ID, properly defined, computed, and always up to date.
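A quick illustration of how the two columns behave (the output values follow from the definition above):
-- Both columns are generated; no column values need to be supplied.
INSERT INTO dbo.Demo DEFAULT VALUES;
INSERT INTO dbo.Demo DEFAULT VALUES;

SELECT ID, IDwithChar FROM dbo.Demo;
-- ID   IDwithChar
-- 1    C000001
-- 2    C000002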
Actually, these are two different questions:
1. Can we use a varchar column as an auto-increment column with unique values, like roll numbers in a class?
ANS: Yes. You can get it with the piece of code below, without specifying the value of ID and P_ID:
CREATE TABLE dbo.TestDemo
(
    ID INT IDENTITY(786,1) NOT NULL PRIMARY KEY CLUSTERED,
    P_ID AS 'LFQ' + RIGHT('00000' + CAST(ID AS VARCHAR(5)), 5) PERSISTED,
    Name varchar(50),
    PhoneNumber varchar(50)
)
2. Can we have two different increments in the same column?
ANS: No, you can't do that in one table.
I prefer artificial primary keys. Your requirement can also be implemented as a unique index on a computed column:
CREATE TABLE [dbo].[AutoInc](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Range] [varchar](50) NOT NULL,
    [Descriptor] AS ([Range] + CONVERT([varchar], [ID], (0))) PERSISTED,
    CONSTRAINT [PK_AutoInc] PRIMARY KEY ([ID] ASC)
)
GO
CREATE UNIQUE INDEX [UK_AutoInc] ON [dbo].[AutoInc]
(
    [Descriptor] ASC
)
GO
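For illustration, inserting a couple of rows (the values are made up):
INSERT INTO [dbo].[AutoInc] ([Range]) VALUES ('C'), ('S');

SELECT [ID], [Range], [Descriptor] FROM [dbo].[AutoInc];
-- ID   Range   Descriptor
-- 1    C       C1
-- 2    S       S2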
Assigning domain meaning to the primary key is a practice that goes way, way back to the time when Cobol programmers and dinosaurs walked the earth together. The practice survives to this day most often in legacy inventory systems. It is mainly a way of eliminating one or more columns of data and embedding the data from the eliminated column(s) in the PK value.
If you want to store customer and supplier in the same table, just do it, and use an autoincrementing integer PK and add a column called ContactType or something similar, which can contain the values 'S' and 'C' or whatever. You do not need a composite primary key.
You can always concatenate these columns (PK and ContactType) on reports, e.g. C12345 or S20000 (casting the integer to a string), if you want to eliminate a column to save space (i.e. on the printed or displayed page), as long as everyone in your organization understands the convention that the first character of the entity ID stands for the ContactType code.
This approach will leverage autoincrementing capabilities that are built into the database engine, simplify your PK and related code in the data layer, and make your program and database more robust.
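A minimal sketch of that single-table approach (SQL Server flavor; the table and column names are illustrative assumptions):
CREATE TABLE dbo.Contact
(
    ID          INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ContactType CHAR(1) NOT NULL CHECK (ContactType IN ('C', 'S')),
    Name        VARCHAR(100) NOT NULL
);

-- Concatenate only for display, e.g. on a report:
SELECT ContactType + CAST(ID AS VARCHAR(10)) AS DisplayID, Name
FROM dbo.Contact;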
First, let us state that you can't do it directly. If you try
create table dbo.t1 (
    id varchar(10) identity
);
the error message tells you which data types are supported directly.
Msg 2749, Level 16, State 2, Line 1
Identity column 'id' must be of data
type 'int', 'bigint', 'smallint',
'tinyint', or 'decimal' or 'numeric'
with a scale of 0, and must not
allow null values.
BTW: I tried to find this information in BOL or on MSDN and failed.
Now, knowing that you can't do it the direct way, it is a good choice to follow @marc_s's proposal and use computed columns.
Instead of doing 'c0001, c0002' for customers and 's0001, s0002' for suppliers in one table, proceed in the following way:
Create one auto-increment field "id" of data type "int(10) unsigned".
Create another field "type" of data type "enum('c', 's')" (where c = customer, s = supplier).
As @PerformanceDBA pointed out, you can then make the primary key span the two fields "id" and "type", so that your requirement is fulfilled with the correct methodology.
INSERT INTO YourTable (yourVarcharID)
VALUES ('RX' + CAST(
           CAST(SUBSTRING((SELECT MAX(yourVarcharID) FROM YourTable), 3, 6) AS INT) + 1
       AS VARCHAR(20)))
Here the varchar column is prefixed with 'RX' followed by a number such as 001, so I take the substring after the prefix and increment that number alone.
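Note that the CAST drops leading zeros, so after 'RX001' this inserts 'RX2' rather than 'RX002'. A zero-padded variant, a sketch under the same assumptions, would be:
INSERT INTO YourTable (yourVarcharID)
VALUES ('RX' + RIGHT('000' + CAST(
           CAST(SUBSTRING((SELECT MAX(yourVarcharID) FROM YourTable), 3, 6) AS INT) + 1
       AS VARCHAR(20)), 3))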
We can add a default constraint that calls a user-defined function in the table definition to achieve this.
First create table -
create table temp_so (prikey varchar(100) primary key, name varchar(100))
go
Second create new User Defined Function -
create function dbo.fn_AutoIncrementPriKey_so ()
returns varchar(100)
as
begin
    declare @prikey varchar(100)
    -- take the highest existing key, keep the two-letter prefix,
    -- and add 1 to the numeric part
    set @prikey = (select top (1) left(prikey,2) + cast(cast(stuff(prikey,1,2,'') as int)+1 as varchar(100)) from temp_so order by prikey desc)
    -- seed value when the table is empty
    return isnull(@prikey, 'SB3000')
end
go
Third alter table definition to add default constraint -
alter table temp_so
add constraint df_temp_prikey
default dbo.[fn_AutoIncrementPriKey_so]() for prikey
go
Fourth, insert new rows into the table without specifying a value for the primary key column (go 4 runs the batch four times):
insert into temp_so (name) values ('Rohit')
go 4
Check out the data in the table now -
select * from temp_so
OUTPUT -
prikey name
SB3000 Rohit
SB3001 Rohit
SB3002 Rohit
SB3003 Rohit
You may try the code below:
SET @variable1 = SUBSTR((SELECT id FROM user WHERE id = (SELECT MAX(id) FROM user)), 5, 7) + 1;
SET @variable2 = CONCAT("LHPL", @variable1);
INSERT INTO `user`(`id`, `name`) VALUES (@variable2, "Jeet");
The 1st line gets the last inserted id, strips the four-character prefix, adds one, and assigns the result to @variable1.
The 2nd line builds the complete id with the four-character prefix and assigns it to @variable2.
The insert then uses the generated new primary key @variable2.
You need at least one row in this table for the SQL above to work.
No. If you really need this, you will have to generate the ID manually.
I currently have a table defined as follows:
CREATE TABLE Items (  -- table name omitted in the original; 'Items' is assumed to match the answer below
    ID int identity(1,1) primary key not null,
    field_one nvarchar(max) not null
)
I am using Entity framework 5 to insert records into this table. This aspect works and I am inserting records correctly.
field_one is a value based on the current ID. For example, if the max ID is 5, the next record would be inserted as (6, ABC6), then (7, ABC7), and so on.
What I am running into is the case where the table is empty. The identity is at 7, so the next record should be (8, ABC8). I need to get the next auto-increment value prior to the insert in order to apply the calculation I need when the table is empty.
How would I be able to do this, given that the ID is generated on insert and the property in my entity only gets updated afterwards?
You cannot get the correct identity value before you perform SaveChanges. This is because it is assigned by the database when Entity Framework performs the insert.
However, there is a workaround to your problem.
Consider this:
// wrap in a transaction so that the entire operation
// either succeeds or fails as a whole
using (var scope = new TransactionScope())
using (var context = new DbContext())
{
    var item = new Item();
    context.Items.Add(item);
    context.SaveChanges();

    // item.ID now contains the identifier
    item.field_one = string.Format("abc{0}", item.ID);

    // perform another save
    context.SaveChanges();

    // commit the transaction
    scope.Complete();
}
Yes, there are two calls to the database, but there is no other way unless you are prepared to go deeper than Entity Framework.
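If you are prepared to go deeper, one of the techniques shown earlier applies here as well: let the database derive field_one itself with a computed column, so the application never needs the identity value up front. A sketch, assuming field_one is always 'ABC' plus the ID (Entity Framework would then have to map the property as database-generated):
CREATE TABLE Items
(
    ID int identity(1,1) primary key not null,
    field_one AS 'ABC' + CAST(ID AS varchar(10)) PERSISTED
)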
This may seem like a common question, but I googled for an answer that would fix my problem and failed to find one.
I have multiple tables connected to each other by ProductID, and I wish to delete all data from them when the product is deleted from the main table, i.e.:
Products : ProductID - Vender - Description
ProductRatings : ProductID - Rating - VisitorsCount
ProductComments : ProductID - VisitorName - Comment
I read that a SQL trigger is used for such situations, but I have no idea about them. Besides, in some cases I specify my DataSource in the ASCX.CS file, and in other cases I simply use a SqlDataSource in the ASCX file. Is there any query or stored procedure that can be used?
The easiest way to do this is to implement a foreign key relationship to ProductID and set on delete cascade. This is a general idea:
create table ProductRatings
(
    ProductID int not null
        foreign key references Products(ProductID) on delete cascade,
    Rating int not null,
    VisitorsCount int not null
)
What that does: when you delete a primary key value from the Products table, SQL Server deletes all records that have a foreign key referencing that primary key value. If you do this with your ProductComments table as well, the problem is solved; there is no need to explicitly DELETE any records in the referencing tables.
And if you aren't using referential integrity...you should.
EDIT: this also holds true for UPDATEs on the primary key. You just need to specify on update cascade, and the foreign key references will update as the primary key did to ensure RI.
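For completeness, a sketch of the ProductComments table with both cascades (the column types are assumptions):
create table ProductComments
(
    ProductID int not null
        foreign key references Products(ProductID)
        on delete cascade
        on update cascade,
    VisitorName varchar(100) not null,
    Comment varchar(1000) not null
)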
I am using a TableAdapter to insert records into a table within a loop.
foreach(....)
{
....
....
teamsTableAdapter.Insert(_teamid, _teamname);
....
}
where TeamID is the primary key in the table and _teamid supplies its value. I am actually extracting the data from an XML file, which contains unique team IDs.
After the first run of this loop, Insert throws a 'duplicate primary key found' exception. To handle this, I have done the following:
foreach(....)
{
....
....
try
{
_teamsTableAdapter.Insert(_teamid, _teamname);
}
catch (System.Data.SqlClient.SqlException e)
{
if (e.Number != 2627)
MessageBox.Show(e.Message);
}
....
....
}
But using a try/catch statement is costly. How can I avoid this exception? I am working in VS2010, and INSERT ... ON DUPLICATE KEY UPDATE does not work (it is MySQL syntax, not SQL Server).
I want to avoid try/catch statements and handle this without them.
Based on your comments to other answers, I would suggest that TeamID be changed from the primary key (if possible) and a new Idx column set up as the primary key. You can then set a trigger on your DB that, when a new record is inserted with a duplicate TeamID will update the original record and delete the new one.
If that is not possible, I would modify the stored procedure which inserts the record so that, instead of just inserting, it first checks for a duplicate TeamID. If there isn't a duplicate ID, the record is inserted; else it just selects 0.
pseudo-code example:
Declare @Count int
Set @Count = (Select Count(TeamId) From [Table] Where TeamId = @TeamId)
If (@Count > 0)
Begin
    Select 0
End
Else
    --Insert logic here
Then, your Insert method in code can use ExecuteScalar() instead of ExecuteNonQuery(). Your code would handle it this way:
if (_teams.TableAdapter.Insert(_teamId, _teamName) == 0)
{
    _teams.TableAdapter.Update(_teamId, _teamName);
}
Alternatively, if you just wanted to handle it all in SQL (so your C# code doesn't have to change) you could do something like this:
Declare @Count int
Set @Count = (Select Count(TeamId) From [Table] Where TeamId = @TeamId)
If (@Count > 0)
Begin
    --Update logic
End
Else
Begin
    --Insert logic
End
But, again, I'd just modify the table if that's an option.
Does the table you're using have a primary key? If not, you should create one as it will prevent duplicate records, and might make it easier to access keys for other parts of your program.
Usually this is done with an identity column or something similar. (It looks like you might already have one in TeamID, in which case you only need to make it the primary key in either SQL Management Studio or VS2010.)
Edit: To designate a primary key as an identity column (teamID in your example) using Visual Studio:
1. Go to the Server Explorer and navigate to the relevant table.
2. Right-click it and choose "Open Table Definition".
3. Click on the primary key column.
4. Scroll the Properties window until you reach "Identity Specification" and change it to "Yes" (you can set the increment/seed to whatever you wish; 1,1 is usually fine).
Now all you have to do is insert a team name into the table, and the TeamID is generated automatically.
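The T-SQL equivalent of those designer steps, as a sketch (the table and column names follow the question; the team name is made up):
CREATE TABLE Teams
(
    TeamID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    TeamName varchar(100) NOT NULL
);

-- Insert only the name; TeamID is generated automatically.
INSERT INTO Teams (TeamName) VALUES ('Example FC');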
There are clearly duplicates in your data. Either you need to eliminate them first, or use some type of MERGE statement to do an insert if the row is new and an update if it is not, as sketched below.
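A minimal MERGE sketch (SQL Server 2008 or later; the table and parameter names are illustrative assumptions):
-- Insert the team if it is new, update its name if it already exists.
MERGE Teams AS target
USING (SELECT @TeamID AS TeamID, @TeamName AS TeamName) AS source
    ON target.TeamID = source.TeamID
WHEN MATCHED THEN
    UPDATE SET TeamName = source.TeamName
WHEN NOT MATCHED THEN
    INSERT (TeamID, TeamName) VALUES (source.TeamID, source.TeamName);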
To see what data is causing the problem, run Profiler while you run the loop from your application and see what statements are actually being sent. That should point you towards which record(s) are duplicated.
If this is a large file, a bulk insert (after cleaning the duplicates) will be faster than row-by-row processing.