Can I make a primary key like 'c0001, c0002' for customers and 's0001, s0002' for suppliers in one table?
The idea in database design is to keep each data element separate, with its own datatype, constraints, and rules. That c0002 is not one field but two. The same goes for XXXnnn or any similar scheme. It is incorrect, and it will severely limit your ability to use the data and to use database features and facilities.
Break it up into two discrete data items:
column_1 CHAR(1)
column_2 INTEGER
Then set AUTOINCREMENT on column_2
And yes, your Primary Key can be (column_1, column_2), so you have not lost whatever meaning c0002 has for you.
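A minimal sketch of that, assuming SQL Server (where AUTOINCREMENT is spelled IDENTITY); the table name is a placeholder:
CREATE TABLE dbo.Account (
    column_1 CHAR(1) NOT NULL,               -- the 'c' / 's' indicator
    column_2 INT IDENTITY(1, 1) NOT NULL,    -- the auto-incrementing number
    CONSTRAINT PK_Account PRIMARY KEY (column_1, column_2)
);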
Never place suppliers and customers (whatever "c" and "s" mean) in the same table. If you do that, you will not have a database table, you will have a flat file, with the various problems and limitations that follow from it.
That means normalising the data. You will end up with:
one table for Person or Organisation containing the common data (Name, Address...)
one table for Customer containing customer-specific data (CreditLimit...)
one table for Supplier containing supplier-specific data (PaymentTerms...)
no ambiguous or optional columns, therefore no Nulls
no limitations on use or SQL functions
And when you need to add columns, you do it only where it is required, without affecting all the other uses of the flat file. The scope of effect is limited to the scope of change.
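As a rough sketch of that shape (table and column names here are purely illustrative, not a definitive design):
CREATE TABLE Party (                    -- common Person/Organisation data
    PartyID  INT NOT NULL PRIMARY KEY,
    Name     VARCHAR(100) NOT NULL,
    Address  VARCHAR(200) NOT NULL
);

CREATE TABLE Customer (                 -- customer-specific data only
    PartyID     INT NOT NULL PRIMARY KEY REFERENCES Party (PartyID),
    CreditLimit DECIMAL(12, 2) NOT NULL
);

CREATE TABLE Supplier (                 -- supplier-specific data only
    PartyID      INT NOT NULL PRIMARY KEY REFERENCES Party (PartyID),
    PaymentTerms VARCHAR(50) NOT NULL
);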
My approach would be:
create an ID INT IDENTITY column and use that as your primary key (it's unique, narrow, static - perfect)
if you really need an ID with a letter or something, create a computed column based on that ID INT IDENTITY
Try something like this:
CREATE TABLE dbo.Demo(ID INT IDENTITY PRIMARY KEY,
IDwithChar AS 'C' + RIGHT('000000' + CAST(ID AS VARCHAR(10)), 6) PERSISTED
)
This table would contain ID values 1, 2, 3, 4, ... and the IDwithChar would be something like C000001, C000002, ..., C000042 and so forth.
With this, you have the best of both worlds:
a proper, perfectly suited primary key (and clustering key) on your table, ideally suited to be referenced from other tables
your character-based ID, properly defined, computed, always up to date.....
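To see it produce those values - a quick sketch; since ID is the IDENTITY and IDwithChar is computed, there is nothing to supply on insert:
INSERT INTO dbo.Demo DEFAULT VALUES;
INSERT INTO dbo.Demo DEFAULT VALUES;

SELECT ID, IDwithChar FROM dbo.Demo;
-- 1    C000001
-- 2    C000002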
Yes. Actually, these are two different questions:
1. Can we use a varchar column as an auto-increment column with unique values, like roll numbers in a class?
ANS: Yes, you can, using the piece of code below, without specifying values for ID and P_ID:
CREATE TABLE dbo.TestDemo
(ID INT IDENTITY(786,1) NOT NULL PRIMARY KEY CLUSTERED,
P_ID AS 'LFQ' + RIGHT('00000' + CAST(ID AS VARCHAR(5)), 5) PERSISTED,
Name varchar(50),
PhoneNumber varchar(50)
)
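For illustration, a quick usage sketch of this table (the names and phone numbers are made up); with the IDENTITY seed of 786, the first generated P_ID comes out as LFQ00786:
INSERT INTO dbo.TestDemo (Name, PhoneNumber) VALUES ('Asha', '555-0100');
INSERT INTO dbo.TestDemo (Name, PhoneNumber) VALUES ('Ravi', '555-0101');

SELECT ID, P_ID, Name FROM dbo.TestDemo;
-- 786    LFQ00786    Asha
-- 787    LFQ00787    Ravi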
2. Can we have two different increments in the same column?
ANS: No, you can't do that in one table.
I prefer artificial primary keys. Your requirements can also be implemented as a unique index on a computed column:
CREATE TABLE [dbo].[AutoInc](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Range] [varchar](50) NOT NULL,
[Descriptor] AS ([range]+CONVERT([varchar],[id],(0))) PERSISTED,
CONSTRAINT [PK_AutoInc] PRIMARY KEY ([ID] ASC)
)
GO
CREATE UNIQUE INDEX [UK_AutoInc] ON [dbo].[AutoInc]
(
[Descriptor] ASC
)
GO
Assigning domain meaning to the primary key is a practice that goes way, way back to the time when Cobol programmers and dinosaurs walked the earth together. The practice survives to this day most often in legacy inventory systems. It is mainly a way of eliminating one or more columns of data and embedding the data from the eliminated column(s) in the PK value.
If you want to store customer and supplier in the same table, just do it, and use an autoincrementing integer PK and add a column called ContactType or something similar, which can contain the values 'S' and 'C' or whatever. You do not need a composite primary key.
You can always concatenate these columns (PK and ContactType) on reports, e.g. C12345, S20000 (casting the integer to a string), if you want to eliminate the column to save space on the printed or displayed page, as long as everyone in your organization understands the convention that the first character of the entity ID stands for the ContactType code.
This approach will leverage autoincrementing capabilities that are built into the database engine, simplify your PK and related code in the data layer, and make your program and database more robust.
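A sketch of that shape (table and column names are illustrative only), including the report-time concatenation:
CREATE TABLE Contact (
    ContactID   INT IDENTITY(1, 1) PRIMARY KEY,                       -- autoincrementing integer PK
    ContactType CHAR(1) NOT NULL CHECK (ContactType IN ('C', 'S')),   -- 'C' = customer, 'S' = supplier
    Name        VARCHAR(100) NOT NULL
);

-- For reports/display only: C12345, S20000, ...
SELECT ContactType + CAST(ContactID AS VARCHAR(10)) AS EntityID, Name
FROM Contact;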
First let us state that you can't do this directly. If you try
create table dbo.t1 (
id varchar(10) identity,
);
the error message tells you which data types are supported directly.
Msg 2749, Level 16, State 2, Line 1
Identity column 'id' must be of data type int, bigint, smallint,
tinyint, or decimal or numeric with a scale of 0, and constrained
to be nonnullable.
BTW: I tried to find this information in BOL or on MSDN and failed.
Now knowing that you can't do it the direct way, it is a good choice to follow @marc_s's proposal of using computed columns.
Instead of doing 'c0001, c0002' for customers and 's0001, s0002' for suppliers in one table, proceed in the following way:
Create one Auto-Increment field "id" of Data Type "int (10) unsigned".
Create another field "type" of Data Type "enum ('c', 's')" (where c=Customer, s=Supplier).
As "#PerformanceDBA" pointed out, you can then make the Primary Key Index for two fields "id" & "type", so that your requirement gets fulfilled with the correct methodology.
INSERT INTO Yourtable (yourvarcharID)
SELECT 'yourvarcharPrefix' +
       CAST(CAST(SUBSTRING(MAX(yourvarcharID), 3, 6) AS INT) + 1 AS VARCHAR(20))
FROM Yourtable
Here the varchar column is prefixed with 'RX' followed by a number such as 001, so I take the substring after that prefix, cast it to an integer, and increment that number alone.
We can add a default constraint that calls a user-defined function in the table definition to achieve this.
First create table -
create table temp_so (prikey varchar(100) primary key, name varchar(100))
go
Second create new User Defined Function -
create function dbo.fn_AutoIncrementPriKey_so ()
returns varchar(100)
as
begin
declare @prikey varchar(100)
set @prikey = (select top (1) left(prikey,2) + cast(cast(stuff(prikey,1,2,'') as int)+1 as varchar(100)) from temp_so order by prikey desc)
return isnull(@prikey, 'SB3000')
end
go
Third alter table definition to add default constraint -
alter table temp_so
add constraint df_temp_prikey
default dbo.[fn_AutoIncrementPriKey_so]() for prikey
go
Fourth, insert new rows into the table without specifying a value for the primary key column -
insert into temp_so (name) values ('Rohit')
go 4
Check out data in table now -
select * from temp_so
OUTPUT -
prikey name
SB3000 Rohit
SB3001 Rohit
SB3002 Rohit
SB3003 Rohit
You may try the code below:
SET @variable1 = SUBSTR((SELECT id FROM user WHERE id = (SELECT MAX(id) FROM user)), 5, 7)+1;
SET @variable2 = CONCAT("LHPL", @variable1);
INSERT INTO `user`(`id`, `name`) VALUES (@variable2,"Jeet");
The first line gets the last inserted id, strips the four-character prefix, adds one to the remaining number, and assigns it to @variable1.
The second line builds the complete id with the four-character prefix and assigns it to @variable2.
The insert then uses the newly generated primary key, @variable2.
The table must already contain at least one row for the SQL above to work.
No. If you really need this, you will have to generate ID manually.
I have a table in SQL Server:
select * from TaskSave
TaskID SaveTypeID ResultsPath PluginName PluginConfiguration
---------------------------------------------------------------------
92 1 NULL NULL NULL
92 7 NULL RGP_MSWord.WordDocumentOutput www|D:\Users\Peter\Documents\Temp
92 7 NULL RGP_WC.WCOutput wcwc|D:\Users\Peter\Documents\Temp|.docx|123|456|789
which I am trying to read with C# / Entity Framework:
public static List<TaskSave> GetSavesPerTask(Task task)
{
using (Entities dbContext = new Entities())
{
var savesPerTask = from p in dbContext.TaskSaves.Include("SaveType").Where(q => q.TaskID == task.TaskID) select p;
//return savesPerTask.ToList();
// DEBUG
var x = savesPerTask.ToList();
foreach (var y in x)
{
Console.WriteLine("SaveTypeID {0}, Plugin Name {1}", y.SaveTypeID, y.PluginName);
}
return x;
}
}
The business rule states that TaskID + SaveTypeID + PluginName are unique, i.e. a task can have more than one plug-in. If the save type is not ‘plugin’, then TaskID + SaveTypeID must be unique.
My problem is that the GetSavesPerTask method returns the wrong results. It retrieves three rows, but row 2 is duplicated – I get 2 rows with PluginName of RGP_MSWord.WordDocumentOutput and not the RGP_WC.WCOutput row. The debug print shows:
SaveTypeID 1, Plugin Name
SaveTypeID 7, Plugin Name RGP_MSWord.WordDocumentOutput
SaveTypeID 7, Plugin Name RGP_MSWord.WordDocumentOutput
Both the debugger and the ultimate user of the data agree that the third row is absent.
I have tried removing the include clause, but that makes no difference to the result set. Here is the SQL (from the simpler case) as reported by the debugger:
savesPerTask {SELECT
[Extent1].[TaskID] AS [TaskID],
[Extent1].[SaveTypeID] AS [SaveTypeID],
[Extent1].[ResultsPath] AS [ResultsPath],
[Extent1].[PluginName] AS [PluginName],
[Extent1].[PluginConfiguration] AS [PluginConfiguration]
FROM (SELECT
[TaskSave].[TaskID] AS [TaskID],
[TaskSave].[SaveTypeID] AS [SaveTypeID],
[TaskSave].[ResultsPath] AS [ResultsPath],
[TaskSave].[PluginName] AS [PluginName],
[TaskSave].[PluginConfiguration] AS [PluginConfiguration]
FROM [dbo].[TaskSave] AS [TaskSave]) AS [Extent1]
WHERE [Extent1].[TaskID] = @p__linq__0}
I have copied and pasted the SQL from the debugger into SSMS and it gets the correct results. I have tried deleting and recreating those two tables in the EF model. I have recompiled many times as I've tried different things (adding debug, refreshing the model, moving the return outside of the using block, etc.).
The TaskSave table does not have a primary key, so I get Error 6002 during compilation (which I ignore, as it does not seem to affect anything else and I can't find any workaround on the internet):
'Database.dbo.TaskSave' does not have a primary key defined. The key has been inferred and the definition was created as a read-only table/view.
How can I fix the GetSavesPerTask method to return the correct rows? What silly mistake have I made? I have many other tables and CRUD operations, which seem to work as expected.
Here are my versions
.NET 4.6.01055
C# 2015
SQL Server 2014
SQL Server data tools 14.0.50730.0
Most likely, the trouble is simply the lack of an explicit primary key. Every table in your database ought to have a primary key anyway...
What happens here is: since there's no primary key, EF will just use all non-nullable columns from the table (or the view) as a "replacement" PK. Not sure which those are in your case - possibly (TaskID, SaveTypeID) ??
And when EF reads the data, it will go:
read the row
check the primary key (or the "stand-in" PK in this case)
if it has already read a row with that PK - it just duplicates that row that it already has - it will disregard any non-PK columns from the table/view!
So in your case, once it's read a first row with the "stand-in" PK being TaskID = 92, SaveTypeID = 7, any further rows with those two values will just get the already read values - no matter what's stored in the database!
SOLUTION: as I said before: EVERY table OUGHT TO HAVE a proper primary key to uniquely identify each individual row!
In your case, I'd recommend to just add a surrogate key like this:
ALTER TABLE dbo.TaskSave
ADD TaskSaveID INT IDENTITY(1,1) NOT NULL
ALTER TABLE dbo.TaskSave
ADD CONSTRAINT PK_YourTableName
PRIMARY KEY CLUSTERED(TaskSaveID)
Now, each row has its own, unique TaskSaveID, and EF can detect the proper PK, and all your troubles should be gone.
I have a table (tOrder) that has the following structure in SQL Server 2008
orderID (int) - this is currently the primary key and the identity field.
name(varchar)
address(varchar)
groupID (int) - now I need this field to also auto-increment, but at the same time I want to be able to insert values into it.
My data would look something like:
1 - john - address1 - 1
2 - mary - address2 - 1
3 - mary - address3 - 2
4 - jane - address4 - 3
where order IDs 1 and 2 share the same group, while 3 and 4 are each in their own.
Many orders can have the same groupID, but when I insert an order for a new group, I would like the groupID to be populated with the next sequence number automatically, while still allowing me to insert a duplicate groupID for different orders when I need to.
Hope this makes sense.
How do I go about doing this? (I'm using c# in the back end, if that makes any difference)
I would create a new "groups" table with an identity to ensure uniqueness as follows:
create table tOrders(
orderID int PRIMARY KEY IDENTITY,
name varchar(30),
address varchar(60),
fkGroup int
);
create table tGroups(
groupID int PRIMARY KEY IDENTITY,
description varchar(50)
);
ALTER TABLE tOrders
ADD FOREIGN KEY (fkGroup) REFERENCES tGroups(groupID);
You would, of course, have to either supply an existing groupID for the new order or insert a row into tGroups first and use its IDENTITY value as the order's fkGroup.
This SQL Fiddle Example demonstrates one way of populating the tables.
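Roughly the kind of thing the Fiddle demonstrates - a sketch using the tables above (the description and order values are made up): insert the group first, capture its IDENTITY value with SCOPE_IDENTITY(), then reuse it for as many orders as needed:
DECLARE @newGroup INT;

INSERT INTO tGroups (description) VALUES ('first group');
SET @newGroup = SCOPE_IDENTITY();

INSERT INTO tOrders (name, address, fkGroup) VALUES ('john', 'address1', @newGroup);
INSERT INTO tOrders (name, address, fkGroup) VALUES ('mary', 'address2', @newGroup);  -- same group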
One option would be to create a trigger on your tOrder table (I'm not a fan of triggers, but given your criteria, I can't think of another option).
CREATE TRIGGER tOrder_trigger
ON tOrder
AFTER INSERT
AS
UPDATE tOrder
SET groupid = (SELECT COALESCE(MAX(groupid),0) + 1 FROM tOrder)
FROM INSERTED AS I
WHERE I.groupid IS NULL
AND tOrder.orderid = I.orderid;
SQL Fiddle Demo
This checks whether the inserted record has a NULL groupid, using INSERTED, and if so updates that row's groupid to MAX(groupid) + 1, using COALESCE to handle the case where no groupid exists yet.
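For illustration, a sketch of how inserts behave with this trigger in place, using the question's tOrder columns (the values are made up):
INSERT INTO tOrder (name, address, groupID) VALUES ('john', 'address1', NULL);  -- trigger assigns groupID = 1
INSERT INTO tOrder (name, address, groupID) VALUES ('mary', 'address2', 1);     -- explicit reuse of group 1, trigger leaves it alone
INSERT INTO tOrder (name, address, groupID) VALUES ('mary', 'address3', NULL);  -- trigger assigns groupID = 2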
A database exists with two tables
Data_t: DataID is the primary key, IDENTITY(1,1). It also has another field, 'LEFT' TINYINT.
Data_Link_t: DataID is the PK and an FK; the DataID MUST exist in Data_t. It also has another field, 'RIGHT' SMALLINT.
Coming from a Microsoft Access environment into C# and SQL Server, I'm looking for a good method of importing a record into this relationship.
The record contains information that belongs on both sides of this join (possibly inserting/updating upwards of 5000 records at once). It would be a bonus to process the entire batch in some kind of LINQ list-type command, but even if this is done record by record, the key goal is that BOTH sides of the record are processed in the same step.
There are countless approaches, and I'm looking at too many to determine which way I should go, so I thought it faster to ask the general public. Is LINQ to SQL an option for inserting/updating a big list like this? Should I go record by record? What approach should I use to add a record to normalized tables that, when joined, make up the full record?
Sounds like a case where I'd write a small stored proc and call that from C# - e.g. as a function on my Linq-to-SQL data context object.
Something like:
CREATE PROCEDURE dbo.InsertData(@Left TINYINT, @Right SMALLINT)
AS BEGIN
    DECLARE @DataID INT

    INSERT INTO dbo.Data_t([LEFT]) VALUES(@Left)
    SELECT @DataID = SCOPE_IDENTITY();

    INSERT INTO dbo.Data_Link_t(DataID, [RIGHT]) VALUES(@DataID, @Right)
END
If you import that into your data context, you could call this something like:
using(YourDataContext ctx = new YourDataContext())
{
    foreach(YourObjectType obj in YourListOfObjects)
    {
        ctx.InsertData(obj.Left, obj.Right);
    }
}
and let the stored proc handle all the rest (all the details, like determining and using the IDENTITY from the first table in the second one) for you.
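If you want to sanity-check the proc on its own in SSMS before importing it into the data context, something like this (the values are made up):
EXEC dbo.InsertData @Left = 1, @Right = 100;

SELECT d.DataID, d.[LEFT], dl.[RIGHT]
FROM dbo.Data_t AS d
JOIN dbo.Data_Link_t AS dl ON dl.DataID = d.DataID;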
I have never tried it myself, but you might be able to do exactly what you are asking for by creating an updateable view and then inserting records into the view.
UPDATE
I just tried it, and it doesn't look like it will work.
Msg 4405, Level 16, State 1, Line 1
View or function 'Data_t_and_Data_Link_t' is not updatable because the modification affects multiple base tables.
I guess this is just one more thing for all the Relational Database Theory purists to hate about SQL Server.
ANOTHER UPDATE
Further research has found a way to do it. It can be done with a view and an "instead of" trigger.
create table Data_t
(
DataID int not null identity primary key,
[LEFT] tinyint,
)
GO
create table Data_Link_t
(
DataID int not null primary key foreign key references Data_T (DataID),
[RIGHT] smallint,
)
GO
create view Data_t_and_Data_Link_t
as
select
d.DataID,
d.[LEFT],
dl.[RIGHT]
from
Data_t d
inner join Data_Link_t dl on dl.DataID = d.DataID
GO
create trigger trgInsData_t_and_Data_Link_t on Data_t_and_Data_Link_T
instead of insert
as
insert into Data_t ([LEFT]) select [LEFT] from inserted
insert into Data_Link_t (DataID, [RIGHT]) select @@IDENTITY, [RIGHT] from inserted
go
insert into Data_t_and_Data_Link_t ([LEFT],[RIGHT]) values (1, 2)
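And to confirm the INSTEAD OF trigger routed the values into both base tables, a quick check (assuming a freshly created Data_t, so the identity starts at 1):
select * from Data_t_and_Data_Link_t
-- DataID  LEFT  RIGHT
-- 1       1     2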
I have 2 tables that I import into an EF model.
The first table has a property [section] that acts as a foreign key to the second table.
When I map this property in the model to the table and try to compile, I get this error:
Problem in Mapping Fragments starting at lines 158, 174: Non-Primary-Key column(s) [Section] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified.
If I remove this property from the model, it compiles, but when I query the data I don't have the section field.
I know that I can get it by using the navigation field and reading this property from the second table, but to make it work I must include the other table in my query.
var res = from name in Context.Table1.Include("Table2")...
Why do I need to include the association just for one field?
UPDATE
To make it more clear:
Table 1 has fields:
ItemId - key
section - foreign key
title
Table 2 has fields:
SectionId - key
Name
When I set the associations the section property from the first table must be removed.
What are your Primary Keys and is one Store Generated? I suspect you are missing a PK or an Identity somewhere.
Tip: One alternative when having mapping problems is to create the model you want in the EDMX designer and then ask it to create the database for you. Compare what it creates to what you have made in SQL and it's often easy to spot the mistakes.
In EF 4 you can use FK associations for this.
In EF 1 the easiest way to get one field from a related table is to project:
var q = from t1 in Context.Table1
where //...
select new
{
T1 = t1,
Section = t1.Section.SectionId
};
var section = q.First().Section;
If it's a key property, you can get the value via the EntityKey:
var t1 = GetT1();
var section = (int)t1.SectionReference.EntityKey.EntityKeyValues[0].Value;
Generally, I don't like this last method. It's too EF-specific, and fails if your query MergeOption is set to NoTracking.