I have a table schema where the primary key is a uniqueidentifier, and the clustered index is an identity column of type bigint.
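Roughly, the schema looks like this (table, column, and index names here are just illustrative):

CREATE TABLE dbo.Orders
(
    OrderId uniqueidentifier NOT NULL,      -- the Guid PK the application works with
    RowId   bigint IDENTITY(1,1) NOT NULL,  -- exists only to give a sequential physical order
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId)
);

-- The identity column carries the clustered index, so inserts stay (mostly) sequential
CREATE UNIQUE CLUSTERED INDEX CIX_Orders_RowId ON dbo.Orders (RowId);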
The idea is that the Guid index is likely to become fragmented, and if it is going to be fragmented I would rather it not be the clustered index, because that would really slow down inserts. In other words, I want rows inserted sequentially as much as possible.
However, I do NOT want the clustered index propagated to the conceptual layer in EF. The clustered index simply determines the physical location of the record, and the programmers don't need to know anything about it. As far as they are concerned, they are only dealing with the Guid PK. So I removed the property from the models.
The project compilation complains, however, that the clustered index column is non-nullable and has no default value, both of which are nonsensical objections considering it is an identity column and can neither have a default value nor be nullable.
What can I do to get the project to compile?
Note: I do not want a debate about Guid vs. Sequential Guid vs. Int Id. This system must be able to scale out, and that means a Guid PK as far as I'm concerned.
You should check that the property's EntityKey value is set to true in the EDMX.
I am using the Entity Framework Database First approach.
I have a table with a composite primary key on:
ID (int, identity increment),
HashKey (binary), auto-generated from multiple columns using SQL HASHBYTES.
The EF column mapping is as follows:
ID: storeGeneratedPattern="Identity"
HashKey (binary): storeGeneratedPattern="Computed"
When I try to save using EF's SaveChanges method, it throws the exception below.
"Modifications to tables where a primary key column has property 'StoreGeneratedPattern' set to 'Computed' are not supported. Use 'Identity' pattern instead. Key column: 'HashKey'. Table"
I applied the composite primary key on these columns (ID, HashKey) to make searches faster, since it carries the clustered index. But I am not sure whether EF supports this.
I have seen the link below, but I am not sure about the solution.
Property with StoreGeneratedPattern set to Identity is not updated after SaveChanges()
Can anybody help me resolve this issue?
'Computed' means EF expects SQL Server to generate the value after every insert/update. Therefore it doesn't make sense for it to be part of the PK.
You can just leave the identity column as the PK and still create a clustered index on the columns (ID, HashKey).
Having said that, it also doesn't make sense to include a computed column in a clustered index. Every time the computed column is changed, the entire row needs to be moved to the new position.
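A sketch of that first suggestion, assuming a table called dbo.YourTable with the ID and HashKey columns from the question (the constraint and index names are made up):

-- Keep the identity column as the (nonclustered) primary key
ALTER TABLE dbo.YourTable
    ADD CONSTRAINT PK_YourTable PRIMARY KEY NONCLUSTERED (ID);

-- Put the clustered index on both columns for the search pattern described
CREATE CLUSTERED INDEX CIX_YourTable_ID_HashKey
    ON dbo.YourTable (ID, HashKey);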
Used SQL Server = MySQL
Programming language = irrelevant, but I stick to Java and C#
I have a theoretical question regarding the best way to go about primary key generation for SQL databases which are then used by another program that I write (let's assume it is not web-based).
I know that a primary key must be unique, and I prefer primary keys where I can also immediately tell where they come from when I see them, whether in my Eclipse or Windows console when I use the database or in relationship tables. For that reason, I generally create my own primary key as an alphanumeric string, unless a specific unique value such as an ISBN or Social Security number is available. For a table Actors, a primary key could then look like a1, and in a table Movies, m1020 (assuming titles are not unique, such as different versions of the movie 'Return to Witch Mountain').
So my question is: how is a primary key best generated (in my program, or in the db itself as a procedure)? For such a scheme, is it best to use two columns, one with a constant string such as 'a' for actors and one with a running count? (In that case I need to research how to reference a table whose PK spans multiple columns.) What is the most professional way of handling such a task?
Thank you for your time.
A best practice is to let your DB engine generate the primary key as an auto-increment number. Alphanumeric strings are not a good way, even if a plain number seems too "abstract" to you. Then you don't have to worry about the primary key in your program (Java, C#, anything else); for each row inserted into your database, a unique primary key is automatically assigned.
By the way, with your solution I'm not sure you handle the case where two rows are inserted simultaneously... Are you sure that in absolutely no case can your primary key be duplicated?
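For example, in MySQL that is nothing more than this (table and column names are illustrative):

CREATE TABLE Actors (
    id   INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

-- No key value is supplied; MySQL assigns the next id automatically
INSERT INTO Actors (name) VALUES ('Some Actor');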
Your first line says:
SQL Server = MySQL
That's not true. They are different products.
how is a primary key best generated (in my program or in the db itself as a procedure)?
MySQL generates primary key values for you when you declare the column with a PRIMARY KEY constraint and the AUTO_INCREMENT attribute; the values are then generated and incremented automatically.
If you want your primary key to be alphanumeric (which I personally would not recommend), then you could try something like this:
CREATE TABLE A (
    id INT NOT NULL AUTO_INCREMENT,
    prefix CHAR(30) NOT NULL,
    PRIMARY KEY (id, prefix)
);
I would recommend having an integer primary key, as that makes your selections simpler and more efficient. For MyISAM tables you can create a multi-column index and put the AUTO_INCREMENT field on a secondary column, as sketched below.
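A sketch of that MyISAM variant (illustrative table and column names): with the AUTO_INCREMENT column second in the key, MySQL keeps a separate running count per prefix.

CREATE TABLE Items (
    prefix CHAR(1)      NOT NULL,   -- e.g. 'a' for actors, 'm' for movies
    id     INT          NOT NULL AUTO_INCREMENT,
    name   VARCHAR(255) NOT NULL,
    PRIMARY KEY (prefix, id)
) ENGINE = MyISAM;

-- Each prefix gets its own sequence: ('a', 1), ('a', 2), ('m', 1), ...
INSERT INTO Items (prefix, name) VALUES ('a', 'Some Actor');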
For MySQL there's a best way: set the AUTO_INCREMENT attribute on your primary key field.
You can get the generated id afterwards with the LAST_INSERT_ID() function or its Java or C# analog.
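For instance (assuming an Actors table with an AUTO_INCREMENT id and a name column):

INSERT INTO Actors (name) VALUES ('Another Actor');

-- Returns the id generated by the last insert on this connection
SELECT LAST_INSERT_ID();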
I don't know why you would use "alphanumeric" values - why not just a plain number?
Anyway, use whatever auto-increment functionality is available in whichever DB-system you are using, and stick with that. Do not create primary keys outside of the DB - you can't know when / how two systems might access the DB at the same time, which could cause problems if the two create the same PK value, and attempt to insert it.
Also, in my view, a PK should just be an ID (in a single column) for a specific row, and nothing more - if you need a field indicating that a record concerns data of type "actor" for instance, then that should be a separate field, and have nothing to do with the primary key (why would it?)
My company is currently in the process of rewriting an application that we recently acquired. We chose to use ASP.NET MVC 4 to build this system, with Entity Framework as our ORM. The previous owner of the company we acquired is very adamant that we use their old database and not change anything about it, so that clients can use our product concurrently with the old system while we are developing the different modules.
I found out that the old table structures do not have a primary key; rather, they use a unique index to serve as the primary key. When using Entity Framework I have tried to match their table structure, but have been unable to do so, as EF generates a primary key instead of a unique index.
When I contacted the previous owner, and explained it, he told me that "the Unique key in every table is the Primary Key. They are synonyms to each other."
I am still relatively new to database systems so I am not sure if this is correct. Can anyone clarify this?
His table, when dumped to SQL, generates:
-- ----------------------------
-- Indexes structure for table AT_APSRANCD
-- ----------------------------
CREATE UNIQUE INDEX [ac_key] ON [dbo].[AT_APSRANCD]
([AC_Analysis_category] ASC, [AC_ANALYSI_CODE] ASC)
WITH (IGNORE_DUP_KEY = ON)
GO
However, my system generates:
-- ----------------------------
-- Primary Key structure for table AT_APSRANCD
-- ----------------------------
ALTER TABLE [dbo].[AT_APSRANCD] ADD PRIMARY KEY ([AC_Analysis_category])
GO
EDIT:
A follow-up question: how would I go about designing the models for this? I am only used to the [Key] annotation, which defines a primary key, and without it EF will not generate the table.
So, something like this:
[Table("AT_APSRANCD")]
public class Analysis
{
[Key]
public string AnalysisCode { get; set; }
public string AnalysisCategory { get; set; }
public string ShortName { get; set; }
public string LongName { get; set; }
}
From SQL UNIQUE Constraint
The UNIQUE constraint uniquely identifies each record in a database table. The UNIQUE and PRIMARY KEY constraints both provide a guarantee for uniqueness for a column or set of columns. A PRIMARY KEY constraint automatically has a UNIQUE constraint defined on it. Note that you can have many UNIQUE constraints per table, but only one PRIMARY KEY constraint per table.
Also, from Create Unique Indexes
You cannot create a unique index on a single column if that column contains NULL in more than one row. Similarly, you cannot create a unique index on multiple columns if the combination of columns contains NULL in more than one row. These are treated as duplicate values for indexing purposes.
Whereas from Create Primary Keys
All columns defined within a PRIMARY KEY constraint must be defined as NOT NULL. If nullability is not specified, all columns participating in a PRIMARY KEY constraint have their nullability set to NOT NULL.
They're definitely different. As mentioned in other answers:
A unique key is used just to enforce uniqueness and nothing else.
A primary key acts as the identifier of the record.
Also, what's important is that the primary key is usually the clustered index. This means that the records are physically stored in the order defined by the primary key. This has big consequences for performance.
Also, the clustered index key (which is most often also the primary key) is automatically included in all other indexes, so getting it doesn't require a record lookup, just reading the index is enough.
To sum up, always make sure you have a primary key on your tables. Indexes have a huge impact on performance and you want to make sure you get your indexes right.
They are most certainly not the same thing.
A primary key must be unique, but that is just one of its requirements. Another is that it cannot be null, which is not required of a unique constraint.
Also, while, in a way, unique constraints can be used as a poor man's primary keys, using them with IGNORE_DUP_KEY = ON is plainly wrong. That setting means that if you try to insert a duplicate, the insertion will fail silently.
Well, they are very similar but here are the differences.
Only one primary key is allowed on a table, but multiple unique indexes can be added, up to the maximum allowed number of indexes for the table (earlier SQL Server versions: 250, i.e. 1 clustered and 249 nonclustered; SQL Server 2008 and SQL Server 2012: 1000, i.e. 1 clustered and 999 nonclustered).
Primary keys cannot contain nullable columns, but unique indexes can. Note that only one NULL is allowed; if the index is created across multiple columns, each combination of values and NULLs must be unique.
By default, unless you specify otherwise in the CREATE statement, and provided a clustered index does not already exist, the primary key is created as a clustered index. Unique indexes, however, are created as nonclustered by default unless you specify otherwise (and, again, a clustered index must not already exist for a clustered one).
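A small T-SQL illustration of those defaults (table and index names are made up):

CREATE TABLE dbo.Demo
(
    Id     INT NOT NULL,
    AltKey INT NULL,                       -- nullable: allowed in a unique index, never in a PK
    CONSTRAINT PK_Demo PRIMARY KEY (Id)    -- becomes the clustered index by default
);

-- Created as nonclustered by default; at most one row may have AltKey = NULL
CREATE UNIQUE INDEX UX_Demo_AltKey ON dbo.Demo (AltKey);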
The following link will really help you; just go through it:
HERE
Yes, a composite unique key like you have here will give you an index very much like the primary key. One of the advantages is that the data is contained in the index, so no lookup into the table is needed if you are only querying the fields in the key.
This is also possible in Entity Framework. It would go something like this.
public class AT_APSRANCD
{
    // Composite key: both columns are marked [Key] with an explicit column order
    [Key, Column(Order = 0)]
    public int AC_Analysis_category { get; set; }

    [Key, Column(Order = 1)]
    public int AC_ANALYSI_CODE { get; set; }
}
A primary key cannot contain any NULL value. With a unique constraint, however, NULLs can be inserted into the table; depending on the database, more than one NULL may be allowed. In short, the definition of a primary key is: PRIMARY KEY = UNIQUE + NOT NULL.
I have a requirement to store the list of services on multiple computers. I thought I would create one table to hold a list of all possible services, a table for all possible computers, and then a table to link a service to a computer.
To keep the full services list unique, I was thinking I could possibly use a hash of the executable as the primary key for the service, but I'm not sure whether there would be any downsides to this (note that the hashing is only for identification, not for any kind of security purpose). Rather than using a binary field as the primary/foreign key, I was thinking I would store the value as a Base64-encoded SHA-512 hash, using an nvarchar(88). Something similar to this:
CREATE TABLE Services
(
ServiceHash nvarchar(88) NOT NULL,
ServiceName nvarchar(256) NOT NULL,
ServiceDescription nvarchar(256),
PRIMARY KEY (ServiceHash)
)
Are there any inherent problems with this solution? (I will be using a SQL Server 2008 database, generally accessed via C#/.NET.)
The problem is that a hash is by definition NOT UNIQUE. It is unlikely that you will get a collision, but it IS possible. As a result, you cannot rely on the hash alone, which means the whole hash-as-id approach is a dead end.
Use a normal ID field, plus a unique constraint (with its index) on the ServiceName, as in the sketch below.
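A sketch of that, reusing the column definitions from the question (the constraint names are made up):

CREATE TABLE Services
(
    ServiceId          INT IDENTITY(1,1) NOT NULL,
    ServiceHash        NVARCHAR(88)  NOT NULL,
    ServiceName        NVARCHAR(256) NOT NULL,
    ServiceDescription NVARCHAR(256),
    CONSTRAINT PK_Services PRIMARY KEY (ServiceId),
    CONSTRAINT UQ_Services_ServiceName UNIQUE (ServiceName)
);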
From a performance point of view, having a non-incremental primary key would cause your clustered index to get fragmented rather quickly.
I recommend either:
Use an INT or BIGINT surrogate PK, with auto-increment.
Use a sequential GUID as a PK. Not as fast for indexing as an INT, but incremental, and therefore low fragmentation over time.
You can then play with non-clustered indexes on your other columns, including the one storing the hashes. Being NVARCHAR, you can also full-text index it and then do exact matching when looking for a specific hash.
But, if possible, use a numerical hash instead and make a non-clustered index on it.
And of course, consider what #TomTom mentioned below.
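For illustration, the second option could look something like this (constraint, default, and index names are made up):

CREATE TABLE Services
(
    ServiceId          UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Services_ServiceId DEFAULT NEWSEQUENTIALID(),  -- sequential GUID: low fragmentation
    ServiceHash        NVARCHAR(88)  NOT NULL,
    ServiceName        NVARCHAR(256) NOT NULL,
    ServiceDescription NVARCHAR(256),
    CONSTRAINT PK_Services PRIMARY KEY CLUSTERED (ServiceId)
);

-- Non-clustered index for looking up a specific hash
CREATE NONCLUSTERED INDEX IX_Services_ServiceHash ON Services (ServiceHash);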
What is the best approach when generating a primary key for a table?
That is, when the data received by the database is not unique (not injective) and so can't be used as a primary key.
In the code, what is the best way to manage a primary key for the table rows?
Thanks.
First recommendation: stay away from uniqueidentifier for any primary key. Although it has some interesting, easy ways to be generated client side, it makes it almost impossible to have any useful indexes on the primary key. If I could go back in time and ban uniqueidentifiers from 99% of the places they have been used, this would have saved more than 3 man-years of DBA/development time over the last 2 years.
Here is what I would recommend: use an INT IDENTITY as the primary key.
create table YourTableName (
    pkID int not null identity primary key
    -- ... the rest of the columns declared next
);
where pkID is the name of your primary key column.
This should do what you are looking for.
AUTO_INCREMENT in MySQL, IDENTITY in SQL Server.
And if you need to know what your new ID was while INSERTing data, use the OUTPUT clause of the INSERT statement, so that a copy of the new rows is written to a table variable or table-valued parameter.
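A sketch of that, using the YourTableName table from the earlier answer and assuming it has a column called SomeColumn:

-- Collect the generated keys in a table variable
DECLARE @NewIds TABLE (pkID INT);

INSERT INTO YourTableName (SomeColumn)
OUTPUT INSERTED.pkID INTO @NewIds (pkID)   -- copy of each new key goes into the table variable
VALUES ('example value');

SELECT pkID FROM @NewIds;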
If for some reason generating the unique ID in SQL is not suitable for you, generate GUIDs in your app. A GUID has a very high level of uniqueness (though it is not strictly guaranteed). And SQL Server has a dedicated GUID column type: it's called uniqueidentifier.
http://msdn.microsoft.com/en-us/library/ms187942.aspx