BACKGROUND TO THE DOMAIN
I have a .NET application with a SQL Server database underneath. Each customer has their own database.
In that database, I have a table called Label which is empty on new installations, and gets populated when the user creates their own labels.
NEW REQUIREMENT
There is a new requirement that when the database is patched for the next release, we should add some default labels that come with the system.
These default labels must have the same ID in every customer database they are added to (via SQL patching on upgrade), i.e. they must be 'static data'.
ATTEMPT 1
I tried giving these 'default labels' enormous ID numbers, e.g. above 200,000. We 'know' with reasonable certainty that no customer will ever have more than 5 - 10 rows in this table, so since the ID column is of type identity (1, 1), no customer will already have used those IDs.
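For illustration, the upgrade patch I had in mind looked roughly like this (the Name column and the label values are just placeholders):

-- Force specific, very large IDs into the identity column so they cannot
-- collide with customer-created rows.
SET IDENTITY_INSERT dbo.Label ON;

INSERT INTO dbo.Label (Id, Name)
VALUES (200001, N'Default label A'),
       (200002, N'Default label B');

SET IDENTITY_INSERT dbo.Label OFF;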
But I've been told we want to avoid this because it seems like bad practice to mix static and dynamic data in one table.
ATTEMPT 2
I tried adding a new identical table called StaticLabel, also with an ID column of type int identity(1, 1).
I added a corresponding entity in the application (StaticLabelEntity.cs), an entity map (StaticLabelEntityMap.cs, using Fluent NHibernate) and a repository (StaticLabelRepository.cs).
Now, in the existing LabelService.cs code that retrieves labels from the database (via the repository), I tell it to get both the Labels and StaticLabels, and combine them into one list of ILabel.
This works fine when viewing the labels configuration in my application.
When it comes to assigning labels to an author (an author can have many labels), there's already an AuthorLabel table with a foreign key to the Author table (AuthorId) and one to the Label table (LabelId).
I guess we need to add another column to AuthorLabel that is a foreign key to StaticLabel, and also add a check constraint so that exactly one of LabelId or StaticLabelId is populated.
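Something along these lines is what I have in mind (assuming the new column is called StaticLabelId; the constraint names are placeholders, and LabelId would also have to be made nullable):

-- Add the second foreign key column pointing at StaticLabel.
ALTER TABLE dbo.AuthorLabel ADD StaticLabelId int NULL;

ALTER TABLE dbo.AuthorLabel ADD CONSTRAINT FK_AuthorLabel_StaticLabel
    FOREIGN KEY (StaticLabelId) REFERENCES dbo.StaticLabel (Id);

-- Require exactly one of the two label references to be populated.
ALTER TABLE dbo.AuthorLabel ADD CONSTRAINT CK_AuthorLabel_ExactlyOneLabel
    CHECK ((LabelId IS NULL AND StaticLabelId IS NOT NULL)
        OR (LabelId IS NOT NULL AND StaticLabelId IS NULL));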
QUESTION
Does what I've done in Attempt #2 sound like a good idea? It seems a bit weird, and like there might be some better way that I haven't heard of due to lack of experience.
It starts to get weird in the code if I do this, because I end up with two properties on the AuthorLabel entity (a Label and a StaticLabel) where one is always null.
This will then propagate through the code and I'll end up with lots of 'if staticlabel property is not null, then x, else y' etc - it feels a bit messy.
Related
I have a .NET App connected to a Postgres DB using Npgsql and I am trying to import data into two tables, say Users and Todos. A user has many todos. The User table has an id column that is automatically set by the DB, and the Todos table has a foreign key to the Users table called user_id.
Now, I know how to insert Users, and I know how to insert Todos, but I do not know how to set the user_id for those Todos since the id column from User is only known after the users are inserted into the DB. Any idea?
This depends on how you are importing and which tool you are using. If you are using raw INSERT statements, PostgreSQL has a RETURNING clause which will send you back the IDs of the inserted rows (see the docs).
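For example (table and column names are only illustrative):

-- RETURNING hands back the generated id of each inserted row, which the
-- application can then use as user_id when inserting the matching todos.
INSERT INTO users (name)
VALUES ('alice'), ('bob')
RETURNING id;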
If you are using binary COPY (which is the most efficient way to bulk-import data), there's no such option. In this case, one good way is to "allocate" all the ids in one go, by incrementing the sequence backing the ID column, and then sending the IDs when you're importing. This means the database is no longer generating those IDs - you're sending them explicitly like any other field.
In practical terms, say you have 100 users (and any number of todos). You can make one call to setval to advance the sequence by 100, and then import your users, explicitly setting their IDs to those 100 values. This also lets you set the user IDs on the todos. However, if you do this, be mindful of concurrency issues if someone else modifies the sequence at the same time.
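A rough sketch of that allocation step, assuming the sequence backing users.id is called users_id_seq:

-- Reserve 100 ids in one round trip: setval moves the sequence forward and
-- returns the new current value, so the reserved block is
-- (new_value - 99) .. new_value.
SELECT setval('users_id_seq', nextval('users_id_seq') + 99);

-- The application assigns those ids to the users it is about to COPY,
-- and uses the same values as user_id on the todos rows.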
My problem is the following: I map my view to an object through the Entity Framework fluent API. I needed a view containing a few left joins, and there was no unique identifier in the tables, therefore Entity Framework always returned the same set of objects. In a few different threads/blogs, I saw a solution consisting of adding a column with
ROW_NUMBER() OVER (ORDER BY Id)
I then tried to map it in Entity :
in my class I add a property
public long Row { get; set; }
and in my configuration class I add
HasKey(imc => imc.Row);
Property(imc => imc.Row).HasColumnName("Row");
Apparently, the mapping works. What doesn't work is that, when I query the objects with LINQ, even a Count() will time out; however, the query itself only returns about 200 rows when run in SQL Server Management Studio.
Has anyone ever seen this issue ?
EDIT:
I have been able to bypass the problem by replacing the "row_number()" with a newid() in the MS SQL View, but I'm still afraid it might be a problem later on.
Your query is slow, which causes the timeout. About 1 million people have seen this before. You would need to analyze the query plan. Computing a row number over the whole table can be slow if it is unindexed. Also, a row number cannot be used as a key because its values change when you change the underlying data. EF does not support changing keys.
If you use newid() as the "key" in the view then you get fresh IDs each time. I think you might not be aware of the fact that a view is merely a shortcut for that particular query. Its contents are not stored anywhere.
Introduce a column that can be used as a key. For example an IDENTITY column.
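For example (the table, view, and column names below are hypothetical; the idea is just to key the view on a stable column from its driving table):

-- Give the driving table of the view a stable surrogate key.
ALTER TABLE dbo.MainTable ADD MainTableId int IDENTITY(1,1) NOT NULL;
GO

-- Expose it through the view so EF can map it as the entity key.
ALTER VIEW dbo.MyMappedView AS
SELECT m.MainTableId,
       m.SomeColumn,
       o.OtherColumn
FROM dbo.MainTable AS m
LEFT JOIN dbo.OtherTable AS o
    ON o.SomeKey = m.SomeKey;   -- whatever the original join condition was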
I have a Windows Forms program in C# that adds a record to the database and can remove it.
In the database I have an ID column (which is AutoNumber), but if I delete a record and then add another record in its place, the AutoNumber keeps increasing and doesn't fill in the missing numbers.
I mean that if I have 9 records in my Access database and I remove one, leaving 8, then when I add a new record I get 10 instead of 9.
Is there any solution for that?
If it's an AutoNumber, the database will generate a number greater than the last one used - this is how relational databases are supposed to work. Why would there be a solution for this? Imagine deleting 5 - what would you want to do then, have the AutoNumber create the next record as 5? If you are displaying an ID in your C# app - bad idea - then change this to some other value that you can control as you wish.
However, what you are trying to achieve does not make sense.
if i delete a record and if i want to add another record instead, the Auto Number increases and doesn't add the missing numbers.
[...]
Is there any solution for that?
The short answer is "No". Once used, AutoNumber values are typically never re-used, even if the deleted record had the largest AutoNumber value in the table. This is due (at least in part) to the fact that the Jet/Ace database engine has to be able to manage AutoNumber values in a multi-user environment.
(One exception to the above rule: if the Access database is compacted, the next available AutoNumber value for a table with a sequential AutoNumber field is reset to Max(current_value)+1.)
For more details on how AutoNumber fields work, see my other answer here.
In MS Access, there is no solution for this. In SQL Server, however, you can generate your own key values (with a function of your own, for example) rather than using an identity column.
I have an application running that has entities that might be: CustomerType1, CustomerType2, and CustomerType3.
All three CustomerType entities might have completely different information, but they all have a CustomerID field which is an integer.
I am trying to figure out how to set things up so that no matter which type is created, the CustomerID will always be unique across all three types, and remain an integer.
For example, creating entities in the following order would result in the following CustomerIDs:
CustomerType1 - 1
CustomerType1 - 2
CustomerType1 - 3
CustomerType2 - 4
CustomerType1 - 5
CustomerType3 - 6
CustomerType1 - 7
What is the best way to approach this?
Two possible approaches:
Use a single table for all of your customer types with a "discriminator" field to track each customer type, and include the CustomerID as its identity.
Use an external table to manage creating the CustomerID as an identity field.
The first approach has the advantage of having direct support in Entity Framework, as outlined in the following tutorial:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-inheritance-with-the-entity-framework-in-an-asp-net-mvc-application
Note that although your customer types may contain different data, this approach could end up being cheaper in the long run in terms of scalability despite the wasted database space. In your business layer, the customer types could simply ignore the fields that don't pertain to them.
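A sketch of the first approach in plain SQL (the type-specific column names are hypothetical; EF's table-per-hierarchy mapping produces a similar shape):

-- One Customer table holding all three types; CustomerID is the identity,
-- so it is unique across every type by construction.
CREATE TABLE Customer
(
    CustomerID     int IDENTITY(1,1) PRIMARY KEY,
    CustomerType   tinyint NOT NULL,        -- discriminator: 1, 2 or 3
    -- columns used by only one type are simply nullable:
    Type1OnlyField nvarchar(100) NULL,
    Type2OnlyField nvarchar(100) NULL,
    Type3OnlyField nvarchar(100) NULL
);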
The second approach would probably be best suited to adding onto existing applications that are too difficult to change. In the long run, there is more work involved with keeping track of the IDs this way. For one, your business layer will need to fetch an ID from one table in order to insert into another table, which can be expensive for large datasets. Depending on the requirements of the business layer, there may also be scenarios where you have to discard an unused CustomerID and it would simply not exist in the system (you would skip from CustomerID 58 to CustomerID 60 for example).
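And a sketch of the second approach (table and column names are hypothetical):

-- External key-allocation table: every new customer of any type first claims
-- its CustomerID here, then inserts into its own table using that value.
CREATE TABLE CustomerKey
(
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CustomerType tinyint NOT NULL
);

DECLARE @NewCustomerId int;

INSERT INTO CustomerKey (CustomerType) VALUES (2);  -- a new CustomerType2
SET @NewCustomerId = SCOPE_IDENTITY();

-- CustomerType2.CustomerID is a plain int primary key, not an identity.
INSERT INTO CustomerType2 (CustomerID, SomeField)   -- 'SomeField' is a placeholder
VALUES (@NewCustomerId, N'example');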
Your approach is similar to having the same entity type in several networked databases, with unique IDs across all of them.
Usually, many developers use a UID or related type: a type with 64, 128, or more bits, generated randomly and automatically. If you use the same database on different machines, it's almost impossible to get the same value twice.
Some databases store that value as a string, instead of an integer.
If you only had a single table with integer keys, how would you generate the primary key value? Automatically? Or do you generate the value in code and later assign it to the primary key field?
Solution 1
If your database supports U.I.D.s, O.I.D.s, or unique identifiers that are generated automatically as integers, use them.
Solution 2
If your database supports U.I.D.s or O.I.D.s as varchar/string, or the database engine or program that you use has a function that generates U.I.D.s, you may use that function, cast the result value from string to integer (stripping separators like "-"), and store it in an integer primary key field.
Summary
Many developers prefer to let the database engine generate the primary key automatically when inserting a new record. In cases like this, it's better to generate the primary key in code and assign it directly. Since you are using Entity Framework, I don't know how that library handles primary keys.
Cheers.
I have several tables within my database that contains nothing but "metadata".
For example we have different grouptypes, contentItemTypes, languages, etc.
The problem is, if you use automatic numbering then it is possible that you create gaps.
The IDs are used within our code, so the numbers are very important.
Now I wonder if it isn't better not to use autonumbering within these tables?
Now we have to create the row in the database first, before we can write our code, and in my opinion this should not be the case.
What do you guys think?
I would use an identity column, as you suggest, to be your primary key (surrogate key), and then make your candidate key (the identifier from your system) a standard column with a unique constraint applied to it. This way you can ensure you do not insert duplicate records.
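A minimal sketch of what I mean (the names are placeholders):

-- Surrogate identity primary key, plus a unique constraint on the code value
-- the application hard-codes, so duplicates are rejected by the database.
CREATE TABLE GroupType
(
    GroupTypeId   int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    GroupTypeCode int NOT NULL,                   -- the id used in code
    Description   varchar(100) NOT NULL,
    CONSTRAINT UQ_GroupType_Code UNIQUE (GroupTypeCode)
);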
Make sense?
If these are FK tables used just to expand codes into a description or to hold other attributes, then I would NOT use an IDENTITY. Identity columns are good for ever-growing user data; metadata tables are usually static. When you deploy an update to your code, you don't want to be surprised by an IDENTITY value different from what you expect.
For example, you add a new value to the "Languages" table and expect its ID to be 6, but for some reason (development is out of sync, another person has not yet added their language type, etc.) the next identity you get is different, say 7. You then insert or convert a bunch of rows using Language ID=6, which all fail because it does not exist (it is 7 in the metadata table). Worse yet, they all actually insert or update because the value 6 you thought was yours was already in the metadata table, and you now have two items sharing the same value 6, while your new value 7 is left unused.
I would pick the proper data type based on how many codes you need and how often you will need to look at the raw data (CHARs are nice to look at for a few values, and small types help with memory).
For example, if you only have a few groups, and you'll often look at the raw data, then a char(1) may be good:
CREATE TABLE GroupTypes
(
    GroupType            char(1) NOT NULL PRIMARY KEY,  -- 'M'=manufacturing, 'P'=purchasing, 'S'=sales
    GroupTypeDescription varchar(100) NOT NULL
);
However, if there are many different values, then some form of int (tinyint, smallint, int, bigint) may do it:
CREATE TABLE EmailTypes
(
    EmailType            smallint NOT NULL PRIMARY KEY,  -- 2 bytes, up to 32k different positive values
    EmailTypeDescription varchar(100) NOT NULL
);
If the numbers are hardcoded in your code, don't use identity fields. Hardcode them in the database as well; that way they'll be less prone to changing because someone scripted the database badly.
I would also use an identity column as the primary key, just for the sake of simplicity when inserting records into the database, but then use a column for the type of metadata (I call mine LookUpType, an int), as well as a column for the id value used in code or in select lists (LookUpId, an int) and a name column (LookUpName, a string). If those values require additional settings, so to speak, use extra columns. I personally use two extras: LookUpKey for hierarchical relations, and LookUpValue for abbreviations or alternate values of LookUpName.
Well, if those numbers are important to you because they'll be in code, I would probably not use an IDENTITY.
Instead, just make sure you use an INT column and make it the primary key - in that case, you will have to provide the IDs yourself, and they'll have to be unique.
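For example, a seed script along these lines (the table and values are only illustrative):

-- No identity: the script supplies the exact ids that the code expects.
CREATE TABLE Languages
(
    LanguageId int NOT NULL PRIMARY KEY,
    Name       varchar(50) NOT NULL
);

INSERT INTO Languages (LanguageId, Name)
VALUES (1, 'English'),
       (2, 'French'),
       (3, 'German');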