We have many lookup tables in the system, and if a lookup table row is already referenced by other tables, we shouldn't be allowed to update or delete its "value" column, e.g. EnrollStatusName in the table below.
Example:

Lookup table: EnrollStatus

ID | EnrollStatusName
---+-----------------
 1 | Pending
 2 | Approved
 3 | Rejected
Other table: UserRegistration

URID | EnrollStatusID (FK)
-----+--------------------
  11 | 1
  12 | 1
  13 | 2
Given this data, I can currently edit lookup table row 3, since it isn't referenced anywhere.
The solution that comes to my mind is to add a read-only column to the lookup table and, whenever there is DML against the UserRegistration table, set the read-only column to true. Is there a better approach than this? It can be handled either in application code or in SQL, hence I'm tagging c# as well to learn the possibilities.
Delete is easy: just establish a foreign key relationship to the other table, without ON DELETE CASCADE or SET NULL. It is then no longer possible to delete an in-use row, because it has dependent rows in other tables.
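For instance, a minimal sketch using the tables above (the constraint name is made up, column types are assumed):

ALTER TABLE UserRegistration
ADD CONSTRAINT FK_UserRegistration_EnrollStatus
    FOREIGN KEY (EnrollStatusID) REFERENCES EnrollStatus (ID);
-- no ON DELETE CASCADE / SET NULL, so the default NO ACTION applies:
-- deleting a referenced EnrollStatus row now fails with a constraint error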
Update is perhaps trickier. You can use the same mechanism, and I think it's neatest: instead of doing the update as an UPDATE, do it as a DELETE and INSERT. If the row is in use, the foreign key will prevent the delete.
Belayer pointed out in the comments that you can use UPDATE as well; you'll have to include the PK column in the list of columns you set, and you can't set it to the value it already has, nor to a value that is already in use. You'll probably need a strategy like two updates in a row if you want to keep a controlled list of IDs:
UPDATE EnrollStatus SET id=-id, EnrollStatusName='whatever' WHERE id=3
UPDATE EnrollStatus SET id=-id WHERE id=-3
A strategy of flipping the ID negative and then back positive will only work if the row is not in use; if it is in use, the first statement will error out.
If you don't care that your PKs end up a mix of positive and negative values (and you shouldn't, though people do seem to care more than they should about what values PKs have), you can forgo the second update; you can always insert new values as positive and incrementing, and flip-flop them while they're being edited, before they're brought into use.
Related
I am updating a single column in a table using LINQ; take the fictitious table below.
MyTable (PKID, ColumnToUpdate, SomeRandomColumn)
var row = (from x in DataContext.MyTable
           where x.PKID == 5
           select x).FirstOrDefault();
row.ColumnToUpdate = 20;
DataContext.SubmitChanges();
This updates the column as expected; no surprises here. However, when I inspect the SQL commands that are generated, I see this:
UPDATE [dbo].[MyTable]
SET [ColumnToUpdate] = @p2
WHERE ([PKID] = @p0) AND ([SomeRandomColumn] = @p1)
The update is performed only if all columns still match the values Entity expects them to have, rather than filtering on the primary key column on its own.
It is very feasible in this particular project that a column is changed by another process: there is a window between getting the row you want to manipulate, calculating the changes you would like to make, and issuing the update command as a batch of rows. In this situation the query throws an exception, causing a partial update, unless I trap it, reload the data, and resend individual queries. It also has the downside that the row data can be quite large (containing HTML markup, for instance), and passing the whole thing to SQL slows the system down when larger batches are processed.
Is there a way of making LINQ / Entity issue update commands based only on the PK column in the WHERE clause?
I never used LINQ to SQL in production projects, and I wasn't aware that it applies optimistic concurrency [1] by default.
This is the default behavior:
If a table doesn't have a Timestamp/Rowversion column [2], all updateable columns (i.e. all columns except primary key columns and computed columns) have "Update Check" set to "Always" in the DBML.
If a table does have a Timestamp/Rowversion column, this column has "Time Stamp" set to "True" in the DBML and all columns have "Update Check" = "Never".
Either "Update Check" or "Time Stamp" mark a column as concurrency token. That's why in update statements you see these additional predicates on (not so) "random" columns. Apparently, the tables in your model didn't have Timestamp/Rowversion columns, hence an update checks the values of all updateable columns in the table.
[1] Optimistic concurrency: no exclusive locks are taken when updating records, but the existing values of all or selected columns are checked during the update. If one of those column values was changed by another user between reading the data and saving it, an update exception occurs.
[2] A column of data type Timestamp or Rowversion is automatically incremented when a record is updated and therefore detects all concurrent changes to that record.
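As a hedged illustration (the table and column names are made up), giving a table a Rowversion column switches the generated predicate from checking every updateable column to a single version check:

ALTER TABLE dbo.MyTable ADD RowVer rowversion;
-- with "Time Stamp" = "True" in the DBML, the generated update becomes
-- something like:
--   UPDATE [dbo].[MyTable] SET [ColumnToUpdate] = @p1
--   WHERE ([PKID] = @p0) AND ([RowVer] = @p2)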
In my database, there is a table which essentially contains questions with their options and answers. The first field is questionid and is the primary key, as expected (I've disabled AUTO_INCREMENT for now). It's possible that my client wants to delete some questions. This leaves me with two options:
All subsequent questions move up so that there is no empty row. This option implies that those questions will have their question IDs changed.
Leave it as it is, so there will be empty rows. If there's a new entry, it should fill the first empty row.
How do I go about implementing any of them? I prefer the second, actually, but if anyone has a different opinion, it's welcome.
I'm using a MySQL database and C#.
You are using a database so you don't have to worry about these issues.
There is no concept of "empty" row in a SQL table (well, one could say if all the columns are NULL then the row is empty, but that is not relevant here). Rows in a SQL table are not inherently ordered.
The rows themselves are stored on pages, which may or may not have extra space for more rows. This may be what you are thinking of when you think of an empty row.
When a row is deleted, the data is not rearranged. There is just some additional space on the page in case a new row is added later. If you add in a new row with a primary key between two existing rows, and the page is full, then the database "splits" the page into two. The two other pages have extra space.
The important point, though, is not how this works. One reason you are using a relational database for your application is so you can add and delete rows without having to worry about their actual physical storage.
If you have a database that has lots of transactions -- deletions and insertions -- then you may want to periodically rearrange the data so it fits better on the pages. Such optimizations though are usually necessary only when there is a high volume of such transactions.
One thing, though. Your application should not depend on the primary keys being sequential, so it can handle deletes correctly.
I am not sure how you have implemented it. I would have done it this way:
questions
    question_id - pk
    question

answers
    answer_id - pk
    answer

question_answer
    question_id
    answer_id
This gives more flexibility: many questions can share the same answer, and if a question is deleted, you can delete its rows from the question_answer table along with it.
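A minimal MySQL sketch of this layout (column types are assumed):

CREATE TABLE questions (
    question_id INT PRIMARY KEY,
    question    TEXT NOT NULL
);

CREATE TABLE answers (
    answer_id INT PRIMARY KEY,
    answer    TEXT NOT NULL
);

CREATE TABLE question_answer (
    question_id INT NOT NULL,
    answer_id   INT NOT NULL,
    PRIMARY KEY (question_id, answer_id),
    FOREIGN KEY (question_id) REFERENCES questions (question_id),
    FOREIGN KEY (answer_id)   REFERENCES answers (answer_id)
);

-- deleting a question removes its links first, then the question itself
DELETE FROM question_answer WHERE question_id = 3;
DELETE FROM questions WHERE question_id = 3;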
I have a unique constraint on a Navigations table's column called Index. I have two Navigation entities and I want to swap their Index values.
When I call db.SaveChanges it throws an exception indicating that a unique constraint was violated. It seems EF is updating one value and then the other, thus violating the constraint.
Shouldn't it be updating them both in a transaction and then trying to commit once the values are sorted out and not in violation of the constraint?
Is there a way around this without using temporary values?
It is not a problem of EF but of the SQL database, because update commands are executed sequentially. A transaction has nothing to do with this: all constraints are validated per command, not per transaction. If you want to swap unique values, you need more steps in which you use additional dummy values to avoid this situation.
You could run a custom SQL Query to swap the values, like this:
update Navigation
set valuecolumn =
case
when id=1 then 'value2'
when id=2 then 'value1'
end
where id in (1,2)
However, Entity Framework cannot do that, because it's outside the scope of an ORM. It just executes sequential update statements for each altered entity, like Ladislav described in his answer.
Another possibility would be to drop the UNIQUE constraint in your database and rely on the application to properly enforce this constraint. In this case, the EF could save the changes just fine, but depending on your scenario, it may not be possible.
There are a few approaches. Some of them are covered in other answers and comments but for completeness, I will list them out here (note that this is just a list that I brainstormed and it might not be all that 'complete').
Perform all of the updates in a single command. See W0lf's answer for an example of this.
Do two sets of updates: one to set all of the values to the negative of the intended value, and a second to flip them from negative to positive. This works on the assumptions that negative values are not prevented by other constraints, and that only records in a transient state will ever hold them.
Add an extra column, IsUpdating for example; set it to true in the first set of updates where the values are changed, then set it back to false in a second set of updates. Swap the unique constraint for a filtered unique index that ignores records where IsUpdating is true (see the sketch after this list).
Remove the constraint and deal with duplicate values.
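A rough sketch of the filtered-index option, assuming the Navigations/Index names from the question and the hypothetical IsUpdating column:

ALTER TABLE Navigations ADD IsUpdating bit NOT NULL DEFAULT 0;

-- uniqueness is enforced only for rows that are not mid-edit
CREATE UNIQUE NONCLUSTERED INDEX UX_Navigations_Index
    ON Navigations ([Index])
    WHERE IsUpdating = 0;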
I need to set the maximum number of rows in SQL Server Compact 3.5 database tables: the database consists of several tables, and each table should have a different maximum number of rows. Is this possible?
If the answer is yes, what is the default rule when a table is full? Can I set a custom rule (for example, delete the row with the minimum ID, where ID is a column)?
Without triggers, you could have a table with one column of integers 1 through the maximum row number, and then, in the table you want to cap, define a row-number column with both a UNIQUE constraint and a foreign key pointing to that integer table. You would need to keep the next row number at the right value, and you would need to write code to handle the foreign key constraint exception once you reach the maximum. Obviously this is a pretty big hack and will not be good to maintain; a sketch follows below.
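A minimal sketch of this hack; all table and column names, and the cap of 100, are made up:

-- one row per allowed slot, populated once with the values 1 through 100
CREATE TABLE RowBudget (RowNum INT NOT NULL PRIMARY KEY);

CREATE TABLE Readings (
    ID      INT NOT NULL PRIMARY KEY,
    RowNum  INT NOT NULL,
    Payload NVARCHAR(100),
    CONSTRAINT UQ_Readings_RowNum UNIQUE (RowNum),
    CONSTRAINT FK_Readings_RowBudget FOREIGN KEY (RowNum)
        REFERENCES RowBudget (RowNum)
);

-- inserting beyond 100 rows fails: every legal RowNum is either taken
-- (UNIQUE) or missing from RowBudget (FK); catch that error in code and,
-- e.g., delete the row with the minimum ID before retrying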
IMHO the answer is no: SQL Server Compact does not have a configuration option for what you ask.
With SQL Server editions that support triggers, you could achieve what you want with INSTEAD OF INSERT triggers (one per table), which would allow implementing whatever rule you need for the "table full" situation...
Since SQL Server Compact does NOT support triggers, I don't see a way to implement what you ask.
So, I have an old database that I'm migrating to a new one. The new one has a slightly different but mostly compatible schema. Additionally, I want to renumber all tables from zero.
Currently I have been using a tool I wrote that manually retrieves the old record, inserts it into the new database, and updates a v2 ID field in the old database to show its corresponding ID location in the new database.
For example, I'm selecting from MV5.Posts and inserting into MV6.Posts. Upon the insert, I retrieve the ID of the new row in MV6.Posts and update the old MV5.Posts.MV6ID field with it.
Is there a way to do this UPDATE via INSERT INTO SELECT FROM so I don't have to process every record manually? I'm using SQL Server 2005, dev edition.
The key with migration is to do several things:
First, do not do anything without a current backup.
Second, if the keys will be changing, you need to store both the old and new in the new structure at least temporarily (Permanently if the key field is exposed to the users because they may be searching by it to get old records).
Next you need to have a thorough understanding of the relationships to child tables. If you change the key field all related tables must change as well. This is where having both old and new key stored comes in handy. If you forget to change any of them, the data will no longer be correct and will be useless. So this is a critical step.
Pick out some test cases of particularly complex data making sure to include one or more test cases for each related table. Store the existing values in work tables.
To start the migration you insert into the new table using a select from the old table. Depending on the amount of records, you may want to loop through batches (not one record at a time) to improve performance. If the new key is an identity, you simply put the value of the old key in its field and let the database create the new keys.
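For example, a rough sketch using the MV5/MV6 names from the question (the OldID, Title, and Body columns are assumptions):

-- MV6.Posts.ID is an identity, so the new keys are generated by the insert
INSERT INTO MV6.Posts (OldID, Title, Body)
SELECT ID, Title, Body
FROM MV5.Posts;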
Then do the same with the related tables. Then use the old key value in the table to update the foreign key fields with something like:
UPDATE t2
SET t2.fkfield = t1.newkey
FROM table2 t2
JOIN table1 t1 ON t1.oldkey = t2.fkfield
Test your migration by running the test cases and comparing the data with what you stored from before the migration. It is utterly critical to thoroughly test migration data or you can't be sure the data is consistent with the old structure. Migration is a very complex action; it pays to take your time and do it very methodically and thoroughly.
Probably the simplest way would be to add a column on MV6.Posts for oldId, then insert all the records from the old table into the new table. Lastly, update the old table by matching on oldId in the new table, with something like:
UPDATE o
SET o.newid = n.id
FROM mv5.posts o
JOIN mv6.posts n ON o.id = n.oldid
You could clean up and drop the oldId column afterwards if you wanted to.
The best you can do, as far as I know, is with the OUTPUT clause, assuming you have SQL Server 2005 or 2008.
USE AdventureWorks;
GO
DECLARE @MyTableVar table(
    ScrapReasonID smallint,
    Name varchar(50),
    ModifiedDate datetime);

INSERT Production.ScrapReason
OUTPUT INSERTED.ScrapReasonID, INSERTED.Name, INSERTED.ModifiedDate
    INTO @MyTableVar
VALUES (N'Operator error', GETDATE());
It would still require a second pass to update the original table; however, it might help make your logic simpler. Do you need to update the source table at all? You could just store the new IDs in a third cross-reference table.
Heh. I remember doing this in a migration.
Putting the old_id in the new table makes both the update easier -- you can just do an insert into newtable select ... from oldtable -- and the subsequent "stitching" of records easier. In the "stitch" you either fix the child tables' foreign keys during the insert, by doing a subselect on the new parent (insert into newchild select ..., (select id from new_parent where old_id = oldchild.fk) as fk, ... from oldchild), or you insert the children first and do a separate update to fix the foreign keys.
Doing it in one insert is faster; doing it in a separate step means that your inserts aren't order dependent and can be redone if necessary.
After the migration, you can either drop the old_id columns or, if the legacy system exposed the IDs and users treated the keys as data, keep them to allow lookups based on the old_id.
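As a rough sketch of the one-insert stitch (all table and column names here are hypothetical):

-- parent rows were migrated first, keeping their old_id
INSERT INTO new_child (parent_fk, payload)
SELECT (SELECT np.id
        FROM new_parent np
        WHERE np.old_id = oc.parent_fk),
       oc.payload
FROM old_child oc;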
Indeed, if you have the foreign keys correctly defined, you can use systables/information-schema to generate your insert statements.
Is there a way to do this UPDATE via INSERT INTO SELECT FROM so I don't have to process every record manually?
Since you want this to happen automatically rather than manually, create a trigger on MV6.Posts so that the UPDATE of MV5.Posts occurs automatically when you insert into MV6.Posts.
Your trigger might look something like this:
create trigger trg_MV6Posts
on MV6.Posts
after insert
as
begin
    -- assumes MV6.Posts carries an OldID column identifying the source row;
    -- without a join like this there is no way to match old and new rows
    update o
    set o.MV6ID = i.ID
    from MV5.Posts o
    join inserted i on o.ID = i.OldID
end
AFAIK, you cannot update two different tables with a single SQL statement.
You can however use triggers to achieve what you want to do.
Add a column MV6.Post.OldMV5Id.
Then do an
insert into MV6.Post
select .. from MV5.Post
and then update MV5.Post.MV6ID, as sketched below.
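A rough sketch of that final update, assuming the OldMV5Id column above:

UPDATE o
SET o.MV6ID = n.ID
FROM MV5.Post o
JOIN MV6.Post n ON n.OldMV5Id = o.ID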