Check if it is safe to delete a row - c#

I want to be able to check whether deleting a row from a table in SQL Server 2008 will fail because of a foreign key violation, without actually attempting the delete.
Basically, I don't want to show the user a delete button if they are not going to be able to delete the row because the key is used elsewhere.
I need this in many places in the application, so I don't really want to write the checks manually everywhere to see if it is safe to delete the row. Any suggestions on the best way to achieve this?
I am using Entity Framework for access to the data.

There is no quick and easy way to check this. You could probably build something dynamic using INFORMATION_SCHEMA, but it would undoubtedly be ugly and not very fast.
The absolute best choice is the few lines of custom code it takes to verify each location.
Another option would be to start a transaction and try the delete. If it fails, you know. If it succeeds, roll back the transaction, and you know the delete is possible. This is still ugly and uses transactions in a somewhat broken way, but it would work. Make sure cascading deletes aren't turned on for the table, though.
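A minimal sketch of that transactional probe, using plain ADO.NET; the table and column names here are placeholders rather than anything from the question:
using System.Data.SqlClient;

// Probes whether the row could be deleted; the transaction is always
// rolled back, so no data is actually changed.
static bool CanDelete(string connectionString, int rowId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            try
            {
                var cmd = new SqlCommand(
                    "DELETE FROM ParentTable WHERE Id = @id", conn, tx);
                cmd.Parameters.AddWithValue("@id", rowId);
                cmd.ExecuteNonQuery();
                return true;   // the delete would succeed
            }
            catch (SqlException ex) when (ex.Number == 547)
            {
                return false;  // 547 = constraint violation in SQL Server
            }
            finally
            {
                tx.Rollback(); // never keep the delete
            }
        }
    }
}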

When you query, do a LEFT JOIN to the child table, and use the computed CanDelete value to decide whether the button should be shown. The COUNT here collapses duplicates when there is more than one child row per parent row; when no child rows exist, the LEFT JOIN yields NULL, which is what CanDelete tests for.
SELECT
    Col1, Col2, Col3, ...,
    CASE WHEN C.Existence IS NULL THEN 1 ELSE 0 END AS CanDelete
FROM
    ParentTable P
LEFT JOIN
(
    SELECT COUNT(*) AS Existence, FKColumn
    FROM ChildTable
    GROUP BY FKColumn
) C ON P.FKColumn = C.FKColumn
WHERE
    P.Col = ...
Another way might be
SIGN(COALESCE(C.Existence, 0)) AS HasChildRows

I've done this sort of thing in prior applications. I created a method named something like TryDelete(). Inside the method, I attempted to delete the desired row. If I got an FK exception, I caught it and returned false. In either case, true or false, I wrapped the delete in a transaction and then rolled it back.
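A hedged sketch of such a TryDelete() for the question's Entity Framework setup; MyEntities is a placeholder ObjectContext type, and System.Data.UpdateException is what EF's ObjectContext throws when the database rejects the change:
using System.Data;          // UpdateException
using System.Transactions;  // TransactionScope

// Attempts the delete inside a TransactionScope that is never completed,
// so the row survives no matter what the outcome is.
static bool TryDelete(MyEntities context, object entityToDelete)
{
    using (var scope = new TransactionScope())
    {
        try
        {
            context.DeleteObject(entityToDelete);
            context.SaveChanges();
            return true;   // the delete would have succeeded
        }
        catch (UpdateException)
        {
            return false;  // most likely a foreign key violation
        }
        // no scope.Complete() call, so Dispose() rolls the work back
    }
}
One caveat: after a rolled-back SaveChanges(), the context's change tracker still believes the row is gone, so it is safest to run the check on a throwaway context.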

You could add, in a partial class of your entity, a method that checks whether the referenced objects exist.
For example, let's say you have Entity1, which has a collection of Entity2. Basically, in each of the entity partial classes you'd write a property IsReferenced that would:
For Entity1, return true if it has any item in its Entity2 collection
For Entity2, return true if there's a reference to Entity1
As you're guessing, you'll need to make sure that you always include the referenced values in your fetch, or, if you're working attached to the context, you could use .Load() in IsReferenced to fetch the entities before checking. It is an overhead; it just depends on whether you're willing to 'pay' for it.
Then you can show/hide the 'delete' button based on that property wherever needed, avoiding having to repeat the checks each time.
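A minimal sketch of the idea with a hypothetical EF-generated Order entity and its OrderItems collection (none of these names come from the answer):
using System.Linq;

// Partial class extending the hypothetical generated Order entity.
public partial class Order
{
    // True if any child rows reference this order, i.e. deleting is unsafe.
    public bool IsReferenced
    {
        get
        {
            if (!OrderItems.IsLoaded)
                OrderItems.Load();   // requires an attached context
            return OrderItems.Any();
        }
    }
}
The UI code then reduces to something like deleteButton.Visible = !order.IsReferenced;.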

I think you have two possible choices here. Since you cannot guarantee that all relations will be mapped in your OM, you would have to check it on the database.
You can either try an actual delete inside a transaction that is rolled back afterwards, but this would also "succeed" if you have the constraint configured with cascading deletes...
Another way would be extracting all constraints from the sysobjects table and verifying that each referencing table has no matching records. But that would require some dynamic SQL, which can also get quite messy.

If you're at the database level, I would join all the tables where a conflict could exist.
Any records that come back cannot be deleted, which means the remaining set can be.

Assuming that the database is used by multiple users (which the vast majority are), there's going to be a window of opportunity between checking that the delete is possible and the user actually deciding to delete the row, during which someone else might perform some activity that negates the result of the test.
This means that you might display the Delete button, but by the time you attempt the delete, it's no longer possible. Equally, you might not display a Delete button, but by the time the user has decided they want to delete the row (and can't find the button), it would be allowed.
There's no way to avoid these kinds of races. I'd just let people attempt the delete if they want to, but be prepared to deal with failures due to foreign keys.

Related

Linq to SQL, Update a lot of Data before One Insert

Before inserting a new value into a table, I need to change one field in all rows of that table.
What is the best way to do this: in C# code, or with a trigger? If C#, can you show me the code?
UPD
*NEW VERSION of question*
Before inserting a new value into the table, I need to change one field in all rows of that table with a specific ID (it is an FK to another table).
What is the best way to do this: in C# code, or with a trigger? If C#, can you show me the code?
You should probably consider changing your design; this doesn't sound like it will scale well. I would probably do it with a trigger if it is always required, but if not, I'd use ExecuteCommand.
var ctx = new MyDataContext();
ctx.ExecuteCommand("UPDATE myTable SET foo = 'bar'");
Looking at your comment on Paul's answer, I feel like I should chime in here. We have a few tables where we need to keep a history of each entry in that table. We implement this by creating a separate table for each. For example, we may have a Comment table, and then a CommentArchive table with a foreign key reference to the CommentId in the Comment table.
A trigger on the Comment table ensures that each time certain fields in the Comment table are updated, the "old" version (which is accessible via the deleted table in the trigger) gets pushed to the CommentArchive table. Obviously, this means several CommentArchive entries may exist for each Comment, but if you're only looking for the "active" comments, you just look in the Comment table. And if you need information about the history of a comment, you can easily use LINQ to SQL to jump from the Comment you're interested in to the CommentArchives that reference it.
Because the triggers we use in the above example only insert a single value into the archive table for each update, they run very quickly and we get good performance. We had issues recently where I tried making the triggers more complex, and we started getting deadlocks with as few as 15 concurrent transactions. So the lesson is that you should make these triggers simple, and make them touch as few rows in as few tables as possible.

Defining Status of data via Enum or a relation table

I have an application which has rows of data in a relational database, and the table needs a status which will always be one of:
Not Submitted, Awaiting Approval, Approved, Rejected
Since these will never change, I was trying to decide the best way to implement them. I can either use a Status enum with an int assigned to each value, where the int is placed into the status column of the table row,
or a status table that links to the table, where one of its rows is selected as the current status.
I can't decide which is the better option. I currently have an enum in place with these values for the approval pages to populate the dropdowns etc. and to set up the SQL (currently it uses two bools, Approved and SubmittedForApproval, but this is dirty for various reasons and needs changing).
Wondering what your thoughts on this were and whether I should go for one or the other.
If it makes any difference, I am using Entity Framework.
I would go with the enum if it never changes, since this will be more performant (no join to get the status). Also, it's the simpler solution :).
Now since these will never change...
You can count on this assumption being false, and sooner than you think.
I would use a lookup table. It's far easier to add or change values in a lookup table than to change the definition of an enum.
You can use a natural primary key in the lookup table so you don't need to do a join to get the value. Yes, a string takes a bit more space than an integer id, but if your goal is to avoid the join, this accomplishes it.
I use enums, and use the [Description("asdf")] attribute to attach meaningful sentences or other text that isn't allowed in enum names. Then I use the enum text itself as the value in drop-downs and the Description as the visible text.
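A minimal sketch of that approach; the status names come from the question, while the GetDescription() helper is a common reflection idiom rather than anything in the original answer:
using System;
using System.ComponentModel;
using System.Reflection;

public enum ApprovalStatus
{
    [Description("Not Submitted")]
    NotSubmitted = 0,
    [Description("Awaiting Approval")]
    AwaitingApproval = 1,
    [Description("Approved")]
    Approved = 2,
    [Description("Rejected")]
    Rejected = 3
}

public static class EnumExtensions
{
    // Reads the [Description] text for an enum member, falling back to its name.
    public static string GetDescription(this Enum value)
    {
        FieldInfo field = value.GetType().GetField(value.ToString());
        var attribute = (DescriptionAttribute)Attribute.GetCustomAttribute(
            field, typeof(DescriptionAttribute));
        return attribute != null ? attribute.Description : value.ToString();
    }
}
The int values are what get stored in the status column; the descriptions feed the visible text of the dropdowns.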

How Do SQL Transactions Work?

I have not been working in SQL too long, but I thought I understood that by wrapping SQL statements inside a transaction, either all the statements completed or none of them did. Here is my problem. I have an order object that has a line item collection. The line items are related on order.OrderId. I have verified that all the IDs are set and are correct, but when I try to save (insert) the order I get: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_OrderItemDetail_Order". The conflict occurred in database "MyData", table "dbo.Order", column 'OrderId'.
pseudo code:
create a transaction
transaction.Begin()
Insert order
Insert order.LineItems <-- error occurs here
transaction.Commit
actual code:
...
entity.Validate();
if (entity.IsValid)
{
    SetChangedProperties(entity);
    entity.Install.NagsInstallHours = entity.TotalNagsHours;
    foreach (OrderItemDetail orderItemDetail in entity.OrderItemDetailCollection)
    {
        SetChangedOrderItemDetailProperties(orderItemDetail);
    }
    ValidateRequiredProperties(entity);
    TransactionManager transactionManager = DataRepository.Provider.CreateTransaction();
    EntityState originalEntityState = entity.EntityState;
    try
    {
        entity.OrderVehicle.OrderId = entity.OrderId;
        entity.Install.OrderId = entity.OrderId;
        transactionManager.BeginTransaction();
        SaveInsuranceInformation(transactionManager, entity);
        DataRepository.OrderProvider.Save(transactionManager, entity);
        DataRepository.OrderItemDetailProvider.Save(transactionManager, entity.OrderItemDetailCollection);
        if (!entity.OrderVehicle.IsEmpty)
        {
            DataRepository.OrderVehicleProvider.Save(transactionManager, entity.OrderVehicle);
        }
        transactionManager.Commit();
    }
    catch
    {
        if (transactionManager.IsOpen)
        {
            transactionManager.Rollback();
        }
        entity.EntityState = originalEntityState;
    }
}
...
Someone suggested I need to use two transactions, one for the order and one for the line items, but I am reasonably sure that is wrong. I've been fighting this for over a day now, and I need to resolve it so I can move on, even if that means using a bad workaround. Am I maybe just doing something stupid?
I noticed that you said you were using NetTiers for your code generation.
I've used NetTiers myself and have found that deleting the foreign key constraint from your table, adding it back, and then re-running the NetTiers build scripts after making your changes in the database can help reset the data access layer. I've tried this on occasion with positive results.
Good luck with your issue.
Without seeing your code, it is hard to say what the problem is. It could be any number of things, but look at these:
This is obvious, but are your two insert commands on the same connection (with the connection staying open the whole time) that owns the transaction?
Are you retrieving the ID related to the constraint after the first insert and writing it back into the data for the second insert before executing the command? (See the sketch after this list.)
The constraint could be set up wrong in the DB.
You definitely do not want to use two transactions.
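A minimal sketch of the first two points, using plain ADO.NET rather than the question's NetTiers providers; the table and column names are placeholders:
using System.Collections.Generic;
using System.Data.SqlClient;

// Both inserts share one connection and one transaction, and the
// identity generated for the parent is written into each child row.
static void SaveOrder(string connectionString, int customerId,
                      IEnumerable<string> itemDescriptions)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            var insertOrder = new SqlCommand(
                "INSERT INTO [Order] (CustomerId) OUTPUT INSERTED.OrderId VALUES (@customerId)",
                conn, tx);
            insertOrder.Parameters.AddWithValue("@customerId", customerId);
            int orderId = (int)insertOrder.ExecuteScalar();

            foreach (string description in itemDescriptions)
            {
                var insertItem = new SqlCommand(
                    "INSERT INTO OrderItemDetail (OrderId, Description) VALUES (@orderId, @description)",
                    conn, tx);
                insertItem.Parameters.AddWithValue("@orderId", orderId); // FK written back
                insertItem.Parameters.AddWithValue("@description", description);
                insertItem.ExecuteNonQuery();
            }

            tx.Commit();
        }
    }
}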
It looks like your insert statement for the line items is not correctly setting the OrderId value; that should come from the result of the "Insert order" step. Have you looked at (and tested) the individual SQL statements?
I do not think your problem has anything to do with transaction control.
I have no experience with this, but it looks like you might have specified a key value that is not available in the parent table. Sorry, but I cannot help you more than this.
The problem is how you handle the error. When an error occurs, a transaction is not automatically rolled back. You can certainly (and probably should) choose to do that, but depending on your app or where you are, you may still want to commit it. And in this case, that's exactly what seems to be happening. You need to wrap some error-handling code around the transaction to roll it back when the error occurs.
The error looks like the line items are not being given the proper FK OrderId that was autogenerated by the insert of the order into the Order table. You say you have checked the IDs; have you checked the FKs in the order details as well?

Orphaned entries in aspnetdb

After calling the method
_membershipProvider.DeleteUser(user.UserName, false);
where the second parameter (false) is deleteAllRelatedData, orphaned entries are left in the database (in the aspnet_Users table and probably more). What is the best practice for cleaning these up?
EDIT: The user management code has already been changed to pass true as the second parameter, but it's left a db full of junk entries. I'm wondering how best to clean these up. I'm currently looking at the stored procedure provided with the database, dbo.aspnet_Users_DeleteUser, puzzling over the parameter @TablesToDeleteFrom int and wondering exactly what it means. Looks like some sort of bitmask.
I guess you'd have a choice of cascade delete, or writing something that runs as a job periodically.
Or better yet, do as stated in Bob's comment!
Update: as it sounds like you have now stopped this from occurring, just write a SQL script to detect the orphaned records, then turn it into a DELETE statement.
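A hedged sketch of such a script, run from C#; it assumes the orphans are aspnet_Users rows with no matching aspnet_Membership row, which covers the common case but may not cover every related table in your database:
using System.Data.SqlClient;

// Deletes aspnet_Users rows that no longer have a membership record.
// Run a SELECT with the same WHERE clause first to review what would go.
static int DeleteOrphanedUsers(string connectionString)
{
    const string sql = @"
        DELETE u
        FROM dbo.aspnet_Users u
        WHERE NOT EXISTS (
            SELECT 1 FROM dbo.aspnet_Membership m
            WHERE m.UserId = u.UserId)";

    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var cmd = new SqlCommand(sql, conn))
        {
            return cmd.ExecuteNonQuery(); // number of rows removed
        }
    }
}
If other aspnetdb tables (roles, profile, personalization) still reference those users, their rows need to be deleted first, or their foreign keys will block this DELETE.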
If you want to leave no orphaned entries, then you should set the second parameter (deleteAllRelatedData) to true. It will remove all related and child data.
http://msdn.microsoft.com/en-us/library/system.web.security.membershipprovider.deleteuser.aspx

Best way to save an ordered List to the Database while keeping the ordering

I was wondering if anyone has a good solution to a problem I've encountered numerous times during the last few years.
I have a shopping cart, and my customer explicitly requests that its order is significant. So I need to persist the order to the DB.
The obvious way would be to simply add some OrderField where I would assign the numbers 0 to N and sort by it.
But doing so would make reordering harder, and I somehow feel that this solution is kinda fragile and will come back at me some day.
(I use C# 3.5 with NHibernate and SQL Server 2005)
Thank you
OK, here is my solution to make programming this easier for anyone who happens along to this thread. The trick is being able to update all the order indexes above or below an insert/deletion in one UPDATE.
Use a numeric (integer) column in your table, supported by the SQL queries:
CREATE TABLE myitems (Myitem TEXT, id INTEGER PRIMARY KEY, orderindex NUMERIC);
To delete the item at orderindex 6:
DELETE FROM myitems WHERE orderindex = 6;
UPDATE myitems SET orderindex = (orderindex - 1) WHERE orderindex > 6;
To swap two items (4 and 7):
UPDATE myitems SET orderindex = 0 WHERE orderindex = 4;
UPDATE myitems SET orderindex = 4 WHERE orderindex = 7;
UPDATE myitems SET orderindex = 7 WHERE orderindex = 0;
i.e. 0 is not otherwise used, so use it as a dummy to avoid having an ambiguous item.
To insert at 3:
UPDATE myitems SET orderindex = (orderindex + 1) WHERE orderindex > 2;
INSERT INTO myitems (Myitem, orderindex) VALUES ('MytxtitemHere', 3);
The best solution is a doubly linked list. O(1) for all operations except indexing. Nothing can index SQL quickly, though, except a WHERE clause on the item you want.
Schemes like 0, 10, 20 fail. Sequence-column ones fail. A float sequence column fails at group moves.
A doubly linked list keeps the same operations for addition, removal, group deletion, group addition, and group move. A singly linked list works OK too, but a doubly linked one is better with SQL in my opinion. A singly linked list requires you to have the entire list.
FWIW, I think the way you suggest (i.e. committing the order to the database) is not a bad solution to your problem. I also think it's probably the safest/most reliable way.
How about using a linked list implementation? Have one column that holds the value (order number) of the next item. I think it's by far the easiest to use when inserting orders in between. No need to renumber.
Unfortunately there is no magic bullet for this. You cannot guarantee the order of any SELECT statement without an ORDER BY clause. You need to add the column and program around it.
I don't know that I'd recommend adding gaps in the order sequence; depending on the size of your lists and the hits on the site, you might gain very little for the overhead of handling the logic (you'd still need to cater for the occasion where all the gaps have been used up). I'd take a close look at what benefits this would give you in your situation.
Sorry I can't offer anything better; hope this helped.
I wouldn't recommend the A, AA, B, BA, BB approach at all. There's a lot of extra processing involved to determine the hierarchy, and inserting entries in between is not fun at all.
Just add an integer OrderField. Don't use gaps, because then you have to either work with a non-standard 'step' on your next middle insert, or resynchronize your list first and then add the new entry.
Having 0...N is easy to reorder: you can use Array or List methods outside of SQL to reorder the collection as a whole and then update each entry, or you can figure out where you are inserting and +1 or -1 each entry after or before it accordingly.
Once you have a little library written for it, it'll be a piece of cake.
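A hedged sketch of that little library, assuming a hypothetical CartItem type with an integer OrderIndex column:
using System.Collections.Generic;

// Hypothetical item type with an integer sort column.
public class CartItem
{
    public int Id { get; set; }
    public int OrderIndex { get; set; }
}

public static class CartReordering
{
    // Move an item within the in-memory list, then renumber 0..N-1 so
    // every row's OrderIndex matches its list position before saving.
    public static void Move(List<CartItem> items, int from, int to)
    {
        CartItem item = items[from];
        items.RemoveAt(from);
        items.Insert(to, item);
        for (int i = 0; i < items.Count; i++)
            items[i].OrderIndex = i;  // persist each changed row afterwards
    }
}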
I would just insert an order field. It's the simplest way. If the customer can reorder the fields, or you need to insert in the middle, then just rewrite the order fields for all items in that batch.
If down the line you find this limiting due to poor performance on inserts and updates, it is possible to use a varchar field rather than an integer. This allows quite a high level of precision when inserting; e.g. to insert between items 'A' and 'B' you can insert an item ordered as 'AA'. This is almost certainly overkill for a shopping cart, though.
At a level of abstraction above the cart items, say a CartOrder (which is 1-n with CartItem), you can maintain a field called itemOrder, which could be just a comma-separated list of the IDs (PKs) of the relevant cartItem records. It is at the application layer that you would parse it and arrange your item models accordingly. The big plus for this approach shows up with order reshufflings: nothing changes on the individual objects, whereas when order is persisted as an index field on the order item table rows, you have to issue an update command for each row to change its index field.
Please let me know your criticisms of this approach; I am curious to know in which ways it might fail.
I solved it pragmatically like this:
The order is defined in the UI.
The backend gets a POST request that contains the IDs and the corresponding Position of every item in the list.
I start a transaction and update the position for every ID.
Done.
So ordering is expensive but reading the ordered list is super cheap.
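A minimal sketch of that update step, using NHibernate (which the question mentions); CartItem and its Position property are placeholders, and positionsById stands in for the ID-to-position pairs posted by the UI:
using System.Collections.Generic;
using NHibernate;

// Placeholder entity; assumed to be mapped with Position as the sort column.
public class CartItem
{
    public virtual int Id { get; set; }
    public virtual int Position { get; set; }
}

public static class OrderingEndpoint
{
    // Applies the posted ID -> position pairs in one transaction.
    public static void SaveOrdering(ISession session, IDictionary<int, int> positionsById)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            foreach (KeyValuePair<int, int> entry in positionsById)
            {
                CartItem item = session.Get<CartItem>(entry.Key);
                item.Position = entry.Value; // dirty-checked, updated on commit
            }
            tx.Commit();
        }
    }
}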
I would recommend keeping gaps in the order numbers: instead of 1, 2, 3, etc., use 10, 20, 30... If you need to insert just one more item, you can put it at 15 rather than reordering everything at that point.
Well, I would say the short answer is:
Create an auto-identity primary key in the cart contents table, then insert rows in the correct top-down order. Selecting from the table ordered by that auto-identity column will then give you the same list. Doing it this way, you have to delete all items and reinsert them in case of alterations to the cart contents. (But that is still quite a clean way of doing it.) If that's not feasible, then go with the order column as suggested by others.
When I use Hibernate and need to save the order of a @OneToMany, I use a Map and not a List.
@OneToMany(fetch = FetchType.EAGER, mappedBy = "rule", cascade = CascadeType.ALL)
@MapKey(name = "position")
@OrderBy("position")
private Map<Integer, RuleAction> actions = LazyMap.decorate(new LinkedHashMap<>(), FactoryUtils.instantiateFactory(RuleAction.class, new Class[] { Rule.class }, new Object[] { this }));
In this Java example, position is an Integer property of RuleAction, so the order is persisted that way. I guess in C# this would look rather similar.
