How to clear this table: __EFMigrationsHistory
I don't want to delete my migrations or anything like that; I explicitly want to clear this table from code.
Edit:
I'll try to explain a little why I want to do this: I want to call the same (and only) migration on every startup.
This migration loops through all my models and calls the onUpdateMethod, so every model can handle its update by itself.
If you want to clear the data in SQL, here is the query:
DELETE FROM [TableName]
If you want to clear the data from within your application, run a query through Entity Framework like below:
context.Database.ExecuteSqlCommand("TRUNCATE TABLE [TableName]");
The TRUNCATE TABLE statement is a fast, efficient method of deleting all rows in a table. TRUNCATE TABLE is similar to the DELETE statement without a WHERE clause. However, TRUNCATE TABLE is faster and uses fewer system and transaction log resources.
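Note that in EF Core 3.0+ ExecuteSqlCommand was superseded by ExecuteSqlRaw. A minimal sketch, assuming EF Core 3.0 or later and the default history table name (adjust it if you configured a custom history table):

using Microsoft.EntityFrameworkCore;

public static class MigrationsHistoryCleaner
{
    public static void Clear(DbContext context)
    {
        // TRUNCATE is fast but needs ALTER permission on the table;
        // fall back to DELETE FROM [__EFMigrationsHistory] if that's a problem.
        context.Database.ExecuteSqlRaw("TRUNCATE TABLE [__EFMigrationsHistory]");
    }
}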
Related
I am using a code-first approach in EF Core, and I am in a situation where I want to move a column from one table to another.
My approach is to insert the data using migrationBuilder.Sql:
migrationBuilder.Sql("Insert query to new table");
and then drop the column from the first table:
migrationBuilder.DropColumn(name: "FirstName", table: "Customer");
Is there any better approach to migrate data from one table to another and drop the column from the first table?
I've used the same approach as what you're suggesting before and it has worked well for us. One of its benefits is that queries executed with migrationBuilder.Sql will be wrapped in the same transaction as the migration, so if the query fails or anything goes wrong then all migration changes are rolled back and you don't end up with a corrupted database.
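For reference, a minimal sketch of such a migration; the CustomerDetail table and the Id/CustomerId join key are assumptions for illustration, not names from the question:

using Microsoft.EntityFrameworkCore.Migrations;

public partial class MoveFirstNameToCustomerDetail : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Copy the data first; this runs in the same transaction as the
        // schema change below, so a failure rolls everything back.
        migrationBuilder.Sql(@"
            UPDATE d
            SET d.FirstName = c.FirstName
            FROM CustomerDetail d
            INNER JOIN Customer c ON c.Id = d.CustomerId");

        // Only then drop the source column.
        migrationBuilder.DropColumn(name: "FirstName", table: "Customer");
    }
}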
I have an existing Users table in my database that contains around one million records. My database is MS SQL Server 2016, and I am working code-first. Now I will add an additional column to that table which will contain an invitation code.
I need to update all existing users so that each has a unique invitation code in that new column.
I need something that will run one time only (maybe in the seed method), as any new user created in the future will get an invitation code while registering.
So this only applies to old users. What is the best way to do that regarding performance and speed, given how many records that table has? Is it a seed method that runs at the start of the app, or something else?
I think an SQL script would be the best solution: you run it once and you are done.
A seed method would be a solution if you are in early development, where you may delete records and re-run the seed method, but if your database is stable a script would be better in my opinion.
1) Create a migration with the added column (you'll have to mark it as nullable)
2) Create another migration where you populate the column with whatever values you want.
You do this by adding a custom SQL statement:
Sql(@"UPDATE YourTableName SET YourColumnName = <insert logic here>")
3) Create another migration modifying the column to be non-nullable.
You may be able to add the custom sql statement at the end of the first migration or the beginning of the last migration.
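Putting step 2 together, here is a sketch of the populate migration, assuming classic EF6-style migrations (in EF Core it would be migrationBuilder.Sql instead). The Users/InvitationCode names are placeholders, and NEWID() is just one way to get a unique value per row:

using System.Data.Entity.Migrations;

public partial class PopulateInvitationCodes : DbMigration
{
    public override void Up()
    {
        // Set-based update: one statement over all existing rows, which is
        // far faster than updating a million entities through the context.
        Sql(@"UPDATE dbo.Users
              SET InvitationCode = LOWER(CONVERT(nvarchar(36), NEWID()))
              WHERE InvitationCode IS NULL");
    }

    public override void Down()
    {
        Sql("UPDATE dbo.Users SET InvitationCode = NULL");
    }
}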
There is an MSSQL table in an external customer network. The aim is to create and maintain a copy of that table on the local server. The external MSSQL table's data can, of course, change every hour, and somehow I have to check for changes and reflect them in the local table when rows are added, deleted or updated. Is there any efficient way to do it? Additionally, I know that this table will have thousands of records. At first I thought about a Windows service application, but I have no idea which approach to take; I don't think a DataTable/DataSet is fine with that many records, as I remember getting an out-of-memory exception in the past. Any ideas?
The way I would go about it is to create triggers on the existing tables that, upon insert, update and delete, would insert into a new sync table (or one sync table per existing table) marking the change as pending synchronization. Your C# code would read from this table on a schedule, apply the changes to the local DB and delete the rows from the 'pending' table.
For example, this is how Azure SQL Data Sync works; it creates a table per existing table in the source database and then checks all these tables. Depending on how many tables you have and their structure, you could instead store the change as JSON in just one table, and it would be easier to check one table than many (obviously this depends on how many actual tables we're talking about).
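As an illustration only, the two pieces could look roughly like this; every object name (Orders, OrdersSync, the columns) is made up for the sketch:

using System.Data.SqlClient;

static class TableSync
{
    // Trigger on the remote table: queues changed keys into a pending table.
    public const string CreateTriggerSql = @"
        CREATE TRIGGER trg_Orders_Sync ON dbo.Orders
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- inserted and updated rows land here as 'UPSERT'
            INSERT INTO dbo.OrdersSync (OrderId, Operation)
            SELECT Id, 'UPSERT' FROM inserted
            UNION ALL
            -- rows present only in 'deleted' were really deleted
            SELECT d.Id, 'DELETE'
            FROM deleted d
            WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.Id = d.Id);
        END";

    // Scheduled job: drain the queue in bounded batches so memory stays
    // flat no matter how many rows have piled up.
    public static void DrainPendingChanges(SqlConnection remote)
    {
        using var reader = new SqlCommand(
            @"SELECT TOP (500) SyncId, OrderId, Operation
              FROM dbo.OrdersSync ORDER BY SyncId", remote).ExecuteReader();

        while (reader.Read())
        {
            // Apply each change to the local table here, then delete the
            // processed SyncIds from dbo.OrdersSync in the same pass.
        }
    }
}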
I'm about to start a new project and I'd like to use Entity Framework Code First migrations; i.e., write the database in code and have it all auto-generated for me and the schema updated etc.
However, my stumbling block is that I have one lookup table which I need to import and which has over 2 million records (it's a postcode lookup table).
My question is, how do you deal with such large pre-populated lookup tables within Entity Framework Code First migrations?
Your migration doesn't actually have to drop/recreate the whole table (and won't unless you specify that it should). Normally, the migrations just do the Up/Down methods to alter the table with additional columns, etc.
Do you actually need to drop the table? If so, do you really need to seed it from EF? The EF cost for doing 2 million inserts will be astounding, so if you could do it as a manual step with something more efficient (that will use bulk inserts), it would be very much preferred.
If I had to do that many inserts, I would probably break it out into SQL files and do something like what's mentioned here: EF 5 Code First Migration Bulk SQL Data Seeding
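For what it's worth, a sketch of that idea in an EF6-style migration; the file layout and names are assumptions, and the .sql files would be pre-generated (e.g. with bcp or a script) rather than built from 2 million EF inserts:

using System.IO;
using System.Data.Entity.Migrations;

public partial class SeedPostcodes : DbMigration
{
    public override void Up()
    {
        // Each file holds a chunk of plain INSERT statements,
        // so no single batch gets too large.
        foreach (var file in Directory.EnumerateFiles(@"Migrations\Seed", "postcodes_*.sql"))
        {
            Sql(File.ReadAllText(file), suppressTransaction: true);
        }
    }

    public override void Down()
    {
        Sql("TRUNCATE TABLE dbo.Postcodes");
    }
}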
I need to update about 250k rows in a table, and each field to update will have a different value depending on the row itself (not calculated from the row id or key, but provided externally).
I tried a parametrized query, but it turns out to be slow. (I could still try a table-valued parameter, SqlDbType.Structured, in SQL Server 2008, but I'd like to have a general way to do it on several databases, including MySql, Oracle and Firebird.)
Making one huge concatenation of individual UPDATE statements is also slow (but about 2 times faster than making thousands of individual calls (roundtrips!) with parametrized queries).
What about creating a temp table and running an update joining my table and the temp one? Would that work faster?
How slow is "slow"?
The main problem with this is that it would create an enormous entry in the database's log file (in case there's a power failure half-way through the update, the database needs to log each action so that it can roll back in the event of failure). This is most likely where the "slowness" is coming from, more than anything else. (Obviously, with such a large number of rows there are other ways to make the thing inefficient, e.g. doing one DB roundtrip per update would be unbearably slow; I'm just saying that once you eliminate the obvious things, you'll still find it's pretty slow.)
There are a few ways you can do it more efficiently. One would be to do the update in chunks, say 1,000 rows at a time. That way, the database writes lots of small log entries rather than one really huge one.
Another way would be to turn off, or turn "down", the database's logging for the duration of the update. In SQL Server, for example, you can set the recovery model to "Simple" or "Bulk-Logged", which would speed it up considerably (with the caveat that you are more at risk if there's a power failure or something during the update).
Edit: Just to expand a little more, probably the most efficient way to actually execute the queries in the first place would be to do a BULK INSERT of all the new rows into a temporary table and then do a single UPDATE of the existing table from that (or do the UPDATE in chunks of 1,000 as I said above). Most of my answer was addressing the problem once you've implemented it like that: you'll still find it's pretty slow.
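To make the chunking concrete, a sketch for SQL Server; the table, column and predicate are placeholders, and in the real case the per-row values would come from the temporary table described above:

using System.Data.SqlClient;

static void UpdateInChunks(SqlConnection conn)
{
    // Each batch touches at most 1,000 rows, so the log entries stay small.
    var cmd = new SqlCommand(@"
        UPDATE TOP (1000) dbo.Items
        SET Processed = 1
        WHERE Processed = 0", conn);

    int affected;
    do
    {
        affected = cmd.ExecuteNonQuery(); // rows changed in this batch
    } while (affected > 0);
}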
call a stored procedure if possible
If the columns updated are part of indexes, you could:
1) drop these indexes
2) do the update
3) re-create the indexes.
If you need these indexes to retrieve the data, well, it doesn't help.
You should use SqlBulkCopy with the SqlBulkCopyOptions.KeepIdentity flag set.
As part of a SqlTransaction, do a query to SELECT all the records that need updating and then DELETE them, returning those selected (and now removed) records. Read them into C# in a single batch. Update the records on the C# side in memory, now that you've narrowed the selection, and then SqlBulkCopy those updated records back, keys and all. Don't forget to commit the transaction. It's more work, but it's very fast.
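A rough sketch of that flow; dbo.Items and the NeedsUpdate predicate are placeholders, and the in-memory update step is elided:

using System.Data;
using System.Data.SqlClient;

static void UpdateViaBulkCopy(string connectionString)
{
    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var tx = conn.BeginTransaction();

    // DELETE ... OUTPUT removes the rows and hands them back in one shot.
    var rows = new DataTable();
    using (var cmd = new SqlCommand(
        "DELETE FROM dbo.Items OUTPUT deleted.* WHERE NeedsUpdate = 1",
        conn, tx))
    {
        rows.Load(cmd.ExecuteReader());
    }

    foreach (DataRow row in rows.Rows)
    {
        // ... apply the externally computed values to each row in memory ...
    }

    // Write the modified rows back, preserving their identity values.
    using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.KeepIdentity, tx))
    {
        bulk.DestinationTableName = "dbo.Items";
        bulk.WriteToServer(rows);
    }

    tx.Commit();
}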
Here's what I would do:
1) Retrieve the entire table, that is, the columns you need in order to calculate/retrieve/find/produce the changes externally
2) Calculate/produce those changes
3) Run a bulk insert to a temporary table, uploading the information you need server-side in order to make the changes. This would require the key information plus the new values for all the rows you intend to change.
4) Run SQL on the server to copy the new values from the temporary table into the production table.
Pros:
Running the final step server-side is faster than running tons and tons of individual SQL statements, so you're going to lock the table in question for a shorter time
Bulk insert like this is fast
Cons:
Requires extra space in your database for the temporary table
Produces more log data, logging both the bulk insert and the changes to the production table
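A sketch of steps 3) and 4); all names are placeholders, and the temp table must be created on the same connection the bulk copy and the update use:

using System.Data;
using System.Data.SqlClient;

static void ApplyChanges(string connectionString, DataTable changes)
{
    using var conn = new SqlConnection(connectionString);
    conn.Open();

    // Temp table holding key + new value for every row to change.
    new SqlCommand(
        "CREATE TABLE #Changes (Id int PRIMARY KEY, NewValue nvarchar(200))",
        conn).ExecuteNonQuery();

    // Bulk insert is by far the fastest way to get the data server-side.
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "#Changes";
        bulk.WriteToServer(changes);
    }

    // One set-based update instead of 250k individual statements.
    new SqlCommand(@"
        UPDATE t SET t.Value = c.NewValue
        FROM dbo.Items t
        INNER JOIN #Changes c ON c.Id = t.Id", conn).ExecuteNonQuery();
}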
Here are things that can make your updates slow:
executing updates one by one through a parametrized query
solution: do the update in one statement
a large transaction creates a big log entry
see codeka's answer
updating indexes (the RDBMS will update the index after each row; if you change an indexed column, it can be very costly on a large table)
if you can, drop the indexes before the update and recreate them after
updating a field that has a foreign key constraint - for each updated row the RDBMS will go and look for the appropriate key
if you can, disable foreign key constraints before the update and enable them after
triggers and row-level checks
if you can, disable triggers before the update and enable them after (see the sketch after this list for the SQL Server toggles)
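For SQL Server, the toggles above look roughly like this (a sketch; all object names are placeholders, and disabling a clustered index would make the table inaccessible, so only disable nonclustered ones):

// Run before the big update.
const string DisableSql = @"
    ALTER INDEX IX_Items_Value ON dbo.Items DISABLE;
    ALTER TABLE dbo.Items NOCHECK CONSTRAINT FK_Items_Category;
    DISABLE TRIGGER trg_Items_Audit ON dbo.Items;";

// Run after the update; WITH CHECK re-validates the constraint.
const string ReenableSql = @"
    ALTER INDEX IX_Items_Value ON dbo.Items REBUILD;
    ALTER TABLE dbo.Items WITH CHECK CHECK CONSTRAINT FK_Items_Category;
    ENABLE TRIGGER trg_Items_Audit ON dbo.Items;";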