Entity Framework 6 + PostgreSQL - horrible performance - C#

I'm using Entity Framework 6 with a PostgreSQL database (via the Npgsql connector). Everything works fine except for the poor performance of this setup. When I try to insert a not-so-large number of objects into the database (about 20k records), it takes much longer than it should. As this is my first time using Entity Framework, I was rather confused as to why inserting 20k records into a database on my local machine would take more than a minute.
In order to optimize the inserts, I followed every tip I found. I tried setting AutoDetectChangesEnabled to false, calling SaveChanges() every 100 or 1000 records, re-creating the database context object, and using DbContextTransaction objects (by calling dbContext.Database.BeginTransaction() and committing the transaction at the end of the operation or every 100/1000 records). Nothing improved insert performance even a little bit.
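For reference, the batched variant I tried looked roughly like this (MyDbContext, MyEntity and entities are placeholders for my real context, entity type and the 20k-object list):

// Rough sketch of the batching pattern described above.
using (var dbContext = new MyDbContext())
{
    dbContext.Configuration.AutoDetectChangesEnabled = false;
    using (var transaction = dbContext.Database.BeginTransaction())
    {
        for (int i = 0; i < entities.Count; i++)
        {
            dbContext.Set<MyEntity>().Add(entities[i]);
            if (i > 0 && i % 1000 == 0)
            {
                dbContext.SaveChanges(); // flush every 1000 records
            }
        }
        dbContext.SaveChanges();
        transaction.Commit();
    }
}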
By logging the SQL queries generated by Entity Framework, I was finally able to discover that no matter what I do, every object is inserted separately and every insert takes 2-4 ms. Without re-creating the DB context object and without transactions, there is just one commit after over 20k inserts. When I use transactions and commit every few records, there are more commits and new transaction creations (the same happens when I re-create the DB context object, just with the connection being re-established as well). If I use transactions and commit them every few records, I should notice a performance boost, no? But in the end there is no difference in performance, whether I use multiple transactions or not. I know transactions won't improve performance drastically, but they should help at least a little bit. Instead, every insert still takes at least 2 ms to execute against my local DB.
The database on my local machine is one thing, but creating 20k objects on a remote database takes much, much, MUCH longer than one minute - the logs indicate that a single insert can take up to 30 ms (!), with transactions being committed and re-created every 100 or 1000 records. On the other hand, if I execute a single insert manually (taken straight from the log), it takes less than 1 ms. It seems like Entity Framework takes its sweet time inserting every single object into the database, even though it wraps larger numbers of inserts in transactions. I don't really get it...
What can I do to speed it up for real?

In case anyone's interested, I found a solution to my problem. Entity Framework 6 is unable to provide fast bulk inserts without additional third-party libraries (as mentioned in the comments to my question), which are either expensive or don't support databases other than SQL Server. Entity Framework Core, on the other hand, is another story. It supports fast bulk insertion and can replace EF 6 in a project with just a handful of code changes: https://learn.microsoft.com/pl-pl/ef/core/index
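For anyone considering the switch: the insert code itself barely changes under EF Core, which batches the generated INSERT statements inside SaveChanges(). A minimal sketch (MyDbContext and entities are, again, placeholders for my actual context and the 20k-object collection):

using (var dbContext = new MyDbContext())
{
    dbContext.ChangeTracker.AutoDetectChangesEnabled = false;
    dbContext.AddRange(entities); // track all 20k objects in one call
    dbContext.SaveChanges();      // EF Core sends the inserts to the server in batches
}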

Related

Entity Framework first query is much slower than second

In my website, I am using ASP.NET MVC 5 with EF6.
I experience slow performance with only 20K records for the first call.
For example, I need to get all rows from the Person table.
First invocation: 7500 ms (after that, the second call takes only 1000 ms)
List<Person> persons = await _context.Person.ToListAsync(); // Time : 7500ms
List<Person> persons2 = await _context.Person.ToListAsync(); // Time : 1000ms
What I tried:
Disabled lazy loading in the edmx schema
Refreshed the schema
The same query in SQL Server Management Studio takes 400 ms (and it's a really simple query without joins or conditions),
and it happens every time a client goes to the Person page.
I would have posted this in a comment, but it's too long.
There are many things that can factor into that time difference, in order of less likely/impactful to more likely/impactful:
The first query, once it reaches SQL Server (if that's the underlying engine), sometimes has to "warm up" the server. I doubt this is the actual problem, since SQL Server probably hasn't had enough time to go "down" between your tries. Also, the execution plan shouldn't be too problematic for that query.
The first query has to open the communication channel. For example, if it has to route through VPNs, or simply open an SQL connection, that adds delay.
Migrations: Unless you manually force migrations, EF6 doesn't run the migrations (and seeding) at the moment you create the DbContext. It waits until the first time it actually has to query, then builds the configuration and executes the migrations.
If you want to investigate, put a breakpoint in the OnModelCreating method and see when it's called. You can also add a query against an unrelated entity before these two queries, and you'll see that it's not because of caching (AFAIK, caching is only used with DbSet<T>.Find(...)).
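If it does turn out to be the first-use initialization, one common mitigation is to pay that cost once at application startup rather than on the first user request. A minimal sketch, assuming your context class is named MyDbContext:

// Run at startup (e.g. in Application_Start) so the first real request
// doesn't pay for model building, migrations and seeding.
using (var context = new MyDbContext())
{
    context.Database.Initialize(force: false);
}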

I have roughly 30M rows to insert/update in SQL Server per day - what are my options?

If I use SqlBulkCopy, does it handle not inserting data that already exists?
In my scenario I need to be able to run this over and over with the same data without duplicating data.
At the moment I have a stored procedure with an update statement and an insert statement which read data from a DataTable.
What should I be looking for to get better performance?
The usual way to do something like this is to maintain a permanent work table (or tables) that have no constraints on them. Often these might live in a separate work database on the same server.
To load the data, you empty the work tables, blast the data in via BCP/bulk copy. Once the data is loaded, you do whatever cleanup and/or transforms are necessary to prep the newly loaded data. Once that's done, as a final step, you migrate the data to the real tables by performing the update/delete/insert operations necessary to implement the delta between the old data and the new, or by simply truncating the real tables and reloading them.
Another option, if you've got something resembling a steady stream of data flowing in, might be to set up a daemon to monitor for the arrival of data and then do the inserts. For instance, if your data is flat files that get dropped into a directory via FTP or the like, the daemon can monitor the directory for changes and do the necessary work (as above) when the files arrive.
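A minimal sketch of such a watcher in C#, assuming the files land in C:\incoming; LoadFile is a hypothetical placeholder for the bulk-load and transform steps described above:

using System.IO;

// Watch the drop directory and kick off the load when a new file arrives.
var watcher = new FileSystemWatcher(@"C:\incoming", "*.csv");
watcher.Created += (sender, e) => LoadFile(e.FullPath); // LoadFile is hypothetical
watcher.EnableRaisingEvents = true;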
One thing to consider, if this is a production system, is that doing massive insert/delete/update statements is likely to cause blocking while the transaction is in-flight. Also, a gigantic transaction failing and rolling back has its own disadvantages:
The rollback can take quite a while to process.
Locks are held for the duration of the rollback, so more opportunity for blocking and other contention in the database.
Worst of all, after everything that happens, you've achieved no forward motion, so to speak: a lot of time and effort, and you're right back where you started.
So, depending on your circumstances, you might be better off doing your insert/update/deletes in smaller batches so as to guarantee that you achieve forward progress. 30 million rows over 24 hours works out to be c. 350 per second.
Bulk insert into a holding table, then perform either a single MERGE statement or an UPDATE plus an INSERT statement. Either way, you want to compare your holding table against the destination table to see which action to perform.
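A condensed C# sketch of that approach, assuming a holding table dbo.Holding_MyTable with the same shape as the destination (all table and column names here are placeholders):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // 1. Blast the rows into the holding table.
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.Holding_MyTable";
        bulkCopy.WriteToServer(dataTable); // dataTable holds the incoming rows
    }

    // 2. Fold the staged rows into the real table in one statement.
    const string merge = @"
        MERGE dbo.MyTable AS target
        USING dbo.Holding_MyTable AS source ON target.Id = source.Id
        WHEN MATCHED THEN UPDATE SET target.Value = source.Value
        WHEN NOT MATCHED THEN INSERT (Id, Value) VALUES (source.Id, source.Value);";

    using (var command = new SqlCommand(merge, connection))
    {
        command.ExecuteNonQuery();
    }
}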

What can affect NHibernate bulk insert performance?

I have various large data modification operations in a project built on C# and Fluent NHibernate.
The DB is SQLite (on disk rather than in memory, as I'm interested in performance).
I wanted to check performance of these so I created some tests to feed in large amounts of data and let the processes do their thing. The results from 2 of these processes have got me pretty confused.
The first is a fairly simple case of taking data supplied in an XML file doing some light processing and importing it. The XML contains around 172,000 rows and the process takes a total of around 60 seconds to run with the actual inserts taking around 40 seconds.
In the next process, I do some processing on the same set of data. So I have a DB with approx 172,000 rows in one table. The process then works through this data, doing some heavier processing and generating a whole bunch of DB updates (inserts and updates to the same table).
In total, this results in around 50,000 rows inserted and 80,000 updated.
In this case, the processing takes around 30 seconds, which is fine, but saving the changes to the DB takes over 30 minutes! And it crashes before it finishes, with an SQLite 'disk or I/O error'.
So the question is: why are the inserts/updates in the second process so much slower? They are working on the same table of the same database with the same connection. In both cases, IStatelessSession is used and ado.batch_size is set to 1000.
In both cases, the code that does the update looks like this:
BulkDataInsert((IStatelessSession session) =>
{
foreach (Transaction t in transToInsert) { session.Insert(t); }
foreach (Transaction t in transToUpdate) { session.Update(t); }
});
(although the first process has no 'transToUpdate' line as it's only inserts - removing the update line and just doing the inserts still takes almost 10 minutes.)
The transTo* variables are Lists holding the objects to be inserted/updated.
BulkDataInsert creates the session and handles the DB transaction.
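For context, BulkDataInsert does roughly the following (a simplified sketch of the intent; the real helper lives in my unit-of-work code, and sessionFactory is the NHibernate ISessionFactory):

void BulkDataInsert(Action<IStatelessSession> work)
{
    // Open a stateless session, run the work inside one transaction, commit at the end.
    using (IStatelessSession session = sessionFactory.OpenStatelessSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        work(session);
        transaction.Commit();
    }
}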
I didn't understand your second process. However, here are some things to consider:
Are there any clustered or non-clustered indexes on the table?
How many disk drives do you have?
How many threads are writing to the DB in the second test?
It seems that you are experiencing IO bottlenecks that can be resolved by having more disks, more threads, indexes, etc.
So, assuming a lot of things, here is what I "think" is happening:
In the first test your table probably has no indexes, and since you are just inserting data, it is a sequential insert in a single thread which can be pretty fast - especially if you are writing to one disk.
Now, in the second test, you are reading data and then updating it. Your SQL instance has to find each record that it needs to update. If you do not have any indexes, this "find" action is basically a table scan, which will happen for each of those 80,000 row updates. This will make your application really, really slow.
The simplest thing you could probably do is add a clustered index on the table for a unique key, and the best option is to use the columns that you are using in the where clause to "update" those rows.
Hope this helps.
DISCLAIMER: I made quite a few assumptions
The problem was due to my test setup.
As is pretty common with nhibernate based projects, I had been using in-memory sqlite databases for unit testing. These work great but one downside is that if you close the session, it destroys the database.
Consequently, my unit of work implementation contains a 'PreserveSession' property to keep the session alive and just create new transactions when needed.
My new performance tests are using on-disk databases but they still use the common code for setting up test databases and so have PreserveSession set to true.
It seems that having several sessions all left open (even though they're not doing anything) starts to cause problems after a while, including the performance drop-off and the disk I/O error.
I re-ran the second test with PreserveSession set to false, and the time immediately dropped from over 30 minutes to under 2 minutes, which is more where I'd expect it to be.

Entity Framework timeout error due to database block

I have a project that uses Entity Framework (v1 with .NET 3.5). It's been in use for a few years, but it's now being used by more people. We started getting timeout errors, and I have tracked it down to a few things. For simplicity's sake, let's say my database has three tables: product, part, and product_part. There are ~1400 parts and a handful of products.
The user has the ability to add any number of parts to a product. My problem is that when many parts are added to a product, the inserts take a long time. I think it's mostly due to network traffic/latency, but inserting all 1400 takes around a minute. If someone tries to view the details of a part while those records are being inserted, I get a timeout and can see a block in SQL Server's Activity Monitor.
What can I do to avoid this? My apologies if this has been asked before and I missed it.
Thanks,
Nick
I think the root problem is that your write transaction is taking so long. EF is not good at executing mass DML. It executes each insert in a separate network roundtrip and separate statement.
If you want to insert 1400 rows and performance matters, do the insert in one single statement using TVPs (INSERT ... SELECT * FROM @tvp). Or switch to bulk copy, but I don't think that will be advantageous at only 1400 rows.
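A sketch of the TVP approach from C#, assuming a user-defined table type (here called dbo.ProductPartType) has been created in the database; all names are illustrative:

// Requires System.Data and System.Data.SqlClient.
// Assumes a table type was created once in the database, e.g.:
//   CREATE TYPE dbo.ProductPartType AS TABLE (product_id int, part_id int);
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "INSERT INTO product_part (product_id, part_id) " +
    "SELECT product_id, part_id FROM @parts", connection))
{
    var parameter = command.Parameters.AddWithValue("@parts", partsTable); // partsTable is a DataTable
    parameter.SqlDbType = SqlDbType.Structured;
    parameter.TypeName = "dbo.ProductPartType";

    connection.Open();
    command.ExecuteNonQuery();
}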
If your read transactions are getting blocked, and this is a problem, switch on snapshot isolation. That takes care of the readers 100% as they never block under snapshot isolation.

ADO.NET SqlTransaction improves performance

I'm doing some work that involves inserting a batch of records into a SQL database. The size of the batch will vary, but for argument's sake we can say 5000 records every 5 seconds. It is likely to be less, though. Multiple processes will be writing to this table; nothing is reading from it.
What I have noticed during a quick test is that using a SqlTransaction around this whole batch insert seems to improve performance.
e.g.
SqlTransaction trans = Connection.BeginTransaction();
myStoredProc.Transaction = trans;
sampleData.ForEach(ExecuteNonQueryAgainstDB);
trans.Commit();
I'm not interested in having the ability to roll back my changes, so I wouldn't really have considered using a transaction except that it seems to improve performance. If I remove the transaction code, my inserts go from taking 300 ms to around 800 ms!
What is the logic for this? My understanding is that the transaction still writes the data to the DB but locks the records until it is committed, so I would have expected it to add overhead...
What I am looking for is the fastest way to do this insert.
The commit is what costs time. Without your explicit transaction, you have one transaction per query executed. With the explicit transaction, no additional transaction is created for your queries. So, you have one transaction vs. multiple transactions. That's where the performance improvement comes from.
If you are looking for a fast way to insert/load data, have a look at the SqlBulkCopy class.
What you're getting is perfectly normal.
If you're working with a usual isolation level (say, read committed or snapshot), then when you don't use an explicit transaction the database engine has to check for conflicts every time you make an insert. That is, it has to make sure that whenever someone reads from that table (with a SELECT *, for example), they don't get dirty reads - it has to guard each insertion so that while the insert itself is taking place, no one else is reading it.
That means: lock, insert row, unlock; lock, insert row, unlock; and so on.
When you encapsulate all of that in a transaction, what you're effectively achieving is reducing that series of locks and unlocks to just one, at the commit phase.
I've just finished writing a blog post on the performance gains you can get by explicitly specifying where transactions start and finish.
With Dapper, I have observed transactions cutting batch insert times down to 1/2 of the original and batch update times down to 1/3 of the original.
