I have tried to profile my WPF application, concentrating on the non-visual parts that do some calculations and evaluations; I used the Visual Studio 2012 built-in profiler.
There is quite a lot of code (tens of thousands of lines) in that application, so I was surprised that it showed 46.3% of the time spent on a single line:
db.Entry(qzv.ZkouskaVzorku).Collection(p => p.VyhodnoceniZkouskies).Load();
This line should just explicitly load related entities as specified here.
I have checked this line using SQL Express Profiler, and it showed only this SQL command:
exec sp_executesql N'SELECT
[Extent1].[VyhodnoceniZkouskyID] AS [VyhodnoceniZkouskyID],
[Extent1].[Kontext] AS [Kontext],
[Extent1].[NormaVlastnostiID] AS [NormaVlastnostiID],
[Extent1].[ZkouskaVzorkuID] AS [ZkouskaVzorkuID],
[Extent1].[ZkouskaTypuID] AS [ZkouskaTypuID],
[Extent1].[JeShodaITT] AS [JeShodaITT],
[Extent1].[JeITT] AS [JeITT],
[Extent1].[JeStorno] AS [JeStorno]
FROM [dbo].[VyhodnoceniZkousky] AS [Extent1]
WHERE [Extent1].[ZkouskaVzorkuID] = @EntityKeyValue1',N'@EntityKeyValue1 int',@EntityKeyValue1=1816601
go
And this command executes very quickly, in 0 ms, as it just selects several rows using the clustered primary key index.
I am using Entity Framework 6.1.0 with SQL Server 2014 LocalDB.
When I comment out this line (it is needed only for the ViewModels), the calculations really do run roughly twice as fast.
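If it helps, this is the kind of comparison I could run to separate EF's materialization and change-tracking overhead from the SQL time itself (a minimal sketch; that the context exposes a VyhodnoceniZkouskies set, and the exact key property name, are assumptions):

var sw = System.Diagnostics.Stopwatch.StartNew();
db.Entry(qzv.ZkouskaVzorku).Collection(p => p.VyhodnoceniZkouskies).Load();
sw.Stop();
Console.WriteLine("Explicit Load: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
// Same rows, but materialized without change tracking.
var rows = db.VyhodnoceniZkouskies
             .AsNoTracking()
             .Where(v => v.ZkouskaVzorkuID == qzv.ZkouskaVzorku.ZkouskaVzorkuID)
             .ToList();
sw.Stop();
Console.WriteLine("No-tracking query: {0} ms", sw.ElapsedMilliseconds);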
What could be the issue and is there any workaround to fix it?
I am querying for values from a database in AWS Sydney (I am in New Zealand). Using a Stopwatch, I measured the query time; it is wildly inconsistent, sometimes in the tens of milliseconds and sometimes in the hundreds of milliseconds, for the exact same query. I have no idea why.
var device = db.things.AsQueryable().FirstOrDefault(p => p.ThingName == model.thingName);
The things table only has 5 entries, and I have tried it without the AsQueryable(); it seems to make no difference. I am using Visual Studio 2013 and Entity Framework 6.1.1.
EDIT:
Because this is for a business, I cannot post much of the code. As another timing example, the same query went from 34 ms to 400 ms.
Thanks.
This can be related to cold vs. warm query execution.
The very first time any query is made against a given model, the Entity Framework does a lot of work behind the scenes to load and validate the model. We frequently refer to this first query as a "cold" query. Further queries against an already loaded model are known as "warm" queries, and are much faster.
You can find more information about this in the following article:
https://msdn.microsoft.com/en-us/library/hh949853(v=vs.113).aspx
One way to make sure this is the problem is to write a stored procedure and fetch the data through it (still using Entity Framework) to see whether the problem is in the connection or in the query Entity Framework generates.
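A quick way to see the difference is to time the very first query in a fresh process separately from a repeat of the same query (a rough sketch; MyContext and Things are placeholder names, and note that the model metadata is cached per AppDomain, so even a new context instance counts as "warm"):

var sw = System.Diagnostics.Stopwatch.StartNew();
using (var db = new MyContext())
{
    // First query in the process: pays the one-time model load/validation cost.
    var first = db.Things.FirstOrDefault(p => p.ThingName == "warm-up");
}
sw.Stop();
Console.WriteLine("Cold query: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
using (var db = new MyContext())
{
    // Same query again: the model is already cached, so mostly SQL time remains.
    var second = db.Things.FirstOrDefault(p => p.ThingName == "warm-up");
}
sw.Stop();
Console.WriteLine("Warm query: {0} ms", sw.ElapsedMilliseconds);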
So following on from this question, I was looking into why I was seeing poor query performance with SQL Server Compact that was blocking the UI thread. I have LazyLoading disabled and explicitly load the data that I need ahead of time.
To isolate this a little, I ran a test using the following query on the Northwind database to load all Orders, and for each Order load the associated OrderDetails, and for each OrderDetail load the Product and Supplier:
entities.Orders.Include(o => o.Order_Details
                              .Select(od => od.Products.Suppliers))
        .Load();
It took around 9 seconds to execute!
As a comparison I ran this against the Northwind database on my local SQL Server Express and it completed in < 0.1 seconds.
What is wrong with the .Select() that is causing the issue in this query? Why does this select cause such a long execution time on SQL Server Compact, but is fine when run against SQL Server?
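One variant I still want to try, to isolate the nested Select, is to load each level with its own simple query and let EF's relationship fixup connect the entities in the context (a sketch only; I have not verified it against SQL Server Compact):

// Each Load() issues one simple SELECT instead of one large nested join.
entities.Orders.Load();
entities.Order_Details.Load();
entities.Products.Include(p => p.Suppliers).Load();
// Relationship fixup wires the loaded entities together in the context.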
I'm having trouble with the high cost of the initial load of a Model First data model with 500+ tables. I've put together a little testing program to demonstrate this.
For the test, the database is AdventureWorks with 72 tables, where the table with the largest row count is [Sales].[SalesOrderDetail] (121,317 records):
On EF5 without pre-generated views, performing a basic query (select * from SalesOrderDetails where condition), the result is 4.65 seconds.
On EF5 with pre-generated views, the same query takes 4.30 seconds.
Now, on EF6 without pre-generated views, the same query takes 6.49 seconds.
Finally, on EF6 with pre-generated views, the same query takes 4.12 seconds.
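For reference, the timing test is essentially of this shape (a minimal reconstruction; the context name and the filter value are placeholders):

var sw = System.Diagnostics.Stopwatch.StartNew();
using (var ctx = new AdventureWorksEntities())
{
    // The first query pays the model-load cost (and, without pre-generated
    // views, the view-generation cost) on top of the SQL itself.
    var details = ctx.SalesOrderDetails
                     .Where(d => d.SalesOrderID == 43659)
                     .ToList();
}
sw.Stop();
Console.WriteLine("First query: {0} ms", sw.ElapsedMilliseconds);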
The source code has been uploaded to my TFS online at [https://diegotrujillor.visualstudio.com/DefaultCollection/EntityFramework-PerformanceTest]; please let me know the user(s) or email(s) I should grant access to, so you can download and examine it. The test was performed on a local server pointing to .\SQLEXPRESS.
So far we can see some subtle differences and the picture doesn't look very daunting; however, the same scenario in a real production environment with 538 tables definitely goes in the wrong direction. I cannot attach the original code and a database backup due to size and privacy constraints (I can send some pictures or even share my desktop in a conference call to show it running live). I've executed hundreds of queries attempting to compare the generated output traces in SQL Server Profiler, and when I paste and execute the captured statement in a SQL Server editor it takes 0.00 seconds.
In the live environment, EF5 without pre-generated views can take up to 259.8 seconds on a table with 8,049 records and 104 columns, executing a query very similar to the one mentioned above. It gets better with pre-generated views: 21.9 seconds; once again, the statement captured in SQL Server Profiler takes 0.00 seconds to execute.
Nevertheless, in the live environment, EF6 can take up to 49.3 seconds executing the same query without pre-generated views, and 47.9 seconds with them. It looks like pre-generated views have no effect in EF6, or EF6 already pre-generates views as part of its core functionality or something else; I don't know.
Thus, I had to downgrade to EF5, as mentioned in my recent post [http://blogs.msdn.com/b/adonet/archive/2014/05/19/ef7-new-platforms-new-data-stores.aspx?CommentPosted=true#10561183].
I've already performed the same tests with Database First and Code First, with the same results. I'm using the "Entity Framework Power Tools" add-in to pre-generate the views. Both projects, the real one and the test one, target .NET Framework 4.0; Visual Studio 2013 is the IDE and SQL Server 2008 R2 SP2 is the DBMS.
Any help would be appreciated. Thanks in advance.
I have a legacy .NET 4 project (call it "A") which uses LINQ to SQL to query a database, and
I have another .NET 4 project (call it "B") with similar, but not the same, code which queries the same database as "A".
Both projects:
are C# projects {FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}
use the same assemblies (version v4.0.30319, same folder)
System.dll
System.Data.dll
System.Data.Linq.dll
The auto-generated DataContext is specific to each project but is instantiated the same way (see the sketch after this list):
same connection string using SQL authentication
both DataContext set their CommandTimeout from the default to 60 seconds
all other configuration options for the DataContext are the defaults
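A minimal sketch of how both contexts are created (the context class name is a placeholder):

string connectionString =
    "Data Source=server;Initial Catalog=sourceDb;Persist Security Info=True;" +
    "User ID=user;Password=password";

using (var context = new ProjectDataContext(connectionString))
{
    context.CommandTimeout = 60;   // raised from the default of 30 seconds
    // ... build and execute the LINQ query ...
}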
The way the Linq query is constructed is not exactly the same for the projects but the resulting Linq query is the same.
The generated (T-)SQL select statement is the same as well! (monitored and verified the SQL handles on the db server)
The database server is:
Microsoft SQL Server Enterprise 2005 x64 (9.00.4035.00)
Operating System: Microsoft Server 2003 R2 SP2 x64
When run, the monitored CPU time (on the database server) increased drastically for the query of project "A", and a command timeout exception was thrown
(System.Data.SqlClient.SqlException: Timeout expired).
On the other hand, the query of "B" executed within seconds (around 3).
I was able to reproduce the behavior by calling the code of "A" with the same parameters again (no changes to code or database).
"B" even executed within seconds at the same time "A" was increasing its CPU time.
Regrettably, after a co-worker recreated the indexes, I can no longer reproduce the behavior.
The same co-worker mentioned that the query ran fast "last month" (although no code changed from "last month"...).
I debugged the code for both projects - both DataContext instances looked alike.
The db server process' sql handle contains the same SQL statement.
But "A" threw a timeout exception and "B" executed within seconds - repetitive!
Why does the same LINQ to SQL query consume much more CPU time on the database server for project "A" than for "B"?
To be precise: if the query runs "slow" for whatever reason, repeatably, how can the same query run faster just because it is called from different LINQ to SQL code?
Can there be side effects I do not know of (yet)?
Are there some instance values of the DataContext I should look at specifically at runtime?
By the way: the SQL statement - via SSMS - does use the same query plan on each run.
For the sake of completeness I have linked a sample of:
the C# code fragments of project "B" (the SqlRequest.GetQuery part looks alike for both projects)
the SQL file contains the appropriate database schema
the database execution plan
Please keep in mind that I cannot disclose the full db schema nor the code nor the actual data I am querying against.
(The SQL tables have other columns beside the named ones and the C# code is a bit more complex because the Linq query is constructed conditionally.)
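To illustrate what "constructed conditionally" means, the composition looks roughly like this (a sketch with placeholder table, column, and variable names, not the actual code):

IQueryable<Order> query = context.Orders;

if (customerId.HasValue)
    query = query.Where(o => o.CustomerID == customerId.Value);

if (fromDate.HasValue)
    query = query.Where(o => o.OrderDate >= fromDate.Value);

var result = query.ToList();   // a single SELECT is generated at this point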
Update - more insight at run-time
Some properties of both DataContext instances:
Log = null;
Transaction = null;
CommandTimeout = 60;
Connection: System.Data.SqlClient.SqlConnection;
The SqlConnection was created from a connection string like that (both cases):
"Data Source=server;Initial Catalog=sourceDb;Persist Security Info=True;User ID=user;Password=password"
There are no explicit SqlCommands being run to pass SET options to the database session, nor does the inline TVF contain SET options.
You need to run a trace on SQL Server instead of debugging this from the C# side. This will show you everything both A and B are executing on the server. The execution plan does you no good because it's precisely that - just a plan. You want to see the exact statements and their actual performance metrics.
In the rare event you were to tell me that both SELECT statements are exactly the same but had vastly different performance I would be virtually certain they are running under different transaction isolation levels. A single SQL command is an implicit transaction even if you aren't explicitly creating any.
If for whatever reason the trace doesn't make it clear, you should post the commands being run along with their metrics.
Note: running a trace has some performance overhead cost to it so I would try to keep the timeframe small or run during off-peak if possible.
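If isolation levels turn out to be the suspect, you can also read them straight from the server for both sessions; a sketch from the C# side (sys.dm_exec_sessions and its transaction_isolation_level column are standard SQL Server DMV names; the connection object is a placeholder):

// 1 = ReadUncommitted, 2 = ReadCommitted, 3 = RepeatableRead,
// 4 = Serializable, 5 = Snapshot
using (var cmd = new SqlCommand(
    "SELECT transaction_isolation_level FROM sys.dm_exec_sessions " +
    "WHERE session_id = @@SPID", connection))
{
    short level = (short)cmd.ExecuteScalar();
    Console.WriteLine("Isolation level: {0}", level);
}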
I think you should check whether LazyLoadingEnabled="true" is set in the .edmx file of your project "A".
If LazyLoadingEnabled="true" (lazy loading): related objects (child objects) are not automatically loaded with their parent object until they are requested. LINQ supports lazy loading by default.
If LazyLoadingEnabled="false" (eager loading): related objects (child objects) are loaded automatically with their parent object. To use eager loading you need to use the Include() method.
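For illustration, assuming an EF DbContext with placeholder entity names (the Include() lambda overload needs using System.Data.Entity;):

// Lazy loading: children are fetched on first access, one extra query per parent.
context.Configuration.LazyLoadingEnabled = true;
var order = context.Orders.First();
int count = order.OrderDetails.Count;   // a second query fires here

// Eager loading: children come back with the parent in a single query.
context.Configuration.LazyLoadingEnabled = false;
var orders = context.Orders
                    .Include(o => o.OrderDetails)
                    .ToList();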
I'm running the same commands in ADO.NET C# and SQL Server Management Studio. The SQL that runs via C# performs significantly worse: memory usage is higher (using up all available memory), which causes the database execution time to increase. Management Studio isn't perfect (it too causes SQL Server to use up memory), but it's not as bad as ADO.NET.
I am running Windows 7, SQL Server 2008 R2 (10.50.1600), C# on .NET 3.5, and SQL Server Management Studio 2008 R2. All programs and databases are on my local dev machine.
The SQL I am running creates 40 views and 40 unique indexes on 2 databases. I need to do this on the fly because we are running a database compare between the 2 databases (for reasons that aren't relevant here, we need to compare views and not tables). And since performance is an issue, we cannot leave the views and indexes around all the time.
The SQL looks like this:
create view [dbo].[view_datacompare_2011106] with schemabinding as (
SELECT t.[ID], t.[Column1], t.[Column2], t.[Column3] FROM dbo.Table t WHERE t.[ID] in ('1','2','3','4') )
go
create unique clustered index [index_datacompare_2011106] on [dbo].[view_datacompare_2011106] (ID)
go
...
The only difference is that the C# code does not issue GO. Each CREATE command is wrapped in a using statement and executed via ExecuteNonQuery(), e.g.:
using (SqlCommand cmd = new SqlCommand(sql, this.connectionActualDb))
{
    cmd.CommandTimeout = Int32.Parse(SqlResources.TimeoutSeconds);
    cmd.ExecuteNonQuery();
}
P.S. SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed views.
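In case it matters, ARITHABORT can be forced on explicitly for the session before the DDL runs, using the same connection field as above (a small sketch):

using (SqlCommand setCmd = new SqlCommand("SET ARITHABORT ON;", this.connectionActualDb))
{
    setCmd.ExecuteNonQuery();   // session-level setting on this connection
}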
Use the Waits and Queues methodology to investigate the performance bottleneck. You'll find the root cause, and then we can advise accordingly. Most likely your C# application runs into concurrency issues due to locks, very likely held by the application itself. Typically one blames plan changes due to parameter sniffing, as in Slow in the Application, Fast in SSMS, but with DDL statements this is unlikely.
Why don't you put all the commands into a single string separated by GO and send the one string to the database?
It's called SQL Batching.
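Note that GO is a batch separator understood by client tools such as SSMS and sqlcmd, not by the server itself, so a single SqlCommand cannot execute a string that contains GO; the usual approach is to split the script on GO client-side and run each batch, roughly like this (script and connection are placeholders):

// Requires: using System.Text.RegularExpressions;
string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                               RegexOptions.Multiline | RegexOptions.IgnoreCase);
foreach (string batch in batches)
{
    if (string.IsNullOrWhiteSpace(batch)) continue;
    using (var cmd = new SqlCommand(batch, connection))
    {
        cmd.ExecuteNonQuery();
    }
}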