Extremely slow query times in Entity Framework compared to SSMS - c#

I've inherited a codebase and I'm having a weird issue with Entity Framework Core v3.1.19.
Entity Framework is generating the following query (captured in SQL Server Profiler) and it's taking nearly 30 seconds to run, while running the same SQL (again taken from Profiler) takes 1 second in SSMS. (This is one example, but the entire site runs extremely slowly when getting data from the database.)
exec sp_executesql N'SELECT [t].[Id], [t].[AccrualLink], [t].[BidId], [t].[BidId1], [t].[Cancelled], [t].[ClientId], [t].[CreatedUtc], [t].[CreatorUserId], [t].[Date], [t].[DeletedUtc], [t].[DeleterUserId], [t].[EmergencyContact], [t].[EmergencyName], [t].[EmergencyPhone], [t].[EndDate], [t].[FinalizerId], [t].[Guid], [t].[Invoiced], [t].[IsDeleted], [t].[Notes], [t].[OfficeId], [t].[PONumber], [t].[PlannerId], [t].[PortAgencyAgentEmail], [t].[PortAgencyAgentName], [t].[PortAgencyAgentPhone], [t].[PortAgencyId], [t].[PortAgentId], [t].[PortId], [t].[PortType], [t].[PositionNote], [t].[ProposalLink], [t].[ServiceId], [t].[ShipId], [t].[ShorexAssistantEmail], [t].[ShorexAssistantName], [t].[ShorexAssistantPhone], [t].[ShorexManagerEmail], [t].[ShorexManagerName], [t].[ShorexManagerPhone], [t].[ShuttleBus], [t].[ShuttleBusEmail], [t].[ShuttleBusName], [t].[ShuttleBusPhone], [t].[ShuttleBusServiceProvided], [t].[TouristInformationBus], [t].[TouristInformationEmail], [t].[TouristInformationName], [t].[TouristInformationPhone], [t].[TouristInformationServiceProvided], [t].[UpdatedUtc], [t].[UpdaterUserId], [t].[Water], [t].[WaterDetails], [t0].[Id], [t0].[CreatedUtc], [t0].[CreatorUserId], [t0].[DeletedUtc], [t0].[DeleterUserId], [t0].[Guid], [t0].[IsDeleted], [t0].[LanguageId], [t0].[Logo], [t0].[Name], [t0].[Notes], [t0].[OldId], [t0].[PaymentTerms], [t0].[Pricing], [t0].[Services], [t0].[Status], [t0].[UpdatedUtc], [t0].[UpdaterUserId], [t1].[Id], [t1].[CreatedUtc], [t1].[CreatorUserId], [t1].[DeletedUtc], [t1].[DeleterUserId], [t1].[Guid], [t1].[IsDeleted], [t1].[Name], [t1].[OldId], [t1].[UpdatedUtc], [t1].[UpdaterUserId], [s].[Id], [s].[CreatedUtc], [s].[CreatorUserId], [s].[DeletedUtc], [s].[DeleterUserId], [s].[Guid], [s].[IsDeleted], [s].[Name], [s].[Pax], [s].[UpdatedUtc], [s].[UpdaterUserId]
FROM (
SELECT [o].[Id], [o].[AccrualLink], [o].[BidId], [o].[BidId1], [o].[Cancelled], [o].[ClientId], [o].[CreatedUtc], [o].[CreatorUserId], [o].[Date], [o].[DeletedUtc], [o].[DeleterUserId], [o].[EmergencyContact], [o].[EmergencyName], [o].[EmergencyPhone], [o].[EndDate], [o].[FinalizerId], [o].[Guid], [o].[Invoiced], [o].[IsDeleted], [o].[Notes], [o].[OfficeId], [o].[PONumber], [o].[PlannerId], [o].[PortAgencyAgentEmail], [o].[PortAgencyAgentName], [o].[PortAgencyAgentPhone], [o].[PortAgencyId], [o].[PortAgentId], [o].[PortId], [o].[PortType], [o].[PositionNote], [o].[ProposalLink], [o].[ServiceId], [o].[ShipId], [o].[ShorexAssistantEmail], [o].[ShorexAssistantName], [o].[ShorexAssistantPhone], [o].[ShorexManagerEmail], [o].[ShorexManagerName], [o].[ShorexManagerPhone], [o].[ShuttleBus], [o].[ShuttleBusEmail], [o].[ShuttleBusName], [o].[ShuttleBusPhone], [o].[ShuttleBusServiceProvided], [o].[TouristInformationBus], [o].[TouristInformationEmail], [o].[TouristInformationName], [o].[TouristInformationPhone], [o].[TouristInformationServiceProvided], [o].[UpdatedUtc], [o].[UpdaterUserId], [o].[Water], [o].[WaterDetails]
FROM [OpsDocuments] AS [o]
WHERE ([o].[IsDeleted] <> CAST(1 AS bit)) AND ((CASE
WHEN [o].[Cancelled] = CAST(0 AS bit) THEN CAST(1 AS bit)
ELSE CAST(0 AS bit)
END & CASE
WHEN [o].[Invoiced] = CAST(0 AS bit) THEN CAST(1 AS bit)
ELSE CAST(0 AS bit)
END) = CAST(1 AS bit))
ORDER BY [o].[Date]
OFFSET @__p_0 ROWS FETCH NEXT @__p_1 ROWS ONLY
) AS [t]
LEFT JOIN [TourClients] AS [t0] ON [t].[ClientId] = [t0].[Id]
LEFT JOIN [TourLanguages] AS [t1] ON [t0].[LanguageId] = [t1].[Id]
LEFT JOIN [Ships] AS [s] ON [t].[ShipId] = [s].[Id]
ORDER BY [t].[Date]',N'@__p_0 int,@__p_1 int',@__p_0=0,@__p_1=10
This query is returning 10 rows from a possible 55, so we're not talking big numbers or anything.
At first I thought it might be data type conversion issues, but all the data types check out, and since the issue shows up in Profiler I'm assuming this is a SQL issue rather than specifically an Entity Framework one. However, I can't find any difference between the two when running in Profiler, except that the one from EF just takes 30 times longer.
Hoping someone might have a suggestion of where to look.
Edit: Thanks for all the suggestions in the comments. As to the LINQ and a reproducible example, it's going to be tricky, as the code base for this project is some odd home-baked auto-generating system. You give it a ViewModel with tonnes of custom attributes and it tries to do everything for you (so many layers of abstraction), so it's difficult to find anything.
It sounds like I'm going to have to start rewriting these into more finite controllers.

EF will always take longer than raw SQL because EF has to materialize tracked entities for every entity returned in the query.
Looking at the SQL, this is an eager-loading query across four tables: OpsDocuments, TourClients, TourLanguages, and Ships.
One reason this could suddenly take much longer after some seemingly unrelated changes is new relationships being lazy loaded.
An example of this would be where this data is being serialized and a new relationship has been added to one or more entities, which is now being tripped by lazy-load hits. (This is usually evidenced by extra queries appearing after this one runs, before the page loads.)
Other causes for this to be taking longer than it should:
The DbContext is tracking too many entities. The more entities a DbContext is tracking, the more references it has to go through when piecing together results from a LINQ query. Some teams expect that EF caches instances (similar to NHibernate) and that this improves performance; typically it is the opposite: the more entities it is tracking, the longer it can take to get results. (A no-tracking sketch follows after this list.)
Concurrent reads and locks. If tables are not efficiently indexed this can be a bit of a killer when a system runs in production compared to testing/debugging. Typically, though, this affects systems with very large row and/or user counts.
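On the tracking point above, a minimal sketch (entity and context names taken from the question, the rest assumed) of reading data without loading it into the change tracker:
// Read-only listing: AsNoTracking() skips change-tracker bookkeeping, which is
// pure overhead when the returned instances are never modified and saved back.
var documents = context.OpsDocuments
    .AsNoTracking()
    .OrderBy(x => x.Date)
    .ToList();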
The best general advice I can offer when it comes to tackling performance issues with EF is to leverage projection as much as possible. This helps you optimize queries and identify useful indexes that reflect your highest-volume data-access scenarios, as well as avoid future pitfalls from changing relationships, which can result in select N+1 lazy-load hits creeping into systems.
For example, instead of:
var results = context.OpsDocuments
    .Include(x => x.TourClient)
        .ThenInclude(x => x.TourLanguage)
    .Include(x => x.Ship)
    .OrderBy(x => x.Date)
    .ToList();
use:
var results = context.OpsDocuments
    .Select(x => new TourSummaryViewModel
    {
        DocumentId = x.DocumentId,
        ClientId = x.Client.Id,
        ClientName = x.Client.Name,
        Language = x.Client.Language.Name,
        ShipName = x.Ship.Name,
        Date = x.Date
    })
    .OrderBy(x => x.Date)
    .ToList();
... where the view model reflects just the details you need from the entity graph. This protects you from introduced relationships that the view/consumer doesn't need (unless you add them to the Select), and the resulting query can help identify useful indexes to boost performance if this is something that gets run a fair bit (tuning indexing based on actual DB use rather than guesswork).
I would also recommend that all queries like this implement a limiter for the maximum rows returned (using Take), to help avoid surprises as systems age and row counts grow, leading to performance degradation over time.
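As a rough sketch (reusing the hypothetical TourSummaryViewModel projection above), that limiter could look like:
// maxRows is an assumed application-level cap; combined with the projection it
// keeps both the row count and the column count bounded as the table grows.
const int maxRows = 100;
var page = context.OpsDocuments
    .OrderBy(x => x.Date)
    .Select(x => new TourSummaryViewModel
    {
        DocumentId = x.DocumentId,
        ClientName = x.Client.Name,
        Date = x.Date
    })
    .Take(maxRows)
    .ToList();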

I know this is a very late answer, but based on a similar situation recently encountered, this looks very much like a LINQ-to-Entities Where clause in the codebase is using bitwise operators ('&', '|') instead of logical operators ('&&', '||'). That would explain the odd 'CAST(1 AS bit)' and '&' occurrences in the generated SQL above.
CASTs in the WHERE clause absolutely kill performance. In our case, a 30-second query immediately went sub-second once this was identified.
Check for LINQ along the lines of .Where(x => (x.Prop1 == true & x.Prop2 == false) | (x.Prop3 == true)) and ensure the operators are '&&'/'||' instead of '&'/'|'. It's easy to be already thinking ahead to SQL when writing this code, but it's still C#!
I need to be a little more specific on how CASTing in WHEREs killed performance in our case, without actually CASTing the DB fields themselves. Here's an example of generated SQL, from using bitwise ops in the EF Core .Where() C#:
WHERE CASE WHEN cid = 1234 THEN CAST(1 as bit) ELSE CAST (0 as bit) END & (CASE WHEN (date1 IS NULL OR date1 IN ('2000-1-1', '1999-1-1')) THEN CAST (1 as bit) ELSE CAST(0 as bit) END | CASE WHEN (isverified IS NOT NULL AND isverified = CAST(1 as bit)) THEN CAST (1 AS bit) ELSE CAST(0 as bit) END)
This can be rewritten with logical ops as:
WHERE cid=1234 AND ((date1 IS NULL OR date1 IN ('2000-1-1', '1999-1-1')) OR (isverified IS NOT NULL AND isverified=1))
The first query (EF-generated thanks to bitwise ops mistakenly used in the EF Where clause) took 30 seconds on our 45-million-row table. The second was <1s. The explanation I can see for this is, and I'm open to correction, that the first query essentially generates a bitwise expression per row that must be evaluated, making it non-sargable and requiring a table scan.
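To make that concrete in C#, here is a hedged sketch (the entity set and property names Records, Cid, Date1, IsVerified are hypothetical, chosen to mirror the SQL above); the exact SQL emitted depends on the EF version, but the shape of the problem is the same:
// With bitwise operators, EF wraps each comparison in a CASE ... CAST(1 AS bit)
// expression and combines them with & and |, which the optimizer cannot index:
var slow = context.Records
    .Where(r => r.Cid == 1234 &
               (r.Date1 == null | r.IsVerified == true))
    .ToList();

// With logical operators, the same filter translates to plain AND/OR predicates
// that the optimizer can match against indexes:
var fast = context.Records
    .Where(r => r.Cid == 1234 &&
               (r.Date1 == null || r.IsVerified == true))
    .ToList();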

The main issue here is that you have stated that this "query" is taking more than 30 seconds in EF and less than 1 second in SSMS, but what you haven't provided is the SQL that EF has compiled for execution.
You're asking us to compare apples with the idea of an orange...
We really need to see the compiled SQL as a minimum but the C# / Linq code will also be helpful. It doesn't have to compile, but it will demonstrate some of the context that you are operating within.
tldr
This is less likely to be about EF itself and more about the patterns in the code you are executing and your specific query.
For such a small and simple query, lazy loading should not be used at all; beyond that, the usual suspects we talk about with EF performance should not be significantly measurable for this tiny dataset either. All we can say from the little information provided is that your EF query does not match your expected SQL, so we should start there and make sure your EF query is compiling to a reasonable approximation of the query you are expecting.
If all else fails, simply use Raw SQL Queries and move on.
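For EF Core 3.1 (the version in the question), a minimal raw-SQL sketch might look like this; composing OrderBy/Take on top of it is supported, and the query has to return columns for every mapped property of the entity:
// A sketch only (requires using Microsoft.EntityFrameworkCore;): FromSqlRaw maps
// the result back onto the OpsDocument entity, so you keep composition and
// tracking while controlling the SQL shape yourself.
var docs = context.OpsDocuments
    .FromSqlRaw("SELECT * FROM OpsDocuments WHERE IsDeleted = 0")
    .OrderBy(d => d.Date)
    .Take(10)
    .ToList();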
While it is true that there are some overheads inherent in using an ORM like EF, with a simple query like this we should be talking about a few milliseconds; anything else indicates that your EF LINQ query is either wrong or written very poorly.
If you are using lazy loading, then be mindful of which lines of code will cause a new query to the server instead of using the in-memory data. Lazy loading can be powerful, but there are relatively few situations where it makes sense. Using projections is a good alternative, but you should also consider disabling lazy loading altogether and always eager loading instead. If you are unsure, try disabling the lazy loading feature of your data context; you'll find out very quickly if your code was depending on it, as it will likely fail at runtime.
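A minimal sketch of that check in EF Core (the context name AppDbContext is assumed); lazy loading can be switched off per context instance, which quickly exposes code that depended on it:
// With lazy loading disabled, any navigation property that used to be lazily
// populated stays null unless it was eagerly loaded or already tracked.
using (var context = new AppDbContext())
{
    context.ChangeTracker.LazyLoadingEnabled = false;

    var doc = context.OpsDocuments.First();
    var client = doc.TourClient; // null here => this was a lazy-load hit before
}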
If there is a single execution point then you should be able to capture the raw SQL and time the round trip.
Post the code you used to time the execution, the raw SQL and the time please.
If a single execution point takes 30 seconds to load then there might be a cold-start issue; that is, you might have some processes executing before your query. Without knowing more about your framework, an easy way to debug this is to initiate the database connection first with a simple call that returns the count of all the OpsDocuments records, then execute your query.
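A hedged sketch of that debugging step (filter property names taken from the generated SQL above, the rest assumed):
// Warm up the connection pool, EF model cache and query pipeline with a cheap call...
var warmUp = context.OpsDocuments.Count();

// ...then time the suspect query on its own.
var sw = System.Diagnostics.Stopwatch.StartNew();
var page = context.OpsDocuments
    .Where(x => !x.IsDeleted && !x.Cancelled && !x.Invoiced)
    .OrderBy(x => x.Date)
    .Take(10)
    .ToList();
sw.Stop();
Console.WriteLine($"Query + materialization: {sw.ElapsedMilliseconds} ms");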
The other performance concerns like having too many columns or strange data type comparisons don't really apply here. You could optimise this query for sure, but with 10 rows and less than 50 columns, even a very slow PC should be able to read this result into an EF graph in a few milliseconds.
If you have already eliminated Lazy-Loading, and your captured SQL query generated by EF is lightning fast when executed in SSMS but awfully slow from your application runtime, then Locking "might" be a concern.
A simple way to verify whether locking is an issue is to query the database for the currently executing queries while your application is waiting for the response. If the wait time is truly 30 seconds, then you'll have plenty of time to execute the following in SSMS while you are waiting.
As a bonus, this will prove whether the query is running at all:
Declare @Identifier Char(1) = '~'
SELECT r.session_id, r.status,
st.TEXT AS batch_text,
qp.query_plan AS 'XML Plan',
r.start_time,
r.total_elapsed_time, r.blocking_session_id, r.wait_type, r.wait_time, r.open_transaction_count, r.open_resultset_count
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
WHERE st.TEXT NOT LIKE 'Declare @Identifier Char(1) = ''~''%'
ORDER BY cpu_time DESC;

Related

Why this parameterized SQL takes forever when the same hardcoded one executes in no time

I've got a query that looks like this:
SELECT ct,
text AS ST,
kval.idkwd
FROM (SELECT ST = kv.idkwd,
Count(kv.idkwd) CT,
kv.idkwd
FROM mwf
INNER JOIN info
ON mwf.ident = info.idinfo
INNER JOIN rel
ON rel.idinfo = info.idinfo
INNER JOIN pers
ON pers.idpers = rel.idpers
LEFT JOIN kwd kv
ON kv.idkwd = info.kwsvstatus
WHERE mwf.id IN ( :mwfIds)
GROUP BY idkwd) kw
INNER JOIN kwd kval
ON kw.idkwd = kval.idkwd
ORDER BY text
From a ASP.NET application, this query is executed this way, using NHibernate:
var q = session.CreateQuery(query);
q.SetParameterList("mwfIds", mwfIds, NHibernateUtil.Guid);
return q.List();
For some unknown reason, it sometimes takes 30 seconds to run (for certain parameter values). The timings come from SQL Profiler.
I tried executing this same query with the same parameters on SSMS (copied from the SQL Profiler output), and it runs in less than 1 second.
Worse, if I change the C# code to
var q = session.CreateQuery(hardcodedQuery);
return q.List();
where hardcodedQuery is the same query I ran in SSMS (i.e. the same as always, only without any parameter set using NH), it also runs in less than 1 second.
Why does the parameterized query take so much time?
As already said by Sean Lange in his comment, this behavior is very likely caused by parameter sniffing.
In my experience, it has always been solved by fixing the indexes. (Do not add indexes too quickly; having too many indexes can cause other performance issues, like bad index choices by the query optimizer leading to tempdb spills, for example.)
Parameter sniffing does not occur only with stored procedures. For example, it occurs with SQL queries executed through sp_executesql or EXEC(). It may even occur with auto-parameterized scalar values found in queries.
Parameter sniffing is how SQL Server shapes query plans: the plan generated for the first call to a query is built around that call's specific parameter values and then stored in the query plan cache. All subsequent calls to the same query with similar connection properties reuse that cached plan, whatever their parameter values are. When indexes are missing, the sniffed plan can be very sensitive to those first values.
If the values of the first call correspond to a corner case that filters one table down heavily, but other calls' values do not filter in the same way, the cached query plan makes those other calls perform badly.
SSMS rarely has the same connection options as your application, so it does not reuse the cached query plan the application is using. Another query plan gets generated, adapted to the parameter values you are testing (if you are lacking indexes). So SSMS appears to perform better... but no, it is just using a query plan tailored to the specific parameter values you are testing with.
A more detailed, precise and adequate explanation can be found in the blog post Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.
Do not be deterred by its raw appearance; this blog is a great resource in my opinion. And do not be put off by the "How SQL Server Compiles a Stored Procedure" heading; in the second sentence after it he writes:
If your application does not use stored procedures, but submits SQL statements directly, most of what I say in this chapter is still applicable.
This blog post will also give you guidance on how to resolve such issues.
This might be because of outdated statistics. Try using "inner hash join" instead of "inner join"; it might make a difference.
Or you can update statistics regularly (or use auto-update statistics) if practical. Updating statistics may take a long time if your table is huge, though.

Linq slowness materializing complex queries

I have often found that if I have too many joins in a Linq query (whether using Entity Framework or NHibernate) and/or the shape of the resulting anonymous class is too complex, Linq takes a very long time to materialize the result set into objects.
This is a generic question, but here's a specific example using NHibernate:
var libraryBookIdsWithShelfAndBookTagQuery = (from shelf in session.Query<Shelf>()
join sbttref in session.Query<ShelfBookTagTypeCrossReference>() on
shelf.ShelfId equals sbttref.ShelfId
join bookTag in session.Query<BookTag>() on
sbttref.BookTagTypeId equals (byte)bookTag.BookTagType
join btbref in session.Query<BookTagBookCrossReference>() on
bookTag.BookTagId equals btbref.BookTagId
join book in session.Query<Book>() on
btbref.BookId equals book.BookId
join libraryBook in session.Query<LibraryBook>() on
book.BookId equals libraryBook.BookId
join library in session.Query<LibraryCredential>() on
libraryBook.LibraryCredentialId equals library.LibraryCredentialId
join lcsg in session
.Query<LibraryCredentialSalesforceGroupCrossReference>()
on library.LibraryCredentialId equals lcsg.LibraryCredentialId
join userGroup in session.Query<UserGroup>() on
lcsg.UserGroupOrganizationId equals userGroup.UserGroupOrganizationId
where
shelf.ShelfId == shelfId &&
userGroup.UserGroupId == userGroupId &&
!book.IsDeleted &&
book.IsDrm != null &&
book.BookFormatTypeId != null
select new
{
Book = book,
LibraryBook = libraryBook,
BookTag = bookTag
});
// add a couple of where clauses, then...
var result = libraryBookIdsWithShelfAndBookTagQuery.ToList();
I know it's not the query execution, because I put a sniffer on the database and I can see that the query is taking 0ms, yet the code is taking about a second to execute that query and bring back all of 11 records.
So yeah, this is an overly complex query, having 8 joins between 9 tables, and I could probably restructure it into several smaller queries. Or I could turn it into a stored procedure - but would that help?
What I'm trying to understand is, where is that red line crossed between a query that is performant and one that starts to struggle with materialization? What's going on under the hood? And would it help if this were a SP whose flat results I subsequently manipulate in memory into the right shape?
EDIT: in response to a request in the comments, here's the SQL emitted:
SELECT DISTINCT book4_.bookid AS BookId12_0_,
libraryboo5_.librarybookid AS LibraryB1_35_1_,
booktag2_.booktagid AS BookTagId15_2_,
book4_.title AS Title12_0_,
book4_.isbn AS ISBN12_0_,
book4_.publicationdate AS Publicat4_12_0_,
book4_.classificationtypeid AS Classifi5_12_0_,
book4_.synopsis AS Synopsis12_0_,
book4_.thumbnailurl AS Thumbnai7_12_0_,
book4_.retinathumbnailurl AS RetinaTh8_12_0_,
book4_.totalpages AS TotalPages12_0_,
book4_.lastpage AS LastPage12_0_,
book4_.lastpagelocation AS LastPag11_12_0_,
book4_.lexilerating AS LexileR12_12_0_,
book4_.lastpageposition AS LastPag13_12_0_,
book4_.hidden AS Hidden12_0_,
book4_.teacherhidden AS Teacher15_12_0_,
book4_.modifieddatetime AS Modifie16_12_0_,
book4_.isdeleted AS IsDeleted12_0_,
book4_.importedwithlexile AS Importe18_12_0_,
book4_.bookformattypeid AS BookFor19_12_0_,
book4_.isdrm AS IsDrm12_0_,
book4_.lightsailready AS LightSa21_12_0_,
libraryboo5_.bookid AS BookId35_1_,
libraryboo5_.libraryid AS LibraryId35_1_,
libraryboo5_.externalid AS ExternalId35_1_,
libraryboo5_.totalcopies AS TotalCop5_35_1_,
libraryboo5_.availablecopies AS Availabl6_35_1_,
libraryboo5_.statuschangedate AS StatusCh7_35_1_,
booktag2_.booktagtypeid AS BookTagT2_15_2_,
booktag2_.booktagvalue AS BookTagV3_15_2_
FROM shelf shelf0_,
shelfbooktagtypecrossreference shelfbookt1_,
booktag booktag2_,
booktagbookcrossreference booktagboo3_,
book book4_,
librarybook libraryboo5_,
library librarycre6_,
librarycredentialsalesforcegroupcrossreference librarycre7_,
usergroup usergroup8_
WHERE shelfbookt1_.shelfid = shelf0_.shelfid
AND booktag2_.booktagtypeid = shelfbookt1_.booktagtypeid
AND booktagboo3_.booktagid = booktag2_.booktagid
AND book4_.bookid = booktagboo3_.bookid
AND libraryboo5_.bookid = book4_.bookid
AND librarycre6_.libraryid = libraryboo5_.libraryid
AND librarycre7_.librarycredentialid = librarycre6_.libraryid
AND usergroup8_.usergrouporganizationid =
librarycre7_.usergrouporganizationid
AND shelf0_.shelfid = @p0
AND usergroup8_.usergroupid = @p1
AND NOT ( book4_.isdeleted = 1 )
AND ( book4_.isdrm IS NOT NULL )
AND ( book4_.bookformattypeid IS NOT NULL )
AND book4_.lightsailready = 1
EDIT 2: Here's the performance analysis from ANTS Performance Profiler:
It is often good database practice to place lots of joins, or super-common joins, into views. ORMs don't let you ignore these facts, nor do they supplant the decades spent fine-tuning databases to do these kinds of things efficiently. Refactor those joins into a single view, or a couple of views if that makes more sense in the greater perspective of your application.
NHibernate should be optimizing the query down and reducing the data so that .NET only has to mess with the important parts. However, if those domain objects are just naturally large, that's still a lot of data. Also, if it's a really large result set in terms of rows returned, that's a lot of objects getting instantiated even if the DB can return the set quickly. Refactoring this query into a view that only returns the data you actually need would also reduce object instantiation overhead.
Another thought would be to not do a .ToList(). Return the enumerable and let your code lazily consume the data.
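A small sketch of that idea (NHibernate LINQ, method names hypothetical); the caveat is that the deferred query only runs when the caller enumerates, so the session must still be open at that point:
// Requires using NHibernate.Linq;

// Materializes the full result set immediately:
public IList<Book> GetBooksNow(ISession session) =>
    session.Query<Book>().Where(b => !b.IsDeleted).ToList();

// Defers execution until the caller enumerates (session must still be open):
public IEnumerable<Book> GetBooksLazily(ISession session) =>
    session.Query<Book>().Where(b => !b.IsDeleted);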
According to the profiling information, CreateQuery takes 45% of the total execution time. However, as you mentioned, the query took 0ms when you executed it directly. But this alone is not enough to say there is a performance problem, because:
You are running the code under a profiler, which has a significant impact on execution time.
A profiler slows down all the code being profiled but not the SQL execution time (that happens inside SQL Server), so everything else looks slow compared to the SQL statement.
So the ideal approach is to measure how long the entire code block takes, measure the time for the SQL query, and compare the two; if you do that you will probably end up with different values.
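A hedged sketch of that measurement, using the query variable from the question:
// Time the whole round trip (query compilation, SQL execution, materialization)
// and compare it with the duration SQL Profiler reports for the statement alone.
var sw = System.Diagnostics.Stopwatch.StartNew();
var result = libraryBookIdsWithShelfAndBookTagQuery.ToList();
sw.Stop();
Console.WriteLine($"Total: {sw.ElapsedMilliseconds} ms for {result.Count} rows");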
However, I'm not saying that the NH LINQ-to-SQL implementation is optimized for any query you come up with; there are other ways in NHibernate to deal with these situations, such as the QueryOver API, Criteria queries, HQL and finally raw SQL.
Where is that red line crossed between a query that is performant and
one that starts to struggle with materialization. What's going on under the hood?
This one is a pretty hard question, and without detailed knowledge of the NHibernate LINQ-to-SQL provider it's hard to give an accurate answer. You can always try the different mechanisms provided and see which one is best for a given scenario.
And would it help if this were a SP whose flat results I subsequently
manipulate in memory into the right shape?
Yes, using an SP would help things work pretty fast, but using SPs adds more maintenance burden to your code base.
You have a generic question, so I'll give you a generic answer :)
If you query data for reading (not for updating), try to use anonymous classes. The reason is that they are lighter to create and have no navigation properties. And you select only the data you need! That's a very important rule. So, try to replace your select with something like this:
select new
{
Book = new { book.Id, book.Name},
LibraryBook = new { libraryBook.Id, libraryBook.AnotherProperty},
BookTag = new { bookTag.Name}
}
Stored procedures are good when the query is complex and the LINQ provider generates ineffective code; then you can replace it with plain SQL or a stored procedure. That's not often the case and, I think, it's not your situation.
Run your SQL query. How many rows does it return? Is it the same as the number of results? Sometimes the LINQ provider generates code that selects many more rows in order to materialize one entity. This happens when an entity has a one-to-many relationship with another selected entity. For example:
class Book
{
int Id {get;set;}
string Name {get;set;}
ICollection<Tag> Tags {get;set;}
}
class Tag
{
string Name {get;set;}
Book Book {get;set;}
}
...
dbContext.Books.Where(o => o.Id == 1).Select(o=>new {Book = o, Tags = o.Tags}).Single();
I select only one book with Id = 1, but the provider will generate code that returns as many rows as there are Tags (Entity Framework does this).
Split a complex query into a set of simple ones and join on the client side (see the sketch below). Sometimes you have a complex query with many conditions and the resulting SQL becomes terrible. So you split your big query into simpler ones, get the results of each, and join/filter on the client side.
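A hedged sketch of that splitting approach, using the Book/Tag model from above (a Tags DbSet is assumed; navigation is via Tag.Book as shown in the classes):
// First simple query: the books, projected to just what is needed.
var books = dbContext.Books
    .Select(b => new { b.Id, b.Name })
    .ToList();

var bookIds = books.Select(b => b.Id).ToList();

// Second simple query: the tags belonging to those books.
var tagsByBook = dbContext.Tags
    .Where(t => bookIds.Contains(t.Book.Id))
    .Select(t => new { BookId = t.Book.Id, t.Name })
    .ToList()
    .ToLookup(t => t.BookId, t => t.Name);

// Join/filter the two small result sets on the client side.
var result = books
    .Select(b => new { b.Id, b.Name, Tags = tagsByBook[b.Id].ToList() })
    .ToList();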
In the end, I advise you to use an anonymous class as the result of the select.
Don’t use Linq’s Join. Navigate!
In that post you can see:
As long as there are proper foreign key constraints in the database, the navigation properties will be created automatically. It is also possible to manually add them in the ORM designer. As with all LINQ to SQL usage I think that it is best to focus on getting the database right and have the code exactly reflect the database structure. With the relations properly specified as foreign keys the code can safely make assumptions about referential integrity between the tables.
I agree 100% with the sentiments expressed by everyone else (with regards to there being two parts to the optimisation here, and the SQL execution being a big unknown and a likely cause of the poor performance).
Another part of the solution that might help you get some speed is to pre-compile your LINQ statements. I remember this being a huge optimisation on a tiny (high-traffic) project I worked on ages and ages ago... it seems like it would contribute to the client-side slowness you're seeing. Having said all that, I've not found a need to use them since... so heed everyone else's warnings first! :)
https://msdn.microsoft.com/en-us/library/vstudio/bb896297(v=vs.100).aspx
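For reference, a hedged sketch of the compiled-query shape from that documentation era (ObjectContext-based Entity Framework; the context and entity names are hypothetical, and the namespace is System.Data.Objects or System.Data.Entity.Core.Objects depending on the EF version):
// Compile the query once (typically into a static field) and reuse it; this
// avoids repeating the LINQ-expression-to-SQL translation on every call.
static readonly Func<MyEntities, int, IQueryable<Book>> BooksByShelf =
    CompiledQuery.Compile((MyEntities ctx, int shelfId) =>
        ctx.Books.Where(b => b.ShelfId == shelfId));

// Usage:
// var books = BooksByShelf(context, 42).ToList();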

Complex Linq-To-Entities query with deferred execution: prevent OrderBy being used as a subquery/projection

I built a dynamic LINQ-to-Entities query to support optional search parameters. It was quite a bit of work to get this producing performant SQL and I am NEARLY there, but I've stumbled across a big issue with OrderBy, which gets translated into a kind of projection/subquery containing the actual query, causing extremely poorly performing SQL. I can't find a solution to get this right. Maybe someone can help me out :)
I'll spare you the complete query for now as it is long and complex; I've translated it into a simple sample for better understanding:
I'm doing something like this:
// Start with the base query
var query = from a in db.Articles
            where a.UserId == 1
            select a;

// Apply some optional conditions
if (tagParam != null)
    query = query.Where(a => a.Tag == tagParam);
if (authorParam != null)
    query = query.Where(a => a.Author == authorParam);
// ... and so on ...

// I only want the 50 most recent articles, so I finally apply OrderBy and Take
query = query.OrderByDescending(a => a.Published);
query = query.Take(50);
The resulting SQL strangely wraps the OrderBy in a container query:
select top 50 Id, Published, Title, Content
from (select Id, Published, Title, Content
from Articles
where UserId = 1
and Author = @paramAuthor)
order by Published desc
Note that the TOP 50 also got moved to the outer query. If I only use Take(50), the TOP 50 statement is correctly applied to the inner query above (the outer query wouldn't even exist). Only when I use OrderBy does LINQ-to-Entities use this container query approach.
This causes a very bad execution plan where the inner query reads all articles matching the parameters from disk and passes them to the outer query, and only there are OrderBy and TOP processed. In my case, this can be hundreds of thousands of rows. I already tried moving the ORDER BY manually into the inner statement and executing that; it produces much better results, as the existing indexes allow SQL Server to easily find the top 50 rows in the right order without reading all rows from disk.
Is there any way I can get EF to append the order by clause to the inner query? Or any other trick to get this working right?
Any help would be greatly appreciated :)
Edit: As additional information, some tests with less complex queries showed that the optimizer normally handles such subquery scenarios well. In my scenario, the optimizer unfortunately fails on this and moves hundreds of thousands of rows through the query plan. But moving the OrderBy to the inner query solves it and the optimizer gets it right.
Edit 2: After a couple more hours of testing, it seems the issue with the wrong execution plan is a SQL Server issue that is not caused by the generated container query. While moving the ORDER BY and TOP clauses into the inner query did fix the issue initially, I can't reproduce this anymore; SQL Server has started using the bad execution plan here too (while the data in the DB remained unchanged). Moving the ORDER BY clause might have caused SQL Server to take other statistics into account, but it seems it was not due to the better/cleaner query design. However, I still want to know why EF uses a container query here and whether I can influence this behaviour. Even if it doesn't improve performance, it would at least make debugging easier if the generated EF queries were more straightforward and less convoluted.

Do Views degrade the EF query performance?

I was looking for some tips to improve my Entity Framework query performance and came across this useful article.
The author of this article mentioned following:
09 Avoid using Views
Views degrade the LINQ query performance costly. These are slow in performance and impact the performance greatly. So avoid using views in LINQ to Entities.
I am only familiar with the database meaning of "view", and I don't understand this statement: which views does he mean?
It depends, though rarely to a significant degree.
Let's say we've a view like:
CREATE VIEW TestView
AS
Select A.x, B.y, B.z
FROM A JOIN B on A.id = B.id
And we create an entity mapping for this.
Let's also assume that B.id is bound so that it is non-nullable and has a foreign key relationship with A.id - that is, whenever there's a B row, there is always at least one corresponding A.
Now, we could do something like from t in context.TestView where t.x == 3 instead of from a in context.A join b in context.B on a.id equals b.id where a.x == 3 select new {a.x, b.y, b.z}.
We can expect the former to be converted to SQL marginally faster, because it's a marginally simpler query (from both the Linq and SQL perspective).
We can expect the latter to be converted from an SQL query to a SQLServer (or whatever) internal query marginally faster.
We can expect that internal query to be pretty much identical, unless something went a bit strange. As such, we'd expect the performance at that point to be identical.
In all, there isn't very much to choose between them. If I had to bet on one, I'd bet on that using the view being slightly faster especially on first call, but I wouldn't bet a lot on it.
Now let's consider (from t in context.TestView select t.z).Distinct() vs (from b in context.B select b.z).Distinct().
Both of these should turn into a pretty simple SELECT DISTINCT z FROM ....
Both of these should turn into a table scan or index scan only of table B.
The first might not (flaw in the query plan), but that would be surprising. (A quick check on a similar view does find SQLServer ignoring the irrelevant table).
The first could take slightly longer to produce a query plan for, since the fact that the join on A.id is irrelevant has to be deduced. But then, database servers are good at that sort of thing; it's a class of computer-science problems that has had decades of work done on it.
If I had to bet on one, I'd bet on the view making things very slightly slower, though I'd also bet that the difference is so slight it disappears. An actual test with these two sorts of query found the two within the same margin of difference (i.e. the ranges of times for the two overlapped).
The effect in this case on the production of the SQL from the linq query will be nil (they're effectively the same at that point, but with different names).
Lets consider if we had a trigger on that view, so that inserting or deleting carried out the equivalent inserts or deletes. Here we will gain slightly from using one SQL query rather than two (or more), and it's easier to ensure it happens in a single transaction. So a slight gain for views in this case.
Now, let's consider a much more complicated view:
CREATE VIEW Complicated
AS
Select A.x, B.x as y, C.z, COALESCE(D.f, D.g, E.h) as foo
FROM
A JOIN B on A.r = B.f + 2
JOIN C on COALESCE(A.g, B.x) = C.x
JOIN D on D.flag | C.flagMask <> 0
WHERE EXISTS (SELECT null from G where G.x + G.y = A.bar AND G.deleted = 0)
AND A.deleted = 0 AND B.deleted = 0
We could do all of this at the linq level. If we did, it would probably be a bit expensive as query production goes, though that is rarely the most expensive part of the overall hit on a linq query, though compiled queries may balance this out.
I'd lean toward the view being the more efficient approach, though I'd profile if that was my only reason for using the view.
Now let's consider:
CREATE VIEW AllAncestry
AS
WITH recurseAncetry (ancestorID, descendantID)
AS
(
SELECT parentID, childID
FROM Parentage
WHERE parentID IS NOT NULL
UNION ALL
SELECT ancestorID, childID
FROM recurseAncetry
INNER JOIN Parentage ON parentID = descendantID
)
SELECT DISTINCT (cast(ancestorID as bigint) * 0x100000000 + descendantID) as id, ancestorID, descendantID
FROM recurseAncetry
Conceptually, this view does a large number of selects; doing a select, and then recursively doing a select based on the result of that select and so on until it has all the possible results.
In actual execution, this is converted into two table scans and a lazy spool.
The linq-based equivalent would be much heavier; really you'd be better off either calling into the equivalent raw SQL, or loading the table into memory and then producing the full graph in C# (but note that this would be wasteful for queries that don't need everything).
In all, using a view here is going to be a big saving.
In summary: using views generally has a negligible performance impact, and that impact can go either way. Using views with triggers can give a slight performance win and make it easier to ensure data integrity by forcing it to happen in a single transaction. Using views with a CTE can be a big performance win.
Non-performance reasons for using or avoiding views though are:
The use of views hides the relationship between the entities related to that view and the entities related to the underlying tables from your code. This is bad as your model is now incomplete in this regard.
If the views are used in other applications apart from yours, you will be more consistent with those other applications, take advantage of already tried-and-tested code, and automatically deal with changes to the view's implementation.
That's some pretty serious micro-optimisation in that article.
I wouldn't take it as gospel personally, having worked with EF quite a bit.
Sure those things can matter, but generally speaking, it's pretty quick.
If you've got a complicated view, and then you're performing further LINQ on that view, then sure, it could probably cause some slow performance; I wouldn't bet on it though.
The article doesn't even have any benchmarks!
If performance is a serious issue for your program, narrow down which queries are slow and post them here, see if the SO community can help optimise the query for you. Much better solution than all the micro-optimisation if you ask me.

Why does LINQ-to-Entities put this query in a sub-select?

I have the following LINQ query:
var queryGroups = (from p in db.cl_contact_event
select new Groups { inputFileName = p.input_file_name }).Distinct();
Which translates to the following when run:
SELECT
[Distinct1].[C1] AS [C1],
[Distinct1].[input_file_name] AS [input_file_name]
FROM ( SELECT DISTINCT
[Extent1].[input_file_name] AS [input_file_name],
1 AS [C1]
FROM [mel].[cl_contact_event] AS [Extent1]
) AS [Distinct1]
Now I'm pretty sure that the reason there is a sub-select is that I have the base LINQ query surrounded by () and then call .Distinct(), but I don't know enough about LINQ to be sure. If that's indeed the case, is there a way to restructure/code my query so that a sub-select doesn't occur?
I know that it probably seems that I'm just nit-picking here but I'm just curious.
In this case, I suspect that the actual root cause of the subquery is the anonymous type constructor. Because you are not selecting a known entity, but rather an arbitrary object constructed from other entity values, the EF parser needs to make sure it can produce the exact set of fields, whether from a single table, joined tables, calculated fields, other sub-queries, etc. The expression tree parser is very good at writing SQL statements from LINQ queries whenever possible, but it's not omniscient. It processes queries in a systematic way that will always produce correct results (in the sense that you get what you asked for), though not always optimal ones.
As far as rewriting the query to eliminate the sub-select, first off: I don't see an obvious way to do so that eliminates the anonymous type and produces correct results. More importantly, though, I wouldn't bother. Modern SQL servers like Sybase are very smart, often smarter than the developer, and very good at producing an optimal query plan from a query. Besides that, EF loves sub-queries, because they are a very good way to write complex queries in an automated fashion. You will often find them even when your LINQ query did not appear to use them. Trying to eliminate them all from your queries will quickly become an exercise in futility.
I wouldn't worry about this particular situation at all. SQL Server (and most likely any enterprise database) will optimize away the outer Select statement anyway. I would theorize that the reason this SQL statement is generated is because this is the most generic and reusable statement. From my experience, this always happens on Distinct().
