The following (cut down) code excerpt is a Linq-To-Entities query that results in SQL (via ToTraceString) that is much slower than a hand crafted query. Am I doing anything stupid, or is Linq-to-Entities just bad at optimizing queries?
I have a ToList() at the end of the query as I need to execute it before using it to build an XML data structure (which was a whole other pain).
var result = (from mainEntity in entities.Main
              where (mainEntity.Date >= today) && (mainEntity.Date <= tomorrow) && (!mainEntity.IsEnabled)
              select new
              {
                  Id = mainEntity.Id,
                  Sub =
                      from subEntity in mainEntity.Sub
                      select new
                      {
                          Id = subEntity.Id,
                          FirstResults =
                              from firstResultEntity in subEntity.FirstResult
                              select new
                              {
                                  Value = firstResultEntity.Value,
                              },
                          SecondResults =
                              from secondResultEntity in subEntity.SecondResult
                              select new
                              {
                                  Value = secondResultEntity.Value,
                              },
                          SubSub =
                              from subSubEntity in entities.SubSub
                              where (subEntity.Id == subSubEntity.MainId) && (subEntity.Id == subSubEntity.SubId)
                              select new
                              {
                                  Name = (from name in entities.Name
                                          where subSubEntity.NameId == name.Id
                                          select name.Name).FirstOrDefault()
                              }
                      }
              }).ToList();
While working on this, I've also had some real problems with dates. When I tried to include returned dates in my data structure, I got internal error "1005".
Just as a general observation and not based on any practical experience with Linq-To-Entities (yet): having four nested subqueries inside a single query doesn't look like it's awfully efficient and speedy to begin with.
I think your very broad statement about the (lack of) quality of the SQL generated by Linq-to-Entities is not warranted - and you don't really back it up with much evidence, either.
Several well respected folks including Rico Mariani (MS Performance guru) and Julie Lerman (author of "Programming EF") have been showing in various tests that in general and overall, the Linq-to-SQL and Linq-to-Entities "engines" aren't really all that bad - they achieve overall at least 80-95% of the possible peak performance. Not every .NET app dev can achieve this :-)
Is there any way for you to rewrite that query or change the way you retrieve the bits and pieces that make up its contents?
Marc
Have you tried not materializing the result immediately by calling .ToList()? I'm not sure it will make a difference, but you might see improved performance if you iterate over the result instead of calling .ToList() ...
foreach (var r in result)
{
// build your XML
}
Also, you could try breaking up the one huge query into separate queries and then iterating over the results. Sending everything in one big gulp might be the issue.
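For example, a rough sketch of what splitting it up might look like, using the entity names from the question (the entities.Sub set and its MainId foreign key are assumptions on my part, and may not match the actual model):
// Query 1: just the main entities in the date window.
var mains = (from mainEntity in entities.Main
             where mainEntity.Date >= today && mainEntity.Date <= tomorrow && !mainEntity.IsEnabled
             select new { mainEntity.Id }).ToList();

// Query 2..n: fetch the children per main entity with a much flatter query.
foreach (var main in mains)
{
    var subs = (from subEntity in entities.Sub
                where subEntity.MainId == main.Id
                select new { subEntity.Id }).ToList();
    // build the XML fragment for this main entity here
}
The trade-off is more round trips in exchange for simpler, easier-to-optimize SQL; whether that wins depends on your data volumes.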
Related
I have a very large amount of data that I need to gather for a report I am generating. All of this data comes from a database that I am connected to via Entity Framework. I have tried writing this query a few different ways, but no matter what I do it seems to be slow.
Overall, I am curious whether it is more efficient to have a LINQ query with subqueries, or to do a foreach and then query for those values inside the loop.
Additional information about the DB: a lot of the subqueries/loop iterations would be hitting the largest tables in the database.
Example code:
var b = (from brk in entities.Brokers
         join pcy in Policies on brk.BrkId equals pcy.PcyBrkId
         where pcy.DateStamp > twoYearsAgo
         select new returnData
         {
             BrkId = brk.BrkId,
             currentPrem = (from p in Policies
                            where p.PcyBrkId == brk.BrkId && p.InvDate > startDate && p.InvDate < endDate
                            select p.Premium).Sum(),
             // 5 more similar subqueries
         }).GroupBy(x => x.BrkId).Select(x => x.FirstOrDefault()).ToList();
OR
var b = (from brk in entities.Brokers
         join pcy in Policies on brk.BrkId equals pcy.PcyBrkId
         where pcy.DateStamp > twoYearsAgo
         select new returnData
         {
             BrkId = brk.BrkId
         }).GroupBy(x => x.BrkId).Select(x => x.FirstOrDefault()).ToList();

foreach (var brk in b)
{
    // grab data from subqueries here
}
One additional detail: once I have the primary information, I may be able to filter out some records, reducing the number of results to process in the foreach.
First of all, matters of performance always warrant profiling, no matter how reasonable or logical one or another solution might seem.
That said, when working with a database, the fewer trips you make to the database the better. So in your case it might be more efficient to have one single SQL query that retrieves a big chunk of data over the network, and then to process it locally with loops and whatnot. This guideline tends to be the optimal solution in most cases.
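For instance, a minimal sketch of that approach, using property names borrowed from the question's snippet (assumptions on my part):
// One round trip: pull the raw policy rows for the period.
var rows = (from pcy in Policies
            where pcy.DateStamp > twoYearsAgo
            select new { pcy.PcyBrkId, pcy.InvDate, pcy.Premium }).ToList();

// Aggregate locally; no further trips to the database.
var premByBroker = rows
    .Where(r => r.InvDate > startDate && r.InvDate < endDate)
    .GroupBy(r => r.PcyBrkId)
    .ToDictionary(g => g.Key, g => g.Sum(r => r.Premium));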
All of this obviously depends on how big that data is, how much network bandwidth you have, and how fast and well-tuned your database is.
Side note: in general, if you work with big or complex (intertwined) data, it's better to avoid Entity Framework entirely, especially when you're concerned about performance. Not sure if that's an option for you.
I have often found that if I have too many joins in a Linq query (whether using Entity Framework or NHibernate) and/or the shape of the resulting anonymous class is too complex, Linq takes a very long time to materialize the result set into objects.
This is a generic question, but here's a specific example using NHibernate:
var libraryBookIdsWithShelfAndBookTagQuery = (from shelf in session.Query<Shelf>()
join sbttref in session.Query<ShelfBookTagTypeCrossReference>() on
shelf.ShelfId equals sbttref.ShelfId
join bookTag in session.Query<BookTag>() on
sbttref.BookTagTypeId equals (byte)bookTag.BookTagType
join btbref in session.Query<BookTagBookCrossReference>() on
bookTag.BookTagId equals btbref.BookTagId
join book in session.Query<Book>() on
btbref.BookId equals book.BookId
join libraryBook in session.Query<LibraryBook>() on
book.BookId equals libraryBook.BookId
join library in session.Query<LibraryCredential>() on
libraryBook.LibraryCredentialId equals library.LibraryCredentialId
join lcsg in session
.Query<LibraryCredentialSalesforceGroupCrossReference>()
on library.LibraryCredentialId equals lcsg.LibraryCredentialId
join userGroup in session.Query<UserGroup>() on
lcsg.UserGroupOrganizationId equals userGroup.UserGroupOrganizationId
where
shelf.ShelfId == shelfId &&
userGroup.UserGroupId == userGroupId &&
!book.IsDeleted &&
book.IsDrm != null &&
book.BookFormatTypeId != null
select new
{
Book = book,
LibraryBook = libraryBook,
BookTag = bookTag
});
// add a couple of where clauses, then...
var result = libraryBookIdsWithShelfAndBookTagQuery.ToList();
I know it's not the query execution, because I put a sniffer on the database and I can see that the query is taking 0ms, yet the code is taking about a second to execute that query and bring back all of 11 records.
So yeah, this is an overly complex query, having 8 joins between 9 tables, and I could probably restructure it into several smaller queries. Or I could turn it into a stored procedure - but would that help?
What I'm trying to understand is, where is that red line crossed between a query that is performant and one that starts to struggle with materialization? What's going on under the hood? And would it help if this were a SP whose flat results I subsequently manipulate in memory into the right shape?
EDIT: in response to a request in the comments, here's the SQL emitted:
SELECT DISTINCT book4_.bookid AS BookId12_0_,
libraryboo5_.librarybookid AS LibraryB1_35_1_,
booktag2_.booktagid AS BookTagId15_2_,
book4_.title AS Title12_0_,
book4_.isbn AS ISBN12_0_,
book4_.publicationdate AS Publicat4_12_0_,
book4_.classificationtypeid AS Classifi5_12_0_,
book4_.synopsis AS Synopsis12_0_,
book4_.thumbnailurl AS Thumbnai7_12_0_,
book4_.retinathumbnailurl AS RetinaTh8_12_0_,
book4_.totalpages AS TotalPages12_0_,
book4_.lastpage AS LastPage12_0_,
book4_.lastpagelocation AS LastPag11_12_0_,
book4_.lexilerating AS LexileR12_12_0_,
book4_.lastpageposition AS LastPag13_12_0_,
book4_.hidden AS Hidden12_0_,
book4_.teacherhidden AS Teacher15_12_0_,
book4_.modifieddatetime AS Modifie16_12_0_,
book4_.isdeleted AS IsDeleted12_0_,
book4_.importedwithlexile AS Importe18_12_0_,
book4_.bookformattypeid AS BookFor19_12_0_,
book4_.isdrm AS IsDrm12_0_,
book4_.lightsailready AS LightSa21_12_0_,
libraryboo5_.bookid AS BookId35_1_,
libraryboo5_.libraryid AS LibraryId35_1_,
libraryboo5_.externalid AS ExternalId35_1_,
libraryboo5_.totalcopies AS TotalCop5_35_1_,
libraryboo5_.availablecopies AS Availabl6_35_1_,
libraryboo5_.statuschangedate AS StatusCh7_35_1_,
booktag2_.booktagtypeid AS BookTagT2_15_2_,
booktag2_.booktagvalue AS BookTagV3_15_2_
FROM shelf shelf0_,
shelfbooktagtypecrossreference shelfbookt1_,
booktag booktag2_,
booktagbookcrossreference booktagboo3_,
book book4_,
librarybook libraryboo5_,
library librarycre6_,
librarycredentialsalesforcegroupcrossreference librarycre7_,
usergroup usergroup8_
WHERE shelfbookt1_.shelfid = shelf0_.shelfid
AND booktag2_.booktagtypeid = shelfbookt1_.booktagtypeid
AND booktagboo3_.booktagid = booktag2_.booktagid
AND book4_.bookid = booktagboo3_.bookid
AND libraryboo5_.bookid = book4_.bookid
AND librarycre6_.libraryid = libraryboo5_.libraryid
AND librarycre7_.librarycredentialid = librarycre6_.libraryid
AND usergroup8_.usergrouporganizationid =
librarycre7_.usergrouporganizationid
AND shelf0_.shelfid = @p0
AND usergroup8_.usergroupid = @p1
AND NOT ( book4_.isdeleted = 1 )
AND ( book4_.isdrm IS NOT NULL )
AND ( book4_.bookformattypeid IS NOT NULL )
AND book4_.lightsailready = 1
EDIT 2: Here's the performance analysis from ANTS Performance Profiler (screenshot not reproduced here).
It is often good database practice to put lots of joins, or super-common joins, into views. ORMs don't let you ignore these facts, nor do they replace the decades spent fine-tuning databases to do these kinds of things efficiently. Refactor those joins into a single view, or a couple of views if that makes more sense in the greater perspective of your application.
NHibernate should be optimizing the query down and reducing the data so that .NET only has to mess with the important parts. However, if those domain objects are just naturally large, that's still a lot of data. Also, if it's a really large result set in terms of rows returned, that's a lot of objects getting instantiated even if the DB is able to return the set quickly. Refactoring this query into a view that only returns the data you actually need would also reduce object-instantiation overhead.
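As a sketch of the view idea: NHibernate doesn't care whether the mapped "table" is actually a view, so you can map a slim class to a view that does the eight joins server-side (all of the names below are hypothetical):
// Mapped to a database view, e.g. vw_LibraryBookTags, that performs
// the joins and exposes only the columns the caller actually needs.
public class LibraryBookTagRow
{
    public virtual int ShelfId { get; set; }
    public virtual int UserGroupId { get; set; }
    public virtual int BookId { get; set; }
    public virtual int LibraryBookId { get; set; }
    public virtual string BookTagValue { get; set; }
}

// Queried exactly like a table-backed entity:
var rows = session.Query<LibraryBookTagRow>()
                  .Where(r => r.ShelfId == shelfId && r.UserGroupId == userGroupId)
                  .ToList();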
Another thought would be to not do a .ToList(). Return the enumerable and let your code lazily consume the data.
According to the profiling information, CreateQuery takes 45% of the total execution time. However, as you mentioned, the query took 0ms when you executed it directly. That alone is not enough to conclude there is a performance problem, because:
You are running the code under the profiler, which has a significant impact on execution time.
The profiler affects all of the code being profiled, but not the SQL execution time (because that happens inside SQL Server), so everything else looks slow compared to the SQL statement.
So the ideal approach is to measure how long the entire code block takes to execute, measure the time for the SQL query separately, and compare; if you do that you will probably end up with different values.
That said, I'm not claiming the NHibernate LINQ provider is optimized for any query you come up with; there are other ways in NHibernate to deal with those situations, such as the QueryOver API, Criteria queries, HQL and, finally, plain SQL.
Where is that red line crossed between a query that is performant and one that starts to struggle with materialization? What's going on under the hood?
This is a pretty hard question, and without detailed knowledge of the NHibernate LINQ provider's internals it's hard to give an accurate answer. You can always try the different mechanisms provided and see which one is best for a given scenario.
And would it help if this were a SP whose flat results I subsequently manipulate in memory into the right shape?
Yes, using an SP would help things run pretty fast, but SPs add maintenance overhead to your code base.
You asked a generic question, so I'll give you a generic answer :)
If you query data for reading (not for updating), use anonymous classes. The reason: they are lighter to create and they have no navigation properties, so you select only the data you need! That's a very important rule. So, try to replace your select with something like this:
select new
{
Book = new { book.Id, book.Name},
LibraryBook = new { libraryBook.Id, libraryBook.AnotherProperty},
BookTag = new { bookTag.Name}
}
Stored procedures are good when the query is complex and the LINQ provider generates ineffective code, in which case you can replace it with plain SQL or a stored procedure. That's not often the case and, I think, it's not your situation.
Run your SQL query directly. How many rows does it return? Is it the same number as your result? Sometimes the LINQ provider generates SQL that selects many more rows than needed in order to materialize one entity. This happens when an entity has a one-to-many relationship with another selected entity. For example:
class Book
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Tag> Tags { get; set; }
}
class Tag
{
    public string Name { get; set; }
    public Book Book { get; set; }
}
...
dbContext.Books.Where(o => o.Id == 1).Select(o => new { Book = o, Tags = o.Tags }).Single();
I select only one book, with Id = 1, but the provider will generate SQL that returns as many rows as the book has Tags (Entity Framework does this).
Split a complex query into a set of simple ones and join on the client side. Sometimes you have a complex query with many conditions and the resulting SQL becomes terrible. So you split your big query into simpler ones, get the results of each, and join/filter on the client side, as in the sketch below.
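A minimal sketch of that, reusing the Book/Tag example from above (the dbContext set names are assumed):
// Two simple queries, each producing trivial SQL...
var book = dbContext.Books
    .Where(b => b.Id == 1)
    .Select(b => new { b.Id, b.Name })
    .Single();

var tags = dbContext.Tags
    .Where(t => t.Book.Id == 1)
    .Select(t => new { t.Name })
    .ToList();

// ...combined on the client side.
var result = new { Book = book, Tags = tags };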
Finally, I advise you to use an anonymous class as the result of your select.
Don’t use Linq’s Join. Navigate!
In that post you can see:
As long as there are proper foreign key constraints in the database, the navigation properties will be created automatically. It is also possible to manually add them in the ORM designer. As with all LINQ to SQL usage I think that it is best to focus on getting the database right and have the code exactly reflect the database structure. With the relations properly specified as foreign keys the code can safely make assumptions about referential integrity between the tables.
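To illustrate the difference, reusing the Book/Tag shapes from the example further up (a sketch of mine, not the post's own code):
// Explicit join, which the post advises against:
var withJoin = from b in dbContext.Books
               join t in dbContext.Tags on b.Id equals t.Book.Id
               select new { b.Name, Tag = t.Name };

// Navigation property instead; the provider generates the join itself:
var withNav = from b in dbContext.Books
              from t in b.Tags
              select new { b.Name, Tag = t.Name };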
I agree 100% with the sentiments expressed by everyone else (with regard to there being two parts to the optimisation here, and the SQL execution being a big unknown and a likely cause of poor performance).
Another part of the solution that might help you get some speed is to pre-compile your LINQ statements. I remember this being a huge optimisation on a tiny (high-traffic) project I worked on ages and ages ago... it seems like it would address the client-side slowness you're seeing. Having said all that, though, I've not found a need to use them since... so heed everyone else's warnings first! :)
https://msdn.microsoft.com/en-us/library/vstudio/bb896297(v=vs.100).aspx
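The linked MSDN page covers Entity Framework's CompiledQuery (EF 4, ObjectContext). A sketch of the shape it takes - the context and entity names here are made up:
// System.Data.Objects.CompiledQuery, from the MSDN link above.
// Compile once, typically into a static field; later executions skip
// the LINQ-to-Entities translation step that CreateQuery spends time in.
static readonly Func<MyEntities, int, IQueryable<User>> usersByGroup =
    CompiledQuery.Compile((MyEntities ctx, int groupId) =>
        ctx.Users.Where(u => u.UserGroupId == groupId));

// Invoked like an ordinary delegate:
var users = usersByGroup(context, userGroupId).ToList();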
I've got a method which returns IEnumerable<User>, for which I have been using Linq / Entity Framework / SQL Server to return results.
I came across a difficult conditional scenario which was much more easily solved in C# by iterating on the web server (at the end of a chain of LINQ statements, just before returning the data to the client):
public IEnumerable<User> ReturnUsersNotInRoles()
{
    List<User> z = (from users
                    //...many joins..conditions...
                    ).Distinct().Include(x => x.RoleUserLinks).ToList();

    List<User> list = new List<User>();
    foreach (User user in z)
    {
        bool shouldReturnUser = true;
        foreach (var rul in user.RoleUserLinks)
        {
            if (rul.LinkStatusID == (byte)Enums.LinkStatus.Added)
                shouldReturnUser = false;
        }
        if (shouldReturnUser)
            list.Add(user);
    }
    return list;
}
Question: In C# is there a more performant / less memory overhead way of doing this?
Am only bringing back the entities I need from Linq. There is no N+1 scenario. Performance currently is excellent.
I realise that ideally I'd be writing this in SQL / Linq, as then SQL Server would do its magic and serve me the data quickly. However, I'm balancing that against a query that would be very hard to understand, whereas the iterating C# version is easy to understand and currently performs excellently.
How about this:
public IEnumerable<User> ReturnUsersNotInRoles()
{
var z = (from users
//...many joins..conditions...
).Distinct().Include(x => x.RoleUserLinks);
var addedLinkStatusID = (int)Enums.LinkStatus.Added;
return z.Where(user =>
false == user.RoleUserLinks.Any(link => link.LinkStatusID == addedLinkStatusID))
.ToList();
}
This should run completely as a SQL query - you could make the first part (z) materialize by adding a .ToList() at the end of the line that defines it.
By the way, regarding your question "In C# is there a more performant / less memory overhead way of doing this?" - well, firstly you can add a break statement right after you set shouldReturnUser = false;.
Secondly, I prefer using the LINQ primitives whenever possible whether or not I'm working with a database:
When used correctly, an implementation using LINQ methods will probably be as fast as or faster than anything you can write.
More importantly, they promote functional, stateless programming over stateful, bug-prone programming.
Also, if you are working with a database you have the bonus of being able to decide whether or not you want the code to run as a SQL query - all you have to do is decide where to materialize.
Your loop is equivalent to the following LINQ query - I find it easier to understand than the loop and it allows for complete execution on the server when combined with the first part of the query.
var linkStatusAdded = (Byte)Enums.LinkStatus.Added;
return z.Where(user => user.RoleUserLinks
.All(rul => rul.LinkStatusID != linkStatusAdded))
.ToList();
Is there a way to get the count of a result set but return only the top 5 records, while making just one DB hit instead of two (one for the count and a second for the data)?
There is not a particularly good way to do this in Entity Framework, at least as of v4. @Tobias writes a single LINQ query, but his suspicions are correct: you'll see multiple queries roll by in SQL Profiler.
Ignoring EF for a minute, this is a relatively complicated problem for SQL Server. Well, it's complicated once your data size gets large or your query gets complicated. You can get a flavor for what's involved here.
With that said, I wouldn't worry about it being 2 queries just yet. Don't optimize until you know it is an actual performance problem. You'll likely end up working around EF, maybe using the EF extensions, creating a stored proc that can take advantage of windowed functions and CTEs, or just returning two result sets from a single procedure.
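If it ever does become a problem, one work-around sketch (hypothetical table/column names, EF 4 ObjectContext API) is to drop to SQL and let a window function return the count alongside the rows:
// Flat shape for the raw result; columns map to properties by name.
public class PagedRow
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int TotalCount { get; set; }
}

// One round trip: the top 5 rows, each row carrying the total count
// of all matching rows (TOP is applied after the window function).
var rows = context.ExecuteStoreQuery<PagedRow>(
    @"SELECT TOP 5 Id, Name, COUNT(*) OVER () AS TotalCount
      FROM SomeTable
      ORDER BY Id").ToList();

var total = rows.Count > 0 ? rows[0].TotalCount : 0;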
This little query should do the trick (I'm not sure if that's really just one physical query though, and it could be that the grouping is done in the code rather than in the DB), but it's definitely more convenient:
var obj = (from x in entities.SomeTable
let item = new { N = 1, x }
group item by item.N into g
select new { Count = g.Count(), First = g.Take(5) }).FirstOrDefault();
Nonetheless, just doing this in two queries will definitely be much faster (especially if you define them in one stored procedure, as proposed here).
I have written what I thought to be a pretty solid Linq statement but this is getting 2 to 5 second wait times on execution. Does anybody have thoughts about how to speed this up?
t.states = (from s in tmdb.tmZipCodes
where zips.Contains(s.ZipCode) && s.tmLicensing.Required.Equals(true)
group s by new Licensing {
stateCode = s.tmLicensing.StateCode,
stateName = s.tmLicensing.StateName,
FIPSCode = s.tmLicensing.FIPSCode,
required = (bool)s.tmLicensing.Required,
requirements = s.tmLicensing.Requirements,
canWorkWhen = s.tmLicensing.CanWorkWhen,
appProccesingTime = (int) s.tmLicensing.AppProcessingTime
}
into state
select state.Key).ToList();
I've changed it to a two-stage query which runs almost instantaneously, by doing a distinct query to make my grouping work, but it seems a little counterintuitive to me that it runs so much faster than a single query.
I'm not sure why it's taking so long, but it might help to have a look at LINQPad; it will show you the actual query being generated and help you optimize.
Also, it might not be the actual query that's taking a long time; it might be the query generation. I've found that the longest part is when the LINQ is being converted to the SQL statement.
You could possibly use a compiled query to speed up the SQL generation process. A little information can be found on 3devs. I'm not trying to promote my blog entry, but I think it fits.
I would hope it's irrelevant, but
s.tmLicensing.Required.Equals(true)
looks an awful lot (to me) like:
s.tmLicensing.Required
assuming it's a Boolean property.
Given that you know it's true, I don't see much point in having it in the grouping either.
Having said those things, John Boker is absolutely right on both counts: find out whether it's the SQL or LINQ, and then attack the relevant bit.
You don't seem to be using the group, just selecting the key at the end. So, does this do the same thing that you want?
t.states = (from s in tmdb.tmZipCodes
where zips.Contains(s.ZipCode) && s.tmLicensing.Required.Equals(true)
select new Licensing {
stateCode = s.tmLicensing.StateCode,
stateName = s.tmLicensing.StateName,
FIPSCode = s.tmLicensing.FIPSCode,
required = (bool)s.tmLicensing.Required,
requirements = s.tmLicensing.Requirements,
canWorkWhen = s.tmLicensing.CanWorkWhen,
appProccesingTime = (int) s.tmLicensing.AppProcessingTime
}).Distinct().ToList();
Also bear in mind that LINQ does not execute a query until it has to. So if you build your query in two statements, it will not execute against the data context (in this case SQL Server) until the call to ToList. When the query does run, it will merge the multiple queries into one and execute that.
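In other words (a sketch using the question's entities):
// Neither statement touches the database; each one just extends
// the expression tree.
var zipsInScope = tmdb.tmZipCodes.Where(s => zips.Contains(s.ZipCode));
var licensed = zipsInScope.Where(s => s.tmLicensing.Required == true);

// The merged query is sent to SQL Server only here:
var results = licensed.ToList();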