Basic idea in pseudocode
Select(a, b) => new Tuple<List<Item1>, List<Item2>>(a, b)
I am trying to accomplish this with:
A single query to the db
Obviously using LINQ (query or method syntax)
Here are the two classes involved
public class Bundle
{
public Guid Id { get; set; }
public string Name { get; set; }
public HashSet<Inventory> Inventories { get; set; }
}
public class Inventory
{
public Guid Id { get; set; }
public string Name { get; set; }
public int Stock { get; set; }
}
Right now all I can think of is
using (var context = new MyEntities())
{
return new Tuple<IEnumerable<Inventory>, IEnumerable<Bundle>>(context.Inventories.OrderBy(a => a.Stock).ToList()
, context.Bundles.Include(b => b.Inventories).OrderBy(c => c.Name).ToList());
}
However, this would hit the database twice.
I know UNION is used to combine result sets from two different queries, but the two queries must have the same number of columns, so I'm assuming it's best used when selecting the same kind of data.
How can I select data from two different tables into two separate lists with only ONE hit on the db?
If you want two result sets, you can do it by issuing two queries. This can be done in a single database call without issue, but it won't magically be divided into the two sets of objects you are interested in.
In fact, asking more than one question and getting more than one result set back is very common when the cost of establishing a connection (instantiation cost, latency cost, etc.) is great enough to warrant it. I have done it myself in a stored procedure, asking for everything a page needs in one call.
But if performance is the key issue, caching is also very common. And if these are drop-down lists, or something else where the list requested is small and does not change often, you can even pull it into memory when the application starts and let it sit on the web server, so you are not making the database trip.
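For example, here is a minimal sketch of the "load once and hold it in memory" idea, using the Inventory entity and MyEntities context from the question (the cache class itself is hypothetical):
public static class InventoryCache
{
    // Loaded lazily on first access, then reused for the lifetime of the application.
    private static readonly Lazy<List<Inventory>> _inventories =
        new Lazy<List<Inventory>>(LoadInventories);

    public static List<Inventory> Inventories { get { return _inventories.Value; } }

    private static List<Inventory> LoadInventories()
    {
        using (var context = new MyEntities())
        {
            return context.Inventories.OrderBy(i => i.Stock).ToList();
        }
    }
}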
I am not fond of LINQ to SQL, as it creates a mess for DBAs, but you can do it with something like this (just one example; any method where you can chain commands will work):
var connString = "{conn string here}";
var commandString = "select * from tableA; select * from tableB";
var conn = new SqlConnection(connString);
var command = new SqlCommand(commandString, conn);
try {
conn.Open();
var reader = command.ExecuteReader();
// work with the first result set here, then call reader.NextResult() to move to the second
}
finally {
conn.Dispose();
}
I have not filled in all of the details, but you can do this with any number of commands. Once again, if the information does not change, consider a single hit and holding in memory (caching through programming) or using some other type of cache mechanism.
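To flesh out those details, here is a minimal sketch of reading both result sets back into two separate lists with a data reader (the table and column names are only illustrative, borrowed from the Inventory and Bundle classes in the question):
var inventories = new List<Inventory>();
var bundles = new List<Bundle>();
using (var conn = new SqlConnection(connString))
using (var command = new SqlCommand(
    "select Id, Name, Stock from Inventory order by Stock;" +
    "select Id, Name from Bundle order by Name;", conn))
{
    conn.Open();
    using (var reader = command.ExecuteReader())
    {
        // First result set: inventories.
        while (reader.Read())
        {
            inventories.Add(new Inventory
            {
                Id = reader.GetGuid(0),
                Name = reader.GetString(1),
                Stock = reader.GetInt32(2)
            });
        }
        // Second result set: bundles.
        reader.NextResult();
        while (reader.Read())
        {
            bundles.Add(new Bundle { Id = reader.GetGuid(0), Name = reader.GetString(1) });
        }
    }
}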
If you are worried about performance, I don't think that reducing the number of queries is the way to go. In fact, if you want fine-grained optimizations, LINQ isn't the most appropriate tool either.
That being said, you could make the two different objects match the same interface/columns, filling in dummy properties for those missing in the other type. This should theoretically be translated to a SQL union containing all of the columns.
var union = context.Inventories
    .Select(i => new { i.Id, i.Name, i.Stock, Inventories = (HashSet<Inventory>)null })
    .Concat(context.Bundles.Select(b => new { b.Id, b.Name, Stock = 0, b.Inventories }));
Note that in this case Concat is preferred over Union, as it doesn't alter the order of your rows and allows duplicate rows.
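If you do still need two separate lists afterwards, one option (only a sketch, not tested against a particular EF version) is to project a discriminator column instead of the navigation property and split the combined result on the client:
var combined = context.Inventories
    .Select(i => new { i.Id, i.Name, i.Stock, IsBundle = false })
    .Concat(context.Bundles.Select(b => new { b.Id, b.Name, Stock = 0, IsBundle = true }))
    .ToList();   // one round trip

var inventories = combined.Where(x => !x.IsBundle)
    .Select(x => new Inventory { Id = x.Id, Name = x.Name, Stock = x.Stock })
    .ToList();
var bundles = combined.Where(x => x.IsBundle)
    .Select(x => new Bundle { Id = x.Id, Name = x.Name })
    .ToList();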
What you are asking for is not possible.
If you are trying to get two distinct result sets, you need to hit the DB twice, once per result set. You can do an outer join, but this would make your result set larger than it should be, and judging from the fact that you want to hit the DB only once, you care about performance.
Also, if performance were an issue, I would not use LINQ in the first place.
I think that you can do this by asking for multiple result sets (as Gregory pointed out already), but I am not sure what the "best" way to do it would be. Why don't you take a look at this MSDN article, for example?
How to: Use Stored Procedures Mapped for Multiple Result Shape
Or this, for a linq-to-sql approach:
LINQ to SQL: returning multiple result sets
I have two ICollection<ObjectiveDetail> collections:
public partial class ObjectiveDetail
{
public int ObjectiveDetailId { get; set; }
public int Number { get; set; }
public string Text { get; set; }
}
var _objDetail1 = ...; // contains a list of ObjectiveDetails from my database
var _objDetail2 = ...; // contains a list of ObjectiveDetails from the web front end
How can I iterate through these and issue an Add, Delete or Update to synchronize the database with the latest from the web front end?
If there is a record present in the first list but not the second then I would like to:
_uow.ObjectiveDetails.Delete(_objectiveDetail);
If there is a record present in the second list but not the first then I would like to:
_uow.ObjectiveDetails.Add(_objectiveDetail);
If there is a record (same ObjectiveDetailId) in the first and second then I need to see if they are the same and if not issue an:
_uow.ObjectiveDetails.Update(_objectiveDetail);
I was thinking to do this with some kind of:
foreach (var _objectiveDetail in _objectiveDetails) {}
but then I think I might need to have two of these and I am also wondering if there is a better way. Does anyone have any suggestions as to how I could do this?
The following code is one possible solution:
// objectiveDetail1 holds the current database rows, objectiveDetail2 the rows from the web front end
var toBeUpdated =
    objectiveDetail2.Where(
        a => objectiveDetail1.Any(
            b => (b.ObjectiveDetailId == a.ObjectiveDetailId) &&
                 (b.Number != a.Number || !b.Text.Equals(a.Text))));
var toBeAdded =
    objectiveDetail2.Where(a => objectiveDetail1.All(
        b => b.ObjectiveDetailId != a.ObjectiveDetailId));
var toBeDeleted =
    objectiveDetail1.Where(a => objectiveDetail2.All(
        b => b.ObjectiveDetailId != a.ObjectiveDetailId));
The rest is simple code to Add, Delete, or Update each of the three result sets against the database, as sketched below.
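A sketch of that follow-up step, reusing the _uow calls from the question (it assumes the repository methods accept an ObjectiveDetail and that a commit/save happens afterwards):
foreach (var objectiveDetail in toBeAdded)
    _uow.ObjectiveDetails.Add(objectiveDetail);

foreach (var objectiveDetail in toBeDeleted)
    _uow.ObjectiveDetails.Delete(objectiveDetail);

foreach (var objectiveDetail in toBeUpdated)
    _uow.ObjectiveDetails.Update(objectiveDetail);

// followed by whatever Save/Commit call the unit of work exposes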
It looks like you just want the two lists to be copies of one another. You can implement a Copy method and replace the outdated collection (if you implement ICollection you will need to implement CopyTo anyway), and you can add a version field to the container so you know whether you need to update it.
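A rough sketch of that version-field idea (the container class and its members are illustrative, not from the question):
public class ObjectiveDetailContainer
{
    public int Version { get; private set; }
    public List<ObjectiveDetail> Items { get; private set; }

    public ObjectiveDetailContainer()
    {
        Items = new List<ObjectiveDetail>();
    }

    // Replace the outdated collection wholesale when the incoming data carries a newer version.
    public void ReplaceIfNewer(int incomingVersion, IEnumerable<ObjectiveDetail> incoming)
    {
        if (incomingVersion <= Version)
            return;
        Items = incoming.ToList();   // copy, so later changes to the source list don't leak in
        Version = incomingVersion;
    }
}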
If you don't want to do it this way and you want to go through the elements and update them, check whether you can store each object's state (modified, deleted, updated); this will help with the comparison.
foreach (var _objectiveDetail in _objectiveDetails) {} but then I
think I might need to have two of these and I am also wondering if
there is a better way. Does anyone have any suggestions as to how I
could do this?
Instead of looping through the whole collection, use a LINQ query:
var query = from _objectiveDetail in _objectiveDetails
where (condition)
select ... ;
Update:
It's pointless to iterate through the whole collection if you want to update/delete/add something from the web end. Humans are a bit slower than computers, aren't they? Do it one record at a time. In fact, I don't understand the idea of two collections; what is it for? If you still want it: use an event to run the query, select the updated/deleted/added record, and do the appropriate operation on it.
I am writing an application that validates some cities. Part of the validation is checking whether the city is already in a list, by matching the country code and city name (or alternative city name).
I am storing my existing cities list as:
public struct City
{
public int id;
public string countrycode;
public string name;
public string altName;
public int timezoneId;
}
List<City> cityCache = new List<City>();
I then have a list of location strings that contain country codes and city names etc. I split this string and then check if the city already exists.
string cityString = GetCity(); //get the city string
string countryCode = GetCountry(); //get the country string
city = new City(); //create a new city object
if (!string.IsNullOrEmpty(cityString)) //don't bother checking if no city was specified
{
//check if city exists in the list in the same country
city = cityCache.FirstOrDefault(x => countryCode == x.countrycode && (Like(x.name, cityString ) || Like(x.altName, cityString )));
//if no city is found, search for a single match across any country
if (city.id == default(int) && cityCache.Count(x => Like(x.name, cityString ) || Like(x.altName, cityString )) == 1)
city = cityCache.FirstOrDefault(x => Like(x.name, cityString ) || Like(x.altName, cityString ));
}
if (city.id == default(int))
{
//city not matched
}
This is very slow for lots of records, as I am also checking other objects like airports and countries in the same way. Is there any way I can speed this up? Is there a faster collection for this kind of comparison than List<>, and is there a faster comparison function than FirstOrDefault()?
EDIT
I forgot to post my Like() function:
bool Like(string s1, string s2)
{
if (string.IsNullOrEmpty(s1) || string.IsNullOrEmpty(s2))
return s1 == s2;
if (s1.ToLower().Trim() == s2.ToLower().Trim())
return true;
return Regex.IsMatch(Regex.Escape(s1.ToLower().Trim()), Regex.Escape(s2.ToLower().Trim()) + ".");
}
I would use a HashSet for the CityString and CountryCode.
Something like
var validCountryCode = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
if (validCountryCode.Contains(city.CountryCode))
{
}
etc...
Personally I would do all the validation in the constructor to ensure only valid City objects exist.
Other things to watch out for, performance-wise:
Use a HashSet if you're looking something up in a list of valid values.
Use an IEqualityComparer where appropriate, and reuse the instance to avoid the construction/GC costs (see the comparer sketch below).
Use a Dictionary for anything you need to look up by key (e.g. timeZoneId).
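As an example of the IEqualityComparer point, here is a sketch of a reusable comparer for the City struct from the question (assuming a match means the same country code and name, case-insensitively):
class CityNameComparer : IEqualityComparer<City>
{
    public bool Equals(City x, City y)
    {
        return string.Equals(x.countrycode, y.countrycode, StringComparison.OrdinalIgnoreCase)
            && string.Equals(x.name, y.name, StringComparison.OrdinalIgnoreCase);
    }

    public int GetHashCode(City obj)
    {
        // Normalize case so that cities considered equal also hash equally.
        int h1 = obj.countrycode == null ? 0 : obj.countrycode.ToLowerInvariant().GetHashCode();
        int h2 = obj.name == null ? 0 : obj.name.ToLowerInvariant().GetHashCode();
        return (h1 * 397) ^ h2;
    }
}

// Reuse a single instance rather than constructing one per lookup:
var knownCities = new HashSet<City>(cityCache, new CityNameComparer());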
Edit 1
Your cityCache could be something like:
var cityCache = new Dictionary<string, Dictionary<string, int>>();
var countryCode = "";
var cityCode = "";
var id = x;
public static bool IsCityValid(City c)
{
return
cityCache.ContainsKey(c.CountryCode) &&
cityCache[c.CountryCode].ContainsKey(c.CityCode) &&
cityCache[c.CountryCode][c.CityCode] == c.Id;
}
Edit 2
Didn't think I'd have to explain this, but based on the comments, maybe I do.
FirstOrDefault() is an O(n) operation. Essentially, every time you try to find something in a list, you can either be lucky and it's first in the list, or unlucky and it's last; on average you scan list.Count / 2 items. A dictionary, on the other hand, is an O(1) lookup. Using the IEqualityComparer it will generate a HashCode() and look up which bucket the item sits in. Only if there are lots of collisions will it fall back to Equals to find what you're after among the items in that bucket. Even with a poor-quality HashCode() (short of always returning the same value), because Dictionary / HashSet use prime-number bucket counts, your data gets split across buckets, reducing the number of equality checks you need to perform.
So with a list of 10 objects you're running Like on average 5 times.
A Dictionary of the same 10 objects (depending on the quality of the HashCode) could need as little as one HashCode() call followed by one Equals() call.
This sounds like a good candidate for a binary tree.
For binary tree implementations in .NET, see: Objects that represent trees
EDIT:
If you want to search a collection quickly, and that collection is particularly large, then your best option is to sort it and implement a search algorithm based on that sorting.
Binary trees are a good option when you want to search quickly and insert items relatively infrequently. To keep your searches quick, though, you'll need to use a self-balancing binary tree.
For this to work properly, though, you'll also need a standard key to use for your cities. A numeric key would be best, but strings can work fine too. If you concatenate your city name with other information (such as the state and country) you will get a nice, unique key. You could also change the case to all upper- or lower-case to get a case-insensitive key.
If you don't have a key, then you can't sort your data. If you can't sort your data, then there aren't going to be many "quick" options.
EDIT 2:
I notice that your Like function edits your strings a lot. Editing a string is an extremely expensive operation. You would be much better off performing the ToLower() and Trim() functions once, preferably when you are first loading your data. This will probably speed up your function considerably.
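A sketch combining both points, keyed on a concatenation of country code and city name normalized once at load time (a SortedDictionary is a self-balancing binary search tree internally; a plain Dictionary would give O(1) lookups instead):
// Build once, when the city list is first loaded.
var cityByKey = new SortedDictionary<string, City>();
foreach (var c in cityCache)
{
    var key = c.countrycode.Trim().ToLowerInvariant() + "|" + c.name.Trim().ToLowerInvariant();
    cityByKey[key] = c;
}

// Lookup: normalize the incoming strings once, then probe the keyed collection.
var lookupKey = countryCode.Trim().ToLowerInvariant() + "|" + cityString.Trim().ToLowerInvariant();
City match;
bool found = cityByKey.TryGetValue(lookupKey, out match);
Note that this only covers exact matches; the alt-name and prefix cases from your Like() would still need a fallback.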
I've written this code to project a one-to-many relation, but it's not working:
using (var connection = new SqlConnection(connectionString))
{
connection.Open();
IEnumerable<Store> stores = connection.Query<Store, IEnumerable<Employee>, Store>
(#"Select Stores.Id as StoreId, Stores.Name,
Employees.Id as EmployeeId, Employees.FirstName,
Employees.LastName, Employees.StoreId
from Store Stores
INNER JOIN Employee Employees ON Stores.Id = Employees.StoreId",
(a, s) => { a.Employees = s; return a; },
splitOn: "EmployeeId");
foreach (var store in stores)
{
Console.WriteLine(store.Name);
}
}
Can anybody spot the mistake?
EDIT:
These are my entities:
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
public double Price { get; set; }
public IList<Store> Stores { get; set; }
public Product()
{
Stores = new List<Store>();
}
}
public class Store
{
public int Id { get; set; }
public string Name { get; set; }
public IEnumerable<Product> Products { get; set; }
public IEnumerable<Employee> Employees { get; set; }
public Store()
{
Products = new List<Product>();
Employees = new List<Employee>();
}
}
EDIT:
I change the query to:
IEnumerable<Store> stores = connection.Query<Store, List<Employee>, Store>
(#"Select Stores.Id as StoreId ,Stores.Name,Employees.Id as EmployeeId,
Employees.FirstName,Employees.LastName,Employees.StoreId
from Store Stores INNER JOIN Employee Employees
ON Stores.Id = Employees.StoreId",
(a, s) => { a.Employees = s; return a; }, splitOn: "EmployeeId");
and I got rid of the exceptions! However, Employees are not mapped at all. I am still not sure what problem it had with IEnumerable<Employee> in the first query.
This post shows how to query a highly normalised SQL database, and map the result into a set of highly nested C# POCO objects.
Ingredients:
8 lines of C#.
Some reasonably simple SQL that uses some joins.
Two awesome libraries.
The insight that allowed me to solve this problem is to separate the MicroORM from mapping the result back to the POCO Entities. Thus, we use two separate libraries:
Dapper as the MicroORM.
Slapper.Automapper for mapping.
Essentially, we use Dapper to query the database, then use Slapper.Automapper to map the result straight into our POCOs.
Advantages
Simplicity. It's less than 8 lines of code. I find this a lot easier to understand, debug, and change.
Less code. A few lines of code is all Slapper.Automapper needs to handle anything you throw at it, even if we have a complex nested POCO (i.e. POCO contains List<MyClass1> which in turn contains List<MySubClass2>, etc).
Speed. Both of these libraries have an extraordinary amount of optimization and caching to make them run almost as fast as hand tuned ADO.NET queries.
Separation of concerns. We can change the MicroORM for a different one, and the mapping still works, and vice-versa.
Flexibility. Slapper.Automapper handles arbitrarily nested hierarchies; it isn't limited to a couple of levels of nesting. We can easily make rapid changes, and everything will still work.
Debugging. We can first see that the SQL query is working properly, then we can check that the SQL query result is properly mapped back to the target POCO Entities.
Ease of development in SQL. I find that creating flattened queries with inner joins to return flat results is much easier than creating multiple select statements, with stitching on the client side.
Optimized queries in SQL. In a highly normalized database, creating a flat query allows the SQL engine to apply advanced optimizations to the whole query, which would not normally be possible if many small individual queries were constructed and run.
Trust. Dapper is the back end for StackOverflow, and, well, Randy Burden is a bit of a superstar. Need I say any more?
Speed of development. I was able to do some extraordinarily complex queries, with many levels of nesting, and the dev time was quite low.
Fewer bugs. I wrote it once, it just worked, and this technique is now helping to power a FTSE company. There was so little code that there was no unexpected behavior.
Disadvantages
Scaling beyond 1,000,000 rows returned. Works well when returning < 100,000 rows. However, if we are bringing back >1,000,000 rows, in order to reduce the traffic between us and SQL server, we should not flatten it out using inner join (which brings back duplicates), we should instead use multiple select statements and stitch everything back together on the client side (see the other answers on this page).
This technique is query oriented. I haven't used this technique to write to the database, but I'm sure that Dapper is more than capable of doing this with some extra work, as StackOverflow itself uses Dapper as its Data Access Layer (DAL).
Performance Testing
In my tests, Slapper.Automapper added a small overhead to the results returned by Dapper, which meant that it was still 10x faster than Entity Framework, and the combination is still pretty darn close to the theoretical maximum speed SQL + C# is capable of.
In most practical cases, most of the overhead would be in a less-than-optimum SQL query, and not with some mapping of the results on the C# side.
Performance Testing Results
Total number of iterations: 1000
Dapper by itself: 1.889 milliseconds per query, using 3 lines of code to return the dynamic.
Dapper + Slapper.Automapper: 2.463 milliseconds per query, using an additional 3 lines of code for the query + mapping from dynamic to POCO Entities.
Worked Example
In this example, we have a list of Contacts, and each Contact can have one or more phone numbers.
POCO Entities
public class TestContact
{
public int ContactID { get; set; }
public string ContactName { get; set; }
public List<TestPhone> TestPhones { get; set; }
}
public class TestPhone
{
public int PhoneId { get; set; }
public int ContactID { get; set; } // foreign key
public string Number { get; set; }
}
SQL Table TestContact
SQL Table TestPhone
Note that this table has a foreign key ContactID which refers to the TestContact table (this corresponds to the List<TestPhone> in the POCO above).
SQL Which Produces Flat Result
In our SQL query, we use as many JOIN statements as we need to get all of the data we need, in a flat, denormalized form. Yes, this might produce duplicates in the output, but these duplicates will be eliminated automatically when we use Slapper.Automapper to automatically map the result of this query straight into our POCO object map.
USE [MyDatabase];
SELECT tc.[ContactID] as ContactID
,tc.[ContactName] as ContactName
,tp.[PhoneId] AS TestPhones_PhoneId
,tp.[ContactId] AS TestPhones_ContactId
,tp.[Number] AS TestPhones_Number
FROM TestContact tc
INNER JOIN TestPhone tp ON tc.ContactId = tp.ContactId
C# code
const string sql = @"SELECT tc.[ContactID] as ContactID
,tc.[ContactName] as ContactName
,tp.[PhoneId] AS TestPhones_PhoneId
,tp.[ContactId] AS TestPhones_ContactId
,tp.[Number] AS TestPhones_Number
FROM TestContact tc
INNER JOIN TestPhone tp ON tc.ContactId = tp.ContactId";
string connectionString = // -- Insert SQL connection string here.
using (var conn = new SqlConnection(connectionString))
{
conn.Open();
// Can set default database here with conn.ChangeDatabase(...)
{
// Step 1: Use Dapper to return the flat result as a Dynamic.
dynamic test = conn.Query<dynamic>(sql);
// Step 2: Use Slapper.Automapper for mapping to the POCO Entities.
// - IMPORTANT: Let Slapper.Automapper know how to do the mapping;
// let it know the primary key for each POCO.
// - Must also use underscore notation ("_") to name parameters in the SQL query;
// see Slapper.Automapper docs.
Slapper.AutoMapper.Configuration.AddIdentifiers(typeof(TestContact), new List<string> { "ContactID" });
Slapper.AutoMapper.Configuration.AddIdentifiers(typeof(TestPhone), new List<string> { "PhoneID" });
var testContact = (Slapper.AutoMapper.MapDynamic<TestContact>(test) as IEnumerable<TestContact>).ToList();
foreach (var c in testContact)
{
foreach (var p in c.TestPhones)
{
Console.Write("ContactName: {0}: Phone: {1}\n", c.ContactName, p.Number);
}
}
}
}
Output
POCO Entity Hierarchy
Looking in Visual Studio, we can see that Slapper.Automapper has properly populated our POCO Entities, i.e. we have a List<TestContact>, and each TestContact has a List<TestPhone>.
Notes
Both Dapper and Slapper.Automapper cache everything internally for speed. If you run into memory issues (very unlikely), ensure that you occasionally clear the cache for both of them.
Ensure that you name the columns coming back, using the underscore (_) notation to give Slapper.Automapper clues on how to map the result into the POCO Entities.
Ensure that you give Slapper.Automapper clues on the primary key for each POCO Entity (see the lines Slapper.AutoMapper.Configuration.AddIdentifiers). You can also use Attributes on the POCO for this. If you skip this step, then it could go wrong (in theory), as Slapper.Automapper would not know how to do the mapping properly.
Update 2015-06-14
Successfully applied this technique to a huge production database with over 40 normalized tables. It worked perfectly to map an advanced SQL query with over 16 inner joins and left joins into the proper POCO hierarchy (with 4 levels of nesting). The queries are blindingly fast, almost as fast as hand coding it in ADO.NET (typically 52 milliseconds for the query, and 50 milliseconds for the mapping from the flat result into the POCO hierarchy). This is really nothing revolutionary, but it sure beats Entity Framework for speed and ease of use, especially if all we are doing is running queries.
Update 2016-02-19
Code has been running flawlessly in production for 9 months. The latest version of Slapper.Automapper has all of the changes that I applied to fix the issue related to nulls being returned in the SQL query.
Update 2017-02-20
Code has been running flawlessly in production for 21 months, and has handled continuous queries from hundreds of users in a FTSE 250 company.
Slapper.Automapper is also great for mapping a .csv file straight into a list of POCOs. Read the .csv file into a list of IDictionary, then map it straight into the target list of POCOs. The only trick is that you have to add a property int Id { get; set; } and make sure it's unique for every row (or else the automapper won't be able to distinguish between the rows).
Update 2019-01-29
Minor update to add more code comments.
See: https://github.com/SlapperAutoMapper/Slapper.AutoMapper
I wanted to keep it as simple as possible; my solution:
public List<ForumMessage> GetForumMessagesByParentId(int parentId)
{
var sql = @"
select d.id_data as Id, d.cd_group As GroupId, d.cd_user as UserId, d.tx_login As Login,
d.tx_title As Title, d.tx_message As [Message], d.tx_signature As [Signature], d.nm_views As Views, d.nm_replies As Replies,
d.dt_created As CreatedDate, d.dt_lastreply As LastReplyDate, d.dt_edited As EditedDate, d.tx_key As [Key]
from
t_data d
where d.cd_data = @DataId order by id_data asc;
select d.id_data As DataId, di.id_data_image As DataImageId, di.cd_image As ImageId, i.fl_local As IsLocal
from
t_data d
inner join T_data_image di on d.id_data = di.cd_data
inner join T_image i on di.cd_image = i.id_image
where d.id_data = @DataId and di.fl_deleted = 0 order by d.id_data asc;";
var mapper = _conn.QueryMultiple(sql, new { DataId = parentId });
var messages = mapper.Read<ForumMessage>().ToDictionary(k => k.Id, v => v);
var images = mapper.Read<ForumMessageImage>().ToList();
foreach(var imageGroup in images.GroupBy(g => g.DataId))
{
messages[imageGroup.Key].Images = imageGroup.ToList();
}
return messages.Values.ToList();
}
I still make one call to the database, and while I now execute two queries instead of one, the second query uses an INNER JOIN instead of a less optimal LEFT JOIN.
A slight modification of Andrew's answer that utilizes a Func to select the parent key instead of GetHashCode.
public static IEnumerable<TParent> QueryParentChild<TParent, TChild, TParentKey>(
this IDbConnection connection,
string sql,
Func<TParent, TParentKey> parentKeySelector,
Func<TParent, IList<TChild>> childSelector,
dynamic param = null, IDbTransaction transaction = null, bool buffered = true, string splitOn = "Id", int? commandTimeout = null, CommandType? commandType = null)
{
Dictionary<TParentKey, TParent> cache = new Dictionary<TParentKey, TParent>();
connection.Query<TParent, TChild, TParent>(
sql,
(parent, child) =>
{
if (!cache.ContainsKey(parentKeySelector(parent)))
{
cache.Add(parentKeySelector(parent), parent);
}
TParent cachedParent = cache[parentKeySelector(parent)];
IList<TChild> children = childSelector(cachedParent);
children.Add(child);
return cachedParent;
},
param as object, transaction, buffered, splitOn, commandTimeout, commandType);
return cache.Values;
}
Example usage
conn.QueryParentChild<Product, Store, int>("sql here", prod => prod.Id, prod => prod.Stores)
According to this answer there is no one-to-many mapping support built into Dapper.Net. Queries will always return one object per database row. There is an alternative solution included, though.
Here is another method:
Order (one) - OrderDetail (many)
using (var connection = new SqlCeConnection(connectionString))
{
var orderDictionary = new Dictionary<int, Order>();
var list = connection.Query<Order, OrderDetail, Order>(
sql,
(order, orderDetail) =>
{
Order orderEntry;
if (!orderDictionary.TryGetValue(order.OrderID, out orderEntry))
{
orderEntry = order;
orderEntry.OrderDetails = new List<OrderDetail>();
orderDictionary.Add(orderEntry.OrderID, orderEntry);
}
orderEntry.OrderDetails.Add(orderDetail);
return orderEntry;
},
splitOn: "OrderDetailID")
.Distinct()
.ToList();
}
Source: http://dapper-tutorial.net/result-multi-mapping#example---query-multi-mapping-one-to-many
Here is a crude workaround
public static IEnumerable<TOne> Query<TOne, TMany>(this IDbConnection cnn, string sql, Func<TOne, IList<TMany>> property, dynamic param = null, IDbTransaction transaction = null, bool buffered = true, string splitOn = "Id", int? commandTimeout = null, CommandType? commandType = null)
{
var cache = new Dictionary<int, TOne>();
cnn.Query<TOne, TMany, TOne>(sql, (one, many) =>
{
if (!cache.ContainsKey(one.GetHashCode()))
cache.Add(one.GetHashCode(), one);
var localOne = cache[one.GetHashCode()];
var list = property(localOne);
list.Add(many);
return localOne;
}, param as object, transaction, buffered, splitOn, commandTimeout, commandType);
return cache.Values;
}
It's by no means the most efficient way, but it will get you up and running. I'll try to optimise this when I get a chance.
use it like this:
conn.Query<Product, Store>("sql here", prod => prod.Stores);
Bear in mind your objects need to override GetHashCode, perhaps like this:
public override int GetHashCode()
{
return this.Id.GetHashCode();
}
I'm running into a common need in my project to return collections of my model objects, plus a count of certain types of children within each, but I don't know if it is possible, or how, to model a "TotalCount" property in a Model class and populate it as part of one single Entity Framework query, preferably using LINQ queries. Is it possible to do this whilst still being able to use the Entity Framework .Include("Object"), .Skip() and .Take()? I'm new to the Entity Framework, so I may be missing tons of obvious stuff that would allow this...
I would like to be able to paginate on the dynamically constructed count properties as well. I'm thinking that the most scalable approach would be to store the counts as separate database properties and then simply query the count properties. But for cases where there are small row counts that I'm dealing with, I'd rather do the counts dynamically.
In a model like this:
Table: Class
Table: Professor
Table: Attendee
Table: ClassComment
I'd like to return a list of Class objects in the form of List<Class>, but I would also like the counts of Attendees and ClassComments to be determined in a single query (LINQ preferred) and set in two Class properties called AttendeeCount and ClassCommentCount.
I have this thus far:
var query = from u in context.Classes
            orderby u.Name
            select u;
List<Class> topics = ((ObjectQuery<Class>)query)
.Include("ClassComments")
.Skip(startRecord).Take(recordsToReturn).ToList();
Any suggestions or alternative query approaches that can still allow the use of .Include() and pagination would be much much appreciated, in order to produce a single database query, if at all possible. Thank you for any suggestions!
Try this:
public class ClassViewModel {
public Class Class { get; set; }
public int AttendeeCount { get; set; }
public int ClassCommentCount { get; set; }
}
var viewModel = context.Classes.Select(clas =>
new ClassViewModel {
Class = clas,
AttendeeCount = clas.ClassAttendes.Count,
ClassCommentCount = clas.ClassComments.Count}
).OrderBy(model => model.ClassCommentCount).Skip(startRecord).Take(recordsToReturn).ToList();
You don't have to include the comments to get the count.
It will not work this way. The easiest approach is to use projection into an anonymous (or custom) non-entity type. I would try something like this:
var query = context.Classes
.Include("ClassComments") // Only add this if you want eager loading of all realted comments
.OrderBy(c => c.Name)
.Skip(startRecord)
.Take(recordsToReturn)
.Select(c => new
{
Class = c,
AttendeeCount = c.Attendees.Count(),
ClassCommentCount = c.ClassComments.Count() // Not needed because you are loading all Class comments so you can call Count on loaded collection
});
The problem in your requirement is the AttendeeCount and ClassCommentCount properties. You can't easily add them to your model because there is no corresponding column in the database (unless you define one, in which case you don't need to manually count records). You can define them in a partial Class implementation, but in that case you can't use them in a Linq-to-entities query.
The only way to map this in EF is to use a DB view and create a special read-only entity to represent it in your application, or to use DefiningQuery, which is a custom SQL command defined in SSDL instead of a DB table or view.