I have run into a strange problem. When a user opens any page of my web app,
I check whether they have permission to access it, and start a trial period if it is their first visit.
Here is the relevant piece of code:
List<string> temp_workers_id = new List<string>();
...
if (temp_workers_id.Count > 6)
{
    System.Data.SqlTypes.SqlDateTime sqlDate = new System.Data.SqlTypes.SqlDateTime(DateTime.Now.Date);
    var rusers = dbctx.tblMappings.Where(tm => temp_workers_id.Any(c => c == tm.ModelID));
    var permissions = dbctx.UserPermissions
        .Where(p => rusers
            .Any(ap => ap.UserID == p.UserID)
            && p.DateStart != null
            && p.DateEnd != null
            && p.DateStart <= sqlDate.Value
            && p.DateEnd >= sqlDate.Value);
    if (permissions.Count() < 1)
    {
        permissions = dbctx.UserPermissions
            .Where(p => rusers
                .Any(ap => ap.UserID == p.UserID)
                && p.DateStart == null
                && p.DateEnd == null);
        var used = dbctx.UserPermissions
            .Where(p => rusers
                .Any(ap => ap.UserID == p.UserID)
                && p.DateStart != null
                && p.DateEnd != null);
        if (permissions.Count() > 0 && used.Count() < 1)
        {
            var p = permissions.First();
            using (Models.TTTDbContext tdbctx = new Models.TTTDbContext())
            {
                var tp = tdbctx.UserPermissions.SingleOrDefault(tup => tup.UserID == p.UserID);
                tp.DateStart = DateTime.Now.Date;
                tp.DateEnd = DateTime.Now.Date.AddDays(60);
                tdbctx.SaveChanges();
            }
Here the First() method throws an exception:
Sequence contains no elements
How could that even happen?
EDIT:
I don't think the user opens two browsers and navigates here at the same time, but could this be a concurrency issue?
You claim you only found this in the server logs and didn't encounter it during debugging. That means that between these lines:
if (permissions.Count() > 0)
{
var p = permissions.First();
Some other process or thread changed your database, so that the query no longer matched any rows.
This is caused by permissions holding a lazily evaluated query, meaning that the query is only executed when you iterate it (which both Count() and First() do).
So in the Count(), the query is executed:
SELECT COUNT(*) ... WHERE ...
Which returns, at that moment, one row. Then the data is modified externally, causing the next query (at First()):
SELECT n1, n2, ... WHERE ...
To return zero rows, causing First() to throw.
Now, how to solve that is up to you and depends entirely on how you want to model this scenario. It means the second query was actually correct: at that moment, there were no rows left that fulfilled the query criteria. You could materialize the query once:
var permissionList = permissions.ToList();
But that would mean your logic operates on stale data. The same would happen if you used FirstOrDefault():
var permissionToApply = permissions.FirstOrDefault();
if (permissionToApply != null)
{
// rest of your logic
}
So it's basically a lose-lose scenario. There's always the chance that you're operating on stale data, which means that the next code:
tdbctx.UserPermissions.SingleOrDefault(tup => tup.UserID == p.UserID);
Could fail as well: SingleOrDefault() returns null when the row is no longer there, and the assignment to tp.DateStart right after it would then throw. So every time you query the database, you'll have to write the code in such a way that it can handle the records not being present anymore.
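For example, a minimal sketch of the trial-assignment step written defensively, reusing the names from the code above (it re-queries inside the context that performs the update and tolerates the row no longer matching):
using (Models.TTTDbContext tdbctx = new Models.TTTDbContext())
{
    // Re-check inside the same context that performs the update.
    var tp = tdbctx.UserPermissions.FirstOrDefault(tup => tup.UserID == p.UserID
                                                       && tup.DateStart == null
                                                       && tup.DateEnd == null);
    if (tp != null)
    {
        tp.DateStart = DateTime.Now.Date;
        tp.DateEnd = DateTime.Now.Date.AddDays(60);
        tdbctx.SaveChanges();
    }
    // else: another request already started (or removed) the trial; decide here how to proceed.
}
This narrows the window for the race but does not eliminate it; only a database-side constraint or an explicit transaction closes it completely.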
Related
I am making a call to my database via LINQ in which I filter the data the database returns.
However, my query ends up with multiple "does not contain" checks, and each one has to convert the property to lower case separately so that the comparison is case-insensitive.
I am not entirely sure whether there is a performance impact, but if there is a way to frame the query better or to optimize it,
I would certainly like to factor that in.
var accounts = context.Accounts
.Where(x => !x.type.ToLower().Contains("distribution")
&& !x.type.ToLower().Contains("bonus")
&& !x.type.ToLower().Contains("dividend")
&& !x.type.ToLower().Contains("redemption")
&& !x.type.ToLower().Contains("institutional")
&& !x.type.ToLower().Contains("unclaimed")
&& !x.type.ToLower().Contains("segregated")
&& !x.type.ToLower().Contains("discontinued")
&& !x.type.ToLower().Contains("retail")
&& !x.type.ToLower().Contains("cumulative")
&& !x.type.ToLower().Contains("monthly payment option")
&& !x.type.ToLower().Contains("payout")
&& !x.type.ToLower().Contains("withheld")
&& !x.type.ToLower().Contains("pf")
&& !x.type.ToLower().Contains(" p f "))
.ToList();
You can put all your asserted strings in a collection and take advantage of the LINQ All method. Something like this:
private static readonly string[] Filters = new []
{
"distribution",
"bonus",
"dividend",
"redemption",
"institutional",
"unclaimed",
"segregated",
"discontinued",
"retail",
"cumulative",
"monthly payment option",
"payout",
"withheld",
"pf",
" p f "
};
var accounts = context.Accounts.Where(x => Filters.All(f => !x.type.ToLower().Contains(f)));
Or, using a case-insensitive comparison instead of ToLower (this Contains overload is only available on .NET Core / .NET 5 and later):
var accounts = context.Accounts.Where(x => Filters.All(f => !x.type.Contains(f, StringComparison.CurrentCultureIgnoreCase)));
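Note that the Contains and IndexOf overloads taking a StringComparison are plain .NET string methods that Entity Framework will not translate to SQL, so they only help once the rows are in memory. A sketch of that variant, assuming the result set is small enough to filter client-side:
var accounts = context.Accounts
    .AsEnumerable() // switch to LINQ to Objects: every row is streamed from the database and filtered in memory
    .Where(x => Filters.All(f =>
        x.type.IndexOf(f, StringComparison.CurrentCultureIgnoreCase) < 0))
    .ToList();
If the filtering needs to stay in the database, keep the ToLower form above.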
I was originally using a foreach loop and then for each element in the loop, I perform a LINQ query like so:
foreach (MyObject identifier in identifiers.Where(i => i.IsMarkedForDeletion == false))
{
if (this.MyEntities.Identifiers.Where(pi => identifier.Field1 == pi.Field1 && identifier.Field2 == pi.Field2 && identifier.Field3 == pi.Field3).Any())
{
return false;
}
}
return true;
Then I modified it like so:
if (identifiers.Any(i => !i.IsMarkedForDeletion && this.MyEntities.Identifiers.Where(pi => i.Field1 == pi.Field1 && i.Field2 == pi.Field2 && i.Field3 == pi.Field3).Any()))
{
return false;
}
return true;
My question is: is this still the wrong way to use LINQ? Basically, I want to eliminate the foreach loop (which it seems like I should be able to get rid of) and also make the DB access faster by not issuing a separate query for each element of the list. Instead, I want to perform one query for all elements. Thanks!
You can change your code in this way, and it will be converted to a SQL statement as expected.
To prevent runtime errors during translation, it is better to store the DbSet in an IQueryable variable; identifiers should be IQueryable too, so change your code into something like this (to be honest, ReSharper converted your foreach into this short lambda):
IQueryable<MyObject2> identifiers = MyEntities.Identifiers.Where(i => i.IsMarkedForDeletion == false);
IQueryable<MyObject2> ids = MyEntities.Identifiers.AsQueryable();
return identifiers.All(identifier => !ids.Any(pi => identifier.Field1 == pi.Field1 && identifier.Field2 == pi.Field2 && identifier.Field3 == pi.Field3));
If identifiers is an in-memory collection, you can change the code in this way (assuming the fields are strings):
IQueryable<MyObject2> ids = MyEntities.Identifiers.AsQueryable();
string[] values = identifiers.Where(i => i.IsMarkedForDeletion == false).Select(i => String.Concat(i.Field1, i.Field2, i.Field3)).ToArray();
return !ids.Any(i => values.Contains(i.Field1 + i.Field2 + i.Field3));
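One caveat with the concatenation trick: it can produce false matches when field boundaries shift (for example, "ab" + "c" equals "a" + "bc"). Inserting a separator that is assumed never to occur in the data avoids that:
string[] values = identifiers
    .Where(i => !i.IsMarkedForDeletion)
    .Select(i => i.Field1 + "|" + i.Field2 + "|" + i.Field3) // "|" assumed absent from the fields
    .ToArray();
return !ids.Any(i => values.Contains(i.Field1 + "|" + i.Field2 + "|" + i.Field3));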
Unfortunately, your modified version will be executed exactly the same way (i.e. multiple database queries) as the original foreach approach, because EF does not support database queries that join to an in-memory collection (except for collections of primitive and enumeration types), so if you try the most logical way
bool result = this.MyEntities.Identifiers.Any(pi => identifiers.Any(i =>
!i.IsMarkedForDeletion &&
i.Field1 == pi.Field1 && i.Field2 == pi.Field2 && i.Field3 == pi.Field3));
you'll get
NotSupportedException: Unable to create a constant value of type 'YourType'. Only primitive types or enumeration types are supported in this context.
The only way to let EF execute a single database query is to manually build a LINQ query with a Concat per item of the in-memory collection, like this
IQueryable<Identifier> query = null;
foreach (var item in identifiers.Where(i => !i.IsMarkedForDeletion))
{
    var i = item; // copy the loop variable so each subquery captures its own value
    var subquery = this.MyEntities.Identifiers.Where(pi =>
        pi.Field1 == i.Field1 && pi.Field2 == i.Field2 && pi.Field3 == i.Field3);
    query = query != null ? query.Concat(subquery) : subquery;
}
bool result = query != null && query.Any();
See Logging and Intercepting Database Operations for how to monitor what EF actually executes.
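For example, a quick way to see the generated SQL during development (assuming MyEntities is your EF6 DbContext type):
using (var ctx = new MyEntities())
{
    // Writes every generated SQL statement and its timing to the debug output.
    ctx.Database.Log = s => System.Diagnostics.Debug.Write(s);

    // ... run the query here and inspect the Output window ...
}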
I would use it as follows:
if (identifiers.Where(i => !i.IsMarkedForDeletion &&
        this.MyEntities.Identifiers.Any(pi => pi.Field1 == i.Field1 &&
                                              pi.Field2 == i.Field2 &&
                                              pi.Field3 == i.Field3)).Any())
{
    return false;
}
return true;
I hope this helps. Even though it is more to type out, it is more understandable and readable than chaining multiple Where calls.
Using Entity Framework 6.0.2 and .NET 4.5.1 in Visual Studio 2013 Update 1 with a DbContext connected to SQL Server:
I have a long chain of filters I am applying to a query based on the caller's desired results. Everything was fine until I needed to add paging. Here's a glimpse:
IQueryable<ProviderWithDistance> results = (from pl in db.ProviderLocations
let distance = pl.Location.Geocode.Distance(_geo)
where pl.Location.Geocode.IsEmpty == false
where distance <= radius * 1609.344
orderby distance
select new ProviderWithDistance() { Provider = pl.Provider, Distance = Math.Round((double)(distance / 1609.344), 1) }).Distinct();
if (gender != null)
{
results = results.Where(p => p.Provider.Gender == (gender.ToUpper() == "M" ? Gender.Male : Gender.Female));
}
if (type != null)
{
int providerType;
if (int.TryParse(type, out providerType))
results = results.Where(p => p.Provider.ProviderType.Id == providerType);
}
if (newpatients != null && newpatients == true)
{
results = results.Where(p => p.Provider.ProviderLocations.Any(pl => pl.AcceptingNewPatients == null || pl.AcceptingNewPatients == AcceptingNewPatients.Yes));
}
if (string.IsNullOrEmpty(specialties) == false)
{
List<int> _ids = specialties.Split(',').Select(int.Parse).ToList();
results = results.Where(p => p.Provider.Specialties.Any(x => _ids.Contains(x.Id)));
}
if (string.IsNullOrEmpty(degrees) == false)
{
List<int> _ids = degrees.Split(',').Select(int.Parse).ToList();
results = results.Where(p => p.Provider.Degrees.Any(x => _ids.Contains(x.Id)));
}
if (string.IsNullOrEmpty(languages) == false)
{
List<int> _ids = languages.Split(',').Select(int.Parse).ToList();
results = results.Where(p => p.Provider.Languages.Any(x => _ids.Contains(x.Id)));
}
if (string.IsNullOrEmpty(keyword) == false)
{
results = results.Where(p =>
(p.Provider.FirstName + " " + p.Provider.LastName).Contains(keyword));
}
Here's the paging I added to the bottom (skip and max are just int parameters):
if (skip > 0)
results = results.Skip(skip);
results = results.Take(max);
return new ProviderWithDistanceDto { Locations = results.AsEnumerable() };
Now for my question(s):
As you can see, I am doing an orderby in the initial LINQ query, so why is it complaining that I need to do an OrderBy before doing a Skip (I thought I was?)...
I was under the assumption that it won't be turned into a SQL query and executed until I enumerate the results, which is why I wait until the last line to return the results AsEnumerable(). Is that the correct approach?
If I have to enumerate the results before doing Skip and Take how will that affect performance? Obviously I'd like to have SQL Server do the heavy lifting and return only the requested results. Or does it not matter (or have I got it wrong)?
I am doing an orderby in the initial LINQ query, so why is it complaining that I need to do an OrderBy before doing a Skip (I thought I was?)
Your query does start out ordered: the orderby clause builds an ordered query. However, the Distinct() appended at the end returns a plain IQueryable<ProviderWithDistance> with no ordering guarantee (LINQ to Entities does not preserve ordering across Distinct), and the Where calls you add afterwards keep it that way, causing the problem you see down the road. Logically the rows are the same, but by the time Skip is reached, Entity Framework no longer sees a sorted input.
To fix this, remove the order by in the original query, and add it right before you are ready for the paging, like this:
...
if (string.IsNullOrEmpty(languages) == false)
...
if (string.IsNullOrEmpty(keyword) == false)
...
results = results.OrderBy(r => r.Distance);
As long as ordering is the last operation, this should fix the runtime problem.
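Putting it together, the tail of the method would then look something like this (same variables as in the question):
results = results.OrderBy(r => r.Distance);

if (skip > 0)
    results = results.Skip(skip);
results = results.Take(max);

// Still a single SQL query; nothing hits the database until the caller enumerates Locations.
return new ProviderWithDistanceDto { Locations = results.AsEnumerable() };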
I was under the assumption that it won't be turned into a SQL query and executed until I enumerate the results, which is why I wait until the last line to return the results AsEnumerable(). Is that the correct approach?
Yes, that is the correct approach. You want your RDBMS to do as much work as possible, because doing paging in memory defeats the purpose of paging in the first place.
If I have to enumerate the results before doing Skip and Take how will that affect performance?
It would kill the performance, because your system would need to move around a lot more data than it did before you added paging.
I'm getting the "Possible Multiple Enumeration of IEnumerable" warning from Reshaper. How to handle it is already asked in another SO question. My question is slightly more specific though, about the various places the warning will pop up.
What I'm wondering is whether or not Resharper is correct in giving me this warning. My main concern is that the warning occurs on all instances of the users variable below, indicated in code by "//Warn".
My code is gathering information to be displayed on a web page in a grid. I'm using server-side paging, since the entire data set can be tens or hundreds of thousands of rows long. I've commented the code as best as possible.
Again, please let me know whether or not this code is susceptible to multiple enumerations. My goal is to perform my filtering and sorting of data before calling ToList(). Is that the correct way to do this?
private List<UserRow> GetUserRows(UserFilter filter, int start, int limit,
string sort, SortDirection dir, out int count)
{
count = 0;
// LINQ applies filter to Users object
var users = (
from u in _userManager.Users
where filter.Check(u)
select new UserRow
{
UserID = u.UserID,
FirstName = u.FirstName,
LastName = u.LastName,
// etc.
}
);
// LINQ orders by given sort
if (!String.IsNullOrEmpty(sort))
{
if (sort == "UserID" && dir == SortDirection.ASC)
users = users.OrderBy(u => u.UserID); //Warn
else if (sort == "UserID" && dir == SortDirection.DESC)
users = users.OrderByDescending(u => u.UserID); //Warn
else if (sort == "FirstName" && dir == SortDirection.ASC)
users = users.OrderBy(u => u.FirstName); //Warn
else if (sort == "FirstName" && dir == SortDirection.DESC)
users = users.OrderByDescending(u => u.FirstName); //Warn
else if (sort == "LastName" && dir == SortDirection.ASC)
users = users.OrderBy(u => u.LastName); //Warn
else if (sort == "LastName" && dir == SortDirection.DESC)
users = users.OrderByDescending(u => u.LastName); //Warn
// etc.
}
else
{
users = users.Reverse(); //Warn
}
// Output variable
count = users.Count(); //Warn
// Guard case - shouldn't trigger
if (limit == -1 || start == -1)
return users.ToList(); //Warn
// Pagination and ToList()
return users.Skip((start / limit) * limit).Take(limit).ToList(); //Warn
}
Yes, ReSharper is right: count = users.Count(); enumerates the query unconditionally, and then whichever return statement runs enumerates users a second time via ToList().
It appears that once ReSharper decides that something is at risk of being enumerated multiple times, it flags every single reference to the item in question with the multiple enumeration warning, even though it's not the code that is responsible for multiple enumeration. That's why you see the warning on so many lines.
A better approach would add a separate call to set the count. You can do it upfront in a separate statement, like this:
count = _userManager.Users.Count(u => filter.Check(u));
This way you would be able to leave users in its pre-enumerated state until the final call of ToList.
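A rough sketch of that shape, reusing the names from the question (the sorting block stays exactly as it is):
// Count with its own query, as shown above, so the main query is only enumerated once, by ToList below.
count = _userManager.Users.Count(u => filter.Check(u));

var users = from u in _userManager.Users
            where filter.Check(u)
            select new UserRow { UserID = u.UserID, FirstName = u.FirstName, LastName = u.LastName /* etc. */ };

// ... apply the sort exactly as in the question ...

if (limit == -1 || start == -1)
    return users.ToList();

return users.Skip((start / limit) * limit).Take(limit).ToList();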
Your warning is presumably generated by the call to Count, which does run an extra query.
In the case where limit == -1 || start == -1 you could make the ToList call and then get the count from that, but in the general case there's nothing you can do - you are making two queries, one for the full count and one for a subset of the items.
I'd be curious to see whether fixing the special case causes the warning to go away.
Edit: As this is LINQ-to-objects, you can replace the last return line with a foreach loop that goes through your whole collection counting them, but also builds up your restricted skip/take sublist dynamically, and thus only iterates once.
You could also benefit from only projecting (select new UserRow) in this foreach loop and just before your special-case ToList, rather than projecting your whole collection and then potentially discarding the majority of your objects.
var users = _userManager.Users.Where(u => filter.Check(u));
// Sort as above
List<UserRow> rtn;
if (limit == -1 || start == -1)
{
rtn = users.Select(u => new UserRow { UserID = u.UserID, ... }).ToList();
count = rtn.Count;
}
else
{
int takeFrom = (start / limit) * limit;
int forgetFrom = takeFrom + limit;
count = 0;
rtn = new List<UserRow>();
foreach(var u in users)
{
if (count >= takeFrom && count < forgetFrom)
rtn.Add(new UserRow { UserID = u.UserID, ... });
count++;
}
}
return rtn;
Can anyone help me figure this out?
The code below works fine and gets inside the if statement:
foreach (var m in msg)
{
if (string.IsNullOrEmpty(m.PhoneNumber))
{
m.PhoneNumber = (from c in db.Customers
where c.CustomerID == m.CustomerID
select c.PhoneNumber).Single();
}
}
However, in the code below, PhoneNumber is never set:
foreach (var m in msg.Where(z => (z.PhoneNumber == null || z.PhoneNumber == "")))
{
m.PhoneNumber = (from c in db.Customers
where c.CustomerID == m.CustomerID
select c.PhoneNumber).Single();
}
I'm presuming it's because the top code actually evaluates the expression whereas the bottom one doesn't. If that is the case, then how can you check for null on an unevaluated LINQ query?
EDIT: To avoid confusion, here is how msg is populated in both cases:
var msg = from m in db.Messages
where (m.StatusID == (int)MessageStatus.Submitted && m.MessageBoxTypeID == (int)MessageBoxType.Outbox)
select m;
I’m somewhat baffled by this one, but I have a wild guess. If the msg sequence is an IQueryable<T> which translates to an SQL query, then the behavior of the two snippets may vary. Suppose you have:
var msg =
from m in dataContext.MyTable
select m;
Your first snippet would cause the entire msg sequence to be enumerated, thereby issuing an unfiltered SELECT…FROM command to the database and fetching all the rows within your table.
foreach (var m in msg)
On the other hand, your second snippet applies a filter to your sequence before it is enumerated. Thus, the command issued to the database is a SELECT…FROM…WHERE.
foreach (var m in msg.Where(z => (z.PhoneNumber == null || z.PhoneNumber == "")))
There are various cases where the behavior of a filter applied in .NET would differ from its translation to Transact-SQL. For one, case-sensitivity. In your case, I assume that the mismatch is caused by entries whose PhoneNumber consists of whitespace, as these may match the empty string in SQL Server.
To test this possibility, check what happens if you change your second snippet to:
foreach (var m in msg.ToList().Where(z => (z.PhoneNumber == null || z.PhoneNumber == "")))
Edit: Your issue might be that your query is being executed again during subsequent access (when you check whether PhoneNumber was set).
If you execute:
foreach (var m in msg.Where(z => (z.PhoneNumber == null || z.PhoneNumber == "")))
{
m.PhoneNumber = …
}
bool stillHasNulls = msg.Any(z => z.PhoneNumber == null || z.PhoneNumber == "");
You will find that stillHasNulls might still evaluate to true, since your assignment to m.PhoneNumber is being lost when you re-evaluate the msg sequence (in the above case, when you execute msg.Any, which issues an EXISTS command to the database).
For your m.PhoneNumber assignments to be preserved, you need to either persist them to the database (if that’s what you want), or else make sure that you’re accessing the same sequence elements each time. One way to do this would be to pre-populate the sequence as a collection, using ToList.
var msgList = msg.Where(z => (z.PhoneNumber == null || z.PhoneNumber == "")).ToList();
foreach (var m in msgList)
{
m.PhoneNumber = …
}
In the above code, the filter still gets issued to the database as a SELECT…FROM…WHERE, but the result is evaluated eagerly and stored in the msgList collection. Any subsequent work against msgList runs over that pre-populated in-memory list (which contains any new values you assign to its elements).
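If the goal is to actually update the numbers in the database, the assignments also need to be saved after the loop. Assuming db is an Entity Framework DbContext (for a LINQ to SQL DataContext the call would be db.SubmitChanges() instead):
// after the foreach loop above
db.SaveChanges(); // persists the PhoneNumber assignments tracked by the context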