Very poor performance of ASP.NET MVC view - c#

I'm having a problem with an application I made for my company. We are taking queries out of an ERP system. People can search for an article, and then the application shows them all relevant technical data, pictures and/or datasheets.
Problem is: it loads very slowly. The queries seem to run fine, but generating the view takes ages.
This is my search code (Dutch comments translated):
public IQueryable<Item> GetItems(string vZoekString)
{
    db.Configuration.LazyLoadingEnabled = false;
    // Split the search string
    var searchTerms = vZoekString.ToLower().Split(null);
    // TODO: everything in a single db query
    // List results matching the first term
    var term = searchTerms[0];
    var results = db.item_general.Where(c => c.ITEM.Contains(term) || c.DSCA.Contains(term));
    // Narrow down by each remaining term (start at 1; term 0 is already applied)
    for (int i = 1; i < searchTerms.Length; i++)
    {
        var tempTerm = searchTerms[i];
        results = results.Where(c => c.ITEM.Contains(tempTerm) || c.DSCA.Contains(tempTerm));
    }
    // Show
    return results;
}
And then, these results are returned to the view like this:
public ActionResult SearchResults(string vZoekString, string filterValue, int? pageNo)
{
    // Guard the search string
    if (vZoekString != null)
    {
        pageNo = 1;
    }
    else
    {
        vZoekString = filterValue;
    }
    // Pass the search string along
    if (vZoekString == null)
    {
        return RedirectToAction("Index", "Home");
    }
    else
    {
        ViewBag.ZoekString = vZoekString;
    }
    // Fetch the items via the business object
    //var vItems = new SearchItems().GetItems(vZoekString);
    SearchItems vSearchItems = new SearchItems();
    IQueryable<Item> vItems = vSearchItems.GetItems(vZoekString);
    // Paging
    int pageSize = 10;
    int page = (pageNo ?? 1);
    // Show
    return View(vItems.OrderBy(x => x.ITEM).AsNoTracking().ToPagedList(page, pageSize));
}
What can be wrong in my situation? Am I overlooking something?
UPDATE:
I've checked my code, and everything up to that point runs quickly, but it takes more than 10 seconds once execution reaches .ToPagedList(). So my guess is that something is wrong there. I'm using the PagedList package from NuGet.

While I can't evaluate your view code without seeing it, the problem could very well be in the database query.
An IQueryable does not actually load anything from the database until you enumerate the results, so the database query only runs after the View code has started.
Try changing the View call to:
var items = vItems.OrderBy(x => x.ITEM).AsNoTracking().ToPagedList(page, pageSize);
return View(items);
And then check to see if the View is still the bottleneck.
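If you want to see the split explicitly, here is a quick timing sketch (reusing the names from your action; output goes to the debugger's Output window):
var sw = System.Diagnostics.Stopwatch.StartNew();
var items = vItems.OrderBy(x => x.ITEM).AsNoTracking().ToPagedList(page, pageSize);
sw.Stop();
System.Diagnostics.Debug.WriteLine("Query + paging took {0} ms", sw.ElapsedMilliseconds);
return View(items);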
(This should probably be a comment instead, but I don't have the reputation....)

In most cases where you face performance issues with MVC and EF, it is due to returning entities to views and getting stung by lazy loading. When ASP.NET renders an entity to the browser, it effectively has to serialize it. That serialization iterates over the entity, which touches lazy-load proxies and triggers each related entity to load one at a time.
You can detect this by running a profiler against your database: set a breakpoint just before the end of your action, then watch which queries execute as the action call returns. Lazy loading caused by serialization shows up as a number of individual (TOP 1) queries executing in rapid succession after the action has completed, before the page renders.
The simplest suggestion to avoid this pain is don't return entities from controllers.
IQueryable<Item> vItems = vSearchItems.GetItems(vZoekString);
var viewModels = vItems.OrderBy(x => x.ITEM)
.Select(x => new ItemViewModel
{
ItemId = x.ItemId,
// .. Continue populating view model. If model contains a hierarchy of data, map those to related view models as well.
}).ToPagedList(page, pageSize);
return View(viewModels);
The benefit of this approach:
The .Select() will result in a query that retrieves only the data you actually need to populate the view models (the data your view needs): a faster query, and less data across the wire between DB server -> app server -> browser.
This doesn't result in any lazy loads.
The caveat of this approach:
You need to take care that the .Select() expression can be translated to SQL, so no .NET or private functions for things like translating or formatting data. Populate raw values into the view model, then expose properties on the view model to do the translation; that formatted data is what gets serialized to the client. Basic stuff like FullName = x.FirstName + " " + x.LastName is fine, but avoid things like OrderDate = DateTime.Parse(x.DateAsISO) if the DB stores dates as strings, for example.
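For example, a view model along these lines keeps the raw value in the projected property and does the parsing client-side (a sketch; DateAsISO/OrderDate are the illustrative names from the paragraph above):
public class ItemViewModel
{
    public int ItemId { get; set; }
    public string DateAsISO { get; set; }   // raw value, safe to populate inside .Select()

    public DateTime OrderDate               // translated in .NET, after the query has run
    {
        get { return DateTime.Parse(DateAsISO); }
    }
}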
You can leverage mappers like AutoMapper to assist with mapping between entity and view model. Provided the mapping tool inspects/traverses the destination to populate rather than the source, you should be good. AutoMapper supports integration with IQueryable, so it would be a worthwhile investment to research that if you want to leverage a mapper.
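If you go the AutoMapper route, its queryable extensions look roughly like this (a sketch; the exact API depends on the AutoMapper version, and the Item -> ItemViewModel map is an assumption):
using AutoMapper;
using AutoMapper.QueryableExtensions;

var config = new MapperConfiguration(cfg => cfg.CreateMap<Item, ItemViewModel>());

var viewModels = vSearchItems.GetItems(vZoekString)
    .OrderBy(x => x.ITEM)
    .ProjectTo<ItemViewModel>(config)   // projected at the SQL level, no lazy loads
    .ToPagedList(page, pageSize);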


How to set List child elements with foreach if the list is empty initially?

I have an IList that gets all Offers from a repository using Entity Framework Core. I also have a service model OfferResponseModel, which includes OfferRequestModel as a reference. I used Mapster to bind the entity model to the service model, but it only sets the first child, so now I want to bind it manually. I created "offers" with the size of "Offer". When I try to use a foreach loop, I cannot set the child elements of "offers" because it has no elements. How can I solve this?
var offer = await _unitOfWork.Offers.GetAllOffer();
if (offer == null)
throw ServiceExceptions.OfferNotFound;
var results = new List<OfferResponseModel>(offer.Count);
results.ForEach(c => { c.Offer = new OfferRequestModel(); });
int i = 0;
foreach(var result in results)
{
result.Offer.User = Offer[i].User.Adapt<UserResponseModel>();
result.Offer.Responsible = Offer[i].Responsible.Adapt<EmployeeResponseModel>();
result.CreatedDate = Offer[i].CreatedDate;
result.ModifiedBy = Guid.Parse(Offer[i].UpdatedBy);
result.Active = Offer[i].Status;
result.Offer = Offer[i].Offer;
result.Offer.User.Company = Offer[i].Company.Adapt<CompanyModel>();
i++;
}
I created "offers" with the size of "Offer".
No, you created it with that capacity. It's still an empty list. It's not clear to me why you're trying to take this approach at all - it looks like you want one OfferResponseModel for each entry in offer, directly from that - which you can do with a single LINQ query. (I'm assuming that offer and Offer are equivalent here.)
var results = Offer.Select(o => new OfferResponseModel
{
Offer = new OfferRequestModel
{
User = o.User.Adapt<UserResponseModel>(),
Responsible = o.Responsible.Adapt<EmployeeResponseModel>()
},
CreatedDate = o.CreatedDate,
ModifiedBy = Guid.Parse(o.UpdatedBy),
Active = o.Status
}).ToList();
That doesn't set the Offer.User.Company in each entry, but your original code is odd as it sets the User and Responsible properties in the original Offer property, and then replaces the Offer with Offer[i].Offer. (Aside from anything else, I'd suggest trying to use the term "offer" less frequently - just changing the plural to "offers" would help.)
I suspect that with the approach I've outlined above, you'll be able to work out what you want and express it more clearly anyway. You definitely don't need to take the "multiple loops" approach of your original code.
One thing you have left out is the type of the offer variable that is referenced in the code. But I am thinking you need to do something along these lines:
if (offer == null)
throw ServiceExceptions.OfferNotFound;
var results = offer.Select(o => new OfferResponseModel
{
Offer = new OfferRequestModel
{
User = o.User.Adapt<UserResponseModel>(),
Responsible = o.Responsible.Adapt<EmployeeResponseModel>(),
...
}
}).ToList();
Select basically loops through the items in offer and "converts" them to other objects, in this case OfferResponseModel. So inside the Select you simply new up an OfferResponseModel and directly set all the properties you need.
You need using System.Linq; for Select to be available.

How to send data from linq query to view page

I have two database tables that are linked by an attribute in one of the tables. The tables are being separately represented as models in an ASP .NET core MVC app. In a controller action, I have a linq query that performs a join on the two tables and selects a set of columns from the join results. I'm trying to send this result set to a view page so that the data can be displayed in a paginated table, but I'm not sure how exactly this should be done.
In the past, when using ASP .NET MVC (not core), I've been able to execute stored procedures that return result sets in controller actions, iterate through the result sets, build up lists with the data and then store the lists in the viewbag which can be accessed in the view. I've tried to directly store the EntityQueryable object in the viewbag but I got an error and I'm not sure how I would go about iterating through it anyway.
What would be the best way to send the data returned from the linq query to the View page?
Controller Action code:
var resultsObj = (from rd in _db.ResData
join ra in _db.ResAvailability on rd.RecNo equals ra.RecNoDate
where ra.TotalPrice < Int32.Parse(resDeals.priceHighEnd) && ra.TotalPrice > Int32.Parse(resDeals.priceLowEnd)
select new
{
Name = rd.Name,
ImageUrl = rd.ImageUrl,
ResortDetails = rd.ResortDetails,
CheckIn = ra.CheckIn,
Address = rd.Address,
TotalPrice = ra.TotalPrice
}).Take(10);
ViewBag.resultSet = resultsObj;
EDIT:
My query returns data from multiple tables (since it is a join), so the query results have to be extracted and separated into two different viewmodels corresponding to the tables in the join.
The first viewmodel represents each row of the query results. The second viewmodel is just a list to hold all of the rows that are contained in the query results.
I'm trying to understand how to do the data extraction from the query results into the viewmodels as I have explained here.
I would return a ViewModel instead. Generally you may use it to send the data to/from the View and controller.
I am currently working on a big project, and ViewModels work pretty well for me.
Check this short video:
https://www.youtube.com/watch?v=m086xSAs9gA
UPDATE
I am assuming that your query works properly (I did not read it).
To send your data via a ViewModel to the View.
First create the required ViewModel classes:
public class PageNameViewModel
{
public string Name { get; set; }
public IEnumerable<ResortDetailViewModel> ResortDetailViewModels { get; set; }
... rest of properties are not shown for clarity ...
}
public class ResortDetailViewModel
{
public string Detail1 { get; set; }
public int Detail2 { get; set; }
... etc. ...
}
Now use the ViewModels in the controller (or let us say, fill the data in the viewmodel):
var viewModel = (from rd in _db.ResData
                 join ra in _db.ResAvailability on rd.RecNo equals ra.RecNoDate
                 where ra.TotalPrice < Int32.Parse(resDeals.priceHighEnd)
                       && ra.TotalPrice > Int32.Parse(resDeals.priceLowEnd)
                 select new { rd, ra })
                .Take(10)
                .Select(x => new PageNameViewModel
                {
                    Name = x.rd.Name,
                    ImageUrl = x.rd.ImageUrl,
                    ResortDetailViewModels = x.rd.ResortDetails.Select(o =>
                        new ResortDetailViewModel
                        {
                            Detail1 = o.detail1,
                            Detail2 = o.detail2
                            // ... etc. ...
                        }),
                    CheckIn = x.ra.CheckIn,
                    Address = x.rd.Address,
                    TotalPrice = x.ra.TotalPrice
                });
return View(viewModel);
Now you can use the ViewModel in the View (I assume you know how, as you watched the video I linked).
Notice that ViewModels should ideally hold primitive data in this case (that will make your life easier if you later plan to serialize the ViewModel and send it to another client).
In the code above, I converted any complex types to primitive types, and that applies to each element (notice I did the same with ResortDetails, converting them to a collection of view models, i.e., objects that hold only primitive data, so no deep hierarchy).
I also applied Take(10) before projecting into ViewModels, so you do not create a ViewModel for every element and then keep only 10; that would just waste performance for nothing. By taking the 10 elements first, we create ViewModels only for the required elements.
I hope that helps. If you need any further help, please tell me.

Implementing pagination in JsonResult

We can implement pagination in C#/ASP.NET in an ActionResult function like this:
public ActionResult Index(int? page)
{
Entities db = new Entities();
return View(db.myTable.ToList().ToPagedList(page ?? 1, 8));
}
How do I implement pagination in a JsonResult function that sends its result to AJAX in an HTML view?
public JsonResult GetSearchingData(string SearchBy, string SearchValue)
{
var subCategoryToReturn = myList.Select(S => new { Name = S.Name });
return Json(subCategoryToReturn, JsonRequestBehavior.AllowGet);
}
Stop thinking in terms of UI and start thinking in terms of data.
You have some data you want to paginate. That's it. Forget about MVC at this point, or JsonResult, or anything else that has nothing to do with the data.
One thing to be aware of, this code you posted above:
db.myTable.ToList().ToPagedList(page ?? 1, 8)
if you do it like this, your entire table will be returned from the database and then it will be paginated. What you want is to only return the data already paginated, so instead of returning 100 records and then only taking the first 20, only return the 20.
Don't use ToList() at that point; use something like this instead:
var myData = db.myTable.OrderBy(t => t.Id)   // EF needs an explicit ordering before Skip/Take; t.Id is a placeholder key
                       .Skip(pageNumber * pageSize)
                       .Take(pageSize)
                       .ToList();
I don't have any running code to check this, but hopefully you get the idea: only return the data already paginated, so only return the data you will display and nothing more. The UI can send the page index you clicked on, and the pageSize can be a predefined number stored in appSettings, for example.
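Putting that together for a JsonResult action, a sketch could look like this (names like db, myTable, and Name are placeholders carried over from the snippets above):
public JsonResult GetSearchingData(string SearchBy, string SearchValue, int page = 0, int pageSize = 8)
{
    var query = db.myTable.Where(s => s.Name.Contains(SearchValue));
    var totalItems = query.Count();

    var items = query.OrderBy(s => s.Name)    // Skip/Take require an explicit ordering
                     .Skip(page * pageSize)
                     .Take(pageSize)
                     .Select(s => new { Name = s.Name })
                     .ToList();

    // totalItems lets the client compute how many page buttons to draw
    return Json(new { totalItems, items }, JsonRequestBehavior.AllowGet);
}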
You can use .Skip() to skip the first n elements and .Take() to fetch the next n rows.
int pageNumber = 0;
int ItemsPerPage = 10;
var Results = db.myTable.OrderBy(t => t.Id)   // Skip/Take need an ordering; t.Id is a placeholder key
                        .Skip(ItemsPerPage * pageNumber)
                        .Take(ItemsPerPage)
                        .ToList();   // ToList() after Take, so the paging happens in SQL
Suppose that you only need 10 items per page. The number of pages would then be equal to
TotalPages = (int)Math.Ceiling((double)TotalItems / ItemsPerPage); // if 201 items, then ceiling(201/10) = 21 pages
Note the cast to double: with plain integer division, 201/10 would be truncated to 20 before Math.Ceiling ever runs.
Now you can build the pagination buttons in HTML. I suggest the jQuery pagination library:
https://esimakin.github.io/twbs-pagination/

How to retrieve more than 4000 records from RavenDB in a single session [duplicate]

I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a WHERE clause should be an expression tree instead of a Func, so that it's treated as an IQueryable instead of an IEnumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
using (IDocumentSession session = GetRavenSession())
{
return session.Query<T>().Where(whereClause).ToList();
}
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (that have very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
"Header_ID": 3525880,
"Sub_ID": "120403261139",
"TimeStamp": "2012-04-05T15:14:13.9870000",
"Equipment_ID": "PBG11A-CCM",
"AverageAbsorber1": "284.451",
"AverageAbsorber2": "108.442",
"AverageAbsorber3": "886.523",
"AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);
using (var enumerator = session.Advanced.Stream(query))
{
while (enumerator.MoveNext())
{
User activeUser = enumerator.Current.Document;
}
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support.
The documentation can be found here. Ayende's introductory blog article can be found here.
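For completeness, the async variant looks roughly like this (a sketch assuming an IAsyncDocumentSession; exact signatures depend on the client version):
var query = asyncSession.Query<User>("Users/ByActive").Where(x => x.Active);
using (var enumerator = await asyncSession.Advanced.StreamAsync(query))
{
    while (await enumerator.MoveNextAsync())
    {
        User activeUser = enumerator.Current.Document;
    }
}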
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 items by default. However, you can use it in tandem with Skip(n) to get everything:
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;
RavenQueryStatistics stats;   // needed for the .Statistics(out ...) call below

do
{
    nextGroupOfPoints = session.Query<T>()
        .Statistics(out stats)
        .Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults)
        .Take(ElementTakeCount)
        .ToList();
    i++;
    skipResults += stats.SkippedResults;
    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);

return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have only a few calls issued over them.
If you are getting more than 10 of anything from the store (even fewer than the default 128) for human consumption, then something is wrong, or your problem requires different thinking than a truckload of documents coming from the data store.
RavenDB indexing is quite sophisticated. Good article about indexing here and facets here.
If you have need to perform data aggregation, create map/reduce index which results in aggregated data e.g.:
Index:
// Map
from post in docs.Posts
select new { post.Author, Count = 1 }

// Reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
session.Query<AuthorPostStats>("Posts/ByUser/Count").Where(x => x.Author == "some-author").ToList();
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);
using (var enumerator = session.Advanced.Stream<User>(query))
{
while (enumerator.MoveNext())
{
var user = enumerator.Current.Document;
// do something
}
}
Example index:
public class MyUserIndex: AbstractIndexCreationTask<User>
{
public MyUserIndex()
{
this.Map = users =>
from u in users
select new
{
u.IsDeleted,
u.Username,
};
}
}
Documentation: What are indexes?
Session : Querying : How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.

LINQ to SharePoint 2010 getting error "All new entities within an object graph must be added/attached before changes are submitted."

I've been having a problem for some time, and I've exhausted all means of figuring this out for myself.
I have 2 lists in a MS Sharepoint 2010 environment that are holding personal physician data for a medical group...nothing special just mainly text fields and a few lookup choice fields.
I am trying to write a program that will migrate the data over from List A to List B. I am using LINQ to Sharepoint to accomplish this. Everything compiles just fine, but when it runs and hits the SubmitChanges() method, I get a runtime error that states:
"All new entities within an object graph must be added/attached before changes are submitted."
This issue must be outside my realm of C# knowledge, because I simply cannot find the solution for it. The problem is DEFINITELY stemming from the fact that some of the columns are of type "Lookup", because when I create a new "Physician" entity in my LINQ query, if I comment out the fields that deal with the lookup columns, everything runs perfectly.
With the lookup columns included, if I debug and hit breakpoints before the SubmitChanges() method, I can inspect the new "Physician" entities created from the old list, and the fields, including the data from the lookup columns, look good; the data is in there the way I want it. It just flakes out whenever it tries to actually update the new list with the new entities.
I have tried several methods of working around this error, all to no avail. In particular, I have tried creating a brand new EntityList and calling the Attach() method after each new "Physician" entity is created, but that just sends me around in circles, chasing other errors such as "ID cannot be null" and "Cannot insert entities that have been deleted".
I am no farther now than when I first got this error and any help that anyone can offer would certainly be appreciated.
Here is my code:
using (ProviderDataContext ctx = new ProviderDataContext("http://dev"))
{
SPSite sitecollection = new SPSite("http://dev");
SPWeb web = sitecollection.OpenWeb();
SPList theOldList = web.Lists.TryGetList("OldList_Physicians");
//Create new Physician entities.
foreach(SPListItem l in theOldList.Items)
{
PhysiciansItem p = new PhysiciansItem()
{
FirstName = (String)l["First Name"],
Title = (String)l["Last Name"],
MiddleInitial = (String)l["Middle Init"],
ProviderNumber = Convert.ToInt32(l["Provider No"]),
Gender = ConvertGender(l),
UndergraduateSchool =(String)l["UG_School"],
MedicalSchool = (String)l["Med_School"],
Residency = (String)l["Residency"],
Fellowship = (String)l["Fellowship"],
Internship = (String)l["Internship"],
PhysicianType = ConvertToPhysiciantype(l),
Specialty = ConvertSpecialties(l),
InsurancesAccepted = ConvertInsurance(l),
};
ctx.Physicians.InsertOnSubmit(p);
}
ctx.SubmitChanges(); //this is where it flakes out
}
//These are conversion functions that I wrote to convert the data from the old list to the new lookup columns.
private Gender ConvertGender(SPListItem l)
{
Gender g = new Gender();
if ((String)l["Sex"] == "M")
{
g = Gender.M;
}
else g = Gender.F;
return g;
}
//Process and convert the 'Physician Type', namely the distinction between MD (Medical Doctor) and
//DO (Doctor of Osteopathic Medicine). State regulations require this information to be attached
//to a physician's profile.
private ProviderTypesItem ConvertToPhysiciantype(SPListItem l)
{
ProviderTypesItem p = new ProviderTypesItem();
p.Title = (String)l["Provider_Title:Title"];
p.Intials = (String)l["Provider_Title"];
return p;
}
//Process and convert current Specialty and SubSpecialty data into the single multi-choice lookup column
private EntitySet<Item> ConvertSpecialties(SPListItem l)
{
EntitySet<Item> theEntityList = new EntitySet<Item>();
Item i = new Item();
i.Title = (String)l["Provider Specialty"];
theEntityList.Add(i);
if ((String)l["Provider SubSpecialty"] != null)
{
Item theSubSpecialty = new Item();
theSubSpecialty.Title = (String)l["Provider SubSpecialty"];
theEntityList.Add(theSubSpecialty);
}
return theEntityList;
}
//Process and add insurance accepted.
//Note this is a conversion from 3 boolean columns in the SP Environment to a multi-select enabled checkbox
//list.
private EntitySet<Item> ConvertInsurance(SPListItem l)
{
EntitySet<Item> theEntityList = new EntitySet<Item>();
if ((bool)l["TennCare"] == true)
{
Item TenncareItem = new Item();
TenncareItem.Title = "TennCare";
theEntityList.Add(TenncareItem);
}
if ((bool)l["Medicare"] == true)
{
Item MedicareItem = new Item();
MedicareItem.Title = "Medicare";
theEntityList.Add(MedicareItem);
}
if ((bool)l["Commercial"] == true)
{
Item CommercialItem = new Item();
CommercialItem.Title = "Commercial";
theEntityList.Add(CommercialItem);
}
return theEntityList;
}
So this may not be the answer you're looking for, but it's what's worked for me in the past. I've found updating lookup fields using LINQ to SharePoint to be quite frustrating. It frequently doesn't work, or doesn't work efficiently (forcing me to query an item by ID just to set the lookup value).
You can set up the entity so that it has an int property for the lookup id (for each lookup field) and a string property for the lookup value. If, when you generate the entities using SPMetal, you don't generate the list that is being looked up, it will do this on its own. What I like to do is (using your entity as an example):
Generate the entity for just that one list (Physicians) in some temporary folder
Pull out the properties for lookup id & value (there will also be private backing fields that need to come along for the ride too) for each of the lookups (or the ones that I'm interested in)
Create a partial class file for Physicians in my actual project file, so that regenerating the entire SPMetal file normally (without restricting to just that list) doesn't overwrite changes
Paste the lookup id & value properties in this partial Physicians class.
Now you will have 3 properties for each lookup field. For example, for PhysicianType there will be:
PhysicianType, which is the one that is currently there. This is great when querying data, as you can perform joins and such very easily.
PhysicianTypeId, which can occasionally be useful for queries if you only need the ID, as it makes them a bit simpler, but mostly I use it whenever setting the value. To set a lookup field you only need to set the ID. This is easy, and has a good track record of actually working (correctly) in my experience.
PhysicianTypeValue, which can be useful in queries when you just need the lookup value as a string (the raw value, rather than something already parsed if it's a multi-valued field, a user field, etc.). Sometimes I'd rather parse it myself, or just see what the underlying value is during development. Even if you don't use it and stick with the first property, I often bring it along for the ride since I'm already doing most of the work to bring the PhysicianTypeId field over.
It seems a bit hacky, and contrary to the general design of LINQ to SharePoint. I agree, but it also has the advantage of actually working, and not being all that hard (once you get the rhythm of it down and learn exactly what needs to be copied over to move the properties from one file to another).
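For illustration, the moved-over properties end up looking roughly like this (a hypothetical sketch modeled on what SPMetal generates; the attribute values and names here are assumptions, the real ones come from your generated file):
public partial class PhysiciansItem
{
    private System.Nullable<int> _physicianTypeId;

    // IsLookupId = true maps this property to the underlying lookup item's ID,
    // which is all you need to set when writing the lookup field back.
    [Microsoft.SharePoint.Linq.ColumnAttribute(Name = "PhysicianType",
        FieldType = "Lookup", IsLookupId = true)]
    public System.Nullable<int> PhysicianTypeId
    {
        get { return this._physicianTypeId; }
        set
        {
            if (value != this._physicianTypeId)
            {
                this.OnPropertyChanging("PhysicianTypeId", this._physicianTypeId);
                this._physicianTypeId = value;
                this.OnPropertyChanged("PhysicianTypeId");
            }
        }
    }
}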
