I am retrieving a big list of news items.
I want to utilise caching for this list, for obvious reasons (running the query more often than necessary won't do).
I bind this list to a repeater which I have extended to enable paging (flicking between pages causes a page load, and so a retrieval of the list again).
The complication is that these news items can also be queried by date, via a query string parameter 'year' that is present only when querying the news items by date.
Below is pseudocode for what I have so far (pasting the real code here would take too much time, trimming out all of the bits that just add confusion):
if (complete news items list is not cached OR queryString["year"] != null)
{
    int? year = queryString["year"];  // may be null
    Cache.Add(GetNewsItems(year));
}
else
{
    return cached newsItems;
}
The problem is that when the news items page loads because of a paging control's postback, the query string's 'year' parameter is still populated, and so GetNewsItems is re-run. Also, even if the home URL (i.e. no query string) is then navigated to, there is still a cached version of the news items, so no retrieval is attempted - but the cached items MAY be for a particular year and so aren't relevant to an 'all years' kind of search.
Do I add a new cache entry to flag what the most recent search was?
What considerations do I have to make here regarding the cache timing out?
I can add a new query string if need be (preferably not) - would this solve the problem?
Could you get the complete list on the first query, and cache this (presuming it is not too enormous)? Then for any subsequent queries get the data from the cache and query it to filter out just the year you want.
Maybe you could store the data in the cache in the form of
IDictionary<int, IEnumerable<NewsItem>>
(assuming there is more than one NewsItem per year) where the key is the year, so you can retrieve a single dictionary entry to get a year's data.
Alternatively, cache the data by year, as you are doing in your sample, and implement a mechanism to get data from the cache for each year if it exists. When getting all years, you can either get everything from the data store and cache that separately, or implement some mechanism to determine which years are missing from the cache and fetch just those. Personally, I would go with caching everything and filtering from the cache, so as not to clog up the memory with too much cached data.
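As a minimal sketch of the cache-everything approach (assuming a NewsItem type with a Year property, a GetAllNewsItems() data-access call, and the TryGetValue helper shown below - the names here are illustrative, not from your code):

private IEnumerable<NewsItem> GetNewsItems(int? year)
{
    List<NewsItem> allItems;
    if (!TryGetValue("AllNewsItems", out allItems))
    {
        // Single trip to the data store; everything after this is in-memory.
        allItems = GetAllNewsItems();
        HttpContext.Current.Cache.Insert("AllNewsItems", allItems);
    }
    // No year means an "all years" request, so return the full list.
    return year == null
        ? allItems
        : allItems.Where(item => item.Year == year.Value);
}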
A useful TryGetValue pattern to get from cache is something like:
// Attempts to get a typed value from the cache; returns false when the key
// is missing or the cached object is not of the expected type.
private bool TryGetValue<U>(string key, out U value)
{
    object cachedValue = HttpContext.Current.Cache.Get(key);
    if (cachedValue == null)
    {
        value = default(U);
        return false;
    }
    try
    {
        value = (U)cachedValue;
        return true;
    }
    catch (InvalidCastException)
    {
        value = default(U);
        return false;
    }
}
Not sure if this might help in your case, but you can move the repeater into a user control and enable OutputCache for it, so the cached output varies by the GET/POST params:
<%@ OutputCache Duration="100" VaryByParam="year" %>
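For completeness, a minimal sketch of such a user control (the file, class, and control names here are assumptions):

<%@ Control Language="C#" CodeBehind="NewsList.ascx.cs" Inherits="MyApp.NewsList" %>
<%@ OutputCache Duration="100" VaryByParam="year" %>
<asp:Repeater ID="NewsRepeater" runat="server">
    <ItemTemplate>
        <%-- news item markup goes here --%>
    </ItemTemplate>
</asp:Repeater>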
I have a situation wherein a List object is built from values pulled from an MSSQL database. However, this particular table mysteriously gets an errant record or two tossed in. Removing the records causes trouble even though they have no referential links to any other tables, and they still get recreated without any known user action. This puts unwanted values on display, which adds a bit of confusion. The specific issue is that this is a platform that allows users to run a search for quotes, and the filtering allows for sales rep selection. The select/dropdown field is showing these errant values, and they need to be removed.
Given that deleting the offending table rows does not provide a desirable result, I was thinking that maybe the best course of action was to modify the code where the List object is created and either filter the values out or remove them after the object is populated. I'd like to do this in a clean, scalable fashion by providing some kind of appendable data object where I could just add a new string value if something else cropped up, as opposed to doing something clunky that adds new code to find and remove the value each time.
My thought was to create a string array, and somehow loop through that to remove bad List values, but I wasn't entirely certain that was the best way to approach this, and I could not for the life of me think of a clean approach for this. I would think that the best way would be to add a filter within the Find arguments, but I don't know how to add in an array or list that way. Otherwise I figured to loop through the values either before or after the sorting of the List and remove any matches that way, but I wasn't sure that was the best choice of actions.
I have attached the current code, and would appreciate any suggestions.
int licenseeID = Helper.GetLicenseeIdByLicenseeShortName(Membership.ApplicationName);
List<User> listUsers;
if (Roles.IsUserInRole("Admin"))
{
    // get all users
    listUsers = User.Find(x => x.LicenseeID == licenseeID).ToList();
}
else
{
    // get only the current user
    listUsers = User.Find(x => (x.LicenseeID == licenseeID && x.EmailAddress == Membership.GetUser().Email)).ToList();
}
listUsers.Sort((x, y) => string.Compare(x.FirstName, y.FirstName));
-- EDIT --
I neglected to mention that I did not develop this; I merely inherited its maintenance after the original developer(s) disappeared, and my coworker who was assigned to it left the company. I'm not really skilled at handling ASP.NET sites. Many object sources are hidden and unavailable for edit, I assume because they are defined in a DLL somewhere. So, for any of these objects that are sourced from database tables, altering the tables will not help, since I would not be able to get the new data anyway.
However, I did try the following to filter out the undesirable data:
List<String> exclude = new List<String>(new String[] { "value1" , "value2" });
listUsers = User.Find(x => x.LicenseeID == licenseeID && !exclude.Contains(x.FirstName)).ToList();
Unfortunately it only resulted in an error being displayed to the page.
-- EDIT #2 --
I got the server setup to accept a new event viewer source so I could write info to the Application log to see what was happening. Looks like this installation of ASP.NET does not accept "Contains" as an action on a List object. An error gets kicked out stating that the method is not available.
I will probably add a bit column to the table to flag errant rows and then skip them when I query the table, something like
&& !x.ErrantData
Another way, which requires a bit more upkeep but no DB change, would be to keep a text file that gets periodically updated; read it and remove users from the list based on it (see the sketch below).
The bigger issue is unknown rows creeping into your database. Changing user credentials and adding creation timestamps may help you narrow down the search scope.
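A minimal sketch of the text-file approach, applied in memory after the query has run so that no Contains call has to be translated by the User.Find provider (the file path and helper names are assumptions):

using System;
using System.Collections.Generic;
using System.IO;
using System.Web.Hosting;

public static class UserListFilter
{
    // Loads excluded first names, one per line, from a text file.
    // A missing file simply means there is nothing to exclude.
    public static HashSet<string> LoadExclusions()
    {
        string path = HostingEnvironment.MapPath("~/App_Data/excluded-names.txt");
        if (!File.Exists(path))
            return new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        return new HashSet<string>(File.ReadAllLines(path), StringComparer.OrdinalIgnoreCase);
    }
}

// Usage, after the existing Find(...).ToList() call:
// HashSet<string> exclude = UserListFilter.LoadExclusions();
// listUsers.RemoveAll(u => exclude.Contains(u.FirstName));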
I have an MVC project with a WCF service.
When I display a list of data, I do not want to load everything from the database/service and do client-side paging; I want server-side paging. If I have 100 records and my page size is 10, then when a user clicks on page 1 it should only retrieve the first 10 records from the database, and if a user clicks on page 3 it should only retrieve the corresponding ten records.
I am not using Angular, Bootstrap, or any other front-end framework.
Can someone guide me on how to do it?
public ActionResult Index(int pageNo = 1)
{
..
..
..
MyViewModel[] myViewModelListArray = MyService.GetData();
// When I create this PagedList, BLL.GetData has to retrieve all the records to show more than a single page number.
// But if BLL.GetData() is changed to retrieve a subset, then it only shows a single page number.
// What I want is to show the correct number of pages (if there are 50 records and pageSize is 10, then show
// pages 1,2,3,4,5) and only retrieve 10 records at a time.
PagedList<MyViewModel> pageList = new PagedList<MyViewModel>(myViewModelListArray, pageNo, pageSizeListing);
..
..
..
return View(pageList);
}
The best approach is to use the LINQ to Entities operators Skip and Take.
For example, to page:
int items_per_page = 10;
MyViewModel[] myViewModelListArray = MyService.GetData().OrderBy(p => p.ID).Skip((pageNo - 1) * items_per_page).Take(items_per_page).ToArray();
NOTE: The data must be ordered so the pages have some consistency (here I ordered by an arbitrary field, ID). Also, some databases require an 'order by' to apply 'limit' or 'top' (which is how Take/Skip are implemented).
I put it that way because I don't know how you are retrieving the data.
But instead of retrieving the full list with GetData and then filtering, it is better to include the pagination in the query inside GetData (so you don't retrieve unnecessary data).
Add page size and page number parameters to your service method, and make the result an object which returns TotalCount and a list of Items (Items being the items on the current page). Then you can use those values to create the PagedList.
Inside your business logic code you will do two queries: one for the count of items and one for the items on the page.
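A minimal sketch of that shape (PagedResult<T>, GetDataPaged, and the MyDbContext/MyViewModels names are all assumptions about your data access):

using System.Collections.Generic;
using System.Linq;

public class PagedResult<T>
{
    public int TotalCount { get; set; }
    public List<T> Items { get; set; }
}

public PagedResult<MyViewModel> GetDataPaged(int pageNo, int pageSize)
{
    using (var db = new MyDbContext())
    {
        IQueryable<MyViewModel> query = db.MyViewModels.OrderBy(p => p.ID);

        return new PagedResult<MyViewModel>
        {
            // One query for the total count...
            TotalCount = query.Count(),
            // ...and one for just the requested page.
            Items = query.Skip((pageNo - 1) * pageSize)
                         .Take(pageSize)
                         .ToList()
        };
    }
}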
Also, if you are starting the project now, do yourself a favor and remove the useless WCF service from your architecture.
I have a Session entry in my Controller, as follows:
Session["Fulldata"] = data;
which stores data like Name, Id, City, etc.
It stores multiple rows of such data.
How do I access it, or check whether any row has a particular Name in it, in the View?
e.g.:
#if(Session("Fulldata").has("ABC"))
{
//Do Something
}
I want to check the Name for each row in the Session. Any help will be appreciated.
First you need to cast Session["Fulldata"] back to your type, since session state stores your collection as object:
List<CustomClass> data = (List<CustomClass>)Session["Fulldata"];
Since data is a collection, you can use LINQ's Enumerable.Any to search:
@if (data.Any(d => d.YourAttr1 == "ABC" || d.YourAttr2 == "ABC"))
{
//Do Something
}
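Put together in the view, that looks something like this (CustomClass and its Name property are assumptions about your model):

@using System.Collections.Generic
@using System.Linq

@{
    // Cast once; null when nothing has been stored in the session yet.
    var data = Session["Fulldata"] as List<CustomClass>;
}
@if (data != null && data.Any(d => d.Name == "ABC"))
{
    // do something
}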
As an additional note, please do not use session state unnecessarily, especially for big data: the session needs space on your server, and as users/sessions increase it could adversely affect performance.
I am getting around 200K rows from the database. Every time I click on pagination it gets the records from the DB again. How do I avoid that? Caching?
public ActionResult Demo(int? page, string sortOrder, string recordSize)
{
    int pageSize = Convert.ToInt32(recordSize = string.IsNullOrEmpty(recordSize) ? "20" : recordSize);
    int pageNumber = (page ?? 1);
    var customers = DBHelper.GetCustomers();
    return View(customers.ToPagedList(pageNumber, pageSize));
}
You should never pull back all rows just to cache or use them. That is simply too much data, and you will take massive performance hits in doing so. You should only ever pull back the data you actually want to use.
If you are using LINQ to SQL, Entity Framework, or similar data access, you can use LINQ to retrieve just the page you want. This is the most efficient solution because you will not pull back all 200K rows:
var customers = Customers.Skip(pageIndex * pageSize).Take(pageSize); // pageIndex is zero-based here
Caching could improve retrieval speeds the second time around, but your initial speed problem is going to be pulling back all 200K rows from the database on each request. Yes, you could pull them all back, cache them, and use them, but this is not an ideal approach. Ideal caching would be to cache each page, with a dependency on the previous page, so that you can properly invalidate your cache.
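A minimal sketch of per-page caching (DBHelper.GetCustomersPage is an assumed method that issues a Skip/Take query instead of loading all 200K rows; the Customer type, cache key format, and five-minute expiry are also assumptions):

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

private List<Customer> GetCustomersPage(int pageNumber, int pageSize)
{
    string cacheKey = string.Format("customers-page-{0}-{1}", pageNumber, pageSize);

    var cached = HttpRuntime.Cache[cacheKey] as List<Customer>;
    if (cached != null)
        return cached;

    // Assumed helper that queries only the requested page from the DB.
    List<Customer> page = DBHelper.GetCustomersPage(pageNumber, pageSize);

    HttpRuntime.Cache.Insert(
        cacheKey, page, null,
        DateTime.UtcNow.AddMinutes(5),   // absolute expiry
        Cache.NoSlidingExpiration);
    return page;
}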
I have to load data into some dropdownlists, which involves loading data into datatables through Oracle stored procedures. But this makes my page load slowly after each postback. Which technique should I use to make the page load faster?
To make this easier to understand, here is the scenario:
We load the datatable using an Oracle connection and stored procedures.
We load this datatable's content into a dropdownlist.
On selection in this dropdownlist, one more dropdownlist is populated using the same procedure.
Now my question is how I can make my page load faster, by using some tuning techniques or implementing new ones. My question is specific to the ASP.NET 2.0 framework.
Thanks in advance.
Without seeing your code, and taking into consideration your concern about the postback load time ("... slow after each postback"), I can suggest leveraging ASP.NET's ViewState functionality: populate your drop-down lists only once, during the initial page load, and let ASP.NET restore their state (items as well as the selected value) during subsequent postbacks. Pseudocode like the following could do the trick:
protected void Page_Load(object sender, EventArgs e)
{
    // Populate the lists only on the initial GET; ViewState restores
    // the items and selection on subsequent postbacks.
    if (!Page.IsPostBack)
    {
        LoadData();
    }
}

private void LoadData()
{
    // this.List1 refers to the DropDownList control defined in your page markup
    DropDownList list = this.List1;
    IList<Tuple<string, string>> items = /* Getting items from the database */;
    int selectedItemIndex = /* Getting the selected item index */;
    for (int i = 0; i < items.Count; i++)
    {
        // Note: the ListItem(text, value, bool) overload sets Enabled,
        // not Selected, so set Selected explicitly.
        ListItem item = new ListItem(items[i].Item1, items[i].Item2);
        item.Selected = (i == selectedItemIndex);
        list.Items.Add(item);
    }
}
Hope this helps.
P.S. When you ask this kind of question, please make sure you post relevant code snippets, because otherwise it's very hard to suggest something, since the problem might not be what it seems to be :-)
I would recommend caching the data with which your dropdownlist is populated. There are a lot of techniques; it all depends on how much data you are loading into your dropdown.
I expect you have a table with a large number of columns (like a users table: id, name, telephone, birthdate, relation1, relation2, relation3...) but you need only 2 columns from this table: Id and Name. So:
1) I would recommend using SelectListItem as the DTO object.
2) On application start, load all the data you need as a list of items into your memory cache. This will work fine if you have no more than ~10,000 objects (depending on the amount of memory on your server). A sketch follows below.
3) Have fun with fast-working cached data.
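A minimal sketch of step 2 using MemoryCache (the UserLookupCache name, the LookupRepository helper, and the one-hour expiry are all assumptions):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using System.Web.Mvc;

public static class UserLookupCache
{
    private const string CacheKey = "user-select-items";

    // Call once from Application_Start.
    public static void Prime()
    {
        // Only Id and Name are pulled from the wide users table;
        // LookupRepository is an assumed data-access helper.
        List<SelectListItem> items = LookupRepository.GetUserIdNamePairs()
            .ConvertAll(u => new SelectListItem
            {
                Value = u.Id.ToString(),
                Text = u.Name
            });

        MemoryCache.Default.Set(CacheKey, items,
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromHours(1) });
    }

    public static List<SelectListItem> Get()
    {
        return (List<SelectListItem>)MemoryCache.Default.Get(CacheKey);
    }
}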
The other way is to store this data in a NoSQL store as a name-value collection, which is made for storing this kind of data; it will also work faster than using SQL for such data.
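For example, a minimal sketch of the name-value approach using Redis via StackExchange.Redis (the connection string, hash key, and shape of the data are all assumptions):

using System.Collections.Generic;
using StackExchange.Redis;

public static class UserNameStore
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost");

    // Store id -> name pairs in a single hash for the dropdown.
    public static void StoreAll(IEnumerable<KeyValuePair<int, string>> users)
    {
        IDatabase db = Redis.GetDatabase();
        foreach (var user in users)
        {
            db.HashSet("user-names", user.Key, user.Value);
        }
    }

    // Read all pairs back in one round trip.
    public static HashEntry[] LoadAll()
    {
        return Redis.GetDatabase().HashGetAll("user-names");
    }
}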