I followed the pagination instructions for Angular step by step from the ngx-pagination documentation here: https://www.npmjs.com/package/ngx-pagination.
My pagination works perfectly and I don't have any problem with it. However, since I'm working with a large data set, I don't want to keep the full collection in memory; I need some kind of server-side paging, where the server sends just a single page at a time. As mentioned in the documentation, I should use the totalItems parameter with a count, but I don't know how. How should I set the total?
<table class='table' *ngIf="collection">
<tbody>
<tr *ngFor="let item of collection |
paginate: { itemsPerPage: 10, currentPage: p, totalItems: total }">
<td>{{ item.id }}</td>
<td>{{ item.name }}</td>
</tr>
</tbody>
</table>
And my WEB API is like this:
[HttpGet("[action]")]
public async Task<IEnumerable<MyClass>> MyMethod()
{
int perPage = 10;
int start = (page - 1) * perPage;
int end = start + perPage;
using (HttpClient client = new HttpClient())
{
client.BaseAddress = new Uri("externalAPI");
MediaTypeWithQualityHeaderValue contentType =
new MediaTypeWithQualityHeaderValue("application/json");
client.DefaultRequestHeaders.Accept.Add(contentType);
HttpResponseMessage response = await client.GetAsync(client.BaseAddress);
string content = await response.Content.ReadAsStringAsync();
List<MyClass> data = JsonConvert.DeserializeObject<List<MyClass>>(content);
return data.Skip(start).Take(perPage).ToList();
}
}
And:
p: number = 1;
total: number;
http.get('url', {
    params: {
        page: this.p
    }
}).subscribe(result => {
    this.collection = result.json() as Collection[];
}, error => console.error(error));
In order to paginate on the server side you need two things:
pageSize or itemsPerPage in your case
pageNumber which is basically your currentPage
You need to send these two values to your Web API so it knows what data to return. They would become parameters to your action, and you can then pass them through to your data access code.
How you paginate in Web API depends on your code. If you use Entity Framework, it's straightforward with the Skip and Take methods. If you have a stored procedure (so T-SQL), you can do it with OFFSET and FETCH.
A word of caution on pageNumber: page 1 on your UI needs to become page 0 on the server side, so when your UI requests page 1, that is really page 0 of the data. Page 2 on the UI side becomes page 1 on the data side, so you probably pass pageNumber - 1 to the back end. Just keep this in mind.
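Putting those pieces together, a minimal sketch of such an action might look like the following. It assumes an Entity Framework DbContext field named _context and an Id column to order by; neither comes from the original code:

[HttpGet("[action]")]
public async Task<IEnumerable<MyClass>> MyMethod(int pageNumber, int pageSize)
{
    // UI page 1 becomes data page 0
    int zeroBasedPage = pageNumber - 1;

    return await _context.MyClasses            // assumed DbSet<MyClass>
        .OrderBy(x => x.Id)                    // a stable order is needed before Skip/Take
        .Skip(zeroBasedPage * pageSize)
        .Take(pageSize)
        .ToListAsync();
}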
totalItems is a number that comes from the back end.
Let's say your web api returns paginated data which looks like this:
public class myReturnedData
{
public string someData1 { get;set; }
public string someData2 { get; set; }
}
Your API basically returns a list of this class.
At this point create another object which looks like this:
public class myPaginatedReturnedData
{
public List<myReturnedData> Items { get; set; } // a property name is needed here to compile
public int TotalItemsCount { get; set; }
}
Your front end has no way of knowing what the total count is, since it only receives one page of data, so you need to get that number back from the API; this is one way of doing it.
So, on the server side, before you paginate you take a total count of your items, then you paginate the data, and finally you send back both of these together.
On the front-end side, you will have pageSize and totalItemsCount, and you can use these to calculate how many page indexes to display to the user.
If your pageSize is 10 and totalItemsCount is 55, then your page indexes will run from 1 to 6, with page 6 only showing 5 items. You can easily write a method for this calculation on the client side.
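A helper for that calculation might look like this (shown in C# for illustration; the client-side TypeScript version is the same arithmetic):

public static int GetPageCount(int totalItemsCount, int pageSize)
{
    // integer ceiling division: 55 items at 10 per page -> 6 pages
    return (totalItemsCount + pageSize - 1) / pageSize;
}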
<-- extra details -->
change this:
public async Task<IEnumerable<MyClass>> MyMethod()
to
public async Task<myPaginatedReturnedData> MyMethod()
I've basically changed your original return type to the new one in my example, which is a wrapper around yours plus the totalCount value.
This allows you to set the value in your front end since you are now returning it together with your actual paginated data.
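A sketch of the reworked action, under the same assumptions as before (_context and the Id ordering are stand-ins for however you actually load the data):

[HttpGet("[action]")]
public async Task<myPaginatedReturnedData> MyMethod(int pageNumber, int pageSize)
{
    var query = _context.MyData.OrderBy(x => x.Id);   // assumed DbSet and ordering key

    return new myPaginatedReturnedData
    {
        // count first, so the client learns the full size of the data set
        TotalItemsCount = await query.CountAsync(),
        Items = await query.Skip((pageNumber - 1) * pageSize)
                           .Take(pageSize)
                           .ToListAsync()
    };
}

On the Angular side you would then set this.total from the returned totalItemsCount.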
On the client side, the response of the API will be a string.
You could parse the response string into a JSON object, using something like
var apiData = JSON.parse(responseString)
This gives you an object and you can access your data from there.
We can implement pagination in a C# ASP.NET ActionResult function like this:
public ActionResult Index(int? page)
{
Entities db = new Entities();
return View(db.myTable.ToList().ToPagedList(page ?? 1, 8));
}
How do I implement pagination in a JsonResult function that sends its result to AJAX in an HTML view?
public JsonResult GetSearchingData(string SearchBy, string SearchValue)
{
var subCategoryToReturn = myList.Select(S => new { Name = S.Name });
return Json(subCategoryToReturn, JsonRequestBehavior.AllowGet);
}
Stop thinking in terms of UI and start thinking in terms of data.
You have some data you want to paginate. That's it. Forget about MVC at this point, or JsonResult, or anything else that has nothing to do with the data.
One thing to be aware of, this code you posted above:
db.myTable.ToList().ToPagedList(page ?? 1, 8)
If you do it like this, your entire table will be returned from the database and only then paginated. What you want is to return the data already paginated: instead of returning 100 records and then taking only the first 20, return only the 20.
Don't use ToList() at that point; use something like this instead (note that Entity Framework requires an explicit OrderBy before Skip/Take, and t.Id here is just a placeholder for your key):
var myData = db.myTable.OrderBy(t => t.Id).Skip(pageNumber * pageSize).Take(pageSize);
I don't have any running code to check this, but hopefully you get the idea: only return the data already paginated, i.e. only the data you will display and nothing more. The UI can send the page index you click on, and the pageSize can be a predefined number stored in appSettings, for example.
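To answer the JsonResult part directly, a minimal sketch might look like this. The pageNumber/pageSize parameters, the Name filter, and the Id ordering are illustrative assumptions, not from the original code:

public JsonResult GetSearchingData(string SearchBy, string SearchValue, int pageNumber = 1, int pageSize = 8)
{
    var page = db.myTable
        .Where(s => s.Name.Contains(SearchValue))   // whatever your search filter is
        .OrderBy(s => s.Id)                         // stable order before Skip/Take
        .Skip((pageNumber - 1) * pageSize)
        .Take(pageSize)
        .Select(s => new { Name = s.Name })
        .ToList();

    return Json(page, JsonRequestBehavior.AllowGet);
}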
You can use .Skip() to skip the first n elements and .Take() to fetch the next n rows.
int pageNumber = 0;
int ItemsPerPage = 10;
var Results = db.myTable.Skip(ItemsPerPage * pageNumber).Take(ItemsPerPage);
Suppose that you only need 10 items per page. The number of pages would then be:
TotalPages = Math.Ceiling(TotalItems / (double)ItemsPerPage); // if 201 items, then ceiling(201/10) = 21 pages
Now you can make pagination buttons in HTML. I suggest you use a jQuery pagination library:
https://esimakin.github.io/twbs-pagination/
I'm using Realm + Xamarin Forms to do what I figure is about the most basic operation possible: a list view shows a collection of items, with a search bar filtering the results as the user types.
I have a get-only collection property used as the list view's items source, initially populated from a Realm query, and it gets updated automatically with any changes to the data. But I can't figure out how to update the search text without adding a setter and literally replacing the entire collection.
This is very inefficient; I assume it triggers re-registration of a bunch of notify-changed event listeners for the collection and each item in it, and generally causes mass chaos with each letter tapped.
In the past I've created my own wrapping observable collection with a search method to handle this and I suppose that is an option here as well, but is there any way to do this with Realm? That is, to update the query without recreating the entire collection, some way to re-run the original query?
Update: this technique no longer works.
https://github.com/realm/realm-dotnet/issues/1569
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...it also differs from the behavior of LINQ to Objects, where every iteration will reevaluate expressions, meaning that changes to both sides of a condition will affect the result. A Realm query will evaluate the right-hand sides of the conditions on the first run.
When you construct a query whose Where parameters are based on non-Realm conditions, the query results do not update when those variables/parameters change unless you execute the query again.
Realm queries are live, in the sense that they will continue to represent the current state of the database.
So what I do is create a filter class (a RealmObject). If you instantiate a "filter object" and save it to Realm, you can base your LINQ Where parameters on one or more of the filter's properties. When you update this RealmObject filter via Realm.Add(filterObject, true), the queries based upon that object are also updated.
The result is lightning-fast filtering that works great in any UI search routine.
Example Model:
public class ARealmClass : RealmObject
{
public int Key { get; set; }
public string KeyString { get; set; }
}
public class ARealmClassFilter : RealmObject
{
[PrimaryKey]
public int Key { get; set; }
public int FilterKeyBy { get; set; }
}
Populate a Realm with some test data
var realm = Realm.GetInstance();
var all = realm.All<ARealmClass>();
if (all.Count() == 0)
{
realm.Write(() =>
{
for (int i = 0; i < 1000; i++)
{
var obj = new ARealmClass { Key = i, KeyString = i.ToString() };
realm.Add(obj);
}
});
}
Dynamic Live Query Example:
var realm = Realm.GetInstance();
var all = realm.All<ARealmClass>();
Console.WriteLine(all.Count());
var filterItem = new ARealmClassFilter { Key = 1, FilterKeyBy = 500 };
realm.Write(() =>
{
realm.Add(filterItem);
});
var filtered = all.Where(_ => _.Key > filterItem.FilterKeyBy);
Console.WriteLine(filtered.Count());
realm.Write(() =>
{
filterItem.FilterKeyBy = 750;
realm.Add(filterItem, true);
});
Console.WriteLine(filtered.Count());
Output:
2017-04-24 11:53:20.376 ios_build_foo[24496:3239020] 1000
2017-04-24 11:53:20.423 ios_build_foo[24496:3239020] 499
2017-04-24 11:53:20.425 ios_build_foo[24496:3239020] 249
Note: quoted text is from https://realm.io/docs/xamarin/latest/api/linqsupport.html
I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a WHERE clause should be an expression tree instead of a Func, so that it's treated as IQueryable instead of IEnumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
using (IDocumentSession session = GetRavenSession())
{
return session.Query<T>().Where(whereClause).ToList();
}
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (that have very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
"Header_ID": 3525880,
"Sub_ID": "120403261139",
"TimeStamp": "2012-04-05T15:14:13.9870000",
"Equipment_ID": "PBG11A-CCM",
"AverageAbsorber1": "284.451",
"AverageAbsorber2": "108.442",
"AverageAbsorber3": "886.523",
"AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);
using (var enumerator = session.Advanced.Stream(query))
{
while (enumerator.MoveNext())
{
User activeUser = enumerator.Current.Document;
}
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support.
The documentation can be found here. Ayende's introductory blog article can be found here.
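For the async variant, a sketch along these lines should work with an async document session (treat the exact signatures as something to verify against your client version):

var query = asyncSession.Query<User>("Users/ByActive").Where(x => x.Active);

using (var enumerator = await asyncSession.Advanced.StreamAsync(query))
{
    while (await enumerator.MoveNextAsync())
    {
        User activeUser = enumerator.Current.Document;
    }
}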
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 by default. However, you can use it in a pair with Skip(n) to get everything:
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;
RavenQueryStatistics stats; // was missing a declaration

do
{
    nextGroupOfPoints = session.Query<T>()
        .Statistics(out stats)
        .Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults)
        .Take(ElementTakeCount)
        .ToList();
    i++;
    skipResults += stats.SkippedResults;
    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);

return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have only a few calls issued over them.
If you are getting more than 10 of anything from the store (even less than the default 128) for human consumption, then something is wrong, or your problem requires different thinking than a truckload of documents coming from the data store.
RavenDB indexing is quite sophisticated. There is a good article about indexing here and about facets here.
If you have need to perform data aggregation, create map/reduce index which results in aggregated data e.g.:
Index:
// map
from post in docs.Posts
select new { post.Author, Count = 1 }

// reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
var stats = session.Query<AuthorPostStats>("Posts/ByUser/Count")
    .Where(x => x.Author == author)   // author: the key you want stats for
    .ToList();
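For reference, the same map/reduce index defined from code might look like this (the class name and the Post/AuthorPostStats property shapes are assumptions):

public class Posts_ByUser_Count : AbstractIndexCreationTask<Post, AuthorPostStats>
{
    public Posts_ByUser_Count()
    {
        // map: emit one entry per post
        Map = posts => from post in posts
                       select new { post.Author, Count = 1 };

        // reduce: sum the entries per author
        Reduce = results => from result in results
                            group result by result.Author into g
                            select new
                            {
                                Author = g.Key,
                                Count = g.Sum(x => x.Count)
                            };
    }
}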
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
// or, filtering on an indexed field:
var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);
using (var enumerator = session.Advanced.Stream<User>(query))
{
while (enumerator.MoveNext())
{
var user = enumerator.Current.Document;
// do something
}
}
Example index:
public class MyUserIndex: AbstractIndexCreationTask<User>
{
public MyUserIndex()
{
this.Map = users =>
from u in users
select new
{
u.IsDeleted,
u.Username,
};
}
}
Documentation: What are indexes?
Session : Querying : How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.
I have a Module class, a User, a UserModule and a UserModuleLevel class.
_module_objects is a static ObservableCollection of Modules and gets created when the program starts; there are about 10 of them, e.g. User Management, Customer Services, etc.
User, as you can probably guess, is the user details: ID, Name, etc., populated from a db query.
With UserModules, I do not keep the module information in the db, just the module level, i.e. the module security levels. This is kept in the db as: User_ID, Module_ID, ModuleLevel, ModuleLevelAccess.
What I'm trying to do is populate an ObservableCollection of users in the fastest manner. I have about 120,000 users, usually these users only have access to 2 or 3 of the 10 modules.
Below is what I have tried so far; however, the piece with asterisks around it is the bottleneck, because it goes through every module of every user.
Hoping for some advice to speed things up.
public class UserRepository
{
ObservableCollection<User> m_users = new ObservableCollection<User>();
public UserRepository(){}
public void LoadUsers()
{
var users = SelectUsers();
foreach (var u in users)
{
m_users.Add(u);
}
}
public IEnumerable<User> SelectUsers()
{
var userModulesLookup = GetUserModules();
var userModuleLevelsLookup = GetUserModuleLevels().ToLookup(x => Tuple.Create(x.User_ID, x.Module_ID));
clsDAL.SQLDBAccess db = new clsDAL.SQLDBAccess("DB_USERS");
db.setCommandText("SELECT * FROM USERS");
using (var reader = db.ExecuteReader())
{
while (reader.Read())
{
var user = new User();
var userId = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
user.User_ID = userId;
user.Username = NullSafeGetter.GetValueOrDefault<string>(reader, "USERNAME");
user.Name = NullSafeGetter.GetValueOrDefault<string>(reader, "NAME");
user.Job_Title = NullSafeGetter.GetValueOrDefault<string>(reader, "JOB_TITLE");
user.Department = NullSafeGetter.GetValueOrDefault<string>(reader, "DEPARTMENT");
user.Company = NullSafeGetter.GetValueOrDefault<string>(reader, "COMPANY");
user.Phone_Office = NullSafeGetter.GetValueOrDefault<string>(reader, "PHONE_OFFICE");
user.Phone_Mobile = NullSafeGetter.GetValueOrDefault<string>(reader, "PHONE_MOBILE");
user.Email = NullSafeGetter.GetValueOrDefault<string>(reader, "EMAIL");
user.UserModules = new ObservableCollection<UserModule>(userModulesLookup);
//**************** BOTTLENECK **********************************
foreach (var mod in user.UserModules)
{
mod.UserModuleLevels = new ObservableCollection<UserModuleLevel>(userModuleLevelsLookup[Tuple.Create(userId, mod.Module.Module_ID)]);
}
//**************************************************************
yield return user;
}
}
}
private static IEnumerable<Users.UserModule> GetUserModules()
{
foreach (Module m in ModuleKey._module_objects)
{
//Set a reference in the UserModule to the original static module.
var user_module = new Users.UserModule(m);
yield return user_module;
}
}
private static IEnumerable<Users.UserModuleLevel> GetUserModuleLevels()
{
clsDAL.SQLDBAccess db_user_module_levels = new clsDAL.SQLDBAccess("DB_USERS");
db_user_module_levels.setCommandText(@"SELECT * FROM USER_MODULE_SECURITY");
using (var reader = db_user_module_levels.ExecuteReader())
{
while (reader.Read())
{
int u_id = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
int m_id = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_ID");
int ml_id = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_LEVEL_ID");
int mla = NullSafeGetter.GetValueOrDefault<int>(reader, "MODULE_LEVEL_ACCESS");
yield return new Users.UserModuleLevel(u_id, m_id, ml_id, mla);
}
}
}
}
In the end I'll put the users into a DataGrid with module security displayed; buttons with green show there is some type of access to the module, and clicking one will bring up the actual security settings.
For performance gains you can do a few things:
Change your data access code to perform JOINs in SQL to get your data as a single result set.
SQL tends to be a fair bit faster at returning a result set of relational data than C# is at gluing the data together after the fact. This is because it's optimised to do just that, and you should take advantage of it (see the sketch after this list).
You should probably consider paging the results - any user who says they need all 120,000 results at once should be slapped upside the head with a large trout. Paging the results will limit the amount of processing that you need to do in the application.
Doing the above can be quite daunting, as you would need to modify your application to include paging. Often 3rd-party controls such as grids have some paging mechanism built in, and these days most ORM software has some sort of paging support which translates your C# code to the correct dialect for your chosen RDBMS.
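As for the first suggestion, here's a minimal sketch of fetching users and their module levels as a single result set. The table and column names are taken from the question; the clsDAL calls mirror the question's code, and the rest is an assumption:

clsDAL.SQLDBAccess db = new clsDAL.SQLDBAccess("DB_USERS");
db.setCommandText(@"
    SELECT u.USER_ID, u.USERNAME, u.NAME,
           s.MODULE_ID, s.MODULE_LEVEL_ID, s.MODULE_LEVEL_ACCESS
    FROM USERS u
    LEFT JOIN USER_MODULE_SECURITY s ON s.USER_ID = u.USER_ID
    ORDER BY u.USER_ID");

using (var reader = db.ExecuteReader())
{
    User current = null;
    while (reader.Read())
    {
        int userId = NullSafeGetter.GetValueOrDefault<int>(reader, "USER_ID");
        // rows arrive grouped by user, so each User is built only once
        if (current == null || current.User_ID != userId)
        {
            current = new User { User_ID = userId /* ... other fields ... */ };
            // add 'current' to your users collection here
        }
        // attach this row's module level to 'current' here
    }
}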
A good example (I've been working with it a bit lately) is ServiceStack OrmLite.
I believe it is free as long as you are using the legacy V3 version (which is pretty darn good: https://github.com/ServiceStackV3/ServiceStackV3), and I've seen some forks of it on GitHub which are currently maintained (http://www.nservicekit.com/).
There is a small learning curve, but nothing the examples/docs can't teach you.
Here's an extension method I'm using to page my queries in my service layer:
public static SqlExpressionVisitor<T> PageByRequest<T>(this SqlExpressionVisitor<T> expr, PagedRequest request)
{
return expr.Limit((request.PageNumber - 1) * request.PageSize, request.PageSize);
}
The request contains the page number and page size (from my web app), and the Limit extension method in OrmLite does the rest. I should probably add that the <T> generic parameter is the object type that OrmLite will map to after it has queried.
Here's an example of that (it's just a POCO with some annotations):
[Alias("Customers")]
public class Customer : IHasId<string>
{
[Alias("AccountCode")]
public string Id { get; set; }
public string CustomerName { get; set; }
// ... a load of other fields
}
The method is translated to T-SQL and results in the following query against the DB (for this example I selected page 4 on my customer list with a page size of 10):
SELECT <A big list of Fields> FROM
(SELECT ROW_NUMBER() OVER (ORDER BY AccountCode) As RowNum, * FROM "Customers")
AS RowConstrainedResult
WHERE RowNum > 40 AND RowNum <= 50
This keeps the query time well under a second and ensures I don't need to write a shedload of vendor-specific SQL.
It really depends on how much application you have already got - if you are too far in, it may be a nightmare to refactor for an ORM, but it's worth considering for other projects.
I'm using a Redis database and the ServiceStack client for it. I have a class called "Post" which has a property GroupId. When I store this class, the key is "urn:post:2:groupid:123". Now if I want to find all posts related to one group, I need to use the SearchKeys("urn:*groupid:123") method to retrieve them. Is this best practice for using Redis, or should I convert my post key into the form "urn:groupid:123:post:2"? If so, how can I achieve this?
Post class:
public class Post
{
public const string POST_INCREMENT_KEY = "POST_INCREMENT";
public string Id { get; set; }
public string Message { get; set; }
public string GroupId { get; set; }
public void BuildId(long uniqueId)
{
Id = uniqueId + ":groupid:" + GroupId;
}
}
Code for storing post:
var post = new Post
{
GroupId = groupId,
Message = Request.Form["message"]
};
post.BuildId(_client.Increment(Post.POST_INCREMENT_KEY, 1));
_client.Store(post);
The best practice in redis is to maintain an index of the relationship you want to query.
Manually maintaining an index in Redis
An index is just a redis SET containing the related Ids you want to maintain. Given that you want to "retrieve all posts related to one group", I would maintain the following index:
const string GroupPostIndex = "idx:group>post:{0}";
So every time you store a post, you also want to update the index, e.g.:
client.Store(post);
client.AddItemToSet(GroupPostIndex.Fmt(groupId), post.Id);
Note: Redis SET operations are idempotent, in that adding an item/id multiple times to a SET will always result in there being only one occurrence of that item in the SET, so it's always safe to add an item to the set whenever storing a Post, without needing to check if it already exists.
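For instance, repeating the add is a no-op as far as the SET's contents are concerned:

client.AddItemToSet(GroupPostIndex.Fmt(groupId), post.Id);
client.AddItemToSet(GroupPostIndex.Fmt(groupId), post.Id); // still only one occurrence of post.Id in the set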
Now when I want to retrieve all posts in a group I just need to get all the ids from the SET with:
var postIds = client.GetAllItemsFromSet(GroupPostIndex.Fmt(groupId));
Then fetch all the posts with those ids:
var posts = redis.As<Post>().GetByIds(postIds);
Using the ServiceStack.Redis Related Entities APIs
The above shows what's required to maintain an index in Redis yourself, but as this is a common use-case, ServiceStack.Redis also offers a high-level typed API that you can use instead.
Which lets you store related entities with:
client.As<Group>().StoreRelatedEntities(groupId, post);
Note: this also takes care of storing the Post
and retrieve them with:
var posts = client.As<Group>().GetRelatedEntities<Post>(groupId);
It also offers other convenience APIs, like quickly finding out how many posts there are within a given group:
var postsCount = client.As<Group>().GetRelatedEntitiesCount<Post>(groupId);
and deleting either 1 or all entities in the group:
client.As<Group>().DeleteRelatedEntity<Post>(groupId, postId);
client.As<Group>().DeleteRelatedEntities<Post>(groupId); //all group posts