I just came across this table:
Could someone explain the difference between "poor" and "better" for the last 5 items?
The reason for all of this is quite simple. When you write SPList.Items.Count to get the total number of items, SPList.Items returns the collection of all items in the list.
You don't want all the items; fetching them can be an expensive operation.
By writing SPList.ItemCount, you make sure you only read a number from the database, not all the items.
Essentially, this holds throughout the object model - you should generally avoid materializing entire collection objects (e.g. SPList.Items or SPFolder.Files) when you can. Similarly, if you need a collection more than once, cache it in a local variable.
Here's an example using indexers. Suppose I have a Guid, and want to get an item.
SPListItem item = list.Items[guid];
Looks innocent enough, but it is actually the same as:
SPListItemCollection items = list.Items;
SPListItem item = items[guid];
The point is - SharePoint (and C#, really) doesn't know what you're going to do next, or how you're going to use the collection. The moment you write .Items, you have already triggered a slow operation.
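To make the alternative concrete, here is a hedged sketch against the SharePoint server object model. SPList.GetItemByUniqueId fetches a single item by its unique id without pulling the whole collection; the caching pattern is the same local-variable advice from above.

```csharp
using Microsoft.SharePoint;

// Instead of indexing into list.Items (which materializes the whole
// collection), ask the list for the single item directly:
SPListItem item = list.GetItemByUniqueId(guid);   // one item, one query

// If you genuinely need the collection more than once,
// fetch it once and reuse the local variable:
SPListItemCollection items = list.Items;          // expensive - do it only once
int count = items.Count;
SPListItem first = items[0];
```

This is a sketch, not a drop-in snippet: `list` and `guid` are assumed to already be in scope, as in the example above.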
So I have a SP list with about 100k items (voucher codes) in it.
Each of them has columns for State (Active/Used), Value (10, 20, 30), Group (Normal, Special) and Code (random alphanumeric). Each of these columns is indexed.
I can't use CAML to get the next active code for a certain group and value, because each of the criteria would return > 5k items (list view threshold).
So what would be the most efficient way to retrieve the next code?
As the list is continuously growing, loading all items with SPListItemCollectionPosition is not really an option. Isn't there a better way?
It should work on-premises as well as in SharePoint Online.
Thank you
If your Code is a progressive number, increasing with each newly entered item, you have two options:
Create a service list named something like CodeCounter, where you store the last created Code in a column. Create a workflow on the main list that starts at item creation. This workflow reads the last created Code from the service list, updates it to Code+1, and in parallel manages the (optional) update of the main code in the main list.
Use an event receiver (farm solution; the query will most likely work better there).
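The second option could look roughly like the sketch below, using the server OM's SPItemEventReceiver. The list name (CodeCounter) and column names (LastCode, Code) are assumptions matching the description above, and this simple version ignores concurrency between two simultaneous ItemAdded events.

```csharp
using System;
using Microsoft.SharePoint;

// Hedged sketch of the event-receiver option (farm solution only).
public class CodeAssignmentReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPWeb web = properties.Web;
        SPList counterList = web.Lists["CodeCounter"];   // assumed service list
        SPListItem counter = counterList.Items[0];       // single counter row

        int last = Convert.ToInt32(counter["LastCode"]); // assumed column
        counter["LastCode"] = last + 1;
        counter.Update();

        SPListItem item = properties.ListItem;
        item["Code"] = (last + 1).ToString();            // assumed column
        EventFiringEnabled = false;                      // avoid re-triggering
        item.Update();
        EventFiringEnabled = true;
    }
}
```

In a production setup you would want to serialize access to the counter item (e.g. retry on save conflicts), since two concurrent additions could otherwise read the same LastCode.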
I'm attempting to merge a list of lists into one master sorted list. (It's a list of lists of strings in C#, but the type isn't really relevant here)
There's a similar problem here
How to merge a list of lists with same type of items to a single list of items?
but my requirements are different.
Every comparison I make is going to be based on user input, so I can't do any sort of all-at-once LINQ thing like the other question. I have to ask the user which item is 'better' for every single comparison.
My initial idea was to effectively copy the 'merge' part of 'mergesort', and repeatedly use binary search for each new term popped off the second list in each merge.
However, I still don't know what would be the most efficient order to merge the lists. Should I merge smallest with smallest, and build up size progressively that way? Or would it be better to have a designated big list that the small lists get merged into once?
I'm not sure how to go about proving this either way.
How would I merge these lists of arbitrary size whilst performing the minimum number of user-facing comparisons?
Given that none of those lists are sorted, and you don't have any pre-existing knowledge about the data in the lists, you cannot do this with fewer than O(n log n) comparisons. Here n is size(list 1) + size(list 2) + ... + size(list final).
You can simply iterate through the list of lists and construct a master list with n elements, then sort the master list using either quicksort or merge sort. The time complexity would be O(n log n) for the sorting, with an additional O(n) of memory for the master list.
So you would ask the user about n log n times. You can minimize the number of questions by caching the results of prior comparisons and never asking the user the same comparison twice.
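The caching idea above can be sketched as an IComparer<T> that memoizes the user's answers and is handed to List<T>.Sort. The console prompt is purely illustrative of "ask the user"; a real UI would look different. Note this simple version also assumes the user's answers are consistent (transitive).

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A comparer that asks the user which item is 'better' and caches each answer
// so the same pair is never asked twice.
class AskingComparer : IComparer<string>
{
    private readonly Dictionary<(string, string), int> cache =
        new Dictionary<(string, string), int>();

    public int Compare(string x, string y)
    {
        if (x == y) return 0;
        if (cache.TryGetValue((x, y), out int cached)) return cached;
        if (cache.TryGetValue((y, x), out cached)) return -cached; // mirrored pair

        Console.Write($"Is '{x}' better than '{y}'? (y/n) ");
        int result = Console.ReadLine() == "y" ? -1 : 1; // ask the user once
        cache[(x, y)] = result;
        return result;
    }
}

// Usage: flatten the list of lists, then sort with the caching comparer.
// var master = listOfLists.SelectMany(l => l).ToList();
// master.Sort(new AskingComparer());
```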
So I have an arraylist that will display data to a listbox. Then, the user can select an entry in the listbox. From there, I need to be able to grab whatever entry they select. I'm using VS 2010.
I have been trying
tempArray.Insert(0, myarray.IndexOf(mylistbox.SelectedIndex));
All this is doing is giving me the actual index number and not the contents of the index. I'm not sure how to index the arraylist to get the object that is contained at that index.
And yes, I know that I should be using List objects, but it is for class and we have yet to be taught list objects.
As the name suggests, .IndexOf() will return an index from the array.
Array.IndexOf Method
This is obviously not what you want.
What you should look at is using the SelectedItem instead of the SelectedIndex.
That way, you can access the object directly and insert it into your list. One thing to remember is that SelectedItem will return an Object. This means that it will have to be cast to the type that you expect to be using.
Also, do you want to constantly insert items at the top of the list, or does that not matter? If you can append to the end of the list, try using .Add(yourObject).
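Putting that together, a minimal sketch (assuming the listbox was filled with strings from the ArrayList, per the question):

```csharp
// SelectedItem returns object, so cast it to the type you stored.
if (mylistbox.SelectedItem != null)
{
    string selected = (string)mylistbox.SelectedItem;  // cast from object
    tempArray.Add(selected);                           // append to the end
    // or: tempArray.Insert(0, selected);              // insert at the top
}
```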
If these are actual arrays you are working with, shouldn't you be able to access what's in the array by using the [ ] syntax? i.e.
myArray[myListBox.SelectedIndex]
1) Don't use ArrayList... that's an old class. Use List<T> instead.
2) Have you tried ListBox.SelectedItem? That seems a bit simpler...
In my Sitecore content tree there are a few thousand items, and I just want to alter a few items programmatically. Instead of rebuilding the entire Lucene index, which takes a long time, I want to update the index entries for each item I'm altering in real time. I tried
item.Database.Indexes.UpdateItem(item);
but it is obsolete and asks me to use SearchManager.
Can anyone guide me how to update index entries for a given item?
PS: I'm altering items from desktop application, not the website.
Try executing one of the HistoryEngine.RegisterItem... methods, e.g.:
item.Database.Engines.HistoryEngine.RegisterItemSaved(item, new ItemChanges(item));
item.Database.Engines.HistoryEngine.RegisterItemCreated(item);
item.Database.Engines.HistoryEngine.RegisterItemMoved(item, oldParentId);
Note that there is actually no Update operation on indexes, so feel free to do a delete followed by an add.
I have a Windows.Forms.ListView where the user shall be able to add and remove entries. Particularly, those are files (with attributes) the user can pick through a dialog. Now, I want to check whether the file names / entries I get from the file picker are already in the list; in other words, there shall only be unique items in the ListView.
I could not find any way to compare ListViewItems to check whether the exact same entry and information is already present in my ListView. The only way I see now is to:
> Loop through the files I get from the picker (multiselect is true)
>> Loop through ListView.Items
>>> Compare ListViewItem.Text
>>> Loop through ListViewItem.SubItems
>>>> Compare .Text
If during these comparisons a complete match is found, the new entry is a duplicate and thus is not added.
This seems like an awful lot of effort to do something that I would find to be a function that is not so uncommon. Is there any other way to achieve this?
The file system itself uses only the filename to test for uniqueness, so you should do the same; there is no need to compare sub-items too.
Items in a ListView typically represent some object. What I usually do is to assign that object (or at least some value identifying the object) to the Tag property of the corresponding ListViewItem when they are added to the list. That way you get a quite simple setup where you can compare items by getting the values from the Tag property and perform the comparison on those objects instead of the list view representation of them.
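A sketch of the Tag approach: store the full file path in Tag when adding an item, and check it before adding a new one. The helper name AddFileIfNew is illustrative.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Windows.Forms;

// Adds the file to the ListView only if no existing item's Tag
// already holds the same path. Returns true if the item was added.
static bool AddFileIfNew(ListView listView, string filePath)
{
    bool exists = listView.Items.Cast<ListViewItem>()
        .Any(i => string.Equals((string)i.Tag, filePath,
                                StringComparison.OrdinalIgnoreCase));
    if (exists) return false;

    var item = new ListViewItem(Path.GetFileName(filePath)) { Tag = filePath };
    listView.Items.Add(item);
    return true;
}
```

Comparing on Tag keeps the duplicate check independent of however many sub-item columns the list happens to display.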