New to C# 3: should this be done with LINQ?

I have two Lists of words. I want to count the words longer than 3 characters that exist in both lists. How would you solve this in C#?

I would use
var count = Enumerable.Intersect(listA, listB).Count(word => word.Length > 3);
assuming listA and listB are of type IEnumerable<String>.

Assuming list 1 length is N and list 2 length is M:
I would first filter, since this is a cheap operation, O(N+M), and then do the intersection, a relatively expensive operation based on the current implementation. The cost of the Intersect call is complicated and is fundamentally driven by the behaviour of the hash function:
If the hash function is poor, the performance can degrade to O(N*M) (as every string in one list is checked against every string in the other).
If the hash function is good, each check is simply a lookup in a hash, which is O(1); that gives a cost of M checks against the hash plus a cost of N to construct the hash, so also O(N+M) in time, but with an additional O(N) cost in space.
The construction of the backing set will be the killer in performance terms.
If you knew that both lists were, say, sorted already then you could write your own Intersect check with constant space overhead and O(N+M) worst-case running time, not to mention excellent memory locality (see the sketch at the end of this answer).
This leaves you with:
int count = Enumerable.Intersect(
    list1.Where(word => word.Length > 3),
    list2.Where(word => word.Length > 3)).Count();
Incidentally the Enumerable.Intersect method's performance behaviour may change considerably depending on the order of the arguments.
In most cases making the smaller of the two the first argument will produce faster, more memory-efficient code, since the first argument is used to construct a backing temporary (hash-based) set. This is, of course, coding to a (hidden) implementation detail, so it should be considered only if performance analysis shows it to be an issue, and highlighted as a micro-optimization if so.
The second filter on list2 is not necessary for correctness (since the Intersect will remove such entries anyway).
It is quite possible that the following is faster:
int count = Enumerable.Intersect(
    list1.Where(word => word.Length > 3),
    list2).Count();
However, filtering by length is very cheap compared to calculating hash codes for long strings, so the better option will only be found through benchmarking with inputs appropriate to your usage.
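As an aside, here is a minimal sketch of the hand-rolled intersection count mentioned above, assuming (my assumptions, not stated in the question) that both lists are already sorted ordinally and contain no duplicates:
// Sketch only: counts words longer than 3 characters that appear in both
// pre-sorted, duplicate-free lists, using a constant-space two-pointer walk.
static int CountCommonLongWords(List<string> sortedA, List<string> sortedB)
{
    int i = 0, j = 0, count = 0;
    while (i < sortedA.Count && j < sortedB.Count)
    {
        int cmp = string.CompareOrdinal(sortedA[i], sortedB[j]);
        if (cmp == 0)
        {
            if (sortedA[i].Length > 3) count++;
            i++;
            j++;
        }
        else if (cmp < 0) i++;
        else j++;
    }
    return count;
}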

List<string> t = new List<string>();
List<string> b = new List<string>();
...
Console.WriteLine(t.Count(x => x.Length > 3 && b.Contains(x)));


Enumerator overhead vs indexers in strings performance

I was charged with speeding up a text processing/normalization section of our code, and there were multiple sections that had multiple, configurable lists of "if you see this, replace with that", and they were implemented with big stacks of regexes. That looked like a good place to start - and it was.
I implemented a simple Trie loaded with the configuration entries and then had a
Match (string raw, int idx = 0)
function that skimmed the raw input, looking through the Trie for matches.
My first draft of the match function used a for loop and an indexer (i.e.
TrieNode node = Root;
for (; idx < raw.Length; idx++)
{
    TrieNode next;
    if (node.TryGetValue(raw[idx], out next))
    ...
and it was several orders of magnitude faster than the pile of regexes.
I wanted to clean up and generalize the Trie, maybe make it configurable for either chars or words as tokens, and after all the generalizing I replaced the above with
foreach (var c in idx > 0 ? raw.Skip(idx) : raw)
{
...
and was surprised to see just how much overhead the change in iteration caused. I expected there to be some overhead but the foreach method was about 100x slower (4300 ms per run of 100 articles vs 40 ms with for loop) - just that change alone.
I've seen lots of articles from various time periods saying everything from "of course LINQ and enumerators suck!" to "always use foreach because the performance is close enough and foreach is cooler".
None of the Stack Overflow articles I found were very current, so I thought I'd drop this note in a bottle.
I get that the enumerator allocation is going to add a little overhead, and Skip() is never going to be as fast as jumping right ahead with an indexer, but it was a pretty stark contrast.
I did find a debate about whether String should implement IReadOnlyList<char> or not, which seems like it could have been the best of both worlds, but that doesn't exist.
Is anyone else surprised that it has that amount of overhead?
I'm not surprised that Skip is orders of magnitude slower since it will be O(n) (essentially incrementing an integer until you get to idx) versus O(1) for the direct indexer.
I would not generalize this to "Linq sucks - use foreach". You could implement functionally the same code as Skip in your foreach and get roughly the same results. The problem is not that you're using Linq - the problem is that you're using Skip on a collection that supports direct access.
If you want to generalize it to use either chars or words as tokens, it may be simplest to convert raw to a List<T> and support either a list of chars or a list of strings - with what you have, there should not be a significant performance difference between the two.
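To make that concrete, here is a minimal sketch (my own names and structure, not the poster's actual code) of a trie match that stays on an indexer by accepting an IReadOnlyList<T>, so the same loop works for a list of chars or a list of word tokens:
// Sketch only: a generic trie node keyed by token type T, matched with a plain
// for loop and indexer so there is no Skip()/enumerator overhead for the offset.
using System.Collections.Generic;

class TrieNode<T>
{
    public Dictionary<T, TrieNode<T>> Children = new Dictionary<T, TrieNode<T>>();
    public bool IsTerminal; // true if a configured entry ends at this node
}

static class TrieMatcher
{
    // Returns the length of the longest configured entry starting at idx, or 0 if none.
    public static int Match<T>(TrieNode<T> root, IReadOnlyList<T> tokens, int idx = 0)
    {
        var node = root;
        int longest = 0;
        for (int i = idx; i < tokens.Count; i++)
        {
            TrieNode<T> next;
            if (!node.Children.TryGetValue(tokens[i], out next)) break;
            node = next;
            if (node.IsTerminal) longest = i - idx + 1;
        }
        return longest;
    }
}
Since String doesn't implement IReadOnlyList<char>, the caller would pass raw.ToCharArray() (arrays do implement it) or a List<string> of word tokens; either way the hot loop stays on a direct indexer.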

Iterate over strings that ".StartsWith" without using LINQ

I'm building a custom textbox to enable mentioning people in a social media context. This means that I detect when somebody types "#" and search a list of contacts for the string that follows the "#" sign.
The easiest way would be to use LINQ, with something along the lines of Members.Where(x => x.Username.StartsWith(str)). The problem is that the number of potential results can be extremely high (up to around 50,000), and performance is extremely important in this context.
What alternative solutions do I have? Is there anything similar to a dictionary (a hashtable-based solution) that would allow me to use Key.StartsWith without iterating over every single entry? If not, what would be the fastest and most efficient way to achieve this?
Do you have to show a dropdown of 50,000 entries? If you can limit your dropdown, you can, for example, just display the first 10:
var filteredMembers = new List<MemberClass>();
foreach (var member in Members)
{
    if (member.Username.StartsWith(str)) filteredMembers.Add(member);
    if (filteredMembers.Count >= 10) break;
}
Alternatively:
You can try storing all your members' usernames in a trie in addition to your collection. That should give you better performance than looping through all 50,000 elements.
Assuming your usernames are unique, you can store your member information in a dictionary and use the usernames as the key.
This is a tradeoff of memory for performance of course.
It is not really clear where the data is stored in the first place. Are all the names in memory or in a database?
If you store them in a database, you can just use the StartsWith approach in the ORM, which would translate to a LIKE query on the DB and would just do its job. If you enable full-text indexing on the column, you could improve the performance even more.
Now suppose all the names are already in memory. Remember that the CPU is extremely fast, so even looping through 50,000 entries takes just a few moments.
The StartsWith method is optimized and returns false as soon as it encounters a non-matching character, so finding the ones that actually match should be pretty fast. But you can still do better.
As others suggest, you could build a trie to store all the names and be able to search for matches pretty fast, but there is a disadvantage: building the trie requires you to read all the names and create the whole data structure, which is complex. Also, you would be restricted to a given set of characters, and an unexpected character would have to be dealt with separately.
You can, however, group the names into "buckets". Start with the first character and create a dictionary with the character as the key and a list of names as the value. Now you have effectively narrowed every following search approximately 26-fold (assuming the English alphabet). But you don't have to stop there: you can do the same at another level, for the second character within each group, and then the third, and so on (see the sketch after this answer).
With each level you are effectively narrowing each group significantly and the search will be much faster afterwards. But there is of course the up-front cost of building the data structure, so you always have to find the right trade-off for you. More work up-front = faster search, less work = slower search.
Finally, when the user types, with each new letter she narrows the target group. Hence, you can always maintain the set of relevant names for the current input and cut it down with each successive keystroke. This will prevent you from having to go from the beginning each time and will improve the efficiency significantly.
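A minimal sketch of the first level of that bucketing (the names PrefixBuckets, Build and Find are mine, for illustration; it also assumes usernames are non-empty):
// Sketch only: one level of bucketing by first character.
using System;
using System.Collections.Generic;
using System.Linq;

static class PrefixBuckets
{
    // Build once, up front.
    public static Dictionary<char, List<string>> Build(IEnumerable<string> usernames)
    {
        return usernames
            .GroupBy(u => char.ToLowerInvariant(u[0]))
            .ToDictionary(g => g.Key, g => g.ToList());
    }

    // On each keystroke, only the bucket matching the first typed character is scanned.
    public static IEnumerable<string> Find(Dictionary<char, List<string>> buckets, string prefix, int maxHits)
    {
        List<string> bucket;
        if (prefix.Length == 0 || !buckets.TryGetValue(char.ToLowerInvariant(prefix[0]), out bucket))
            return Enumerable.Empty<string>();

        return bucket
            .Where(u => u.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            .Take(maxHits);
    }
}
Extending this to a second level keyed by the first two characters follows the same pattern, and maintaining the current result set between keystrokes, as described above, avoids rescanning from the full list.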
Use BinarySearch
This is a pretty normal case, assuming that the data are stored in-memory, and here is a pretty standard way to handle it.
Use a normal List<string>. You don't need a HashTable or a SortedList. However, an IEnumerable<string> won't work; it has to be a list.
Sort the list beforehand (using LINQ, e.g. OrderBy( s => s)), e.g. during initialization or when retrieving it. This is the key to the whole approach.
Find the index of the best match using BinarySearch. Because the list is sorted, a binary search can find the best match very quickly and without scanning the whole list like Select/Where might.
Take the first N entries after the found index. Optionally you can truncate the list if not all N entries are a decent match, e.g. if someone typed "AZ" and there are only one or two items before "BA."
Example:
public static IEnumerable<string> Find(List<string> list, string firstFewLetters, int maxHits)
{
    var startIndex = list.BinarySearch(firstFewLetters);

    //If negative, there is no exact match. Take the bitwise complement to get the index of the closest following item.
    if (startIndex < 0)
    {
        startIndex = ~startIndex;
    }

    //Take maxHits items, or go till end of list
    var endIndex = Math.Min(
        startIndex + maxHits - 1,
        list.Count - 1
    );

    //Enumerate matching items
    for (int i = startIndex; i <= endIndex; i++)
    {
        var s = list[i];
        if (!s.StartsWith(firstFewLetters)) break; //This line is optional
        yield return s;
    }
}
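A hypothetical usage example (sortedNames and the "jo" prefix are mine, not from the question): sort once up front with the same default comparer that List<string>.BinarySearch uses, then call Find on each keystroke.
// Sort once during initialization; OrderBy(s => s) uses the same default comparer
// that list.BinarySearch(firstFewLetters) relies on inside Find.
var sortedNames = Members.Select(m => m.Username).OrderBy(s => s).ToList();

// On each keystroke:
foreach (var hit in Find(sortedNames, "jo", 10))
{
    Console.WriteLine(hit);
}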

Bitwise operation in large list as fast as possible in c#

I have a list of 10,000 long values, and I want to compare each of them against 100,000 other long values.
The comparison is a bitwise operation:
if ((a & b) == a) count++;
Which algorithm can I use to get the best performance?
If I understand your question correctly, you want to check each a against each b to see whether some predicate is true. So a naive solution to your problem would be as follows:
var result = aList.Sum(a => bList.Count(b => (a & b) == a));
I'm not sure this can really be sped up for an arbitrary predicate, because you can't get around checking each a against each b. What you could try is run the query in parallel:
var result = aList.AsParallel().Sum(a => bList.Count(b => (a & b) == a));
Example:
aList: 10,000 random long values; bList: 100,000 random long values.
without AsParallel: 00:00:13.3945187
with AsParallel: 00:00:03.8190386
Put all of your a values into a trie data structure, where the first level of the tree corresponds to the first bit of the number, the second level to the second bit, and so on. Then, for each b, walk down the trie: if the current bit is 1 in b, count both branches; if it is 0 in b, count only the 0 branch of the trie. I think this should be O(n+m), but I haven't thought about it very hard.
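A minimal sketch of that idea, with my own names (BitTrieNode, Build, CountSubsets) and a fixed 64-bit depth; treat it as an illustration of the approach rather than a tuned implementation:
// Sketch only: counts pairs where (a & b) == a, i.e. a's set bits are a subset of b's.
class BitTrieNode
{
    public int Count;              // number of a values stored at this leaf
    public BitTrieNode Zero, One;  // children for the current bit being 0 or 1
}

static BitTrieNode Build(IEnumerable<long> aValues)
{
    var root = new BitTrieNode();
    foreach (var a in aValues)
    {
        var node = root;
        for (int bit = 63; bit >= 0; bit--)
        {
            bool isOne = ((a >> bit) & 1) != 0;
            var next = isOne ? node.One : node.Zero;
            if (next == null)
            {
                next = new BitTrieNode();
                if (isOne) node.One = next; else node.Zero = next;
            }
            node = next;
        }
        node.Count++;              // counts are kept at the leaves only
    }
    return root;
}

// For a single b, count the stored a values that are bitwise subsets of b.
static long CountSubsets(BitTrieNode node, long b, int bit = 63)
{
    if (node == null) return 0;
    if (bit < 0) return node.Count;
    long total = CountSubsets(node.Zero, b, bit - 1);   // a's bit may always be 0
    if (((b >> bit) & 1) != 0)
        total += CountSubsets(node.One, b, bit - 1);    // a's bit may be 1 only where b's bit is 1
    return total;
}
The total would then be something like bList.Sum(b => CountSubsets(trie, b)). Note that the walk for a given b can touch many branches when b has many set bits, so it is worth benchmarking against the parallel brute force above.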
You can probably get the same semantics but with better cache characteristics by sorting the list of as and using the sorted list in much the same way as the trie. This is going to be slightly worse in terms of number of operations - because you'll have to search for stuff a lot of the time - but the respect for the CPU cache might more than make up for it.
N.B. I haven't thought about correctness much harder than I've thought about big-O notation, which is to say probably not enough.

How to determine the runtime for the two reverse string methods in O() notation?

I have two methods to reverse a string and need to compare them:
public string ReverseD(string text)
{
    return new string(text.ToCharArray().Reverse().ToArray());
}

public string ReverseB(string text)
{
    char[] charArray = text.ToCharArray();
    Array.Reverse(charArray);
    return new string(charArray);
}
How can I determine the run time of these two algorithms in O() notation? I need to compare them.
Both are O(n) - they both go through the full array, which is O(n).
The first algorithm has a worse constant factor, though, since you are effectively "buffering" the full array and then emitting it in reverse order (unless Enumerable.Reverse() is optimized for arrays under the hood; I don't have Reflector handy right now). Since you buffer the full array and then emit a new array in reverse order, you could say the effort is 2*N, so the constant factor is c = 2.
The second algorithm uses array indexes, so you are performing n/2 element swaps within the same array - still O(n), but with a constant factor of c = 0.5.
While knowing the asymptotic performance of these two approaches to reversing a string is useful, it's probably not what you really need. You're probably not going to be applying the algorithm to strings whose length becomes larger and larger.
In cases like this, it's actually more helpful to just run both algorithms a bunch of times and see which one takes less time.
Most likely it will be the straight Array.Reverse() version, since it will intelligently swap items within the array, whereas the Enumerable.Reverse() method will yield return each element in reverse order. Either will be O(n), since they both manipulate each of the n items in the array a constant number of times.
But again, the best way to see which one will perform better is to actually run them and see.
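A minimal sketch of such a timing run (the sample string and iteration count are arbitrary choices of mine, and a serious benchmark would also account for JIT warm-up and GC):
// Sketch only: crude Stopwatch comparison of the two methods above.
// Requires using System.Diagnostics; assumes ReverseD and ReverseB are in scope.
string sample = new string('x', 1000);
int iterations = 100000;

Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++) ReverseD(sample);
sw.Stop();
Console.WriteLine("ReverseD: " + sw.ElapsedMilliseconds + " ms");

sw.Reset();
sw.Start();
for (int i = 0; i < iterations; i++) ReverseB(sample);
sw.Stop();
Console.WriteLine("ReverseB: " + sw.ElapsedMilliseconds + " ms");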

What .NET collection provides the fastest search

I have 60k items that need to be checked against a 20k lookup list. Is there a collection object (like List, Hashtable) that provides an exceptionally fast Contains() method? Or will I have to write my own? In other words, does the default Contains() method just scan each item, or does it use a better search algorithm?
foreach (Record item in LargeCollection)
{
    if (LookupCollection.Contains(item.Key))
    {
        // Do something
    }
}
Note. The lookup list is already sorted.
In the most general case, consider System.Collections.Generic.HashSet as your default "Contains" workhorse data structure, because it takes constant time to evaluate Contains.
The actual answer to "What is the fastest searchable collection" depends on your specific data size, ordered-ness, cost-of-hashing, and search frequency.
If you don't need ordering, try HashSet<Record> (new to .Net 3.5)
If you do, use a List<Record> and call BinarySearch.
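For example, a minimal sketch of the HashSet approach applied to the loop in the question (this assumes item.Key and the lookup entries are strings, which the question doesn't actually state):
// Sketch only: build the set once, then each Contains check is O(1) on average.
HashSet<string> lookupSet = new HashSet<string>(LookupCollection);

foreach (Record item in LargeCollection)
{
    if (lookupSet.Contains(item.Key))
    {
        // Do something
    }
}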
Have you considered List.BinarySearch(item)?
You said that your lookup list is already sorted, so this seems like the perfect opportunity. A hash would definitely be the fastest, but it brings its own problems and requires a lot more storage overhead.
You should read this blog post that speed-tested several different types of collections and methods for each, using both single- and multi-threaded techniques.
According to the results, BinarySearch on a List and SortedList were the top performers, consistently running neck and neck when looking up something as a "value".
When using a collection that allows for "keys", the Dictionary, ConcurrentDictionary, HashSet, and Hashtable performed the best overall.
I've put a test together:
First: all of the possible 3-character combinations of A-Z and 0-9
Fill each of the collections mentioned here with those strings
Finally - search and time each collection for a random string (same string for each collection).
This test simulates a lookup when there is guaranteed to be a result.
Then I changed the initial collection from all possible combinations to only 10,000 random 3-character combinations. This should give roughly a 1 in 4.6 hit rate for a random 3-character lookup, so this is a test where a result isn't guaranteed to exist. Then I ran the test again.
IMHO Hashtable, although fastest, isn't always the most convenient, since you're working with objects. But a HashSet is so close behind that it's probably the one to recommend.
Just for fun (you know, FUN) I ran it again with 1.68M rows (4 characters).
Keep both lists x and y in sorted order.
If x == y, do your action; if x < y, advance x; if y < x, advance y; continue until either list is empty.
The run time of this intersection is at most proportional to size(x) + size(y).
Don't run a .Contains() loop; that is proportional to size(x) * size(y), which is much worse.
If it's possible to sort your items, then there is a much faster way to do this than doing key lookups into a hashtable or b-tree. Though if your items aren't sortable, you can't really put them into a b-tree anyway.
Anyway, if they are sortable, sort both lists; then it's just a matter of walking the lookup list in order (a C# sketch follows the pseudocode):
Walk the lookup list
    While check list item <= lookup list item
        if check list item == lookup list item, do something
    Move to next lookup list item
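A minimal C# sketch of that walk for the loop in the original question, assuming (again, my assumptions) that item.Key and the lookup entries are strings and that both lists are sorted with the same ordinal comparer:
// Sketch only: two-pointer walk over two lists sorted with the same ordinal comparer.
static void WalkSorted(List<Record> sortedLarge, List<string> sortedLookup)
{
    int li = 0;
    foreach (Record item in sortedLarge)       // sortedLarge is sorted by Key
    {
        // Advance the lookup pointer until it is >= the current key.
        while (li < sortedLookup.Count && string.CompareOrdinal(sortedLookup[li], item.Key) < 0)
            li++;
        if (li == sortedLookup.Count) break;   // lookup list exhausted
        if (sortedLookup[li] == item.Key)
        {
            // Do something
        }
    }
}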
If you're using .Net 3.5, you can make cleaner code using:
foreach (Record item in LookupCollection.Intersect(LargeCollection))
{
//dostuff
}
I don't have .Net 3.5 here, so this is untested. It relies on an extension method. Note that LookupCollection.Intersect(LargeCollection) is probably not the same as LargeCollection.Intersect(LookupCollection) ... the latter is probably much slower.
This assumes LookupCollection is a HashSet.
If you aren't worried about squeezing out every last bit of performance, the suggestion to use a HashSet or binary search is solid. Your datasets just aren't large enough for this to be a problem 99% of the time.
But if this is just one of thousands of times you are going to do this, and performance is critical (and proven to be unacceptable using HashSet/binary search), you could certainly write your own algorithm that walks the sorted lists, doing comparisons as you go. Each list would be walked at most once, and even the pathological cases wouldn't be bad. (Once you went this route, you'd probably find that the comparison itself, assuming it's a string or other non-integral value, is the real expense, and that optimizing it would be the next step.)
