I am not sure if CopyMost is the correct term to use here, but it's the term my client used ("CopyMost Data Protocol"). Sounds like he wants the mode? I have a set of data:
Increment Value
.02 1
.04 1
.06 1
.08 2
.10 2
I need to return whichever Value occurs the most (the "CopyMost"); in this case, the value is 1. Right now I plan on writing an extension method for IEnumerable to do this for integer values. Is there something built into LINQ that already does this easily? Or is it best for me to write an extension method that would look something like this:
records.CopyMost(x => x.Value);
EDIT
Looks like I am looking for the modal average. I've provided an updated answer that allows for a tiebreaker condition. It's meant to be used like this, and is generic.
records.CopyMost(x => x.Value, x => x == 0);
In this case x.Value would be an int, and if the count of 0s was the same as the counts of 1s and 3s, it would tiebreak on 0.
Well, here's one option:
var query = (from item in data
             group 1 by item.Value into g
             orderby g.Count() descending
             select g.Key).First();
Basically we're using GroupBy to group by the value - but all we're interested in for each group is the size of the group and the key (which is the original value). We sort the groups by size, and take the first element (the one with the most elements).
Does that help?
Jon beat me to it, but the term you're looking for is Modal Average.
Edit:
If I'm right in thinking that it's the modal average you need, then the following should do the trick:
var i = (from t in data
         group t by t.Value into aggr
         orderby aggr.Count() descending
         select aggr.Key).First();
This method has been updated several times in my code over the years. It's become a very important method, and is much different from what it used to be. I wanted to provide the most up-to-date version in case anyone was looking to add CopyMost or a modal average as a LINQ extension.
One thing I did not think I would need was a tiebreaker of some sort. I have now overloaded the method to include a tiebreaker.
public static K CopyMost<T, K>(this IEnumerable<T> records, Func<T, K> propertySelector, Func<K, bool> tieBreaker)
{
    var grouped = records.GroupBy(x => propertySelector(x)).Select(x => new { Group = x, Count = x.Count() });
    var maxCount = grouped.Max(x => x.Count);
    var subGroup = grouped.Where(x => x.Count == maxCount);
    if (subGroup.Count() == 1)
        return subGroup.Single().Group.Key;
    else
        return subGroup.Where(x => tieBreaker(x.Group.Key)).Single().Group.Key;
}
The above assumes the user enters a legitimate tiebreaker condition. You may want to check whether the tiebreaker actually selects a value and, if not, throw an exception (a sketch of that guard follows the method below). And here's my normal method.
public static K CopyMost<T, K>(this IEnumerable<T> records, Func<T, K> propertySelector)
{
    return records.GroupBy(x => propertySelector(x)).OrderByDescending(x => x.Count()).Select(x => x.Key).First();
}
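As a sketch of that guard (my own addition, using a hypothetical CopyMostChecked name, not part of the original methods), the tie-handling branch could throw when the tiebreaker matches zero or several of the tied keys:

public static K CopyMostChecked<T, K>(this IEnumerable<T> records, Func<T, K> propertySelector, Func<K, bool> tieBreaker)
{
    var grouped = records.GroupBy(propertySelector)
                         .Select(g => new { g.Key, Count = g.Count() })
                         .ToList();
    var maxCount = grouped.Max(g => g.Count);
    var candidates = grouped.Where(g => g.Count == maxCount).ToList();

    if (candidates.Count == 1)
        return candidates[0].Key;

    // There is a tie: the tiebreaker must select exactly one of the tied keys.
    var chosen = candidates.Where(g => tieBreaker(g.Key)).ToList();
    if (chosen.Count != 1)
        throw new InvalidOperationException("Tiebreaker did not select exactly one of the tied values.");
    return chosen[0].Key;
}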
I've seen the following:
Random element of List<T> from LINQ SQL
What I'm doing now - to get three random elements - is the following:
var user1 = users.ElementAt( rand.Next( users.Count() ) );
var user2 = users.Where(u => !u.Equals(user1)).ElementAt( rand.Next( users.Count() ) );
var user3 = users.Where(u => !u.Equals(user1) && !u.Equals(user2)).ElementAt( rand.Next( users.Count() ) );
But this is obviously unwieldy and inefficient. How can I do this elegantly and efficiently with one trip to the database?
EDIT: Based on below answer, I made the following extension method:
public static IQueryable<T> SelectRandom<T>(this IQueryable<T> list, int count) {
    return list.OrderBy(_ => Guid.NewGuid()).Take(count);
}
var result = users.SelectRandom(3);
BUT it seems like this would be inefficient for large datasets. Another proposal is to take the .Count() of the IQueryable, select n random numbers that fall within that result, and then shoot a query to the db with the selected random indices... but the Count() might be expensive here.
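For reference, here is a sketch of that second idea (my own, not taken from the answers): take the Count(), pick distinct random positions, and page each one out with Skip. It assumes the source query already has a stable ordering (for example users.OrderBy(u => u.Id), where Id is a hypothetical key), since LINQ to SQL requires ordered input for Skip, and it still costs one COUNT round trip plus one small query per chosen position.

public static List<T> TakeRandomByIndex<T>(this IOrderedQueryable<T> source, int count, Random rand)
{
    int total = source.Count();                     // one COUNT round trip
    var positions = new HashSet<int>();
    while (positions.Count < Math.Min(count, total))
        positions.Add(rand.Next(total));            // distinct random row positions

    var result = new List<T>();
    foreach (int pos in positions)
        result.Add(source.Skip(pos).First());       // one small query per position
    return result;
}

var threeUsers = users.OrderBy(u => u.Id).TakeRandomByIndex(3, rand);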
The following should work:
users.OrderBy(_ => Guid.NewGuid()).Take(3)
This sorts the rows by a value that is different each time and then retrieves the first 3 elements from the users table.
Compared to AD.Net's answer... well, you'd require a list of user ids generated randomly, and that answer doesn't suggest a way to do that.
Assuming I have the following string array:
string[] str = new string[] { "max", "min", "avg", "max", "avg", "min" };
Is it possible to use LINQ to get a list of indexes that match one string?
As an example, I would like to search for the string "avg" and get a list containing
2, 4
meaning that "avg" can be found at str[2] and str[4].
.Select has a seldom-used overload that produces an index. You can use it like this:
str.Select((s, i) => new { i, s })
   .Where(t => t.s == "avg")
   .Select(t => t.i)
   .ToList()
The result will be a list containing 2 and 4.
You can do it like this:
str.Select((v, i) => new { Index = i, Value = v }) // Pair up values and indexes
   .Where(p => p.Value == "avg")                   // Do the filtering
   .Select(p => p.Index);                          // Keep the index and drop the value
The key step is using the overload of Select that supplies the current index to your functor.
You can use the overload of Enumerable.Select that passes the index and then use Enumerable.Where on an anonymous type:
List<int> result = str.Select((s, index) => new { s, index })
                      .Where(x => x.s == "avg")
                      .Select(x => x.index)
                      .ToList();
If you just want to find the first/last index, there are also the built-in methods Array.IndexOf and Array.LastIndexOf (or List<T>.IndexOf and List<T>.LastIndexOf if you have a list):
int firstIndex = Array.IndexOf(str, "avg");
int lastIndex = Array.LastIndexOf(str, "avg");
(or you can use the overloads that take a start index to specify the start position)
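For example, with the same str array, the start-index overload lets you continue searching past a match:

int first = Array.IndexOf(str, "avg");             // 2
int next = Array.IndexOf(str, "avg", first + 1);   // 4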
First off, your code doesn't actually iterate over the list twice, it only iterates it once.
That said, your Select is really just getting a sequence of all of the indexes; that is more easily done with Enumerable.Range:
var result = Enumerable.Range(0, str.Length)
                       .Where(i => str[i] == "avg")
                       .ToList();
Understanding why the list isn't actually iterated twice will take some getting used to. I'll try to give a basic explanation.
You should think of most of the LINQ methods, such as Select and Where, as a pipeline. Each method does some tiny bit of work. In the case of Select you give it a method, and it essentially says, "Whenever someone asks me for my next item I'll first ask my input sequence for an item, then use the method I have to convert it into something else, and then give that item to whoever is using me." Where, more or less, is saying, "Whenever someone asks me for an item I'll ask my input sequence for an item; if the function says it's good I'll pass it on, if not I'll keep asking for items until I get one that passes."
So when you chain them, what happens is ToList asks for the first item, it goes to Where to ask it for its first item, Where goes to Select and asks it for its first item, and Select goes to the list to ask it for its first item. The list then provides its first item. Select then transforms that item into what it needs to spit out (in this case, just the int 0) and gives it to Where. Where takes that item and runs its function, which determines that it's true, and so spits out 0 to ToList, which adds it to the list. That whole thing then happens once for each remaining item. This means that Select will end up asking for each item from the list exactly once, and it will feed each of its results directly to Where, which will feed the results that "pass the test" directly to ToList, which stores them in a list. All of the LINQ methods are carefully designed to only ever iterate the source sequence once (when they are iterated once).
Note that, while this may seem complicated at first, it's actually pretty easy for the computer to do all of this. It's not as expensive as it may appear.
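As a rough illustration of that single pass (a sketch of my own, not part of the original answer), you can log inside the pipeline and watch each index flow through Select and Where before the next one is pulled:

var result = Enumerable.Range(0, str.Length)
    .Select(i => { Console.WriteLine("Select saw " + i); return i; })
    .Where(i => { Console.WriteLine("Where tested " + i); return str[i] == "avg"; })
    .ToList();
// Output interleaves "Select saw 0", "Where tested 0", "Select saw 1", ...
// showing that the source is enumerated exactly once.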
While you could use a combination of Select and Where, this is likely a good candidate for making your own function:
public static IEnumerable<int> Indexes<T>(this IEnumerable<T> source, T itemToFind)
{
    if (source == null)
        throw new ArgumentNullException("source");

    int i = 0;
    foreach (T item in source)
    {
        if (object.Equals(itemToFind, item))
        {
            yield return i;
        }
        i++;
    }
}
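Usage would then be something like this (assuming the method is defined as an extension in a static class that is in scope):

var indexes = str.Indexes("avg").ToList();   // 2 and 4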
You need a combined select-and-where operator. Compared to the accepted answer, this will be cheaper, since it won't require intermediate anonymous objects:
public static IEnumerable<TResult> SelectWhere<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, bool> filter, Func<TSource, int, TResult> selector)
{
    int index = -1;
    foreach (var s in source)
    {
        checked { ++index; }
        if (filter(s))
            yield return selector(s, index);
    }
}
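For the original question, usage might look like this (a sketch):

var indexes = str.SelectWhere(s => s == "avg", (s, i) => i).ToList();   // 2 and 4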
I have a set of results coming back from a LINQ to SQL query. Each result has a Name and a SeriesId. The SeriesId can be any value from 1 to N.
So the results might initially come out of the database like this (i.e. any order):
FundA1
FundA6
FundA4
FundC6
FundC3
FundC4
FundB2
FundB7
FundB8
FundB6
I need to get these ordered first by Name, and then by SeriesId but I need to show SeriesId == 6 first, then the rest in any order.
So for example, I need
**FundA6**
FundA1
FundA4
**FundB6**
FundB2
FundB7
FundB8
**FundC6**
FundC3
FundC4
I know it's possible for me to order by Name and then SeriesId by doing this:
return queryable.OrderBy(f => f.Name).ThenBy(s => s.SeriesId);
but this will order the SeriesId by the lowest value first. Is there a way for me to override this default functionality by specifying that it should order by SeriesId starting at 6 rather than 1?
Try this:
return queryable.OrderBy(f => f.Name)
                .ThenBy(f => f.SeriesId == 6 ? 0 : 1)
                .ThenBy(s => s.SeriesId);
That relies on the conditional mapping SeriesId == 6 to 0, so those rows order earlier than everything else - I think it will work... it would in LINQ to Objects, at least.
return queryable
    .OrderBy(f => f.Name)
    .ThenByDescending(f => f.SeriesId == 6)
    .ThenBy(f => f.SeriesId);
Create your own comparer, and give it as a second parameter to OrderBy or ThenBy.
The way you use OrderBy, you rely on the default comparer, which compares the keys normally. But you can create your own that will always put 6 first.
PS: Yes, this won't work directly in LINQ2SQL. But since you are loading all the values anyway, you can first load them and then sort in memory.
Here's an example:
class Sample
{
    string[] strings = new[] { "123", "123456", "12345" };

    public void SampleMethod()
    {
        var res = strings.AsEnumerable().OrderBy(s => s.Length, new MyComparer());
    }

    class MyComparer : IComparer<int>
    {
        public int Compare(int x, int y)
        {
            if (x == y) return 0;
            if (x == 6) return -1;   // 6 always sorts first
            if (y == 6) return 1;
            return x.CompareTo(y);
        }
    }
}
.AsEnumerable() is needed so that the ordering runs in memory via LINQ to Objects instead of being translated by LINQ2SQL.
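Applied to the question's scenario, it might look something like this (a sketch; the Fund type, its properties, and the funds variable are assumed):

class SeriesIdComparer : IComparer<int>
{
    public int Compare(int x, int y)
    {
        if (x == y) return 0;
        if (x == 6) return -1;   // SeriesId 6 always sorts first
        if (y == 6) return 1;
        return x.CompareTo(y);
    }
}

// funds is the already-loaded result set (e.g. an IEnumerable<Fund>)
var ordered = funds.OrderBy(f => f.Name)
                   .ThenBy(f => f.SeriesId, new SeriesIdComparer());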
I have 2 list objects: one is just a list of ints, the other is a list of objects, but the objects have an ID property.
What I want to do is sort the list of objects by ID in the same sort order as the list of ints.
I've been playing around for a while now trying to get it working, but so far no joy.
Here is what I have so far...
//**************************
//*** Randomize the list ***
//**************************
if (Session["SearchResultsOrder"] != null)
{
    // save the session as an int list
    List<int> IDList = new List<int>((List<int>)Session["SearchResultsOrder"]);
    // the saved list session exists, make sure the list is ordered by this
    foreach (var i in IDList)
    {
        SearchData.ReturnedSearchedMembers.OrderBy(x => x.ID == i);
    }
}
else
{
    // before any sorts, randomize the results - this mixes it up a bit, as before it would order the results by member registration date
    List<Member> RandomList = new List<Member>(SearchData.ReturnedSearchedMembers);
    SearchData.ReturnedSearchedMembers = GloballyAvailableMethods.RandomizeGenericList<Member>(RandomList, RandomList.Count).ToList();
    // save the order of these results so they can be restored back during postback
    List<int> SearchResultsOrder = new List<int>();
    SearchData.ReturnedSearchedMembers.ForEach(x => SearchResultsOrder.Add(x.ID));
    Session["SearchResultsOrder"] = SearchResultsOrder;
}
The whole point of this is so when a user searches for members, initially they display in a random order, then if they click page 2, they remain in that order and the next 20 results display.
I have been reading about the IComparer I can use as a parameter in the LINQ OrderBy clause, but I can't find any simple examples.
I'm hoping for an elegant, very simple LINQ-style solution; well, I can always hope.
Any help is most appreciated.
Another LINQ-approach:
var orderedByIDList = from i in ids
                      join o in objectsWithIDs
                          on i equals o.ID
                      select o;
One way of doing it:
List<int> order = ....;
List<Item> items = ....;
Dictionary<int,Item> d = items.ToDictionary(x => x.ID);
List<Item> ordered = order.Select(i => d[i]).ToList();
Not an answer to this exact question, but if you have two arrays, there is an overload of Array.Sort that takes the array to sort, and an array to use as the 'key'
https://msdn.microsoft.com/en-us/library/85y6y2d3.aspx
Array.Sort Method (Array, Array)
Sorts a pair of one-dimensional Array objects (one contains the keys
and the other contains the corresponding items) based on the keys in
the first Array using the IComparable implementation of each key.
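For example (a small sketch of that overload):

int[] keys = { 3, 1, 2 };
string[] items = { "c", "a", "b" };
Array.Sort(keys, items);   // sorts both arrays by the keys
// keys is now { 1, 2, 3 } and items is now { "a", "b", "c" }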
Join is the best candidate if you want to match on the exact integer (if no match is found you get an empty sequence). If you merely want to take the sort order from the other list (and both lists have the same number of elements), you can use Zip.
var result = objects.Zip(ints, (o, i) => new { o, i })
                    .OrderBy(x => x.i)
                    .Select(x => x.o);
Pretty readable.
Here is an extension method which encapsulates Simon D.'s response for lists of any type.
public static IEnumerable<TResult> SortBy<TResult, TKey>(this IEnumerable<TResult> sortItems,
                                                         IEnumerable<TKey> sortKeys,
                                                         Func<TResult, TKey> matchFunc)
{
    return sortKeys.Join(sortItems,
                         k => k,
                         matchFunc,
                         (k, i) => i);
}
Usage is something like:
var sorted = toSort.SortBy(sortKeys, i => i.Key);
One possible solution:
myList = myList.OrderBy(x => Ids.IndexOf(x.Id)).ToList();
Note: use this if you are working with in-memory lists; it doesn't work for the IQueryable type, as IQueryable does not contain a definition for IndexOf.
docs = docs.OrderBy(d => docsIds.IndexOf(d.Id)).ToList();
With the following data
string[] data = { "a", "a", "b" };
I'd very much like to find duplicates and get this result:
a
I tried the following code
var a = data.Distinct().ToList();
var b = a.Except(a).ToList();
Obviously this didn't work; I can see what is happening above, but I'm not sure how to fix it.
When runtime is no problem, you could use
var duplicates = data.Where(s => data.Count(t => t == s) > 1).Distinct().ToList();
Good old O(n^2) =)
Edit: Now for a better solution. =)
If you define a new extension method like
static class Extensions
{
    public static IEnumerable<T> Duplicates<T>(this IEnumerable<T> input)
    {
        HashSet<T> hash = new HashSet<T>();
        foreach (T item in input)
        {
            if (!hash.Contains(item))
            {
                hash.Add(item);
            }
            else
            {
                yield return item;
            }
        }
    }
}
you can use
var duplicates = data.Duplicates().Distinct().ToArray();
Use the group by stuff; the performance of these methods is reasonably good. The only concern is a big memory overhead if you are working with large data sets.
from g in (from x in data group x by x)
where g.Count() > 1
select g.Key;
Or, if you prefer extension methods:
data.GroupBy(x => x)
    .Where(x => x.Count() > 1)
    .Select(x => x.Key)
Where Count() == 1, those are your distinct items, and where Count() > 1, those are items that occur more than once.
Since LINQ is kind of lazy, if you don't want to reevaluate your computation you can do this:
var g = (from x in data group x by x).ToList(); // grouping result

// duplicates
var duplicates = from x in g
                 where x.Count() > 1
                 select x.Key;

// distinct
var distinct = from x in g
               where x.Count() == 1
               select x.Key;
When creating the grouping, a set of sets will be created. Assuming it's a set with O(1) insertion, the running time of the group-by approach is O(n). The incurred cost for each operation is somewhat high, but it should equate to near-linear performance.
Sort the data, iterate through it, and remember the last item. When the current item is the same as the last, it's a duplicate. This can be easily implemented either iteratively or using a lambda expression in O(n*log(n)) time; a minimal sketch follows.
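Here is a minimal sketch of that idea (my own, reusing the data array from the question and keeping each duplicate value only once):

var sorted = data.OrderBy(x => x).ToArray();   // O(n log n)
var duplicates = new List<string>();
for (int i = 1; i < sorted.Length; i++)
{
    // equal to the previous element means it's a duplicate;
    // the second check avoids adding the same value twice
    if (sorted[i] == sorted[i - 1] && (duplicates.Count == 0 || duplicates[duplicates.Count - 1] != sorted[i]))
        duplicates.Add(sorted[i]);
}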