How to sort numbers into a desired order in C#?

For each element whose difference with the former element is less than three, order them in descending order.
numbers = {1,2,3,4,5,6,13,18,25,30,31,32,33}
desired = {6,5,4,3,2,1,13,18,25,33,32,31,30}
For example, in the numbers list, because the difference between 6 and 5 is less than 3, they are sorted in descending order.

You can use LINQ:
var numbers = new int[] { 1, 2, 3, 4, 5, 6, 13, 18, 25, 30, 31, 32, 33 };
var result = numbers
    .GroupAdjacent((x, y) => y - x < 3)
    .SelectMany(g => g.OrderByDescending(x => x))
    .ToArray();
// result == { 6, 5, 4, 3, 2, 1, 13, 18, 25, 33, 32, 31, 30 }
with
static IEnumerable<IEnumerable<T>> GroupAdjacent<T>(
    this IEnumerable<T> source, Func<T, T, bool> adjacent)
{
    var g = new List<T>();
    foreach (var x in source)
    {
        if (g.Count != 0 && !adjacent(g.Last(), x))
        {
            yield return g;
            g = new List<T>();
        }
        g.Add(x);
    }
    // avoid yielding an empty group when the source is empty
    if (g.Count != 0)
        yield return g;
}

Hello, that is too complex for me.
Since I don't have Visual Studio at hand, I wrote the code in JavaScript; it covers your requirement.
<script>
var a = [1, 2, 3, 4, 5, 6, 13, 18, 25, 30, 31, 32, 33];
alert(custom_sort(a));

function custom_sort(a) {
    // Split the input into groups: an element stays in the current group
    // while its difference to the previous element is less than 3.
    var objArrayList = {};
    var index = 0;
    objArrayList[index] = [a[0]];
    for (var i = 1; i < a.length; i++) {
        if (a[i] - a[i - 1] < 3) {
            objArrayList[index].push(a[i]);
        } else {
            index++;
            objArrayList[index] = [a[i]];
        }
    }
    // Groups of adjacent values (difference < 3) are sorted descending;
    // single-value groups are left as they are (sorting ascending is a no-op).
    var new_array = new Array();
    for (var obj in objArrayList) {
        var array_val = objArrayList[obj];
        if (array_val[1] - array_val[0] < 3) {
            new_array = new_array.concat(sort_desc(array_val));
        } else {
            new_array = new_array.concat(sort_asc(array_val));
        }
    }
    return new_array;
}

function sort_asc(array_val) {
    // exchange sort over all pairs, ascending
    for (var i = 0; i < array_val.length; i++) {
        for (var j = 0; j < array_val.length; j++) {
            if (array_val[i] < array_val[j]) {
                var temp = array_val[i];
                array_val[i] = array_val[j];
                array_val[j] = temp;
            }
        }
    }
    return array_val;
}

function sort_desc(array_val) {
    // exchange sort over all pairs, descending
    for (var i = 0; i < array_val.length; i++) {
        for (var j = 0; j < array_val.length; j++) {
            if (array_val[i] > array_val[j]) {
                var temp = array_val[i];
                array_val[i] = array_val[j];
                array_val[j] = temp;
            }
        }
    }
    return array_val;
}
</script>

To me such an algorithm looks very non-standard, so I feel a few standard algorithms should be combined to achieve good results in terms of complexity. There is one very important question: is the initial array already sorted?
Either way, you can start by splitting the array into sub-arrays:
Split the array into multiple sub-arrays (with an indicator for each one saying whether it should be sorted
ASC or DESC).
Then, if the initial array is not sorted:
Sort the sub-arrays one by one using QuickSort, so you get a set of sorted sub-arrays.
Then sort the sub-arrays by the first element of each, so the order of the sub-arrays is preserved (I hope I described it clearly enough).
But if the initial array was already sorted:
Simply merge the sub-arrays in their originally preserved order (see the sketch below).
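For the already-sorted case described above, a minimal sketch of that split-and-merge step could look like the following (SortRuns is an illustrative name of mine; the "difference with the former element is less than three" rule comes from the question, and single-element runs are left as they are):
// Sketch: assumes the input is already sorted ascending.
// Splits the input into runs whose neighbouring values differ by less than 3
// and emits each run in descending order.
static IEnumerable<int> SortRuns(IEnumerable<int> sortedNumbers)
{
    var run = new List<int>();
    foreach (var n in sortedNumbers)
    {
        if (run.Count > 0 && n - run[run.Count - 1] >= 3)
        {
            run.Reverse();                         // the run was collected ascending
            foreach (var x in run) yield return x;
            run = new List<int>();
        }
        run.Add(n);
    }
    run.Reverse();
    foreach (var x in run) yield return x;
}
Usage would then simply be SortRuns(numbers).ToArray(), which yields { 6, 5, 4, 3, 2, 1, 13, 18, 25, 33, 32, 31, 30 } for the example input.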

Related

Get the max count of an array inside an array?

I have arrays inside an array and I want to get the highest count among those arrays.
List<int> badnumber = new List<int>() { 5, 4, 2, 15 };
int lower = 1;
int upper = 10;
int count = 0;
List<int> goodnumber = new List<int>();
List<List<int>> myList = new List<List<int>>();
for (int i = lower; i <= upper; i++)
{
    if (!badnumber.Contains(i))
    {
        if (!goodnumber.Contains(i))
            goodnumber.Add(i);
    }
    else
    {
        myList.Add(goodnumber);
        goodnumber = new List<int>();
    }
    if (i == upper)
    {
        myList.Add(goodnumber);
    }
}
In myList the values look like this:
Array 1 : { 1 }
Array 2 : { 3 }
Array 3 : { 0 }
Array 4 : { 6, 7, 8, 9, 10 }
I want to get the count of the longest sequence, which is Array 4, and return its count, which is 5.
How would I do that?
Use the following sample:
var maxCount = myList.Max(l => l.Count);
Try the following:
List<List<int>> myList = new List<List<int>>() {
    new List<int> { 1 },
    new List<int> { 3 },
    new List<int> { 0 },
    new List<int> { 6, 7, 8, 9, 10 }
};
int results = myList.Max(x => x.Count);
Basically your task boils down to the following:
How to get the greatest number in a list
In order to achieve that, you have to iterate the (outer) list, get the member of each element that you want to compare (in your case the Count) and check whether it is greater than the current max:
int max = 0;
foreach (var l in myList)
{
    if (l.Count > max)
        max = l.Count;
}
Or even simpler using LINQ, see jdweng's answer.
Try the lambda below to get the index position along with the count:
int index = 0;
var result = myList
    .Select(x => new { indexPos = ++index, x.Count })
    .OrderByDescending(x => x.Count)
    .First();
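If you prefer to avoid mutating an external counter, the indexed overload of Select gives the same information; a small sketch, assuming the same myList as above:
var best = myList
    .Select((inner, index) => new { Index = index, inner.Count })   // Index is zero-based
    .OrderByDescending(x => x.Count)
    .First();
// best.Count == 5 and best.Index == 3 for the four lists shown above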

The union of the intersects of the 2 set combinations of a sequence of sequences

How can I find the set of items that occur in 2 or more sequences in a sequence of sequences?
In other words, I want the distinct values that occur in at least 2 of the passed in sequences.
Note:
This is not the intersect of all sequences but rather, the union of the intersect of all pairs of sequences.
Note 2:
This does not include the pair (2-combination) of a sequence with itself. That would be silly.
I have made an attempt myself,
public static IEnumerable<T> UnionOfIntersects<T>(
    this IEnumerable<IEnumerable<T>> source)
{
    var pairs =
        from s1 in source
        from s2 in source
        select new { s1, s2 };

    var intersects = pairs
        .Where(p => p.s1 != p.s2)
        .Select(p => p.s1.Intersect(p.s2));

    return intersects.SelectMany(i => i).Distinct();
}
but I'm concerned that this might be sub-optimal; I think it includes the intersects of pair A, B and pair B, A, which seems inefficient. I also think there might be a more efficient way to compound the sets as they are iterated.
I include some example input and output below:
{ { 1, 1, 2, 3, 4, 5, 7 }, { 5, 6, 7 }, { 2, 6, 7, 9 } , { 4 } }
returns
{ 2, 4, 5, 6, 7 }
and
{ { 1, 2, 3} } or { {} } or { }
returns
{ }
I'm looking for the best combination of readability and potential performance.
EDIT
I've performed some initial testing of the current answers; my code is here. Output below.
Original valid:True
DoomerOneLine valid:True
DoomerSqlLike valid:True
Svinja valid:True
Adricadar valid:True
Schmelter valid:True
Original 100000 iterations in 82ms
DoomerOneLine 100000 iterations in 58ms
DoomerSqlLike 100000 iterations in 82ms
Svinja 100000 iterations in 1039ms
Adricadar 100000 iterations in 879ms
Schmelter 100000 iterations in 9ms
At the moment, it looks as if Tim Schmelter's answer performs better by at least an order of magnitude.
// init sequences
var sequences = new int[][]
{
    new int[] { 1, 2, 3, 4, 5, 7 },
    new int[] { 5, 6, 7 },
    new int[] { 2, 6, 7, 9 },
    new int[] { 4 }
};
One-line way:
var result = sequences
    .SelectMany(e => e.Distinct())
    .GroupBy(e => e)
    .Where(e => e.Count() > 1)
    .Select(e => e.Key);
// result is { 2 4 5 7 6 }
Sql-like way (with ordering):
var result = (
    from e in sequences.SelectMany(e => e.Distinct())
    group e by e into g
    where g.Count() > 1
    orderby g.Key
    select g.Key);
// result is { 2 4 5 6 7 }
Maybe the fastest code (but not readable), complexity O(N):
// 'array' is the int[][] input (the sequences)
var dic = new Dictionary<int, int>();
var subHash = new HashSet<int>();
int length = array.Length;
for (int i = 0; i < length; i++)
{
    subHash.Clear();
    int subLength = array[i].Length;
    for (int j = 0; j < subLength; j++)
    {
        int n = array[i][j];
        if (!subHash.Contains(n))
        {
            // first time we see n in this sub-array
            subHash.Add(n);
            int counter;
            if (dic.TryGetValue(n, out counter))
            {
                // n already occurred in an earlier sub-array
                dic[n] = counter + 1;
            }
            else
            {
                // first occurrence overall
                dic[n] = 1;
            }
        }
        // else: duplicate within the same sub-array, skip it
    }
}
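The dictionary only counts in how many sub-arrays each value appears; extracting the final result is not shown above, so here is one possible way (a sketch):
// values that occur in at least two different sub-arrays
var result = dic.Where(kvp => kvp.Value > 1)
                .Select(kvp => kvp.Key)
                .ToList();
// for the example input the result contains 2, 4, 5, 6 and 7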
This should be very close to optimal - how "readable" it is depends on your taste. In my opinion it is also the most readable solution.
var seenElements = new HashSet<T>();
var repeatedElements = new HashSet<T>();

foreach (var list in source)
{
    foreach (var element in list.Distinct())
    {
        if (seenElements.Contains(element))
        {
            repeatedElements.Add(element);
        }
        else
        {
            seenElements.Add(element);
        }
    }
}
return repeatedElements;
You can skip already intersected sequences; this way it will be a little faster.
public static IEnumerable<T> UnionOfIntersects<T>(this IEnumerable<IEnumerable<T>> source)
{
    var result = new List<T>();
    var sequences = source.ToList();
    for (int sequenceIdx = 0; sequenceIdx < sequences.Count; sequenceIdx++)
    {
        var sequence = sequences[sequenceIdx];
        for (int targetSequenceIdx = sequenceIdx + 1; targetSequenceIdx < sequences.Count; targetSequenceIdx++)
        {
            var targetSequence = sequences[targetSequenceIdx];
            var intersections = sequence.Intersect(targetSequence);
            result.AddRange(intersections);
        }
    }
    return result.Distinct();
}
How does it work?
Input: { /*0*/ { 1, 2, 3, 4, 5, 7 }, /*1*/ { 5, 6, 7 }, /*2*/ { 2, 6, 7, 9 }, /*3*/ { 4 } }
Step 0: Intersect 0 with 1..3
Step 1: Intersect 1 with 2..3 (0 and 1 have already been intersected)
Step 2: Intersect 2 with 3 (0 with 2 and 1 with 2 have already been intersected)
Return: Distinct elements.
Result: { 2, 4, 5, 6, 7 }
You can test it with the code below:
var lists = new List<List<int>>
{
    new List<int> { 1, 2, 3, 4, 5, 7 },
    new List<int> { 5, 6, 7 },
    new List<int> { 2, 6, 7, 9 },
    new List<int> { 4 }
};
var result = lists.UnionOfIntersects();
You can try this approach; it might be more efficient and it also allows you to specify the minimum intersection count and the comparer used:
public static IEnumerable<T> UnionOfIntersects<T>(this IEnumerable<IEnumerable<T>> source,
    int minIntersectionCount,
    IEqualityComparer<T> comparer = null)
{
    if (comparer == null) comparer = EqualityComparer<T>.Default;
    foreach (T item in source.SelectMany(s => s).Distinct(comparer))
    {
        int containedInHowManySequences = 0;
        foreach (IEnumerable<T> seq in source)
        {
            bool contained = seq.Contains(item, comparer);
            if (contained) containedInHowManySequences++;
            if (containedInHowManySequences == minIntersectionCount)
            {
                yield return item;
                break;
            }
        }
    }
}
Some explanatory words:
It enumerates all unique items in all sequences. Since Distinct uses a set, this should be pretty efficient, and it can help to speed things up when there are many duplicates across the sequences.
The inner loop just looks into every sequence to see whether the unique item is contained. Therefore it uses Enumerable.Contains, which stops execution as soon as the item is found (so duplicates are no issue).
If the intersection count reaches the minimum intersection count, the item is yielded and the next (unique) item is checked.
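A small usage sketch for the example data from the question (using the sequences variable defined earlier in the thread; minIntersectionCount: 2 reproduces the "at least 2 sequences" requirement):
var result = sequences.UnionOfIntersects(minIntersectionCount: 2).ToList();
// result contains 2, 4, 5, 6 and 7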
That should nail it:
int[][] test = { new int[] { 1, 2, 3, 4, 5, 7 }, new int[] { 5, 6, 7 }, new int[] { 2, 6, 7, 9 }, new int[] { 4 } };
var result = test.SelectMany(a => a.Distinct())
    .GroupBy(x => x)
    .Where(g => g.Count() > 1)
    .Select(y => y.Key)
    .ToList();
First you make sure there are no duplicates within each sequence. Then you join all sequences into a single sequence and look for duplicates, as e.g. here.

Split array with LINQ

Assuming I have a list
var listOfInt = new List<int> { 1, 2, 3, 4, 7, 8, 12, 13, 14 };
How can I use LINQ to obtain a list of lists as follows:
{{1, 2, 3, 4}, {7, 8}, {12, 13, 14}}
So I have to take the consecutive values and group them into lists.
You can create an extension method (I omitted the source check here) which iterates the source and creates groups of consecutive items. If the next item in the source is not consecutive, the current group is yielded:
public static IEnumerable<List<int>> ToConsecutiveGroups(
    this IEnumerable<int> source)
{
    using (var iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            yield break;
        }
        else
        {
            int current = iterator.Current;
            List<int> group = new List<int> { current };

            while (iterator.MoveNext())
            {
                int next = iterator.Current;
                if (next < current || current + 1 < next)
                {
                    yield return group;
                    group = new List<int>();
                }

                current = next;
                group.Add(current);
            }

            if (group.Any())
                yield return group;
        }
    }
}
Usage is simple:
var listOfInt = new List<int> { 1, 2, 3, 4, 7, 8, 12, 13, 14 };
var groups = listOfInt.ToConsecutiveGroups();
Result:
[
[ 1, 2, 3, 4 ],
[ 7, 8 ],
[ 12, 13, 14 ]
]
UPDATE: Here is a generic version of this extension method, which accepts a predicate for verifying whether two values should be considered consecutive:
public static IEnumerable<List<T>> ToConsecutiveGroups<T>(
    this IEnumerable<T> source, Func<T, T, bool> isConsecutive)
{
    using (var iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            yield break;
        }
        else
        {
            T current = iterator.Current;
            List<T> group = new List<T> { current };

            while (iterator.MoveNext())
            {
                T next = iterator.Current;
                if (!isConsecutive(current, next))
                {
                    yield return group;
                    group = new List<T>();
                }

                current = next;
                group.Add(current);
            }

            if (group.Any())
                yield return group;
        }
    }
}
Usage is simple:
var result = listOfInt.ToConsecutiveGroups((x,y) => (x == y) || (x == y - 1));
This works for both sorted and unsorted lists:
var listOfInt = new List<int> { 1, 2, 3, 4, 7, 8, 12, 13 };

int index = 0;
var result = listOfInt.Zip(listOfInt
        .Concat(listOfInt.Reverse<int>().Take(1))
        .Skip(1),
    (v1, v2) =>
        new
        {
            V = v1,
            G = (v2 - v1) != 1 ? index++ : index
        })
    .GroupBy(x => x.G, x => x.V, (k, l) => l.ToList())
    .ToList();
The external index builds an index of consecutive groups whose values differ by 1. Then you can simply GroupBy with respect to this index.
To clarify the solution, here is how the collection looks without the grouping (GroupBy commented out):
{ V = 1, G = 0 }, { V = 2, G = 0 }, { V = 3, G = 0 }, { V = 4, G = 0 },
{ V = 7, G = 1 }, { V = 8, G = 1 }, { V = 12, G = 2 }, { V = 13, G = 2 }
Assuming your input is in order, the following will work:
var grouped = input.Select((n, i) => new { n, d = n - i }).GroupBy(p => p.d, p => p.n);
It won't work if your input is e.g. { 1, 2, 3, 999, 5, 6, 7 }.
You'd get { { 1, 2, 3, 5, 6, 7 }, { 999 } }.
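To see why the n - i trick works for the sorted sample input, the intermediate differences can be inspected (a sketch; the variable names are illustrative):
var sorted = new List<int> { 1, 2, 3, 4, 7, 8, 12, 13, 14 };

// n - i stays constant within each consecutive run:
// values: 1 2 3 4 7 8 12 13 14
// n - i:  1 1 1 1 3 3  6  6  6
var groups = sorted
    .Select((n, i) => new { n, d = n - i })
    .GroupBy(p => p.d, p => p.n)
    .Select(g => g.ToList())
    .ToList();
// groups: { 1, 2, 3, 4 }, { 7, 8 }, { 12, 13, 14 }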
This works:
var results =
    listOfInt
        .Skip(1)
        .Aggregate(
            new List<List<int>>(new[] { listOfInt.Take(1).ToList() }),
            (a, x) =>
            {
                if (a.Last().Last() + 1 == x)
                {
                    a.Last().Add(x);
                }
                else
                {
                    a.Add(new List<int>(new[] { x }));
                }
                return a;
            });
I get this result:
[ [ 1, 2, 3, 4 ], [ 7, 8 ], [ 12, 13, 14 ] ]

Iterating through c# array & placing objects sequentially into other arrays

Okay, so this seems simple, but I can't think of a straightforward solution.
Basically, I have an object array in C# that contains, say, 102 elements. I also have 4 other empty arrays. I want to iterate through the original array and distribute the first 100 elements evenly, then place elements 101 and 102 into the 1st and 2nd new arrays respectively.
int i = 1, a = 0, b = 0, c = 0, d = 0;
foreach (ReviewStatus data in routingData)
{
    if (i == 1)
    {
        threadOneWork[a] = data;
        a++;
    }
    if (i == 2)
    {
        threadTwoWork[b] = data;
        b++;
    }
    if (i == 3)
    {
        threadThreeWork[c] = data;
        c++;
    }
    if (i == 4)
    {
        threadFourWork[d] = data;
        d++;
        i = 0;
    }
    i++;
}
Now, the above code definitely works, but I was curious: does anybody know of a better way to do this?
var workArrays = new[] {
    threadOneWork,
    threadTwoWork,
    threadThreeWork,
    threadFourWork,
};
for (int i = 0; i < routingData.Length; i++) {
    workArrays[i % 4][i / 4] = routingData[i];
}
Put the four arrays into an array of arrays, and use i%4 as an index. Assuming that thread###Work arrays have enough space to store the data, you can do this:
var tw = new[] { threadOneWork, threadTwoWork, threadThreeWork, threadFourWork };
var i = 0;
foreach (ReviewStatus data in routingData) {
    tw[i % 4][i / tw.Length] = data;
    i++;
}
LINQ is your friend! Use modulo to group the items by the total number of arrays, in your case 4.
For example, the code below splits them up into four different lists:
var Items = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
Items.Select((i, index) => new {
        category = index % 4,
        value = i
    })
    .GroupBy(itm => itm.category, itm => itm.value)
    .ToList()
    .ForEach(gr => Console.WriteLine("Group {0} : {1}", gr.Key, string.Join(",", gr)));
/* output
Group 0 : 1,5,9
Group 1 : 2,6,10
Group 2 : 3,7
Group 3 : 4,8
*/

Merge elements in IEnumerable according to a condition

I was looking for a fast and efficient method to merge items in an array. This is my scenario. The collection is sorted by From. Adjacent elements do not necessarily differ by 1; that is, there can be gaps between the last To and the next From, but the ranges never overlap.
var list = new List<Range>();
list.Add(new Range() { From = 0, To = 1, Category = "AB" });
list.Add(new Range() { From = 2, To = 3, Category = "AB" });
list.Add(new Range() { From = 4, To = 5, Category = "AB" });
list.Add(new Range() { From = 6, To = 8, Category = "CD" });
list.Add(new Range() { From = 9, To = 11, Category = "AB" }); // 12 is missing, this is ok
list.Add(new Range() { From = 13, To = 15, Category = "AB" });
I would like the above collection to be merged in such a way that the first three elements (this number can vary, from at least 2 elements to as many as satisfy the condition) become one element. Elements with different categories cannot be merged.
new Range() { From = 0, To = 5, Category = "AB" };
So that the resulting collection would have 4 elements total.
0 - 5 AB
6 - 8 CD
9 - 11 AB // no merging here, 12 is missing
13 - 15 AB
I have a very large collection with over 2,000,000 items and I would like to do this as efficiently as possible.
Here's a generic, reusable solution rather than an ad hoc, specific solution.
(Updated based on comments)
static IEnumerable<T> Merge<T>(this IEnumerable<T> coll,
    Func<T, T, bool> canBeMerged, Func<T, T, T> mergeItems)
{
    using (IEnumerator<T> iter = coll.GetEnumerator())
    {
        if (iter.MoveNext())
        {
            T lhs = iter.Current;
            while (iter.MoveNext())
            {
                T rhs = iter.Current;
                if (canBeMerged(lhs, rhs))
                    lhs = mergeItems(lhs, rhs);
                else
                {
                    yield return lhs;
                    lhs = rhs;
                }
            }
            yield return lhs;
        }
    }
}
You will have to provide methods to determine whether two items can be merged, and to merge them.
These really should be part of your Range class, so it would be called like this:
list.Merge((l,r)=> l.IsFollowedBy(r), (l,r)=> l.CombineWith(r));
If you don't have these methods, then you would have to call it like this:
list.Merge((l, r) => l.Category == r.Category && l.To + 1 == r.From,
    (l, r) => new Range() { From = l.From, To = r.To, Category = l.Category });
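For reference, those two members could look roughly like this on a Range class matching the question (a sketch; IsFollowedBy and CombineWith are just the hypothetical names used above, and the adjacency rule mirrors the To + 1 == From check):
public class Range
{
    public int From { get; set; }
    public int To { get; set; }
    public string Category { get; set; }

    // True when 'next' starts right after this range and has the same category.
    public bool IsFollowedBy(Range next)
    {
        return Category == next.Category && To + 1 == next.From;
    }

    // Combines two adjacent ranges into one (assumes IsFollowedBy(next) is true).
    public Range CombineWith(Range next)
    {
        return new Range { From = From, To = next.To, Category = Category };
    }
}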
Well, from the statement of the problem I think it is obvious that you cannot avoid iterating through the original collection of 2 million items:
var output = new List<Range>();

var currentFrom = list[0].From;
var currentTo = list[0].To;
var currentCategory = list[0].Category;

for (int i = 1; i < list.Count; i++)
{
    var item = list[i];
    if (item.Category == currentCategory && item.From == currentTo + 1)
        currentTo = item.To;
    else
    {
        output.Add(new Range { From = currentFrom, To = currentTo,
                               Category = currentCategory });
        currentFrom = item.From;
        currentTo = item.To;
        currentCategory = item.Category;
    }
}

output.Add(new Range { From = currentFrom, To = currentTo,
                       Category = currentCategory });
I’d be interested to see if there is a solution more optimised for performance.
Edit: I assumed that the input list is sorted. If it is not, I recommend sorting it first instead of trying to fiddle this into the algorithm. Sorting is only O(n log n), but if you tried to fiddle it in, you easily get O(n²), which is worse.
list.Sort((a, b) => a.From < b.From ? -1 : a.From > b.From ? 1 : 0);
As an aside, I wrote this solution because you asked for one that is performance-optimised. To this end, I didn’t make it generic, I didn’t use delegates, I didn’t use Linq extension methods, and I used local variables of primitive types and tried to avoid accessing object fields as much as possible.
Here's another one:
IEnumerable<Range> Merge(IEnumerable<Range> input)
{
    input = input.OrderBy(r => r.Category).ThenBy(r => r.From).ThenBy(r => r.To).ToArray();
    var ignored = new HashSet<Range>();
    foreach (Range r1 in input)
    {
        if (ignored.Contains(r1))
            continue;

        Range tmp = r1;
        foreach (Range r2 in input)
        {
            if (tmp == r2 || ignored.Contains(r2))
                continue;

            Range merged;
            if (TryMerge(tmp, r2, out merged))
            {
                tmp = merged;
                ignored.Add(r1);
                ignored.Add(r2);
            }
        }
        yield return tmp;
    }
}
bool TryMerge(Range r1, Range r2, out Range merged)
{
    merged = null;
    if (r1.Category != r2.Category)
        return false;

    if (r1.To + 1 < r2.From || r2.To + 1 < r1.From)
        return false;

    merged = new Range
    {
        From = Math.Min(r1.From, r2.From),
        To = Math.Max(r1.To, r2.To),
        Category = r1.Category
    };
    return true;
}
You could use it directly:
var mergedList = Merge(list);
But that would be very inefficient if you have many items, as the complexity is O(n²). However, since only items in the same category can be merged, you can group them by category, merge each group, and then flatten the result:
var mergedList = list.GroupBy(r => r.Category)
    .Select(g => Merge(g))
    .SelectMany(g => g);
Assuming that the list is sorted and the ranges are non-overlapping, as you have stated in the question, this will run in O(n) time:
var flattenedRanges = new List<Range> { new Range(list.First()) };

foreach (var range in list.Skip(1))
{
    if (flattenedRanges.Last().To + 1 == range.From && flattenedRanges.Last().Category == range.Category)
        flattenedRanges.Last().To = range.To;
    else
        flattenedRanges.Add(new Range(range));
}
This assumes you have a copy constructor for Range.
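Such a constructor is not shown in the question; a minimal sketch could be (keep the parameterless constructor as well, since other answers use object initializers):
public Range(Range other)
{
    From = other.From;
    To = other.To;
    Category = other.Category;
}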
EDIT:
Here's an in-place algorithm:
for (int i = 1; i < list.Count; i++)
{
    if (list[i].From == list[i - 1].To + 1 && list[i - 1].Category == list[i].Category)
    {
        list[i - 1].To = list[i].To;
        list.RemoveAt(i--);
    }
}
EDIT:
Added the category check, and fixed the in-place version.
