In a checksum calculation algorithm I'm implementing, the input must be an even number of bytes; if it isn't, an extra zero byte must be appended at the end.
I do not want to modify the input data to my method by actually adding an element (and the input might be non-modifiable). Neither do I want to create a new data structure and copy the input.
I wondered if LINQ is a good option for creating a lightweight IEnumerable, something like:
void Calculate(IList<byte> input)
{
    IEnumerable<byte> items = ((input.Count & 1) == 0) ? input : X(input, 0x0);
    foreach (var i in items)
    {
        ...
    }
}
i.e. what would X(...) look like?
You can use this iterator (yield return) extension method to add extra items to the end of an IEnumerable<T> without needing to iterate over the elements up front (which you would otherwise need to do in order to get a .Count value).
Note that you should check whether input is an IReadOnlyCollection<T> or an IList<T>, because that means you can use a more optimal code path when the .Count can be known in advance.
public static IEnumerable<T> EnsureModuloItems<T>( this IEnumerable<T> source, Int32 modulo, T defaultValue = default )
{
    if( source is null ) throw new ArgumentNullException(nameof(source));
    if( modulo < 1 ) throw new ArgumentOutOfRangeException( nameof(modulo), modulo, message: "Value must be 1 or greater." );
    //
    Int32 count = 0;
    foreach( T item in source )
    {
        yield return item;
        count++;
    }

    // Pad up to the next multiple of modulo (the number of padding items
    // is modulo - remainder, not the remainder itself).
    Int32 remainder = count % modulo;
    if( remainder > 0 )
    {
        for( Int32 i = remainder; i < modulo; i++ )
        {
            yield return defaultValue;
        }
    }
}
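As a sketch of the fast path mentioned above (the method name here is made up for illustration, and it assumes the same pad-to-next-multiple rule): when the source exposes a Count, the padding can be computed up front instead of being counted during enumeration:

```csharp
using System;
using System.Collections.Generic;

static class ChecksumPadding
{
    // Hypothetical optimized overload for sources with a known Count:
    // no running counter is needed because the padding is known up front.
    public static IEnumerable<T> EnsureModuloItemsFast<T>(
        this IReadOnlyCollection<T> source, int modulo, T defaultValue = default)
    {
        if (source is null) throw new ArgumentNullException(nameof(source));
        if (modulo < 1) throw new ArgumentOutOfRangeException(nameof(modulo));

        int remainder = source.Count % modulo;
        int padding = remainder == 0 ? 0 : modulo - remainder;

        foreach (T item in source)
            yield return item;

        for (int i = 0; i < padding; i++)
            yield return defaultValue;
    }
}
```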
Used like so:
foreach( Byte b in input.EnsureModuloItems( modulo: 2, defaultValue: 0x00 ) )
{
    // ...
}
You might use the Concat method for that:
IEnumerable<byte> items = input.Count() % 2 == 0 ? input : input.Concat(new[] { (byte)0x0 });
I've also changed your code a little bit: there is no Count property on IEnumerable<T>, so you should use the Count() method.
Since Concat() accepts an IEnumerable<T>, you have to pass it a List<T> or an array. Alternatively, you can write a simple extension method to wrap a single item as an IEnumerable<T>:
internal static class Ext
{
    public static IEnumerable<T> Yield<T>(this T item)
    {
        yield return item;
    }
}
and use it:
IEnumerable<byte> items = input.Count() % 2 == 0 ? input : input.Concat(((byte)0x0).Yield());
However, as noted in the comments, the better option here may be the Append method:
IEnumerable<byte> items = input.Count() % 2 == 0 ? input : input.Append((byte)0x0);
Related
Is there a simple^ way of getting the value 'null' if an array element does not exist?
For example, in the code below sArray has 3 elements and the first 3 calls to SomeMethod work (printing true); however, the 4th call, SomeMethod(sArray[3]);, gives me an IndexOutOfRangeException. Is there a way to make the 4th call to SomeMethod print false?
static void Main(string[] args)
{
    int[] sArray = new int[]{1,2,3};
    SomeMethod(sArray[0]);
    SomeMethod(sArray[1]);
    SomeMethod(sArray[2]);
    SomeMethod(sArray[3]);
}
static void SomeMethod(int? s) => Console.WriteLine(s.HasValue);
^ Would prefer a single-line expression.
There is a LINQ method, ElementAtOrDefault.
To use it the way you want (returning null), you will need to change the underlying type of your array to nullable int:
int?[] sArray = new int?[]{1,2,3};
SomeMethod(sArray.ElementAtOrDefault(1000));
How about an extension method?
public static T? TryGet<T>(this T[] source, int index) where T : struct
{
    if (0 <= index && index < source.Length)
    {
        return source[index];
    }
    else
    {
        return null;
    }
}
Then you could write:
static void Main(string[] args)
{
    int[] sArray = new int[]{1,2,3};
    SomeMethod(sArray.TryGet(0));
    SomeMethod(sArray.TryGet(1));
    SomeMethod(sArray.TryGet(2));
    SomeMethod(sArray.TryGet(3));
}
SomeMethod(sArray.Skip(3).Select(z => (int?)z).FirstOrDefault());
is a working replacement of:
SomeMethod(sArray[3]);
The former will call SomeMethod with null (while the latter will throw an exception if the array doesn't have at least 4 entries).
In Skip(3), the 3 can be changed to whatever index you want to retrieve from the array. The Select is needed to project the int into an int? so that FirstOrDefault returns either the 4th element or null.
If you don't want to use LINQ then you could use:
SomeMethod(sArray.Length > 3 ? sArray[3] : (int?)null);
instead.
Or consider using:
foreach (var entry in sArray.Take(4))
{
    SomeMethod(entry);
}
to loop through up to 4 elements of the array (it will work fine if there are fewer than 4 - it will just make fewer calls to SomeMethod).
Arrays in C# have a .Length property which you can check before trying to pass an item from one to SomeMethod, and the typical approach is to loop through each element of the array rather than guessing whether or not an index is valid:
for (int i = 0; i < sArray.Length; i++)
{
    SomeMethod(sArray[i]);
}
You will not be able to avoid an IndexOutOfRangeException if you reference an index in an array that doesn't exist.
However, if you really want a method with this type of functionality, you could simply modify your existing code to check whether or not the index specified is greater than the length of the array.
Since your array is an int[] (and not an int?[]), all valid indexes will have a value. Also, we can use the ?. to handle cases where the array itself may be null:
private static void SomeMethod(int[] array, int index) =>
Console.WriteLine(index >= 0 && index < array?.Length);
Then in use, instead of passing an array item with an invalid index (which will always throw an IndexOutOfRangeException), you would pass the array itself and the index separately:
static void Main()
{
    int[] sArray = new int[] { 1, 2, 3 };

    SomeMethod(sArray, 0);
    SomeMethod(sArray, 1);
    SomeMethod(sArray, 2);
    SomeMethod(sArray, 3);
    SomeMethod(null, 0);

    GetKeyFromUser("\nPress any key to exit...");
}
In this case I'd suggest you create an extension method somewhere in your code, like this:
static class ArrExt
{
    public static int? Get(this int[] arr, int i)
    {
        return (i >= 0 && i < arr.Length) ? arr[i] : default(int?);
    }
}
Then you can do this:
int[] sArray = new int[] { 1, 2, 3 };
SomeMethod(sArray.Get(0));
SomeMethod(sArray.Get(1));
SomeMethod(sArray.Get(2));
SomeMethod(sArray.Get(3));
Okay, this is not a single-line solution, I know, but it's easier for both the programmer and the computer.
I'd like to pass a subset of a C# array into a method. I don't care if the method overwrites the data, so I would like to avoid creating a copy.
Is there a way to do this?
Thanks.
Change the method to take an IEnumerable<T> or ArraySegment<T>.
You can then pass new ArraySegment<T>(array, 5, 2)
With C# 7.2 we have Span<T>. You can use the extension method AsSpan for your array and pass it to the method without copying the sliced part, e.g.:
Method( array.AsSpan().Slice(1,3) )
You can use the following class. Note you may need to modify it depending on whether you want endIndex to be inclusive or exclusive. You could also modify it to take a start and a count, rather than a start and an end index.
I intentionally didn't add mutable methods. If you specifically want them, that's easy enough to add. You may also want to implement IList if you add the mutable methods.
public class Subset<T> : IReadOnlyList<T>
{
    private IList<T> source;
    private int startIndex;
    private int endIndex;

    public Subset(IList<T> source, int startIndex, int endIndex)
    {
        this.source = source;
        this.startIndex = startIndex;
        this.endIndex = endIndex;
    }

    public T this[int i]
    {
        get
        {
            if (i < 0 || startIndex + i >= endIndex)
                throw new IndexOutOfRangeException();
            return source[startIndex + i];
        }
    }

    public int Count
    {
        get { return endIndex - startIndex; }
    }

    public IEnumerator<T> GetEnumerator()
    {
        return source.Skip(startIndex)
                     .Take(endIndex - startIndex)
                     .GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
Arrays are fixed in size (i.e. you can't change the size of an array), so otherwise you can only pass a copy of the sub-range. As an option, you can pass the original array together with two indexes into the method and operate on the basis of those two additional indexes.
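A minimal sketch of that two-index approach (ProcessRange is a made-up name for illustration):

```csharp
using System;

static class SubArray
{
    // Operates on a sub-range of the original array in place; the range is
    // described by a start index and a count, and no copy is made.
    public static int ProcessRange(byte[] data, int start, int count)
    {
        int sum = 0;
        for (int i = start; i < start + count; i++)
        {
            sum += data[i]; // read (or overwrite) the original elements directly
        }
        return sum;
    }
}
```

Usage: SubArray.ProcessRange(array, 5, 2) touches only array[5] and array[6].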
You can use the LINQ Take function to take as many elements from the array as you want:
var yournewarray = youroldarray.Take(4).ToArray();
I have a List<T> where T : IComparable<T>.
I want to write a List<T> GetFirstNElements(IList<T> list, int n) where T : IComparable<T> which returns the first n distinct largest elements (the list can have dupes) using expression trees.
In some performance-critical code I wrote recently, I had a very similar requirement - the candidate set was very large, and the number needed very small. To avoid sorting the entire candidate set, I use a custom extension method that simply keeps the n largest items found so far in a linked list. Then I can simply:
loop once over the candidates
if I haven't yet found "n" items, or the current item is better than the worst already selected, then add it (at the correct position) in the linked-list (inserts are cheap here)
if we now have more than "n" selected, drop the worst (deletes are cheap here)
then we are done; at the end of this, the linked-list contains the best "n" items, already sorted. No need to use expression-trees, and no "sort a huge list" overhead. Something like:
public static IEnumerable<T> TakeTopDistinct<T>(this IEnumerable<T> source, int count)
{
    if (source == null) throw new ArgumentNullException("source");
    if (count < 0) throw new ArgumentOutOfRangeException("count");
    if (count == 0) yield break;

    var comparer = Comparer<T>.Default;
    LinkedList<T> selected = new LinkedList<T>();
    foreach (var value in source)
    {
        if (selected.Count < count // need to fill
            || comparer.Compare(selected.Last.Value, value) < 0 // better candidate
            )
        {
            var tmp = selected.First;
            bool add = true;
            while (tmp != null)
            {
                var delta = comparer.Compare(tmp.Value, value);
                if (delta == 0)
                {
                    add = false; // not distinct
                    break;
                }
                else if (delta < 0)
                {
                    selected.AddBefore(tmp, value);
                    add = false;
                    if (selected.Count > count) selected.RemoveLast();
                    break;
                }
                tmp = tmp.Next;
            }
            if (add && selected.Count < count) selected.AddLast(value);
        }
    }
    foreach (var value in selected) yield return value;
}
If I get the question right, you just want to sort the entries in the list.
Wouldn't it be possible for you to implement IComparable<T> and use the Sort method of the List?
The code in IComparable<T> can handle the comparison and everything you want to use to compare and sort, so you can just use the Sort mechanism at this point.
List<T> GetFirstNElements<T>(IList<T> list, int n) where T : IComparable<T>
{
    list.Sort();
    list.Reverse(); // Sort() is ascending; reverse so the largest come first
    List<T> returnList = new List<T>();
    for (int i = 0; i < n; i++)
    {
        returnList.Add(list[i]);
    }
    return returnList;
}
Wouldn't be the fastest code ;-)
The standard algorithm for doing this, which takes expected time O(list.Length), is described on Wikipedia as "quickfindFirstK" on this page:
http://en.wikipedia.org/wiki/Selection_algorithm#Selecting_k_smallest_or_largest_elements
This improves on @Marc Gravell's answer because the expected running time is linear in the length of the input list, regardless of the value of n.
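For illustration, here is a rough sketch of that partition-based selection in C#: a plain quickselect over a working copy (the names are mine, not from the Wikipedia article, and the "distinct" requirement from the question is ignored for brevity; dedupe first, e.g. with Distinct(), if you need it):

```csharp
using System;
using System.Collections.Generic;

static class TopNSelector
{
    // Partition-based selection: after the loop, the first n slots of the
    // working copy hold the n largest elements (in arbitrary order).
    public static List<T> GetTopN<T>(IList<T> list, int n) where T : IComparable<T>
    {
        var a = new List<T>(list); // work on a copy; expected O(a.Count) overall
        int lo = 0, hi = a.Count - 1;
        while (lo < hi)
        {
            // Partition around a[hi]: larger elements move to the front.
            T pivot = a[hi];
            int store = lo;
            for (int i = lo; i < hi; i++)
            {
                if (a[i].CompareTo(pivot) > 0)
                {
                    (a[store], a[i]) = (a[i], a[store]);
                    store++;
                }
            }
            (a[store], a[hi]) = (a[hi], a[store]); // pivot lands at 'store'

            if (store == n || store == n - 1) break; // boundary resolved
            if (store < n) lo = store + 1;           // top-n boundary is further right
            else hi = store - 1;                     // boundary is further left
        }
        return a.GetRange(0, Math.Min(n, a.Count));
    }
}
```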
I'm a complete LINQ newbie, so I don't know if my LINQ is incorrect for what I need to do or if my expectations of performance are too high.
I've got a SortedList of objects, keyed by int; SortedList as opposed to SortedDictionary because I'll be populating the collection with pre-sorted data. My task is to find either the exact key or, if there is no exact key, the one with the next higher value. If the search is too high for the list (e.g. highest key is 100, but search for 105), return null.
// The structure of this class is unimportant. Just using
// it as an illustration.
public class CX
{
    public int KEY;
    public DateTime DT;
}

static CX getItem(int i, SortedList<int, CX> list)
{
    var items =
        (from kv in list
         where kv.Key >= i
         select kv.Key);

    if (items.Any())
    {
        return list[items.Min()];
    }
    return null;
}
Given a list of 50,000 records, calling getItem 500 times takes about a second and a half. Calling it 50,000 times takes over 2 minutes. This performance seems very poor. Is my LINQ bad? Am I expecting too much? Should I be rolling my own binary search function?
First, your query is being evaluated twice (once for Any, and once for Min). Second, Min requires that it iterate over the entire list, even though the fact that it's sorted means that the first item will be the minimum. You should be able to change this:
if (items.Any())
{
    return list[items.Min()];
}
To this:
var firstKey =
    (from kv in list
     where kv.Key >= i
     select (int?)kv.Key).FirstOrDefault();

if (firstKey != null) return list[firstKey.Value];
return null;
UPDATE
Because you're selecting a value type, FirstOrDefault doesn't return a nullable object. I have altered your query to cast the selected value to an int? instead, allowing the resulting value to be checked for null. I would advocate this over using ContainsKey, as that would return true if your list contained a value for 0. For example, say you have the following values
0 2 4 6 8
If you were to pass in anything less than or equal to 8, then you would get the correct value. However, if you were to pass in 9, you would get 0 (default(int)), which is in the list but isn't a valid result.
Writing a binary search on your own can be tough.
Fortunately, Microsoft already wrote a pretty robust one: Array.BinarySearch<T>. This is, in fact, the method that SortedList<TKey, TValue>.IndexOfKey uses internally. Only problem is, it takes a T[] argument, instead of any IList<T> (like SortedList<TKey, TValue>.Keys).
You know what, though? There's this great tool called Reflector that lets you look at .NET source code...
Check it out: a generic BinarySearch extension method on IList<T>, taken straight from the reflected code of Microsoft's Array.BinarySearch<T> implementation.
public static int BinarySearch<T>(this IList<T> list, int index, int length, T value, IComparer<T> comparer) {
    if (list == null)
        throw new ArgumentNullException("list");
    else if (index < 0 || length < 0)
        throw new ArgumentOutOfRangeException((index < 0) ? "index" : "length");
    else if (list.Count - index < length)
        throw new ArgumentException();

    int lower = index;
    int upper = (index + length) - 1;

    while (lower <= upper) {
        int adjustedIndex = lower + ((upper - lower) >> 1);
        int comparison = comparer.Compare(list[adjustedIndex], value);
        if (comparison == 0)
            return adjustedIndex;
        else if (comparison < 0)
            lower = adjustedIndex + 1;
        else
            upper = adjustedIndex - 1;
    }
    return ~lower;
}

public static int BinarySearch<T>(this IList<T> list, T value, IComparer<T> comparer) {
    return list.BinarySearch(0, list.Count, value, comparer);
}

public static int BinarySearch<T>(this IList<T> list, T value) where T : IComparable<T> {
    return list.BinarySearch(value, Comparer<T>.Default);
}
This will let you call list.Keys.BinarySearch and get the negative bitwise complement of the index you want in case the desired key isn't found (the below is taken basically straight from tzaman's answer):
int index = list.Keys.BinarySearch(i);
if (index < 0)
    index = ~index;
var item = index < list.Count ? list[list.Keys[index]] : null;
return item;
Using LINQ on a SortedList will not give you the benefit of the sort.
For optimal performance, you should write your own binary search.
OK, just to give this a little more visibility - here's a more concise version of Adam Robinson's answer:
return list.FirstOrDefault(kv => kv.Key >= i).Value;
The FirstOrDefault function has an overload that accepts a predicate, which selects the first element satisfying a condition - you can use that to directly get the element you want, or null if it doesn't exist.
Why not use the BinarySearch that's built into the List class?
var keys = list.Keys.ToList();
int index = keys.BinarySearch(i);
if (index < 0)
    index = ~index;
var item = index < keys.Count ? list[keys[index]] : null;
return item;
If the search target isn't in the list, BinarySearch returns the bit-wise complement of the next-higher item; we can use that to directly get you what you want by re-complementing the result if it's negative. If it becomes equal to the Count, your search key was bigger than anything in the list.
This should be much faster than doing a LINQ where, since it's already sorted...
As comments have pointed out, the ToList call will force an evaluation of the whole list, so this is only beneficial if you do multiple searches without altering the underlying SortedList, and you keep the keys list around separately.
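The bitwise-complement convention is easy to see with List<T>.BinarySearch itself (a small standalone demo, values matching the earlier 0/2/4/6/8 example):

```csharp
using System;
using System.Collections.Generic;

class BinarySearchDemo
{
    static void Main()
    {
        var keys = new List<int> { 0, 2, 4, 6, 8 };

        int hit = keys.BinarySearch(4);   // exact match: returns its index, 2
        int miss = keys.BinarySearch(5);  // not found: returns a negative number
        int next = ~miss;                 // complement: index of next-higher key (6), i.e. 3
        int past = ~keys.BinarySearch(9); // equals keys.Count (5): key above everything

        Console.WriteLine($"{hit} {next} {past}"); // prints "2 3 5"
    }
}
```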
Using OrderedDictionary in PowerCollections you can get an enumerator that starts where the key you are looking for should be; if it's not there, you'll get the next closest node and can then navigate forwards/backwards from that in O(log N) time per nav call.
This has the advantage of you not having to write your own search or even manage your own searches on top of a SortedList.
Is there a good way to enumerate through only a subset of a Collection in C#? That is, I have a collection of a large number of objects (say, 1000), but I'd like to enumerate through only elements 250 - 340. Is there a good way to get an Enumerator for a subset of the collection, without using another Collection?
Edit: should have mentioned that this is using .NET Framework 2.0.
Try the following
var col = GetTheCollection();
var subset = col.Skip(250).Take(90);
Or more generally
public static IEnumerable<T> GetRange<T>(this IEnumerable<T> source, int start, int end) {
    // Error checking removed
    return source.Skip(start).Take(end - start);
}
EDIT: .NET 2.0 solution
public static IEnumerable<T> GetRange<T>(IEnumerable<T> source, int start, int end) {
    using (IEnumerator<T> e = source.GetEnumerator()) {
        int i = 0;
        while (i < start && e.MoveNext()) { i++; }
        while (i < end && e.MoveNext()) {
            yield return e.Current;
            i++;
        }
    }
}
IEnumerable<Foo> col = GetTheCollection();
IEnumerable<Foo> range = GetRange(col, 250, 340);
I like to keep it simple (if you don't necessarily need the enumerator):
for (int i = 249; i < Math.Min(340, list.Count); i++)
{
    // do something with list[i]
}
Adapting Jared's original code for .NET 2.0:
IEnumerable<T> GetRange<T>(IEnumerable<T> source, int start, int end)
{
    int i = 0;
    foreach (T item in source)
    {
        i++;
        if (i > end) yield break;
        if (i > start) yield return item;
    }
}
And to use it:
foreach (T item in GetRange(MyCollection, 250, 340))
{
    // do something
}
Adapting Jared's code once again, this extension method will get you a subset that is defined by item, not index.
//! Get subset of collection between \a start and \a end, inclusive
//! Usage
//! \code
//! using ExtensionMethods;
//! ...
//! var subset = MyList.GetRange(firstItem, secondItem);
//! \endcode
static class ExtensionMethods
{
    public static IEnumerable<T> GetRange<T>(this IEnumerable<T> source, T start, T end)
    {
#if DEBUG
        if (source.ToList().IndexOf(start) > source.ToList().IndexOf(end))
            throw new ArgumentException("Start must be earlier in the enumerable than end, or both must be the same");
#endif
        yield return start;
        if (start.Equals(end))
            yield break; // start == end, so we are finished here

        using (var e = source.GetEnumerator())
        {
            while (e.MoveNext() && !e.Current.Equals(start)) ; // skip until start
            while (!e.Current.Equals(end) && e.MoveNext())    // return items between start and end
                yield return e.Current;
        }
    }
}
You might be able to do something with LINQ. The way I would do this is to put the objects into an array; then I can choose which items I want to process based on the array index.
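For example, a .NET 2.0-friendly sketch of that idea (TakeRange is a made-up name, and the hard-coded bounds mirror the question's "elements 250 - 340", i.e. indexes 249..339):

```csharp
using System;
using System.Collections.Generic;

static class RangeDemo
{
    // Copy the collection into an array once, then select the range by index.
    public static List<object> TakeRange(ICollection<object> collection)
    {
        object[] items = new object[collection.Count];
        collection.CopyTo(items, 0);

        List<object> picked = new List<object>();
        for (int i = 249; i <= 339 && i < items.Length; i++)
        {
            picked.Add(items[i]); // "process" each selected element here
        }
        return picked;
    }
}
```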
If you find that you need to do a fair amount of slicing and dicing of lists and collections, it might be worth climbing the learning curve into the C5 Generic Collection Library.