I have a generic method that takes a request and provides a response.
public Tres DoSomething<Tres, Treq>(Tres response, Treq request)
{/*stuff*/}
But I don't always want a response for my request, and I don't always want to feed request data to get a response. I also don't want to have to copy and paste methods in their entirety to make minor changes. What I want, is to be able to do this:
public Tres DoSomething<Tres>(Tres response)
{
return DoSomething<Tres, void>(response, null);
}
Is this feasible in some manner? It seems that specifically using void doesn't work, but I'm hoping to find something analogous.
You cannot use void, but you can use object: it is a little inconvenient because your would-be-void methods need to return null, but if it unifies your code, that should be a small price to pay.
This inability to use void as a return type is at least partially responsible for a split between the Func<...> and Action<...> families of generic delegates: had it been possible to return void, all Action<X,Y,Z> would become simply Func<X,Y,Z,void>. Unfortunately, this is not possible.
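As a sketch of the object-as-void workaround (the `Demo`/`Execute` names here are illustrative, not part of the question's code):

```csharp
using System;

public static class Demo
{
    // One generic pipeline; a "void-like" operation is modeled as a
    // Func<object> that simply returns null.
    public static TRes Execute<TRes>(Func<TRes> operation)
    {
        return operation();
    }

    // Adapter: wraps an Action as a null-returning Func<object>, so
    // both kinds of operation flow through the same generic method.
    public static object ExecuteVoid(Action operation)
    {
        return Execute<object>(() => { operation(); return null; });
    }
}
```

`Demo.Execute(() => 42)` yields 42, while `Demo.ExecuteVoid(() => Console.WriteLine("hi"))` performs its side effect and yields null.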
No, unfortunately not. If void were a "real" type (like unit in F#, for example) life would be a lot simpler in many ways. In particular, we wouldn't need both the Func<T> and Action<T> families - there'd just be Func<void> instead of Action, Func<T, void> instead of Action<T> etc.
It would also make async simpler - there'd be no need for the non-generic Task type at all - we'd just have Task<void>.
Unfortunately, that's not the way the C# or .NET type systems work...
Here is what you can do. As @JonSkeet said, there is no unit type in C#, so make it yourself!
public sealed class ThankYou {
private ThankYou() { }
private readonly static ThankYou bye = new ThankYou();
public static ThankYou Bye { get { return bye; } }
}
Now you can always use Func<..., ThankYou> instead of Action<...>
public ThankYou MethodWithNoResult() {
/* do things */
return ThankYou.Bye;
}
Or use something already made by the Rx team: http://msdn.microsoft.com/en-us/library/system.reactive.unit%28v=VS.103%29.aspx
You could simply use Object as others have suggested. Or Int32 which I have seen some use. Using Int32 introduces a "dummy" number (use 0), but at least you can't put any big and exotic object into an Int32 reference (structs are sealed).
You could also write your own "void" type:
public sealed class MyVoid
{
MyVoid()
{
throw new InvalidOperationException("Don't instantiate MyVoid.");
}
}
MyVoid references are allowed (it's not a static class) but can only be null. The instance constructor is private (and if someone tries to call this private constructor through reflection, an exception will be thrown at them).
Since value tuples were introduced (2017, .NET 4.7), it is maybe natural to use the struct ValueTuple (the 0-tuple, the non-generic variant) instead of such a MyVoid. Its instance has a ToString() that returns "()", so it looks like a zero-tuple. As of the current version of C#, you cannot use the tokens () in code to get an instance. You can use default(ValueTuple) or just default (when the type can be inferred from the context) instead.
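A small sketch of ValueTuple-as-unit (the `UnitDemo`/`Log` names are illustrative):

```csharp
using System;

public static class UnitDemo
{
    // A would-be-void function expressed in the Func family, using the
    // non-generic ValueTuple struct as the unit type.
    public static readonly Func<string, ValueTuple> Log = message =>
    {
        Console.WriteLine(message);
        return default; // the single unit value; ToString() gives "()"
    };
}
```

`UnitDemo.Log("hello").ToString()` returns "()", which reads naturally as a zero-tuple.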
I like the idea by Aleksey Bykov above, but it could be simplified a bit
public sealed class Nothing {
public static Nothing AtAll { get { return null; } }
}
I see no apparent reason why Nothing.AtAll could not just give null.
The same idea (or the one by Jeppe Stig Nielsen) is also great for use with generic classes.
E.g. if the type is only used to describe the arguments to a procedure/function passed as an argument to some method, and it itself does not take any arguments.
(You will still need to either make a dummy wrapper or to allow an optional "Nothing". But IMHO the class usage looks nice with myClass<Nothing> )
void myProcWithNoArguments(Nothing Dummy){
...
}
or
void myProcWithNoArguments(Nothing Dummy=null){
...
}
void, though a type, is only valid as a return type of a method.
There is no way around this limitation of void.
What I currently do is create custom sealed types with a private constructor. This is better than throwing exceptions in the constructor because you don't have to wait until runtime to figure out the situation is incorrect. It is subtly better than returning a static instance because you don't have to allocate, even once. It is subtly better than returning a static null because it is less verbose on the call side. The only thing the caller can do is pass null.
public sealed class Void {
private Void() { }
}
public sealed class None {
private None() { }
}
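With such a type, the Action-vs-Func split disappears at the call site; a sketch (the `Handler` name is illustrative):

```csharp
using System;

public sealed class Void
{
    private Void() { }
}

public static class Demo
{
    // A "void" handler living in the Func family; since Void has no
    // accessible constructor, the only value it can return is null.
    public static readonly Func<int, Void> Handler = x =>
    {
        Console.WriteLine(x);
        return null;
    };
}
```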
Is there a difference between these two methods?
public class A
{
public int Count { get; set; }
}
public A Increment(A instance)
{
instance.Count++;
return instance;
}
public void Increment(A instance)
{
instance.Count++;
}
I mean, apart from one method returning the same reference and the other method not returning anything, both of them accomplish the same thing, to increment the Count property of the reference being passed as argument.
Is there an advantage of using one against the other? I generally tend to use the former because of method chaining, but is there a performance tradeoff?
One of the advantages of the latter method, for example, is that one cannot create a new reference:
public void Increment(A instance)
{
instance.Count++;
instance = new A(); //This new object has local scope, the original reference is not modified
}
This could be considered a defensive approach against new implementations of an interface.
I don't want this to be opinion based, so I am explicitly looking for concrete advantages (or disadvantages), taken out from the documentation or the language's specification.
One of the advantages of the latter method, for example, is that one cannot create a new reference.
You could consider that one of the disadvantages. Consider:
public A Increment(A instance)
{
return new A { Count = instance.Count +1 };
}
Or
public A Increment()
{
return new A { Count = this.Count +1 };
}
Apply this consistently, and you can have your A classes being immutable, with all the advantages that brings.
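A sketch of that immutable style, assuming A only carries Count:

```csharp
public sealed class A
{
    public int Count { get; }

    public A(int count) { Count = count; }

    // Never mutates the receiver; chaining is safe because every
    // step yields a fresh instance.
    public A Increment() => new A(Count + 1);
}
```

`new A(0).Increment().Increment()` has Count 2, while the instance you started from still has Count 0.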
It also allows for different types that implement the same interface to be returned. This is how Linq works:
Enumerable.Range(0, 1) // RangeIterator
.Where(i => i % 2 == 0) // WhereEnumerableIterator<int>
.Select(i => i.ToString()) // WhereSelectEnumerableIterator<int, string>
.Where(i => i.Length != 1) // WhereEnumerableIterator<string>
.ToList(); // List<string>
While each operation is expressed against IEnumerable<T>, each result is implemented by a different type.
Mutating fluent methods, like you suggest, are pretty rare in C#. They are more common in languages without the sort of properties C# supports, as it's then convenient to do:
someObject.setHeight(23).setWidth(143).setDepth(10);
But in C# such setXXX methods are rare, with property setters being more common, and they can't be fluent.
The main exception is StringBuilder because its very nature means that repeatedly calling Append() and/or Insert() on it with different values is very common, and the fluent style lends itself well to that.
Otherwise the fact that mutating fluent methods aren't common means that all you really get by supplying one is the minute extra cost of returning the field. It is minute, but it's not gaining anything when used with the more idiomatic C# style that is going to ignore it.
To have an external method that both mutated and also returned the mutated object would be unusual, and that could lead someone to assume that you didn't mutate the object, since you were returning the result.
E.g. upon seeing:
public static IList<T> SortedList<T>(IList<T> list);
Someone using the code might assume that after the call list was left alone, rather than sorted in place, and also that the two would be different and could be mutated separately.
For that reason alone it would be a good idea to either return a new object, or to return void to make the mutating nature more obvious.
We could though have short-cuts when returning a new object:
public static T[] SortedArray<T>(T[] array)
{
if (array.Length == 0) return array;
T[] newArray = new T[array.Length];
Array.Copy(array, newArray, array.Length);
Array.Sort(newArray);
return newArray;
}
Here we take advantage of the fact that since empty arrays are essentially immutable (they have no elements to mutate, and they can't be added to) for most uses returning the same array is the same as returning a new array. (Compare with how string implements ICloneable.Clone() by returning this). As well as reducing the amount of work done, we reduce the number of allocations, and hence the amount of GC pressure. Even here though we need to be careful (someone keying a collection on object identity will be stymied by this), but it can be useful in many cases.
Short answer - it depends.
Long answer - I would consider returning the instance of the object if you are using a builder pattern or where you need chaining of methods.
Most other cases look like a code smell: if you are in control of the API and you find a lot of places where your returned object is not used, why bother with the extra effort? You'll possibly create subtle bugs.
I have been reading articles and understand interfaces to an extent. However, if I wanted to write my own custom Equals method, it seems I can do this without implementing the IEquatable interface. An example:
using System;
using System.Collections;
using System.ComponentModel;
namespace ProviderJSONConverter.Data.Components
{
public class Address : IEquatable<Address>
{
public string address { get; set; }
[DefaultValue("")]
public string address_2 { get; set; }
public string city { get; set; }
public string state { get; set; }
public string zip { get; set; }
public bool Equals(Address other)
{
if (Object.ReferenceEquals(other, null)) return false;
if (Object.ReferenceEquals(this, other)) return true;
return (this.address.Equals(other.address)
&& this.address_2.Equals(other.address_2)
&& this.city.Equals(other.city)
&& this.state.Equals(other.state)
&& this.zip.Equals(other.zip));
}
}
}
Now if I don't implement the interface and leave : IEquatable<Address> out of the code, it seems the application operates exactly the same. Therefore, I am unclear as to why I should implement the interface. I can write my own custom Equals method without it, the breakpoint will still hit the method, and it gives back the same results.
Can anyone help explain this to me more? I am hung up on why to include "IEquatable<Address>" before writing the Equals method.
Now if i dont implement the interface and leave : IEquatable out of the code, it seems the application operates exactly the same.
Well, that depends on what "the application" does. For example:
List<Address> addresses = new List<Address>
{
new Address { ... }
};
int index = addresses.IndexOf(new Address { ... });
... that won't work (i.e. index will be -1) if you have neither overridden Equals(object) nor implemented IEquatable<T>. List<T>.IndexOf won't call your Equals overload.
Code that knows about your specific class will pick up the Equals overload - but any code (e.g. generic collections, all of LINQ to Objects etc) which just works with arbitrary objects won't pick it up.
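A minimal sketch of wiring up both paths, trimmed to a single property for brevity:

```csharp
using System;
using System.Collections.Generic;

public sealed class Address : IEquatable<Address>
{
    public string City { get; set; }

    public bool Equals(Address other) =>
        other != null && string.Equals(City, other.City);

    // EqualityComparer<Address>.Default prefers IEquatable<Address>,
    // but overriding Equals(object)/GetHashCode keeps non-generic
    // callers and hash-based collections consistent too.
    public override bool Equals(object obj) => Equals(obj as Address);
    public override int GetHashCode() => City?.GetHashCode() ?? 0;
}
```

With this in place, `new List<Address> { new Address { City = "X" } }.IndexOf(new Address { City = "X" })` yields 0 instead of -1.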
The .NET framework has confusingly many possibilities for equality checking:
The virtual Object.Equals(object)
The overloadable equality operators (==, !=)
IEquatable<T>.Equals(T)
IComparable.CompareTo(object)
IComparable<T>.CompareTo(T)
IEqualityComparer.Equals(object, object)
IEqualityComparer<T>.Equals(T, T)
IComparer.Compare(object, object)
IComparer<T>.Compare(T, T)
And I did not mention ReferenceEquals, the static Object.Equals(object, object), or the special cases (e.g. string and floating-point comparison) - just the cases where we can implement something.
Additionally, the default behavior of the first two points is different for structs and classes. So it is no wonder that a user can be confused about what to implement and how.
As a rule of thumb, you can follow this pattern:
Classes
By default, both the Equals(object) method and equality operators (==, !=) check reference equality.
If reference equality is not right for you, override the Equals method (and also GetHashCode; otherwise, your class will not be able to be used in hashed collections)
You can keep the original reference equality functionality for the == and != operators, it is common for classes. But if you overload them, it must be consistent with Equals.
If your instances can be compared to each other in less or greater meaning, implement the IComparable interface. When Equals reports equality, CompareTo must return 0 (again, consistency).
Basically that's it. Implementing the generic IEquatable<T> and IComparable<T> interfaces for classes is not a must: as there is no boxing, the performance gain would be minimal in the generic collections. But remember, if you implement them, keep the consistency.
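A sketch of that class checklist (Point is an illustrative type, not from the question):

```csharp
using System;

public sealed class Point : IComparable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // Equals and GetHashCode are always overridden together:
    public override bool Equals(object obj) =>
        obj is Point p && p.X == X && p.Y == Y;

    public override int GetHashCode() => (X * 397) ^ Y;

    // Consistency: CompareTo returns 0 exactly when Equals reports equality.
    public int CompareTo(Point other)
    {
        if (other == null) return 1; // by convention, null sorts first
        int byX = X.CompareTo(other.X);
        return byX != 0 ? byX : Y.CompareTo(other.Y);
    }
}
```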
Structs
By default, Equals(object) performs a value comparison for structs (it checks the field values). Though this is normally the expected behavior for a value type, the base implementation does it by using reflection, which has terrible performance. So always override Equals(object) in a public struct, even if you implement the same functionality it originally had.
When the Equals(object) method is used for structs, boxing happens, which has a performance cost (not as bad as the reflection in ValueType.Equals, but it matters). That's why the IEquatable<T> interface exists. You should implement it on structs if you want to use them in generic collections. Have I already mentioned to keep consistency?
By default, the == and != operators cannot be used for structs so you must overload them if you want to use them. Simply call the strongly-typed IEquatable<T>.Equals(T) implementation.
Similarly to classes, if less-or-greater is meaningful for your type, implement the IComparable interface. In case of structs, you should implement the IComparable<T> as well to make things performant (eg. Array.Sort, List<T>.BinarySearch, using the type as a key in a SortedList<TKey, TValue>, etc.). If you overloaded the ==, != operators, you should do it for <, >, <=, >=, too.
A little addendum:
If you must use a type that has an improper comparison logic for your needs, you can use the interfaces from 6. to 9. in the list. This is where you can forget consistency (at least considering the self Equals of the type) and you can implement a custom comparison that can be used in hash-based and sorted collections.
If you had overridden the Equals(object obj) method, then it would only be a matter of performances, as noted here: What's the difference between IEquatable and just overriding Object.Equals()?
But as long as you didn't override Equals(object obj) and only provided your own strongly typed Equals(Address obj) method, then without implementing IEquatable<T> you do not indicate, to all the classes that rely on this interface to perform comparisons, that you have your own Equals method that should be used.
So, as Jon Skeet noted, the EqualityComparer<Address>.Default property used by List<Address>.IndexOf to compare addresses wouldn't be able to know it should use your Equals method.
The IEquatable<T> interface just adds an Equals method with whatever type we supply in the generic parameter; function overloading then takes care of the rest.
If we add IEquatable<Employee> to the Employee structure, that object can be compared with another Employee object without any type casting. The same could be achieved with the default Equals method, which accepts object as its parameter, but converting from object to a struct involves boxing. Hence implementing IEquatable<Employee> improves performance.
For example, assume we want to compare one Employee structure with another:
if(e1.Equals(e2))
{
//do some
}
For the above example it will use Equals with Employee as the parameter, so no boxing or unboxing is required.
struct Employee : IEquatable<Employee>
{
public int Id { get; set; }
public bool Equals(Employee other)
{
// no boxing nor unboxing, direct compare
return this.Id == other.Id;
}
public override bool Equals(object obj)
{
if(obj is Employee)
{ // unboxing
return ((Employee)obj).Id==this.Id;
}
return base.Equals(obj);
}
}
Some more examples:
The Int32 structure implements IEquatable<int>
The Boolean structure implements IEquatable<bool>
The Single structure implements IEquatable<float>
So if you call someInt.Equals(1), it doesn't fire the Equals(object) method; it fires the Equals(int) method.
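A quick sketch that makes the overload resolution visible:

```csharp
int someInt = 1;

// The int-typed argument resolves to IEquatable<int>.Equals(int): no boxing.
bool viaTyped = someInt.Equals(1);

// An explicitly boxed argument resolves to Equals(object) instead.
bool viaObject = someInt.Equals((object)1);

// Both are true; only the second call boxes its argument.
```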
How do I create a class to store a range of any type provided that the type allows comparison operators to ensure that that the first value provided to the constructor is less than the second?
public class Range<T> where T : IComparable<T>
{
private readonly T lowerBound;
private readonly T upperBound;
/// <summary>
/// Initializes a new instance of the Range class
/// </summary>
/// <param name="lowerBound">The smaller number in the Range tuplet</param>
/// <param name="upperBound">The larger number in the Range tuplet</param>
public Range(T lowerBound, T upperBound)
{
if (lowerBound > upperBound)
{
throw new ArgumentException("lowerBound must be less than upper bound", lowerBound.ToString());
}
this.lowerBound = lowerBound;
this.upperBound = upperBound;
}
I am getting the error:
Error 1 Operator '>' cannot be applied to operands of type 'T' and 'T' C:\Source\MLR_Rebates\DotNet\Load_MLR_REBATE_IBOR_INFO\Load_MLR_REBATE_IBOR_INFO\Range.cs 27 17 Load_MLR_REBATE_IBOR_INFO
You could use
where T : IComparable<T>
... or you could just use an IComparer<T> in your code, defaulting to Comparer<T>.Default.
This latter approach is useful as it allows ranges to be specified even for types which aren't naturally comparable to each other, but could be compared in a custom, sensible way.
On the other hand, it does mean that you won't catch incomparable types at compile time.
(As an aside, creating a range type introduces a bunch of interesting API decisions around whether you allow reversed ranges, how you step over them, etc. Been there, done that, was never entirely happy with the results...)
You cannot constrain a T to support a given set of operators, but you can constrain to IComparable<T>
where T : IComparable<T>
Which at least allows you to use first.CompareTo(second). Your basic numeric types, plus strings, DateTimes, etc., implement this interface.
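Applied to the constructor from the question, the unsupported > becomes a CompareTo call:

```csharp
using System;

public class Range<T> where T : IComparable<T>
{
    private readonly T lowerBound;
    private readonly T upperBound;

    public Range(T lowerBound, T upperBound)
    {
        // CompareTo > 0 means lowerBound sorts after upperBound:
        if (lowerBound.CompareTo(upperBound) > 0)
        {
            throw new ArgumentException(
                "lowerBound must be less than or equal to upperBound",
                nameof(lowerBound));
        }
        this.lowerBound = lowerBound;
        this.upperBound = upperBound;
    }
}
```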
To combine two suggestions already given, we combine the ability to create Ranges with a manually defined comparison rule, with an over-ride for those types that implement IComparable<T>, and with compile-time safety on the latter.
We take much the same approach as the static Tuple class' Create method. This can also offer concision in allowing us to rely upon type inference:
public static class Range // just a class to a hold the factory methods
{
public static Range<T> Create<T>(T lower, T upper) where T : IComparable<T>
{
return new Range<T>(lower, upper, Comparer<T>.Default);
}
//We don't need this override, but it adds consistency that we can always
//use Range.Create to create a range we want.
public static Range<T> Create<T>(T lower, T upper, IComparer<T> cmp)
{
return new Range<T>(lower, upper, cmp);
}
}
public class Range<T>
{
private readonly T lowerBound;
private readonly T upperBound;
private readonly IComparer<T> _cmp;
public Range(T lower, T upper, IComparer<T> cmp)
{
if(lower == null)
throw new ArgumentNullException("lower");
if(upper == null)
throw new ArgumentNullException("upper");
if((_cmp = cmp).Compare(lower, upper) > 0)
throw new ArgumentOutOfRangeException("lower", "Argument \"lower\" cannot be greater than \"upper\".");
lowerBound = lower;
upperBound = upper;
}
}
Now we can't accidentally construct a Range with the default comparer where it won't work, but can also leave out the comparer and have it compile only if it'll work.
Edit:
There are two main approaches in .NET to making items comparable in an ordering sense, and this uses both.
One way is to have a type define its own way of being compared with another object of the same type*. This is done by IComparable<T> (or the non-generic IComparable, but then you have to catch type mismatches at run-time, so it isn't as useful post .NET 1.1).
int for example, implements IComparable<int>, which means we can do 3.CompareTo(5) and receive a negative number indicating that 3 comes before 5 when the two are put into order.
Another way is to have an object that implements IComparer<T> (and likewise a non-generic IComparer that is less useful post .NET1.1). This is used to compare two objects, generally of a different type to the comparer. We explicitly use this either because a type we are interested in doesn't implement IComparable<T> or because we want to override the default sorting order. For example we could create the following class:
public class EvenFirst : IComparer<int>
{
public int Compare(int x, int y)
{
int evenOddCmp = x % 2 - y % 2;
if(evenOddCmp != 0)
return evenOddCmp;
return x.CompareTo(y);
}
}
If we used this to sort a list of integers (list.Sort(new EvenFirst())), it would put all the even numbers first, and all the odd numbers last, but have the even and odd numbers in normal order within their block.
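For instance (repeating the comparer so the sketch is self-contained):

```csharp
using System.Collections.Generic;

public class EvenFirst : IComparer<int>
{
    public int Compare(int x, int y)
    {
        // Evens (x % 2 == 0) sort before odds (x % 2 == 1)...
        int evenOddCmp = x % 2 - y % 2;
        if (evenOddCmp != 0)
            return evenOddCmp;
        // ...and within each block, normal ascending order applies.
        return x.CompareTo(y);
    }
}

public static class Demo
{
    public static List<int> SortEvenFirst(List<int> list)
    {
        list.Sort(new EvenFirst());
        return list;
    }
}
```

`Demo.SortEvenFirst(new List<int> { 5, 2, 3, 4, 1 })` yields { 2, 4, 1, 3, 5 }.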
Okay, so now we've got two different ways of comparing instances of a given type, one which is provided by the type itself and which is generally the "most natural", which is great, and one which gives us more flexibility, which is also great. But this means that we will have to write two versions of any piece of code that cares about such comparisons - one that uses IComparable<T>.CompareTo() and one that uses IComparer<T>.Compare().
It gets worse if we care about two types of objects. Then we need 4 different methods!
The solution is provided by Comparer<T>.Default. This static property gives us an implementation of IComparer<T>.Compare() for a given T that calls into IComparable<T>.CompareTo.
So, now we generally only ever write our methods to make use of IComparer<T>.Compare(). Providing a version that uses CompareTo for the most common sort of comparisons is just a matter of an override that uses the default comparer. E.g. instead of:
public void SortStrings(IComparer<string> cmp)//lets caller decide about case-sensitivity etc.
{
//pretty complicated sorting code that uses cmp.Compare(string1, string2)
}
public void SortStrings()
{
//equally complicated sorting code that uses string.CompareTo()
}
We have:
public void SortStrings(IComparer<string> cmp)//lets caller decide about case-sensitivity etc.
{
//pretty complicated sorting code that uses cmp.Compare(string1, string2)
}
public void SortStrings()
{
SortStrings(Comparer<string>.Default);//simple one-line code to re-use all the above.
}
As you can see, we've the best of both worlds here. Someone who just wants the default behaviour calls SortStrings(), someone who wants a more specific comparison rule to be used calls e.g. SortStrings(StringComparer.CurrentCultureIgnoreCase), and the implementation only had to do a tiny bit of work to offer that choice.
This is what is done with the suggestion for Range here. The constructor always takes an IComparer<T> and always uses its Compare, but there's a factory method that calls it with Comparer<T>.Default to offer the other behaviour.
Note that we don't strictly need this factory method, we can just use an overload on the constructor:
public Range(T lower, T upper)
:this(lower, upper, Comparer<T>.Default)
{
}
The downside though, is that we can't add a where clause to this to restrict it to cases where it'll work. This means that if we called it with types that didn't implement IComparer<T> we'd get an ArgumentException at runtime rather than a compiler error. Which was Jon's point when he said:
On the other hand, it does mean that you won't catch incomparable types at compile time.
The use of the factory method is purely to ensure this wouldn't happen. Personally, I'd probably just go with the constructor override and try to be sure not to call it inappropriately, but I added the bit with the factory method since it does combine two things that had come up on this thread.
*Strictly, there's nothing to stop e.g. A : IComparable<B>, but while this is of little use in the first place, one also doesn't know for most uses whether the code using it will end up calling a.CompareTo(b) or b.CompareTo(a), so it doesn't work unless we do the same on both classes. In short, if it can't be pushed up to a common base class it's just going to get messy fast.
You can use the IComparable interface, which is used widely in the .NET Framework.
If I expose an IEnumerable<T> as a property of a class, is there any possibility that it can be mutated by the users of a class, and if so what is the best way of protecting against mutation, while keeping the exposed property's type IEnumerable<T>?
It depends on what you're returning. If you return (say) a mutable List<string> then the client could indeed cast it back to List<string> and mutate it.
How you protect your data depends on what you've got to start with. ReadOnlyCollection<T> is a good wrapper class, assuming you've got an IList<T> to start with.
If your clients won't benefit from the return value implementing IList<T> or ICollection<T>, you could always do something like:
public IEnumerable<string> Names
{
get { return names.Select(x => x); }
}
which effectively wraps the collection in an iterator. (There are various different ways of using LINQ to hide the source... although it's not documented which operators hide the source and which don't. For example calling Skip(0) does hide the source in the Microsoft implementation, but isn't documented to do so.)
Select definitely should hide the source though.
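A sketch of the difference (the `Demo`/`Raw`/`Wrapped` names are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Demo
{
    private static readonly List<string> names = new List<string> { "a", "b" };

    // Exposes the list itself: a caller can cast it back and mutate it.
    public static IEnumerable<string> Raw => names;

    // Wraps the list in a Select iterator: the cast back fails.
    public static IEnumerable<string> Wrapped => names.Select(x => x);
}
```

`Demo.Raw is List<string>` is true, while `Demo.Wrapped is List<string>` is false.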
The user may be able to cast the exposed IEnumerable<T> back to the underlying collection class. To prevent this, expose:
collection.Select(x => x)
This creates a new IEnumerable that can't be cast back to the collection.
The collection can be cast back to the original type and if it is mutable then it can then be mutated.
One way to avoid the possibility of the original being mutated is returning a copy of the list.
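A sketch of the copy approach (the `Container` name is illustrative; ToList allocates a fresh list on every access):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Container
{
    private readonly List<string> names = new List<string> { "a", "b" };

    // Each access returns an independent copy, so even a successful
    // cast back to List<string> can only mutate that copy.
    public IEnumerable<string> Names => names.ToList();
}
```

The trade-off is an O(n) copy per access, which may matter for large or frequently read collections.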
I would not suggest wrapping an IEnumerable in an iterator to prevent recipients from monkeying with the underlying connection. My inclination would be to use a wrapper something like:
public struct WrappedEnumerable<T> : IEnumerable<T>
{
IEnumerable<T> _dataSource;
public WrappedEnumerable(IEnumerable<T> dataSource)
{
_dataSource = dataSource;
}
public IEnumerator<T> GetEnumerator()
{
return _dataSource.GetEnumerator();
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return ((System.Collections.IEnumerable)_dataSource).GetEnumerator();
}
}
If the return type of the properties is IEnumerable<T>, the type coercion from WrappedEnumerable<T> to IEnumerable<T> would box the structure and make the behavior and performance match those of a class. If, however, the properties were defined as returning type WrappedEnumerable<T>, then it would be possible to save a boxing step in cases where the calling code either assigns the return to a variable of type WrappedEnumerable<T> (most likely as a result of something like var myKeys = myCollection.Keys;) or simply uses the property directly in a "foreach" loop. Note that if the enumerator returned by GetEnumerator() is a struct, it will still have to be boxed in any case.
The performance advantage of using a struct rather than a class would generally be fairly slight; conceptually, however, using a struct would fit with the general recommendation that properties not create new heap object instances. Constructing a new struct instance which contains nothing but a reference to an existing heap object is very cheap. The biggest disadvantage to using a struct as defined here would be that it would lock in the behavior of the thing returned to the calling code, whereas simply returning IEnumerable<T> would allow other approaches.
Note also that it may in some cases be possible to eliminate the requirement for any boxing and exploit the duck-typing optimizations in C# and vb.net foreach loop if one used a type like:
public struct FancyWrappedEnumerable<TItems,TEnumerator,TDataSource> : IEnumerable<TItems> where TEnumerator : IEnumerator<TItems>
{
TDataSource _dataSource;
Func<TDataSource,TEnumerator> _convertor;
public FancyWrappedEnumerable(TDataSource dataSource, Func<TDataSource, TEnumerator> convertor)
{
_dataSource = dataSource;
_convertor = convertor;
}
public TEnumerator GetEnumerator()
{
return _convertor(_dataSource);
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return _convertor(_dataSource);
}
}
The convertor delegate could be a static delegate, and thus not require any heap-object creation at run-time (outside class initialization). Using this approach, if one wanted to return an enumerator from a List<int>, the property return type would be FancyWrappedEnumerable<int, List<int>.Enumerator, List<int>>. Perhaps reasonable if the caller just uses the property directly in a foreach loop, or in a var declaration, but rather icky if the caller wants to declare a storage location of the type in a way that can't use var.