I have a situation where many objects of different types are constructed at the start of an App and are then invoked together using an abstract method.
Many of these objects have one, sometimes two, parameters that are determined during construction and affect part of what the object does when invoked.
So, as I understand it, when objects[i].Invoke is called (except in my case it's not just an array), first the virtual table gets dereferenced, and then there is an additional if check.
Since the virtual table lookup is unavoidable, shouldn't it be faster to have two implementations of the Invoke method to get rid of the if? (memory usage isn't a concern)
Now, code duplication isn't nice. But I know that if I have a generic MyType<T>, the JIT will generate one version of MyType (and all its methods) for all reference types T, and one version per value type T (even if they have the same size). I see it as a way to force the JIT into generating two different Invoke functions that differ only in the part where the if statement was.
I made a speed test, to make sure I'm not missing anything. And so far it seems I am:
using System.Diagnostics;
using System.Runtime.CompilerServices;
var test_size = 1000000;
var exp_res = new int[test_size];
var got_res = new int[test_size];
var oa = new Base[test_size,3];
var rng = new Random();
for (int i = 0; i < test_size; i++)
{
var b = rng.Next(2)==0;
exp_res[i] = !b ? Base.CalcRes1() : Base.CalcRes2();
oa[i,0] = !b ? new T1_1() : new T1_2();
oa[i,1] = new T2(b);
oa[i,2] = !b ? new T3<T3_h1>() : new T3<T3_h2>();
}
var sw = new int[3].Select(_=>new Stopwatch()).ToArray();
//[MethodImpl(MethodImplOptions.AggressiveInlining)]
void test_step()
{
{
// Shuffle a random row, so that "oa[i,1].Func1()" can't be
// optimized into "((T2)oa[i,1]).Func1()",
// turning the virtual call into a non-virtual one
var i = rng.Next(test_size);
var temp = oa[i, 0];
oa[i, 0] = oa[i, 1];
oa[i, 1] = oa[i, 2];
oa[i, 2] = temp;
}
for (int sw_i = 0; sw_i < sw.Length; sw_i++)
{
Array.Clear(got_res, 0, got_res.Length); // Reset to zero (Array.Initialize is a no-op for int[])
sw[sw_i].Start();
for (int i = 0; i < test_size; i++)
got_res[i] = oa[i, sw_i].Func1();
sw[sw_i].Stop();
// Test sanity + ensure "got_res[i] =" isn't optimized out
if (!got_res.SequenceEqual(exp_res)) throw new Exception();
}
}
// Dry run, to JIT-compile everything
// 10 times just in case
for (int i = 0; i < 10; i++) test_step();
for (int sw_i = 0; sw_i < sw.Length; sw_i++)
sw[sw_i].Reset();
// Testing 1000 times, to average out the background noise
var test_count = 1000;
for (int i = 1; i <= test_count; i++)
{
test_step();
Console.Title = (i/(float)test_count).ToString("P");
Console.SetCursorPosition(0, 0);
for (int sw_i = 0; sw_i < sw.Length; sw_i++)
Console.WriteLine(sw[sw_i].Elapsed);
}
void WriteMHnd(Type t) =>
Console.WriteLine(t.GetMethod("Func1")?.MethodHandle.Value.ToString("X"));
WriteMHnd(typeof(T3<T3_h1>));
WriteMHnd(typeof(T3<T3_h1>)); // Same
WriteMHnd(typeof(T3<T3_h2>)); // Different
/**
// First 2 as the same
WriteMHnd(typeof(TT<Exception>));
WriteMHnd(typeof(TT<T2>));
// Second 2 are both new methods
WriteMHnd(typeof(TT<byte>));
WriteMHnd(typeof(TT<int>));
class TT<T>
{
public void Func1() { }
};
/**/
abstract class Base
{
[MethodImpl(MethodImplOptions.NoInlining)]
public static int CalcRes1() => 5;
[MethodImpl(MethodImplOptions.NoInlining)]
public static int CalcRes2() => 9;
abstract public int Func1();
}
// Test 1: Manually implement both possibilities
sealed class T1_1 : Base
{
sealed override public int Func1() => CalcRes1();
}
sealed class T1_2 : Base
{
sealed override public int Func1() => CalcRes2();
}
// Test 2: Save choice into the field
sealed class T2 : Base
{
public bool b; // Isn't even marked as readonly
public T2(bool b) => this.b = b;
sealed public override int Func1() =>
!b ? CalcRes1() : CalcRes2();
}
// Test 3: Use generic magic, causing JIT to generate
// two versions of the "Func1" function, like in case of Test 1
interface IT3_h
{
bool GetVal();
}
struct T3_h1 : IT3_h
{
bool IT3_h.GetVal() => false;
}
struct T3_h2 : IT3_h
{
bool IT3_h.GetVal() => true;
}
sealed class T3<TVal> : Base
where TVal : struct, IT3_h
{
// Doesn't change anything
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
sealed public override int Func1() =>
!new TVal().GetVal() ? CalcRes1() : CalcRes2();
}
(targeting .NET 6.0)
Immediately something's not right, because T3 consistently takes ~1% less time than T1. I would expect JIT to generate exactly the same classes and methods as in the case of T1. Instead, it somehow found a way to optimize the generic case more than the direct one.
But at least new TVal().GetVal() does get turned into a constant, otherwise T3 would be much slower.
But, contrary to my belief, T2 takes ~15% less time than T3. How?
My main guess was that in the case of T2 the virtual table is somehow skipped. But the row shuffling at the start of test_step doesn't seem to have an effect.
I also played around with [MethodImpl], but didn't find anything interesting.
It seems that the simple solution of T2 is somehow smarter than T3. But I doubt JIT can somehow remove the if from T2.Func1, so it should be possible to find a better solution than both T2 and T3 if I can understand what is happening here.
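For completeness, one more variant may be worth benchmarking: a hypothetical T4 (not part of the test above) that makes the decision once in the constructor and caches the result in a readonly field, so Func1 contains neither an if nor any generic machinery. A minimal sketch reusing the Base class from the test:

```csharp
using System;
using System.Runtime.CompilerServices;

Console.WriteLine(new T4(false).Func1()); // 5
Console.WriteLine(new T4(true).Func1());  // 9

abstract class Base
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int CalcRes1() => 5;
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int CalcRes2() => 9;
    abstract public int Func1();
}

// Test 4 (hypothetical): decide during construction, store the result
sealed class T4 : Base
{
    private readonly int res;
    public T4(bool b) => res = !b ? CalcRes1() : CalcRes2();
    public override int Func1() => res;
}
```

This only applies when the branch outcome is fully determined by the constructor parameter; if the real Invoke does per-call work on both paths, the analogous trick is caching a delegate (e.g. a Func<int>) chosen at construction, at the cost of an extra indirect call.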
Related
This question is similar to this one, but assuming that we know the member name at compile time.
Assuming that we have a class
public class MyClass
{
public string TheProperty { get; set; }
}
and in another method, we want to set the TheProperty member of an instance of that class, but we don't know the type of the instance at compile time, we only know the property name at compile time.
So, as I see it, there are two ways to do that now:
object o = new MyClass(); // For simplicity.
o.GetType().GetProperty("TheProperty").SetValue(o, "bar"); // (1)
((dynamic) o).TheProperty = "bar"; // (2)
I measured this test case using the System.Diagnostics.Stopwatch class to find out that reflection took 475 ticks and the way using dynamic took 0 ticks, therefore being about as fast as a direct call to new MyClass().TheProperty = "bar".
Since I have almost never seen the second way used, I am a little confused and my questions now are:
Is there a flaw in my reasoning?
Should the second way be preferred over the first, or the other way around? I don't see any disadvantages of using the second way; both (1) and (2) would throw exceptions if the property were not found, wouldn't they?
Why does the second way seem to be used so rarely even though it is seemingly faster?
(...)reflection took 475 ticks and the way using dynamic took 0 ticks(...)
That is simply false. The problem is that you don't understand how dynamic works. I will assume you are setting up the benchmark correctly:
1. Running in Release mode with optimizations turned on and without the debugger.
2. Jitting the methods before actually measuring times.
And here comes the key part you are probably not doing:
3. Jitting the dynamic test without actually performing the dynamic runtime binding.
And why is step 3 important? Because the runtime will cache the dynamic call site and reuse it! So in a naive benchmark implementation, if you are doing things right, you will incur the cost of the initial dynamic call when jitting the method and therefore you won't measure it.
Run the following benchmark:
public static void Main(string[] args)
{
var repetitions = 1;
var isWarmup = true;
var foo = new Foo();
//warmup
SetPropertyWithDynamic(foo, isWarmup); //JIT method without caching the dynamic call
SetPropertyWithReflection(foo); //JIT method
var s = ((dynamic)"Hello").Substring(0, 2); //Start up the runtime compiler
for (var test = 0; test < 10; test++)
{
Console.WriteLine($"Test #{test}");
var watch = Stopwatch.StartNew();
for (var i = 0; i < repetitions; i++)
{
SetPropertyWithDynamic(foo);
}
watch.Stop();
Console.WriteLine($"Dynamic benchmark: {watch.ElapsedTicks}");
watch = Stopwatch.StartNew();
for (var i = 0; i < repetitions; i++)
{
SetPropertyWithReflection(foo);
}
watch.Stop();
Console.WriteLine($"Reflection benchmark: {watch.ElapsedTicks}");
}
Console.WriteLine(foo);
Console.ReadLine();
}
static void SetPropertyWithDynamic(object o, bool isWarmup = false)
{
if (isWarmup)
return;
((dynamic)o).TheProperty = 1;
}
static void SetPropertyWithReflection(object o)
{
o.GetType().GetProperty("TheProperty").SetValue(o, 1);
}
public class Foo
{
public int TheProperty { get; set; }
public override string ToString() => $"Foo: {TheProperty}";
}
Spot the difference between the first run and the subsequent ones?
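The call-site caching can also be seen in isolation with a minimal sketch (timings are machine-dependent; the point is only that the first pass through a given dynamic call site pays the binding cost, while later passes reuse the cached rule):

```csharp
using System;
using System.Diagnostics;

object o = "hello";
for (int i = 0; i < 3; i++)
{
    var sw = Stopwatch.StartNew();
    // Same call site every iteration: the runtime binder runs once,
    // subsequent iterations hit the cached binding
    string s = ((dynamic)o).Substring(0, 2);
    sw.Stop();
    Console.WriteLine($"call {i}: {sw.ElapsedTicks} ticks ({s})");
}
```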
I have a method that takes a DateTime and returns the date marking the end of that quarter. Because of some complexity involving business days and holiday calendars, I want to cache the result to speed up subsequent calls. I'm using a SortedSet<DateTime> to maintain a cache of data, and I use the GetViewBetween method in order to do cache lookups as follows:
private static SortedSet<DateTime> quarterEndCache = new SortedSet<DateTime>();
public static DateTime GetNextQuarterEndDate(DateTime date)
{
var oneDayLater = date.AddDays(1.0);
var fiveMonthsLater = date.AddMonths(5);
var range = quarterEndCache.GetViewBetween(oneDayLater, fiveMonthsLater);
if (range.Count > 0)
{
return range.Min;
}
// Perform expensive calc here
}
Now I want to make my cache threadsafe. Rather than use a lock everywhere which would incur a performance hit on every lookup, I'm exploring the new ImmutableSortedSet<T> collection which would allow me to avoid locks entirely. The problem is that ImmutableSortedSet<T> doesn't have the method GetViewBetween. Is there any way to get similar functionality from the ImmutableSortedSet<T>?
[EDIT]
Servy has convinced me just using a lock with a normal SortedSet<T> is the easiest solution. I'll leave the question open though just because I'm interested to know whether the ImmutableSortedSet<T> can handle this scenario efficiently.
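For reference, the lock-based version is short. A minimal sketch with the expensive calculation stubbed out (the stubbed "end of year" date is illustrative only):

```csharp
using System;
using System.Collections.Generic;

var cacheLock = new object();
var quarterEndCache = new SortedSet<DateTime>();

DateTime GetNextQuarterEndDate(DateTime date)
{
    var oneDayLater = date.AddDays(1.0);
    var fiveMonthsLater = date.AddMonths(5);
    lock (cacheLock)
    {
        var range = quarterEndCache.GetViewBetween(oneDayLater, fiveMonthsLater);
        if (range.Count > 0)
            return range.Min; // cache hit
    }
    // Cache miss: perform the expensive calc outside the lock
    // (stubbed here as "end of year" for the sketch)
    var result = new DateTime(date.Year, 12, 31);
    lock (cacheLock)
        quarterEndCache.Add(result); // Add is idempotent, so a racing thread is harmless
    return result;
}

Console.WriteLine(GetNextQuarterEndDate(new DateTime(2024, 10, 1))); // computed
Console.WriteLine(GetNextQuarterEndDate(new DateTime(2024, 10, 1))); // served from cache
```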
Let's divide the question into two parts:
How to get functionality similar to GetViewBetween with ImmutableSortedSet<T>? I'd suggest using the IndexOf method. In the snippet below, I created an extension method GetRangeBetween which should do the job.
How to implement lock-free, thread-safe updates with immutable data structures? Although this is not the original question, there are some skeptical comments regarding this issue.
The immutable collections library implements a method for exactly that purpose: System.Collections.Immutable.ImmutableInterlocked.Update<T>(ref T location, Func<T, T> transformer) where T : class. The method internally relies on atomic compare-and-exchange operations. If you want to do this by hand, you'll find an alternative implementation below which should behave the same as ImmutableInterlocked.Update.
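The built-in helper is ImmutableInterlocked.Update, in the System.Collections.Immutable namespace; a usage sketch:

```csharp
using System;
using System.Collections.Immutable;

var cache = ImmutableSortedSet<DateTime>.Empty;

// Atomically swaps in the new set; internally retries the
// compare/exchange if another thread updated cache concurrently
ImmutableInterlocked.Update(ref cache, c => c.Add(new DateTime(2024, 12, 31)));

Console.WriteLine(cache.Count); // 1
```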
So here is the code:
public static class ImmutableExtensions
{
public static IEnumerable<T> GetRangeBetween<T>(
this ImmutableSortedSet<T> set, T min, T max)
{
int i = set.IndexOf(min);
if (i < 0) i = ~i;
while (i < set.Count)
{
T x = set[i++];
if (set.KeyComparer.Compare(x, min) >= 0 &&
set.KeyComparer.Compare(x, max) <= 0)
{
yield return x;
}
else
{
break;
}
}
}
public static void LockfreeUpdate<T>(ref T item, Func<T, T> fn)
where T: class
{
T x, y;
do
{
x = item;
y = fn(x);
} while (Interlocked.CompareExchange(ref item, y, x) != x);
}
}
Usage:
private static volatile ImmutableSortedSet<DateTime> quarterEndCache =
ImmutableSortedSet<DateTime>.Empty;
private static volatile int counter; // test/verification purpose only
public static DateTime GetNextQuarterEndDate(DateTime date)
{
var oneDayLater = date.AddDays(1.0);
var fiveMonthsLater = date.AddMonths(5);
var range = quarterEndCache.GetRangeBetween(oneDayLater, fiveMonthsLater);
if (range.Any())
{
return range.First();
}
// Perform expensive calc here
// -> Meaningless dummy computation for verification purpose only
long x = Interlocked.Increment(ref counter);
DateTime test = DateTime.FromFileTime(x);
ImmutableExtensions.LockfreeUpdate(
ref quarterEndCache,
c => c.Add(test));
return test;
}
[TestMethod]
public void TestIt()
{
var tasks = Enumerable
.Range(0, 100000)
.Select(x => Task.Factory.StartNew(
() => GetNextQuarterEndDate(DateTime.Now)))
.ToArray();
Task.WaitAll(tasks);
Assert.AreEqual(100000, counter);
}
Would anyone be so kind to post the equivalent Java code for a closure like this one (obtained using C#) with anonymous inner classes?
public static Func<int, int> IncrementByN()
{
int n = 0; // n is local to the method
Func<int, int> increment = delegate(int x)
{
n++;
return x + n;
};
return increment;
}
static void Main(string[] args)
{
var v = IncrementByN();
Console.WriteLine(v(5)); // output 6
Console.WriteLine(v(6)); // output 8
}
Furthermore, can anyone explain how partial application can be obtained if lexical closures are available, and vice versa? For this second question, C# would be appreciated, but it's your choice.
Thanks so much.
There are no closures in Java yet; lambda expressions are coming in Java 8. However, the only issue with what you're trying to translate is that it has state, which is not something that lambda expressions will support, I don't think. Keep in mind, a lambda is really just shorthand so that you can easily implement single-method interfaces. You can, however, still simulate this, I believe:
final AtomicInteger n = new AtomicInteger(0);
IncrementByN v = (int x) -> x + n.incrementAndGet();
System.out.println(v.increment(5));
System.out.println(v.increment(6));
I have not tested this code though; it's just meant as an example of what might possibly work in Java 8.
Think of the Collections API. Let's say they have this interface:
public interface CollectionMapper<S,T> {
public T map(S source);
}
And a method on java.util.Collection:
public interface Collection<K> {
public <T> Collection<T> map(CollectionMapper<K,T> mapper);
}
Now, let's see that without closures:
Collection<Long> mapped = coll.map(new CollectionMapper<Foo,Long>() {
public Long map(Foo foo) {
return foo.getLong();
}
}
Why not just write this:
Collection<Long> mapped = ...;
for (Foo foo : coll) {
mapped.add(foo.getLong());
}
Much more concise right?
Now introduce lambdas:
Collection<Long> mapped = coll.map( (Foo foo) -> foo.getLong() );
See how much nicer the syntax is? And you can chain it too (we'll assume there's an interface to do filtering, which returns boolean values to determine whether to filter out a value or not):
Collection<Long> mappedAndFiltered =
coll.map( (Foo foo) -> foo.getLong() )
.filter( (Long val) -> val.longValue() < 1000L );
This code is equivalent I believe (at least it produces the desired output):
public class Test {
static interface IncrementByN {
int increment(int x);
}
public static void main(String[] args) throws InterruptedException {
IncrementByN v = new IncrementByN() { //anonymous class
int n = 0;
@Override
public int increment(int x) {
n++;
return x + n;
}
};
System.out.println(v.increment(5)); // output 6
System.out.println(v.increment(6)); // output 8
}
}
Assuming we have a generic function interface:
public interface Func<A, B> {
B call(A a);
}
Then we can write it like this:
public class IncrementByN {
public static Func<Integer, Integer> IncrementByN()
{
final int n_outer = 0; // n is local to the method
Func<Integer, Integer> increment = new Func<Integer, Integer>() {
int n = n_outer; // capture it into a non-final instance variable
// we can really just write int n = 0; here
public Integer call(Integer x) {
n++;
return x + n;
}
};
return increment;
}
public static void main(String[] args) {
Func<Integer, Integer> v = IncrementByN();
System.out.println(v.call(5)); // output 6
System.out.println(v.call(6)); // output 8
}
}
Some notes:
In your program, you capture the variable n by reference from the enclosing scope, and can modify that variable from the closure. In Java, you can only capture final variables (thus capture is only by value).
What I did here is capture the final variable from the outside, and then assign it into a non-final instance variable inside the anonymous class. This allows "passing info" into the closure and at the same time having it be assignable inside the closure. However, this information flow only works "one way" -- changes to n inside the closure are not reflected in the enclosing scope. This is appropriate for this example because that local variable in the method is not used again after being captured by the closure.
If, instead, you want to be able to pass information "both ways", i.e. have the closure also be able to change things in the enclosing scope, and vice versa, you will need to instead capture a mutable data structure, like an array, and then make changes to elements inside that. That is uglier, and is rarer to need to do.
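As for the second part of the original question: with lexical closures, partial application falls out almost for free, because fixing an argument is just capturing it in a closure. A C# sketch:

```csharp
using System;

// Currying: turn a two-argument function into a chain of one-argument
// functions; each stage returns a closure over the arguments fixed so far
Func<int, int, int> add = (a, b) => a + b;
Func<int, Func<int, int>> curriedAdd = a => b => add(a, b);

// Partial application: fix the first argument, obtain a new function
var add5 = curriedAdd(5);
Console.WriteLine(add5(3));  // 8
Console.WriteLine(add5(10)); // 15
```

Going the other direction, a closure can be simulated with partial application by passing the would-be captured environment as an explicit, pre-applied argument.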
Please note this question relates to performance only. Let's skip design guidelines, philosophy, compatibility, portability and anything that is not related to pure performance. Thank you.
Now to the question. I always assumed that because C# getters/setters are really methods in disguise, reading a public field must be faster than calling a getter.
So to make sure, I did a test (the code below). However, this test only produces the expected results (i.e. fields are ~34% faster than getters) if you run it from inside Visual Studio.
Once you run it from the command line it shows pretty much the same timing...
The only explanation could be that the CLR does additional optimisation (correct me if I am wrong here).
I do not believe that in a real application, where those properties are used in much more sophisticated ways, they would be optimised in the same way.
Please help me to prove or disprove the idea that in real life properties are slower than fields.
The question is: how should I modify the test classes to make the CLR change behaviour so that the public field outperforms the getters? OR show me that any property without internal logic will perform the same as a field (at least on the getter).
EDIT: I am only talking about Release x64 build.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.Runtime.InteropServices;
namespace PropertyVsField
{
class Program
{
static int LEN = 20000000;
static void Main(string[] args)
{
List<A> a = new List<A>(LEN);
List<B> b = new List<B>(LEN);
Random r = new Random(DateTime.Now.Millisecond);
for (int i = 0; i < LEN; i++)
{
double p = r.NextDouble();
a.Add(new A() { P = p });
b.Add(new B() { P = p });
}
Stopwatch sw = new Stopwatch();
double d = 0.0;
sw.Restart();
for (int i = 0; i < LEN; i++)
{
d += a[i].P;
}
sw.Stop();
Console.WriteLine("auto getter. {0}. {1}.", sw.ElapsedTicks, d);
sw.Restart();
for (int i = 0; i < LEN; i++)
{
d += b[i].P;
}
sw.Stop();
Console.WriteLine(" field. {0}. {1}.", sw.ElapsedTicks, d);
Console.ReadLine();
}
}
class A
{
public double P { get; set; }
}
class B
{
public double P;
}
}
As others have already mentioned, the getters are inlined.
If you want to avoid inlining, you have to
replace the automatic properties with manual ones:
class A
{
private double p;
public double P
{
get { return p; }
set { p = value; }
}
}
and tell the compiler not to inline the getter (or both, if you feel like it):
[MethodImpl(MethodImplOptions.NoInlining)]
get { return p; }
Note that the first change does not make a difference in performance, whereas the second change shows a clear method call overhead:
Manual properties:
auto getter. 519005. 10000971,0237547.
field. 514235. 20001942,0475098.
No inlining of the getter:
auto getter. 785997. 10000476,0385552.
field. 531552. 20000952,077111.
Have a look at the Properties vs Fields – Why Does it Matter? (Jonathan Aneja) blog article from one of the VB team members on MSDN. He outlines the property versus fields argument and also explains trivial properties as follows:
One argument I’ve heard for using fields over properties is that
“fields are faster”, but for trivial properties that’s actually not
true, as the CLR’s Just-In-Time (JIT) compiler will inline the
property access and generate code that’s as efficient as accessing a
field directly.
The JIT will inline any method (not just a getter) that its internal metrics determine will be faster inlined. Given that a standard property is return _Property; it will be inlined in every case.
The reason you are seeing different behavior is that in Debug mode with a debugger attached, the JIT is significantly handicapped, to ensure that any stack locations match what you would expect from the code.
You are also forgetting the number one rule of performance, testing beats thinking. For instance even though quick sort is asymptotically faster than insertion sort, insertion sort is actually faster for extremely small inputs.
The only explanation could be that the CLR does additional optimisation (correct me if I am wrong here).
Yes, it is called inlining. It is done by the JIT compiler, at the machine-code level. As the getter/setter are trivial (i.e. very simple code), the method calls are eliminated and the getter/setter body is emitted directly into the surrounding code.
This does not happen in debug mode in order to support debugging (i.e. the ability to set a breakpoint in a getter or setter).
In visual studio there is no way to do that in the debugger. Compile release, run without attached debugger and you will get the full optimization.
I do not believe that in real application where those properties being used in much more sophisticated way they will be optimised in the same way.
The world is full of illusions that are wrong. The properties will be optimised, as they are still trivial (i.e. simple code), and so they are inlined.
It should be noted that it's possible to see the "real" performance in Visual Studio.
Compile in Release mode with Optimisations enabled.
Go to Debug -> Options and Settings, and uncheck "Suppress JIT optimization on module load (Managed only)".
Optionally, uncheck "Enable Just My Code" otherwise you may not be able to step in the code.
Now the jitted assembly will be the same even with the debugger attached, allowing you to step through the optimised disassembly if you so please. This is essential to understand how the CLR optimises code.
After reading all your answers, I decided to make a benchmark with this code:
[TestMethod]
public void TestFieldVsProperty()
{
const int COUNT = 0x7fffffff;
A a1 = new A();
A a2 = new A();
B b1 = new B();
B b2 = new B();
C c1 = new C();
C c2 = new C();
D d1 = new D();
D d2 = new D();
Stopwatch sw = new Stopwatch();
long t1, t2, t3, t4;
sw.Restart();
for (int i = COUNT - 1; i >= 0; i--)
{
a1.P = a2.P;
}
sw.Stop();
t1 = sw.ElapsedTicks;
sw.Restart();
for (int i = COUNT - 1; i >= 0; i--)
{
b1.P = b2.P;
}
sw.Stop();
t2 = sw.ElapsedTicks;
sw.Restart();
for (int i = COUNT - 1; i >= 0; i--)
{
c1.P = c2.P;
}
sw.Stop();
t3 = sw.ElapsedTicks;
sw.Restart();
for (int i = COUNT - 1; i >= 0; i--)
{
d1.P = d2.P;
}
sw.Stop();
t4 = sw.ElapsedTicks;
long max = Math.Max(Math.Max(t1, t2), Math.Max(t3, t4));
Console.WriteLine($"auto: {t1}, {max * 100d / t1:00.00}%.");
Console.WriteLine($"field: {t2}, {max * 100d / t2:00.00}%.");
Console.WriteLine($"manual: {t3}, {max * 100d / t3:00.00}%.");
Console.WriteLine($"no inlining: {t4}, {max * 100d / t4:00.00}%.");
}
class A
{
public double P { get; set; }
}
class B
{
public double P;
}
class C
{
private double p;
public double P
{
get => p;
set => p = value;
}
}
class D
{
public double P
{
[MethodImpl(MethodImplOptions.NoInlining)]
get;
[MethodImpl(MethodImplOptions.NoInlining)]
set;
}
}
When testing in Debug mode, I got this result:
auto: 35142496, 100.78%.
field: 10451823, 338.87%.
manual: 35183121, 100.67%.
no inlining: 35417844, 100.00%.
but when I switch to Release mode, the result is different than before:
auto: 2161291, 873.91%.
field: 2886444, 654.36%.
manual: 2252287, 838.60%.
no inlining: 18887768, 100.00%.
It seems the auto property is actually the better way.
I wish to create the following test in NUnit for this scenario: we wish to verify that a new calculation method yields results similar to those of an old system. An acceptable difference (or rather a redefinition of equality) between corresponding values has been defined as
abs(old_val - new_val) < 0.0001
I know that I can loop through every value from the new list, compare it to the value from the old list, and test the above condition.
How would I achieve this using NUnit's CollectionAssert.AreEqual method (or some other CollectionAssert method)?
The current answers are outdated. Since NUnit 2.5, there is an overload of CollectionAssert.AreEqual that takes a System.Collections.IComparer.
Here is a minimal implementation:
public class Comparer : System.Collections.IComparer
{
private readonly double _epsilon;
public Comparer(double epsilon)
{
_epsilon = epsilon;
}
public int Compare(object x, object y)
{
var a = (double)x;
var b = (double)y;
double delta = System.Math.Abs(a - b);
if (delta < _epsilon)
{
return 0;
}
return a.CompareTo(b);
}
}
[NUnit.Framework.Test]
public void MyTest()
{
var a = ...
var b = ...
NUnit.Framework.CollectionAssert.AreEqual(a, b, new Comparer(0.0001));
}
Well, there is a method in the NUnit framework that allows me to do tolerance checks on collections: refer to the Equal Constraint. One uses the AsCollection and Within modifiers. On that note, though, I am not 100% sure regarding the implications of this statement:
If you want to treat the arrays being compared as simple collections,
use the AsCollection modifier, which causes the comparison to be made
element by element, without regard for the rank or dimensions of the
array.
[Test]
//[ExpectedException()]
public void CheckLists_FailsAt0()
{
var expected = new[] { 0.0001, 0.4353245, 1.3455234, 345345.098098 };
var result1 = new[] { -0.0004, 0.43520, 1.3454, 345345.0980 };
Assert.That(result1, Is.EqualTo(expected).AsCollection.Within(0.0001), "fail at [0]"); // fail on [0]
}
[Test]
//[ExpectedException()]
public void CheckLists_FailAt1()
{
var expected = new[] { 0.0001, 0.4353245, 1.3455234, 345345.098098 };
var result1a = new[] { 0.0001000000 , 0.4348245000 , 1.3450234000 , 345345.0975980000 };
Assert.That(result1a, Is.EqualTo(expected).AsCollection.Within(0.0001), "fail at [1]"); // fail on [3]
}
[Test]
public void CheckLists_AllPass_ForNegativeDiff_of_1over10001()
{
var expected = new[] { 0.0001, 0.4353245, 1.3455234, 345345.098098 };
var result2 = new[] { 0.00009900 , 0.43532350 , 1.34552240 , 345345.09809700 };
Assert.That(result2, Is.EqualTo(expected).AsCollection.Within(0.0001)); // pass
}
[Test]
public void CheckLists_StillPass_ForPositiveDiff_of_1over10001()
{
var expected = new[] { 0.0001, 0.4353245, 1.3455234, 345345.098098 };
var result3 = new[] { 0.00010100 , 0.43532550 , 1.34552440 , 345345.09809900 };
Assert.That(result3, Is.EqualTo(expected).AsCollection.Within(0.0001)); // pass
}
NUnit does not define any delegate or interface to perform custom checks on lists and determine that an expected result is valid.
But I think the best and simplest option is writing a small static method that achieves your checks:
private const float MIN_ACCEPT_VALUE = 0.0001f;
public static void IsAcceptableDifference(IList collection, IList oldCollection)
{
if (collection == null)
throw new Exception("Source collection is null");
if (oldCollection == null)
throw new Exception("Old collection is null");
if (collection.Count != oldCollection.Count)
throw new Exception("Different lengths");
for (int i = 0; i < collection.Count; i++)
{
float newValue = (float)collection[i];
float oldValue = (float)oldCollection[i];
float difference = Math.Abs(oldValue - newValue);
if (difference >= MIN_ACCEPT_VALUE) // throw only when outside tolerance
{
throw new Exception(
string.Format(
"Found a difference of {0} at index {1}",
difference,
i));
}
}
}
You've asked how to achieve your desired test using a CollectionAssert method without looping through the list. I'm sure this is obvious, but looping is exactly what such a method would do...
The short answer to your exact question is that you can't use CollectionAssert methods to do what you want. However, if what you really want is an easy way to compare lists of floating point numbers and assert their equality, then read on.
The method Assert.AreEqual( double expected, double actual, double tolerance ) releases you from the need to write the individual item assertions yourself. Using LINQ, you could do something like this:
double delta = 0.0001;
IEnumerable<double> expectedValues;
IEnumerable<double> actualValues;
// code code code
foreach (var pair in expectedValues.Zip(actualValues, Tuple.Create))
{
Assert.AreEqual(pair.Item1, pair.Item2, delta, "Collections differ.");
}
If you wanted to get fancier, you could pull this out into a method of its own, catch the AssertionException, massage it and rethrow it for a cleaner interface.
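Such a helper might look like the sketch below; it uses a plain exception instead of NUnit's AssertionException so it is framework-agnostic (swap the throw for Assert.Fail inside a test fixture):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static void AssertAllWithin(IEnumerable<double> expected,
                            IEnumerable<double> actual,
                            double delta)
{
    var e = expected.ToList();
    var a = actual.ToList();
    if (e.Count != a.Count)
        throw new InvalidOperationException(
            $"Collections differ in length: {e.Count} vs {a.Count}.");
    for (int i = 0; i < e.Count; i++)
        if (Math.Abs(e[i] - a[i]) >= delta)
            throw new InvalidOperationException(
                $"Collections differ at index {i}: {e[i]} vs {a[i]}.");
}

AssertAllWithin(new[] { 1.0, 2.0 }, new[] { 1.00005, 2.0 }, 0.0001); // passes
Console.WriteLine("ok");
```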
If you don't care about which items differ:
var areEqual = expectedValues
.Zip(actualValues, Tuple.Create)
.Select(tup => Math.Abs(tup.Item1 - tup.Item2) < delta)
.All(b => b);
Assert.IsTrue(areEqual, "Collections differ.");