What is the complexity of subscribing (+=) and unsubscribing (-=) a delegate in C#?
using System;

namespace MulticastDelegateDemo
{
    public delegate void MathDelegate(int No1, int No2);

    public class Program
    {
        public static void Add(int x, int y)
        {
            Console.WriteLine("THE SUM IS : " + (x + y));
        }
        public static void Sub(int x, int y)
        {
            Console.WriteLine("THE SUB IS : " + (x - y));
        }
        public void Mul(int x, int y)
        {
            Console.WriteLine("THE MUL IS : " + (x * y));
        }
        public void Div(int x, int y)
        {
            Console.WriteLine("THE DIV IS : " + (x / y));
        }
        static void Main(string[] args)
        {
            Program p = new Program();
            MathDelegate del1 = new MathDelegate(Add);
            MathDelegate del2 = new MathDelegate(Program.Sub);
            MathDelegate del3 = p.Mul;
            MathDelegate del4 = new MathDelegate(p.Div);
            // In this example del5 is a multicast delegate. We can use the + (plus)
            // operator to chain delegates together and the - (minus) operator to remove one.
            MathDelegate del5 = del1 + del2 + del3 + del4;
            del5.Invoke(20, 5);
            Console.WriteLine();
            del5 -= del2;
            del5(22, 7);
            Console.ReadKey();
        }
    }
}
The implementation will be runtime dependent; however, based on the interface, its restrictions, and the typical use cases, the easiest (and probably most efficient) way to implement the backend would be something list-like, such as a linked list. The main impact would be on unsubscribe lookups.
You can subscribe the same delegate multiple times in Visual Studio 2019 (a huge performance hit if you leak that dozens of times), so the implementation is obviously just appending whatever you give it.
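You can see that appending behaviour directly by inspecting the invocation list; a minimal sketch (using a plain Action for brevity, not the event classes from the test below):

Action handler = () => Console.WriteLine("hi");
Action evt = null;

evt += handler;
evt += handler;   // subscribing the same delegate twice is allowed

Console.WriteLine(evt.GetInvocationList().Length);   // prints 2: both entries are kept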
In a simple loop test, subscribing is significantly faster, while unsubscribing kicks the fan in on my laptop.
This is using the Stopwatch class and ElapsedMilliseconds. The classes are empty apart from the standard event declaration and handler methods.
Looping 50000 times
Subscribe: 15
Unsubscribe: 9202
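The event class and subscriber are not shown in the original post; a minimal reconstruction (my assumption of what "standard event declaration and methods" means) could be:

public class MyEventClass
{
    // standard .NET event declaration
    public event EventHandler SampleEvent;
}

public class EventSubscriber
{
    // empty handler matching the EventHandler signature
    public void SampleEventReceiver(object sender, EventArgs e) { }
}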
static void Main(string[] args)
{
    EventSubscriber ms = new EventSubscriber();
    MyEventClass myEventClass = new MyEventClass();
    int loops = 50000;
    Stopwatch swsubscribe = new Stopwatch();
    Stopwatch swunsubscribe = new Stopwatch();

    swsubscribe.Start();
    for (int i = 0; i < loops; i++)
    {
        myEventClass.SampleEvent += ms.SampleEventReceiver;
    }
    swsubscribe.Stop();
    Console.WriteLine($"Looping {loops} times");
    Console.WriteLine($"Subscribe: {swsubscribe.ElapsedMilliseconds}");

    swunsubscribe.Start();
    for (int i = 0; i < loops; i++)
    {
        myEventClass.SampleEvent -= ms.SampleEventReceiver;
    }
    swunsubscribe.Stop();
    Console.WriteLine($"Unsubscribe: {swunsubscribe.ElapsedMilliseconds}");
}
Just guessing, but based on the timing, it's iterating the full list on every removal and unsubscribing the last entry that matches.
Subscribing (+=) and unsubscribing (-=) are actually shorthand for calls to the static methods System.Delegate.Combine and System.Delegate.Remove. These in turn call CombineImpl and RemoveImpl on the delegate pair passed in (with some special-casing when one of the parameters is null).
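In other words, the two lines in each pair below compile to the same thing (a minimal sketch, assuming a plain Action):

Action a = () => Console.WriteLine("A");
Action b = () => Console.WriteLine("B");

Action combined = a + b;                                       // shorthand
Action combinedExplicit = (Action)Delegate.Combine(a, b);      // what the compiler emits

Action removed = combined - a;                                 // shorthand
Action removedExplicit = (Action)Delegate.Remove(combined, a); // what the compiler emits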
You can check the actual implementation yourself on source.dot.net. There you can see that MulticastDelegate.CombineImpl is O(m), where m is the number of items in the right operand's invocation list:
Action act = null;
int count = 25000;
var sw = new Stopwatch();

sw.Restart();
for (int i = 0; i < count; i++)
{
    Action a = () => Console.WriteLine();
    act = (a += act);
}
Console.WriteLine($"Subscribe - {count}: {sw.ElapsedMilliseconds}"); // prints "Subscribe - 25000: 1662" on my machine

sw.Restart();
for (int i = 0; i < count; i++)
{
    act += () => Console.WriteLine();
}
Console.WriteLine($"Subscribe - {count}: {sw.ElapsedMilliseconds}"); // prints "Subscribe - 25000: 2" on my machine
As for MulticastDelegate.RemoveImpl, it gets harder to estimate the complexity quickly, but it appears to be roughly O(n), where n is the number of items in the invocation list being searched (the left operand of -=), at least for cases where n > m.
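A rough way to check that claim yourself is to time removal the same way as the subscription loops above (a sketch; the numbers will of course vary by machine and runtime):

var handlers = new Action[25000];
Action act = null;
for (int i = 0; i < handlers.Length; i++)
{
    int n = i;                                   // capture so each lambda is a distinct delegate
    handlers[i] = () => Console.WriteLine(n);
    act += handlers[i];
}

var sw = Stopwatch.StartNew();
for (int i = 0; i < handlers.Length; i++)
{
    act -= handlers[i];                          // each removal scans act's invocation list
}
Console.WriteLine($"Unsubscribe - {handlers.Length}: {sw.ElapsedMilliseconds}");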
Related
I'm looking to call the same method every x seconds in my C# console app, but I also want to call this method only a certain number of times (say 5).
I need each call to run after the previous one (they can't overlap).
My biggest issue is the console app closing before everything has completed.
The current code I have works, but it is kinda messy (the while loop).
static void Main(string[] args)
{
    for (int t = 0; t < 3; t++)
    {
        InitTimer(t);
    }
}

public static void InitTimer(int t)
{
    Console.WriteLine("Init" + t);
    int x = 0;
    var timer = new System.Threading.Timer(
        e => x = MyMethod(x),
        null,
        TimeSpan.Zero,
        // delay between calls, in seconds
        TimeSpan.FromSeconds(5));
    // number of times called
    while (x < 5)
    {
    }
    timer.Dispose();
}

public static int MyMethod(int x)
{
    Console.WriteLine("Test" + x);
    // call post method
    x += 1;
    return x;
}
Is there a neater way to achieve the same functionality?
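For what it's worth, one way to get the same behaviour without the busy-wait loop is to drive the schedule with async/await instead of a Timer. This is only a sketch (it assumes C# 7.1+ for async Main), not code from the original thread:

using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        for (int t = 0; t < 3; t++)
        {
            Console.WriteLine("Init" + t);
            for (int x = 0; x < 5; x++)                    // number of times called
            {
                MyMethod(x);                               // calls never overlap
                await Task.Delay(TimeSpan.FromSeconds(5)); // delay between calls
            }
        }
        // Main only returns here, so the console app cannot exit early.
    }

    static void MyMethod(int x)
    {
        Console.WriteLine("Test" + x);
        // call post method
    }
}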
I've searched for the differences between the * operator and the Math.BigMul method and found nothing, so I decided to test their efficiency against each other. Consider the following code:
public class Program
{
    static void Main()
    {
        Stopwatch MulOperatorWatch = new Stopwatch();
        Stopwatch MulMethodWatch = new Stopwatch();

        MulOperatorWatch.Start();
        // Creates a new MulOperatorClass to perform the start method 100 times.
        for (int i = 0; i < 100; i++)
        {
            MulOperatorClass mOperator = new MulOperatorClass();
            mOperator.start();
        }
        MulOperatorWatch.Stop();

        MulMethodWatch.Start();
        for (int i = 0; i < 100; i++)
        {
            MulMethodClass mMethod = new MulMethodClass();
            mMethod.start();
        }
        MulMethodWatch.Stop();

        Console.WriteLine("Operator = " + MulOperatorWatch.ElapsedMilliseconds.ToString());
        Console.WriteLine("Method = " + MulMethodWatch.ElapsedMilliseconds.ToString());
        Console.ReadLine();
    }

    public class MulOperatorClass
    {
        public void start()
        {
            List<long> MulOperatorList = new List<long>();
            for (int i = 0; i < 15000000; i++)
            {
                MulOperatorList.Add(i * i);
            }
        }
    }

    public class MulMethodClass
    {
        public void start()
        {
            List<long> MulMethodList = new List<long>();
            for (int i = 0; i < 15000000; i++)
            {
                MulMethodList.Add(Math.BigMul(i, i));
            }
        }
    }
}
To sum it up: I've created two classes, MulMethodClass and MulOperatorClass, each with a start method that fills a List<long> with the value of i multiplied by i many times. The only difference between these methods is the use of the * operator in the operator class versus Math.BigMul in the method class.
I'm creating 100 instances of each of these classes, just to prevent an overflow of the lists (I can't create a list with 1,000,000,000 items).
I then measure the time it takes for each set of 100 instances to execute. The results are pretty peculiar: I did this about 15 times and the average results were (in milliseconds):
Operator = 20357
Method = 24579
That's a difference of roughly four seconds, which I think is a lot. I've looked at the source code of the BigMul method: it uses the * operator and practically does the exact same thing.
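For reference, the implementation in the .NET reference source is essentially just a widening cast followed by the * operator; a sketch of its shape (paraphrased, not copied verbatim):

public static long BigMul(int a, int b)
{
    // widen one operand to long so the multiplication is done in 64 bits
    return ((long)a) * b;
}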
So, to my questions:
Why does such a method even exist if it does exactly the same thing?
And if it does exactly the same thing, why is there such a large efficiency difference between the two?
I'm just curious :)
Microbenchmarking is an art. You are right that the method is around 10% slower on x86, and the same speed on x64. Note that for a fair comparison you have to multiply two longs, i.e. ((long)i) * ((long)i), because that is what BigMul does!
Now, some easy rules if you want to microbenchmark:
A) Don't allocate memory in the benchmarked code... You don't want the GC to run (you are enlarging the List<>)
B) Preallocate the memory outside the timed zone (create the List<> with the right capacity before running the code)
C) Run the methods at least once or twice before benchmarking them.
D) Try not to do anything other than what you are benchmarking, while still forcing the compiler to actually execute your code. For example, checking a condition on the result of the operation that can never be true, and throwing an exception if it is, is normally good enough to fool the compiler.
using System;
using System.Diagnostics;
using System.Reflection;

public class Program
{
    static void Main()
    {
        // Check x86 or x64
        Console.WriteLine(IntPtr.Size == 4 ? "x86" : "x64");
        // Check Debug/Release
        Console.WriteLine(IsDebug() ? "Debug, USELESS BENCHMARK" : "Release");
        // Check if debugger is attached
        Console.WriteLine(System.Diagnostics.Debugger.IsAttached ? "Debugger attached, USELESS BENCHMARK!" : "Debugger not attached");
        // High priority
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;

        Stopwatch MulOperatorWatch = new Stopwatch();
        Stopwatch MulMethodWatch = new Stopwatch();

        // Prerunning of the benchmarked methods
        MulMethodClass.start();
        MulOperatorClass.start();

        {
            // No useless object allocation here
            MulMethodWatch.Start();
            for (int i = 0; i < 100; i++)
            {
                MulMethodClass.start();
            }
            MulMethodWatch.Stop();
        }

        {
            // No useless object allocation here
            MulOperatorWatch.Start();
            for (int i = 0; i < 100; i++)
            {
                MulOperatorClass.start();
            }
            MulOperatorWatch.Stop();
        }

        Console.WriteLine("Operator = " + MulOperatorWatch.ElapsedMilliseconds.ToString());
        Console.WriteLine("Method = " + MulMethodWatch.ElapsedMilliseconds.ToString());
        Console.ReadLine();
    }

    public class MulOperatorClass
    {
        // The method is static. No useless memory allocation.
        public static void start()
        {
            for (int i = 2; i < 15000000; i++)
            {
                // This condition will always be false, but the compiler
                // won't be able to remove the code
                if (((long)i) * ((long)i) == ((long)i))
                {
                    throw new Exception();
                }
            }
        }
    }

    public class MulMethodClass
    {
        public static void start()
        {
            // The method is static. No useless memory allocation.
            for (int i = 2; i < 15000000; i++)
            {
                // This condition will always be false, but the compiler
                // won't be able to remove the code
                if (Math.BigMul(i, i) == i)
                {
                    throw new Exception();
                }
            }
        }
    }

    private static bool IsDebug()
    {
        // Taken from http://stackoverflow.com/questions/2104099/c-sharp-if-then-directives-for-debug-vs-release
        object[] customAttributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(DebuggableAttribute), false);
        if ((customAttributes != null) && (customAttributes.Length == 1))
        {
            DebuggableAttribute attribute = customAttributes[0] as DebuggableAttribute;
            return (attribute.IsJITOptimizerDisabled && attribute.IsJITTrackingEnabled);
        }
        return false;
    }
}
E) If you are really sure your code is OK, try changing the order of the tests.
F) Run your program at a higher priority.
But be happy :-)
At least one other person had the same question and wrote a blog article about it: http://reflectivecode.com/2008/10/mathbigmul-exposed/
He made the same mistakes you did.
I was optimizing my code, and I noticed that using properties (even auto properties) has a profound impact on the execution time. See the example below:
[Test]
public void GetterVsField()
{
    PropertyTest propertyTest = new PropertyTest();
    Stopwatch stopwatch = new Stopwatch();

    stopwatch.Start();
    propertyTest.LoopUsingCopy();
    Console.WriteLine("Using copy: " + stopwatch.ElapsedMilliseconds / 1000.0);

    stopwatch.Restart();
    propertyTest.LoopUsingGetter();
    Console.WriteLine("Using getter: " + stopwatch.ElapsedMilliseconds / 1000.0);

    stopwatch.Restart();
    propertyTest.LoopUsingField();
    Console.WriteLine("Using field: " + stopwatch.ElapsedMilliseconds / 1000.0);
}

public class PropertyTest
{
    public PropertyTest()
    {
        NumRepet = 100000000;
        _numRepet = NumRepet;
    }

    int NumRepet { get; set; }
    private int _numRepet;

    public int LoopUsingGetter()
    {
        int dummy = 314;
        for (int i = 0; i < NumRepet; i++)
        {
            dummy++;
        }
        return dummy;
    }

    public int LoopUsingCopy()
    {
        int numRepetCopy = NumRepet;
        int dummy = 314;
        for (int i = 0; i < numRepetCopy; i++)
        {
            dummy++;
        }
        return dummy;
    }

    public int LoopUsingField()
    {
        int dummy = 314;
        for (int i = 0; i < _numRepet; i++)
        {
            dummy++;
        }
        return dummy;
    }
}
In Release mode on my machine I get:
Using copy: 0.029
Using getter: 0.054
Using field: 0.026
which in my case is a disaster - the most critical loop just can't use any properties if I want to get maximum performance.
What am I doing wrong here? I was thinking that these would be inlined by the JIT optimizer.
Getters and setters are syntactic sugar for methods with a few special conventions (a "value" parameter in the setter, and no visible parameter list on the getter).
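For example, an auto-property like NumRepet is lowered by the compiler into a hidden backing field plus a pair of accessor methods; roughly (names are illustrative, the real backing field name is compiler-generated):

private int _numRepetBackingField;

int NumRepet
{
    get { return _numRepetBackingField; }      // compiled as a method: get_NumRepet()
    set { _numRepetBackingField = value; }     // compiled as a method: set_NumRepet(int value)
}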
According to this article, "If any of the method's formal arguments are structs, the method will not be inlined." -- ints are structs. Therefore, I think this limitation applies.
I haven't looked at the IL produced by the following code, but I did get some interesting results that I think show it working this way...
using System;
using System.Diagnostics;

public static class Program
{
    public static void Main()
    {
        PropertyTest propertyTest = new PropertyTest();
        Stopwatch stopwatch = new Stopwatch();

        stopwatch.Start();
        propertyTest.LoopUsingField();
        Console.WriteLine("Using field: " + stopwatch.ElapsedMilliseconds / 1000.0);

        stopwatch.Restart();
        propertyTest.LoopUsingBoxedGetter();
        Console.WriteLine("Using boxed getter: " + stopwatch.ElapsedMilliseconds / 1000.0);

        stopwatch.Restart();
        propertyTest.LoopUsingUnboxedGetter();
        Console.WriteLine("Using unboxed getter: " + stopwatch.ElapsedMilliseconds / 1000.0);
    }
}

public class PropertyTest
{
    public PropertyTest()
    {
        _numRepeat = 1000000000L;
        _field = 1;
        Property = 1;
        IntProperty = 1;
    }

    private long _numRepeat;
    private object _field = null;
    private object Property { get; set; }
    private int IntProperty { get; set; }

    public void LoopUsingBoxedGetter()
    {
        for (long i = 0; i < _numRepeat; i++)
        {
            var f = Property;
        }
    }

    public void LoopUsingUnboxedGetter()
    {
        for (long i = 0; i < _numRepeat; i++)
        {
            var f = IntProperty;
        }
    }

    public void LoopUsingField()
    {
        for (long i = 0; i < _numRepeat; i++)
        {
            var f = _field;
        }
    }
}
This produces, ON MY MACHINE, running OS X (a recent version of Mono), these results (in seconds):
Using field: 2.606
Using boxed getter: 2.585
Using unboxed getter: 2.71
You say you are optimizing your code, but I am curious as to how, what the functionality is supposed to be, and what the source data coming into this is (as well as its size), since this is clearly not "real" code. If you are searching a large list of data, consider using the BinarySearch functionality. It is significantly faster than, say, the .Contains() function with very large sets of data.
List<int> myList = GetOrderedList();
if (myList.BinarySearch(someValue) < 0)
// List does not contain data
Perhaps you are simply looping through data. If you are looping through data and returning a value, you may want to use the yield keyword. Additionally, consider the parallel library if you can, or manage your own threads.
This does not seem like what you want judging by the posted source, but it was very generic so I figured it was worth mentioning.
public IEnumerable<int> LoopUsingGetter()
{
    int dummy = 314;
    for (int i = 0; i < NumRepet; i++)
    {
        dummy++;
        yield return dummy;
    }
}
[ThreadStatic]
private static int dummy = 314;

public static int Dummy
{
    get
    {
        if (dummy != 314) // or whatever your condition
        {
            return dummy;
        }
        Parallel.ForEach(LoopUsingGetter(), (i) =>
        {
            // DoWork(); not ideal for the given example, but due to the generic context this may help
            dummy += i;
        });
        return dummy;
    }
}
Follow the 80/20 performance rule instead of micro-optimizing.
Write code for maintainability, instead of performance.
Perhaps Assembly language is the fastest but that does not mean we should use Assembly language for all purposes.
You are running the loop 100 million times and the total difference is roughly 0.03 seconds, i.e. a fraction of a nanosecond per iteration. Calling a function has some overhead, but in most cases it does not matter. You can trust the compiler to inline or do other advanced things.
Directly accessing the field will be problematic in 99% of cases, because you lose control over where the variable is referenced, and you will end up fixing it in too many places when you find something is wrong.
You should stop the stopwatch as soon as the loop completes; your stopwatch is still running while you are writing to the console, which can add extra time and skew your results.
[Test]
public void GetterVsField()
{
    PropertyTest propertyTest = new PropertyTest();
    Stopwatch stopwatch = new Stopwatch();

    stopwatch.Start();
    propertyTest.LoopUsingCopy();
    stopwatch.Stop();
    Console.WriteLine("Using copy: " + stopwatch.ElapsedMilliseconds / 1000.0);

    stopwatch.Reset();
    stopwatch.Start();
    propertyTest.LoopUsingGetter();
    stopwatch.Stop();
    Console.WriteLine("Using getter: " + stopwatch.ElapsedMilliseconds / 1000.0);

    stopwatch.Reset();
    stopwatch.Start();
    propertyTest.LoopUsingField();
    stopwatch.Stop();
    Console.WriteLine("Using field: " + stopwatch.ElapsedMilliseconds / 1000.0);
}
You have to check whether the "Optimize code" checkbox is checked.
If it is not checked, access to the property is still a method call.
If it is checked, the property access is inlined and the performance is the same as with direct field access, because the JITed code will be the same.
There are more restrictions on inlining in the x64 JIT compiler. More information about JIT64 inlining optimization is here:
David Broman's CLR Profiling API Blog: Tail call JIT conditions.
Please see point #3: "The caller or callee return a value type."
If your property returns a reference type, the property getter will be inlined.
This means that the property int NumRepet { get; set; } is not inlined, but object NumRepet { get; set; } would be inlined if you don't break another restriction.
The inlining optimization of the x64 JIT is poor, and this is why a new one is being introduced, as John mentions.
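If you are stuck on a JIT that refuses to inline the getter, one workaround that is sometimes suggested (not part of the original answer, and it requires .NET 4.5+) is to mark the getter with MethodImplOptions.AggressiveInlining:

using System.Runtime.CompilerServices;

public class PropertyTest
{
    private int _numRepet = 100000000;

    public int NumRepet
    {
        [MethodImpl(MethodImplOptions.AggressiveInlining)]  // a hint to the JIT, not a guarantee
        get { return _numRepet; }
    }
}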
I am currently working on a program to traverse through a list of numbers with two different functions to find the sum and a specific value. Here is the code that I have implemented
class Program
{
    static int i, sum;
    static List<int> store = new List<int>();

    static void Main(string[] args)
    {
        for (i = 0; i < 100; i++)
        {
            store.Add(i);
        }
        i = 0;
        TraverseList();
        Console.ReadLine();
    }

    static void TraverseList()
    {
        while (i < store.Count)
        {
            FindValue();
            FindSum();
            i++;
        }
        Console.WriteLine("The sum is {0}", sum);
    }

    static void FindValue()
    {
        if (store[i] == 40)
        {
            Console.WriteLine("Value is 40");
        }
    }

    static void FindSum()
    {
        sum = sum + store[i];
    }
}
I was thinking of separating FindSum and FindValue into two different functions and not calling them from TraverseList. Is there any other way of doing it, rather than duplicating the common list-traversal code in those two functions as I have done here?
class Program
{
    static int i, sum;
    static List<int> store = new List<int>();

    static void Main(string[] args)
    {
        for (i = 0; i < 100; i++)
        {
            store.Add(i);
        }
        i = 0;
        FindValue();
        i = 0;
        FindSum();
        Console.ReadLine();
    }

    static void FindValue()
    {
        while (i < store.Count)
        {
            if (store[i] == 40)
            {
                Console.WriteLine("Value is 40");
            }
            i++;
        }
    }

    static void FindSum()
    {
        while (i < store.Count)
        {
            sum = sum + store[i];
            i++;
        }
        Console.WriteLine("The sum is {0}", sum);
    }
}
To find the sum of a series of numbers, you can use the simple LINQ Sum function:
List<int> numbers = new List<int>();
int sum = numbers.Sum();
I am not sure what you mean by "find a value". If you want to check whether one of the numbers in a series is equal to a certain value, you can use the LINQ function Any:
int myValue = 40;
bool hasMyValue = numbers.Any(i => i == myValue);
This uses a lambda expression: each element in the collection is passed to the function, and the function returns true or false to indicate whether that element is a match for the Any test.
If instead you want to check for how many numbers in a sequence match a certain value you can instead use the Count function like so:
int numberOfMatches = numbers.Count(i => i == myValue);
First thing: I would use foreach instead of while (see the sketch below, after the LINQ example). Regarding the duplicated code (assuming you are not using LINQ), I think it's fine.
A taste of how LINQ can simplify your code:
var FindSum = store.Sum();
var FindValue = store.FindAll(x => x == 40);
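And the foreach suggestion, applied to the sum, could look like this (a sketch based on the original FindSum):

static void FindSum()
{
    int sum = 0;
    foreach (int value in store)    // no manual index bookkeeping
    {
        sum += value;
    }
    Console.WriteLine("The sum is {0}", sum);
}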
I cannot stress enough how bad it is to have i and sum as class members. Especially i. It will make your code very fragile, and hard to work with. Try to make each method as isolated from the rest of the code as possible.
Try something like this instead:
static void Main(string[] args)
{
    List<int> store = new List<int>();
    for (int i = 0; i < 100; i++)
        store.Add(i);

    FindValue(store);
    FindSum(store);
    Console.ReadLine();
}

static void FindValue(List<int> list)
{
    for (int i = 0; i < list.Count; i++)
    {
        if (list[i] == 40)
            Console.WriteLine("Value is 40");
    }
}

static void FindSum(List<int> list)
{
    int sum = 0;
    for (int i = 0; i < list.Count; i++)
        sum += list[i];

    Console.WriteLine("The sum is {0}", sum);
}
It is perfectly fine (and normal) to duplicate the looping, it's just a single line. You could also use a foreach here.
Also, disregard everyone telling you to use LINQ. You're obviously new to programming and you should learn the basics first.
I was making some optimizations to an algorithm that finds the smallest number that is bigger than X in a given array, and then I stumbled on a strange difference. In the code below, "ForeachUpper" finishes in 625 ms, and "ForUpper" finishes in, I believe, a few hours (insanely slower). Why?
class Teste
{
    public double Valor { get; set; }

    public Teste(double d)
    {
        Valor = d;
    }

    public override string ToString()
    {
        return "Teste: " + Valor;
    }
}

private static IEnumerable<Teste> GetTeste(double total)
{
    for (int i = 1; i <= total; i++)
    {
        yield return new Teste(i);
    }
}

static void Main(string[] args)
{
    int total = 1000 * 1000 * 30;
    double test = total / 2 + .7;
    var ieTeste = GetTeste(total).ToList();

    Console.WriteLine("------------");
    ForeachUpper(ieTeste.Select(d => d.Valor), test);
    Console.WriteLine("------------");
    ForUpper(ieTeste.Select(d => d.Valor), test);
    Console.Read();
}

private static void ForUpper(IEnumerable<double> bigList, double find)
{
    var start1 = DateTime.Now;
    double upper = 0;
    for (int i = 0; i < bigList.Count(); i++)
    {
        var toMatch = bigList.ElementAt(i);
        if (toMatch >= find)
        {
            upper = toMatch;
            break;
        }
    }
    var end1 = (DateTime.Now - start1).TotalMilliseconds;
    Console.WriteLine(end1 + " = " + upper);
}

private static void ForeachUpper(IEnumerable<double> bigList, double find)
{
    var start1 = DateTime.Now;
    double upper = 0;
    foreach (var toMatch in bigList)
    {
        if (toMatch >= find)
        {
            upper = toMatch;
            break;
        }
    }
    var end1 = (DateTime.Now - start1).TotalMilliseconds;
    Console.WriteLine(end1 + " = " + upper);
}
Thanks
IEnumerable<T> is not indexable.
The Count() and ElementAt() extension methods that you call in every iteration of your for loop are O(n); they need to loop through the collection to find the count or the nth element.
Moral: Know thy collection types.
The reason for this difference is that your for loop executes bigList.Count() on every iteration. This is really costly in your case, because it executes the Select and iterates the complete result set.
Furthermore, you are using ElementAt, which again executes the Select and iterates it up to the index you provided.
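A minimal sketch of how the for-based version could avoid the repeated enumeration, assuming it is acceptable to materialize the sequence once:

private static void ForUpper(IEnumerable<double> bigList, double find)
{
    var start1 = DateTime.Now;
    double upper = 0;
    var list = bigList.ToList();              // enumerate the Select exactly once
    for (int i = 0; i < list.Count; i++)      // Count is now an O(1) property
    {
        if (list[i] >= find)                  // the indexer is O(1) on a List<double>
        {
            upper = list[i];
            break;
        }
    }
    var end1 = (DateTime.Now - start1).TotalMilliseconds;
    Console.WriteLine(end1 + " = " + upper);
}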