Performance of very big classes - C#

I have a class called "sim" (a simulation) and a method that advances the simulation:
class sim
{
    void Step()
    {
        // ...
    }

    // ...other methods (20+)
}
The sim class is instantiated only once during the program.
The Step method is called on the order of millions of times during a run.
The Step method uses a lot of local variables (100+). None of those locals are used in other methods.
For performance, is it better to make those local variables members of the class, or to keep them local to Step()?

It depends. What kinds of variables are they: primitive types or objects? If the latter, the generated code is still going to chase their references. If the former, it depends on the order you access them in and what CPU you are targeting. Optimizing the layout of variables should come pretty far down the list of performance-tuning activities, especially in C#, where you depend on what machine code your IL gets translated into.
As usual with optimizations: first measure performance to identify bottlenecks. Then consider what you can do to remove them. Until you know that you need to do something like that, just write your code as clearly as possible: don't expose local variables unnecessarily by lifting them into the class level. And do consider splitting that Step() method: a large method is hard to understand and therefore even harder to optimize.
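For instance, a minimal sketch of keeping step-local state local while splitting the work into helpers (all names here are hypothetical):

class Sim
{
    // State that genuinely persists between steps stays a field.
    double _time;

    public void Step()
    {
        // Locals used only within one step stay local; pass them to
        // helpers as parameters or return values instead of promoting
        // them to fields.
        double force = ComputeForces();
        Integrate(force);
        _time += 0.01;
    }

    private double ComputeForces() { /* ... */ return 0.0; }

    private void Integrate(double force) { /* ... */ }
}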

As a general rule, you should minimise the scope of variables and only increase the scope if it proves absolutely necessary. Converting locals to member variables is a poor design choice, and thus needs a very strong justification.
Also note that local variables only have a cost if they require non-trivial initialization. A local with a no-op constructor, or no constructor call at all, has essentially no setup cost, so widening its scope would gain nothing.

If some of the variables contain objects that can be reused in between multiple method calls, you'd skip the overhead of disposing the old instance and creating a new instance with each method call. For example, lookups in a dependency injection container that won't change in between method calls could be 'cached' in a field.
But aside from that, you probably won't get much of a performance increase. You could give it a try and use a profiler to measure your code performance. A profiler may also help you to identify other bottlenecks in your code.
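As a sketch of the kind of caching meant above (the IServiceProvider lookup and the IRandomSource interface are just illustrative stand-ins):

class Simulation
{
    // Resolved once in the constructor and reused across millions of
    // Step() calls, instead of being looked up inside Step() each time.
    private readonly IRandomSource _random;

    public Simulation(IServiceProvider services)
    {
        _random = (IRandomSource)services.GetService(typeof(IRandomSource));
    }

    public void Step()
    {
        double sample = _random.NextDouble();
        // ... locals that don't outlive the call stay local ...
    }
}

interface IRandomSource { double NextDouble(); }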

Related

Do function calls for small methods consume more memory in C#?

I have one question. Instead of writing one big method (containing a lot of business logic), I prefer to divide it into small methods and call them from one method, because that looks neater to me and is easier to maintain. But my team lead said, "Don't write small methods and call them from one, because calling small methods consumes more memory." Is that correct?
Please suggest what I should do in this case, and once again thank you for your valuable time.
There are many factors that come into play here. More context of your project would be required to give any strict conclusions.
Generally speaking though, C#, VB and managed languages in general were devised to prioritize developer productivity over performance. In that light, worrying about method call memory consumption seems questionable.
Additionally, IL-based languages (C#, VB, ...) use a JIT that compiles the intermediate code to CPU-specific machine code at runtime. The JIT's unit of work is a method. The bigger the method, the fewer optimizations the JIT can do, so a big method may yield worse performance than many small methods doing the same work. In addition, the JIT can perform an optimization called inlining, where a small method's code is generated directly inside its caller, eliding the function call altogether.
A function call takes very little memory by C#/VB standards. Unless you're working in a very constrained environment (e.g. embedded), such an optimization doesn't really make sense, especially when it isn't backed by any reasonable arguments.
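As an illustration (a sketch only; whether inlining actually happens is up to the JIT), a small helper like this is a typical inlining candidate, and the attribute is merely a hint:

using System.Runtime.CompilerServices;

static class MathHelpers
{
    // Small, non-virtual methods like this are typical inlining
    // candidates; AggressiveInlining is only a hint to the JIT.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static int Square(int x) => x * x;
}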
You are both mistaken.
OOP is built on the concept of divide and conquer, so you should divide your method into small methods for the sake of reusability and maintainability.
As for the memory consumed, splitting logic into small methods does not meaningfully increase it, although every method you create does carry a small amount of metadata and call overhead.
So yes, divide the work into small methods where it makes sense, keeping resources and shared variables in mind.

Are static methods always held in memory?

My whole development team thinks static methods are a terrible thing to use.
I really don't see any disadvantages in some cases. When I needed a stateless method before, I always used static methods for that purpose.
I agree with some of their points, e.g. I know they are quite hard to test (although it's not impossible).
What I don't get is their claim that static methods are always held in memory and inflate the baseline memory usage. So, if you are using 100 static methods in your program, all of them are supposedly loaded into memory at startup and fill the memory unnecessarily. Furthermore, static methods supposedly increase the risk of memory leaks.
Is that true?
It's quite inconvenient to have to create a new instance of a class just to call a method. But that's how they do it right now: they create an instance in the middle of a method and call a method on it that could just as well be static.
There is no distinction between static and instance methods as far as memory is concerned. Instance methods only have an extra argument, this; everything else is exactly the same. That is also basically why extension methods were easy to add to C#: all that was needed was syntax to expose that hidden this argument.
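A small sketch of that equivalence (hypothetical types): the instance method and the static/extension method below compile to essentially the same thing, with the receiver passed as the first argument.

class Counter
{
    public int Value;

    // Instance method: 'this' is an implicit first argument.
    public int Doubled() => Value * 2;
}

static class CounterExtensions
{
    // Extension method: the same hidden argument made explicit.
    public static int DoubledEx(this Counter c) => c.Value * 2;
}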
Methods occupy space for their machine code, the actual code that the processor executes, plus a table that describes where the method stores object references, which helps the garbage collector discover object roots held in local variables and CPU registers. This space is taken from the "loader heap", an internal data structure the CLR creates for each AppDomain. That happens just once, when the method first executes (just in time). Releasing that space requires unloading the AppDomain. Static variables are also allocated in the loader heap.
Do not throw away the big advantage of static methods. They can greatly improve the readability and maintainability of code. Their signature is a contract: they cannot touch instance state, so they tend to have very few side effects, which makes it really easy to reason about what they do. However, if they make you add static variables then they do the exact opposite; global variables are evil.
It's quite inconvenient to have to create a new instance of a class just to call a method. But that's how they do it right now
They are being ridiculous, to be blunt.
As commenter Groo has pointed out, at the native compiled level, an instance method isn't even all that different from a static method. It's just that there's an implicit parameter being passed to the instance method, while with the static method "what you see is what you get".
The runtime may optimize access to a method. It may not JIT-compile the method until the first time it's executed. It may not even load the IL from your assembly into memory until the IL is actually needed. But it can perform these optimizations with static and instance methods equally well.
In fact, forcing all methods to be instance methods is worse than using static methods, because it means that for some methods you are arbitrarily creating an otherwise useless object. While the runtime may be able to detect that the object reference is unused and give it a minimal lifespan, it can't avoid allocating the object altogether; even a degenerate object takes up some memory while it's alive, and it adds to the cost of garbage collection.
And beyond all that, let's suppose for a moment your colleagues were correct. What would you have gained? Any measurable performance difference at all? Doubtful. Let's face it: managed code, and especially C#, has the potential to hide any number of performance-draining implementation details from us. The main reason we use a managed code language like C# is that we gain so much in productivity and code-correctness, that these possible inefficiencies are well worth it. Most of the time, and especially on modern computers, they are practically invisible.
As it happens, your colleagues are not correct, and there is no benefit whatsoever in turning methods that could be static into instance methods. But even if that weren't the case, spending time writing obfuscated, unexpressive code to avoid an unmeasured, unproven performance cost is a waste. (And I'm sure no one bothered to compare the actual performance difference, because if they had, they'd have found no improvement from eliminating all the static methods.)

Should variables be reused to optimize resource utilization?

I am using Microsoft Visual C# 2010. I have several methods that use a large bitmap for local processing, and each method can be called several times. I can declare a global (class-level) variable and reuse it:
Bitmap workPic, editPic;
...
void Method1() {
    workPic = new Bitmap(editPic);
    ...
}
void Method2() {
    workPic = new Bitmap(editPic.Width * 2, editPic.Height * 2);
    ...
}
or declare a local variable in each method:
Bitmap editPic;
...
void Method1() {
    Bitmap workPic = new Bitmap(editPic);
    ...
}
void Method2() {
    Bitmap workPic = new Bitmap(editPic.Width * 2, editPic.Height * 2);
    ...
}
The second way is better for code clarity (local variables for local use). Is there a difference in terms of resource utilization?
If you intend to keep the memory allocated so that you can use workPic again after the method returns, you should make it a class-level field. If not, let it go out of scope so the memory can be freed (always a good idea).
Allocating one variable doesn't matter much to the framework's memory manager. Only if you recreate an object inside a tight loop might you benefit from reusing a variable. With value types you would actually reuse the same memory; with reference types only the reference is reused, not the object it points to, so there is not much benefit to be had there.
Note that it is very important to Dispose your workPic, since otherwise you leak the unmanaged memory behind Bitmap. Preferably use a using block.
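For example, a sketch of the local-variable version with deterministic disposal (referring to the fields from the question):

void Method1() {
    // 'using' guarantees the unmanaged GDI+ resources behind the
    // Bitmap are released when the block exits, even on exceptions.
    using (Bitmap workPic = new Bitmap(editPic)) {
        // ... process workPic ...
    }
}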
Why Global Variables Should Be Avoided When Unnecessary
Non-locality -- Source code is easiest to understand when the scope of its individual elements is limited. Global variables can be read or modified by any part of the program, making it difficult to remember or reason about every possible use.
No Access Control or Constraint Checking -- A global variable can be get or set by any part of the program, and any rules regarding its use can be easily broken or forgotten. (In other words, get/set accessors are generally preferable over direct data access, and this is even more so for global data.) By extension, the lack of access control greatly hinders achieving security in situations where you may wish to run untrusted code (such as working with 3rd party plugins).
Implicit coupling -- A program with many global variables often has tight couplings between some of those variables, and couplings between variables and functions. Grouping coupled items into cohesive units usually leads to better programs.
Concurrency issues -- If globals can be accessed by multiple threads of execution, synchronization is necessary (and too often neglected). When dynamically linking modules with globals, the composed system might not be thread-safe even if the two independent modules tested in dozens of different contexts were safe.
Namespace pollution -- Global names are available everywhere. You may unknowingly end up using a global when you think you are using a local (by misspelling or forgetting to declare the local) or vice versa. Also, if you ever have to link together modules that have the same global variable names, if you are lucky, you will get linking errors. If you are unlucky, the linker will simply treat all uses of the same name as the same object.
Memory allocation issues -- Some environments have memory allocation schemes that make allocation of globals tricky. This is especially true in languages where "constructors" have side effects other than allocation (because, in that case, you can express unsafe situations where two globals mutually depend on one another). Also, when dynamically linking modules, it can be unclear whether different libraries have their own instances of globals or whether the globals are shared.
Testing and Confinement -- Source that utilizes globals is somewhat more difficult to test because one cannot readily set up a 'clean' environment between runs. More generally, source that utilizes global services of any sort (e.g. reading and writing files or databases) that aren't explicitly provided to that source is difficult to test for the same reason. For communicating systems, the ability to test system invariants may require running more than one 'copy' of a system simultaneously, which is greatly hindered by any use of shared services - including global memory - that are not provided for sharing as part of the test.
reference: http://c2.com/cgi/wiki?GlobalVariablesAreBad
The main thing to understand here is that a field or variable only holds a reference; the memory is allocated for the object created by "new".
So in both cases every created Bitmap object eventually has to go through garbage collection.
The difference is that an object referenced only from within a method becomes eligible for collection right after the method finishes, whereas an object still referenced from a field becomes eligible only when the object containing the field does.
The only case where it makes sense to introduce a field is when the same object is reused throughout the life cycle of the host object.
In cases where you recreate the object at the beginning of the method, a local variable is definitely recommended.
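A small sketch of that difference (hypothetical class):

class Editor
{
    private Bitmap _cached; // reachable as long as the Editor instance is

    void ProcessWithField()
    {
        _cached = new Bitmap(100, 100);
        // This bitmap cannot be collected until _cached is overwritten
        // or the Editor itself becomes unreachable.
    }

    void ProcessWithLocal()
    {
        Bitmap local = new Bitmap(100, 100);
        // Eligible for collection once the method returns; dispose it
        // explicitly because Bitmap wraps unmanaged memory.
        local.Dispose();
    }
}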

Why don't all member variables need volatile for thread safety even when using Monitor? (why does the model really work?)

(I know they don't but I'm looking for the underlying reason this actually works without using volatile since there should be nothing preventing the compiler from storing a variable in a register without volatile... or is there...)
This question stems from a seeming contradiction: without volatile, the compiler can in theory optimize any variable in various ways, including keeping it in a CPU register, yet the docs say volatile is not needed when you use synchronization such as lock around the variables. In some cases, though, there seems to be no way the compiler/JIT can know whether you will use those variables in a given code path, so the suspicion is that something else is really happening here to make the memory model "work".
In this example, what prevents the compiler/JIT from optimizing _count into a register, so that the increment is done on the register rather than directly to memory (with the write to memory happening some time after the Exit call)? If _count were volatile everything would seem fine, but a lot of code is written without volatile. It makes sense that the compiler could know not to optimize _count into a register if it saw a lock or synchronization object in the method, but in this case the lock call is in another function.
Most documentation says you don't need to use volatile if you use a synchronization call like lock.
So what prevents the compiler from optimizing _count into a register and potentially updating just the register within the lock? I have a feeling that most member variables won't be optimized into registers for this exact reason, because otherwise every member variable would really need to be volatile unless the compiler could tell it shouldn't optimize (and I suspect tons of code would fail otherwise). I saw something similar when looking at C++ years ago: local function variables got stored in registers, but class member variables did not.
So the main question is: is the only reason this works without volatile that the compiler/JIT won't put class member variables in registers, which is what makes volatile unnecessary?
(Please ignore the lack of exception handling and safety in the calls, but you get the gist.)
public class MyClass
{
    object _o = new object();
    int _count = 0;

    public void Increment()
    {
        Enter();
        // ... many usages of _count here...
        _count++;
        Exit();
    }

    // Let's pretend these functions are too big to inline and even call other methods
    // that actually make the Monitor call (for example a base class that implemented these).
    private void Enter() { Monitor.Enter(_o); }
    private void Exit() { Monitor.Exit(_o); }

    // ...
}
Entering and leaving a Monitor causes a full memory fence. Thus the CLR makes sure that all writing operations before the Monitor.Enter / Monitor.Exit become visible to all other threads and that all reading operations after the method call "happen" after it. That also means that statements before the call cannot be moved after the call and vice versa.
See http://www.albahari.com/threading/part4.aspx.
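For reference, a sketch of the conventional pairing of those two fences (roughly what the C# lock statement expands to), applied to the question's fields:

public void Increment()
{
    Monitor.Enter(_o);       // acquire fence: later reads can't move before it
    try
    {
        _count++;            // operates on the latest value of _count
    }
    finally
    {
        Monitor.Exit(_o);    // release fence: the new _count is visible before the lock is released
    }
}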
The best-guess answer to this question appears to be that any variables held in CPU registers are written back to memory before any call into another function. That makes sense even from a single-threaded compiler-design viewpoint: otherwise the object might appear inconsistent when used by other functions/methods/objects.
So it may not be, as some people and articles claim, that compilers detect synchronization objects/classes and make non-volatile variables safe around their calls. (Perhaps they do when a lock or other synchronization primitive appears in the same method, but once the calls live in another method that in turn calls the synchronization primitives, probably not.) More likely, the mere fact of calling another method is enough to force the values held in CPU registers to be written back to memory, and that is why not every variable needs to be volatile.
I also suspect, as others have, that fields of a class are not optimized as aggressively, partly because of these threading concerns.
Some notes (my understanding):
Thread.MemoryBarrier() is mostly a CPU instruction to ensure that reads and writes don't cross the barrier from the CPU's perspective. (It is not directly related to values cached in registers, so it is probably not what causes registers to be flushed to memory, except insofar as it is itself a method call, which per the discussion here would likely cause that; any other method call touching the same fields would do the same.)
It is theoretically possible the JIT/compiler could take such a call into account within the same method to ensure variables are written back from registers. But the simple rule proposed here (any call into another method or class writes register-cached values back to memory) already explains the behavior. If someone wrapped the call in another method (maybe many methods deep), the compiler would be unlikely to analyze that deeply or speculate on execution. The JIT could do more, but it too is unlikely to analyze that deep, and in both cases locks/synchronization must keep working no matter what, so the simplest conservative behavior is the likely answer.
Unless someone who writes the compilers can confirm this, it's all a guess, but it's likely the best guess we have as to why volatile is not needed.
If that rule holds, synchronization objects only need to issue their own memory barriers on enter and exit to ensure the CPU's write buffers are flushed and up-to-date values can be read. That is what this site suggests under implicit memory barriers: http://www.albahari.com/threading/part4.aspx
So what prevents the compiler from optimizing _count into a register and potentially updating just the register within the lock?
There is nothing in the documentation that I am aware of that would preclude that from happening. The point is that the call to Monitor.Exit will effectively guarantee that the final value of _count is committed to memory upon completion.
It makes sense that the compiler could know not to optimize _count into a register if it saw a lock or synchronization object in the method, but in this case the lock call is in another function.
The fact that the lock is acquired and released in other methods is irrelevant from your point of view. The memory model defines a pretty rigid set of rules that must be adhered to regarding memory barrier generators. The only consequence of putting those Monitor calls in another method is that the JIT compiler will have a harder time complying with those rules. But the JIT compiler must comply; period. If the method calls get too complex or nested too deeply, then I suspect the JIT compiler punts on any heuristics it might have in this regard and says, "Forget it, I'm just not going to optimize anything!"
So the main question is: is the only reason this works without volatile that the compiler/JIT won't put class member variables in registers, which is what makes volatile unnecessary?
It works because the protocol is to acquire the lock prior to reading _count as well. If the readers do not do that then all bets are off.

If reflection is inefficient, when is it most appropriate?

I find a lot of cases where I think to myself that I could use reflection to solve a problem, but I usually don't, because I hear a lot along the lines of "don't use reflection, it's too inefficient".
Now I'm in a position where I have a problem where I can't find any other solution than to use reflection with new T(), as outlined in this question & answer.
So I'm wondering if somebody can tell me reflection's specific intended usage, and if there's a set of guidelines to indicate when it's appropriate and when it isn't?
It is often "fast enough", and if you need faster (for tight loops etc) you can do meta-programming with Expression or ILGenerator (perhaps via DynamicMethod), to make extremely fast code (including some tricks you can't do in C#).
Reflection is more commonly used for framework/library scenarios, where the library by definition knows nothing about the caller, and must work based on configuration, attributes or patterns.
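As a sketch of that meta-programming approach (the Person type is just an illustration), you can compile a property getter once with Expression and reuse the delegate:

using System;
using System.Linq.Expressions;
using System.Reflection;

class Person { public string Name { get; set; } }

static class Getters
{
    // Build a Func<object, object> that reads a property without
    // per-call reflection; compile it once and cache the delegate.
    public static Func<object, object> For(PropertyInfo prop)
    {
        var obj = Expression.Parameter(typeof(object), "obj");
        var body = Expression.Convert(
            Expression.Property(Expression.Convert(obj, prop.DeclaringType), prop),
            typeof(object));
        return Expression.Lambda<Func<object, object>>(body, obj).Compile();
    }
}

class Demo
{
    static void Main()
    {
        // Pay the reflection/compilation cost once...
        var getName = Getters.For(typeof(Person).GetProperty("Name"));
        // ...then call at near-direct-call speed inside tight loops.
        Console.WriteLine(getName(new Person { Name = "Ada" }));
    }
}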
If there's one thing that I hate hearing it's "don't use reflection, it's too inefficient".
Too inefficient for what? If you're writing a console application that's run once a month and isn't time critical, does it really matter if it takes 30 seconds instead of 28 because you're using reflection?
Guidelines for when it's inappropriate to use are ones that only you can really put together as they're heavily dependent on what you're doing and how efficient/performant alternatives are.
A useful abstraction for code efficiency is to partition it into three categories of time, each about three orders of magnitude apart.
First is human-time. There's a lot you can do when you only need to keep a person happy with the performance of your code. Humans cannot perceive the difference between code that needs 10 milliseconds or 20 milliseconds, both look instant. And a human is forgiving when a program needs 6 seconds instead of 5, roughly 3 billion machine instructions more. Common examples of programs that run at human-time are compilers and point-and-click designers. Using reflection is never a problem.
Then there is I/O-time. When your program needs to hit the disk or the network. I/O is slow, restricted by mechanical motion in the case of the disk, bandwidth and latency in the case of a network. You can always tell when I/O is the bottleneck, your program is running but it isn't driving up the CPU load much. The operating system is constantly blocking the thread, making it wait until the I/O request is complete.
Reflection operates at I/O-time. To retrieve type data, the CLR must read the assembly metadata, and when that hasn't been done before, your program will cause a page fault, requiring the operating system to read the data from disk. The upshot is that, roughly, reflection can make I/O-bound code only about twice as slow. Usually it's better than that, because after the first perf hit the metadata is cached and can be retrieved a lot quicker. Reflection is thus often an acceptable trade-off. The canonical examples are serialization and database ORMs.
Then there's machine-time. The raw performance of a CPU core is stupendous. A property getter can execute in somewhere between 0 and half a nanosecond. Compare that with, say, PropertyInfo.GetValue(): both keep the CPU busy (you'll see the core at 100%), but GetValue() costs hundreds if not thousands of machine code instructions, not counting the time needed to page in the metadata. While that's not much as an incremental cost, it builds up fast when you loop.
If you cannot classify your reflection code in the human-time or I/O-time categories then reflection is unlikely to be an appropriate substitute for regular code.
The key to keeping reflection from slowing down your program is to not use it inside a loop. If you want to read a property from an object during startup (which happens once), use reflection. If you want to read a property from a list of 10,000 objects of unknown type, use reflection to get the property getter delegate once (search term: PropertyInfo.GetGetMethod), then call the delegate 10,000 times. There are plenty of examples of this on Stack Overflow.
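A sketch of that pattern (the Item type is hypothetical):

using System;
using System.Collections.Generic;
using System.Reflection;

class Item { public string Name { get; set; } }

class Demo
{
    static void Main()
    {
        var items = new List<Item> { new Item { Name = "a" }, new Item { Name = "b" } };

        // The reflection happens once, up front.
        PropertyInfo prop = typeof(Item).GetProperty("Name");
        var getName = (Func<Item, string>)Delegate.CreateDelegate(
            typeof(Func<Item, string>), prop.GetGetMethod());

        // The loop then makes ordinary delegate calls, not reflective ones.
        foreach (Item item in items)
            Console.WriteLine(getName(item));
    }
}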
Reflection is not inefficient; it is just less efficient than direct calls. So personally I use reflection when there is no equivalent compile-time-safe alternative. IMHO the problem with reflection is not so much the efficiency as the fragility of the code: it relies on magic strings, which are very refactoring-unfriendly.
I use it for plugin architecture - looking through assemblies in the plugin folder for methods marked with a custom attribute indicating info about the plugin - and in a logging framework. The framework detects a custom attribute on the assembly itself which holds information about the author of the assembly, the project, version information, and other tags that are logged along with everything in the stack trace.
Going to give away a 'trade secret', but it's a good one. The framework allows you to tag each method or class with a 'Story ref', e.g.
[StoryRef(Ref="ImportCSV1")]
...and the idea is it would integrate into our agile project management framework: if there were any exceptions thrown within that class/method, the logging method would use reflection to check for a StoryRef attribute in the stack trace, and if so that would be logged as an exception against that story. In the PM software you could see exceptions by Story (a story is like an extreme/agile use case).
I think that's a valid use, at least! Basically, when it just seems the most neat, and appropriate way to do it, I use reflection. Nothing else really comes into it - I can't think of an occasion you'd be using reflection to make that many calls that efficiency would come into it.
So I'm wondering if somebody can tell me reflection's specific intended usage, and if there's a set of guidelines to indicate when it's appropriate and when it isn't?
A bad example of reflection is this one from Wikipedia:
//Without reflection
Foo foo = new Foo();
foo.Hello();
//With reflection
Type t = Type.GetType("FooNamespace.Foo");
object foo = Activator.CreateInstance(t);
t.InvokeMember("Hello", BindingFlags.InvokeMethod, null, foo, null);
Here, there is no advantage to using reflection: The non-reflection-using code is not only more efficient, but easier to understand.
Good uses of reflection are things like serialization and object-relational mapping, which are easy to implement if you have a list of a class's properties, but otherwise require a custom-written function for each class.
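For instance, a minimal sketch of the kind of generic code this enables (assuming simple property values and no error handling):

using System;
using System.Linq;

static class SimpleSerializer
{
    // Works for any type without a hand-written method per class,
    // because reflection enumerates the properties at runtime.
    public static string ToDebugString(object obj) =>
        string.Join(", ",
            obj.GetType()
               .GetProperties()
               .Select(p => p.Name + "=" + p.GetValue(obj, null)));
}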
