How to make software faster - C#

It seems that when I re-run my .NET application, it becomes much faster than before. Why?
Also, is there any way to make my software start up faster?
Regards

If it's the first .NET application running in your system, then the first time you run it, all the .NET libraries and the CLR have to be loaded from physical disk. The second time you run, everything will be in the file system cache, so it'll be loading it from memory. There may well be other caching effects in play beyond the file system cache, but that's the most obvious one.
The same is true of your specific application, although that's likely to be a lot smaller than the framework itself.
One option to try to bootstrap this is to have a small no-op application (e.g. a WinForms app that never actually launches a window) which runs on startup. Of course, this will slow down the rest of your startup a bit - and if the computer doesn't run any .NET applications for a long time, the framework will be ejected from the cache eventually.
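As a rough sketch of that idea (everything here is illustrative, not a recommended pattern): a minimal WinForms project whose only job is to touch the framework and exit might look like this.

    using System;
    using System.Windows.Forms;

    static class WarmupApp
    {
        [STAThread]
        static void Main()
        {
            // The form is never shown; simply creating it forces the CLR,
            // System.Windows.Forms and their dependencies to be read from disk
            // into the file system cache.
            using (var form = new Form())
            {
            }
        }
    }

Scheduling something like this at login trades a slightly slower boot for a faster first launch of your real application, as described above.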

The first time you run your .NET app, the following happens:
1) Your application, the runtime, and the framework are loaded from the hard disk (which is slow) into memory (which is much faster).
2) Your application and the associated libraries are just-in-time (JIT) compiled to native code... as needed. This native code stays in memory, and the runtime keeps a record of the code it has already compiled to native code.
3) Only in the third step is this native code actually executed by the processor.
If you don't shut down your computer and re-run your application, the following happens:
1) When the runtime encounters managed code that has already been compiled to native code by the JIT compiler, it does not recompile it. It simply executes the already-compiled native code in memory.
2) Only code that was not JIT compiled to native during the first run is now compiled from managed to native... and only if it is needed.
So on a second run of your application, two things get really fast:
1) Loading either doesn't happen at all or is far smaller than on the first run.
2) Compilation from managed to native either doesn't happen or is minimal.
That's why the second run of your application is almost always faster than the first run.
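You can see the JIT effect in isolation with a small timing sketch (DoWork here is just a made-up stand-in for any method of your own; the exact numbers will vary):

    using System;
    using System.Diagnostics;

    class JitWarmupDemo
    {
        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            DoWork();   // first call: includes JIT compiling DoWork to native code
            Console.WriteLine("First call:  " + sw.ElapsedTicks + " ticks");

            sw = Stopwatch.StartNew();
            DoWork();   // second call: the native code is already in memory
            Console.WriteLine("Second call: " + sw.ElapsedTicks + " ticks");
        }

        // Stand-in for any method of your own.
        static void DoWork()
        {
            double sum = 0;
            for (int i = 1; i < 100000; i++)
                sum += Math.Sqrt(i);
        }
    }

The first call pays the one-off compilation cost; every later call just runs the cached native code.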

This is almost certainly because the OS has loaded needed DLLs which stay in memory (unless the memory is needed elsewhere) after your application exits.
You can run your program in a special mode that just loads and exits, so that those DLLs will load up; this is a trick used by a few applications (MS Office and OpenOffice.org are two that spring to mind immediately).
Some people will run their programs at startup to make their first invocation seem faster but it's my opinion that this should be left to the user. It is their machine after all. By all means show them how they can do it (e.g., add yourprogram.exe /loadandexit to your startup folder) but leave it up to them.
I, for one, don't want every application I run slowing down my boot time.
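A minimal sketch of such a "load and exit" mode, assuming a hypothetical /loadandexit switch (the switch name and MainForm are placeholders, not a built-in .NET feature):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            // Hypothetical switch: warm the caches by loading our assemblies, then quit.
            if (args.Length > 0 &&
                args[0].Equals("/loadandexit", StringComparison.OrdinalIgnoreCase))
            {
                // Touching a WinForms type is enough to pull the framework DLLs
                // off disk; no window is ever shown.
                new Form().Dispose();
                return;
            }

            Application.EnableVisualStyles();
            Application.Run(new MainForm());   // MainForm is your real main window
        }
    }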

Why does VS2012 run identical tests at different speeds?

I'm working on a project at work where there's a performance issue with the code.
I've got some changes I think will improve performance, but no real way of gauging how my changes affect it.
I wrote a unit test that does things the way they're currently implemented, with a Stopwatch to monitor how fast the function runs. I've also written a similar unit test that does things slightly differently.
If the tests are run together, one takes 1 s to complete, the other takes 73 ms.
If the tests are run separately, they both take around 1 s to complete (yeah... that change I made didn't seem to help much).
If the tests are identical, I have the same issue: one runs faster than the other.
Is Visual Studio doing something behind the scenes to improve performance? Can I turn it off if it is?
I've tried moving tests into different files, which didn't fix the issue I'm having.
I'd like to be able to run all the tests, but have them run as if there's only one test running at a time.
My guess: it's likely down to DLL loading and JIT compiling.
1. Assembly loading.
.NET lazily loads assemblies (DLLs). If you add a reference to FooLibrary, that doesn't mean it gets loaded when your code loads.
Instead, what happens is that the first time you call a function or instantiate a class from FooLibrary, the CLR will go and load the DLL it lives in. This involves searching for it in the filesystem, possible security checks, etc.
If your code is even moderately complex, then the "first test" can often end up causing dozens of assemblies to get loaded, which obviously takes some time.
Subsequent tests appear fast because everything's already loaded.
2. JIT Compiling
Remember, your .NET assemblies don't contain code that the CPU can directly execute. The first time you call a .NET function, the CLR takes the MSIL bytecode, compiles it into executable machine code, and then runs that machine code. It does this on a per-function basis.
So, if you consider that the first time you call any function, there will be a small delay while it JIT compiles, these things can add up. This can be particularly bad if you're calling a lot of functions or initializing a big third party library (think entity framework, etc).
As above, subsequent tests appear fast, because many of the functions will have already been JIT compiled, and cached in memory.
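If you want to watch the lazy loading happen, a small sketch like this (System.Xml.XmlDocument just stands in for a type from "FooLibrary"; any referenced assembly you haven't touched yet will do) counts the loaded assemblies before and after the first use:

    using System;
    using System.Runtime.CompilerServices;

    class LazyLoadDemo
    {
        static void Main()
        {
            Console.WriteLine("Assemblies loaded before: " +
                AppDomain.CurrentDomain.GetAssemblies().Length);

            UseXmlLibrary();   // first call pulls in System.Xml.dll

            Console.WriteLine("Assemblies loaded after:  " +
                AppDomain.CurrentDomain.GetAssemblies().Length);
        }

        // Kept out of Main (and not inlined) so System.Xml isn't loaded until this
        // method is JIT compiled, i.e. on its first call.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static void UseXmlLibrary()
        {
            var doc = new System.Xml.XmlDocument();
            Console.WriteLine(doc.GetType().FullName);
        }
    }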
So, how can you get around this?
You can improve the assembly loading time by having fewer assemblies. This means fewer file searches and so on. The Microsoft .NET performance guidelines go into more detail.
Also, I believe installing them in the global assembly cache may (??) help, but I haven't tested that at all so please take it with a large grain of salt.
Installing into the GAC requires administrative permissions and is quite a heavyweight operation. You don't want to be doing it during development, as it will cause you problems (assemblies get loaded from the GAC in preference to the filesystem, so you can end up loading old copies of your code without realizing it).
You can improve the JIT time by using ngen to pre-compile your assemblies. However, like with the GAC, this requires administrative permissions and takes some time, so you do not want to do it during development either.
My advice?
Firstly, measuring performance in unit tests is not a particularly good or reliable thing to be doing. Who knows what else Visual Studio is doing in the background that may or may not affect your tests.
Once you've pulled the code you're trying to benchmark out into a standalone app, have it loop and run all the tests twice, and discard the first result :-)
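For example, a bare-bones console harness along these lines (RunScenario is a placeholder for the code you actually want to compare) runs everything twice and only reports the second, warmed-up pass:

    using System;
    using System.Diagnostics;

    class Benchmark
    {
        static void Main()
        {
            RunScenario();                 // warm-up pass: assembly loading + JIT, result discarded

            Stopwatch sw = Stopwatch.StartNew();
            RunScenario();                 // measured pass: caches warm, code already JIT'd
            sw.Stop();

            Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds + " ms");
        }

        // Placeholder for the code path under test.
        static void RunScenario()
        {
            // ... call the code you're comparing here ...
        }
    }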
"Premature optimization is the root of all evil."
If you didn't measure before, how do you know you are fixing anything now? How do you even know you had a problem that needed to be solved?
Unit tests are for operational correctness. They could be used for performance, but I would not depend on that because many other factors come into play at run-time.
Your best bet is to get a profiler (or use one that comes with VS) and start measuring.

Measure startup performance of a C# application

I noticed that sometimes a .NET 4.0 C# application takes a long time to start, without any apparent reason. How can I determine what's actually happening and which modules are loaded? I'm using a number of external assemblies. Can putting them into the GAC improve performance?
Is .NET 4 slower than .NET 2?
.NET programs have two distinct start-up behaviors. They are called cold start and warm start. The cold start is the slow one; you'll get it when no .NET program was started before, or when the program you start is large and was never run before. The operating system has to find the assembly files on disk, and they won't be available in the file system cache (RAM). That takes a while; hard disks are slow and there are a lot of files to find. A small do-nothing Winforms app has to load 51 DLLs to get started. A do-nothing WPF app weighs in at 77 DLLs.
You get a warm start when the assembly files were loaded before, not too long ago. The assembly file data now comes from RAM instead of the slow disk, that's zippedy-doodah. The only startup overhead is now the jitter.
There's little you can do about cold starts; the assemblies have to come off the disk one way or another. A fast disk makes a big difference, and SSDs are especially effective. Using ngen.exe to pre-jit an assembly actually makes the problem worse: it creates another file that needs to be found and loaded, which is the reason Microsoft recommends not prejitting small assemblies. Seeing this problem with .NET 4 programs is also to be expected; you don't yet have a lot of programs that bind to the version 4 CLR and framework assemblies, so they won't already be in the cache. This solves itself over time.
There's another way this problem automatically disappears. The Windows SuperFetch feature will start to notice that you often load the CLR and the jitted Framework assemblies and will start to pre-load them into RAM automatically. The same kind of trick that the Microsoft Office and Adobe Reader 'optimizers' use. They are also programs that have a lot of DLL dependencies. Unmanaged ones, the problem isn't specific to .NET. These optimizers are crude, they preload the DLLs when you login. Which is the 'I'm really important, screw everything else' approach to working around the problem, make sure you disable them so they don't crowd out the RAM space that SuperFetch could use.
The startup time is most likely due to the runtime JIT compiling assembly IL into machine code for execution. It can also be affected by the debugger - as another answerer has suggested.
Excluding that, I'll talk about an application run 'in the wild' on a user's machine, with no debugger, etc.
The JIT compiler in .Net 4 is, I think it's fair to say, better than in .Net 2 - so no; it's not slower.
You can improve this startup time significantly by running ngen on your application's assemblies - this pre-compiles the EXEs and DLLs into native images. However you lose some flexibility by doing this and, in general, there is not much point.
You should see the startup time of some MFC apps written in C++ - all native code, and yet depending on how they are linked they can take just as long.
It does, of course, also depend on what an application is actually doing at startup!
I don't think putting your assemblies in the GAC will boost performance.
If possible, log each statement in your Loading or Initialize events; that may help you identify which statement is actually taking time, and with that, which library is slow to load.
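A rough sketch of that kind of startup logging (the Mark calls and their labels are up to you; this is just one way to do it):

    using System;
    using System.Diagnostics;

    static class StartupLog
    {
        static readonly Stopwatch Timer = Stopwatch.StartNew();

        // Call at interesting points during startup, e.g. at the top of Main,
        // after reading configuration, at the start of the main form's Load event.
        public static void Mark(string stage)
        {
            Trace.WriteLine(Timer.ElapsedMilliseconds + " ms  " + stage);
        }

        // Dumps every assembly loaded so far, to show which external libraries
        // are being pulled in during startup.
        public static void DumpLoadedAssemblies()
        {
            foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
                Trace.WriteLine("loaded: " + asm.GetName().Name);
        }
    }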

Where is the .NET JIT-compiled code cached?

A .NET program is first compiled into MSIL code. When it is executed, the JIT compiler will compile it into native machine code.
I am wondering:
Where is this JIT-compiled machine code stored? Is it only stored in the address space of the process? But since the second startup of the program is much faster than the first time, I think this native code must have been stored on disk somewhere even after the execution has finished. But where?
Memory. It can be cached, that's the job of ngen.exe. It generates a .ni.dll version of the assembly, containing machine code and stored in the GAC. Which automatically gets loaded afterward, bypassing the JIT step.
But that has little to do with why your program starts faster the 2nd time. The 1st time you have a so-called "cold start". Which is completely dominated by the time spent on finding the DLLs on the hard drive. The second time you've got a warm start, the DLLs are already available in the file system cache.
Disks are slow. An SSD is an obvious fix.
Fwiw: this is not a problem that's exclusive to managed code. Large unmanaged programs with lots of DLLs have it too. Two canonical examples, present on most dev machines, are Microsoft Office and Acrobat Reader. They cheat. When installed, they put an "optimizer" in the Run registry key or the Startup folder. All that these optimizers do is load all the DLLs that the main program uses, then exit. This primes the file system cache; when the user subsequently uses the program, it will start up quickly since its warm start is fast.
Personally, I find this extraordinarily annoying. Because what they really do is slow down any other program that I may want to start after logging in. Which is rarely Office or Acrobat. I make it a point to delete these optimizers, repeatedly if necessary when a blasted update puts it back.
You can use this trick too, but use it responsibly please.
As others have pointed out, code is JIT'd on a per-process basis in your case, and is not cached - the speed-up you are seeing on second load is OS disk caching (i.e. in-memory caching) of the assemblies.
However, whilst there is no caching (apart from OS disk caching) in the desktop/server version of the framework, there is caching of JIT'd machine code in another version of the framework.
Of interest is what is happening in the .NET Compact Framework (NETCF, for the Windows Phone 7 release). Recent advances see sharing of some JIT'd framework code between processes, where the JIT'd code is indeed cached. This has been done primarily for better performance (load time and memory usage) on constrained devices such as mobile phones.
So, in answer to the question, there is no direct framework caching of JIT'd code in the desktop/server version of the CLR, but there is in the latest version of the Compact Framework, i.e. NETCF.
Reference: We Believe in Sharing
JIT-compiled machine code is cached in memory per method, when a method is executed for the first time. I don't think it is ever cached to disk.
You may find that the process is faster to load the second time because Windows cached (in memory) the files used by your process (DLLs, resources, etc.) on the first run. On the second run there is no need to go to disk, whereas this may have been necessary on the first run.
You could confirm this by running NGen.exe to actually pre-compile the machine code for your architecture, and compare the performance of the first and second runs. My bet is that the second run would still be faster, due to caching in the OS.
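For reference, pre-compiling with NGen is done from an elevated (administrator) command prompt, roughly like this, where MyApp.exe is a placeholder for your own executable:

    ngen install MyApp.exe
    ngen display MyApp.exe
    ngen uninstall MyApp.exe

install generates the native images, display lists them, and uninstall removes them again, so you can compare first-run and second-run timings in both configurations.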
In short, the IL is JIT-compiled for each invocation of the program and is maintained in code pages of the process address space. See Chapter 1 of Richter for great coverage of the .NET execution model.
I believe that the JIT-compiled code is never stored or swapped out of memory. The performance boost you perceive on a second execution of an assembly is due to dependent assemblies already being in memory or the disk cache.
Yes, NGEN.EXE will place a JIT compiled version of a .NET executable in the GAC, even when the MSIL version is not there. I have tried that, but to no avail.
I believe that unless the original MSIL version is also in the GAC and would be loaded from there, the JIT version in the GAC will not be used.
I also believe that on-the-fly JIT compiles (not NGEN) are never cached; they occupy process memory only.
I believe this from reading the MS doc and from various experiments. I would welcome either a confirmation or rebuttal of my assertions from those "who know".

Can the working set of a managed app be reduced by unloading unmanaged libraries with AfxFreeLibrary?

I have a managed Windows application that loads a managed C++ component that uses AfxLoadLibrary to load a third party component if present on the client machine. Once detected, I'm unloading the component using AfxFreeLibrary in an attempt to lower the working set of the managed parent application.
The call to AfxFreeLibrary is successful (verified using Process Explorer), but no memory is freed up. Is this due to the nature of a managed application, or is there a way to free up this process space?
I'm not looking for alternative ways to tackle this problem in general, since the code is already in production, rather I would like to find out if the approach of unloading is worthwhile.
It should do; you can prove it by writing a pure native app and watching the working set.
However, the working set is the size of the memory required to run the app, so if the code used by the DLL can be swapped out, then the working set will not be reduced - Windows doesn't count it as part of the working set.
If the DLL has private memory allocated to the process that cannot be swapped out, then that does count, and freeing it will reduce the working set.
So the answer is that it depends. It's not guaranteed to make any difference, and if the DLL is not used, then it will have been swapped out and isn't part of the current working set anyway. You might as well not bother unloading it, unless you like to keep things tidy.
The only way to reliably reduce the working set is to have your app use less memory. As it's a .NET app, chances are you don't have much control over that at all (as the GC will make up its own mind about how much memory is 'active' and needed in the working set).
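If you just want to see whether the unload moves the needle at all, a small managed sketch like this can log the working set around the load/unload (the DLL name is a placeholder, and plain LoadLibrary/FreeLibrary are used here instead of the MFC AfxLoadLibrary/AfxFreeLibrary wrappers):

    using System;
    using System.Runtime.InteropServices;

    class WorkingSetCheck
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern IntPtr LoadLibrary(string fileName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool FreeLibrary(IntPtr module);

        static void Main()
        {
            Console.WriteLine("Before load:  " + Environment.WorkingSet + " bytes");

            IntPtr module = LoadLibrary("ThirdParty.dll");   // placeholder DLL name
            Console.WriteLine("After load:   " + Environment.WorkingSet + " bytes");

            if (module != IntPtr.Zero)
                FreeLibrary(module);
            Console.WriteLine("After unload: " + Environment.WorkingSet + " bytes");
        }
    }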
