I programmed my own string matching algorithm, and I want to measure its time accurately,
to compare it with other algorithms and check whether my implementation is better.
I tried (StopWatch), but it gives a different time in each run because of the multiple processes running on Windows. I heard about (RDTSC), which can get the number of
cycles consumed, but I do not know if it gives a different cycle count in each execution too?
Please help me; can (RDTSC) give an accurate and repeatable measurement of cycles for a C# function, or is it similar to (StopWatch)? What is the best way to get the cycle count for a C# function alone, without the other running processes? And thanks a lot for any help or hints.
it gives a different time in each run, because of multiple processes running on Windows.
That is in the nature of all benchmarks.
Good benchmarks offset this by statistical means, i.e. measuring often enough to offset any side-effects from other running programs. This is the way to go. As far as precision goes, StopWatch is more than enough for benchmarks.
This requires several things (without getting into statistical details, which I’m not too good at either):
An individual measurement should last long enough to offset the imprecision introduced by the measuring method (even RDTSC isn't completely precise), and to offset calling overhead. After all, you want to measure your algorithm, not the time it takes to run the testing loop and invoke your testing method.
Enough test runs to have confidence in the result: the more data, the higher the robustness of your statistic.
Minimize external influences, in particular systematic bias. That is to say, run all your tests on the same machine under same conditions, otherwise the results cannot be compared. At all.
Furthermore, if you do multiple runs of your tests (and you should!), interleave the different methods.
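As a minimal sketch of such a harness (the names RunBenchmark, algorithm and iterations are illustrative, not from the original answer): warm up first, time many iterations in one go, and report the average per call.

static TimeSpan RunBenchmark(Action algorithm, int iterations)
{
    algorithm();                                  // warm-up run so JIT compilation is not measured
    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
        algorithm();                              // many iterations to drown out timer imprecision and call overhead
    sw.Stop();
    return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);   // average time per call
}

Call this several times per algorithm, alternating between the algorithms you are comparing, and look at the spread of the averages rather than at a single number.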
I think that to get the most accurate information you should interop with GetThreadTimes():
http://msdn.microsoft.com/en-us/library/ms683237%28v=vs.85%29.aspx
Further down in the link there is a signature for calling the function from C#.
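A rough sketch of what that interop might look like (the FILETIME values are marshalled here as 64-bit integers in 100-nanosecond units; the class and variable names are just illustrative):

using System;
using System.Runtime.InteropServices;

static class ThreadTimer
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetThreadTimes(IntPtr hThread,
        out long creationTime, out long exitTime,
        out long kernelTime, out long userTime);

    // Returns the CPU time (user + kernel) charged to the current thread while running 'work',
    // which ignores time spent in other processes.
    public static double MeasureCpuMilliseconds(Action work)
    {
        long c0, e0, k0, u0, c1, e1, k1, u1;
        GetThreadTimes(GetCurrentThread(), out c0, out e0, out k0, out u0);
        work();
        GetThreadTimes(GetCurrentThread(), out c1, out e1, out k1, out u1);
        return ((u1 - u0) + (k1 - k0)) / 10000.0;   // FILETIME units are 100 ns
    }
}

Note that GetThreadTimes is only updated at scheduler-tick granularity (typically around 15 ms), so it is useful only for work that runs long enough.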
I'm using a parallel for loop in my code to run a long running process on a large number of entities (12,000).
The process parses a string, goes through a number of input files (I've read that given the number of IO based things the benefits of threading could be questionable, but it seems to have sped things up elsewhere) and outputs a matched result.
Initially, the process goes quite quickly - however, it ends up slowing to a crawl. It's possible that it has just hit some particularly tricky input data, but this seems unlikely on closer inspection.
Within the loop, I added some debug code that prints "Started Processing: " and "Finished Processing: " when it begins/ends an iteration and then wrote a program that pairs a start and a finish, initially in order to find which ID was causing a crash.
However, looking at the number of unmatched IDs, it looks like the program is processing in excess of 400 different entities at once. Given the large amount of IO, this seems like it could be the source of the issue.
So my question(s) is(are) this(these):
Am I interpreting the unmatched IDs properly, or is there some clever stuff going on behind the scenes that I'm missing, or even something obvious?
If you agree that what I've spotted is correct, how can I limit the number it spins off and runs at once?
I realise this is perhaps a somewhat unorthodox question and may be tricky to answer given there is no code, but any help is appreciated and if there's any more info you'd like, let me know in the comments.
Without seeing some code, I can guess at the answers to your questions:
Unmatched IDs indicate to me that the thread processing that data is being de-prioritized. This could be due to IO or to the thread pool trying to optimize; however, if you are strongly IO-bound then that is most likely your issue.
I would take a look at Parallel.For, specifically using ParallelOptions.MaxDegreeOfParallelism to limit the maximum number of tasks to a reasonable number. I would suggest trial and error to determine the optimum degree of parallelism, starting around the number of processor cores you have.
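Something along these lines (a sketch only; entities and ProcessEntity are placeholders for your own collection and work):

using System;
using System.Threading.Tasks;

var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
Parallel.For(0, entities.Count, options, i =>
{
    ProcessEntity(entities[i]);   // the long-running, partly IO-bound work per entity
});

Start with Environment.ProcessorCount and adjust up or down based on measurements, since IO-bound work sometimes benefits from a different limit than CPU-bound work.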
Good luck!
Let me start by confirming that it is indeed a very bad idea to read 2 files at the same time from a hard drive (at least until the majority of HDs out there are SSDs), let alone however many your whole thing is using.
The use of parallelism serves to optimize processing using an actually parallelizable resource, which is CPU power. If your parallelized process reads from a hard drive then you're losing most of the benefit.
And even then, CPU power is not open to infinite parallelization. A normal desktop CPU can run up to about 10 threads at the same time (it depends on the model, obviously, but that's the order of magnitude).
So, two things:
first, I am going to assume that your entities use all your files, but that your files are not too big to be loaded into memory. If that's the case, you should read your files into objects (i.e. into memory), then parallelize the processing of your entities using those objects. If not, you're relying on your hard drive's cache not to reread your files every time you need them, and your hard drive's cache is far smaller than your memory (a thousand-fold).
second, you shouldn't be running Parallel.For over 12,000 items with the defaults. Parallel.For will (try to) run far more work concurrently than your roughly 10 hardware threads can handle, and that is actually worse, because of the big overhead that parallelizing creates, and the fact that your CPU will not benefit from it at all since it cannot run more than about 10 threads at a time.
You should probably use a more efficient method: the IEnumerable<T>.AsParallel() extension (which comes with .NET 4.0). At runtime it determines the optimal number of threads to run and divides your enumerable into that many batches. Basically, it does the job for you - but it creates a big overhead too, so it's only useful if the processing of one element is actually costly for the CPU.
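For illustration (a hedged sketch; entities and ProcessEntity are again placeholders), the PLINQ version looks like this, and you can still cap the concurrency explicitly:

using System;
using System.Linq;

var results = entities
    .AsParallel()
    .WithDegreeOfParallelism(Environment.ProcessorCount)   // optional explicit cap
    .Select(e => ProcessEntity(e))                          // CPU-costly work per element
    .ToList();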
From my experience, using anything parallel should always be evaluated against not using it in real-life, i.e. by actually profiling your application. Don't assume it's going to work better.
I'm curious how does a typical C# profiler work?
Are there special hooks in the virtual machine?
Is it easy to scan the byte code for function calls and inject calls to start/stop timer?
Or is it really hard and that's why people pay for tools to do this?
(As a side note, I find it a bit interesting because it's so rare - Google misses the boat completely: searching for "how does a c# profiler work?" doesn't work at all - the results are about air conditioners...)
There is a free CLR Profiler by Microsoft, version 4.0.
https://www.microsoft.com/downloads/en/details.aspx?FamilyID=be2d842b-fdce-4600-8d32-a3cf74fda5e1
BTW, there's a nice section in the CLR Profiler doc that describes how it works in detail, on page 103. The source is included as part of the distribution.
Is it easy to scan the byte code for function calls and inject calls to start/stop timer?
Or is it really hard and that's why people pay for tools to do this?
Injecting calls is hard enough that tools are needed to do it.
Not only is it hard, it's a very indirect way to find bottlenecks.
The reason is that a bottleneck is one or a small number of statements in your code that are responsible for a good percentage of the time being spent - time that could be reduced significantly, i.e. it's not truly necessary, i.e. it's wasteful.
IF you can tell the average inclusive time of one of your routines (including IO time), and IF you can multiply it by how many times it has been called, and divide by the total time, you can tell what percent of time the routine takes.
If the percent is small (like 10%) you probably have bigger problems elsewhere.
If the percent is larger (like 20% to 99%) you could have a bottleneck inside the routine.
So now you have to hunt inside the routine for it, looking at things it calls and how much time they take. Also you want to avoid being confused by recursion (the bugaboo of call graphs).
There are profilers (such as Zoom for Linux, Shark, & others) that work on a different principle.
The principle is that there is a function call stack, and during all the time a routine is responsible for (either doing work or waiting for other routines to do work that it requested) it is on the stack.
So if it is responsible for 50% of the time (say), then that's the amount of time it is on the stack,
regardless of how many times it was called, or how much time it took per call.
Not only is the routine on the stack, but the specific lines of code costing the time are also on the stack.
You don't need to hunt for them.
Another thing you don't need is precision of measurement.
If you took 10,000 stack samples, the guilty lines would be measured at 50 +/- 0.5 percent.
If you took 100 samples, they would be measured as 50 +/- 5 percent.
If you took 10 samples, they would be measured as 50 +/- 16 percent.
In every case you find them, and that is your goal.
(And recursion doesn't matter. All it means is that a given line can appear more than once in a given stack sample.)
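(For the curious, those error bars are just the binomial standard error of a sampled proportion, \( \sqrt{p(1-p)/n} \) with \( p = 0.5 \): \( n = 10000 \) gives 0.005, \( n = 100 \) gives 0.05, and \( n = 10 \) gives roughly 0.16.)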
On this subject, there is lots of confusion. At any rate, the profilers that are most effective for finding bottlenecks are the ones that sample the stack, on wall-clock time, and report percent by line. (This is easy to see if certain myths about profiling are put in perspective.)
1) There's no such thing as "typical". People collect profile information by a variety of means: time sampling the PC, inspecting stack traces, capturing execution counts of methods/statements/compiled instructions, inserting probes in code to collect counts and optionally calling contexts to get profile data on a call-context basis. Each of these techniques might be implemented in different ways.
2) There's profiling "C#" and profiling "CLR". In the MS world, you could profile CLR and back-translate CLR instruction locations to C# code. I don't know if Mono uses the same CLR instruction set; if they did not, then you could not use the MS CLR profiler; you'd have to use a Mono IL profiler. Or, you could instrument C# source code to collect the profiling data, and then compile/run/collect that data on either MS, Mono, or somebody's C# compatible custom compiler, or C# running in embedded systems such as WinCE where space is precious and features like CLR-built-ins tend to get left out.
One way to instrument source code is to use source-to-source transformations to map the code from its initial state to code that contains data-collecting probes as well as the original program. This paper on instrumenting code to collect test coverage data shows how a program transformation system can be used to insert test coverage probes, by inserting statements that set block-specific boolean flags when a block of code is executed. A counting profiler substitutes counter-incrementing instructions for those probes. A timing profiler inserts clock-snapshot/delta computations for those probes. Our C# Profiler implements both counting and timing profiling for C# source code; it also collects call-graph data by using more sophisticated probes that record the execution path, so it can produce timing data on call graphs this way. This scheme works anywhere you can get your hands on a halfway-decent-resolution time value.
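As a rough illustration (this is not the output of any particular tool; the array names and the probe index are made up), an instrumented method with one counting probe and one timing probe might end up looking like this:

using System.Diagnostics;

static class Probes
{
    public const int Count = 1024;                      // hypothetical number of instrumented blocks
    public static readonly long[] Hits  = new long[Count];
    public static readonly long[] Ticks = new long[Count];
}

static int Square(int x)
{
    Probes.Hits[17]++;                                  // counting probe: block 17 was executed
    long t0 = Stopwatch.GetTimestamp();                 // timing probe: snapshot the clock
    int result = x * x;                                 // the original statement(s)
    Probes.Ticks[17] += Stopwatch.GetTimestamp() - t0;  // delta computation for block 17
    return result;
}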
This is a link to a lengthy article that discusses both instrumentation and sampling methods:
http://smartbear.com/support/articles/aqtime/profiling/
I am currently working on a (legacy) programme written in C++ and C#; it executes some heavyweight calculations but should be completely deterministic, i.e. the same inputs will yield the same outputs... The problem is that 2 runs (on the same computer, using the same compiled executable) produce slightly different outputs.
The application reads and writes to a SQL server database (it has unique access to the DB so nothing else should be interfering with the DB values).
The only obvious difference between runs is that they are each assigned a unique name (just a string variable).
There are no random objects within the code, and all loops run for either a pre-determined number of iterations or until a condition is met; they don't run for a certain amount of time. There is a small amount of multi-threading, which I have been assured is thread-safe, but I will check this for myself.
Are there any other obvious things that I should be looking for, which would cause this deviant behaviour?
Two ideas occur to me:
uninitialised variables.
floating point arithmetic is not associative.
The latter point can yield differences at the level of machine accuracy under multi-threading. It's much more likely to be uninitialised variables, though!
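A toy demonstration of the second point (not from the question's code):

double x = (0.1 + 0.2) + 0.3;        // ends up as 0.6000000000000001
double y = 0.1 + (0.2 + 0.3);        // ends up as 0.6
System.Console.WriteLine(x == y);    // False - the grouping (e.g. thread summation order) changes the result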
If it is C++ then another thing is memory allocation. It's possible that a value isn't being initialized somewhere and therefore taking whatever value happens to be in memory at that time.
Some possible causes come to mind:
floating point math may give slightly different results on 32-bit vs. 64-bit
some iterative algorithms may use some sort of randomness to initialize some start vectors or so
some implementations may use 3rd-party libraries preinstalled on the system - LAPACK or FFTW are candidates. These might be different versions, and that could cause it too.
Determinism is defined relative to the specified inputs.
Was your database reset between reruns? Perhaps a backup and restore of the database is in order, and your subsequent tests should be performed from fresh restores of the database. If that works, then you need to go back to the design documentation to determine if differing output is permissible depending on database input.
If the design document doesn't allow differing output depending on the database input, then your program was not built to spec.
If your program produces differing output with the same database as the input, then it's probably reading the time somewhere (perhaps to store a timestamp), in which case, it cannot be considered 100% deterministic at all.
Either way, you likely have more inputs into the algorithm than you are tracking, hence the indeterminacy.
I wish (I don't know if it's possible) to build profiling support into my code instead of using an external profiler. I have heard that there is a profiler API that is used by most profiler writers. Can that API be used to profile from within the code that is being executed? Are there any other considerations?
If you don't want to use a regular profiler, you could have your application output performance counters.
You may find this blog entry useful to get started: Link
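For example (a rough sketch; the "MyApp" category and "StringsMatched" counter names are made up), you register a custom category once, update the counter from inside your code, and watch it live in Performance Monitor:

using System.Diagnostics;

// One-time setup (needs sufficient rights): create a custom category with one counter.
if (!PerformanceCounterCategory.Exists("MyApp"))
{
    var counters = new CounterCreationDataCollection();
    counters.Add(new CounterCreationData("StringsMatched", "Number of strings matched",
                                         PerformanceCounterType.NumberOfItems64));
    PerformanceCounterCategory.Create("MyApp", "Counters published by MyApp",
                                      PerformanceCounterCategoryType.SingleInstance, counters);
}

// At runtime: open the counter writable and update it as the work happens.
var matched = new PerformanceCounter("MyApp", "StringsMatched", readOnly: false);
matched.Increment();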
The EQATEC Profiler builds an instrumented version of your app that will run and collect profiling statistics entirely by itself - you don't need to attach the profiler. By default your app will simply dump the statistics into plaintext xml-files.
This means that you can build a profiled version of your app, deploy it at your customer's site, and have them run it and send back the statistics-reports to you. No need for them to install anything special or run a profiler or anything.
Also, if you can reach your deployed app's machine via a network-connection and it allows incoming connections then you can even take snapshots of the running profiled app yourself, sitting at home with the profiler. All you need is a socket-connection - you decide the port-number yourself and the control-protocol itself is plain http, so it's pretty likely to make it past even content-filtering gateways.
The .NET framework profiler API is a COM object that intercepts calls before .NET handles them. My understanding is that it cannot be hosted in managed (C#) code.
Depending on what you want to do, you can insert Stopwatch timers to measure length of calls, or add Performance Counters to your application so that you can monitor the performance of the application from the Performance Monitor.
There's a GameDev article that discusses how to build profiling infrastructure into a C++ program. You may be able to adapt this approach to work with C#, provided objects created on the stack are freed on exit instead of being left for the garbage collector.
http://www.gamedev.net/reference/programming/features/enginuity3/
Even if you can't take the whole technique, there may be some useful ideas.
What I've done when I can't use my favorite technique is this. It's clumsy and gives low-resolution information, but it works. First, have a global stack of strings. This is in C, but you can adapt it to C#:
int nStack = 0;
char* stack[10000];
Then, on entry and exit to each routine you have source code for, push/pop the name of the routine:
void EveryFunction(){
    int iStack = nStack++; stack[iStack] = "EveryFunction";
    /* ... code inside function ... */
    nStack = iStack; stack[iStack] = NULL;
}
So now stack[0..nStack] keeps a running call stack (minus the line numbers of where functions are called from), so it's not as good as a real call stack, but better than nothing.
Now you need a way to take snapshots of it at random or pseudo-random times. Have another global variable and a routine to look at it:
time_t timeToSnap;

void CheckForSnap(){
    time_t now = time(NULL);
    if (now >= timeToSnap){
        if (now - timeToSnap > 10000) timeToSnap = now; // don't take snaps since 1970
        timeToSnap += 1; // set up the time for the next snapshot
        /* print stack to snapshot file */
    }
}
Now, sprinkle calls to CheckForSnap throughout your code, especially in the low-level routines. When the run is finished, you have a file of stack samples. You can look at those for unexpected behavior. For example, any function showing up on a significant fraction of samples has inclusive time roughly equal to that fraction.
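If you want the same thing directly in C# (a rough, single-threaded sketch; the file name and one-second snapshot interval are arbitrary choices of mine):

using System;
using System.Collections.Generic;
using System.IO;

static class PoorMansSampler
{
    static readonly List<string> callStack = new List<string>();
    static DateTime nextSnap = DateTime.MinValue;

    // Call Enter at the top of every routine you have source for; call Leave in a finally block.
    public static void Enter(string name) { callStack.Add(name); }
    public static void Leave()            { callStack.RemoveAt(callStack.Count - 1); }

    // Sprinkle calls to this into low-level routines.
    public static void CheckForSnap()
    {
        DateTime now = DateTime.Now;
        if (now >= nextSnap)
        {
            nextSnap = now.AddSeconds(1);    // next snapshot one second from now
            File.AppendAllText("snapshots.txt",
                string.Join(" <- ", callStack) + Environment.NewLine);
        }
    }
}

void EveryFunction()
{
    PoorMansSampler.Enter("EveryFunction");
    try
    {
        // ... code inside function ...
    }
    finally
    {
        PoorMansSampler.Leave();   // pop even if an exception is thrown
    }
}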
Like I said, this is better than nothing. It does have shortcomings:
It does not capture line-numbers where calls come from, so if you find a function with suspiciously large time, you need to rummage within it for the time-consuming code.
It adds significant overhead of its own, namely all the calls to time(NULL), so once you have removed all your big problems, it will be harder to find the small ones.
If your program spends significant time waiting for I/O or for user input, you will see a bunch of samples piled up after that I/O. If it's file I/O, that's useful information, but if it's user input, you will have to discard those samples, because all they say is that you take time.
It is important to understand a few things:
Contrary to popular accepted wisdom, accuracy of time measurement (and thus a large number of samples) is not important. What is important is that samples occur during the time when you are waiting for the program to do its work.
Also contrary to accepted wisdom, you are not looking for a call graph, you don't need to care about recursion, you don't need to care about how many milliseconds any routine takes or how many times it is called, and you don't need to care about the distinction between inclusive and exclusive time, or the distinction between CPU and wall-clock time. What you do need to care about is, for any routine, what percent of time it is on the stack, because that is how much time it is responsible for, in the sense that if you could somehow make that routine take no time, that is how much your total time would decrease.
I am trying to make the loading part of a C# program faster. Currently it takes like 15 seconds to load up.
At first glance, things that are done during the loading part include constructing many 3rd-party UI components, loading layout files, XMLs, DLLs, resource files, reflection, waiting for WndProc... etc.
I used something really simple to see the time some part takes,
i.e. breakpointing at a double which holds the total milliseconds of a TimeSpan which is the difference of a DateTime.Now at the start and a DateTime.Now at the end.
Trying that a few times gives me something like:
11s 13s 12s 12s 7s 11s 12s 11s 7s 13s 7s.. (Usually 12s, but 7s sometimes)
If I add SuspendLayout and BeginUpdate like hell, call things via reflection once instead of many times, and reduce some redundant redundant computation redundancy, the times are like 3s 4s 3s 4s 3s 10s 4s 4s 3s 4s 10s 3s 10s... (usually 4s, but 10s sometimes).
In both cases the times are not consistent, but look more like a bimodal distribution? It really made me unsure whether my changes to the code are actually making it faster.
So I would like to know what will cause such result.
Debug mode?
The "C# hve to compile/interpret the code on the 1st time it runs, but the following times will be faster" thing?
The waiting of WndProc message?
The reflections? PropertyInfo? Reflection.Assembly?
Loading files? XML? DLL? resource file?
UI Layouts?
(There are surely no internet/network/database access in that part)
Thanks.
Profiling by stopping in the debugger is not a reliable way to get timings, as you've discovered.
Profiling by writing times to a log works fine, although why do all this by hand when you can just launch the program in dotTrace? (Free trial, fully functional).
Another thing that works when you don't have access to a profiler is what I call the binary approach - look at what happens in the code and try to disable about half of it by commenting it out. Note the effect on the running time. If it appears significant, repeat this process with half of that half, and so on recursively until you narrow in on the most significant piece of work. The difficulty is in simulating the side effects of the missing code so that the remaining code can still work, so this is still harder than using a debugger, but it can be quicker than adding a lot of manual time logging, because the binary approach lets you zero in on the slowest place in logarithmic time.
Raymond Chen's advice is good here. When people ask him "How can I make my application start up faster?" he says "Do less stuff."
(And ALWAYS profile the release build - profiling the debug build is generally a wasted effort).
Profile it. You can use EQATEC; it's free.
Well, the best thing is to run your application through a profiler and see what the bottlenecks are. I've personally used dotTrace, there are plenty of others you can find on the web.
Debug mode turns off a lot of JIT optimizations, so apps will run a lot slower than release builds. Whatever the mode, JITting has to happen, so I'd discount that as a significant factor. Time to read files from disk can vary based on the OS's caching mechanism, and whether you're doing a cold start or a warm start.
If you have to use timers to profile, I'd suggest repeating the experiment a large number of times and taking the average.
Profiling you code is definitely the best way to identify which areas are taking the longest to run.
As for the other part of your question about the inconsistent timings: timings in a multitasking OS are inherently inconsistent, and working with managed code throws the garbage collector into the equation too. It could be that the GC is kicking in during your timing, which will obviously slow things down.
If you want to try to get a "purer" timing, try forcing a GC collection before you start your timers; that way it is less likely to kick in during your timed section. Do remember to remove the forced collections afterwards, as second-guessing when the GC should run normally results in poorer performance.
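The usual pattern looks something like this (a sketch for the timing build only):

GC.Collect();                      // force a full collection so one is unlikely to run mid-measurement
GC.WaitForPendingFinalizers();     // let any pending finalizers complete
GC.Collect();                      // collect whatever the finalizers released
var sw = System.Diagnostics.Stopwatch.StartNew();
// ... the code being timed ...
sw.Stop();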
Apart from the obvious (profiling), which will tell you precisely where time is being spent, there are some other points that spring to mind:
To get reasonable timing results with the approach you are using, run a release build of your program, and have it dump the timing results to a file (e.g. with Trace.WriteLine). Timing a debug version will give you spurious results. When running the timing tests, quit all other applications (including your debugger) to minimise the load on your computer and get more consistent results. Run the program many times and look at the average timings. Finally, bear in mind that Windows caches a lot of stuff, so the first run will be slow and subsequent runs will be much faster. This will at least give you a more consistent basis to tell whether your improvements are making a significant difference.
Don't try and optimise code that shouldn't be run in the first place - Can you defer any of the init tasks? You may find that some of the work can simply be removed from the init sequence. e.g. if you are loading a data file, check whether it is needed immediately - if not, then you could load it the first time it is needed instead of during program startup.