C# Debug and Release .exe behaving differently because of Long? [closed]

After reading these questions:
Code is behaving differently in Release vs Debug Mode
C# - Inconsistent math operation result on 32-bit and 64-bit
Double precision problems on .NET
Why does this floating-point calculation give different results on different machines?
I suspect that the reason my method for determining FPS works in Debug mode but no longer works in Release mode is that I'm using long to hold time values. Here's the relevant code:
public void ActualFPS()
{
    if (Stopwatch.GetTimestamp() >= lastTicks + Stopwatch.Frequency)
    {
        actualFPS = runsThisSecond;
        lastTicks = Stopwatch.GetTimestamp();
        runsThisSecond = 0;
    }
}
runsThisSecond is incremented by one every time the method I'm tracing is called. Granted, this isn't an overly accurate way to determine FPS, but it works for what I need it to.
lastTicks is a variable of type long, and I believe Stopwatch.GetTimestamp() returns a long as well(?). Is this my problem? If so: any suggestions as to how to work around this?
EDIT: Stopwatch is using the High Resolution timer.
EDIT2: The problem has resolved itself. Without any changes to any of my code. At all. None. I have no idea what caused it to break, or to fix itself. Perhaps my computer decided to spontaneously consider my feelings?

You have a very accurate interval measurement available (GetTimestamp() - lastTicks), but you are not using it at all to compute the frame rate. You assume the interval is a second; it won't be. It will be more, by a random amount that's determined by how often you call ActualFPS(). In Release mode you'll call ActualFPS() more frequently, so the error is smaller.
Divide runsThisSecond by (GetTimestamp() - lastTicks) converted to seconds.
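For illustration, a minimal sketch of that fix, reusing the fields from the question and assuming actualFPS is widened to a double so the division isn't truncated:
public void ActualFPS()
{
    long now = Stopwatch.GetTimestamp();
    if (now >= lastTicks + Stopwatch.Frequency)
    {
        // Convert the actual elapsed ticks to seconds instead of assuming exactly 1s.
        double elapsedSeconds = (double)(now - lastTicks) / Stopwatch.Frequency;
        actualFPS = runsThisSecond / elapsedSeconds;
        lastTicks = now;
        runsThisSecond = 0;
    }
}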


Do something at a given (odd) BPM [duplicate]

No matter where I look, I can't find a good answer to this question. I'd like to have something happen at a given BPM, but the basic C# Timer class isn't working for me. Since it only measures in milliseconds, any actions performed within the timer get noticeably unsynced from the music. I've attempted to use the MicroTimer library, but with no luck! Though it can be quite fine-grained, it's resource-heavy and it doesn't have the resolution necessary. I understand I can have a function with a counter, but is there a good way to do this with Visual Studio's libraries (like the basic timer)? I hear those aren't as processor-hungry.
I doubt you'll get the kind of time resolution you're looking for in a managed language like C#.
Hell, even if you were writing in C the OS could decide another process is more important and just like that you're out of sync.
Maybe consider using the timer, but resyncing every second or half second? I'd defer to another user if they have experience in this area, but I'd at least give that a shot. Or go by the system clock ticks?
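To make the resync idea concrete, here is a rough sketch that computes every beat's due time from Stopwatch, so a coarse sleep can wake late without the error ever accumulating (the 130 BPM tempo and the OnBeat callback are placeholders of mine, not from the question):
using System;
using System.Diagnostics;
using System.Threading;

class BeatLoop
{
    static void Main()
    {
        const double bpm = 130.0;  // hypothetical tempo
        double ticksPerBeat = Stopwatch.Frequency * 60.0 / bpm;
        long start = Stopwatch.GetTimestamp();

        for (long beat = 1; ; beat++)
        {
            // Schedule against the absolute start time, never the previous tick,
            // so sleep jitter doesn't accumulate into drift over a long song.
            long due = start + (long)(beat * ticksPerBeat);
            long wait = due - Stopwatch.GetTimestamp();
            if (wait > 0)
                Thread.Sleep(TimeSpan.FromSeconds((double)wait / Stopwatch.Frequency));
            // OnBeat();  // the BPM-synced work goes here
        }
    }
}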

Simple calculation [closed]

In Excel I have a simple calculation:
total = £103,000
percent = 2.14%
charges = £2,199.05 (Excel formula: (total * percent) / 100)
In C# I can't get this to calculate correctly:
double percent = 2.14;
double total = 103000;
double charges = (total * percent) / 100;
returns £2,204.20
I'm sure there is some rounding going on somewhere which is making the calculation incorrect.
I wouldn't expect the spreadsheet to be incorrect, as it was provided by a financial advisor/expert!
I've uploaded a version of the spreadsheet here:
See Page/Tab 2 for the calculations; cell K20 is where the charges appear.
I did the algebra and the real value is 2.135%.
Examining the spreadsheet provided via Google Docs confirms that the actual percentage is 2.135%. 2.14% is displayed due to format settings.
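A quick check with the numbers from the question bears this out:
double total = 103000;
Console.WriteLine(total * 2.14 / 100);   // 2204.2  - the displayed, rounded percentage
Console.WriteLine(total * 2.135 / 100);  // 2199.05 - the value Excel actually stores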
You should double-check your Excel values and formula because it is giving you the wrong results. Floats or not, it shouldn't be off that much on those numbers in that calculation.
You could change it to a float for more accuracy if required, but according to my calc.exe 2,204.20 is spot on.
This is likely because Excel uses 32-bit floats for their calculations.
The answer you received in C# is correct.
If you want the Excel answer to match, you need to change the data type of the cell you are looking at.

C# Errors Need Fixing and Info [closed]

How do I fix the errors in this code?
PS3TMAPI.GetProcessList(0, out processIDs);
ulong uProcess = processIDs[0];
ProcessID = Convert.ToUInt32(uProcess);
PS3TMAPI.ProcessAttach(0, PS3TMAPI.UnitType.PPU, ProcessID);
PS3TMAPI.ProcessContinue(0, ProcessID);
Info = "The Process" + ProcessID.ToString("") + " Has Been Attached !";
For the line PS3TMAPI.GetProcessList(0, out processIDs); I'm getting "the best overloaded method match for PS3TMAPI.GetProcessList(int, out uint[]) has some invalid arguments" and "Argument 2: cannot convert from 'out processIDs' to 'out uint[]'".
For all the processIDs I'm getting "doesn't exist in the current context".
For all the ProcessID I'm getting "doesn't exist in the current context".
I'm also getting "Info doesn't exist in the current context".
Also, how do I do what's shown in this video? In the bottom-left corner, the guy presses a button and the red "not connected" turns green after it connects. I put a text box in my program to let me know if it connected successfully, and I want to do the same thing; in the video it's in the bottom left from 1:22 - 1:27: http://www.youtube.com/watch?v=uUI5IIhrj78
You need to post more (all?) of the relevant code to get any real help with this. Without more to go on, the best you'll likely get is this:
1. processIDs is not a uint[] (see point 3 below).
2. See the answer to point 1.
3. processIDs is declared elsewhere (outside this method) or not at all.
4. ProcessID is declared elsewhere (outside this method) or not at all.
5. Info is declared elsewhere (outside this method) or not at all.
You can fix several of the errors by adding
uint[] processIDs = null;
at the start. But I agree with Jon (duuh) that the question is not very clear.
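Putting that together, a minimal sketch of the question's snippet with the missing declarations added (the uint[] element type comes straight from the compiler message quoted above; the rest is unchanged and untested without the PS3TMAPI SDK):
uint[] processIDs = null;
uint ProcessID;
string Info;

PS3TMAPI.GetProcessList(0, out processIDs);
ulong uProcess = processIDs[0];
ProcessID = Convert.ToUInt32(uProcess);
PS3TMAPI.ProcessAttach(0, PS3TMAPI.UnitType.PPU, ProcessID);
PS3TMAPI.ProcessContinue(0, ProcessID);
Info = "The Process " + ProcessID + " Has Been Attached!";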

How to estimate a project's length in man-hours: Commenting Code [closed]

Does anyone have any suggestions about how to estimate how long it would take to comment and reorganize the code of an entire project?
I just started doing contract work for a company along with another programmer: he's fixing bugs and I'm reorganizing the code. I'd like to make an estimate of how long it would take me to go through the code line by line and comment and/or reorganize it as I go. It's a very large solution: it has two projects, each with multiple files.
It was suggested to me to count all the lines of code in each file, sum it all up, and double the hours to give myself padding. It was also suggested that I square the size of the code before estimating. But I don't know how long it's supposed to take to comment, say, 100 lines of code.
Any suggestions would be more than welcome. If it makes a difference, the project is made in Windows Forms using Visual Studio 2012.
Thanks!
I suggest you pick a small random sample (20 lines) and try to reorganize it.
That would give you an idea of how long it takes (if you multiply), and it won't be underestimated, since the randomness of the sample will actually make the work slightly more complicated.
You can also do it two or three times, and see if the variance is small.
Any other method that is not based on you might be less expensive in time, but will not yield results that are tailored to you.
In my opinion this method is a good investment.
First, this is an estimate. Estimates are not exact numbers; they are approximations and ranges. The estimate you have at the start may be wildly off once you start getting into it, and it will need to be refined. Take the cone of uncertainty into consideration when giving an estimate.
There exist many well-established models for software estimation. COCOMO II is one of these. I believe this particular approach has added value in that it can work from an already-known amount of code.
A web tool for COCOMO II can be found at USC. Count up the lines of code you have now. Make an approximation of how many lines of new comments you will need, how much will be able to be reused, and how much will need to be modified. Plug those numbers in. The definitions of how COCOMO II works and all of its terms can be found on the USC FTP site.
Let's say you have 10k SLOC of existing code, of which 75% can be reused and 25% will need to be modified (75% design modification, 100% code modification, ...), plus an additional 10% for commenting. There are arguments for and against tweaking the various cost drivers (changing things from 'Nominal' or leaving them as they are).
Plug this in and you get an equivalent size of 2825 SLOC, which then translates to 9.2 person-months of effort (remember, this isn't just going through the code line by line, but also making sure you have the redesign correct and testing it). Nine months is approximately 1500 work hours.
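As a sanity check, the published COCOMO II.2000 nominal constants (A = 2.94 and exponent E ≈ 1.0997 with all scale factors left at Nominal; that's my assumption, since the answer doesn't list its settings) do reproduce roughly 9.2 person-months from 2825 equivalent SLOC:
// Nominal COCOMO II effort: PM = A * Size^E, with Size in KSLOC
double ksloc = 2.825;
double personMonths = 2.94 * Math.Pow(ksloc, 1.0997);
Console.WriteLine(personMonths);  // ~9.2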
Consider getting Software Estimation: Demystifying the Black Art which goes into more aspects of what an estimate is and other techniques to do an estimate (this is an estimate by proxy and just one of many techniques).
Remember to save the data on how long it took and your estimate - over time, historical data can help refine estimates.
Further reading on Wikipedia: Cost estimation in software engineering
You might be able to find nasty bits by using some sort of code analysis that shows you the complexity of the classes.
Checking for code that is not covered by unit tests will also help you find code that is harder to refactor.
In terms of commenting the code, presumably that means reading every line and describing each method/interesting piece? Ignoring all the usual debate about this: firstly, you have my sympathy, and secondly, I would suggest picking a few classes, doing the work, and measuring how long it takes. You should be able to extrapolate from there (and add 30% afterwards).
As to reorganising, unless you already know of certain specific large-scale refactors you need, my usual answer to "how long will it take?" is "how much better do you want it to be?" The final answer, always, ends up being a time-boxed amount that is agreeable to both you and the boss/client. Make a suggestion of x days, and do as much good as you can in that time. Also, someone mentioned integration tests: if this is a viable option, then it will certainly help control the time.

What's faster - the if statement or a call of a function? [closed]

I'm writing an interesting program, and every performance hit is very painful for me.
So I'm wondering which is better: to add an extra "if" statement to reduce the number of function calls, or to avoid the "if" and accept more function calls. The function is a virtual method that overrides IEqualityComparer's Equals method; all it does is compare the size and the hash of two files.
The if statement compares the sizes of these two files. I think you get the point of this logic.
As you can see, I'm writing this program in C#. So maybe someone can answer me, because this is not the first time I've wondered what to choose. Thanks
If you really need that much performance so badly, why don't you program in assembly language?
If you are still sure you absolutely need to worry about this, first check for other optimization opportunities that have more potential (a better algorithm can make orders of magnitude more difference than any micro-optimization).
If you optimized the living shit out of everything else, the only way to be sure is to profile. Really. No matter how hard anyone of us tries to guess, they will likely underestimate the JIT.
Still, I have an opinion on this: generally speaking, branch misprediction can hurt much more than a function call, since it screws the cache. But who says it compiled down to code that is likely to blow the cache? Edit: But since it seems like you're comparing file contents for strict equality, short-circuiting in case the lengths differ can save much time. (Consider: how long does it take the filesystem to tell you the length? It likely already knows, so nearly no time at all. How long does it take you to hash a 10 MB file? VERY long, in comparison.) So if I guessed that correctly, then go for the short-circuiting, for crying out loud.
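For concreteness, a sketch of the short-circuit being described, written as an IEqualityComparer&lt;FileInfo&gt; (the FileInfo element type, the MD5 choice, and the ComputeHash helper are assumptions of mine; the question doesn't show its types):
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class FileContentComparer : IEqualityComparer<FileInfo>
{
    public bool Equals(FileInfo x, FileInfo y)
    {
        // Cheap check first: the filesystem already knows the length.
        if (x.Length != y.Length)
            return false;
        // Only hash (expensive) when the lengths match.
        return ComputeHash(x).SequenceEqual(ComputeHash(y));
    }

    public int GetHashCode(FileInfo f)
    {
        return f.Length.GetHashCode();
    }

    private static byte[] ComputeHash(FileInfo f)
    {
        using (var md5 = MD5.Create())
        using (var stream = f.OpenRead())
            return md5.ComputeHash(stream);
    }
}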
Have you tried profiling to find out? Are you sure that either of these is the bottleneck in your application?
Keep the if - it will run much faster.
It is clear that computing the hash of a file will take considerably more time than an if.
In the old days, back in the 486 and older days, when CPUs were "dumb", branching logic (e.g. an if()) would cause a pipeline and/or cache flush, which would slow things down. These days, with modern compilers and out-of-order, branch-predicting, wash-your-dishes-for-you CPUs, such overhead is minimal.
The only proper way to answer your question is: benchmark both methods and see which is faster.
Is the pain caused by the actual performance you observe while testing, or just by the fact that you think about the possibility of wasting a few cycles? If it's the second case, the only sane way to fix the problem is by working on your attitude.
The cost of a branch is very hard to predict, because modern processors use some very clever techniques to speed up execution. They store some special data structures that are used to predict the branch target. The branch is very cheap if the prediction is correct and pretty costly otherwise. The rate of incorrect predictions is low, but of course not zero. I don't think you can get a definitive answer to your question.
My guess would be that an if statement is better, but with today's advanced compilers you can never really tell. Your best bet is to try both and compare the performance.
It's really hard to know without profiling. But either way, I can tell you that your algorithms are generally going to be much more important than if vs function, and going with functions usually makes it easier to change out and update implementations much more easily, rapidly, and safely, allowing you ultimately to do more to improve the more important parts of your algorithms. And, again, the way to know how you're doing there is to profile.
The answer depends on one thing: "am I using a completely braindead compiler"
And since you're not, the answer is "it doesn't matter". The compiler and JIT'er heavily transforms your code, so what is actually executed looks nothing like the code you wrote.
For example, function calls can be inlined, eliminating all the overhead of the function call.
Therefore: write code that is easy to understand for yourself, and as a side bonus, it also becomes easier to understand for the compiler when it optimizes your code.
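To illustrate the inlining point (the attribute below is a real .NET 4.5+ hint, though the JIT always makes the final call; the SameSize helper is just an invented example):
using System.Runtime.CompilerServices;

static class SizeCheck
{
    // A tiny method like this is a prime inlining candidate, so the
    // "cost of the function call" frequently never exists at runtime.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static bool SameSize(long a, long b)
    {
        return a == b;
    }
}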
if can have a cost due to branching. The cost depends on the code run in the if case, the code run in the else case, the size of the CPU cache, and compiler decisions.
Function call can have a cost due to, well, to the cost of calling a function. This can be much bigger than for if or it can be zero (because the call was inlined - and inlining can even happen with virtual calls when the compiler could "see" which form was going to be called at compile time), or it can be in between.
Because of this, there really is no general answer to this question. Even if you profile, there is nothing to say that it won't be different on a different architecture, even with a binary copy of the assembly (because the JIT'er will be different) or with a different version of the .NET environment (and here "different version" includes service packs, hot-fixes and patches that touch on it).
