I have a C# program with an interrupt that processes part of a list I'd like to have run as often as every 40 ms, but the math inside the interrupt can freeze up the program for lists with certain sizes and properties.
I'm tempted to try speeding it up by removing the TimeSpan adds and subtracts from the math and converting everything to TotalMilliseconds before performing the arithmetic rather than after. Does anyone know what the overhead is on adding and subtracting TimeSpans compared to getting the TotalMilliseconds and adding and subtracting that?
Thanks.
That would be unwise. TimeSpan.TotalMilliseconds is a property of type double with a unit of one millisecond, which is quite unrelated to the underlying structure value: Ticks is a property getter for the underlying field of type long with a unit of 100 nanoseconds. The TotalMilliseconds property getter goes through some gymnastics to convert the long to a double, and it makes sure that converting back and forth produces the same number.
That is a problem for TimeSpan: it can cover 10,000 years with a precision of 100 nanoseconds, but a double only has 15 significant digits, which is not enough to cover that many years at that kind of precision. So the TotalMilliseconds property performs rounding, not just conversion; it makes sure the returned value is accurate to one millisecond, not 100 nanoseconds, so that converting back and forth always produces the same value.
Which does work: 10,000 years x 365.4 days x 24 hours x 60 minutes x 60 seconds x 1000 milliseconds = 315,705,600,000,000 milliseconds. Count the digits: exactly 15, so exactly good enough to store in a double without loss of accuracy. Happy coincidence, isn't it?
Answering the question: if you care about speed then always use Ticks, never TotalMilliseconds. That's a very fast 64-bit integer operation. Way faster than an integer-to-float + rounding conversion.
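To make the difference concrete, here is a minimal sketch of the two styles (the variable names are illustrative, not from the question):

TimeSpan a = TimeSpan.FromMilliseconds(40);   // the 40 ms interval from the question
TimeSpan b = TimeSpan.FromSeconds(1);

// TimeSpan + TimeSpan is essentially one 64-bit integer addition on the underlying Ticks fields.
TimeSpan sumAsTimeSpan = a + b;

// Going through TotalMilliseconds forces a long-to-double conversion (with rounding) per read,
// a floating-point add, and yet another conversion if a TimeSpan is needed back.
double sumInMs = a.TotalMilliseconds + b.TotalMilliseconds;

// For raw integer math without the TimeSpan operators, use Ticks directly (1 tick = 100 ns).
long sumInTicks = a.Ticks + b.Ticks;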
Related
I am using .NET 4.6, and I am using DateTimeOffset.FromUnixTimeMilliseconds to convert nanoseconds to a DateTimeOffset.
long j = 1580122686878258600;
var X = DateTimeOffset.FromUnixTimeMilliseconds(Convert.ToInt64(j * 0.000001));
I am storing the nanoseconds as a long, but I still have to convert to Int64 while multiplying by 0.000001 to turn nanoseconds into milliseconds.
Is there a better way?
Yes - if you don't want to convert the result back to (long), you can divide by 1000000 instead of multiplying by 0.000001.
But if you need to multiply, then no, you must convert the result of multiplying a long and a double back to a long in this case. First, the literal 0.000001 is of type double. Second, the compiler implicitly converts the long to double for the multiplication between these two types, and the result is a double as well. The conversion back to long has to be explicit because it loses precision (the fractional part is discarded). The method DateTimeOffset.FromUnixTimeMilliseconds() only accepts a single long parameter (long and Int64 are the same data type; long is just an alias for Int64), so you have to convert your result back.
In the case of dividing by 1000000, dividing one long value by another still performs the division, but the result is a long and any fractional part is truncated.
In both cases, you may want to consider the effect of rounding and precision loss. You get a different value if you use a nanosecond value of 1580122686878999999: the long-times-double multiplication (i.e. (long)(1580122686878999999 * 0.000001)) results in 1580122686879, but with long division you instead get 1580122686878.
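A small sketch of that difference, reusing the value from the example above:

long nanos = 1580122686878999999;
Console.WriteLine((long)(nanos * 0.000001));   // 1580122686879 - double multiplication, then truncating cast
Console.WriteLine(nanos / 1000000);            // 1580122686878 - integer division simply drops the .999999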
I have some side comments on the implementation, as well, offering some alternatives:
If you don't like the Convert.ToInt64() notation/call itself, you can use a standard cast (i.e. (long)(j * 0.000001)). Alternatively, you can construct the DateTimeOffset from "ticks", which you can get from a TimeSpan struct; TimeSpan has a FromMilliseconds() method that accepts a double (e.g. TimeSpan.FromMilliseconds(j * 0.000001).Ticks). The cast seems to be the most straightforward and concise code, though.
Further, expanding on the "ticks" idea, the best solution might be to divide the nanoseconds down to "ticks" directly: a tick is 100 nanoseconds, so j / 100 gives you tick precision, which is finer than milliseconds. Keep in mind that the DateTimeOffset(long ticks, TimeSpan offset) constructor counts ticks from 0001-01-01 rather than from the Unix epoch, so the Unix-epoch offset has to be added, as in the sketch below. I only offer this with the thought that you may want the most precision possible from the initial nanoseconds value.
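A minimal sketch of the tick-precision route, assuming j holds Unix-epoch nanoseconds as in the question:

long j = 1580122686878258600;                                    // Unix nanoseconds, from the question
var unixEpoch = new DateTimeOffset(1970, 1, 1, 0, 0, 0, TimeSpan.Zero);

// 1 tick = 100 ns, so integer division by 100 keeps all the precision the input carries.
DateTimeOffset precise = unixEpoch.AddTicks(j / 100);

// For comparison, the millisecond-precision version using integer division.
DateTimeOffset toMilliseconds = DateTimeOffset.FromUnixTimeMilliseconds(j / 1000000);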
Hello, I'm currently following the Computing with C# and the .NET Framework book and I'm having difficulty with one of the exercises, which is:
Write a C# program to make change. Enter the cost of an item that is less than one dollar. Output
the coins given as change, using quarters, dimes, nickels, and pennies. Use the fewest coins
possible. For example, if the item cost 17 cents, the change would be three quarters, one nickel,
and three pennies
Since I'm still trying to grasp C# programming, the best method I came up with is using a while loop.
while (costOfItem >= 0.50)
{
    costOfItem -= 0.50;
    fiftyPence++;
}
I have one of those loops for each of the coins: 20, 10, 5, etc.
I'm checking if the amount is greater than or equal to 50 pence; if so, I subtract 50 pence from the amount given by the user and add 1 to the fiftyPence variable.
Then it moves on to the next while loop, one for each coin. The problem is that somewhere along the line one of the loops takes away, let's say, 20 pence, and costOfItem becomes something like "0.1999999999999". It then never drops down to 0, which it should in order to give the correct amount of change.
Any help is appreciated; please don't suggest overly complex procedures that I haven't covered yet.
Never use double or float for money operations. Use Decimal.
For all other problems of calculation accuracy you have to use an epsilon-style ("Double Epsilon") comparison - checking equality, greater than, less than, less than or equal to, and greater than or equal to within a small tolerance rather than exactly.
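A minimal sketch of that kind of comparison; the tolerance value here is arbitrary and should be chosen to suit your data:

double a = 0.1 + 0.1 + 0.1;
double b = 0.3;
double tolerance = 1e-9;                          // illustrative tolerance, not a prescribed constant
Console.WriteLine(a == b);                        // False - exact comparison trips over rounding
Console.WriteLine(Math.Abs(a - b) < tolerance);   // True  - "equal within tolerance"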
If you do the calculation in cents, you can use integers, and then you don't get into floating-point rounding problems.
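A minimal sketch of that integer approach, using the US coins from the exercise (all names illustrative):

int changeInCents = 83;                              // e.g. a dollar minus a 17-cent item
int[] coinValues = { 25, 10, 5, 1 };                 // quarters, dimes, nickels, pennies
string[] coinNames = { "quarters", "dimes", "nickels", "pennies" };

for (int i = 0; i < coinValues.Length; i++)
{
    int count = changeInCents / coinValues[i];       // how many of this coin fit
    changeInCents %= coinValues[i];                  // what is still owed
    Console.WriteLine(count + " " + coinNames[i]);   // e.g. "3 quarters" (pluralisation left simple)
}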
Sounds to me like you are using float or double as the datatype for costOfItem.
float and double store their values in binary. But not all decimal values have an exact representation in binary. Therefore, a very close approximation is stored. This is why you get values like "0.1999999999999".
For money calculations, you should always use decimal, since it avoids those small inaccuracies.
You can read more about the difference in Jon Skeet's awesome answer to Difference between Decimal, Float and Double in .NET?
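A tiny demonstration of the difference:

Console.WriteLine(0.3 - 0.1 == 0.2);      // False - double works with binary approximations
Console.WriteLine(0.3m - 0.1m == 0.2m);   // True  - decimal represents these values exactly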
Thank you all for the fast replies. The issue was that I used double instead of decimal; the links provided have people explaining why it's wrong to do so in some cases. Double should be avoided in this kind of arithmetic since some decimal fractions do not have an exact binary representation, so instead of 0.23 it gives 0.2299999999999. Changing the variable to decimal fixed my issue.
I'm trying to create a clock for my game. My hours and seconds are both float values, so I am using Math.Round to round them off to the nearest whole number. The problem is that the Hours and Seconds variables aren't changing at all. Am I using Math.Round wrong?
public void Update()
{
    Hours = (float)Math.Round(Hours, 0);
    ClockTime = Hours + ":" + Seconds;
    if (Hours >= 24)
        Hours = 0;
    if (Seconds >= 60)
        Seconds = 0;
}
In my update method for my day/night class.
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
clock.Hours += (float)elapsed;
clock.Update();
When I print the numbers on the screen, nothing is changing. If I take away the (float) cast on the Math.Round call, I get an error saying it cannot convert double to float.
Don't use floating point in this case; there's absolutely no reason for an hour, minute or second to be non-integral.
What's almost certainly happening is that you're ending up with a float value like 59.9999 despite the fact you think you're rounding it.
There are real dangers in assuming floating point values have more precision than they actually do.
If you hold your number of seconds in an unsigned integral 32-bit type, you can represent elapsed time from now until about the year 2150 AD, should anyone still be playing your game at that point :-)
Then you simply use integer calculations to work out hours and seconds (assuming you're not interested in minutes, as seems to be the case), pseudo-code such as:
hours = elapsed_secs / 3600
secs = elapsed_secs % 3600
print hours ":" secs
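A rough C# equivalent of that pseudo-code (a sketch; elapsedSecs is an assumed name for the running total of whole seconds):

uint elapsedSecs = 7384;                   // total whole seconds of game time so far
uint hours = elapsedSecs / 3600;           // integer division - no rounding surprises
uint secs = elapsedSecs % 3600;            // remainder within the current hour
Console.WriteLine(hours + ":" + secs);     // prints "2:184"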
Beyond that advice, what you're doing seems a tad strange. You are adding an elapsed seconds field (which I assume you've checked isn't always set to zero) to the hours variable. That's going to make gameplay a little difficult, as time speeds by at three and a half thousand times its normal rate.
Actually, you should use DateTime to track your time and use the DateTime properties to get the hours and seconds correctly, instead of trying it yourself with float for seconds and hours. DateTime is long-based and supports everything from fractions of a millisecond to millennia, and of course seconds. It has all the functions built in to add milliseconds or years or seconds or ... correctly, which is actually rather difficult to do yourself.
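A minimal sketch along those lines, using a TimeSpan (built on the same long tick representation) rather than separate float fields; clockTime is an assumed field name, while gameTime.ElapsedGameTime comes from the question's update code:

// Field on the clock class:
TimeSpan clockTime = TimeSpan.Zero;

// Inside Update:
clockTime += gameTime.ElapsedGameTime;     // ElapsedGameTime is already a TimeSpan
int hours = clockTime.Hours;               // 0-23; whole days spill over into .Days
int seconds = clockTime.Seconds;           // 0-59
string clockText = hours + ":" + seconds;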
I am trying to calculate a video framerate in a program. To do this I take
DateTime.Now
at the beginning of a stream, and then again after every frame whilst also incrementing a framecounter.
Then I calculate the FPS like so:
int fps = (int)(frames / (TimeSpan.FromTicks(CurrentTime.Ticks).Seconds - TimeSpan.FromTicks(StartTime.Ticks).Seconds));
The problem is that I occasionally get a negative number out, meaning the start time must be later than the current time. How can this be the case? Does anyone know enough about these functions to explain?
Seconds gives you the seconds component of the TimeSpan, not the total duration of the TimeSpan expressed in seconds. This means that Seconds will never be greater than 59.
Use TotalSeconds instead
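For example (a sketch reusing the names from the question):

double elapsedSeconds = (CurrentTime - StartTime).TotalSeconds;   // the whole duration, not just the seconds component
int fps = (int)(frames / elapsedSeconds);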
You should consider using Stopwatch for such needs; it has much better precision.
The DateTime functions are probably not precise enough for your needs; you may want to look into performance counters instead. I think the Stopwatch class is what you're looking for: System.Diagnostics.Stopwatch uses the QueryPerformanceFrequency and QueryPerformanceCounter functions for its timing.
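A minimal sketch of the Stopwatch approach; frames stands in for the frame counter from the question:

using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();      // high-resolution timer, starts immediately

// ... increment frames once per frame, then whenever the rate is needed:
int fps = (int)(frames / stopwatch.Elapsed.TotalSeconds);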
I always say that in C# a variable of type double is not suitable for money. All sorts of weird things could happen. But I can't seem to create an example to demonstrate some of these issues. Can anyone provide such an example?
(edit; this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal).
(edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only)
Very, very unsuitable. Use decimal.
double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false
(example from Jon's page here - recommended reading ;-p)
You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.
Here's a concrete example:
using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
Yes, it's unsuitable.
If I remember correctly, double has about 17 significant digits, so normally rounding errors will take place far behind the decimal point. Most financial software uses 4 decimals behind the decimal point, which leaves 13 digits to work with, so the maximum number you can work with in single operations is still very much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time you'll eventually start losing cents. Certain operations will make this worse. For example, adding large amounts to small amounts will cause a significant loss of precision.
You need fixed-point datatypes for money operations. Most people don't mind if you lose a cent here and there, but accountants aren't like most people.
edit
According to this page, http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits instead of 17.
@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, as I've seen used), which Boojum mentions, are actually better suited.
Since decimal uses a base-10 scaling factor, numbers like 0.1 can be represented exactly. In essence, the decimal type represents 0.1 as 1 / 10^1, whereas a double has to approximate it with a binary fraction - the closest double to 0.1 is 3602879701896397 / 2^55, which is slightly more than 0.1.
A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.
IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
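A short demonstration of that boundary (9007199254740992 is 2^53):

double limit = 9007199254740992;           // 2^53
Console.WriteLine(limit - 1 + 1 == limit); // True - below 2^53 the integer arithmetic is exact
Console.WriteLine(limit + 1 == limit);     // True - 2^53 + 1 has no exact double and rounds back down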
Using double when you don't know what you are doing is unsuitable.
"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.
But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
Depending on where you live, using 64-bit integers to represent cents or pennies or kopeks or whatever is the smallest unit in your country will usually work just fine. For example, 64-bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32-bit integers are usually unsuitable.
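A minimal sketch of that representation (the values are illustrative):

long priceInCents = 9813;                  // $98.13 held exactly as an integer count of cents
long total = priceInCents * 3;             // pure integer arithmetic - no representation error
Console.WriteLine(total / 100 + "." + (total % 100).ToString("00"));   // prints 294.39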
No, a double will always have rounding errors; use decimal if you're on .NET...
Actually, floating-point double is perfectly well suited to representing amounts of money, as long as you pick a suitable unit.
See http://www.idinews.com/moneyRep.html
So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.
Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.