Nanoseconds to DateTimeOffset - c#

I am using .Net version 4.6 and I am using DateTimeOffset.FromUnixTimeMilliseconds to convert nanoseconds to DateTimeOffset.
long j = 1580122686878258600;
var X = DateTimeOffset.FromUnixTimeMilliseconds(Convert.ToInt64(j * 0.000001));
I am storing the nanoseconds as a long, but I still have to convert back to Int64 after multiplying by 0.000001 to turn the nanoseconds into milliseconds.
Is there a better way?

Yes: if you don't want to convert back to long again, you can divide by 1000000 instead of multiplying by 0.000001.
But if you need to multiply, then no, you must convert the result back to a long in this case. First, the literal 0.000001 is of type double. Second, the compiler implicitly converts the long to double for the multiplication between these two types, so the result is a double as well. There is no implicit conversion back the other way, because converting a double to a long loses precision (the decimal places). The method DateTimeOffset.FromUnixTimeMilliseconds() only accepts a single long parameter (long and Int64 are the same data type; long is just an alias for Int64), so you have to convert your result back explicitly.
In the case of dividing by 1000000, dividing two long values yields a long directly; any fractional part is simply truncated.
In both cases, you may want to consider the effect of rounding and precision loss. You get a different value if you use a nanosecond value of 1580122686878999999: multiplying long by double, (long)(1580122686878999999 * 0.000001), yields 1580122686879, while integer division of longs yields 1580122686878.
I have some side comments on the implementation, as well, offering some alternatives:
If you don't like the Convert.ToInt64() call itself, you can use a standard cast instead (i.e. (long)(j * 0.000001)). If you don't like doing this either, you can build the DateTimeOffset from a TimeSpan: TimeSpan.FromMilliseconds() accepts a double, and adding the resulting span to the Unix epoch gives the same instant (e.g. new DateTimeOffset(1970, 1, 1, 0, 0, 0, TimeSpan.Zero) + TimeSpan.FromMilliseconds(j * 0.000001)). Note that the DateTimeOffset constructor that takes raw ticks counts them from 0001-01-01, not from the Unix epoch, so the span must be added to the epoch rather than passed straight to the constructor. The cast remains the most straightforward and concise code, though.
Further, the best solution might be to divide down to "ticks" instead: a tick is 100 nanoseconds, far more precise than a millisecond. You can get ticks by dividing the nanoseconds by 100 with plain integer division (no cast needed), e.g. new DateTimeOffset(1970, 1, 1, 0, 0, 0, TimeSpan.Zero).AddTicks(j / 100). I only offer this with the thought that you may want to keep as much of the original nanosecond precision as possible.
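A minimal sketch comparing the three routes (milliseconds by integer division, milliseconds via double multiplication, and ticks), assuming the nanosecond value counts from the Unix epoch. Because the DateTimeOffset constructor that takes raw ticks counts from 0001-01-01, the tick count is added to the epoch instead:

```csharp
using System;

class NanosToDateTimeOffset
{
    // .NET 4.6 has no DateTimeOffset.UnixEpoch, so define it ourselves.
    static readonly DateTimeOffset UnixEpoch =
        new DateTimeOffset(1970, 1, 1, 0, 0, 0, TimeSpan.Zero);

    static void Main()
    {
        long j = 1580122686878258600; // nanoseconds since the Unix epoch

        // 1. Pure integer division down to milliseconds (fraction truncated).
        var fromDivision = DateTimeOffset.FromUnixTimeMilliseconds(j / 1000000);

        // 2. Double multiplication, then an explicit cast back to long.
        var fromDouble = DateTimeOffset.FromUnixTimeMilliseconds((long)(j * 0.000001));

        // 3. Integer division down to ticks (100 ns) keeps the most precision.
        var fromTicks = UnixEpoch.AddTicks(j / 100);

        Console.WriteLine(fromDivision); // millisecond precision
        Console.WriteLine(fromDouble);   // millisecond precision via double
        Console.WriteLine(fromTicks);    // same instant, sub-millisecond digits preserved
    }
}
```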

Related

TimeSpan FromHours With Decimal

Is it possible to use the TimeSpan.FromHours with values of type decimal? I need it to be as precise as possible. I know I could cast it to double but that will make the result inaccurate. I was thinking about writing an overloaded extension method but I am not sure how to do that.
Edit
A little background information: the database column uses decimal, which maps to decimal in C#. If I used double, I would have to use float in the database, which is known to be bad.
Multiply the decimal value by the number of ticks per hour (TimeSpan.TicksPerHour).
Cast the result to long, and then use TimeSpan.FromTicks.
The risks of this are:
You have a decimal representing a number of hours that can't be represented in a TimeSpan
You have a decimal which requires sub-tick precision
In neither case can you actually end up with a TimeSpan which accurately represents your value - so you'd need to look for an alternative type. (If you want to use my Noda Time library, the Duration type has a precision of nanoseconds and a range of just under 46,000 years - both positive and negative. But you'd really need to move to Noda Time everywhere.)
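The two steps above can be sketched as follows (the FromHours helper name is mine, not a framework API):

```csharp
using System;

class DecimalHours
{
    // Multiply the decimal by TicksPerHour, cast to long, then use FromTicks.
    static TimeSpan FromHours(decimal hours)
    {
        return TimeSpan.FromTicks((long)(hours * TimeSpan.TicksPerHour));
    }

    static void Main()
    {
        decimal h = 1.5m;
        Console.WriteLine(FromHours(h)); // 01:30:00
    }
}
```

As the answer notes, the cast to long silently truncates any sub-tick precision, and a decimal value outside the range of TimeSpan will throw on the cast or overflow.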
A System.TimeSpan is backed by a long, which still cannot hold enough precision for the full width of a decimal. If you need that precision, I'd suggest writing your own TimeSpan-like type.

C# decimal and double

From what I understand decimal is used for precision and is recommended for monetary calculations. Double gives better range, but less precision and is a lot faster than decimal.
What if I have time and rate, I feel like double is suited for time and decimal for rate. I can't mix the two and run calculations without casting which is yet another performance bottleneck. What's the best approach here? Just use decimal for time and rate?
Use double for both. decimal is for currency or other situations where the base-10 representation of the number is important. If you don't care about the base-10 representation of a number, don't use decimal. For things like time or rates of change of physical quantities, the base-10 representation generally doesn't matter, so decimal is not the most appropriate choice.
The important thing to realize is that decimal is still a floating-point type. It still suffers from rounding error and cannot represent certain "simple" numbers (such as 1/3). Its one advantage (and one purpose) is that it can represent decimal numbers with fewer than 29 significant digits exactly. That means numbers like 0.1 or 12345.6789. Basically any decimal you can write down on paper with fewer than 29 digits. If you have a repeating decimal or an irrational number, decimal offers no major benefits.
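A small illustration of both points: 0.1 is exact in decimal but not in double, while 1/3 repeats in both bases and defeats both types:

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // 0.1 has no exact binary representation, but an exact base-10 one.
        double dSum = 0.1d + 0.1d + 0.1d;
        decimal mSum = 0.1m + 0.1m + 0.1m;
        Console.WriteLine(dSum == 0.3d);  // False: binary rounding error
        Console.WriteLine(mSum == 0.3m);  // True: exact in base 10

        // 1/3 is a repeating fraction in base 10 too, so decimal rounds as well.
        Console.WriteLine(1m / 3m * 3m);  // 0.9999999999999999999999999999
    }
}
```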
The rule of thumb is to use the type that is more suitable to the values you will handle. This means that you should use DateTime or TimeSpan for time, unless you only care about a specific unit, like seconds, days, etc., in which case you can use any integer type. Usually for time you need precision and don't want any error due to rounding, so I wouldn't use any floating point type like float or double.
For anything related to money, of course you don't want any rounding error either, so you should really use decimal here.
Finally, only if you have some very specific requirement for absolute speed in a calculation performed millions of times, and decimal turns out not to be fast enough, would I consider another, faster type. I would first try integer values (multiplying your value by a power of 10 if you have decimals) and divide by that power of 10 only at the end. If that can't be done, only then would I consider a double. Don't optimize prematurely if you are not sure it's needed.
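A sketch of the scaled-integer approach described above, using money with two decimal places as the example:

```csharp
using System;

class ScaledIntegers
{
    static void Main()
    {
        // Amounts with 2 decimal places, stored as integer cents.
        long[] cents = { 1999, 2450, 999 }; // 19.99, 24.50, 9.99

        long totalCents = 0;
        foreach (var c in cents)
            totalCents += c; // fast integer arithmetic, no rounding error

        // Divide by the power of 10 only at the very end.
        decimal total = totalCents / 100m;
        Console.WriteLine(total); // 54.48
    }
}
```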

How time consuming is TimeSpan arithmetic compared to normal arithmetic?

I have a C# program with an interrupt that processes part of a list I'd like to have run as often as every 40 ms, but the math inside the interrupt can freeze up the program for lists with certain sizes and properties.
I'm tempted to try speeding it up by removing the TimeSpan adds and subtracts from the math, converting everything to TotalMilliseconds before performing the arithmetic rather than after. Does anyone know what the overhead is of adding and subtracting TimeSpans compared to getting the TotalMilliseconds and adding and subtracting that?
Thanks.
That would be unwise. TimeSpan.TotalMilliseconds is a property of type double with a unit of one millisecond, which is quite unrelated to the underlying structure value: Ticks is the getter for the underlying long field, with a unit of 100 nanoseconds. The TotalMilliseconds getter goes through some gymnastics to convert that long to a double.
Which is a problem for TimeSpan: it can cover 10,000 years with a precision of 100 nanoseconds, but a double has only 15 significant digits, not enough to cover that many years at that precision. So the TotalMilliseconds getter performs rounding, not just conversion; it makes sure the returned value is accurate to one millisecond, not 100 nanoseconds, so that converting back and forth always produces the same value.
And that does work: 10,000 years x 365.4 days x 24 hours x 60 minutes x 60 seconds x 1000 milliseconds = 315,705,600,000,000 milliseconds. Count the digits: exactly 15, so exactly good enough to store in a double without loss of accuracy. Happy coincidence, isn't it?
Answering the question: if you care about speed then always use Ticks, never TotalMilliseconds. That's a very fast 64-bit integer operation. Way faster than an integer-to-float + rounding conversion.
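A small sketch of the point: the TimeSpan operators are themselves plain long arithmetic on Ticks, so hand-rolled tick math is equivalent, and both avoid the long-to-double conversion that TotalMilliseconds performs:

```csharp
using System;

class TickArithmetic
{
    static void Main()
    {
        var a = TimeSpan.FromMilliseconds(40);
        var b = TimeSpan.FromMilliseconds(15);

        // The + operator is already a plain long addition on Ticks...
        var viaOperators = a + b;

        // ...so doing it by hand on Ticks gives the identical result.
        var viaTicks = TimeSpan.FromTicks(a.Ticks + b.Ticks);

        Console.WriteLine(viaOperators == viaTicks); // True
        Console.WriteLine(viaTicks.Ticks);           // 550000 (55 ms in 100 ns units)
    }
}
```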

Wondering why the book we're using is using doubles and decimals instead of just decimals

I'm wondering why the example in the book we're using mixes decimals and doubles, casting to decimal when necessary. Would it not be simpler to just make everything decimal and avoid the casts altogether?
Here's the main chunk of code that I'm curious about.
decimal amount;
decimal principal = 1000;
double rate = 0.05;

for (int year = 1; year <= 10; year++)
{
    amount = principal * ((decimal) Math.Pow(1.0 + rate, year));
}
Are there any performance or accuracy issues that I'm overlooking?
There's no overload of Math.Pow that works with decimal - exponentiation on reals is inherently inaccurate, so it wouldn't make all that much sense for such an overload to exist.
Now since the only place where rate is used is in calculating the compound factor (the Pow call), it makes sense for it to be declared double - higher precision wouldn't help if it's going to just be 'cast away'.
The rest of the variables represent money, which is the use-case for decimal (that's why the suffix for decimal literals is M). One could imagine another computation where a commission of $2.10 is added to amount - we don't want such calculations to become inaccurate for the sake of type-consistency with one pesky (inherently inaccurate) compound-interest calculation.
Hence the inconsistency.
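For comparison, a decimal-only variant is possible by replacing Math.Pow with repeated decimal multiplication (a sketch, not the book's code):

```csharp
using System;

class DecimalCompound
{
    static void Main()
    {
        decimal principal = 1000m;
        decimal rate = 0.05m;
        decimal amount = principal;

        // Repeated multiplication keeps everything in base 10,
        // avoiding the double round-trip through Math.Pow.
        for (int year = 1; year <= 10; year++)
        {
            amount *= 1m + rate;
            Console.WriteLine($"{year,2}: {amount:F2}");
        }
    }
}
```

After 10 years this prints 1628.89, matching 1000 * 1.05^10; the trade-off is a loop instead of a single Pow call.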
Decimal has a smaller range, but is more accurate than double. Decimal is what you should use to represent money in .Net. In the example you gave, amount and principal are monetary values. Rate is not. Thus, they are using the correct data types.
Here is the MSDN on Decimal: MSDN Decimal
You should read this topic
What it boils down to is: decimal is used for money/when you need precision in calculations, whereas double is more suited for scientific calculation because it performs better.

Store infinite decimals C#

I need to do some calculations with fractions whose results have infinitely many decimal places.
For example:
240/360=0.666666...
The output has infinite decimals and when I multiply this by an integer the result must be an integer. So I've coded this way:
result = someInteger * decimal.divide(240/360);
(Try with someInteger = 2,700)
But the result has decimal places, while some calculators and even spreadsheets output an integer result.
How do I get the same result in C#?
Thanks
The output has infinite decimals and when I multiply this by an integer the result must be an integer.
That won't really happen.
What you can do is careful rounding (after the multiplication).
Or instead of
result = someInteger * decimal.divide(240/360);
you can use
result = someInteger * 240 / 360;
What's the problem with that?
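The order of operations is what makes this work: in integer arithmetic, multiplying first keeps the value exact, while dividing first truncates 240/360 to zero:

```csharp
using System;

class OrderMatters
{
    static void Main()
    {
        int someInteger = 2700;

        // Multiply first: 2700 * 240 = 648000, then 648000 / 360 = 1800 exactly.
        Console.WriteLine(someInteger * 240 / 360);   // 1800

        // Divide first: 240 / 360 is integer division, which truncates to 0.
        Console.WriteLine(someInteger * (240 / 360)); // 0
    }
}
```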
When you're really serious about working with fractions and keeping the precision you'll need a special type:
struct Rational
{
public readonly int Numerator;
public readonly int Denominator;
// lots of members, including operators
}
There are libraries and examples for how to do this.
But note that you still won't be able to represent π exactly.
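A minimal sketch fleshing out the Rational outline above, with reduction via GCD (sign normalization for negative denominators and overflow checks are omitted for brevity):

```csharp
using System;

// Exact fractions: store numerator and denominator, reduce via GCD.
struct Rational
{
    public readonly long Numerator;
    public readonly long Denominator;

    public Rational(long numerator, long denominator)
    {
        long g = Gcd(Math.Abs(numerator), Math.Abs(denominator));
        if (g == 0) g = 1; // avoid dividing by zero for 0/0
        Numerator = numerator / g;
        Denominator = denominator / g;
    }

    static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    public static Rational operator *(Rational r, long n) =>
        new Rational(r.Numerator * n, r.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

class Program
{
    static void Main()
    {
        var twoThirds = new Rational(240, 360); // reduces to 2/3
        Console.WriteLine(twoThirds * 2700);    // 1800/1 -- exact, nothing lost
    }
}
```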
You could store the numerator and denominator separately as integers and multiply by the numerator first before dividing. This won't fix every problem, though, as 1/3 * 2 will still give an infinite decimal, and it only works for rational numbers.
Use Math.Round():
decimal result = someInteger * decimal.Divide(240, 360);
result = Math.Round(result);
Returning a whole number is what you're asking to do, right?
Many applications perform pseudo fractional mathematics by doing intermediary calculations to a higher degree of accuracy than what will eventually be needed/output to the user (Office does this). If this isn't what you want, and you want to perform true fractional mathematics then search around for a Fractions implementation. There seems to be a decent implementation here. The problem with such implementations is that the onus is on being functionally correct over being efficient. Thus, if your application needs to be performing millions of operations with fractions then its performance may suffer.
